diff --git a/published/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md b/published/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
new file mode 100644
index 0000000000..5c39f24424
--- /dev/null
+++ b/published/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
@@ -0,0 +1,901 @@
+[#]: collector: (lujun9972)
+[#]: translator: (guevaraya)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10700-1.html)
+[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 11 Input02)
+[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
+[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
+
+计算机实验室之树莓派:课程 11 输入 02
+======
+
+输入 02 课程是在输入 01 课程的基础上构建的:它实现了一个简单的命令行,用户输入命令,计算机解析、执行并显示结果。本文假设你已经具备 [课程 11:输入 01][1] 的操作系统代码基础。
+
+### 1、终端
+
+几乎所有的操作系统起初都是以字符终端的形式出现的。经典的黑底白字:通过键盘输入计算机要执行的命令,它可能提示你拼写错误,也可能恰好给出你想要的执行结果。这种方法有两个主要优点:键盘和显示器提供了简单、健壮的人机交互机制,几乎所有的计算机系统都支持它,系统管理员至今也在广泛使用。
+
+> 早期的计算机往往是一栋楼里的一台巨型主机,人们通过许多可以输入命令的“终端”来使用它。计算机轮流执行来自不同终端的命令。
+
+让我们分析下真正想要哪些信息:
+
+1. 计算机打开后,显示欢迎信息
+2. 计算机显示可以接受输入的提示符
+3. 用户从键盘输入带参数的命令
+4. 用户按下回车键提交命令
+5. 计算机解析命令,并执行可用的命令
+6. 计算机显示命令的执行结果或过程信息
+7. 循环跳转到步骤 2
+
+这样的终端被定义为标准的输入输出设备。用于输入的屏幕和打印输出内容的屏幕是同一个(LCTT 译注:最早期的输出真是“打印”到打印机/电传机上的,而用于输入的终端只是键盘,除非做了回显,否则输出终端不会显示输入的字符)。也就是说,终端是对字符显示的一个抽象。在字符显示中,最小的单元是字符而不是像素:屏幕被划分成固定数量、可以有不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用 `DrawCharacter` 方法将其画到屏幕上。这样,一旦需要字符显示,只需要在屏幕上画出一串字符即可。
+
+新建文件名为 `terminal.s`,如下:
+
+```
+.section .data
+.align 4
+terminalStart:
+.int terminalBuffer
+terminalStop:
+.int terminalBuffer
+terminalView:
+.int terminalBuffer
+terminalColour:
+.byte 0xf
+.align 8
+terminalBuffer:
+.rept 128*128
+.byte 0x7f
+.byte 0x0
+.endr
+terminalScreen:
+.rept 1024/8 * 768/16
+.byte 0x7f
+.byte 0x0
+.endr
+```
+
+这是终端的配置数据。我们有两个主要的存储变量:`terminalBuffer` 和 `terminalScreen`。`terminalBuffer` 保存所有显示过的字符,它可以保存 128 行字符文本(每行 128 个字符)。每个字符由一个 ASCII 字符和一个颜色单元组成,初始值分别为 0x7f(ASCII 删除字符)和 0(前景色和背景色均为黑)。`terminalScreen` 保存当前屏幕上显示的字符,它保存 128×48 个字符,初始化值与 `terminalBuffer` 一样。你可能会觉得仅有 `terminalScreen` 就够了,为什么还要 `terminalBuffer`?保留两者有两个好处:
+
+ 1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
+ 2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
+
+这种技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已时才执行。在这个系统里,我们可以任意修改 `terminalBuffer`,然后调用一个只把发生变化的字节拷贝到屏幕的方法。也就是说,我们不需要持续重画每个字符,这在处理跨多行的大段文本时可以节省大量时间。
+
+> 你总是应该尝试设计高效的系统:在变化很少这种最常见的情况下,它要运行得更快。
+
+其他在 `.data` 段中的值的含义如下:
+
+ * `terminalStart`
+ 写入到 `terminalBuffer` 的第一个字符
+ * `terminalStop`
+ 写入到 `terminalBuffer` 的最后一个字符
+ * `terminalView`
+ 表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
+ * `terminalColour`
+ 即将被描画的字符颜色
+
+`terminalStart` 需要保存,是因为 `terminalBuffer` 是一个环状缓冲区。也就是说,当缓冲区写满时,末尾会回绕并覆盖起始位置,最后一个字符就写到了第一个字符的位置,因此我们需要向前推进 `terminalStart`,以便知道哪些内容已被覆盖。缓冲区的回绕检测实现起来很简单:如果索引越过缓冲区末尾,就把索引指回缓冲区开头。环状缓冲区是一种常见而巧妙的方法,用于存储大量数据中较新、较重要的那部分:它允许无限制地写入,同时保证最近的一部分数据始终有效,常用于信号处理和数据压缩算法。在这里,它让我们可以存储 128 行终端记录,超过 128 行也不会有问题;如果不这样做,当写到第 129 行时,就需要把前面 127 行逐行向前拷贝一次,非常浪费时间。
+
+![显示 Hello world 插入到大小为 5 的环状缓冲区的示意图。][2]
+
+> 环状缓冲区是**数据结构**的一个例子。数据结构是一种组织数据的思路,有时我们通过软件来实现它。
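+
+上图的回绕过程可以用一小段 shell 脚本来模拟。下面是一个假设性的示意(bash 实现,缓冲区大小为 5,与文中 128 行的缓冲区原理相同):
+
+```
+#!/bin/bash
+# 大小为 5 的环状缓冲区示意:写满之后回绕,覆盖最旧的数据
+SIZE=5
+buf=()
+start=0   # 相当于 terminalStart
+stop=0    # 相当于 terminalStop
+
+write() {
+    buf[$stop]=$1
+    stop=$(( (stop + 1) % SIZE ))        # 索引越过末尾就回到开头
+    if [ $stop -eq $start ]; then
+        start=$(( (start + 1) % SIZE ))  # 缓冲区已满,向前推进 start
+    fi
+}
+
+for ch in H e l l o " " w o r l d; do write "$ch"; done
+
+# 从 start 到 stop 依次打印,只剩下最近写入的几个字符
+i=$start
+while [ $i -ne $stop ]; do
+    printf '%s' "${buf[$i]}"
+    i=$(( (i + 1) % SIZE ))
+done
+echo    # 输出:orld
+```
+
+注意这种实现用 `start` 等于 `stop` 表示“空”,因此总要牺牲一个槽位;下文的汇编代码对 `terminalStart` 和 `terminalStop` 采用的是同样的思路。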
+
+之前已经提到过 `terminalColour` 几次了。你可以根据自己的想法实现终端颜色,但这个文本终端有 16 种前景色和 16 种背景色(共有 16² = 256 种组合)。[CGA][3] 终端的颜色定义如下:
+
+
+表格 1.1 - CGA 颜色编码
+
+| 序号 | 颜色 (R, G, B) |
+| ------ | ------------------------|
+| 0 | 黑 (0, 0, 0) |
+| 1 | 蓝 (0, 0, ⅔) |
+| 2 | 绿 (0, ⅔, 0) |
+| 3 | 青色 (0, ⅔, ⅔) |
+| 4 | 红色 (⅔, 0, 0) |
+| 5 | 品红 (⅔, 0, ⅔) |
+| 6 | 棕色 (⅔, ⅓, 0) |
+| 7 | 浅灰色 (⅔, ⅔, ⅔) |
+| 8 | 灰色 (⅓, ⅓, ⅓) |
+| 9 | 淡蓝色 (⅓, ⅓, 1) |
+| 10 | 淡绿色 (⅓, 1, ⅓) |
+| 11 | 淡青色 (⅓, 1, 1) |
+| 12 | 淡红色 (1, ⅓, ⅓) |
+| 13 | 浅品红 (1, ⅓, 1) |
+| 14 | 黄色 (1, 1, ⅓) |
+| 15 | 白色 (1, 1, 1) |
+
+我们将前景色保存在颜色字节的低半字节,背景色保存在高半字节。除了棕色以外,这些颜色都遵循同一种模式:二进制的最高位表示给每个分量增加 ⅓,其余各位分别表示给对应分量增加 ⅔。这样就很容易进行 RGB 颜色转换。
+
+> 按此模式本应得到的颜色是暗黄色,但它既不好看也没有什么用处,于是被替换成了棕色。
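+
+这个位模式可以用一小段 shell 脚本来验证。下面是一个假设性的示意(bash),按上文的规则把 4 比特的 CGA 颜色号换算成以 ⅓ 为单位的 RGB 分量:
+
+```
+cga_to_rgb() {
+    local n=$1 hi=0
+    (( n & 8 )) && hi=1                       # 最高位:给每个分量各加 1/3
+    local r=$(( hi + 2 * ((n >> 2) & 1) ))    # 红色位:给红色分量加 2/3
+    local g=$(( hi + 2 * ((n >> 1) & 1) ))    # 绿色位同理
+    local b=$(( hi + 2 * (n & 1) ))           # 蓝色位同理
+    echo "颜色 $n -> R=$r/3 G=$g/3 B=$b/3"
+}
+
+for i in 1 9 15; do cga_to_rgb $i; done
+# 颜色 1  -> R=0/3 G=0/3 B=2/3   (蓝)
+# 颜色 9  -> R=1/3 G=1/3 B=3/3   (淡蓝色)
+# 颜色 15 -> R=3/3 G=3/3 B=3/3   (白色)
+```
+
+唯一的例外是 6 号颜色:按公式算出来是 (⅔, ⅔, 0) 的暗黄色,实际使用的是 (⅔, ⅓, 0) 的棕色,这正是下面 `TerminalColour` 的实现中用 `teq r0,#6` 单独特判它的原因。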
+
+我们需要一个方法 `TerminalColour`,它读取 4 比特的颜色编码,然后用等效的 16 位颜色调用 `SetForeColour`。尝试自己实现一下。如果你遇到困难,或者还没有完成屏幕系列课程,可以参考我们下面的实现:
+
+```
+.section .text
+TerminalColour:
+teq r0,#6
+ldreq r0,=0x02B5
+beq SetForeColour
+
+tst r0,#0b1000
+ldrne r1,=0x52AA
+moveq r1,#0
+tst r0,#0b0100
+addne r1,#0x15
+tst r0,#0b0010
+addne r1,#0x540
+tst r0,#0b0001
+addne r1,#0xA800
+mov r0,r1
+b SetForeColour
+```
+
+### 2、文本显示
+
+我们的终端需要的第一个真正的方法是 `TerminalDisplay`,它把当前的数据从 `terminalBuffer` 拷贝到 `terminalScreen` 和实际的屏幕上。如上所述,这个方法的开销必须尽量小,因为我们需要频繁调用它。它的工作原理是比较 `terminalBuffer` 与 `terminalScreen` 中的文本,只拷贝有差异的字节。请记住,`terminalBuffer` 是环状缓冲区,这里要处理的范围是从 `terminalView` 到 `terminalStop`,最多 128×48 个字符,以先到者为准。如果我们到达了 `terminalStop`,就假定在这之后的所有字符都是 0x7f(ASCII 删除字符),颜色为 0(黑色前景和背景)。
+
+让我们看看必须要做的事情:
+
+ 1. 加载 `terminalView`、`terminalStop` 和 `terminalScreen` 的地址。
+ 2. 对于每一行:
+    1. 对于每一列:
+       1. 如果 `terminalView` 不等于 `terminalStop`,从 `terminalView` 处加载当前字符和颜色
+       2. 否则加载字符 0x7f 和颜色 0
+       3. 从 `terminalScreen` 加载当前显示的字符
+       4. 如果字符和颜色相同,直接跳转到第 10 步
+       5. 存储字符和颜色到 `terminalScreen`
+       6. 用 `r0` 作为背景色参数调用 `TerminalColour`
+       7. 用 `r0 = 0x7f`(ASCII 删除字符,显示为一个色块)、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
+       8. 用 `r0` 作为前景色参数调用 `TerminalColour`
+       9. 用 `r0 = 字符`、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
+       10. 将 `terminalScreen` 的位置累加 2
+       11. 如果 `terminalView` 不等于 `terminalStop`,将 `terminalView` 的位置累加 2
+       12. 如果 `terminalView` 已到达缓冲区的末尾,将它设置为缓冲区的开始位置
+       13. x 坐标增加 8
+    2. y 坐标增加 16
+
+尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
+
+1、我这里用到的变量有点多。为了方便起见,我用 `taddr` 存储 `terminalBuffer` 的末尾位置。
+
+```
+.globl TerminalDisplay
+TerminalDisplay:
+push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
+x .req r4
+y .req r5
+char .req r6
+col .req r7
+screen .req r8
+taddr .req r9
+view .req r10
+stop .req r11
+
+ldr taddr,=terminalStart
+ldr view,[taddr,#terminalView - terminalStart]
+ldr stop,[taddr,#terminalStop - terminalStart]
+add taddr,#terminalBuffer - terminalStart
+add taddr,#128*128*2
+mov screen,taddr
+```
+
+2、从 `yLoop` 开始运行。
+
+```
+mov y,#0
+yLoop$:
+```
+
+2.1、从 `xLoop` 开始运行。
+
+```
+mov x,#0
+xLoop$:
+```
+
+
+2.1.1、为了方便起见,我把字符和颜色同时加载到 `char` 变量了
+
+```
+teq view,stop
+ldrneh char,[view]
+```
+
+2.1.2、这一行补全了上面的条件判断:当 `view` 等于 `stop` 时,加载一个黑色的删除字符。
+
+
+```
+moveq char,#0x7f
+```
+
+2.1.3、为了简便我把字符和颜色同时加载到 `col` 里。
+
+```
+ldrh col,[screen]
+```
+
+2.1.4、现在我用 `teq` 指令检查是否有数据变化。
+
+```
+teq col,char
+beq xLoopContinue$
+```
+
+2.1.5、这样就可以很容易地保存当前值。
+
+
+```
+strh char,[screen]
+```
+
+2.1.6、我用移位指令 `lsr` 和 `and` 指令切分 `char` 变量:将颜色放到 `col` 变量,字符留在 `char` 变量,然后再用移位指令 `lsr` 取出背景色,并调用 `TerminalColour`。
+
+```
+lsr col,char,#8
+and char,#0x7f
+lsr r0,col,#4
+bl TerminalColour
+```
+
+2.1.7、写入一个彩色的删除字符
+
+```
+mov r0,#0x7f
+mov r1,x
+mov r2,y
+bl DrawCharacter
+```
+
+2.1.8、用 `and` 指令获取 `col` 变量的低半字节,然后调用 `TerminalColour`
+
+```
+and r0,col,#0xf
+bl TerminalColour
+```
+
+2.1.9、写入我们需要的字符
+
+```
+mov r0,char
+mov r1,x
+mov r2,y
+bl DrawCharacter
+```
+
+2.1.10、自增屏幕指针
+
+```
+xLoopContinue$:
+add screen,#2
+```
+
+2.1.11、如果可能自增 `view` 指针
+
+```
+teq view,stop
+addne view,#2
+```
+
+2.1.12、检测 `view` 指针是否越过缓冲区的末尾很容易,因为缓冲区末尾的地址就保存在 `taddr` 变量里。
+
+```
+teq view,taddr
+subeq view,#128*128*2
+```
+
+2.1.13、如果这一行还有字符要处理,我们就自增 `x` 变量,然后跳回 `xLoop` 继续执行。
+
+```
+add x,#8
+teq x,#1024
+bne xLoop$
+```
+
+2.2、如果还有更多的行要处理,我们就自增 `y` 变量,然后跳回 `yLoop` 继续执行。
+
+```
+add y,#16
+teq y,#768
+bne yLoop$
+```
+
+3、最后不要忘记恢复寄存器并清除寄存器别名。
+
+```
+pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
+.unreq x
+.unreq y
+.unreq char
+.unreq col
+.unreq screen
+.unreq taddr
+.unreq view
+.unreq stop
+```
+
+### 3、行打印
+
+现在我们有了 `TerminalDisplay` 方法,它可以自动把 `terminalBuffer` 的内容显示到 `terminalScreen` 上,因此理论上我们可以绘制文本了。但是我们还没有任何基于字符显示的例程。一个简单易上手的起点是 `TerminalClear`,它可以彻底清空终端。这个方法不需要循环就能实现。分析一下下面的实现应该不难:
+
+```
+.globl TerminalClear
+TerminalClear:
+ldr r0,=terminalStart
+add r1,r0,#terminalBuffer-terminalStart
+str r1,[r0]
+str r1,[r0,#terminalStop-terminalStart]
+str r1,[r0,#terminalView-terminalStart]
+mov pc,lr
+```
+
+现在我们需要构造字符显示的基础方法:`Print` 函数。它把保存在 `r0` 中的字符串(长度保存在 `r1`)简单地写到屏幕上。有一些特殊字符需要特别处理,同时还要专门确保 `terminalView` 保持最新。我们来分析一下需要做什么:
+
+ 1. 检查字符串的长度是否为 0,如果是就直接返回
+ 2. 加载 `terminalStop` 和 `terminalView`
+ 3. 计算出 `terminalStop` 的 x 坐标
+ 4. 对每一个字符的操作:
+    1. 检查字符是否为换行符
+ 2. 如果是的话,自增 `bufferStop` 到行末,同时写入黑色删除字符
+ 3. 否则拷贝当前 `terminalColour` 的字符
+ 4. 检查是否在行末
+ 5. 如果是,检查从 `terminalView` 到 `terminalStop` 之间的字符数是否大于一屏
+ 6. 如果是,`terminalView` 自增一行
+ 7. 检查 `terminalView` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
+ 8. 检查 `terminalStop` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
+ 9. 检查 `terminalStop` 是否等于 `terminalStart`, 如果是的话 `terminalStart` 自增一行。
+ 10. 检查 `terminalStart` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
+ 5. 保存 `terminalStop` 和 `terminalView`
+
+试一下自己去实现。我们的方案提供如下:
+
+1、`Print` 函数开头先快速处理长度为 0 的字符串:
+
+```
+.globl Print
+Print:
+teq r1,#0
+moveq pc,lr
+```
+
+2、这里我做了很多配置。 `bufferStart` 代表 `terminalStart`, `bufferStop` 代表`terminalStop`, `view` 代表 `terminalView`,`taddr` 代表 `terminalBuffer` 的末尾地址。
+
+```
+push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
+bufferStart .req r4
+taddr .req r5
+x .req r6
+string .req r7
+length .req r8
+char .req r9
+bufferStop .req r10
+view .req r11
+
+mov string,r0
+mov length,r1
+
+ldr taddr,=terminalStart
+ldr bufferStop,[taddr,#terminalStop-terminalStart]
+ldr view,[taddr,#terminalView-terminalStart]
+ldr bufferStart,[taddr]
+add taddr,#terminalBuffer-terminalStart
+add taddr,#128*128*2
+```
+
+3、和通常一样,巧妙的对齐技巧让许多事情变得容易。由于 `terminalBuffer` 是对齐的,任一字符的 x 坐标等于其地址的低 8 位除以 2。
+
+
+```
+and x,bufferStop,#0xfe
+lsr x,#1
+```
+
+4.1、我们需要检查换行符。
+
+```
+charLoop$:
+ldrb char,[string]
+and char,#0x7f
+teq char,#'\n'
+bne charNormal$
+```
+
+4.2、循环写入 0x7f(黑色删除字符),直到行末。
+
+```
+mov r0,#0x7f
+clearLine$:
+strh r0,[bufferStop]
+add bufferStop,#2
+add x,#1
+cmp x,#128
+blt clearLine$
+
+b charLoopContinue$
+```
+
+4.3、把字符串的当前字符和当前的 `terminalColour` 存入 `terminalBuffer` 的末尾(即 `bufferStop` 处),然后将 `bufferStop` 和 x 自增。
+
+```
+charNormal$:
+strb char,[bufferStop]
+ldr r0,=terminalColour
+ldrb r0,[r0]
+strb r0,[bufferStop,#1]
+add bufferStop,#2
+add x,#1
+```
+
+4.4、检查 x 是否到达行末(128)。
+
+
+```
+charLoopContinue$:
+cmp x,#128
+blt noScroll$
+```
+
+4.5、将 x 设置为 0,然后检查我们是否已经显示超过一屏。请记住,我们用的是环状缓冲区,因此如果 `bufferStop` 和 `view` 之间的差是负值,说明实际上已经绕回了缓冲区开头。
+
+```
+mov x,#0
+subs r0,bufferStop,view
+addlt r0,#128*128*2
+cmp r0,#128*(768/16)*2
+```
+
+4.6、将 `view` 的地址增加一行的字节数。
+
+```
+addge view,#128*2
+```
+
+4.7、如果 `view` 的地址到达缓冲区末尾,就从它减去缓冲区的长度,让它指回开始位置。我在开头已经把 `taddr` 设置为缓冲区的末尾地址。
+
+```
+teq view,taddr
+subeq view,taddr,#128*128*2
+```
+
+4.8、如果 `stop` 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
+
+```
+noScroll$:
+teq bufferStop,taddr
+subeq bufferStop,taddr,#128*128*2
+```
+
+4.9、检查 `bufferStop` 是否等于 `bufferStart`。如果相等,将 `bufferStart` 增加一行。
+
+```
+teq bufferStop,bufferStart
+addeq bufferStart,#128*2
+```
+
+4.10、如果 `start` 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
+
+```
+teq bufferStart,taddr
+subeq bufferStart,taddr,#128*128*2
+```
+
+循环执行,直到字符串结束。
+
+```
+subs length,#1
+add string,#1
+bgt charLoop$
+```
+
+5、保存变量然后返回
+
+```
+charLoopBreak$:
+sub taddr,#128*128*2
+sub taddr,#terminalBuffer-terminalStart
+str bufferStop,[taddr,#terminalStop-terminalStart]
+str view,[taddr,#terminalView-terminalStart]
+str bufferStart,[taddr]
+
+pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
+.unreq bufferStart
+.unreq taddr
+.unreq x
+.unreq string
+.unreq length
+.unreq char
+.unreq bufferStop
+.unreq view
+```
+
+这个方法让我们可以打印任意字符到屏幕。然而,我们虽然用到了颜色变量,却从未真正设置过它。一般终端使用特殊的字符组合来修改颜色,例如 ASCII 转义字符(0x1b)后跟一个 0 - f 的十六进制数,就可以把前景色设置为对应的 CGA 颜色号。如果你想自己尝试实现,在下载页面有一个我的详细示例。
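+
+作为对照,现代终端用的是类似思路的 ANSI 转义序列。你可以在任何 Linux shell 里直接体验这种“用特殊字符改颜色”的机制(注意这里用的是标准 ANSI 颜色码,而不是本文的自定义方案):
+
+```
+$ printf '\033[31m这是红色 \033[32m这是绿色\033[0m 恢复正常\n'
+```
+
+其中 `\033`(即 0x1b)正是 ASCII 转义字符,后面的数字用来选择颜色。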
+
+### 4、标准输入
+
+现在我们有了一个可以打印和显示文本的终端,但这只完成了一半,我们还需要输入。我们想实现一个方法 `ReadLine`,它读取一行文本并保存到 `r0` 给出的位置,最大长度由 `r1` 给出,并在 `r0` 中返回读到的字符串长度。棘手的是,用户输入字符时需要回显,同时要支持退格键删除和回车提交,还需要一个闪烁的下划线来提示计算机正在等待输入。这些完全合理的要求让这个方法的实现颇具挑战性。满足这些需求的一种办法是:把用户已输入的文本及其长度存储在内存的某个地方,然后在每次需要重绘时,把 `terminalStop` 移回开始读取时的位置再调用 `Print`。也就是说,我们只需要在内存中维护一个字符串,再借助已有的打印函数即可。
+
+> 按照惯例,在许多编程语言中,任何程序都可以访问 stdin 和 stdout,它们连接到终端的输入和输出流。在图形界面程序中也可以这么做,但实际上几乎不用。
+
+让我们看看 `ReadLine` 做了哪些事情:
+
+ 1. 如果字符串可保存的最大长度为 0,直接返回
+ 2. 检索 `terminalStop` 和 `terminalView` 的当前值
+ 3. 如果字符串的最大长度大于缓冲区的一半,就将其设置为缓冲区的一半
+ 4. 从最大长度里减去 1,为闪烁的光标或结束符留出空间
+ 5. 向字符串写入一个下划线
+ 6. 将 `terminalView` 和 `terminalStop` 的值写回内存
+ 7. 调用 `Print` 打印当前字符串
+ 8. 调用 `TerminalDisplay`
+ 9. 调用 `KeyboardUpdate`
+ 10. 调用 `KeyboardGetChar`
+ 11. 如果是一个新行直接跳转到第 16 步
+ 12. 如果是一个退格键,将字符串长度减 1(如果其大于 0)
+ 13. 如果是一个普通字符,将它写入字符串(字符串大小确保小于最大值)
+ 14. 如果字符串是以下划线结束,写入一个空格,否则写入下划线
+ 15. 跳转到第 6 步
+ 16. 字符串的末尾写入一个新行字符
+ 17. 调用 `Print` 和 `TerminalDisplay`
+ 18. 用结束符替换新行
+ 19. 返回字符串的长度
+
+
+为了方便读者理解并自己尝试实现,我们的实现提供如下:
+
+1. 快速处理长度为 0 的情况
+
+```
+.globl ReadLine
+ReadLine:
+teq r1,#0
+moveq r0,#0
+moveq pc,lr
+```
+
+2、考虑到后面的处理,我们一开始做了不少初始化动作。`input` 代表 `terminalStop` 的值,`view` 代表 `terminalView`,`length` 初始为 0。
+
+```
+string .req r4
+maxLength .req r5
+input .req r6
+taddr .req r7
+length .req r8
+view .req r9
+
+push {r4,r5,r6,r7,r8,r9,lr}
+
+mov string,r0
+mov maxLength,r1
+ldr taddr,=terminalStart
+ldr input,[taddr,#terminalStop-terminalStart]
+ldr view,[taddr,#terminalView-terminalStart]
+mov length,#0
+```
+
+3、我们必须检查异常大的读取请求:我们无法处理超过 `terminalBuffer` 大小的输入(理论上可行,但 `terminalStart` 会移动越过存储的 `terminalStop`,引发很多问题)。
+
+```
+cmp maxLength,#128*64
+movhi maxLength,#128*64
+```
+
+4、由于用户需要一个闪烁的光标,并且理想情况下字符串末尾还要放一个结束符,我们需要为此留出一个字符的位置。
+
+```
+sub maxLength,#1
+```
+
+5、写入一个下划线让用户知道我们可以输入了。
+
+```
+mov r0,#'_'
+strb r0,[string,length]
+```
+
+6、保存 `terminalStop` 和 `terminalView`。这对于每次打印后重置终端很重要,因为打印会修改这些变量。严格来说 `terminalStart` 也可能被修改,但那是不可逆的。
+
+```
+readLoop$:
+str input,[taddr,#terminalStop-terminalStart]
+str view,[taddr,#terminalView-terminalStart]
+```
+
+7、打印当前的输入。由于要算上下划线光标,长度加 1。
+
+```
+mov r0,string
+mov r1,length
+add r1,#1
+bl Print
+```
+
+8、将新的文本拷贝到屏幕上。
+
+```
+bl TerminalDisplay
+```
+
+
+9、获取最近一次键盘输入
+
+```
+bl KeyboardUpdate
+```
+
+10、检索键盘输入键值
+
+```
+bl KeyboardGetChar
+```
+
+11、如果输入的是回车键,则跳出循环;如果是结束符(0),则跳转到光标处理;如果是退格键,则进入删除处理;否则按普通字符处理。
+
+```
+teq r0,#'\n'
+beq readLoopBreak$
+teq r0,#0
+beq cursor$
+teq r0,#'\b'
+bne standard$
+```
+
+12、如果 `length` 大于 0,将其减 1,从而删除一个字符。
+
+```
+delete$:
+cmp length,#0
+subgt length,#1
+b cursor$
+```
+
+13、写入一个普通字符(前提是字符串长度还小于最大值)。
+
+```
+standard$:
+cmp length,maxLength
+bge cursor$
+strb r0,[string,length]
+add length,#1
+```
+
+14、加载最后一个字符,如果不是下划线则修改为下划线,如果是则修改为空格。
+
+```
+cursor$:
+ldrb r0,[string,length]
+teq r0,#'_'
+moveq r0,#' '
+movne r0,#'_'
+strb r0,[string,length]
+```
+
+15、循环执行,直到用户按下回车键。
+
+```
+b readLoop$
+readLoopBreak$:
+```
+
+16、在字符串的结尾处存入一个新行字符
+
+```
+mov r0,#'\n'
+strb r0,[string,length]
+```
+
+17、重置 `terminalView` 和 `terminalStop` 然后调用 `Print` 和 `TerminalDisplay` 显示最终的输入
+
+```
+str input,[taddr,#terminalStop-terminalStart]
+str view,[taddr,#terminalView-terminalStart]
+mov r0,string
+mov r1,length
+add r1,#1
+bl Print
+bl TerminalDisplay
+```
+
+18、写入一个结束符
+
+```
+mov r0,#0
+strb r0,[string,length]
+```
+
+19、返回长度
+
+```
+mov r0,length
+pop {r4,r5,r6,r7,r8,r9,pc}
+.unreq string
+.unreq maxLength
+.unreq input
+.unreq taddr
+.unreq length
+.unreq view
+```
+
+### 5、终端:机器进化
+
+现在理论上我们可以通过终端和用户交互了。最显而易见的事情就是测试一下!删除 `main.s` 中 `bl UsbInitialise` 之后的代码,替换为如下内容:
+
+```
+reset$:
+ mov sp,#0x8000
+ bl TerminalClear
+
+ ldr r0,=welcome
+ mov r1,#welcomeEnd-welcome
+ bl Print
+
+loop$:
+ ldr r0,=prompt
+ mov r1,#promptEnd-prompt
+ bl Print
+
+ ldr r0,=command
+ mov r1,#commandEnd-command
+ bl ReadLine
+
+ teq r0,#0
+ beq loopContinue$
+
+ mov r4,r0
+
+ ldr r5,=command
+ ldr r6,=commandTable
+
+ ldr r7,[r6,#0]
+ ldr r9,[r6,#4]
+ commandLoop$:
+ ldr r8,[r6,#8]
+ sub r1,r8,r7
+
+ cmp r1,r4
+ bgt commandLoopContinue$
+
+ mov r0,#0
+ commandName$:
+ ldrb r2,[r5,r0]
+ ldrb r3,[r7,r0]
+ teq r2,r3
+ bne commandLoopContinue$
+ add r0,#1
+ teq r0,r1
+ bne commandName$
+
+ ldrb r2,[r5,r0]
+ teq r2,#0
+ teqne r2,#' '
+ bne commandLoopContinue$
+
+ mov r0,r5
+ mov r1,r4
+ mov lr,pc
+ mov pc,r9
+ b loopContinue$
+
+ commandLoopContinue$:
+ add r6,#8
+ mov r7,r8
+ ldr r9,[r6,#4]
+ teq r9,#0
+ bne commandLoop$
+
+ ldr r0,=commandUnknown
+ mov r1,#commandUnknownEnd-commandUnknown
+ ldr r2,=formatBuffer
+ ldr r3,=command
+ bl FormatString
+
+ mov r1,r0
+ ldr r0,=formatBuffer
+ bl Print
+
+loopContinue$:
+ bl TerminalDisplay
+ b loop$
+
+echo:
+ cmp r1,#5
+ movle pc,lr
+
+ add r0,#5
+ sub r1,#5
+ b Print
+
+ok:
+ teq r1,#5
+ beq okOn$
+ teq r1,#6
+ beq okOff$
+ mov pc,lr
+
+ okOn$:
+ ldrb r2,[r0,#3]
+ teq r2,#'o'
+ ldreqb r2,[r0,#4]
+ teqeq r2,#'n'
+ movne pc,lr
+ mov r1,#0
+ b okAct$
+
+ okOff$:
+ ldrb r2,[r0,#3]
+ teq r2,#'o'
+ ldreqb r2,[r0,#4]
+ teqeq r2,#'f'
+ ldreqb r2,[r0,#5]
+ teqeq r2,#'f'
+ movne pc,lr
+ mov r1,#1
+
+ okAct$:
+
+ mov r0,#16
+ b SetGpio
+
+.section .data
+.align 2
+welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
+welcomeEnd:
+.align 2
+prompt: .ascii "\n> "
+promptEnd:
+.align 2
+command:
+ .rept 128
+ .byte 0
+ .endr
+commandEnd:
+.byte 0
+.align 2
+commandUnknown: .ascii "Command `%s' was not recognised.\n"
+commandUnknownEnd:
+.align 2
+formatBuffer:
+ .rept 256
+ .byte 0
+ .endr
+formatEnd:
+
+.align 2
+commandStringEcho: .ascii "echo"
+commandStringReset: .ascii "reset"
+commandStringOk: .ascii "ok"
+commandStringCls: .ascii "cls"
+commandStringEnd:
+
+.align 2
+commandTable:
+.int commandStringEcho, echo
+.int commandStringReset, reset$
+.int commandStringOk, ok
+.int commandStringCls, TerminalClear
+.int commandStringEnd, 0
+```
+
+这段代码实现了一个简易的命令行操作系统,支持命令:`echo`、`reset`、`ok` 和 `cls`。`echo` 把任意文本拷贝到终端;`reset` 可以在系统出现问题时复位操作系统;`ok` 有两个用法:`ok on` 点亮 OK 灯、`ok off` 熄灭它;最后 `cls` 调用 `TerminalClear` 清空终端。
+
+在你的树莓派上试试这些代码吧。如果遇到问题,请参考问题集锦页面。
+
+如果运行正常,祝贺你完成了基本终端和输入系列课程。很遗憾教程先讲到这里,但我希望将来能制作更多教程。有问题请反馈至 awc32@cam.ac.uk。
+
+你已经建立了一个简易的终端操作系统。我们的代码在 `commandTable` 中构造了一个可用命令的表格:每个表项由两个整型数字组成,一个是命令字符串的地址,另一个是命令代码的执行入口地址;最后一个表项是字符串为 `commandStringEnd`、入口为 0 的项。尝试实现你自己的命令:参照已有的函数,建立一个新的函数。函数的参数 `r0` 是用户输入命令的地址,`r1` 是其长度,你可以借此把用户的输入传递给你的命令。也许你想写一个计算器程序,或许是一个绘图程序,或者国际象棋。不管是什么点子,让它跑起来!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
+
+作者:[Alex Chadwick][a]
+选题:[lujun9972][b]
+译者:[guevaraya](https://github.com/guevaraya)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.cl.cam.ac.uk
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10676-1.html
+[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
+[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter
diff --git a/published/20160301 How To Set Password Policies In Linux.md b/published/20160301 How To Set Password Policies In Linux.md
new file mode 100644
index 0000000000..3cfedf6341
--- /dev/null
+++ b/published/20160301 How To Set Password Policies In Linux.md
@@ -0,0 +1,351 @@
+[#]: collector: (lujun9972)
+[#]: translator: (liujing97)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10698-1.html)
+[#]: subject: (How To Set Password Policies In Linux)
+[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+如何设置 Linux 系统的密码策略
+======
+
+
+
+虽然 Linux 的设计是安全的,但还是存在许多安全漏洞的风险,弱密码就是其中之一。作为系统管理员,你必须强制用户使用强密码,因为大部分的系统入侵都是由弱密码引发的。本教程描述了如何在基于 DEB 的系统(比如 Debian、Ubuntu、Linux Mint 等)和基于 RPM 的系统(比如 RHEL、CentOS、Scientific Linux 等)中设置**密码长度**、**密码复杂度**、**密码有效期**等密码策略。
+
+### 在基于 DEB 的系统中设置密码长度
+
+默认情况下,所有的 Linux 操作系统要求用户**密码长度最少 6 个字符**。我强烈建议不要低于这个限制。并且不要使用你的真实名称、父母、配偶、孩子的名字,或者你的生日作为密码。即便是一个黑客新手,也可以很快地破解这类密码。一个好的密码必须是至少 6 个字符,并且包含数字、大写字母和特殊符号。
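+
+顺便一提,与其自己琢磨,不如直接生成一个随机强密码。下面是一种常见做法(`openssl` 在大多数发行版上默认可用,输出是随机的,仅为示例):
+
+```
+$ openssl rand -base64 12
+kH3v0Zp+Qe7LmW2A
+```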
+
+通常地,在基于 DEB 的操作系统中,密码和身份认证相关的配置文件被存储在 `/etc/pam.d/` 目录中。
+
+设置最小密码长度,编辑 `/etc/pam.d/common-password` 文件;
+
+```
+$ sudo nano /etc/pam.d/common-password
+```
+
+找到下面这行:
+
+```
+password [success=2 default=ignore] pam_unix.so obscure sha512
+```
+
+![][2]
+
+在末尾添加额外的文字:`minlen=8`。在这里我设置的最小密码长度为 `8`。
+
+```
+password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
+```
+
+
+
+保存并关闭该文件。这样一来,用户现在不能设置小于 8 个字符的密码。
+
+### 在基于 RPM 的系统中设置密码长度
+
+**在 RHEL、CentOS、Scientific Linux 7.x** 系统中, 以 root 身份执行下面的命令来设置密码长度。
+
+```
+# authconfig --passminlen=8 --update
+```
+
+查看最小密码长度,执行:
+
+```
+# grep "^minlen" /etc/security/pwquality.conf
+```
+
+**输出样例:**
+
+```
+minlen = 8
+```
+
+**在 RHEL、CentOS、Scientific Linux 6.x** 系统中,编辑 `/etc/pam.d/system-auth` 文件:
+
+```
+# nano /etc/pam.d/system-auth
+```
+
+找到下面这行并在该行末尾添加:
+
+```
+password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
+```
+
+
+
+如上设置中,最小密码长度是 `8` 个字符。
+
+### 在基于 DEB 的系统中设置密码复杂度
+
+此设置强制要求密码中包含指定类别的字符,比如大写字母、小写字母和其他字符。
+
+首先,用下面命令安装密码质量检测库:
+
+```
+$ sudo apt-get install libpam-pwquality
+```
+
+之后,编辑 `/etc/pam.d/common-password` 文件:
+
+```
+$ sudo nano /etc/pam.d/common-password
+```
+
+为了设置密码中至少有一个**大写字母**,则在下面这行的末尾添加文字 `ucredit=-1`。
+
+```
+password requisite pam_pwquality.so retry=3 ucredit=-1
+```
+
+
+
+设置密码中至少有一个**数字**,如下所示(`dcredit` 对应数字,小写字母对应的选项是 `lcredit`)。
+
+```
+password requisite pam_pwquality.so retry=3 dcredit=-1
+```
+
+设置密码中至少含有其他字符,如下所示。
+
+```
+password requisite pam_pwquality.so retry=3 ocredit=-1
+```
+
+正如你在上面样例中看到的,我们设置了密码中至少含有一个大写字母、一个数字和一个特殊字符。你可以按照自己的需要设置要求的大写字母、小写字母、数字和特殊字符的数量。
+
+你还可以设置密码中被允许的字符类的最大或最小数量。
+
+下面的例子展示了设置一个新密码中被要求的字符类的最小数量:
+
+```
+password requisite pam_pwquality.so retry=3 minclass=2
+```
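+
+这些选项也可以合并写在同一行里。下面是一个示意配置(各选项取值仅为示例),同时要求最小长度为 8,且至少各含一个大写字母、小写字母、数字和特殊字符:
+
+```
+password requisite pam_pwquality.so retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
+```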
+
+### 在基于 RPM 的系统中设置密码复杂度
+
+**在 RHEL 7.x / CentOS 7.x / Scientific Linux 7.x 中:**
+
+设置密码中至少有一个小写字母,执行:
+
+```
+# authconfig --enablereqlower --update
+```
+
+查看该设置,执行:
+
+```
+# grep "^lcredit" /etc/security/pwquality.conf
+```
+
+**输出样例:**
+
+```
+lcredit = -1
+```
+
+类似地,使用以下命令去设置密码中至少有一个大写字母:
+
+```
+# authconfig --enablerequpper --update
+```
+
+查看该设置:
+
+```
+# grep "^ucredit" /etc/security/pwquality.conf
+```
+
+**输出样例:**
+
+```
+ucredit = -1
+```
+
+设置密码中至少有一个数字,执行:
+
+```
+# authconfig --enablereqdigit --update
+```
+
+查看该设置,执行:
+
+```
+# grep "^dcredit" /etc/security/pwquality.conf
+```
+
+**输出样例:**
+
+```
+dcredit = -1
+```
+
+设置密码中至少含有一个其他字符,执行:
+
+```
+# authconfig --enablereqother --update
+```
+
+查看该设置,执行:
+
+```
+# grep "^ocredit" /etc/security/pwquality.conf
+```
+
+**输出样例:**
+
+```
+ocredit = -1
+```
+
+在 **RHEL 6.x / CentOS 6.x / Scientific Linux 6.x systems** 中,以 root 身份编辑 `/etc/pam.d/system-auth` 文件:
+
+```
+# nano /etc/pam.d/system-auth
+```
+
+找到下面这行并且在该行末尾添加:
+
+```
+password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
+```
+
+如上设置中,密码必须要至少包含 `8` 个字符。另外,密码必须至少包含一个大写字母、一个小写字母、一个数字和一个其他字符。
+
+### 在基于 DEB 的系统中设置密码有效期
+
+现在,我们将要设置下面的策略。
+
+ 1. 密码被使用的最长天数。
+ 2. 密码更改允许的最小间隔天数。
+ 3. 密码到期之前发出警告的天数。
+
+设置这些策略,编辑:
+
+```
+$ sudo nano /etc/login.defs
+```
+
+在你的每个需求后设置值。
+
+```
+PASS_MAX_DAYS 100
+PASS_MIN_DAYS 0
+PASS_WARN_AGE 7
+```
+
+
+
+正如你在上面样例中看到的一样,用户应该每 `100` 天修改一次密码,并且密码到期之前的 `7` 天开始出现警告信息。
+
+请注意,这些设置将会在新创建的用户中有效。
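+
+可以用 `grep` 快速确认这几个值是否生效(示意,输出对应上面设置的值):
+
+```
+$ grep -E '^PASS_(MAX|MIN|WARN)' /etc/login.defs
+PASS_MAX_DAYS   100
+PASS_MIN_DAYS   0
+PASS_WARN_AGE   7
+```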
+
+为已存在的用户设置修改密码的最大间隔天数,你必须要运行下面的命令:
+
+```
+$ sudo chage -M <最大天数> <用户名>
+```
+
+设置修改密码的最小间隔天数,执行:
+
+```
+$ sudo chage -m <最小天数> <用户名>
+```
+
+设置密码到期之前的警告,执行:
+
+```
+$ sudo chage -W <天数> <用户名>
+```
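+
+举个例子,假设要把用户 `sk`(用户名仅为示例)的密码最长使用天数设为 100 天、最小间隔设为 0 天、警告期设为 7 天,可以合并成一条命令:
+
+```
+$ sudo chage -M 100 -m 0 -W 7 sk
+```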
+
+显示已存在用户的密码有效期信息,执行:
+
+```
+$ sudo chage -l sk
+```
+
+这里,**sk** 是我的用户名。
+
+**输出样例:**
+
+```
+Last password change : Feb 24, 2017
+Password expires : never
+Password inactive : never
+Account expires : never
+Minimum number of days between password change : 0
+Maximum number of days between password change : 99999
+Number of days of warning before password expires : 7
+```
+
+正如你在上面看到的输出一样,该密码是无限期的。
+
+修改已存在用户的密码有效期,
+
+```
+$ sudo chage -E 24/06/2018 -m 5 -M 90 -I 10 -W 10 sk
+```
+
+上面的命令将会把用户 `sk` 的账号设置为在 `24/06/2018` 过期,并把修改密码的最小间隔设置为 `5` 天、最大间隔设置为 `90` 天;密码过期 `10` 天后账号将被自动锁定,且在到期之前 `10` 天开始显示警告信息。
+
+### 在基于 RPM 的系统中设置密码效期
+
+这点和基于 DEB 的系统是相同的。
+
+### 在基于 DEB 的系统中禁止使用近期使用过的密码
+
+你可以禁止用户设置已经使用过的密码。通俗地讲,就是用户不能再次使用相同的密码。
+
+为设置这一点,编辑 `/etc/pam.d/common-password` 文件:
+
+```
+$ sudo nano /etc/pam.d/common-password
+```
+
+找到下面这行并且在末尾添加文字 `remember=5`:
+
+```
+password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
+```
+
+上面的策略将会阻止用户去使用最近使用过的 5 个密码。
+
+### 在基于 RPM 的系统中禁止使用近期使用过的密码
+
+这点对于 RHEL 6.x 和 RHEL 7.x 和它们的衍生系统 CentOS、Scientific Linux 是相同的。
+
+以 root 身份编辑 `/etc/pam.d/system-auth` 文件,
+
+```
+# vi /etc/pam.d/system-auth
+```
+
+找到下面这行,并且在末尾添加文字 `remember=5`。
+
+```
+password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
+```
+
+现在你了解了 Linux 中的密码策略,以及如何在基于 DEB 和 RPM 的系统中设置不同的密码策略。
+
+就这样,我很快会在这里发表另外一篇有趣而且有用的文章。在此之前请保持关注。如果您觉得本教程对您有帮助,请在您的社交、专业网络上分享并支持我们。
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[liujing97](https://github.com/liujing97)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[2]: http://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_003-2-1.jpg
diff --git a/published/20171117 5 open source fonts ideal for programmers.md b/published/20171117 5 open source fonts ideal for programmers.md
new file mode 100644
index 0000000000..20f38b280d
--- /dev/null
+++ b/published/20171117 5 open source fonts ideal for programmers.md
@@ -0,0 +1,99 @@
+5 款适合程序员的开源字体
+======
+
+> 编程字体有些在普通字体中没有的特点,这五种字体你可以看看。
+
+
+
+什么是最好的编程字体呢?首先,你需要考虑到字体被设计出来的初衷可能并不相同。当选择一款用于休闲阅读的字体时,读者希望该字体的字母能够顺滑地衔接,提供一种轻松愉悦的体验。一款标准字体的每个字符,类似于拼图的一块,它需要被仔细的设计,从而与整个字体的其他部分融合在一起。
+
+然而,在编写代码时,通常来说对字体的要求更具功能性。这也是为什么大多数程序员在选择时更偏爱使用固定宽度的等宽字体。选择一款带有容易分辨的数字和标点的字体在美学上令人愉悦;但它是否拥有满足你需求的版权许可也是非常重要的。
+
+某些功能使得字体更适合编程。首先要清楚是什么使得等宽字体看上去井然有序。这里,让我们对比一下字母 `w` 和字母 `i`。当选择一款字体时,重要的是要考虑字母本身及周围的空白。在纸质的书籍和报纸中,有效地利用空间是极为重要的,为瘦小的 `i` 分配较小的空间,为宽大的字母 `w` 分配较大的空间是有意义的。
+
+然而在终端中,你没有这些限制。每个字符享有相等的空间将非常有用。这么做的首要好处是你可以随意扫过一段代码来“估测”代码的长度。第二个好处是能够轻松地对齐字符和标点,高亮在视觉上更加明显。另外打印纸张上的等宽字体比均衡字体更加容易通过 OCR 识别。
+
+在本篇文章中,我们将探索 5 款卓越的开源字体,使用它们来编程和写代码都非常理想。
+
+### 1、FiraCode:最佳整套编程字体
+
+![FiraCode 示例][1]
+
+*FiraCode, Andrew Lekashman*
+
+在我们列表上的首款字体是 [FiraCode][3],一款真正符合甚至超越了其职责的编程字体。FiraCode 是 Fira 的扩展,而后者是由 Mozilla 委托设计的开源字体族。使得 FiraCode 与众不同的原因是它修改了在代码中常使用的一些符号的组合或连字,使得它看上去更具可读性。这款字体有几种不同的风格,特别是还包含 Retina 选项。你可以在它的 [GitHub][3] 主页中找到它被使用到多种编程语言中的例子。
+
+![FiraCode compared to Fira Mono][2]
+
+*FiraCode 与 Fira Mono 的对比,[Nikita Prokopov][3],源自 GitHub*
+
+### 2、Inconsolata:优雅且由卓越设计者创造
+
+![Inconsolata 示例][4]
+
+*Inconsolata, Andrew Lekashman*
+
+[Inconsolata][5] 是最为漂亮的等宽字体之一。从 2006 年开始它便一直是一款开源和可免费获取的字体。它的创造者 Raph Levien 在设计 Inconsolata 时秉承的一个基本原则是:等宽字体并不应该那么糟糕。使得 Inconsolata 如此优秀的两个原因是:对于 `0` 和 `o` 这两个字符它们有很大的不同,另外它还特别地设计了标点符号。
+
+### 3、DejaVu Sans Mono:许多 Linux 发行版的标准配置,庞大的字形覆盖率
+
+![DejaVu Sans Mono example][6]
+
+*DejaVu Sans Mono, Andrew Lekashman*
+
+受在 GNOME 中使用的带有版权和闭源的 Vera 字体的启发,[DejaVu Sans Mono][7] 是一个非常受欢迎的编程字体,几乎每个现代的 Linux 发行版中都带有它。在 Book Variant 风格下,DejaVu 拥有惊人的 3310 个字形,而一般的字体只有 100 个左右。在工作中你将不会遇到缺少某些字符的情况,它覆盖了 Unicode 的绝大部分,并且一直在活跃地增长着。
+
+### 4、Source Code Pro:优雅、可读性强,由 Adobe 中一个小巧但天才的团队打造
+
+![Source Code Pro example][8]
+
+*Source Code Pro, Andrew Lekashman*
+
+由 Paul Hunt 和 Teo Tuominen 设计,[Source Code Pro][9] 是[由 Adobe 创造的][10]其首款开源字体。Source Code Pro 值得注意的地方在于它极具可读性,且对于容易混淆的字符和标点有着非常好的区分度。Source Code Pro 也是一个字体族,有 7 种不同的风格:Extralight、Light、Regular、Medium、Semibold、Bold 和 Black,每种风格都还有斜体变体。
+
+![Differentiating potentially confusable characters][11]
+
+*潜在易混淆的字符之间的区别,[Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
+
+![Metacharacters with special meaning in computer languages][12]
+
+*在计算机领域中有特别含义的特殊元字符, [Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
+
+### 5、Noto Mono:巨量的语言覆盖率,由 Google 中的一个大团队打造
+
+![Noto Mono example][13]
+
+*Noto Mono, Andrew Lekashman*
+
+我们列表上的最后一款字体是 [Noto Mono][14],这是 Google 打造的庞大 Noto 字体族中的等宽版本。尽管它并不是专为编程所设计,但它可用于 209 种语言(包括 emoji 颜文字!),并且一直在维护和更新。该项目非常庞大,是 Google “组织全世界信息”这一使命的延续。假如你想更多地了解它,可以查看这个绝妙的[关于这些字体的视频][15]。
+
+### 选择合适的字体
+
+无论你选择哪款字体,你都可能每天花费数小时面对它,所以请确保它在审美和哲学层面上与你产生共鸣。选择正确的开源字体是确保你拥有最佳生产环境的一个重要部分。这些字体都是很棒的选择,每个都具有让它脱颖而出的强大特性。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/11/how-select-open-source-programming-font
+
+作者:[Andrew Lekashman][a]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com
+[1]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
+[2]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
+[3]:https://github.com/tonsky/FiraCode
+[4]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
+[5]:http://www.levien.com/type/myfonts/inconsolata.html
+[6]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
+[7]:https://dejavu-fonts.github.io/
+[8]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
+[9]:https://github.com/adobe-fonts/source-code-pro
+[10]:https://blog.typekit.com/2012/09/24/source-code-pro/
+[11]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
+[12]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
+[13]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
+[14]:https://www.google.com/get/noto/#mono-mono
+[15]:https://www.youtube.com/watch?v=AAzvk9HSi84
diff --git a/published/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md b/published/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md
new file mode 100644
index 0000000000..a137ed5915
--- /dev/null
+++ b/published/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md
@@ -0,0 +1,173 @@
+12 个最佳 GNOME(GTK)主题
+======
+
+> 让我们来看一些漂亮的 GTK 主题,你不仅可以用在 Ubuntu 上,也可以用在其它使用 GNOME 的 Linux 发行版上。
+
+对于我们这些使用 Ubuntu 的人来说,默认的桌面环境从 Unity 换成了 Gnome,这使得主题和定制变得前所未有的简单。Gnome 有一个相当大的定制用户社区,其中不乏可供用户选择的漂亮的 GTK 主题。最近几个月,我陆续发现了一些喜欢的主题,我相信它们是你所能找到的最好的主题之一。
+
+### Ubuntu 和其它 Linux 发行版的最佳主题
+
+这不是一个详细清单,可能不包括一些你已经使用和喜欢的主题,但希望你能至少找到一个能让你喜爱的没见过的主题。所有这里提及的主题都可以工作在 Gnome 3 上,不管是 Ubuntu 还是其它 Linux 发行版。有一些主题的屏幕截屏我没有,所以我从官方网站上找到了它们的图片。
+
+在这里列出的主题没有特别的次序。
+
+但是,在你看这些最好的 GNOME 主题前,你应该学习一下 [如何在 Ubuntu GNOME 中安装主题][1]。
+
+#### 1、Arc-Ambiance
+
+![][2]
+
+Arc 和 Arc 变体主题已经出现了相当长的时间,普遍认为它们是最好的主题之一。在这个示例中,我选择了 Arc-Ambiance ,因为它是 Ubuntu 中的默认 Ambiance 主题。
+
+我是 Arc 主题和默认 Ambiance 主题的粉丝,所以不用说,当我遇到一个融合了两者优点的主题,我不禁长吸了一口气。如果你是 Arc 主题的粉丝,但不是这个特定主题的粉丝,Gnome 的外观上当然还有适合你口味的大量的选择。
+
+- [下载 Arc-Ambiance 主题][3]
+
+#### 2、Adapta Colorpack
+
+![][4]
+
+Adapta 主题是我见过的最喜欢的扁平主题之一。像 Arc 一样,Adapta 被很多 Linux 用户广泛采用。我选择这个配色包,是因为一次下载就有数个可选择的配色方案。事实上,有 19 个配色方案可以选择,是的,你没看错,19 个呢!
+
+所以,如果你是如今常见的扁平风格/材料设计风格的粉丝,那么,在这个主题包中很可能至少有一个能满足你喜好的变体。
+
+- [下载 Adapta Colorpack 主题][5]
+
+#### 3、Numix Collection
+
+![][6]
+
+啊,Numix! 让我想起了我们一起度过的那些年!对于那些在过去几年装点过桌面环境的人来说,你肯定在某个时间点上遇到过 Numix 主题或图标包。Numix 可能是我爱上的第一个 Linux 现代主题,现在我仍然爱它。虽然经过这些年,但它仍然魅力不失。
+
+灰色色调贯穿主题,尤其是默认的粉红色高亮,带来了真正干净而完整的体验。你可能很难找到一个像 Numix 一样精美的主题包。而且在这个主题包中,你还有很多可供选择的余地,简直不要太棒了!
+
+- [下载 Numix Collection 主题][7]
+
+#### 4、Hooli
+
+![][8]
+
+Hooli 是一个已经出现了一段时间的主题,但是我最近才偶然发现它。我是很多扁平主题的粉丝,但是通常不太喜欢材料设计风格的主题。Hooli 像 Adapta 一样吸取了那些设计风格,但是我认为它和其它的那些有所不同。绿色高亮是我对这个主题最喜欢的部分之一,并且,它在不冲击整个主题方面做的很好。
+
+- [下载 Hooli 主题][9]
+
+#### 5、Arrongin/Telinkrin
+
+![][10]
+
+福利:二合一主题!它们是在主题领域中的相对新的竞争者。它们都吸取了 Ubuntu 接近完成的 “[communitheme][11]” 的思路,并带它到了你的桌面。这两个主题我能找到的唯一真正的区别就是颜色。Arrongin 以 Ubuntu 家族的橙色颜色为中心,而 Telinkrin 则更偏向于 KDE Breeze 系的蓝色,我个人更喜欢蓝色,但是两者都是极好的选择!
+
+- [下载 Arrongin/Telinkrin 主题][12]
+
+#### 6、Gnome-osx
+
+![][13]
+
+我不得不承认,通常当我看到主题名称带有 “osx” 或类似字样时,我都不会抱太多期望。大多数受 Apple 启发的主题看起来都比较雷同,我实在找不到使用它们的理由。但我想有两个主题打破了这种思维定式:这就是 Arc-osc 主题和 Gnome-osx 主题。
+
+我喜欢 Gnome-osx 主题的原因是它在 Gnome 桌面上看起来确实很像 OSX,并且在融入桌面环境而不至于变得太扁平方面做得很好。所以,对于那些喜欢稍微扁平的主题的人来说,如果你喜欢红黄绿按钮样式(用于关闭、最小化和最大化),这个主题非常适合你。
+
+- [下载 Gnome-osx 主题][14]
+
+#### 7、Ultimate Maia
+
+![][15]
+
+曾经有一段时间我使用 Manjaro Gnome。尽管那以后我又回到了 Ubuntu,但是,我希望我能打包带走的一个东西是 Manjaro 主题。如果你对 Manjaro 主题和我一样感受相同,那么你是幸运的,因为你可以带它到你想运行 Gnome 的任何 Linux 发行版!
+
+丰富的绿色颜色,Breeze 式的关闭、最小化、最大化按钮,以及全面雕琢过的主题使它成为一个不可抗拒的选择。如果你不喜欢绿色,它甚至为你提供一些其它颜色的变体。但是说实话……谁会不喜欢 Manjaro 的绿色呢?
+
+- [下载 Ultimate Maia 主题][16]
+
+#### 8、Vimix
+
+![][17]
+
+这是一个让我激动的主题。它是现代风格的,吸取了 macOS 的红黄绿按钮的风格,但并不是直接复制了它们,并且减少了多变的主题颜色,使之成为了大多数主题的独特替代品。它带来三个深色的变体和几个彩色配色,我们中大多数人都可以从中找到我们喜欢的。
+
+- [下载 Vimix 主题][18]
+
+#### 9、Ant
+
+![][19]
+
+像 Vimix 一样,Ant 从 macOS 的按钮颜色中吸取了灵感,但没有直接照搬其样式。Vimix 减少了颜色的花哨程度,Ant 则增加了丰富的颜色,在我的 System 76 Galago Pro 屏幕上看起来绚丽极了。三个主题变体之间差异很大,虽然不见得符合每个人的口味,但它无疑是最适合我的。
+
+- [下载 Ant 主题][20]
+
+#### 10、Flat Remix
+
+![][21]
+
+如果你还没注意到,我对关闭、最小化、最大化按钮的配色情有独钟。Flat Remix 使用的颜色组合是我从未在其他地方见过的:红色、蓝色和橙色。把它们放到一个几乎像是 Arc 和 Adapta 混合体的主题上,就有了 Flat Remix。
+
+我本人喜欢它的深色主题,但是换成亮色的也是非常好的。因此,如果你喜欢稍稍透明、风格一致的深色主题,以及偶尔的一点点颜色,那 Flat Remix 就适合你。
+
+- [下载 Flat Remix 主题][22]
+
+#### 11、Paper
+
+![][23]
+
+[Paper][24] 已经出现一段时间。我记得第一次使用它是在 2014 年。可以说,Paper 的图标包比其 GTK 主题更出名,但是这并不意味着它自身的主题不是一个极好的选择。即使我从一开始就倾心于 Paper 图标,我不能说当我第一次尝试它的时候我就是一个 Paper 主题忠实粉丝。
+
+当时我觉得把鲜亮的色彩以这种有趣的方式放进一个主题里,是一种“不成熟”的体验。几年后的现在,Paper 在我心目中已经成熟起来,至少可以说,我非常欣赏这个主题所采取的轻快风格。
+
+- [下载 Paper 主题][25]
+
+#### 12、Pop
+
+![][26]
+
+Pop 在这个列表上是一个较新的主题,是由 [System 76][27] 的人们创造的,Pop GTK 主题是前面列出的 Adapta 主题的一个分支,并带有一个匹配的图标包,图标包是先前提到的 Paper 图标包的一个分支。
+
+该主题是在 System 76 发布了 [他们自己的发行版][28] Pop!_OS 之后不久发布的。你可以阅读我的 [Pop!_OS 点评][29] 来了解更多信息。不用说,我认为 Pop 是一个极好的主题,带有华丽的装饰,并为 Gnome 桌面带来了一股清新之风。
+
+- [下载 Pop 主题][30]
+
+#### 结束语
+
+很明显,我们有比文中所描述的主题更多的选择,但是这些大多是我在最近几月所使用的最完整、最精良的主题。如果你认为我们错过一些你确实喜欢的主题,或你确实不喜欢我在上面描述的主题,那么在下面的评论区让我们知道,并分享你喜欢的主题更好的原因!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-gtk-themes/
+
+作者:[Phillip Prado][a]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/phillip/
+[1]:https://itsfoss.com/install-themes-ubuntu/
+[2]:https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/arcambaince.png
+[3]:https://www.gnome-look.org/p/1193861/
+[4]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/03/adapta.jpg
+[5]:https://www.gnome-look.org/p/1190851/
+[6]:https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/numix.png
+[7]:https://www.gnome-look.org/p/1170667/
+[8]:https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/hooli2.jpg
+[9]:https://www.gnome-look.org/p/1102901/
+[10]:https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/AT.jpg
+[11]:https://itsfoss.com/ubuntu-community-theme/
+[12]:https://www.gnome-look.org/p/1215199/
+[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
+[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
+[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
+[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
+[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
+[18]:https://www.gnome-look.org/p/1013698/
+[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
+[20]:https://www.opendesktop.org/p/1099856/
+[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
+[22]:https://www.opendesktop.org/p/1214931/
+[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
+[24]:https://itsfoss.com/install-paper-theme-linux/
+[25]:https://snwh.org/paper/download
+[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
+[27]:https://system76.com/
+[28]:https://itsfoss.com/system76-popos-linux/
+[29]:https://itsfoss.com/pop-os-linux-review/
+[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md
diff --git a/translated/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md b/published/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
similarity index 61%
rename from translated/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
rename to published/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
index e82c0a885e..0771c64582 100644
--- a/translated/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
+++ b/published/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
@@ -1,48 +1,48 @@
[#]: collector: "lujun9972"
-[#]: translator: " "
-[#]: reviewer: " "
-[#]: publisher: " "
+[#]: translator: "Auk7F7"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
[#]: subject: "Arch-Wiki-Man – A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline"
[#]: via: "https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/"
-[#]: author: "[Prakash Subramanian](https://www.2daygeek.com/author/prakash/)"
-[#]: url: " "
+[#]: author: "Prakash Subramanian https://www.2daygeek.com/author/prakash/"
+[#]: url: "https://linux.cn/article-10694-1.html"
-Arch-Wiki-Man – 一个以 Linux Man 手册样式离线浏览 Arch Wiki 的工具
+Arch-Wiki-Man:一个以 Linux Man 手册样式离线浏览 Arch Wiki 的工具
======
-现在上网已经很方便了,但技术上会有限制。
+现在上网已经很方便了,但技术上仍有限制。看到技术的发展,我很惊讶,但与此同时,各个领域也都存在不足。
-看到技术的发展,我很惊讶,但与此同时,各个地方都会出现衰退。
-
-当你搜索有关其他 Linux 发型版本的某些东西时,大多数时候你会首先得到一个第三方的链接,但是对于 Arch Linux 来说,每次你都会得到 Arch Wiki 页面的结果。
+当你搜索有关其他 Linux 发行版的某些东西时,大多数时候你会得到的是一个第三方的链接,但是对于 Arch Linux 来说,每次你都会得到 Arch Wiki 页面的结果。
因为 Arch Wiki 提供了除第三方网站以外的大多数解决方案。
到目前为止,你也许可以使用 Web 浏览器为你的 Arch Linux 系统找到一个解决方案,但现在你可以不用这么做了。
-一个名为 arch-wiki-man 的工具t提供了一个在命令行中更快地执行这个操作的方案。如果你是一个 Arch Linux 爱好者,我建议你阅读 **[Arch Linux 安装后指南][1]** ,它可以帮助你调整你的系统以供日常使用。
+一个名为 arch-wiki-man 的工具提供了一个在命令行中更快地执行这个操作的方案。如果你是一个 Arch Linux 爱好者,我建议你阅读 [Arch Linux 安装后指南][1],它可以帮助你调整你的系统以供日常使用。
### arch-wiki-man 是什么?
-[arch-wiki-man][2] 工具允许用户在离线的时候从命令行(CLI)中搜索 Arch Wiki 页面。它允许用户以 Linux Man 手册样式访问和搜索整个 Wiki 页面。
+[arch-wiki-man][2] 工具允许用户从命令行(CLI)中离线搜索 Arch Wiki 页面。它允许用户以 Linux Man 手册样式访问和搜索整个 Wiki 页面。
-而且,你无需切换到GUI。更新将每两天自动推送一次,因此,你的 Arch Wiki 本地副本页面将是最新的。这个工具的名字是`awman`, `awman` 是 Arch Wiki Man 的缩写。
+而且,你无需切换到 GUI。更新将每两天自动推送一次,因此,你的 Arch Wiki 本地副本页面将是最新的。这个工具的名字是 `awman`, `awman` 是 “Arch Wiki Man” 的缩写。
-我们已经写出了名为 **[Arch Wiki 命令行实用程序][3]** (arch-wiki-cli)的类似工具。它允许用户从互联网上搜索 Arch Wiki。但确保你因该在线使用这个实用程序。
+我们之前写过一篇类似工具 [Arch Wiki 命令行实用程序][3](arch-wiki-cli)的文章。这个工具允许用户从互联网上搜索 Arch Wiki。但你需要在线使用这个实用程序。
### 如何安装 arch-wiki-man 工具?
-arch-wiki-man 工具可以在 AUR 仓库(LCTT译者注:AUR 即 Arch 用户软件仓库(Archx User Repository))中获得,因此,我们需要使用 AUR 工具来安装它。有许多 AUR 工具可用,而且我们曾写了一篇有关非常著名的 AUR 工具: **[Yaourt AUR helper][4]** 和 **[Packer AUR helper][5]** 的文章,
+arch-wiki-man 工具可以在 AUR 仓库(LCTT 译注:AUR 即 Arch 用户软件仓库)中获得,因此,我们需要使用 AUR 工具来安装它。有许多 AUR 工具可用,而且我们曾写过一篇关于流行的 AUR 辅助工具 [Yaourt AUR helper][4] 和 [Packer AUR helper][5] 的文章。
```
$ yaourt -S arch-wiki-man
+```
-or
+或
+```
$ packer -S arch-wiki-man
```
-或者,我们可以使用 npm 包管理器来安装它,确保你已经在你的系统上安装了 **[NodeJS][6]** 。然后运行以下命令来安装它。
+或者,我们可以使用 npm 包管理器来安装它,确保你已经在你的系统上安装了 [NodeJS][6]。然后运行以下命令来安装它。
```
$ npm install -g arch-wiki-man
@@ -61,13 +61,15 @@ $ sudo awman-update
arch-wiki-md-repo has been successfully updated or reinstalled.
```
-awman-update 是一种更快更方便的更新方法。但是,你也可以通过运行以下命令重新安装arch-wiki-man 来获取更新。
+`awman-update` 是一种更快、更方便的更新方法。但是,你也可以通过运行以下命令重新安装 arch-wiki-man 来获取更新。
```
$ yaourt -S arch-wiki-man
+```
-or
+或
+```
$ packer -S arch-wiki-man
```
@@ -81,7 +83,7 @@ $ awman Search-Term
### 如何搜索多个匹配项?
-如果希望列出包含`installation`字符串的所有结果的标题,运行以下格式的命令,如果输出有多个结果,那么你将会获得一个选择菜单来浏览每个项目。
+如果希望列出包含 “installation” 字符串的所有结果的标题,运行以下格式的命令,如果输出有多个结果,那么你将会获得一个选择菜单来浏览每个项目。
```
$ awman installation
@@ -89,35 +91,39 @@ $ awman installation
![][8]
-详细页面的截屏
+详细页面的截屏:
![][9]
### 在标题和描述中搜索给定的字符串
- `-d` 或 `--desc-search` 选项允许用户在标题和描述中搜索给定的字符串。
+`-d` 或 `--desc-search` 选项允许用户在标题和描述中搜索给定的字符串。
```
$ awman -d mirrors
+```
-or
+或
+```
$ awman --desc-search mirrors
? Select an article: (Use arrow keys)
❯ [1/3] Mirrors: Related articles
- [2/3] DeveloperWiki-NewMirrors: Contents
- [3/3] Powerpill: Powerpill is a pac
+ [2/3] DeveloperWiki-NewMirrors: Contents
+ [3/3] Powerpill: Powerpill is a pac
```
### 在内容中搜索给定的字符串
- `-k` 或 `--apropos` 选项也允许用户在内容中搜索给定的字符串。但须注意,此选项会显著降低搜索速度,因为此选项会扫描整个 Wiki 页面的内容。
+`-k` 或 `--apropos` 选项也允许用户在内容中搜索给定的字符串。但须注意,此选项会显著降低搜索速度,因为此选项会扫描整个 Wiki 页面的内容。
```
$ awman -k openjdk
+```
-or
+或
+```
$ awman --apropos openjdk
? Select an article: (Use arrow keys)
❯ [1/26] Hadoop: Related articles
@@ -132,13 +138,15 @@ $ awman --apropos openjdk
### 在浏览器中打开搜索结果
- `-w` 或 `--web` 选项允许用户在 Web 浏览器中打开搜索结果。
+`-w` 或 `--web` 选项允许用户在 Web 浏览器中打开搜索结果。
```
$ awman -w AUR helper
+```
-or
+或
+```
$ awman --web AUR helper
```
@@ -146,7 +154,7 @@ $ awman --web AUR helper
### 以其他语言搜索
-`-w` 或 `--web` 选项允许用户在 Web 浏览器中打开搜索结果。想要查看支持的语言列表,请运行以下命令。
+想要查看支持的语言列表,请运行以下命令。
```
$ awman --list-languages
@@ -196,7 +204,7 @@ via: https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[Auk7F7](https://github.com/Auk7F7)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20190104 Take to the virtual skies with FlightGear.md b/published/20190104 Take to the virtual skies with FlightGear.md
new file mode 100644
index 0000000000..eac736b98e
--- /dev/null
+++ b/published/20190104 Take to the virtual skies with FlightGear.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10709-1.html)
+[#]: subject: (Take to the virtual skies with FlightGear)
+[#]: via: (https://opensource.com/article/19/1/flightgear)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+使用 FlightGear 翱翔天空
+======
+
+> 你梦想驾驶飞机么?试试开源飞行模拟器 FlightGear 吧。
+
+
+
+如果你曾梦想驾驶飞机,你会喜欢 [FlightGear][1] 的。它是一个功能齐全的[开源][2]飞行模拟器,可在 Linux、MacOS 和 Windows 中运行。
+
+FlightGear 项目始于 1996 年,原因是对商业飞行模拟程序的不满,因为这些程序无法扩展。它的目标是创建一个复杂、强大、可扩展、开放的飞行模拟器框架,来用于学术界和飞行员培训,以及任何想要玩飞行模拟场景的人。
+
+### 入门
+
+FlightGear 的硬件要求适中,只需一块支持 OpenGL、能实现流畅帧率的 3D 加速显卡。它在我配备 i5 处理器和仅 4GB 内存的 Linux 笔记本上运行良好。它的文档包括[在线手册][3]、一个面向[用户][5]和[开发者][6]的 [wiki][4] 门户网站,还有大量的教程(例如它的默认飞机 [Cessna 172p][7])教你如何操作它。
+
+在 [Fedora][8] 和 [Ubuntu][9] Linux 中很容易安装。Fedora 用户可以参考 [Fedora 安装页面][10]来运行 FlightGear。
+
+在 Ubuntu 18.04 中,我需要安装一个仓库:
+
+```
+$ sudo add-apt-repository ppa:saiarcot895/flightgear
+$ sudo apt-get update
+$ sudo apt-get install flightgear
+```
+
+安装完成后,我从 GUI 启动它,但你也可以通过输入以下命令从终端启动应用:
+
+```
+$ fgfs
+```
+
+### 配置 FlightGear
+
+应用窗口左侧的菜单提供配置选项。
+
+
+
+“Summary” 返回应用的主页面。
+
+“Aircraft” 显示你已安装的飞机,并提供了从 FlightGear 的默认“机库”中安装多达 539 种其他飞机的选项。我安装了 Cessna 150L、Piper J-3 Cub 和 Bombardier CRJ-700。一些飞机(包括 CRJ-700)有教你如何驾驶商用喷气式飞机的教程。我发现这些教程内容翔实且准确。
+
+
+
+要选择驾驶的飞机,请将其高亮显示,然后单击菜单底部的 “Fly!”。我选择了默认的 Cessna 172p 并发现驾驶舱的刻画非常准确。
+
+
+
+默认机场是檀香山,但你可以在 “Location” 菜单中输入你最喜欢机场的 [ICAO 机场代码][11] 进行修改。我找到了一些小型的本地无塔机场,如纽约州的 Olean 和 Dunkirk,也找到了包括 Buffalo、O'Hare 和 Raleigh 在内的大型机场,甚至可以选择特定的跑道。
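+
+机场和飞机也可以在启动时直接通过命令行参数指定。下面是一个示意(`--aircraft` 和 `--airport` 是 fgfs 常用的启动选项,机场以上面提到的 Buffalo 为例):
+
+```
+$ fgfs --aircraft=c172p --airport=KBUF
+```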
+
+在 “Environment” 下,你可以调整一天中的时间、季节和天气。模拟包括高级天气建模和从 [NOAA][12] 下载当前天气的能力。
+
+“Settings” 提供在暂停模式中开始模拟的选项。同样在设置中,你可以选择多人模式,这样你就可以与 FlightGear 支持者的全球服务器网络上的其他玩家一起“飞行”。你必须有比较快速的互联网连接来支持此功能。
+
+“Add-ons” 菜单允许你下载飞机和其他场景。
+
+### 开始飞行
+
+为了“起飞”我的 Cessna,我使用了罗技操纵杆,它用起来不错。你可以使用顶部 “File” 菜单中的选项校准操纵杆。
+
+总的来说,我发现模拟非常准确,图形界面也很棒。你自己试下 FlightGear —— 我想你会发现它是一个非常有趣和完整的模拟软件。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/flightgear
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: http://home.flightgear.org/
+[2]: http://wiki.flightgear.org/GNU_General_Public_License
+[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
+[4]: http://wiki.flightgear.org/FlightGear_Wiki
+[5]: http://wiki.flightgear.org/Portal:User
+[6]: http://wiki.flightgear.org/Portal:Developer
+[7]: http://wiki.flightgear.org/Cessna_172P
+[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
+[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
+[10]: https://apps.fedoraproject.org/packages/FlightGear/
+[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
+[12]: https://www.noaa.gov/
diff --git a/published/20190108 How To Understand And Identify File types in Linux.md b/published/20190108 How To Understand And Identify File types in Linux.md
new file mode 100644
index 0000000000..1fc8bc6aac
--- /dev/null
+++ b/published/20190108 How To Understand And Identify File types in Linux.md
@@ -0,0 +1,348 @@
+[#]: collector: (lujun9972)
+[#]: translator: (liujing97)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10716-1.html)
+[#]: subject: (How To Understand And Identify File types in Linux)
+[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+怎样理解和识别 Linux 中的文件类型
+======
+
+众所周知,在 Linux 中一切皆为文件,包括硬盘和显卡等。在 Linux 中浏览文件系统时,看到的大部分文件都是普通文件和目录。但除此之外还有其他类型,对应 5 种不同的作用。因此,理解 Linux 中的文件类型在许多方面都是非常重要的。
+
+如果你不相信,那只需要浏览全文,就会发现它有多重要。如果你不能理解文件类型,就不能够毫无畏惧的做任意的修改。
+
+如果你做了一些错误的修改,会毁坏你的文件系统,那么当你操作的时候请小心一点。在 Linux 系统中文件是非常重要的,因为所有的设备和守护进程都被存储为文件。
+
+### 在 Linux 中有多少种可用类型?
+
+据我所知,在 Linux 中总共有 7 种类型的文件,分为 3 大类。具体如下。
+
+ * 普通文件
+ * 目录文件
+ * 特殊文件(该类有 5 个文件类型)
+ * 链接文件
+ * 字符设备文件
+ * Socket 文件
+ * 命名管道文件
+ * 块文件
+
+参考下面的表可以更好地理解 Linux 中的文件类型。
+
+| 符号 | 意义 |
+| ------- | --------------------------------- |
+| `-` | 普通文件。长列表中以连字符 `-` 开头。 |
+| `d` | 目录文件。长列表中以英文字母 `d` 开头。 |
+| `l` | 链接文件。长列表中以英文字母 `l` 开头。 |
+| `c` | 字符设备文件。长列表中以英文字母 `c` 开头。 |
+| `s` | Socket 文件。长列表中以英文字母 `s` 开头。 |
+| `p` | 命名管道文件。长列表中以英文字母 `p` 开头。 |
+| `b` | 块文件。长列表中以英文字母 `b` 开头。 |
+
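+结合这张表,可以用一条简单的命令统计某个目录下各种文件类型的数量(以 `/dev` 为例,权限字符串的第一个字符就是类型符号):
+
+```
+$ ls -l /dev | awk 'NR>1 {print substr($1,1,1)}' | sort | uniq -c
+```
+
+输出会按类型符号(`b`、`c`、`d`、`l` 等)分组计数,具体数字因系统而异。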
+
+### 方法 1:手动识别 Linux 中的文件类型
+
+如果你很了解 Linux,那么你可以借助上表很容易地识别文件类型。
+
+#### 在 Linux 中如何查看普通文件?
+
+在 Linux 中使用下面的命令去查看普通文件。在 Linux 文件系统中普通文件可以出现在任何地方。
+普通文件的颜色是“白色”。
+
+```
+# ls -la | grep ^-
+-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
+-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
+-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
+-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
+-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
+-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
+-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
+-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
+-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
+-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
+-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
+```
+
+#### 在 Linux 中如何查看目录文件?
+
+在 Linux 中使用下面的命令去查看目录文件。在 Linux 文件系统中目录文件可以出现在任何地方。目录文件的颜色是“蓝色”。
+
+```
+# ls -la | grep ^d
+drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
+drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
+drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
+drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
+```
+
+#### 在 Linux 中如何查看链接文件?
+
+在 Linux 中使用下面的命令去查看链接文件。在 Linux 文件系统中链接文件可以出现在任何地方。
+链接文件有两种可用类型,软连接和硬链接。链接文件的颜色是“浅绿宝石色”。
+
+```
+# ls -la | grep ^l
+lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
+lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
+```
+
+#### 在 Linux 中如何查看字符设备文件?
+
+在 Linux 中使用下面的命令查看字符设备文件。字符设备文件仅出现在特定位置。它出现在目录 `/dev` 下。字符设备文件的颜色是“黄色”。
+
+```
+# ls -la | grep ^c
+crw-------. 1 root root 5, 1 Jan 28 14:05 console
+crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
+crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
+crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
+crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
+crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
+```
+
+#### 在 Linux 中如何查看块文件?
+
+在 Linux 中使用下面的命令查看块文件。块文件仅出现在特定位置。它出现在目录 `/dev` 下。块文件的颜色是“黄色”。
+
+```
+# ls -la | grep ^b
+brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
+brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
+brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
+brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
+brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
+```
+
+#### 在 Linux 中如何查看 Socket 文件?
+
+在 Linux 中使用下面的命令查看 Socket 文件。Socket 文件可以出现在任何地方。Scoket 文件的颜色是“粉色”。(LCTT 译注:此处及下面关于 Socket 文件、命名管道文件可出现的位置原文描述有误,已修改。)
+
+```
+# ls -la | grep ^s
+srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
+```
+
+#### 在 Linux 中如何查看命名管道文件?
+
+在 Linux 中使用下面的命令查看命名管道文件。命名管道文件可以出现在任何地方。命名管道文件的颜色是“黄色”。
+
+```
+# ls -la | grep ^p
+prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
+prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
+```
+
+### 方法 2:在 Linux 中如何使用 file 命令识别文件类型
+
+在 Linux 中,`file` 命令可以帮助我们确定文件类型。它会按顺序进行三组测试:文件系统测试、魔术字节测试和语言测试,用第一个成功的测试结果来判定文件类型。
+
+#### 在 Linux 中如何使用 file 命令查看普通文件
+
+在终端中输入 `file` 命令,后跟普通文件的文件名。`file` 命令会读取所给文件的内容,并准确地显示文件的类型。
+
+这就是为什么不同的普通文件会得到不同结果的原因。参考下面普通文件的不同结果。
+
+```
+# file 2daygeek_access.log
+2daygeek_access.log: ASCII text, with very long lines
+
+# file powertop.html
+powertop.html: HTML document, ASCII text, with very long lines
+
+# file 2g-test
+2g-test: JSON data
+
+# file powertop.txt
+powertop.txt: HTML document, UTF-8 Unicode text, with very long lines
+
+# file 2g-test-05-01-2019.tar.gz
+2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
+```
+
+#### 在 Linux 中如何使用 file 命令查看目录文件?
+
+在终端中输入 `file` 命令,后跟目录名。参阅下面的结果。
+
+```
+# file Pictures/
+Pictures/: directory
+```
+
+#### 在 Linux 中如何使用 file 命令查看链接文件?
+
+在终端中输入 `file` 命令,后跟链接文件的文件名。参阅下面的结果。
+
+```
+# file log
+log: symbolic link to /run/systemd/journal/dev-log
+```
+
+#### 在 Linux 中如何使用 file 命令查看字符设备文件?
+
+在终端中输入 `file` 命令,后跟字符设备文件的文件名。参阅下面的结果。
+
+```
+# file vcsu
+vcsu: character special (7/64)
+```
+
+#### 在 Linux 中如何使用 file 命令查看块文件?
+
+在终端中输入 `file` 命令,后跟块文件的文件名。参阅下面的结果。
+
+```
+# file sda1
+sda1: block special (8/1)
+```
+
+#### 在 Linux 中如何使用 file 命令查看 Socket 文件?
+
+在终端中输入 `file` 命令,后跟 Socket 文件的文件名。参阅下面的结果。
+
+```
+# file system_bus_socket
+system_bus_socket: socket
+```
+
+#### 在 Linux 中如何使用 file 命令查看命名管道文件?
+
+在终端中输入 `file` 命令,后跟命名管道文件的文件名。参阅下面的结果。
+
+```
+# file pipe-test
+pipe-test: fifo (named pipe)
+```
+
+### 方法 3:在 Linux 中如何使用 stat 命令识别文件类型?
+
+`stat` 命令允许我们去查看文件类型或文件系统状态。该实用程序比 `file` 命令提供更多的信息。它显示文件的大量信息,例如大小、块大小、IO 块大小、Inode 值、链接、文件权限、UID、GID、文件的访问/更新和修改的时间等详细信息。
+
+#### 在 Linux 中如何使用 stat 命令查看普通文件?
+
+在终端中输入 `stat` 命令,后跟普通文件的文件名。参阅下面的结果。
+
+```
+# stat 2daygeek_access.log
+ File: 2daygeek_access.log
+ Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
+Device: 10301h/66305d Inode: 1727555 Links: 1
+Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
+Access: 2019-01-03 14:05:26.430328867 +0530
+Modify: 2019-01-03 14:05:26.460328868 +0530
+Change: 2019-01-03 14:05:26.460328868 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看目录文件?
+
+在终端中输入 `stat` 命令,后跟目录名。参阅下面的结果。
+
+```
+# stat Pictures/
+ File: Pictures/
+ Size: 4096 Blocks: 8 IO Block: 4096 directory
+Device: 10301h/66305d Inode: 1703982 Links: 3
+Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
+Access: 2018-11-24 03:22:11.090000828 +0530
+Modify: 2019-01-05 18:27:01.546958817 +0530
+Change: 2019-01-05 18:27:01.546958817 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看链接文件?
+
+在终端中输入 `stat` 命令,后跟链接文件的文件名。参阅下面的结果。
+
+```
+# stat /dev/log
+ File: /dev/log -> /run/systemd/journal/dev-log
+ Size: 28 Blocks: 0 IO Block: 4096 symbolic link
+Device: 6h/6d Inode: 278 Links: 1
+Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
+Access: 2019-01-05 16:36:31.033333447 +0530
+Modify: 2019-01-05 16:36:30.766666768 +0530
+Change: 2019-01-05 16:36:30.766666768 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看字符设备文件?
+
+在终端中输入 `stat` 命令,后跟字符设备文件的文件名。参阅下面的结果。
+
+```
+# stat /dev/vcsu
+ File: /dev/vcsu
+ Size: 0 Blocks: 0 IO Block: 4096 character special file
+Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
+Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
+Access: 2019-01-05 16:36:31.056666781 +0530
+Modify: 2019-01-05 16:36:31.056666781 +0530
+Change: 2019-01-05 16:36:31.056666781 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看块文件?
+
+在终端中输入 `stat` 命令,后跟块文件的文件名。参阅下面的结果。
+
+```
+# stat /dev/sda1
+ File: /dev/sda1
+ Size: 0 Blocks: 0 IO Block: 4096 block special file
+Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
+Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
+Access: 2019-01-05 16:36:31.596666806 +0530
+Modify: 2019-01-05 16:36:31.596666806 +0530
+Change: 2019-01-05 16:36:31.596666806 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看 Socket 文件?
+
+在终端中输入 `stat` 命令,后跟 Socket 文件的文件名。参阅下面的结果。
+
+```
+# stat /var/run/dbus/system_bus_socket
+ File: /var/run/dbus/system_bus_socket
+ Size: 0 Blocks: 0 IO Block: 4096 socket
+Device: 15h/21d Inode: 576 Links: 1
+Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
+Access: 2019-01-05 16:36:31.823333482 +0530
+Modify: 2019-01-05 16:36:31.810000149 +0530
+Change: 2019-01-05 16:36:31.810000149 +0530
+ Birth: -
+```
+
+#### 在 Linux 中如何使用 stat 命令查看命名管道文件?
+
+在终端中输入 `stat` 命令,后跟命名管道文件的文件名。参阅下面的结果。
+
+```
+# stat pipe-test
+ File: pipe-test
+ Size: 0 Blocks: 0 IO Block: 4096 fifo
+Device: 10301h/66305d Inode: 1705583 Links: 1
+Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
+Access: 2019-01-06 02:00:03.040394731 +0530
+Modify: 2019-01-06 02:00:03.040394731 +0530
+Change: 2019-01-06 02:00:03.040394731 +0530
+ Birth: -
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[liujing97](https://github.com/liujing97)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
diff --git a/published/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md b/published/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
new file mode 100644
index 0000000000..7a0b39efa2
--- /dev/null
+++ b/published/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
@@ -0,0 +1,153 @@
+[#]: collector: (lujun9972)
+[#]: translator: (liujing97)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10727-1.html)
+[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
+[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Linux 中获取硬盘分区或文件系统的 UUID 的七种方法
+======
+
+作为一个 Linux 系统管理员,你应该知道如何查看分区或文件系统的 UUID,因为现在大多数的 Linux 系统都使用 UUID 挂载分区,你可以在 `/etc/fstab` 文件中验证这一点。
+
+有许多可用的实用程序可以查看 UUID。本文我们将会向你展示多种查看 UUID 的方法,并且你可以选择一种适合于你的方法。
+
+### 何为 UUID?
+
+UUID 意即通用唯一识别码,它可以帮助 Linux 系统识别一个磁盘分区而不是块设备文件。
+
+自内核 2.15.1 起,libuuid 就是 util-linux-ng 包中的一部分,它被默认安装在 Linux 系统中。UUID 由该库生成,可以合理地认为在一个系统中 UUID 是唯一的,并且在所有系统中也是唯一的。
+
+这是在计算机系统中用来标识信息的一个 128 位(比特)的数字。UUID 最初被用在阿波罗网络计算机系统(NCS)中,之后 UUID 被开放软件基金会(OSF)标准化,成为分布式计算环境(DCE)的一部分。
+
+UUID 以 32 个十六进制的数字表示,被连字符分割为 5 组显示,总共的 36 个字符的格式为 8-4-4-4-12(32 个字母或数字和 4 个连字符)。
+
+例如: `d92fa769-e00f-4fd7-b6ed-ecf7224af7fa`
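+
+如果想亲手生成一个这种格式的 UUID,可以用 util-linux 自带的 `uuidgen`,或者直接读取内核提供的接口(输出是随机的,下面仅为示例):
+
+```
+$ uuidgen
+d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
+
+$ cat /proc/sys/kernel/random/uuid
+5ae7541c-2ed6-4e4e-bbd9-b09f0c3a1db4
+```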
+
+我的 `/etc/fstab` 文件示例:
+
+```
+# cat /etc/fstab
+
+# /etc/fstab: static file system information.
+#
+# Use 'blkid' to print the universally unique identifier for a device; this may
+# be used with UUID= as a more robust way to name devices that works even if
+# disks are added and removed. See fstab(5).
+#
+#
+UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
+UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
+```
+
+我们可以使用下面的 7 个命令来查看。
+
+ * `blkid` 命令:定位或打印块设备的属性。
+ * `lsblk` 命令:列出所有可用的或指定的块设备的信息。
+ * `hwinfo` 命令:硬件信息工具,是另外一个很好的实用工具,用于查询系统中已存在硬件。
+ * `udevadm` 命令:udev 管理工具
+ * `tune2fs` 命令:调整 ext2/ext3/ext4 文件系统上的可调文件系统参数。
+ * `dumpe2fs` 命令:查询 ext2/ext3/ext4 文件系统的信息。
+ * 使用 `by-uuid` 路径:该目录下包含有 UUID 和实际的块设备文件,UUID 与实际的块设备文件链接在一起。
+
+### Linux 中如何使用 blkid 命令查看磁盘分区或文件系统的 UUID?
+
+`blkid` 是定位或打印块设备属性的命令行实用工具。它利用 libblkid 库在 Linux 系统中获得到磁盘分区的 UUID。
+
+```
+# blkid
+/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
+/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
+/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
+/dev/sdc5: PARTUUID="8cc8f9e5-05"
+```
+
+### Linux 中如何使用 lsblk 命令查看磁盘分区或文件系统的 UUID?
+
+`lsblk` 列出所有有关可用或指定块设备的信息。`lsblk` 命令读取 sysfs 文件系统和 udev 数据库以收集信息。
+
+如果 udev 数据库不可用,或者编译 lsblk 时没有加入 udev 支持,它会试图从块设备中读取卷标、UUID 和文件系统类型。这种情况下,必须以 root 身份运行。该命令默认会以类似于树的格式打印出所有的块设备(RAM 盘除外)。
+
+```
+# lsblk -o name,mountpoint,size,uuid
+NAME MOUNTPOINT SIZE UUID
+sda 30G
+└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
+sdb 10G
+sdc 10G
+├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
+├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
+├─sdc4 1K
+└─sdc5 1G
+sdd 10G
+sde 10G
+sr0 1024M
+```
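+
+作为补充(非原文内容):`lsblk -f` 选项也可以直接列出每个设备的文件系统类型和 UUID:
+
+```
+# lsblk -f
+```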
+
+### Linux 中如何使用 by-uuid 路径查看磁盘分区或文件系统的 UUID?
+
+该目录下包含许多以 UUID 命名的符号链接,它们指向实际的块设备文件。
+
+```
+# ls -lh /dev/disk/by-uuid/
+total 0
+lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
+lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
+lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
+```
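+
+作为补充(非原文内容):如果想把某个 UUID 反向解析为对应的设备文件,可以使用 `readlink`:
+
+```
+# readlink -f /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
+/dev/sdc1
+```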
+
+### Linux 中如何使用 hwinfo 命令查看磁盘分区或文件系统的 UUID?
+
+[hwinfo][1] 意即硬件信息工具,是另外一种很好的实用工具。它被用来检测系统中已存在的硬件,并且以可读的格式显示各种硬件组件的细节信息。
+
+```
+# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
+/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
+/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
+/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
+```
+
+### Linux 中如何使用 udevadm 命令查看磁盘分区或文件系统的 UUID?
+
+`udevadm` 需要一个命令以及该命令特有的选项。它控制 systemd-udevd 的运行时行为,请求内核事件、管理事件队列并且提供简单的调试机制。
+
+```
+# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
+S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
+```
+
+### Linux 中如何使用 tune2fs 命令查看磁盘分区或文件系统的 UUID?
+
+`tune2fs` 允许系统管理员在 Linux 的 ext2、ext3、ext4 文件系统中调整各种可调的文件系统参数。这些选项的当前值可以使用选项 `-l` 显示。
+
+```
+# tune2fs -l /dev/sdc1 | grep UUID
+Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
+```
+
+### Linux 中如何使用 dumpe2fs 命令查看磁盘分区或文件系统的 UUID?
+
+`dumpe2fs` 可以打印指定设备上的文件系统的超级块和块组信息。
+
+```
+# dumpe2fs /dev/sdc1 | grep UUID
+dumpe2fs 1.43.5 (04-Aug-2017)
+Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[liujing97](https://github.com/liujing97)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
diff --git a/published/20190211 Ubuntu 14.04 is Reaching the End of Life. Here are Your Options.md b/published/20190211 Ubuntu 14.04 is Reaching the End of Life. Here are Your Options.md
new file mode 100644
index 0000000000..d43ad53c67
--- /dev/null
+++ b/published/20190211 Ubuntu 14.04 is Reaching the End of Life. Here are Your Options.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10723-1.html)
+[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
+[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Ubuntu 14.04 即将结束支持,你该怎么办?
+======
+
+Ubuntu 14.04 即将于 2019 年 4 月 30 日结束支持。这意味着在此日期之后 Ubuntu 14.04 用户将无法获得安全和维护更新。
+
+你甚至不会获得已安装应用的更新,并且如果不手动修改 `sources.list`,就无法使用 `apt` 命令或软件中心安装新应用。
+
+Ubuntu 14.04 大约在五年前发布。这是 Ubuntu 长期支持版本(LTS)。
+
+[检查 Ubuntu 版本][1]并查看你是否仍在使用 Ubuntu 14.04。如果是桌面或服务器版,你可能想知道在这种情况下你应该怎么做。
+
+我来帮助你,告诉你在这种情况下有哪些选择。
+
+![][2]
+
+### 升级到 Ubuntu 16.04 LTS(最简单的方式)
+
+如果你可以连接互联网,你可以从 Ubuntu 14.04 升级到 Ubuntu 16.04 LTS。
+
+Ubuntu 16.04 也是一个长期支持版本,它将支持到 2021 年 4 月。这意味着下次升级前你还有两年的时间。
+
+我建议阅读这个[升级 Ubuntu 版本][3]的教程。它最初是为了将 Ubuntu 16.04 升级到 Ubuntu 18.04 而编写的,但这些步骤也适用于你的情况。
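+
+作为参考,命令行下的大致升级步骤如下(这只是补充示意,并非原文内容,具体请以上面链接的教程为准):
+
+```
+$ sudo apt update && sudo apt dist-upgrade
+$ sudo do-release-upgrade
+```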
+
+### 做好备份,全新安装 Ubuntu 18.04 LTS(非常适合桌面用户)
+
+另一个选择是备份你的文档、音乐、图片、下载和其他任何你不想丢失数据的文件夹。
+
+我说的备份指的是将这些文件夹复制到外部 USB 盘。换句话说,你应该有办法将数据复制回计算机,因为你将格式化你的系统。
+
+我建议桌面用户使用此选项。Ubuntu 18.04 是目前的长期支持版本,它将至少在 2023 年 4 月之前得到支持。在你被迫进行下次升级之前,你将有四年的时间。
+
+### 支付扩展安全维护费用并继续使用 Ubuntu 14.04
+
+这适用于企业客户。Canonical 是 Ubuntu 的母公司,它提供了 Ubuntu Advantage 计划,付费客户可以获得电话和电子邮件支持以及其他福利。
+
+Ubuntu Advantage 计划用户还有[扩展安全维护][4](ESM)功能。即使给定版本的生命周期结束后,此计划也会提供安全更新。
+
+这需要付出金钱。服务器用户每个物理节点每年花费 225 美元。对于桌面用户,价格为每年 150 美元。你可以在[此处][5]了解 Ubuntu Advantage 计划的详细定价。
+
+### 还在使用 Ubuntu 14.04 吗?
+
+如果你还在使用 Ubuntu 14.04,那么你应该开始了解这些选择,因为你还有不到一个月的时间。
+
+无论如何,你都不应该在 2019 年 4 月 30 日之后继续使用 Ubuntu 14.04,因为缺乏安全更新会使你的系统容易受到攻击,而无法安装新应用将是一个额外的痛苦。
+
+那么,你会做什么选择?升级到 Ubuntu 16.04 或 18.04 或付费 ESM?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-14-04-end-of-life/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/upgrade-ubuntu-version/
+[4]: https://www.ubuntu.com/esm
+[5]: https://www.ubuntu.com/support/plans-and-pricing
diff --git a/translated/tech/20190311 7 resources for learning to use your Raspberry Pi.md b/published/20190311 7 resources for learning to use your Raspberry Pi.md
similarity index 51%
rename from translated/tech/20190311 7 resources for learning to use your Raspberry Pi.md
rename to published/20190311 7 resources for learning to use your Raspberry Pi.md
index ee0b1451b1..d3f24ba5d2 100644
--- a/translated/tech/20190311 7 resources for learning to use your Raspberry Pi.md
+++ b/published/20190311 7 resources for learning to use your Raspberry Pi.md
@@ -1,41 +1,42 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10707-1.html)
[#]: subject: (7 resources for learning to use your Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/resources-raspberry-pi)
[#]: author: (Manuel Dewald https://opensource.com/users/ntlx)
学习使用树莓派的 7 个资源
======
-缩短树莓派学习曲线的书籍、课程和网站。
+
+> 一些缩短树莓派学习曲线的书籍、课程和网站。
+

-[树莓派][1]是一款小型单板计算机,最初用于教学和学习编程和计算机科学。但如今它有更多用处。它是一种经济、低功耗计算机,人们将它用于各种各样的事情 - 从家庭娱乐到服务器应用,再到物联网 (IoT) 项目。
+[树莓派][1]是一款小型单板计算机,最初用于教学和学习编程和计算机科学。但如今它有更多用处。它是一种经济的低功耗计算机,人们将它用于各种各样的事情 —— 从家庭娱乐到服务器应用,再到物联网(IoT) 项目。
-关于这个主题有很多资源,你可以做很多不同的项目,很难知道从哪里开始。以下是一些资源,可以帮助你开始使用树莓派。愉快地浏览,但不要停留在这里。到处看下,深入下去你就会发现树莓派的新世界。
+关于这个主题有很多资源,你可以做很多不同的项目,却很难知道从哪里开始。以下是一些资源,可以帮助你开始使用树莓派。看看这篇文章,但不要满足于此。到处看下,深入下去你就会发现树莓派的新世界。
### 书籍
-关于树莓派有很多不同语言的书籍。这两本将帮助你开始了解,然后深入了解树莓派。
+关于树莓派有很多不同语言的书籍。这两本书将帮助你开始了解,然后深入了解树莓派。
-#### 由 Simon Monk 编写的 Raspberry Pi Cookbook:软件和硬件问题及解决方案
+#### 由 Simon Monk 编写的《树莓派手边书:软件和硬件问题及解决方案》
-Simon Monk 是一名软件工程师,并且多年来一直是手工业余爱好者。他最初被 Arduino 这块易于使用的开发板所吸引,后来出版了一本关于它的[书][2]。后来,他开始使用树莓派并写了 [Raspberry Pi Cookbook:软件和硬件问题和解决方案][3]这本书。在本书中,你可以找到大量树莓派项目的最佳时间,以及你可能面对的各种挑战的解决方案。
+Simon Monk 是一名软件工程师,并且多年来一直是业余手工爱好者。他最初被 Arduino 这块易于使用的开发板所吸引,后来出版了一本关于它的[书][2]。后来,他开始使用树莓派并写了《[树莓派手边书:软件和硬件问题和解决方案][3]》这本书。在本书中,你可以找到大量树莓派项目的最佳实践,以及你可能面对的各种挑战的解决方案。
-####由 Simon Monk 编写的树莓派编程:从 Python 入门
+#### 由 Simon Monk 编写的《树莓派编程:从 Python 入门》
-Python 已经发展成为开始树莓派项目的首选编程语言,因为它易于学习和使用,即使你没有任何编程经验。此外,它的许多库可以帮助你专注于使你的项目变得特别,而不是实现协议反复地与传感器不断通信。Monk 在 Raspberry Pi Cookbook 中写了两章关于 Python 编程,但[树莓派编程:从 Python 入门][4]是一个更全面的快速入门。它向你介绍了 Python,并向你展示了可以在树莓派上使用它创建的一些项目。
+Python 已经发展成为开始一个树莓派项目的首选编程语言,因为它易于学习和使用,即使你没有任何编程经验。此外,它的许多库可以帮助你专注于使你的项目变得特别,而不是实现协议以与传感器反复通信。Monk 在《树莓派手边书》中写了两章关于 Python 编程,但《[树莓派编程:从 Python 入门][4]》是一个更全面的快速入门。它向你介绍了 Python,并向你展示了可以在树莓派上使用它创建的一些项目。
### 在线课程
新的树莓派用户可以选择许多在线课程和教程,包括这个入门课程。
-#### Raspberry Pi Class
-
-Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓派的全面介绍。它从树莓派和 Linux 操作基础开始,然后进入 Python 编程和 GPIO 通信。如果你是这方面的新手,并希望快速入门,这使它成为一个很好的从上到下的树莓派指南。
+#### 树莓派课程
+Instructables 免费的在线[树莓派课程][5]提供了对树莓派的全面介绍。它从树莓派和 Linux 操作基础开始,然后进入 Python 编程和 GPIO 通信。如果你是这方面的新手,并希望快速入门,这使它成为一个很好的自上而下的树莓派指南。
### 网站
@@ -43,7 +44,7 @@ Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓
#### RaspberryPi.org
-官方的[树莓派][6]网站是最好的入门之一。许多关于特定项目的文章有链接到基础知识的链接,如将 Raspbian 安装到树莓派上。 (这是我倾向的,而不是在每个操作中重复说明。)你还可以找到学生技术[教育][8]方面的[示例项目][7]和课程。
+官方的[树莓派][6]网站是最好的入门之一。有许多关于特定项目的文章会链接到这里的基础知识,如将 Raspbian 安装到树莓派上。(这是我倾向的做法,而不是在每篇文章中重复说明。)你还可以找到学生技术[教育][8]方面的[示例项目][7]和课程。
#### Opensource.com
@@ -51,7 +52,7 @@ Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓
#### Instructables 和 Hackaday
-你想造自己的复古街机么?或者在镜子上显示当天的天气信息、时间和第一事务?你是否想要为派对创建一个文字时钟或者相簿?你可以在 [Instructables][10] 和 [Hackaday][11] 这样的网站上找到如何使用树莓派完成所有这些(以及更多!)的说明。如果你不确定是否要买树莓派,请浏览这些网站,你会发现有很多理由可以购买。
+你想造自己的复古街机么?或者在镜子上显示当天的天气信息、时间和第一事务?你是否想要为派对创建一个文字时钟或者相簿?你可以在 [Instructables][10] 和 [Hackaday][11] 这样的网站上找到如何使用树莓派完成所有这些(以及更多!)的说明。如果你不确定是否要买树莓派,请浏览这些网站,你会发现有很多理由值得购买。
你最喜欢的树莓派资源是什么?请在评论中分享!
@@ -62,7 +63,7 @@ via: https://opensource.com/article/19/3/resources-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20190312 Do advanced math with Mathematica on the Raspberry Pi.md b/published/20190312 Do advanced math with Mathematica on the Raspberry Pi.md
similarity index 69%
rename from translated/tech/20190312 Do advanced math with Mathematica on the Raspberry Pi.md
rename to published/20190312 Do advanced math with Mathematica on the Raspberry Pi.md
index 74d8c6798d..a27ee216f8 100644
--- a/translated/tech/20190312 Do advanced math with Mathematica on the Raspberry Pi.md
+++ b/published/20190312 Do advanced math with Mathematica on the Raspberry Pi.md
@@ -1,19 +1,20 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10711-1.html)
[#]: subject: (Do advanced math with Mathematica on the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/do-math-raspberry-pi)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
-在树莓派上使用 Mathematica 进行高级数学运算
+树莓派使用入门:在树莓派上使用 Mathematica 进行高级数学运算
======
-Wolfram 将一个版本 Mathematica 捆绑到了 Raspbian 中。在我们关于树莓派入门系列的第 12 篇文章中学习如何使用它。
+
+> Wolfram 在 Raspbian 中捆绑了一个版本的 Mathematica。在我们的树莓派入门系列的第 12 篇文章中将学习如何使用它。

-在 90 年代中期,我进入了大学数学专业,即使我以计算机科学学位毕业,第二专业数学我已经上了足够的课程,但还有两门小课没有上。当时,我被介绍了 [Wolfram][2] 中一个名为[Mathematica][1] 的应用,我们可以将黑板上的许多代数和微分方程输入计算机。我每月花几个小时在实验室学习 Wolfram 语言并在 Mathematica 上解决积分等问题。
+在 90 年代中期,我进入了大学数学专业,虽然我是以计算机科学学位毕业的,但是我只差两门课程就能拿到双学位,其中包括数学专业的学位。当时,我接触到了 [Wolfram][2] 的一个名为 [Mathematica][1] 的应用,我们可以将黑板上的许多代数和微分方程输入计算机。我每月花几个小时在实验室学习 Wolfram 语言,并在 Mathematica 上解决积分等问题。
对于大学生来说 Mathematica 是闭源而且昂贵的,因此在差不多 20 年后,看到 Wolfram 将一个版本的 Mathematica 与 Raspbian 和 Raspberry Pi 捆绑在一起是一个惊喜。如果你决定使用另一个基于 Debian 的发行版,你可以从这里[下载][3]。请注意,此版本仅供非商业用途免费使用。
@@ -23,7 +24,7 @@ Wolfram 将一个版本 Mathematica 捆绑到了 Raspbian 中。在我们关于
要深入了解 Mathematica,请查看 [Wolfram 语言文档][5]。如果你只是想解决一些基本的微积分问题,请[查看它的函数][6]部分。如果你想[绘制一些 2D 和 3D 图形][7],请阅读链接的教程。
-或者,如果你想在做数学运算时坚持使用开源工具,请查看命令行工具 **expr**、**factor** 和 **bc**。(记住使用 [**man** 命令][8] 阅读使用帮助)如果想画图,[Gnuplot][9] 是个不错的选择。
+或者,如果你想在做数学运算时坚持使用开源工具,请查看命令行工具 `expr`、`factor` 和 `bc`。(记住使用 [man 命令][8] 阅读使用帮助)如果想画图,[Gnuplot][9] 是个不错的选择。
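+
+下面给出这几个命令行工具的简单用法示意(这是补充的演示,不在原文之中):
+
+```
+$ expr 7 + 5
+12
+$ factor 42
+42: 2 3 7
+$ echo "3.14 * 2" | bc -l
+6.28
+```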
--------------------------------------------------------------------------------
@@ -32,7 +33,7 @@ via: https://opensource.com/article/19/3/do-math-raspberry-pi
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -46,4 +47,4 @@ via: https://opensource.com/article/19/3/do-math-raspberry-pi
[6]: https://reference.wolfram.com/language/guide/Calculus.html
[7]: https://reference.wolfram.com/language/howto/PlotAGraph.html
[8]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
-[9]: http://gnuplot.info/
\ No newline at end of file
+[9]: http://gnuplot.info/
diff --git a/published/20190313 How to contribute to the Raspberry Pi community.md b/published/20190313 How to contribute to the Raspberry Pi community.md
new file mode 100644
index 0000000000..208cb7fc44
--- /dev/null
+++ b/published/20190313 How to contribute to the Raspberry Pi community.md
@@ -0,0 +1,53 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10731-1.html)
+[#]: subject: (How to contribute to the Raspberry Pi community)
+[#]: via: (https://opensource.com/article/19/3/contribute-raspberry-pi-community)
+[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva/users/kepler22b/users/ansilva)
+
+树莓派使用入门:如何为树莓派社区做出贡献
+======
+
+> 在我们的入门系列的第 13 篇文章中,发现参与树莓派社区的方法。
+
+![][1]
+
+这个系列已经逐渐接近尾声,写作它带给了我很多乐趣,我衷心希望它能帮助人们使用树莓派进行教育或娱乐。也许这些文章能说服你买下你的第一个树莓派,或者让你重新发现抽屉里的吃灰设备。如果其中有一条成真,那么我认为这个系列就是成功的。
+
+如果你已经买了一台,并且想宣传这块绿色的小板子用途有多么广泛,这里有几个方法帮你与树莓派社区建立联系:
+
+ * 帮助改进[官方文档][2]
+ * 贡献代码给依赖的[项目][3]
+ * 报告 Raspbian 的 [bug][4]
+ * 报告不同 ARM 架构发行版的 bug
+ * 看一眼英国国内的树莓派基金会的[代码俱乐部][5]或英国境外的[国际代码俱乐部][6],帮助孩子学习编码
+ * 帮助[翻译][7]
+ * 在 [Raspberry Jam][8] 当志愿者
+
+这些只是你可以为树莓派社区做贡献的几种方式。最后但同样重要的是,你可以加入我并[投稿文章][9]到你最喜欢的开源网站 [Opensource.com][10]。 :-)
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/contribute-raspberry-pi-community
+
+作者:[Anderson Silva (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ansilva/users/kepler22b/users/ansilva
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_community.jpg?itok=dcKwb5et
+[2]: https://www.raspberrypi.org/documentation/CONTRIBUTING.md
+[3]: https://www.raspberrypi.org/github/
+[4]: https://www.raspbian.org/RaspbianBugs
+[5]: https://www.codeclub.org.uk/
+[6]: https://www.codeclubworld.org/
+[7]: https://www.raspberrypi.org/translate/
+[8]: https://www.raspberrypi.org/jam/
+[9]: https://opensource.com/participate
+[10]: http://Opensource.com
diff --git a/published/20190314 14 days of celebrating the Raspberry Pi.md b/published/20190314 14 days of celebrating the Raspberry Pi.md
new file mode 100644
index 0000000000..697781da3e
--- /dev/null
+++ b/published/20190314 14 days of celebrating the Raspberry Pi.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10734-1.html)
+[#]: subject: (14 days of celebrating the Raspberry Pi)
+[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
+[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
+
+树莓派使用入门:庆祝树莓派的 14 天
+======
+
+> 在我们关于树莓派入门系列的第 14 篇也是最后一篇文章中,回顾一下我们学到的所有东西。
+
+![][1]
+
+### 派节快乐!
+
+每年的 3 月 14 日,我们这些极客都会庆祝派节。我们按 `MM/DD` 的方式缩写日期,3 月 14 日于是写成 03/14,这在数字上提醒我们 3.14,也就是 [π][2] 的前三位数字。许多美国人没有意识到的是,世界上几乎没有其他国家使用这种[日期格式][3],因此派节几乎只在美国才说得通,尽管它在全球范围内都有人庆祝。
+
+无论你身在何处,让我们一起庆祝树莓派,并通过回顾过去两周我们所涉及的主题来结束本系列:
+
+* 第 1 天:[你应该选择哪种树莓派?][4]
+* 第 2 天:[如何购买树莓派][5]
+* 第 3 天:[如何启动一个新的树莓派][6]
+* 第 4 天:[用树莓派学习 Linux][7]
+* 第 5 天:[教孩子们用树莓派学编程的 5 种方法][8]
+* 第 6 天:[可以使用树莓派学习的 3 种流行编程语言][9]
+* 第 7 天:[如何更新树莓派][10]
+* 第 8 天:[如何使用树莓派来娱乐][11]
+* 第 9 天:[树莓派上的模拟器和原生 Linux 游戏][12]
+* 第 10 天:[进入物理世界 —— 如何使用树莓派的 GPIO 针脚][13]
+* 第 11 天:[通过树莓派和 kali Linux 学习计算机安全][14]
+* 第 12 天:[在树莓派上使用 Mathematica 进行高级数学运算][15]
+* 第 13 天:[如何为树莓派社区做出贡献][16]
+
+![Pi Day illustration][18]
+
+我将结束本系列,感谢所有关注的人,尤其是那些在过去 14 天里从中学到了东西的人!我还想鼓励大家不断扩展他们对树莓派以及围绕它构建的所有开源(和闭源)技术的了解。
+
+我还鼓励你去了解其他文化、哲学、宗教和世界观。使我们成为人类的,正是这种惊人的(有时是有趣的)能力:我们不仅能适应外部环境,还能适应智识环境。
+
+不管你做什么,保持学习!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/happy-pi-day
+
+作者:[Anderson Silva (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ansilva
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
+[2]: https://www.piday.org/million/
+[3]: https://en.wikipedia.org/wiki/Date_format_by_country
+[4]: https://linux.cn/article-10611-1.html
+[5]: https://linux.cn/article-10615-1.html
+[6]: https://linux.cn/article-10644-1.html
+[7]: https://linux.cn/article-10645-1.html
+[8]: https://linux.cn/article-10653-1.html
+[9]: https://linux.cn/article-10661-1.html
+[10]: https://linux.cn/article-10665-1.html
+[11]: https://linux.cn/article-10669-1.html
+[12]: https://linux.cn/article-10682-1.html
+[13]: https://linux.cn/article-10687-1.html
+[14]: https://linux.cn/article-10690-1.html
+[15]: https://linux.cn/article-10711-1.html
+[16]: https://linux.cn/article-10731-1.html
+[17]: /file/426561
+[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)
diff --git a/published/20190314 A Look Back at the History of Firefox.md b/published/20190314 A Look Back at the History of Firefox.md
new file mode 100644
index 0000000000..ac9341d9a0
--- /dev/null
+++ b/published/20190314 A Look Back at the History of Firefox.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Moelf)
+[#]: reviewer: (acyanbird, wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10714-1.html)
+[#]: subject: (A Look Back at the History of Firefox)
+[#]: via: (https://itsfoss.com/history-of-firefox)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+回顾 Firefox 历史
+======
+
+从很久之前开始,火狐浏览器就一直是开源社区的一根顶梁柱。这些年来它几乎是所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底称霸浏览器界的最后一块磐石。这款浏览器的起源可以一直回溯到互联网创生的时代。本周(LCTT 译注:此文发布于 2019.3.14)是万维网诞生 30 周年的纪念日,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
+
+### 发源
+
+在上世纪 90 年代早期,一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学攻读计算机科学学士学位。在那里,他开始为[国家超算应用中心(NCSA)][2]工作。就在这段时间内,[蒂姆·伯纳斯·李][3] 爵士发布了今天已经为我们所熟知的 Web 的早期标准。Marc 在那时候[了解][4]到了一款叫 [ViolaWWW][5] 的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的基于 Unix 平台的浏览器,并取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用,Mosaic 很快变得相当流行。
+
+1994 年,Marc 毕业并移居到加州。一个叫 Jim Clark 的人结识了他,Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且看到了互联网的经济前景。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学并不喜欢他们用 [Mosaic 这个名字][7]。所以公司转而改名为 “网景通讯”。
+
+该公司的第一个项目是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部,浏览器的开发代号就是 mozilla,意即 “Mosaic 杀手”。一位员工还创作了一幅[哥斯拉风格的][8]卡通画。他们当时想在竞争中彻底胜出。
+
+![Early Firefox Mascot][9]
+
+*早期 Mozilla 在 Netscape 的吉祥物*
+
+他们取得了辉煌的胜利。那时,Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 将其宣传为给所有人平等的互联网体验。
+
+随着越来越多的人使用 Netscape Navigator,NCSA Mosaic 的市场份额逐步下降。到了 1995 年,Netscape 公开上市了。[上市首日][10],股价从开盘的 $28,直窜到 $78,收盘于 $58。Netscape 那时所向披靡。
+
+但好景不长。1995 年夏天,微软发布了 Internet Explorer 1.0,这款浏览器基于 Spyglass Mosaic,而后者又直接基于 NCSA Mosaic。[浏览器战争][11] 就此展开。
+
+在接下来的几年里,Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是,IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在 1997 年年底,Netscape 公司开始遇到财务问题。
+
+### 迈向开源
+
+![Mozilla Firefox][12]
+
+1998 年 1 月,Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] “集合互联网成千上万的程序员的才智,把最好的功能加入 Netscape 的软件。这一策略旨在加速开发,并且让 Netscape 在未来能向个人和商业用户免费提供高质量的 Netscape Communicator 版本”。
+
+这个项目由新创立的 Mozilla 机构管理。然而,Netscape Communicator 4.0 的代码由于大小和复杂程度而很难开发。雪上加霜的是,浏览器的一些组件由于第三方的许可证问题而不能被开源。到头来,他们决定用新兴的 [Gecko][14] 渲染引擎重新开发浏览器。
+
+到了 1998 年的 11 月,Netscape 被美国在线(AOL)以[价值 42 亿美元的股权][15]收购。
+
+从头来过是一项艰巨的任务。Mozilla Firefox(最初名为 Phoenix)直到 2002 年 6 月才面世,它同样可以运行在多种操作系统上:Linux、Mac OS、Windows 和 Solaris。
+
+2003 年,AOL 宣布他们将停止浏览器开发。随后创建了 Mozilla 基金会,用于管理 Mozilla 的商标和项目相关的融资事宜。最早 Mozilla 基金会从 AOL、IBM、Sun Microsystems 和红帽(Red Hat)收到了总计 200 万美金的捐赠。
+
+到了 2003 年 3 月,因为套件越来越臃肿,Mozilla [宣布][16] 计划把该套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird(火鸟) —— 结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox(火狐)。
+
+那时,[Mozilla 说][17]:“我们在过去一年里学到了很多关于起名的技巧(不是因为我们愿意才学的)。我们现在很小心地研究了名字,确保不会再有什么幺蛾子了。我们已经开始向美国专利商标局注册我们的新商标。”
+
+![Mozilla Firefox 1.0][18]
+
+*Firefox 1.0 : [图片致谢][19]*
+
+第一个正式的 Firefox 版本是 [0.8][20],发布于 2004 年 2 月 8 日。紧接着 11 月 9 日他们发布了 1.0 版本。2.0 和 3.0 版本分别在 06 年 10 月 和 08 年 6 月问世。每个大版本更新都带来了很多新的特性和提升。从很多角度上讲,Firefox 都领先 IE 不少,无论是功能还是技术先进性,即便如此 IE 还是有更多用户。
+
+一切都在 Google 发布 Chrome 浏览器的时候改变了。在 Chrome 发布(2008 年 9 月)的前几个月,Firefox 占有 30% 的[浏览器份额][21],而 IE 有超过 60%。而在 StatCounter 的 [2019 年 1 月][22]报告里,Firefox 的份额不到 10%,而 Chrome 超过了 70%。
+
+> 趣味知识点
+>
+> 和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 [小熊猫][23]。在中文里,“火狐狸”是小熊猫的另一个名字。
+
+### 展望未来
+
+如上文所说的一样,Firefox 正在经历很长一段以来的份额低谷。曾经有那么一段时间,有很多浏览器都基于 Firefox 开发,比如早期的 [Flock 浏览器][24]。而现在大多数浏览器都基于谷歌的技术了,比如 Opera 和 Vivaldi。甚至连微软都放弃开发自己的浏览器而转而[加入 Chromium 帮派][25]。
+
+这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经取得的许多成就。一群来自世界各地的程序员,就这么开发出了这个星球上第二大份额的浏览器。他们在微软垄断如日中天的时候还占据着 30% 的份额,他们可以再次做到这一点。无论如何,他们背后都有我们。开源社区坚定地站在他们身后。
+
+抗争垄断是我使用 Firefox [的众多原因之一][26]。随着 Mozilla 在改头换面的 [Firefox Quantum][27] 上赢回了一些份额,我相信它将一路向上攀爬。
+
+你还想了解 Linux 和开源历史上的什么其他事件?欢迎在评论区告诉我们。
+
+如果你觉得这篇文章不错,请在社交媒体上分享!比如 Hacker News 或者 [Reddit][28]。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/history-of-firefox
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[Moelf](https://github.com/Moelf)
+校对:[acyanbird](https://github.com/acyanbird), [wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
+[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
+[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
+[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
+[5]: http://viola.org/
+[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
+[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
+[8]: http://www.davetitus.com/mozilla/
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
+[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
+[11]: https://en.wikipedia.org/wiki/Browser_wars
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
+[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
+[14]: https://en.wikipedia.org/wiki/Gecko_(software)
+[15]: http://news.cnet.com/2100-1023-218360.html
+[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
+[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
+[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
+[20]: https://en.wikipedia.org/wiki/Firefox_version_history
+[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
+[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
+[23]: https://en.wikipedia.org/wiki/Red_panda
+[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
+[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
+[26]: https://itsfoss.com/why-firefox/
+[27]: https://itsfoss.com/firefox-quantum-ubuntu/
+[28]: http://reddit.com/r/linuxusersgroup
+[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1
diff --git a/published/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md b/published/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md
new file mode 100644
index 0000000000..55f899a321
--- /dev/null
+++ b/published/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10732-1.html)
+[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
+[#]: via: (https://opensource.com/article/19/3/tool-find-home)
+[#]: author: (Jeff Macharyas (Community Moderator) )
+
+Sweet Home 3D:一个帮助你决定梦想之家的开源工具
+======
+
+> 室内设计应用可以轻松渲染你喜欢的房子,不管是真实的或是想象的。
+
+![Houses in a row][1]
+
+我最近接受了一份在弗吉尼亚州的新工作。由于我妻子仍需在纽约工作,并照看我们在纽约的房子直到卖掉,所以由我负责为我们和我们的猫找一所新房子,而且在我们搬进去之前她都看不到新房子。
+
+我和一个房地产经纪人签约,看了几间房子,拍了许多照片,写下了潦草的笔记。晚上,我会将照片上传到 Google Drive 文件夹中,我和妻子会通过手机同时查看这些照片,同时我还得努力回忆房间是在右边还是左边、是否有吊扇等等。
+
+由于这是一个相当繁琐且不太准确的展示我的发现的方式,我因此去寻找一个开源解决方案,以更好地展示我们未来的梦想之家将会是什么样的,而不会取决于我的模糊记忆和模糊的照片。
+
+[Sweet Home 3D][2] 完全满足了我的要求。Sweet Home 3D 可在 Sourceforge 上获取,并在 GNU 通用公共许可证下发布。它的[网站][3]信息非常丰富,我能够立即启动并运行。Sweet Home 3D 由总部位于巴黎的 eTeks 的 Emmanuel Puybaret 开发。
+
+### 绘制内墙
+
+我将 Sweet Home 3D 下载到我的 MacBook Pro 上,并添加了 PNG 版本的平面楼层图,用作背景底图。
+
+在这张底图上,使用 Rooms 面板描出房间轮廓并设置“真实房间”尺寸是一件很简单的事情。画好房间后,我添加了墙壁,并可以定制它们的颜色、厚度、高度等。
+
+![Sweet Home 3D floorplan][5]
+
+现在我画完了“内墙”,我从网站下载了各种“家具”,其中既包括真正的家具,也包括门、窗、架子等。每个项目都以 ZIP 文件的形式下载,因此我创建了一个文件夹存放所有解压后的文件。我可以自定义每件家具;像门这样重复出现的物品,则可以方便地复制粘贴到需要的地方。
+
+在我将所有墙壁和门窗都布置完后,我就使用这个应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整,直到接近房屋的样子。我本可以花更多时间添加纹理、附属家具和物品,但这已经达到了我需要的程度。
+
+![Sweet Home 3D floorplan][7]
+
+完成之后,我将该项目导出为 OBJ 文件,它可以在各种程序中打开,例如 [Blender][8] 和 Mac 上的“预览”,方便旋转房屋并从各个角度查看。视频功能最为有用:我可以设定一个起点,在房子中绘制一条路径,并录制这段“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
+
+我的妻子(几乎)能看到我所看到的一切,我们甚至可以在搬家前就开始布置家具。现在,我所要做的就是把行李装上卡车搬到新家。
+
+Sweet Home 3D 在我的新工作中也是有用的。我正在寻找一种方法来改善学院建筑的地图,并计划在 [Inkscape][9] 或 Illustrator 或其他软件中重新绘制它。但是,由于我有平面地图,我可以使用 Sweet Home 3D 创建平面图的 3D 版本并将其上传到我们的网站以便更方便地找到地方。
+
+### 开源犯罪现场?
+
+一件有趣的事:根据 [Sweet Home 3D 的博客][10],“法国法医办公室(科学警察)最近选择 Sweet Home 3D 作为绘制表现路线和犯罪现场的规划图的工具。这是法国政府建议优先考虑自由开源解决方案的一个具体应用。”
+
+这再次证明了公民和政府可以如何利用开源解决方案来创建个人项目、侦破犯罪和建设世界。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/tool-find-home
+
+作者:[Jeff Macharyas (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jeffmacharyas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
+[2]: https://sourceforge.net/projects/sweethome3d/
+[3]: http://www.sweethome3d.com/
+[4]: /file/426441
+[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
+[6]: /file/426451
+[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
+[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
+[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
+[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html
diff --git a/published/20190317 How To Configure sudo Access In Linux.md b/published/20190317 How To Configure sudo Access In Linux.md
new file mode 100644
index 0000000000..efbd663b44
--- /dev/null
+++ b/published/20190317 How To Configure sudo Access In Linux.md
@@ -0,0 +1,296 @@
+[#]: collector: (lujun9972)
+[#]: translator: (liujing97)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10746-1.html)
+[#]: subject: (How To Configure sudo Access In Linux?)
+[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+如何在 Linux 中配置 sudo 访问权限
+======
+
+在 Linux 系统中,root 用户拥有全部的控制权力,是权力最高的用户,可以在系统中执行任何操作。
+
+但是,不能因为其他用户偶尔需要执行一些管理操作,就为所有人都提供 root 访问权限,因为一旦他或她执行了错误的操作,将没有办法去纠正它。
+
+为了解决这个问题,有什么方案吗?
+
+我们可以把 sudo 权限发放给相应的用户来克服这种情况。
+
+`sudo` 命令提供了一种机制,它可以在不用分享 root 用户的密码的前提下,为信任的用户提供系统的管理权限。
+
+他们可以执行大部分的管理操作,但又不像 root 一样有全部的权限。
+
+### 什么是 sudo?
+
+`sudo` 是一个程序,普通用户可以使用它以超级用户或其他用户的身份执行命令(具体由安全策略指定)。
+
+sudo 用户的访问权限是由 `/etc/sudoers` 文件控制的。
+
+### sudo 用户有什么优点?
+
+在 Linux 系统中,如果你不熟悉一个命令,`sudo` 是运行它的一个安全方式。
+
+* Linux 系统在 `/var/log/secure` 和 `/var/log/auth.log` 文件中保留日志,你可以在其中核查 sudo 用户执行了哪些操作。
+* 每一次它都为当前的操作提示输入密码。所以,你将会有时间去验证这个操作是不是你想要执行的。如果你发觉它是不正确的行为,你可以安全地退出而且没有执行此操作。
+
+基于 RHEL 的系统(如 Redhat (RHEL)、 CentOS 和 Oracle Enterprise Linux (OEL))和基于 Debian 的系统(如 Debian、Ubuntu 和 LinuxMint)在这点是不一样的。
+
+我们将会在本文中教你如何在这两类发行版中执行该操作。
+
+这里有三种方法,在这两类发行版中都适用。
+
+* 增加用户到相应的组。基于 RHEL 的系统,我们需要添加用户到 `wheel` 组。基于 Debian 的系统,我们添加用户到 `sudo` 或 `admin` 组。
+* 手动添加用户到 `/etc/group` 文件中。
+* 用 `visudo` 命令添加用户到 `/etc/sudoers` 文件中。
+
+### 如何在 RHEL/CentOS/OEL 系统中配置 sudo 访问权限?
+
+在基于 RHEL 的系统中(如 Redhat (RHEL)、 CentOS 和 Oracle Enterprise Linux (OEL)),使用下面的三个方法就可以做到。
+
+#### 方法 1:在 Linux 中如何使用 wheel 组为普通用户授予超级用户访问权限?
+
+wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
+
+注意,应该在 `/etc/sudoers` 文件中激活 `wheel` 组来获得该访问权限。
+
+```
+# grep -i wheel /etc/sudoers
+
+## Allows people in group wheel to run all commands
+%wheel ALL=(ALL) ALL
+# %wheel ALL=(ALL) NOPASSWD: ALL
+```
+
+假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `daygeek` 这个用户账号。
+
+执行下面的命令,添加用户到 `wheel` 组。
+
+```
+# usermod -aG wheel daygeek
+```
+
+我们可以通过下面的命令来确定这一点。
+
+```
+# getent group wheel
+wheel:x:10:daygeek
+```
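+
+作为补充(非原文步骤):你也可以用 `id` 命令确认该用户所属的组,输出类似:
+
+```
+# id daygeek
+uid=1000(daygeek) gid=1000(daygeek) groups=1000(daygeek),10(wheel)
+```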
+
+我将要检测用户 `daygeek` 是否可以访问属于 root 用户的文件。
+
+```
+$ tail -5 /var/log/secure
+tail: cannot open /var/log/secure for reading: Permission denied
+```
+
+当我试图以普通用户身份访问 `/var/log/secure` 文件时出现了错误。我将使用 `sudo` 访问同一个文件,让我们看看这个魔术。
+
+```
+$ sudo tail -5 /var/log/secure
+[sudo] password for daygeek:
+Mar 17 07:01:56 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
+Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
+Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session closed for user root
+Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
+Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
+```
+
+#### 方法 2:在 RHEL/CentOS/OEL 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
+
+我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `wheel` 组。
+
+只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
+
+```
+$ grep -i wheel /etc/group
+wheel:x:10:daygeek,user1
+```
+
+在该例中,我将使用 `user1` 这个用户账号。
+
+我将要通过在系统中重启 Apache httpd 服务来检查用户 `user1` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+
+```
+$ sudo systemctl restart httpd
+[sudo] password for user1:
+
+$ sudo grep -i user1 /var/log/secure
+[sudo] password for user1:
+Mar 17 07:09:47 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
+Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
+Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
+```
+
+#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
+
+sudo 用户的访问权限是由 `/etc/sudoers` 文件控制的,因此我们也可以直接在该文件中为用户授权。
+
+只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
+
+```
+# grep -i user2 /etc/sudoers
+user2 ALL=(ALL) ALL
+```
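+
+补充提示(非原文步骤):不要用普通编辑器直接修改 `/etc/sudoers`,而应运行 `visudo`,它会在保存时检查语法,避免错误配置导致 sudo 无法使用:
+
+```
+# visudo
+```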
+
+在该例中,我将使用 `user2` 这个用户账号。
+
+我将要通过在系统中重启 MariaDB 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+
+```
+$ sudo systemctl restart mariadb
+[sudo] password for user2:
+
+$ sudo grep -i mariadb /var/log/secure
+[sudo] password for user2:
+Mar 17 07:23:10 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
+Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/grep -i mariadb /var/log/secure
+```
+
+### 在 Debian/Ubuntu 系统中如何配置 sudo 访问权限?
+
+在基于 Debian 的系统中(如 Debian、Ubuntu 和 LinuxMint),使用下面的三个方法就可以做到。
+
+#### 方法 1:在 Linux 中如何使用 sudo 或 admin 组为普通用户授予超级用户访问权限?
+
+`sudo` 或 `admin` 是基于 Debian 的系统中的特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
+
+注意,应该在 `/etc/sudoers` 文件中激活 `sudo` 或 `admin` 组来获得该访问权限。
+
+```
+# grep -i 'sudo\|admin' /etc/sudoers
+
+# Members of the admin group may gain root privileges
+%admin ALL=(ALL) ALL
+
+# Allow members of group sudo to execute any command
+%sudo ALL=(ALL:ALL) ALL
+```
+
+假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `2gadmin` 这个用户账号。
+
+执行下面的命令,添加用户到 `sudo` 组。
+
+```
+# usermod -aG sudo 2gadmin
+```
+
+我们可以通过下面的命令来确定这一点。
+
+```
+# getent group sudo
+sudo:x:27:2gadmin
+```
+
+我将要检测用户 `2gadmin` 是否可以访问属于 root 用户的文件。
+
+```
+$ less /var/log/auth.log
+/var/log/auth.log: Permission denied
+```
+
+当我试图以普通用户身份访问 `/var/log/auth.log` 文件时出现了错误。我将使用 `sudo` 访问同一个文件,让我们看看这个魔术。
+
+```
+$ sudo tail -5 /var/log/auth.log
+[sudo] password for 2gadmin:
+Mar 17 20:39:47 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/bin/bash
+Mar 17 20:39:47 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
+Mar 17 20:40:23 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
+Mar 17 20:40:48 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/usr/bin/tail -5 /var/log/auth.log
+Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
+```
+
+或者,我们可以通过添加用户到 `admin` 组来执行相同的操作。
+
+运行下面的命令,添加用户到 `admin` 组。
+
+```
+# usermod -aG admin user1
+```
+
+我们可以通过下面的命令来确定这一点。
+
+```
+# getent group admin
+admin:x:1011:user1
+```
+
+让我们看看输出信息。
+
+```
+$ sudo tail -2 /var/log/auth.log
+[sudo] password for user1:
+Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/usr/bin/tail -2 /var/log/auth.log
+Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
+```
+
+#### 方法 2:在 Debian/Ubuntu 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
+
+我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `sudo` 组或 `admin` 组。
+
+只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
+
+```
+$ grep -i sudo /etc/group
+sudo:x:27:2gadmin,user2
+```
+
+在该例中,我将使用 `user2` 这个用户账号。
+
+我将要通过在系统中重启 Apache httpd 服务来检查用户 `user2` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
+
+```
+$ sudo systemctl restart apache2
+[sudo] password for user2:
+
+$ sudo tail -f /var/log/auth.log
+[sudo] password for user2:
+Mar 17 21:01:04 Ubuntu18 systemd-logind[559]: New session 22 of user user2.
+Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened for user user2 by (uid=0)
+Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
+```
+
+#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
+
+sudo 用户的访问权限是由 `/etc/sudoers` 文件控制的,因此我们也可以直接在该文件中为用户授权。
+
+只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
+
+```
+# grep -i user3 /etc/sudoers
+user3 ALL=(ALL:ALL) ALL
+```
+
+在该例中,我将使用 `user3` 这个用户账号。
+
+我将要通过在系统中重启 MariaDB 服务来检查用户 `user3` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
+
+```
+$ sudo systemctl restart mariadb
+[sudo] password for user3:
+
+$ sudo tail -f /var/log/auth.log
+[sudo] password for user3:
+Mar 17 21:12:32 Ubuntu18 systemd-logind[559]: New session 24 of user user3.
+Mar 17 21:12:49 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
+Mar 17 21:12:49 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
+Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
+Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
+Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[liujing97](https://github.com/liujing97)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
diff --git a/translated/tech/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md b/published/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md
similarity index 79%
rename from translated/tech/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md
rename to published/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md
index 73ab10c939..6136d80dd4 100644
--- a/translated/tech/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md
+++ b/published/20190320 Quickly Go Back To A Specific Parent Directory Using bd Command In Linux.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10695-1.html)
[#]: subject: (Quickly Go Back To A Specific Parent Directory Using bd Command In Linux)
[#]: via: (https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@@ -10,45 +10,43 @@
在 Linux 中使用 bd 命令快速返回到特定的父目录
======
-
-
-两天前我们写了一篇关于 `autocd` 的文章,它是一个内置的 `shell` 变量,可以帮助我们在**[没有 `cd` 命令的情况下导航到目录中][1]**.
+两天前我们写了一篇关于 `autocd` 的文章,它是一个内置的 shell 变量,可以帮助我们在[没有 cd 命令的情况下导航到目录中][1]。
如果你想回到上一级目录,那么你需要输入 `cd ..`。
如果你想回到上两级目录,那么你需要输入 `cd ../..`。
-这在 Linux 中是正常的,但如果你想从第九个目录回到第三个目录,那么使用 cd 命令是很糟糕的。
+这在 Linux 中是正常的,但如果你想从第九级目录回到第三级目录,那么使用 `cd` 命令是很糟糕的。
有什么解决方案呢?
-是的,在 Linux 中有一个解决方案。我们可以使用 bd 命令来轻松应对这种情况。
+是的,在 Linux 中有一个解决方案。我们可以使用 `bd` 命令来轻松应对这种情况。
### 什么是 bd 命令?
-bd 命令允许用户快速返回 Linux 中的父目录,而不是反复输入 `cd ../../..`。
+`bd` 命令允许用户快速返回 Linux 中的父目录,而不是反复输入 `cd ../../..`。
-你可以列出给定目录的内容,而不用提供完整路径 `ls `bd Directory_Name``。它支持以下其它命令,如 ls、ln、echo、zip、tar 等。
+你可以列出给定目录的内容,而不用提供完整路径:`` ls `bd Directory_Name` ``。它支持以下其它命令,如 `ls`、`ln`、`echo`、`zip`、`tar` 等。
-另外,它还允许我们执行 shell 文件而不用提供完整路径 `bd p`/shell_file.sh``。
+另外,它还允许我们执行 shell 文件而不用提供完整路径,如 `` `bd p`/shell_file.sh ``。
### 如何在 Linux 中安装 bd 命令?
-除了 Debian/Ubuntu 之外,bd 没有官方发行包。因此,我们需要手动执行方法。
+除了 Debian/Ubuntu 之外,`bd` 没有官方发行包。因此,我们需要手动安装它。
-对于 **`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][2]**或**[APT 命令][3]**来安装 bd。
+对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][2]或[APT 命令][3]来安装 `bd`。
```
$ sudo apt install bd
```
-对于其它 Linux 发行版,使用 **[wget 命令][4]**下载 bd 可执行二进制文件。
+对于其它 Linux 发行版,使用 [wget 命令][4]下载 `bd` 可执行二进制文件。
```
$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
```
-设置 bd 二进制文件的可执行权限。
+设置 `bd` 二进制文件的可执行权限。
```
$ sudo chmod +rx /usr/local/bin/bd
@@ -61,17 +59,19 @@ $ echo 'alias bd=". bd -si"' >> ~/.bashrc
```
运行以下命令以使更改生效。
+
```
$ source ~/.bashrc
```
要启用自动完成,执行以下两个步骤。
+
```
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ sudo source /etc/bash_completion.d/bd
```
-我们已经在系统上成功安装并配置了 bd 实用程序,现在是时候测试一下了。
+我们已经在系统上成功安装并配置了 `bd` 实用程序,现在是时候测试一下了。
我将使用下面的目录路径进行测试。
@@ -79,7 +79,7 @@ $ sudo source /etc/bash_completion.d/bd
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ pwd
-或者
+或
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ dirs
/usr/share/icons/Adwaita/256x256/apps
@@ -94,19 +94,20 @@ daygeek@Ubuntu18:/usr/share/icons$
```
甚至,你不需要输入完整的目录名称,也可以输入几个字母。
+
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ bd i
/usr/share/icons/
daygeek@Ubuntu18:/usr/share/icons$
```
-`注意:` 如果层次结构中有多个同名的目录,bd 会将你带到最近的目录。(不考虑直接的父目录)
+注意:如果层次结构中有多个同名的目录,`bd` 会将你带到最近的目录。(不考虑直接的父目录)
如果要列出给定的目录内容,使用以下格式。它会打印出 `/usr/share/icons/` 的内容。
```
$ ls -lh `bd icons`
-or
+或
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ ls -lh `bd i`
total 64K
drwxr-xr-x 12 root root 4.0K Jul 25 2018 Adwaita
@@ -132,7 +133,7 @@ drwxr-xr-x 3 root root 4.0K Jul 25 2018 whiteglass
```
$ `bd i`/users-list.sh
-or
+或
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ `bd icon`/users-list.sh
daygeek
thanu
@@ -151,7 +152,7 @@ user3
```
$ cd `bd i`/gnome
-or
+或
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ cd `bd icon`/gnome
daygeek@Ubuntu18:/usr/share/icons/gnome$
```
@@ -167,7 +168,7 @@ drwxr-xr-x 2 root root 4096 Mar 16 05:44 /usr/share/icons//2g
本教程允许你快速返回到特定的父目录,但没有快速前进的选项。
-我们有另一个解决方案,很快就会提出新的解决方案,请跟我们保持联系。
+我们有另一个解决方案,很快就会提出,请保持关注。
--------------------------------------------------------------------------------
@@ -176,7 +177,7 @@ via: https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20190326 Using Square Brackets in Bash- Part 1.md b/published/20190326 Using Square Brackets in Bash- Part 1.md
new file mode 100644
index 0000000000..d58ee923bf
--- /dev/null
+++ b/published/20190326 Using Square Brackets in Bash- Part 1.md
@@ -0,0 +1,144 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10717-1.html)
+[#]: subject: (Using Square Brackets in Bash: Part 1)
+[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+在 Bash 中使用[方括号] (一)
+======
+
+![square brackets][1]
+
+> 这篇文章将要介绍方括号及其在命令行中的不同用法。
+
+看完[花括号在命令行中的用法][3]之后,现在我们继续来看方括号(`[]`)在上下文中是如何发挥作用的。
+
+### 通配
+
+方括号最简单的用法就是通配。你可能在知道“通配”这个概念之前就已经通过通配来匹配内容了,列出具有相同特征的多个文件就是一个很常见的场景,例如列出所有 JPEG 文件:
+
+```
+ls *.jpg
+```
+
+使用通配符来得到符合某个模式的所有内容,这个过程就叫通配。
+
+在上面的例子当中,星号(`*`)就代表“0 个或多个字符”。除此以外,还有代表“有且仅有一个字符”的问号(`?`)。因此
+
+```
+ls d*k*
+```
+
+可以列出 `darkly` 和 `ducky`,而且 `dark` 和 `duck` 也是可以被列出的,因为 `*` 可以匹配 0 个字符。而
+
+```
+ls d*k?
+```
+
+则只能列出 `ducky`,不会列出 `darkly`、`dark` 和 `duck`。
+
+方括号也可以用于通配。为了便于演示,可以创建一个用于测试的目录,并在这个目录下创建文件:
+
+```
+touch file0{0..9}{0..9}
+```
+
+(如果你还不清楚上面这个命令的原理,可以看一下[另一篇介绍花括号的文章][3])
+
+执行上面这个命令之后,就会创建 `file000`、`file001`、……、`file099` 这 100 个文件。
+
+如果要列出这些文件当中第二位数字是 7 或 8 的文件,可以执行:
+
+```
+ls file0[78]?
+```
+
+如果要列出 `file022`、`file027`、`file028`、`file052`、`file057`、`file058`、`file092`、`file097`、`file098`,可以执行:
+
+```
+ls file0[259][278]
+```
+
+当然,不仅仅是 `ls`,很多其它的命令行工具都可以使用方括号来进行通配操作。但在删除文件、移动文件、复制文件的过程中使用通配,你需要有一点横向思维。
+
+例如将 `file010` 到 `file029` 这 20 个文件复制成 `archive010` 到 `archive029` 这 20 个副本,不可以这样执行:
+
+```
+cp file0[12]? archive0[12]?
+```
+
+因为通配只能针对已有的文件,而 `archive` 开头的文件并不存在,不能进行通配。
+
+而这条命令
+
+```
+cp file0[12]? archive0[1..2][0..9]
+```
+
+也同样不行,因为 `cp` 并不允许将多个文件复制到多个文件。在复制多个文件的情况下,只能将多个文件复制到一个指定的目录下:
+
+```
+mkdir archive
+cp file0[12]? archive
+```
+
+这条命令是可以正常运行的,但它只会把这 20 个文件以原来的名称复制到 `archive/` 目录下,而这并不是我们想要的效果。
+
+如果你阅读过我[关于花括号的文章][3],你大概会记得可以使用 `%` 来截掉字符串的末尾部分,而使用 `#` 则可以截掉字符串的开头部分。
+
+例如:
+
+```
+myvar="Hello World"
+echo Goodbye Cruel ${myvar#Hello}
+```
+
+就会输出 `Goodbye Cruel World`,因为 `#Hello` 将 `myvar` 变量中开头的 `Hello` 去掉了。
+
+在通配的过程中,也可以使用这一个技巧。
+
+```
+for i in file0[12]?; \
+do \
+cp $i archive${i#file}; \
+done
+```
+
+上面的第一行命令告诉 Bash 需要对所有以 `file01` 或者 `file02` 开头、且后面只跟一个任意字符的文件进行操作,第二行的 `do` 和第四行的 `done` 代表需要对这些文件都执行这一块中的命令。
+
+第三行就是实际的复制操作了,这里使用了两次 `$i` 变量:第一次在 `cp` 命令中直接作为源文件的文件名使用,第二次则是截掉文件名开头的 `file` 部分,然后在开头补上一个 `archive`,也就是这样:
+
+```
+"archive" + "file019" - "file" = "archive019"
+```
+
+最终整个 `cp` 命令展开为:
+
+```
+cp file019 archive019
+```
+
+最后,顺带说明一下反斜杠 `\` 的作用是将一条长命令拆分成多行,这样可以方便阅读。
+
+在下一节,我们会了解方括号的更多用法,敬请关注。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd "square brackets"
+[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://linux.cn/article-10624-1.html
+
diff --git a/published/20190327 Setting kernel command line arguments with Fedora 30.md b/published/20190327 Setting kernel command line arguments with Fedora 30.md
new file mode 100644
index 0000000000..3521176e6b
--- /dev/null
+++ b/published/20190327 Setting kernel command line arguments with Fedora 30.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10715-1.html)
+[#]: subject: (Setting kernel command line arguments with Fedora 30)
+[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
+[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
+
+如何在 Fedora 30 中设置内核命令行参数
+======
+
+![][1]
+
+在调试或试验内核时,向内核命令行添加选项是一项常见任务。即将发布的 Fedora 30 版本改为使用 Bootloader 规范([BLS][2])。根据你修改内核命令行选项的方式,你的工作流可能会更改。继续阅读获取更多信息。
+
+要确定你的系统是使用 BLS 还是旧的规范,请查看文件:
+
+```
+/etc/default/grub
+```
+
+如果你看到:
+
+```
+GRUB_ENABLE_BLSCFG=true
+```
+
+那么你运行的就是 BLS,你可能需要更改设置内核命令行参数的方式。
+
+如果你只想修改单个内核条目(例如,暂时解决显示问题),可以使用 `grubby` 命令:
+
+```
+$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
+```
+
+要删除内核参数,可以传递 `--remove-args` 参数给 `grubby`:
+
+```
+$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
+```
+
+如果有应该添加到每个内核命令行的选项(例如,你希望禁用 `rdrand` 指令生成随机数),则可以运行 `grubby` 命令:
+
+```
+$ grubby --update-kernel=ALL --args="nordrand"
+```
+
+这将更新所有内核条目的命令行,并把该选项保存下来,应用到以后新增的内核条目上。
+
+如果你想要从所有内核中删除该选项,则可以再次使用 `--remove-args` 和 `--update-kernel=ALL`:
+
+```
+$ grubby --update-kernel=ALL --remove-args="nordrand"
+```
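+
+要核对修改结果,可以列出各个内核条目当前的命令行参数(补充示例,非原文内容):
+
+```
+$ grubby --info=ALL
+```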
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/
+
+作者:[Laura Abbott][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/makes-fedora-kernel/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-kernel-1-816x345.jpg
+[2]: https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
diff --git a/published/20190401 3 cool text-based email clients.md b/published/20190401 3 cool text-based email clients.md
new file mode 100644
index 0000000000..b29c55f83e
--- /dev/null
+++ b/published/20190401 3 cool text-based email clients.md
@@ -0,0 +1,70 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10722-1.html)
+[#]: subject: (3 cool text-based email clients)
+[#]: via: (https://fedoramagazine.org/3-cool-text-based-email-clients/)
+[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
+
+3 个很酷的基于文本的邮件客户端
+======
+
+![][1]
+
+编写和接收电子邮件是每个人日常工作的重要组成部分,选择电子邮件客户端通常是一个重要决定。Fedora 系统提供了大量的电子邮件客户端可供选择,其中包括基于文本的电子邮件应用。
+
+### Mutt
+
+Mutt 可能是最受欢迎的基于文本的电子邮件客户端之一。它有人们期望的所有常用功能。Mutt 支持彩色显示、邮件线程、POP3 和 IMAP。但它最好的功能之一是它具有高度可配置性。实际上,用户可以轻松地更改键绑定,并创建宏以使工具适应特定的工作流程。
+
+要尝试 Mutt,请[使用 sudo][2] 和 `dnf` 安装它:
+
+```
+$ sudo dnf install mutt
+```
+
+为了帮助新手入门,Mutt 有一个非常全面的充满了宏示例和配置技巧的 [wiki][3]。
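+
+下面是一个最小化的 `~/.muttrc` 配置示意(这是补充的假设示例,非原文内容,账号和服务器地址需换成你自己的):
+
+```
+set realname = "Your Name"
+set from = "you@example.com"
+set imap_user = "you@example.com"
+set folder = "imaps://imap.example.com/"
+set spoolfile = "+INBOX"
+```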
+
+### Alpine
+
+Alpine 也是最受欢迎的基于文本的电子邮件客户端之一。它比 Mutt 更适合初学者,你可以通过应用本身配置大部分功能,而无需编辑配置文件。Alpine 的一个强大功能是能够对电子邮件进行评分。这对那些订阅了含有大量邮件的邮件列表(如 Fedora 的[开发列表][4])的用户来说尤其有用。通过使用分数,Alpine 可以根据用户的兴趣对电子邮件进行排序,优先显示高分的电子邮件。
+
+也可以使用 `dnf` 从 Fedora 的仓库安装 Alpine。
+
+```
+$ sudo dnf install alpine
+```
+
+使用 Alpine 时,你可以按 `Ctrl+G` 组合键轻松访问文档。
+
+### nmh
+
+nmh(new Mail Handling)遵循 UNIX 工具哲学。它提供了一组用于发送、接收、保存、检索和操作电子邮件的单一用途程序。这使你可以将 `nmh` 的命令与其他程序结合使用,或者基于 `nmh` 编写脚本来创建更多自定义工具。例如,你可以将 Mutt 与 `nmh` 一起使用。
+
+使用 `dnf` 可以轻松安装 `nmh`。
+
+```
+$ sudo dnf install nmh
+```
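+
+作为补充(非原文内容),下面是几个典型的 `nmh` 单一用途命令:
+
+```
+$ inc     # 收取新邮件
+$ scan    # 列出当前邮件夹中的邮件摘要
+$ show 3  # 显示第 3 封邮件
+```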
+
+要了解有关 `nmh` 和邮件处理的更多信息,你可以阅读这本 GPL 许可的[书][5]。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/3-cool-text-based-email-clients/
+
+作者:[Clément Verna][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/cverna/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2018/07/email-clients-816x345.png
+[2]: https://fedoramagazine.org/howto-use-sudo/
+[3]: https://gitlab.com/muttmua/mutt/wikis/home
+[4]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
+[5]: https://rand-mh.sourceforge.io/book/
diff --git a/sources/tech/20190401 How to create a filesystem on a Linux partition or logical volume.md b/published/20190401 How to create a filesystem on a Linux partition or logical volume.md
similarity index 51%
rename from sources/tech/20190401 How to create a filesystem on a Linux partition or logical volume.md
rename to published/20190401 How to create a filesystem on a Linux partition or logical volume.md
index 02cbe07431..8b5e45287b 100644
--- a/sources/tech/20190401 How to create a filesystem on a Linux partition or logical volume.md
+++ b/published/20190401 How to create a filesystem on a Linux partition or logical volume.md
@@ -1,27 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10705-1.html)
[#]: subject: (How to create a filesystem on a Linux partition or logical volume)
[#]: via: (https://opensource.com/article/19/4/create-filesystem-linux-partition)
-[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
+[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
-How to create a filesystem on a Linux partition or logical volume
+如何在 Linux 分区或逻辑卷中创建文件系统
======
-Learn to create a filesystem and mount it persistently or
-non-persistently in your system.
+
+> 学习在你的系统中创建一个文件系统,并且持久地或临时地挂载它。
+
![Filing papers and documents][1]
-In computing, a filesystem controls how data is stored and retrieved and helps organize the files on the storage media. Without a filesystem, information in storage would be one large block of data, and you couldn't tell where one piece of information stopped and the next began. A filesystem helps manage all of this by providing names to files that store data and maintaining a table of files and directories—along with their start/end location, total size, etc.—on disks within the filesystem.
+在计算技术中,文件系统控制如何存储和检索数据,并且帮助组织存储媒介中的文件。如果没有文件系统,信息将被存储为一个大数据块,而且你无法知道一条信息在哪结束,下一条信息在哪开始。文件系统通过为存储数据的文件提供名称,并且在文件系统中的磁盘上维护文件和目录表以及它们的开始和结束位置、总的大小等来帮助管理所有的这些信息。
-In Linux, when you create a hard disk partition or a logical volume, the next step is usually to create a filesystem by formatting the partition or logical volume. This how-to assumes you know how to create a partition or a logical volume, and you just want to format it to contain a filesystem and mount it.
+在 Linux 中,当你创建一个硬盘分区或者逻辑卷之后,接下来通常是通过格式化这个分区或逻辑卷来创建文件系统。本操作指南假设你已经知道如何创建分区或逻辑卷,只是希望将它格式化为某种文件系统,然后挂载它。
-### Create a filesystem
+### 创建文件系统
-Imagine you just added a new disk to your system and created a partition named **/dev/sda1** on it.
+假设你为你的系统添加了一块新的硬盘并且在它上面创建了一个叫 `/dev/sda1` 的分区。
- 1. To verify that the Linux kernel can see the partition, you can **cat** out **/proc/partitions** like this:
+1、为了验证 Linux 内核已经发现这个分区,你可以 `cat` 出 `/proc/partitions` 的内容,就像这样:
```
[root@localhost ~]# cat /proc/partitions
@@ -40,7 +41,7 @@ major minor #blocks name
```
- 2. Decide what kind of filesystem you want to create, such as ext4, XFS, or anything else. Here are a few options:
+2、决定你想要去创建的文件系统种类,比如 ext4、XFS,或者其他的一些。这里是一些可选项:
```
[root@localhost ~]# mkfs.
@@ -48,7 +49,7 @@ mkfs.btrfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
```
- 3. For the purposes of this exercise, choose ext4. (I like ext4 because it allows you to shrink the filesystem if you need to, a thing that isn't as straightforward with XFS.) Here's how it can be done (the output may differ based on device name/sizes):
+3、为了这次练习的目的,选择 ext4。(我喜欢 ext4,因为如果你需要的话,它允许你缩小文件系统,而这对 XFS 来说并不简单。)这里是完成它的方法(输出可能会因设备名称或者大小而不同):
```
[root@localhost ~]# mkfs.ext4 /dev/sda1
@@ -74,18 +75,16 @@ Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
- 4. In the previous step, if you want to create a different kind of filesystem, use a different **mkfs** command variation.
+4、在上一步中,如果你想去创建不同的文件系统,请使用不同变种的 `mkfs` 命令。
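+
+(补充示意,非原文内容)例如,要改为创建 XFS 文件系统,可以运行:
+
+```
+[root@localhost ~]# mkfs.xfs /dev/sda1
+```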
+### 挂载文件系统
+当你创建好文件系统后,你可以在你的操作系统中挂载它。
-### Mount a filesystem
-
-After you create your filesystem, you can mount it in your operating system.
-
- 1. First, identify the UUID of your new filesystem. Issue the **blkid** command to list all known block storage devices and look for **sda1** in the output:
+1、首先,识别出新文件系统的 UUID 编码。使用 `blkid` 命令列出所有可识别的块存储设备并且在输出信息中查找 `sda1`:
```
- [root@localhost ~]# blkid
+[root@localhost ~]# blkid
/dev/vda1: UUID="716e713d-4e91-4186-81fd-c6cfa1b0974d" TYPE="xfs"
/dev/sr1: UUID="2019-03-08-16-17-02-00" LABEL="config-2" TYPE="iso9660"
/dev/sda1: UUID="wow9N8-dX2d-ETN4-zK09-Gr1k-qCVF-eCerbF" TYPE="LVM2_member"
@@ -94,11 +93,10 @@ After you create your filesystem, you can mount it in your operating system.
[root@localhost ~]#
```
-
- 2. Run the following command to mount the **/dev/sd1** device :
+2、运行下面的命令挂载 `/dev/sd1` 设备:
```
- [root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
+[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# ls /mnt/
mount_point_for_dev_sda1
[root@localhost ~]# mount -t ext4 /dev/sda1 /mnt/mount_point_for_dev_sda1/
@@ -113,19 +111,16 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
[root@localhost ~]#
```
- The **df -h** command shows which filesystem is mounted on which mount point. Look for **/dev/sd1**. The mount command above used the device name **/dev/sda1**. Substitute it with the UUID identified in the **blkid** command. Also, note that a new directory was created to mount **/dev/sda1** under **/mnt**.
+命令 `df -h` 显示了每个文件系统被挂载的挂载点。查找 `/dev/sda1`。上面的挂载命令使用的是设备名称 `/dev/sda1`,你也可以用 `blkid` 命令给出的 UUID 来代替它。另外注意,我们在 `/mnt` 下新创建了一个目录来挂载 `/dev/sda1`。
-
- 3. A problem with using the mount command directly on the command line (as in the previous step) is that the mount won't persist across reboots. To mount the filesystem persistently, edit the **/etc/fstab** file to include your mount information:
+3、直接在命令行下使用挂载命令(就像上一步一样)有一个问题,那就是这个挂载在重启后不会保留。要持久地挂载文件系统,请编辑 `/etc/fstab` 文件,加入你的挂载信息:
```
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4 defaults 0 0
```
-
-
- 4. After you edit **/etc/fstab** , you can **umount /mnt/mount_point_for_dev_sda1** and run the command **mount -a** to mount everything listed in **/etc/fstab**. If everything went right, you can still list **df -h** and see your filesystem mounted:
+4、编辑完 `/etc/fstab` 文件后,你可以 `umount /mnt/mount_point_for_dev_sda1`,并且运行 `mount -a` 命令去挂载 `/etc/fstab` 文件中列出的所有设备。如果一切顺利的话,你可以使用 `df -h` 列出并且查看你挂载的文件系统:
```
root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
@@ -141,25 +136,23 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
```
- 5. You can also check whether the filesystem was mounted:
+5、你也可以检测文件系统是否被挂载:
```
[root@localhost ~]# mount | grep ^/dev/sd
/dev/sda1 on /mnt/mount_point_for_dev_sda1 type ext4 (rw,relatime,seclabel,stripe=8191,data=ordered)
```
-
-
-Now you know how to create a filesystem and mount it persistently or non-persistently within your system.
+现在你已经知道如何创建文件系统,并且持久地或临时地把它挂载在你的系统中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/create-filesystem-linux-partition
-作者:[Kedar Vijay Kulkarni (Red Hat)][a]
+作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[liujing97](https://github.com/liujing97)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20190402 Parallel computation in Python with Dask.md b/published/20190402 Parallel computation in Python with Dask.md
new file mode 100644
index 0000000000..818b242cc6
--- /dev/null
+++ b/published/20190402 Parallel computation in Python with Dask.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10742-1.html)
+[#]: subject: (Parallel computation in Python with Dask)
+[#]: via: (https://opensource.com/article/19/4/parallel-computation-python-dask)
+[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+
+使用 Dask 在 Python 中进行并行计算
+======
+
+> Dask 库可以将 Python 计算扩展到多个核心甚至是多台机器。
+
+![Pair programming][1]
+
+关于 Python 性能的一个常见抱怨是[全局解释器锁][2](GIL)。由于 GIL,同一时刻只能有一个线程执行 Python 字节码。因此,即使在现代的多核机器上,使用线程也不会加速计算。
+
+但当你需要并行化到多核时,你不需要放弃使用 Python:[Dask][3] 库可以将计算扩展到多个内核甚至多台机器。在某些配置中,可以在数千台机器上运行 Dask,每台机器都有多个内核。虽然这种扩展规模存在上限,但一般不容易达到。
+
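+如果你想跟着下面的例子动手尝试,通常可以用 pip 安装 Dask 的数组组件(`dask[array]` 这个附加依赖的写法以 PyPI 上的实际打包为准):
+
+```
+pip install "dask[array]"
+```
+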
+虽然 Dask 有许多内置的数组操作,但举一个非内置的例子,我们可以计算[偏度][4]:
+
+```
+import numpy
+from dask import array as darray
+
+# my_data 假设是一个已在别处定义好的数字序列
+arr = darray.from_array(numpy.array(my_data), chunks=(1000,))
+mean = darray.mean(arr)
+stddev = darray.std(arr)
+unnormalized_moment = darray.mean(arr * arr * arr)
+## See formula in wikipedia:
+skewness = ((unnormalized_moment - (3 * mean * stddev ** 2) - mean ** 3) /
+            stddev ** 3)
+# 以上运算都是惰性的,调用 skewness.compute() 才会真正并行执行
+```
+
+请注意,每个操作将根据需要使用尽可能多的内核。这将在所有核心上并行化执行,即使在计算数十亿个元素时也是如此。
+
+当然,并不是我们所有的操作都可由这个库并行化,有时我们需要自己实现并行性。
+
+为此,Dask 提供了一个“延迟”(delayed)功能:
+
+```
+import dask
+
+def is_palindrome(s):
+ return s == s[::-1]
+
+# string_list 假设是一个已在别处定义好的字符串列表
+palindromes = [dask.delayed(is_palindrome)(s) for s in string_list]
+total = dask.delayed(sum)(palindromes)
+result = total.compute()
+```
+
+这段代码会并行计算每个字符串是否为回文,并返回回文的总数。
+
+虽然 Dask 是为数据科学家创建的,但它绝不仅限于数据科学。每当我们需要在 Python 中并行化任务时,我们可以使用 Dask —— 无论有没有 GIL。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/parallel-computation-python-dask
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
+[2]: https://wiki.python.org/moin/GlobalInterpreterLock
+[3]: https://github.com/dask/dask
+[4]: https://en.wikipedia.org/wiki/Skewness#Definition
diff --git a/published/20190405 Streaming internet radio with RadioDroid.md b/published/20190405 Streaming internet radio with RadioDroid.md
new file mode 100644
index 0000000000..801098b3a1
--- /dev/null
+++ b/published/20190405 Streaming internet radio with RadioDroid.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: (tomjlw)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10741-1.html)
+[#]: subject: (Streaming internet radio with RadioDroid)
+[#]: via: (https://opensource.com/article/19/4/radiodroid-internet-radio-player)
+[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
+
+使用 RadioDroid 流传输网络广播
+======
+
+> 通过简单的设置使用你家中的音响收听你最爱的网络电台。
+
+![][1]
+
+最近网络媒体对 [Google 的 Chromecast 音频设备的下架][2]发出叹息。该设备在音频媒体界备受[好评][3],因此我早就在考虑入手一个。鉴于 Chromecast 退场的消息,我决定在它们全部被打包扔进垃圾堆之前,以一个合理的价位买上一个。
+
+我在 [MobileFun][4] 上找到了一个并下了订单。设备最终到货了,它被包在一个普普通通、简简单单的 Google 包装袋中,上面印着非常简短的使用指南。
+
+![Google Chromecast 音频][5]
+
+我通过光纤 S/PDIF 把它连接到家庭音响系统的数模转换器上,希望以此获得最佳的音质。
+
+安装过程并无纰漏,在五分钟后我就可以播放一些音乐了。我知道一些安卓应用支持 Chromecast,因此我决定用 Google Play Music 测试它。意料之中,它工作得不错,音乐效果听上去也相当好。然而作为一个具有开源精神的人,我决定看看我能找到什么开源播放器能兼容 Chromecast。
+
+### RadioDroid 的救赎
+
+[RadioDroid 安卓应用][6] 满足条件。它是开源的,并且可从 [GitHub][7]、Google Play 以及 [F-Droid][8] 上获取。根据帮助文档,RadioDroid 从 [Community Radio Browser][9] 网页寻找播放流。因此我决定在我的手机上安装尝试一下。
+
+![RadioDroid][10]
+
+安装过程快速顺利,RadioDroid 打开后十分迅速地展示出了当地电台。你可以在这个屏幕截图的右上方附近看到 Chromecast 按钮(一个带有波纹的长方形图标)。
+
+我尝试了几个当地电台,这个应用都能可靠地在我手机喇叭上播放音乐。不过我得摆弄一下 Chromecast 按钮,才能把音乐流式传输到 Chromecast 上,但它确实能做到。
+
+我决定找一下我喜爱的网络广播电台:法国马赛的 [格雷诺耶广播电台][11]。在 RadioDroid 上有许多找到电台的方法。其中一种是使用标签——“当地”、“最流行”等——就在电台列表上方。其中一个标签是国家,我找到法国,并在其 1500 个电台中翻找格雷诺耶广播电台。另一种办法是使用屏幕上方的查询按钮;查询迅速找到了那家美妙的电台。我又尝试了其它几次查询,它们都返回了合理的结果。
+
+回到“当地”标签,我在列表中翻来覆去,发现“当地”的定义似乎是“在同一个国家”。因此尽管西雅图、波特兰、旧金山、洛杉矶和朱诺比多伦多更靠近我的家,我并没有在“当地”标签中看到它们。然而通过使用查询功能,我可以发现所有名字中带有西雅图的电台。
+
+“语言”标签使我找到所有用葡语(及葡语方言)播报的电台。我很快发现了另一个最爱的电台 [91 Rock Curitiba][12]。
+
+接着灵感来了,虽然现在是春天了,但又如何呢?让我们听一些圣诞音乐。意料之中,搜寻圣诞把我引到了 [181.FM – Christmas Blender][13]。不错,一两分钟的欣赏对我就够了。
+
+因此总的来说,我推荐把 RadioDroid 和 Chromecast 的组合作为一种用家庭音响以合理价位播放网络电台的良好方式。
+
+### 对于音乐方面……
+
+最近我从 [Blue Coast Music][16] 商店里选购了一张由 [Qua Continuum][15] 创作、名为 [Continuum One][14] 的有趣的氛围(甚至无节拍)音乐专辑。
+
+Blue Coast 有许多值得开源音乐爱好者关注的东西。音乐可以直接下载(有时还提供物理介质),无需使用那些奇怪的平台专用下载管理器。它通常提供几种格式,包括 WAV、FLAC 和 DSD;WAV 和 FLAC 还提供不同的字长和采样率,包括 16/44.1、24/96 和 24/192,DSD 则有 2.8、5.6 和 11.2 MHz。这些音乐都是用优秀的设备精心录制的。不幸的是,我并没有找到太多符合我口味的音乐,尽管我确实喜欢 Blue Coast 上的几位艺术家,包括 Qua Continuum、[Art Lande][17] 以及 [Alex De Grassi][18]。
+
+在 [Bandcamp][19] 上,我挑选了 [Emancipator's Baralku][20] 和 [Framework's Tides][21],两张专辑都是我喜欢的。两位艺术家创作的音乐符合我的口味:偏电子但(总体来说)不是舞曲,旋律优美,副歌也很好听。Bandcamp 上有许多能让开源音乐发烧友喜爱的东西,比如购买前可以试听整首歌;没有垃圾软件式的下载器;与大量音乐家的合作;以及对 [Creative Commons music][22] 的支持。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
+
+作者:[Chris Hermansen (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (woman programming)
+[2]: https://www.theverge.com/2019/1/11/18178751/google-chromecast-audio-discontinued-sale
+[3]: https://www.whathifi.com/google/chromecast-audio/review
+[4]: https://www.mobilefun.com/google-chromecast-audio-black-70476
+[5]: https://opensource.com/sites/default/files/uploads/internet-radio_chromecast.png (Google Chromecast Audio)
+[6]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
+[7]: https://github.com/segler-alex/RadioDroid
+[8]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
+[9]: http://www.radio-browser.info/gui/#!/
+[10]: https://opensource.com/sites/default/files/uploads/internet-radio_radiodroid.png (RadioDroid)
+[11]: http://www.radiogrenouille.com/
+[12]: https://91rock.com.br/
+[13]: http://player.181fm.com/?station=181-xblender
+[14]: https://www.youtube.com/watch?v=PqLCQXPS8iQ
+[15]: https://bluecoastmusic.com/artists/qua-continuum
+[16]: https://bluecoastmusic.com/store
+[17]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aart%20lande
+[18]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aalex%20de%20grassi
+[19]: https://bandcamp.com/
+[20]: https://emancipator.bandcamp.com/album/baralku
+[21]: https://frameworksuk.bandcamp.com/album/tides
+[22]: https://bandcamp.com/tag/creative-commons
diff --git a/published/20190407 Happy 14th anniversary Git- What do you love about Git.md b/published/20190407 Happy 14th anniversary Git- What do you love about Git.md
new file mode 100644
index 0000000000..25400372b3
--- /dev/null
+++ b/published/20190407 Happy 14th anniversary Git- What do you love about Git.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: (zhs852)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10702-1.html)
+[#]: subject: (Happy 14th anniversary Git: What do you love about Git?)
+[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
+[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/seth)
+
+Git 十四周年:你喜欢 Git 的哪一点?
+======
+
+> Git 为软件开发所带来的巨大影响是其它工具难以企及的。
+
+![arrows cycle symbol for failing faster][1]
+
+在 Linus Torvalds 开发 Git 后的十四年间,它为软件开发所带来的影响是其它工具难以企及的:在 [StackOverflow 的 2018 年开发者调查][2] 中,87% 的受访者都表示他们使用 Git 来作为他们项目的版本控制工具。显然,没有其它工具能撼动 Git 版本控制管理工具(SCM)之王的地位。
+
+为了在 4 月 7 日 Git 的十四周年这一天向 Git 表示敬意,我问了一些爱好者他们最喜欢 Git 的哪一点。以下便是他们所告诉我的:
+
+*(为了便于理解,部分回答已经进行了小幅修改)*
+
+“我无法忍受 Git。无论是它难以理解的术语,还是它的分布式特性。即使用上 Gerrit 这样的插件,它的易用性也只有 Subversion 或 Perforce 这类集中式仓库管理工具的一半。不过既然这次的问题是‘你喜欢 Git 的什么?’,我还是要说:Git 使得复杂的源代码树操作成为可能,并且它的回滚功能使得修复 20 次提交之前引入的问题变得简单。” — _[Sweet Tea Dorminy][3]_
+
+“我喜欢 Git 是因为它不会强制我执行特定的工作流程,并且开发团队可以自由地以适合自己的方式来进行团队开发,无论是拉取请求、以电子邮件递送差异文件或是给予所有人推送的权限。” — _[Andy Price][4]_
+
+“我大约从 2006、2007 年就开始使用 Git 了。我喜欢 Git 是因为,它既适用于那种从未离开过我电脑的小项目,也适用于大型的团队合作的分布式项目。Git 使你可以从(几乎)所有的错误提交中回滚到先前版本,这个功能显著地减轻了我在软件版本管理方面的压力。” — _[Jonathan S. Katz][5]_
+
+“我很欣赏 Git 那种 [底层命令和高层命令][6] 的理念。用户可以使用 Git 有效率地分享任何形式的信息,而不需要知道其内部工作原理;而好奇的人则可以揭开其表层命令,发现其底层那个可按内容寻址的文件系统,它为许多代码分享平台提供了支持。” — _[Matthew Broberg][7]_
+
+“我喜欢 Git 是因为浏览、开发、构建、测试和向我的 Git 仓库中提交代码的工作几乎都能用它来完成。它经常会调动起我参与开源项目的积极性。” — _[Daniel Oh][8]_
+
+“Git 是我用过的首个版本控制工具。数年间,它从一个可怕的工具变成了一个友好的工具。我喜欢它让你在修改代码时更有信心,因为它能保证你主分支的安全(除非你把一段考虑不周的代码强制推送到了主分支)。你还可以检出先前的提交来撤销更改,这一点也很棒。” — _[Kedar Vijay Kulkarni][9]_
+
+“我之所以喜欢 Git 是因为它淘汰了一些其它的版本控制工具。没人使用 VSS,而 Subversion 可以和 git-svn 一起使用(如果必要),BitKeeper 则和 Monotone 一样只为老一辈所知。当然,我们还有 Mercurial,不过在我几年之前用它来为 Firefox 添加 AArch64 支持时,我觉得它仍是那种还未完善的工具。部分人可能还会提到 Perforce、SourceSafe 或是其它企业级的解决方案,我只想说它们在开源世界里并不流行。” — _[Marcin Juszkiewicz][10]_
+
+“我喜欢它基于 SHA1 的对象模型(commit → tree → blob)的简洁性,也喜欢它的高层命令。同时我也将它作为 JBoss/Red Hat Fuse 的补丁机制,并且这种机制确实有效。我还喜欢 Git 的 [三棵树的故事][11]。” — _[Grzegorz Grzybek][12]_
+
+“我喜欢 [自动生成的 Git 手册页][13](它生成的内容虽然听起来像是真的 Git 手册,但实际上毫无意义,不过总会给人一种像是真的 Git 文档的感觉……),它让我对 Git 的敬意油然而生。” — _[Marko Myllynen][14]_
+
+“Git 改变了我作为开发者的生活。它使得 SCM 问题从世界上消失得无影无踪。”— _[Joel Takvorian][15]_
+
+* * *
+
+看完这十个爱好者的回答之后,就轮到你了:你最欣赏 Git 的什么?请在评论区分享你的看法!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/what-do-you-love-about-git
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972][b]
+译者:[zhs852](https://github.com/zhs852)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
+[2]: https://insights.stackoverflow.com/survey/2018/#work-_-version-control
+[3]: https://github.com/sweettea
+[4]: https://www.linkedin.com/in/andrew-price-8771796/
+[5]: https://opensource.com/users/jkatz05
+[6]: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain
+[7]: https://opensource.com/users/mbbroberg
+[8]: https://opensource.com/users/daniel-oh
+[9]: https://opensource.com/users/kkulkarn
+[10]: https://github.com/hrw
+[11]: https://speakerdeck.com/schacon/a-tale-of-three-trees
+[12]: https://github.com/grgrzybek
+[13]: https://git-man-page-generator.lokaltog.net/
+[14]: https://github.com/myllynen
+[15]: https://github.com/jotak
diff --git a/published/20190408 Bash vs. Python- Which language should you use.md b/published/20190408 Bash vs. Python- Which language should you use.md
new file mode 100644
index 0000000000..c665f00cbf
--- /dev/null
+++ b/published/20190408 Bash vs. Python- Which language should you use.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10725-1.html)
+[#]: subject: (Bash vs. Python: Which language should you use?)
+[#]: via: (https://opensource.com/article/19/4/bash-vs-python)
+[#]: author: (Archit Modi Red Hat https://opensource.com/users/architmodi/users/greg-p/users/oz123)
+
+Bash vs Python:你该使用哪个?
+======
+
+> 两种编程语言都各有优缺点,它们在某些任务方面互有胜负。
+
+![][1]
+
+[Bash][2] 和 [Python][3] 是大多数自动化工程师最喜欢的编程语言。它们都各有优缺点,有时很难选择应该使用哪一个。所以,最诚实的答案是:这取决于任务、范围、背景和任务的复杂性。
+
+让我们来比较一下这两种语言,以便更好地理解它们各自的优点。
+
+### Bash
+
+ * 是一种 Linux/Unix shell 命令语言
+ * 非常适合编写使用命令行界面(CLI)实用程序的 shell 脚本、利用管道把一个命令的输出传递给另一个命令(参见本节末尾的示例),以及执行简单的任务(可以多达 100 行代码)
+ * 可以按原样使用命令行命令和实用程序
+ * 启动时间比 Python 快,但执行时性能差
+ * Windows 中默认没有安装。你的脚本可能不会兼容多个操作系统,但是 Bash 是大多数 Linux/Unix 系统的默认 shell
+ * 与其它 shell (如 csh、zsh、fish) *不* 完全兼容。
+ * 通过管道(`|`)传递 CLI 实用程序如 `sed`、`awk`、`grep` 等会降低其性能
+ * 缺少很多函数、对象、数据结构和多线程支持,这限制了它在复杂脚本或编程中的使用
+ * 缺少良好的调试工具和实用程序
+
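+作为上面提到的管道用法的一个小示例,下面这个纯属示意的脚本统计了一个(假设存在的)日志文件中最常出现的 5 个 IP 地址:
+
+```
+#!/bin/bash
+# access.log 是假设的文件名;先从每行行首提取 IP,再统计并按次数排序
+grep -oE '^[0-9.]+' access.log | sort | uniq -c | sort -rn | head -5
+```
+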
+### Python
+
+ * 是一种面向对象的编程语言(OOP),因此它比 Bash 更加通用
+ * 几乎可以用于任何任务
+ * 适用于大多数操作系统,默认情况下它在大多数 Unix/Linux 系统中都有安装
+ * 与伪代码非常相似
+ * 具有简单、清晰、易于学习和阅读的语法
+ * 拥有大量的库、文档以及一个活跃的社区
+ * 提供比 Bash 更友好的错误处理特性
+ * 有比 Bash 更好的调试工具和实用程序,这使得它在开发涉及到很多行代码的复杂软件应用程序时是一种很棒的语言
+ * 应用程序(或脚本)可能包含许多第三方依赖项,这些依赖项必须在执行前安装
+ * 对于简单任务,需要编写比 Bash 更多的代码
+
+我希望这些列表能够让你更好地了解该使用哪种语言以及在何时使用它。
+
+你在日常工作中更多会使用哪种语言,Bash 还是 Python?请在评论中分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/bash-vs-python
+
+作者:[Archit Modi (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/architmodi/users/greg-p/users/oz123
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_happy_sad_developer_programming.png?itok=72nkfSQ_
+[2]: /article/18/7/admin-guide-bash
+[3]: /article/17/11/5-approaches-learning-python
diff --git a/published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md b/published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
new file mode 100644
index 0000000000..2e63380c76
--- /dev/null
+++ b/published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: (tomjlw)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10747-1.html)
+[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
+[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+思科、谷歌重新赋能多/混合云共同开发
+======
+> 思科、VMware、HPE 等公司开始采用了新的 Google Cloud Anthos 云技术。
+
+![Thinkstock][1]
+
+思科与谷歌已扩展它们的混合云开发活动,以帮助其客户在从本地数据中心到公共云的任何地方,更轻松地搭建安全的多云及混合云应用。
+
+这次扩张围绕着谷歌被称作 Anthos 的新的开源混合云包展开,它是在本周的 Google Next 活动上推出的。Anthos 基于并取代了谷歌现有的谷歌云服务测试版。Anthos 将让客户们无须修改应用就可以在现有的本地硬件或公共云上运行应用。据谷歌说,它可以在[谷歌云平台][5](GCP)上与 [谷歌 Kubernetes 引擎][6](GKE)一同使用,或者在数据中心中与 [GKE On-Prem][7] 一同使用。谷歌说,Anthos 首次让客户们可以无需管理员和开发者了解不同的环境和 API,就能从谷歌平台上管理在第三方云上(如 AWS 和 Azure)的工作负荷。
+
+关键在于,Anthos 提供了一个单一的托管服务,它使得客户们无须担心不同的环境或 API 就能跨云管理、部署工作负荷。
+
+作为首秀的一部分,谷歌也宣布一个叫做 [Anthos Migrate][8] 的测试计划,它能够从本地环境或者其它云自动迁移虚拟机到 GKE 上的容器中。谷歌说,“这种独特的迁移技术使你无须修改原来的虚拟机或者应用就能以一种行云流水般的方式迁移、更新你的基础设施”。谷歌称它给予了公司按客户节奏转移本地应用到云环境的灵活性。
+
+### 思科和谷歌
+
+就思科来说,它宣布对 Anthos 的支持并承诺将它紧密集成进思科的数据中心技术中,例如 HyperFlex 超融合包、应用中心基础设施(思科的旗舰 SDN 方案)、SD-WAN 和 StealthWatch 云。思科说,无论是本地的还是在云端的,这次集成将通过自动更新到最新版本和安全补丁,给予一种一致的、云般的感觉。
+
+“谷歌云在容器(Kubernetes)和服务网格(Istio)上的专业与它们在开发者社区的领导力,再加上思科的企业级网络、计算、存储和安全产品及服务,将为我们的顾客促成一次强强联合。”思科的云平台和解决方案集团资深副总裁 Kip Compton 这样[写道][9],“思科对于 Anthos 的集成将会帮助顾客跨本地数据中心和公共云搭建、管理多云/混合云应用,让他们专注于创新和灵活性,同时不会影响安全性或增加复杂性。”
+
+### 谷歌云和思科
+
+谷歌云工程副总裁 Eyal Manor [写道][10] 通过思科对 Anthos 的支持,客户将能够:
+
+* 受益于 GKE 这样的全托管服务,以及思科的超融合基础设施、网络和安全技术;
+* 在企业数据中心和云中获得一致的运行体验;
+* 在企业数据中心使用云服务;
+* 用最新的云技术更新本地基础设施。
+
+思科和谷歌从 2017 年 10 月就在紧密合作,当时他们表示正在开发一个能够连接本地基础设施和云环境的开放混合云平台。该套件,即[思科为谷歌云打造的混合云平台][11],大致在 2018 年 9 月上市。它使得客户们能通过谷歌云托管 Kubernetes 容器开发企业级功能,包含思科网络和安全技术以及来自 Istio 的服务网格监控。
+
+谷歌说开源的 Istio 的容器和微服务优化技术给开发者提供了一种一致的方式,通过服务级的 mTLS (双向传输层安全)身份验证访问控制来跨云连接、保护、管理和监听微服务。因此,客户能够轻松实施新的可移植的服务,并集中配置和管理这些服务。
+
+思科不是唯一宣布对 Anthos 支持的供应商。谷歌表示,至少 30 家大型合作商包括 [VMware][12]、[Dell EMC][13]、[HPE][14]、Intel 和联想致力于为他们的客户在它们自己的超融合基础设施上提供 Anthos 服务。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
+[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
+[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://cloud.google.com/
+[6]: https://cloud.google.com/kubernetes-engine/
+[7]: https://cloud.google.com/gke-on-prem/
+[8]: https://cloud.google.com/contact/
+[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
+[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
+[11]: https://cloud.google.com/cisco/
+[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
+[13]: https://www.dellemc.com/en-us/index.htm
+[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
+
diff --git a/published/20190410 How To Check The List Of Open Ports In Linux.md b/published/20190410 How To Check The List Of Open Ports In Linux.md
new file mode 100644
index 0000000000..0242d3cec2
--- /dev/null
+++ b/published/20190410 How To Check The List Of Open Ports In Linux.md
@@ -0,0 +1,231 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10736-1.html)
+[#]: subject: (How To Check The List Of Open Ports In Linux?)
+[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+如何检查 Linux 中的开放端口列表?
+======
+
+最近,我们就这一主题写过几篇文章,它们可以帮助你检查远程服务器上给定的端口是否打开。
+
+如果你想[检查远程 Linux 系统上的端口是否打开][1]、[检查多个远程 Linux 系统上的端口是否打开][2],或者[检查多个远程 Linux 系统上的多个端口状态][2],请点击相应链接浏览。
+
+而本文则帮助你检查本地系统上的开放端口列表。
+
+Linux 中有不少可用于此目的的实用程序。这里我挑选了四个最重要的命令来完成这项工作,它们都非常出名,并被 Linux 管理员广泛使用。
+
+ * `netstat`:netstat(“network statistics” 的缩写)是一个命令行工具,用于显示网络连接(传入和传出)的相关信息,例如路由表、伪装连接、多播组成员和网络端口。
+ * `nmap`:Nmap(“Network Mapper”)是一个用于网络探索与安全审计的开源工具,旨在快速扫描大型网络。
+ * `ss`:ss 用于转储套接字统计信息,它可以显示与 netstat 类似的信息,并且相比其他工具能展示更多的 TCP 状态信息。
+ * `lsof`:lsof 是 “list open files” 的缩写,用于列出被某个进程打开的所有文件。
+
+### 如何使用 Linux 命令 netstat 检查系统中的开放端口列表?
+
+`netstat` 是 “network statistics” 的缩写,是一个用于显示网络连接(传入和传出)相关信息的命令行工具,例如路由表、伪装连接、多播组成员和网络端口。
+
+它可以列出所有的 tcp、udp 连接和所有的 unix 套接字连接。
+
+它可用于发现网络问题,确定网络连接的数量。
+
+```
+# netstat -tplugn
+
+Active Internet connections (only servers)
+Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
+tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 2038/master
+tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 1396/snmpd
+tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1398/httpd
+tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1388/sshd
+tcp6 0 0 :::25 :::* LISTEN 2038/master
+tcp6 0 0 :::22 :::* LISTEN 1388/sshd
+udp 0 0 0.0.0.0:39136 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:56130 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:40105 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:11584 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:30105 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:50656 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:1632 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:28265 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:40764 0.0.0.0:* 1396/snmpd
+udp 0 0 10.90.56.21:123 0.0.0.0:* 895/ntpd
+udp 0 0 127.0.0.1:123 0.0.0.0:* 895/ntpd
+udp 0 0 0.0.0.0:123 0.0.0.0:* 895/ntpd
+udp 0 0 0.0.0.0:53390 0.0.0.0:* 1396/snmpd
+udp 0 0 0.0.0.0:161 0.0.0.0:* 1396/snmpd
+udp6 0 0 :::123 :::* 895/ntpd
+
+IPv6/IPv4 Group Memberships
+Interface RefCnt Group
+--------------- ------ ---------------------
+lo 1 224.0.0.1
+eth0 1 224.0.0.1
+lo 1 ff02::1
+lo 1 ff01::1
+eth0 1 ff02::1
+eth0 1 ff01::1
+```
+
+你也可以使用下面的命令检查特定的端口。
+
+```
+# netstat -tplugn | grep :22
+
+tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1388/sshd
+tcp6 0 0 :::22 :::* LISTEN 1388/sshd
+```
+
+### 如何使用 Linux 命令 ss 检查系统中的开放端口列表?
+
+`ss` 被用于转储套接字统计信息。它也可以显示类似 `netstat` 的信息。相比其他工具它可以展示更多的 TCP 状态信息。
+
+```
+# ss -lntu
+
+Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
+udp UNCONN 0 0 *:39136 *:*
+udp UNCONN 0 0 *:56130 *:*
+udp UNCONN 0 0 *:40105 *:*
+udp UNCONN 0 0 *:11584 *:*
+udp UNCONN 0 0 *:30105 *:*
+udp UNCONN 0 0 *:50656 *:*
+udp UNCONN 0 0 *:1632 *:*
+udp UNCONN 0 0 *:28265 *:*
+udp UNCONN 0 0 *:40764 *:*
+udp UNCONN 0 0 10.90.56.21:123 *:*
+udp UNCONN 0 0 127.0.0.1:123 *:*
+udp UNCONN 0 0 *:123 *:*
+udp UNCONN 0 0 *:53390 *:*
+udp UNCONN 0 0 *:161 *:*
+udp UNCONN 0 0 :::123 :::*
+tcp LISTEN 0 100 *:25 *:*
+tcp LISTEN 0 128 127.0.0.1:199 *:*
+tcp LISTEN 0 128 *:80 *:*
+tcp LISTEN 0 128 *:22 *:*
+tcp LISTEN 0 100 :::25 :::*
+tcp LISTEN 0 128 :::22 :::*
+```
+
+你也可以使用下面的命令检查特定的端口。
+
+```
+# ss -lntu | grep ':25'
+
+tcp LISTEN 0 100 *:25 *:*
+tcp LISTEN 0 100 :::25 :::*
+```
+
+### 如何使用 Linux 命令 nmap 检查系统中的开放端口列表?
+
+Nmap(“Network Mapper”)是一个用于网络探索与安全审计的开源工具。它旨在快速扫描大型网络,也能很好地用于单个主机。
+
+Nmap 使用裸 IP 数据包以一种新颖的方式来确定网络上有哪些主机可用,这些主机提供什么服务(应用程序名称和版本),它们运行什么操作系统(版本),使用什么类型的数据包过滤器/防火墙,以及许多其他特征。
+
+虽然 Nmap 通常用于安全审计,但许多系统和网络管理员发现它对于日常工作也非常有用,例如网络资产清点、管理服务升级计划以及监控主机或服务正常运行时间。
+
+```
+# nmap -sTU -O localhost
+
+Starting Nmap 6.40 ( http://nmap.org ) at 2019-03-20 09:57 CDT
+Nmap scan report for localhost (127.0.0.1)
+Host is up (0.00028s latency).
+Other addresses for localhost (not scanned): 127.0.0.1
+Not shown: 1994 closed ports
+
+PORT STATE SERVICE
+22/tcp open ssh
+25/tcp open smtp
+80/tcp open http
+199/tcp open smux
+123/udp open ntp
+161/udp open snmp
+
+Device type: general purpose
+Running: Linux 3.X
+OS CPE: cpe:/o:linux:linux_kernel:3
+OS details: Linux 3.7 - 3.9
+Network Distance: 0 hops
+
+OS detection performed. Please report any incorrect results at http://nmap.org/submit/ .
+Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
+```
+
+你也可以使用下面的命令检查特定的端口。
+
+```
+# nmap -sTU -O localhost | grep 123
+
+123/udp open ntp
+```
+
+### 如何使用 Linux 命令 lsof 检查系统中的开放端口列表?
+
+`lsof` 会向你显示系统上已打开的文件列表,以及打开这些文件的进程,还会显示与文件相关的其他信息。
+
+```
+# lsof -i
+
+COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
+ntpd 895 ntp 16u IPv4 18481 0t0 UDP *:ntp
+ntpd 895 ntp 17u IPv6 18482 0t0 UDP *:ntp
+ntpd 895 ntp 18u IPv4 18487 0t0 UDP localhost:ntp
+ntpd 895 ntp 20u IPv4 23020 0t0 UDP CentOS7.2daygeek.com:ntp
+sshd 1388 root 3u IPv4 20065 0t0 TCP *:ssh (LISTEN)
+sshd 1388 root 4u IPv6 20067 0t0 TCP *:ssh (LISTEN)
+snmpd 1396 root 6u IPv4 22739 0t0 UDP *:snmp
+snmpd 1396 root 7u IPv4 22729 0t0 UDP *:40105
+snmpd 1396 root 8u IPv4 22730 0t0 UDP *:50656
+snmpd 1396 root 9u IPv4 22731 0t0 UDP *:pammratc
+snmpd 1396 root 10u IPv4 22732 0t0 UDP *:30105
+snmpd 1396 root 11u IPv4 22733 0t0 UDP *:40764
+snmpd 1396 root 12u IPv4 22734 0t0 UDP *:53390
+snmpd 1396 root 13u IPv4 22735 0t0 UDP *:28265
+snmpd 1396 root 14u IPv4 22736 0t0 UDP *:11584
+snmpd 1396 root 15u IPv4 22737 0t0 UDP *:39136
+snmpd 1396 root 16u IPv4 22738 0t0 UDP *:56130
+snmpd 1396 root 17u IPv4 22740 0t0 TCP localhost:smux (LISTEN)
+httpd 1398 root 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+master 2038 root 13u IPv4 21638 0t0 TCP *:smtp (LISTEN)
+master 2038 root 14u IPv6 21639 0t0 TCP *:smtp (LISTEN)
+sshd 9052 root 3u IPv4 1419955 0t0 TCP CentOS7.2daygeek.com:ssh->Ubuntu18-04.2daygeek.com:11408 (ESTABLISHED)
+httpd 13371 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13372 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13373 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+```
+
+你也可以使用下面的命令检查特定的端口。
+
+```
+# lsof -i:80
+
+COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
+httpd 1398 root 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13371 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13372 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13373 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10675-1.html
+[2]: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
diff --git a/published/20190413 The Fargate Illusion.md b/published/20190413 The Fargate Illusion.md
new file mode 100644
index 0000000000..ef0cc6153e
--- /dev/null
+++ b/published/20190413 The Fargate Illusion.md
@@ -0,0 +1,443 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10740-1.html)
+[#]: subject: (The Fargate Illusion)
+[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
+[#]: author: (Lee Briggs https://leebriggs.co.uk/)
+
+破除对 AWS Fargate 的幻觉
+======
+
+我在 $work 建立了一个基于 Kubernetes 的平台已经快一年了,而且有点像 Kubernetes 的布道者了。真的,我认为这项技术太棒了。然而我并没有对它的运营和维护的困难程度抱过什么幻想。今年早些时候我读了[这样][1]的一篇文章,并对其中某些观点深以为然。如果我在一家规模较小的、有 10 到 15 个工程师的公司,假如有人建议管理和维护一批 Kubernetes 集群,那我会感到害怕的,因为它的运维开销太高了!
+
+尽管我现在对 Kubernetes 的一切都很感兴趣,但我仍然对“无服务器”计算会消灭运维工程师的说法抱有好奇。这种好奇心主要来源于我希望在未来仍然能有一份有报酬的工作 —— 如果我们前景光明的未来不需要运维工程师,那我得明白到底是怎么回事。我已经在 Lambda 和 Google Cloud Functions 上做了一些实验,结果让我印象十分深刻,但我仍然坚信无服务器解决方案只是解决了一部分问题。
+
+我关注 [AWS Fargate][2] 已经有一段时间了,它正是 $work 的开发人员所鼓吹的那种“无服务器计算”—— 主要是因为使用 Fargate,你无需管理底层节点就可以运行你的 Docker 容器。我想看看它到底意味着什么,所以我开始尝试从头开始在 Fargate 上运行一个应用,看看是否可以成功。这里我对成功的定义是一个与“生产级”应用程序相近的东西,我想应该包含以下内容:
+
+* 一个在 Fargate 上运行的容器
+* 配置信息以环境变量的形式下推
+* “秘密信息” 不能是明文的
+* 位于负载均衡器之后
+* 有效的 SSL 证书的 TLS 通道
+
+我以“基础设施即代码”的角度来开始整个任务,不遵循默认的 AWS 控制台向导,而是使用 terraform 来定义基础架构。这很可能让整个事情变得复杂,但我想确保任何部署都是可重现的,任何想要遵循此步骤的人都可发现我的结论。
+
+上述所有标准通常都可以通过基于 Kubernetes 的平台使用一些外部的附加组件和插件来实现,所以我确实是以一种比较的心态来处理整个任务的,因为我要将它与我的常用工作流程进行比较。我的主要目标是看看 Fargate 有多容易,特别是与 Kubernetes 相比时。结果让我感到非常惊讶。
+
+### AWS 是有开销的
+
+我有一个干净的 AWS 账户,并决定从零开始部署一个 webapp。与 AWS 中的其它基础设施一样,我必须首先使基本的基础设施正常工作起来,因此我需要先定义一个 VPC。
+
+遵循最佳实践,我将这个 VPC 划分为跨可用区(AZ)的子网:一个公共子网和一个私有子网。这时我想到,只要这种设置基础设施的需求还存在,我就能找到一份这样的工作。AWS 是“免”运维的这一概念一直让我感到愤怒。开发者社区中的许多人想当然地认为,设置和定义一个设计良好的 AWS 账户和基础设施不需要付出多少工作和努力。而这还是在开始谈论多帐户架构*之前*——目前我仍然使用单一帐户,就已经必须定义好基础设施和传统的网络设备了。
+
+这里也值得记住,我已经做了很多次,所以我*很清楚*该做什么。我可以在我的帐户中使用默认的 VPC 以及预先提供的子网,我觉得很多刚开始的人也可以使用它。这大概花了我半个小时才运行起来,但我不禁想到,即使我想运行 lambda 函数,我仍然需要某种连接和网络。定义 NAT 网关和在 VPC 中路由根本不会让你觉得很“Serverless”,但要往下进行这就是必须要做的。
+
+### 运行简单的容器
+
+在我启动运行了基本的基础设施之后,现在我想让我的 Docker 容器运行起来。我开始翻阅 Fargate 文档并浏览 [入门][3] 文档,这些内容马上就展现在了我面前:
+
+![][4]
+
+等等,只是让我的容器运行就至少要有**三个**步骤?这完全不像我所想的,不过还是让我们开始吧。
+
+#### 任务定义
+
+“任务定义”用来定义要运行的实际容器。我在这里遇到的问题是,任务定义这件事非常复杂。这里有很多选项都很简单,比如指定 Docker 镜像和内存限制,但我还必须定义一个网络模型以及我并不熟悉的其它各种选项。真需要这样吗?如果我完全没有 AWS 方面的知识就进入到这个过程里,那么在这个阶段我会感觉非常的不知所措。可以在 AWS 页面上找到这些 [参数][5] 的完整列表,这个列表很长。我知道我的容器需要一些环境变量,它需要暴露一个端口。所以我首先在一个神奇的 [terraform 模块][6] 的帮助下定义了这一点,这真的让这件事更容易了。如果没有这个模块,我就得手写 JSON 来定义我的容器定义。
+
+首先我定义了一些环境变量:
+
+```
+container_environment_variables = [
+ {
+ name = "USER"
+ value = "${var.user}"
+ },
+ {
+ name = "PASSWORD"
+ value = "${var.password}"
+ }
+]
+```
+
+然后我使用上面提及的模块组成了任务定义:
+
+```
+module "container_definition_app" {
+ source = "cloudposse/ecs-container-definition/aws"
+ version = "v0.7.0"
+
+ container_name = "${var.name}"
+ container_image = "${var.image}"
+
+ container_cpu = "${var.ecs_task_cpu}"
+ container_memory = "${var.ecs_task_memory}"
+ container_memory_reservation = "${var.container_memory_reservation}"
+
+ port_mappings = [
+ {
+ containerPort = "${var.app_port}"
+ hostPort = "${var.app_port}"
+ protocol = "tcp"
+ },
+ ]
+
+ environment = "${local.container_environment_variables}"
+
+}
+```
+
+在这一点上我非常困惑,我需要在这里定义很多配置才能运行,而这时什么都没有开始呢,但这是必要的 —— 运行 Docker 容器肯定需要了解一些容器配置的知识。我 [之前写过][7] 关于 Kubernetes 和配置管理的问题的文章,在这里似乎遇到了同样的问题。
+
+接下来,我用上面的模块组装出了任务定义(幸好这个模块替我抽象掉了所需的 JSON —— 如果我不得不手写 JSON,我可能早就放弃了)。
+
+当我定义模块参数时,我突然意识到我漏掉了一些东西。我需要一个 IAM 角色!好吧,让我来定义:
+
+```
+resource "aws_iam_role" "ecs_task_execution" {
+ name = "${var.name}-ecs_task_execution"
+
+ assume_role_policy = <<EOF
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Action": "sts:AssumeRole",
+      "Principal": {
+        "Service": "ecs-tasks.amazonaws.com"
+      },
+      "Effect": "Allow",
+      "Sid": ""
+    }
+  ]
+}
+EOF
+}
+```
+
+#### 秘密信息
+
+Fargate/ECS 处理秘密信息管理(secret management)部分的方式是使用 [AWS SSM][12](此服务的全名是 AWS 系统管理器参数存储库(Systems Manager Parameter Store),但我不想使用这个名称,因为坦率地说这个名字太愚蠢了)。
+
+AWS 文档很好地[涵盖了这个内容][13],因此我开始将其转换为 terraform。
+
+##### 指定秘密信息
+
+首先,你必须定义一个参数并为其命名。在 terraform 中,它看起来像这样:
+
+```
+resource "aws_ssm_parameter" "app_password" {
+ name = "${var.app_password_param_name}" # The name of the value in AWS SSM
+ type = "SecureString"
+ value = "${var.app_password}" # The actual value of the password, like correct-horse-battery-stable
+}
+```
+
+显然,这里的关键部分是 “SecureString” 类型。这会使用默认的 AWS KMS 密钥来加密数据,这对我来说并不是很直观。这比 Kubernetes 的 Secret 管理具有巨大优势,默认情况下,这些 Secret 在 etcd 中是不加密的。
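+
+如果你想验证这一点,可以用 AWS CLI 把这个参数读回来看看(下面的参数名只是示例,对应上文 `var.app_password_param_name` 的取值):
+
+```
+# 只有加上 --with-decryption,SecureString 的值才会被解密返回
+aws ssm get-parameter --name "app_password" --with-decryption
+```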
+
+然后我为 ECS 指定了另一个本地值映射,并将其作为 Secret 参数传递:
+
+```
+container_secrets = [
+ {
+ name = "PASSWORD"
+ valueFrom = "${var.app_password_param_name}"
+ },
+]
+
+module "container_definition_app" {
+ source = "cloudposse/ecs-container-definition/aws"
+ version = "v0.7.0"
+
+ container_name = "${var.name}"
+ container_image = "${var.image}"
+
+ container_cpu = "${var.ecs_task_cpu}"
+ container_memory = "${var.ecs_task_memory}"
+ container_memory_reservation = "${var.container_memory_reservation}"
+
+ port_mappings = [
+ {
+ containerPort = "${var.app_port}"
+ hostPort = "${var.app_port}"
+ protocol = "tcp"
+ },
+ ]
+
+ environment = "${local.container_environment_variables}"
+ secrets = "${local.container_secrets}"
+}
+```
+
+##### 出了个问题
+
+此刻,我重新部署了我的任务定义,并且非常困惑。为什么任务没有正确拉起?当新的任务定义(版本 8)可用时,我一直在控制台中看到正在运行的应用程序仍在使用先前的任务定义(版本 7)。解决这件事花费的时间比我预期的要长,但是在控制台的事件屏幕上,我注意到了 IAM 错误。我错过了一个步骤,容器无法从 AWS SSM 中读取 Secret 信息,因为它没有正确的 IAM 权限。这是我第一次真正对整个这件事情感到沮丧。从用户体验的角度来看,这里的反馈非常*糟糕*。如果我没有发觉的话,我会认为一切都很好,因为仍然有一个任务正在运行,我的应用程序仍然可以通过正确的 URL 访问 —— 只不过是旧的配置而已。
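+
+顺带一提,除了在控制台的事件屏幕里翻找,你也可以用 AWS CLI 查看这类服务事件(下面的集群名和服务名是假设的):
+
+```
+# 输出的 events 字段中会包含类似的部署失败/IAM 错误信息
+aws ecs describe-services --cluster my-cluster --services my-app
+```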
+
+在 Kubernetes 里,我会清楚地看到 pod 定义中的错误。Fargate 可以确保我的应用不会停止,这绝对是太棒了,但作为一名运维,我需要一些关于发生了什么的实际反馈。这真的不够好。我真的希望 Fargate 团队的人能够读到这篇文章,改善这种体验。
+
+### 就这样了
+
+到这里就结束了,我的应用程序正在运行,也符合我的所有标准。我也意识到还可以做一些改进,其中包括:
+
+* 定义一个 cloudwatch 日志组,这样我就可以正确地写日志了
+* 添加了一个 route53 托管区域,使整个事情从 DNS 角度更容易自动化
+* 修复并重新调整了 IAM 权限,这里太宽泛了
+
+但老实说,现在我想反思一下这段经历。我写了一个关于我的经历的 [推特会话][14],然后花了其余时间思考我在这里的真实感受。
+
+### 代价
+
+经过一夜的反思,我意识到无论你是使用 Fargate 还是 Kubernetes,这个过程都大致相同。最让我感到惊讶的是,尽管我经常听说 Fargate “更容易”,但我真的没有看到它比基于 Kubernetes 的平台好在哪里。现在,如果你正在自己构建 Kubernetes 集群,我绝对可以看到 Fargate 的价值 —— 自己管理节点和控制平面只是不必要的开销,但问题是 —— 基于 Kubernetes 的平台的大多数使用者并*没有*自己构建集群。如果你很幸运能够使用 GKE,你几乎不需要考虑集群的管理,用一条 `gcloud` 命令就可以运行起一个集群。我经常使用 Digital Ocean 的 Kubernetes 托管服务,我可以肯定地说它就像操作 Fargate 集群一样简单,实际上在某种程度上还更容易。
+
+必须定义一些基础设施来运行你的容器,就是此时此刻的代价。谷歌本周可能刚刚用他们的 [Google Cloud Run][15] 产品改变了游戏规则,但他们在这一领域远远领先于其他所有人。
+
+从这整个经历中,我可以肯定的说:*大规模运行容器仍然很难。*它需要思考,需要领域知识,需要运维和开发人员之间的协作。它还需要一个基础来构建 —— 任何基于 AWS 的操作都需要事先定义和运行一些基础架构。我对一些公司似乎渴望的 “NoOps” 概念非常感兴趣。我想如果你正在运行一个无状态应用程序,你可以把它全部放在一个 lambda 函数和一个 API 网关中,这可能不错,但我们是否真的适合在任何一种企业环境中这样做?我真的不这么认为。
+
+#### 公平比较
+
+令我印象深刻的另一个现实是,技术 A 和技术 B 之间的比较通常不太公平,我经常在 AWS 上看到这一点。这种实际情况往往与 Jeff Barr 博客文章截然不同。如果你是一家足够小的公司,你可以使用 AWS 控制台在 AWS 中部署你的应用程序并接受所有默认值,这绝对更容易。但是,我不想使用默认值,因为默认值几乎是不适用于生产环境的。一旦你开始剥离掉云服务商服务的层面,你就会开始意识到最终你仍然是在运行软件 —— 它仍然需要设计良好、部署良好、运行良好。我相信 AWS 和 Kubernetes 以及所有其他云服务商的增值服务使得它更容易运行、设计和操作,但它绝对不是免费的。
+
+#### Kubernetes 的争议
+
+最后就是:如果你将 Kubernetes 纯粹视为一个容器编排工具,你可能会喜欢 Fargate。然而,随着我对 Kubernetes 越来越熟悉,我开始意识到它作为一种技术的重要性 —— 不仅因为它是一个伟大的容器编排工具,还因为它的设计模式 —— 它是一个声明式的、API 驱动的平台。举一个在*整个* Fargate 过程中发生的简单例子:如果我删除了这里的某个东西,Fargate 并不一定会为我重新创建它。自动缩放很不错,不需要管理服务器和操作系统的补丁及更新也很棒,但我觉得因为无法使用 Kubernetes 的自我修复和 API 驱动模型而失去了很多。当然,Kubernetes 有学习曲线,但从这里的体验来看,Fargate 也是如此。
+
+### 总结
+
+尽管我在这个过程中遭遇了困惑,但我确实很喜欢这种体验。我仍然相信 Fargate 是一项出色的技术,AWS 团队对 ECS/Fargate 所做的工作确实非常出色。然而,我的观点是,这绝对不比 Kubernetes “更容易”,只是……难点不同。
+
+在生产环境中运行容器时出现的问题大致相同。如果你从这篇文章中有所收获,那应该是:*不管你选择哪种方式,都有运维开销*。不要以为你选了某样东西,你的世界就会变得更轻松。我个人的意见是:如果你有一个运维团队,而你的公司要为多个应用程序团队部署容器 —— 那就选择一种技术,并围绕它构建流程和工具,让它更容易用起来。
+
+人们说的一点肯定没错,用点技巧可以更容易地使用某种技术。在这个阶段,谈到 Fargate,下面的漫画这总结了我的感受:
+
+![][16]
+
+--------------------------------------------------------------------------------
+
+via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
+
+作者:[Lee Briggs][a]
+选题:[lujun9972][b]
+译者:[Bestony](https://github.com/Bestony)
+校对:[wxy](https://github.com/wxy), 临石(阿里云智能技术专家)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://leebriggs.co.uk/
+[b]: https://github.com/lujun9972
+[1]: https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
+[2]: https://aws.amazon.com/fargate/
+[3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
+[4]: https://i.imgur.com/YfMyXBdl.png
+[5]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
+[6]: https://github.com/cloudposse/terraform-aws-ecs-container-definition
+[7]: https://leebriggs.co.uk/blog/2018/05/08/kubernetes-config-mgmt.html
+[8]: https://github.com/kubernetes-incubator/external-dns
+[9]: https://github.com/jetstack/cert-manager
+[10]: https://github.com/terraform-aws-modules/terraform-aws-ecs
+[11]: https://kubernetes.io/docs/concepts/configuration/secret/
+[12]: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
+[13]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
+[14]: https://twitter.com/briggsl/status/1116870900719030272
+[15]: https://cloud.google.com/run/
+[16]: https://i.imgur.com/Bx7Q50Jl.jpg
diff --git a/sources/talk/20190208 Which programming languages should you learn.md b/sources/talk/20190208 Which programming languages should you learn.md
deleted file mode 100644
index 31cef16f03..0000000000
--- a/sources/talk/20190208 Which programming languages should you learn.md
+++ /dev/null
@@ -1,46 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Which programming languages should you learn?)
-[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
-[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
-
-Which programming languages should you learn?
-======
-Learning a new programming language is a great way to get ahead in your career. But which one?
-
-
-If you want to get started or get ahead in your programming career, learning a new language is a smart idea. But the huge number of languages in active use invites the question: Which programming language is the best one to know? To answer that, let's start with a simplifying question: What sort of programming do you want to do?
-
-If you want to do web programming on the client side, then the specialized languages HTML, CSS, and JavaScript—in one of its seemingly infinite dialects—are de rigueur.
-
-If you want to do web programming on the server side, the options include all of the familiar general-purpose languages: C++, Golang, Java, C#, Node.js, Perl, Python, Ruby, and so on. As a matter of course, server-side programs interact with datastores, such as relational and other databases, which means query languages such as SQL may come into play.
-
-If you're writing native apps for mobile devices, knowing the target platform is important. For Apple devices, Swift has supplanted Objective C as the language of choice. For Android devices, Java (with dedicated libraries and toolsets) remains the dominant language. There are special languages such as Xamarin, used with C#, that can generate platform-specific code for Apple, Android, and Windows devices.
-
-What about general-purpose languages? There are various choices within the usual pigeonholes. Among the dynamic or scripting languages (e.g., Perl, Python, and Ruby), there are newer offerings such as Node.js. Java and C#, which are more alike than their fans like to admit, remain the dominant statically compiled languages targeted at a virtual machine (the JVM and CLR, respectively). Among languages that compile into native executables, C++ is still in the mix, along with later arrivals such as Golang and Rust. General-purpose functional languages abound (e.g., Clojure, Haskell, Erlang, F#, Lisp, and Scala), often with passionately devoted communities. It's worth noting that object-oriented languages such as Java and C# have added functional constructs (in particular, lambdas), and the dynamic languages have had functional constructs from the start.
-
-Let me end with a pitch for C, which is a small, elegant, and extensible language not to be confused with C++. Modern operating systems are written mostly in C, with the rest in assembly language. The standard libraries on any platform are likewise mostly in C. For example, any program that issues the Hello, world! greeting does so through a call to the C library function named **write**.
-
-C serves as a portable assembly language, exposing details about the underlying system that other high-level languages deliberately hide. To understand C is thus to gain a better grasp of how programs contend for the shared system resources (processors, memory, and I/O devices) required for execution. C is at once high-level and close-to-the-metal, so unrivaled in performance—except, of course, for assembly language. Finally, C is the lingua franca among programming languages, and almost every general-purpose language supports C calls in one form or another.
-
-For a modern introduction to C, consider my book [C Programming: Introducing Portable Assembler][1]. No matter how you go about it, learn C and you'll learn a lot more than just another programming language.
-
-What programming languages do you think are important to know? Do you agree or disagree with these recommendations? Let us know in the comments!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
-
-作者:[Marty Kalin][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mkalindepauledu
-[b]: https://github.com/lujun9972
-[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
diff --git a/sources/talk/20190314 A Look Back at the History of Firefox.md b/sources/talk/20190314 A Look Back at the History of Firefox.md
deleted file mode 100644
index f4118412b4..0000000000
--- a/sources/talk/20190314 A Look Back at the History of Firefox.md
+++ /dev/null
@@ -1,115 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (A Look Back at the History of Firefox)
-[#]: via: (https://itsfoss.com/history-of-firefox)
-[#]: author: (John Paul https://itsfoss.com/author/john/)
-
-A Look Back at the History of Firefox
-======
-
-The Firefox browser has been a mainstay of the open-source community for a long time. For many years it was the default web browser on (almost) all Linux distros and the lone obstacle to Microsoft’s total dominance of the internet. This browser has roots that go back all the way to the very early days of the internet. Since this week marks the 30th anniversary of the internet, there is no better time to talk about how Firefox became the browser we all know and love.
-
-### Early Roots
-
-In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor’s degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications][2]. During that time [Sir Tim Berners-Lee][3] released an early form of the web standards that we know today. Marc [was introduced][4] to a very primitive web browser named [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy to install browser for Unix named [NCSA Mosaic][6]). The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic became very popular because it was easier to use than other browsing software.
-
-In 1994, Marc graduated and moved to California. He was approached by Jim Clark, who had made his money selling computer hardware and software. Clark had used Mosaic and saw the financial possibilities of the internet. Clark recruited Marc and Eric to start an internet software company. The company was originally named Mosaic Communications Corporation, however, the University of Illinois did not like [their use of the name Mosaic][7]. As a result, the company name was changed to Netscape Communications Corporation.
-
-The company’s first project was an online gaming network for the Nintendo 64, but that fell through. The first product they released was a web browser named Mosaic Netscape 0.9, subsequently renamed Netscape Navigator. Internally, the browser project was codenamed mozilla, which stood for “Mosaic killer”. An employee created a cartoon of a [Godzilla like creature][8]. They wanted to take out the competition.
-
-![Early Firefox Mascot][9]Early Mozilla mascot at Netscape
-
-They succeed mightily. At the time, one of the biggest advantages that Netscape had was the fact that its browser looked and functioned the same on every operating system. Netscape described this as giving everyone a level playing field.
-
-As usage of Netscape Navigator increase, the market share of NCSA Mosaic cratered. In 1995, Netscape went public. [On the first day][10], the stock started at $28, jumped to $75 and ended the day at $58. Netscape was without any rivals.
-
-But that didn’t last for long. In the summer of 1994, Microsoft released Internet Explorer 1.0, which was based on Spyglass Mosaic which was based on NCSA Mosaic. The [browser wars][11] had begun.
-
-Over the next few years, Netscape and Microsoft competed for dominance of the internet. Each added features to compete with the other. Unfortunately, Internet Explorer had an advantage because it came bundled with Windows. On top of that, Microsoft had more programmers and money to throw at the problem. Toward the end of 1997, Netscape started to run into financial problems.
-
-### Going Open Source
-
-![Mozilla Firefox][12]
-
-In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [goal][13] was to “harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into future versions of Netscape’s software. This strategy is designed to accelerate development and free distribution by Netscape of future high-quality versions of Netscape Communicator to business customers and individuals.”
-
-The project was to be shepherded by the newly created Mozilla Organization. However, the code from Netscape Communicator 4.0 proved to be very difficult to work with due to its size and complexity. On top of that, several parts could not be open sourced because of licensing agreements with third parties. In the end, it was decided to rewrite the browser from scratch using the new [Gecko][14]) rendering engine.
-
-In November 1998, Netscape was acquired by AOL for [stock swap valued at $4.2 billion][15].
-
-Starting from scratch was a major undertaking. Mozilla Firefox (initially nicknamed Phoenix) was created in June 2002 and it worked on multiple operating systems, such as Linux, Mac OS, Microsoft Windows, and Solaris.
-
-The following year, AOL announced that they would be shutting down browser development. The Mozilla Foundation was subsequently created to handle the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.
-
-In March 2003, Mozilla [announced pl][16][a][16][ns][16] to separate the suite into stand-alone applications because of creeping software bloat. The stand-alone browser was initially named Phoenix. However, the name was changed due to a trademark dispute with the BIOS manufacturer Phoenix Technologies, which had a BIOS-based browser named trademark dispute with the BIOS manufacturer Phoenix Technologies. Phoenix was renamed Firebird only to run afoul of the Firebird database server people. The browser was once more renamed to the Firefox that we all know.
-
-At the time, [Mozilla said][17], “We’ve learned a lot about choosing names in the past year (more than we would have liked to). We have been very careful in researching the name to ensure that we will not have any problems down the road. We have begun the process of registering our new trademark with the US Patent and Trademark office.”
-
-![Mozilla Firefox 1.0][18]Firefox 1.0 : [Picture Credit][19]
-
-The first official release of Firefox was [0.8][20] on February 8, 2004. 1.0 followed on November 9, 2004. Version 2.0 and 3.0 followed in October 2006 and June 2008 respectively. Each major release brought with it many new features and improvements. In many respects, Firefox pulled ahead of Internet Explorer in terms of features and technology, but IE still had more users.
-
-That changed with the release of Google’s Chrome browser. In the months before the release of Chrome in September 2008, Firefox accounted for 30% of all [browser usage][21] and IE had over 60%. According to StatCounter’s [January 2019 report][22], Firefox accounts for less than 10% of all browser usage, while Chrome has over 70%.
-
-Fun Fact
-
-Contrary to popular belief, the logo of Firefox doesn’t feature a fox. It’s actually a [Red Panda][23]. In Chinese, “fire fox” is another name for the red panda.
-
-### The Future
-
-As noted above, Firefox currently has the lowest market share in its recent history. There was a time when a bunch of browsers were based on Firefox, such as the early version of the [Flock browser][24]). Now most browsers are based on Google technology, such as Opera and Vivaldi. Even Microsoft is giving up on browser development and [joining the Chromium band wagon][25].
-
-This might seem like quite a downer after the heights of the early Netscape years. But don’t forget what Firefox has accomplished. A group of developers from around the world have created the second most used browser in the world. They clawed 30% market share away from Microsoft’s monopoly, they can do it again. After all, they have us, the open source community, behind them.
-
-The fight against the monopoly is one of the several reasons [why I use Firefox][26]. Mozilla regained some of its lost market-share with the revamped release of [Firefox Quantum][27] and I believe that it will continue the upward path.
-
-What event from Linux and open source history would you like us to write about next? Please let us know in the comments below.
-
-If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][28].
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/history-of-firefox
-
-作者:[John Paul][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/john/
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
-[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
-[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
-[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
-[5]: http://viola.org/
-[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser
-[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
-[8]: http://www.davetitus.com/mozilla/
-[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
-[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
-[11]: https://en.wikipedia.org/wiki/Browser_wars
-[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
-[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
-[14]: https://en.wikipedia.org/wiki/Gecko_(software)
-[15]: http://news.cnet.com/2100-1023-218360.html
-[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
-[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
-[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
-[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
-[20]: https://en.wikipedia.org/wiki/Firefox_version_history
-[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
-[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
-[23]: https://en.wikipedia.org/wiki/Red_panda
-[24]: https://en.wikipedia.org/wiki/Flock_(web_browser
-[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
-[26]: https://itsfoss.com/why-firefox/
-[27]: https://itsfoss.com/firefox-quantum-ubuntu/
-[28]: http://reddit.com/r/linuxusersgroup
-[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1
diff --git a/sources/talk/20190327 Why DevOps is the most important tech strategy today.md b/sources/talk/20190327 Why DevOps is the most important tech strategy today.md
deleted file mode 100644
index 288977e789..0000000000
--- a/sources/talk/20190327 Why DevOps is the most important tech strategy today.md
+++ /dev/null
@@ -1,130 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Why DevOps is the most important tech strategy today)
-[#]: via: (https://opensource.com/article/19/3/devops-most-important-tech-strategy)
-[#]: author: (Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht)
-
-Why DevOps is the most important tech strategy today
-======
-Clearing up some of the confusion about DevOps.
-![CICD with gears][1]
-
-Many people first learn about [DevOps][2] when they see one of its outcomes and ask how it happened. It's not necessary to understand why something is part of DevOps to implement it, but knowing that—and why a DevOps strategy is important—can mean the difference between being a leader or a follower in an industry.
-
-Maybe you've heard some the incredible outcomes attributed to DevOps, such as production environments that are so resilient they can handle thousands of releases per day while a "[Chaos Monkey][3]" is running around randomly unplugging things. This is impressive, but on its own, it's a weak business case, essentially burdened with [proving a negative][4]: The DevOps environment is resilient because a serious failure hasn't been observed… yet.
-
-There is a lot of confusion about DevOps and many people are still trying to make sense of it. Here's an example from someone in my LinkedIn feed:
-
-> Recently attended few #DevOps sessions where some speakers seemed to suggest #Agile is a subset of DevOps. Somehow, my understanding was just the opposite.
->
-> Would like to hear your thoughts. What do you think is the relationship between Agile and DevOps?
->
-> 1. DevOps is a subset of Agile
-> 2. Agile is a subset of DevOps
-> 3. DevOps is an extension of Agile, starts where Agile ends
-> 4. DevOps is the new version of Agile
->
-
-
-Tech industry professionals have been weighing in on the LinkedIn post with a wide range of answers. How would you respond?
-
-### DevOps' roots in lean and agile
-
-DevOps makes a lot more sense if we start with the strategies of Henry Ford and the Toyota Production System's refinements of Ford's model. Within this history is the birthplace of lean manufacturing, which has been well studied. In [_Lean Thinking_][5], James P. Womack and Daniel T. Jones distill it into five principles:
-
- 1. Specify the value desired by the customer
- 2. Identify the value stream for each product providing that value and challenge all of the wasted steps currently necessary to provide it
- 3. Make the product flow continuously through the remaining value-added steps
- 4. Introduce pull between all steps where continuous flow is possible
- 5. Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
-
-
-
-Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is magnitudes faster than batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
-
-This is a huge difference, but not everything in life is as simple and predictable as the penny in the Penny Game. This is where agile comes in. We certainly see lean principles on high-performing agile teams, but these teams need more than lean to do what they do.
-
-To be able to handle the unpredictability and variance of typical software development tasks, agile methodology focuses on awareness, deliberation, decision, and action to adjust course in the face of a constantly changing reality. For example, agile frameworks (like scrum) increase awareness with ceremonies like the daily standup and the sprint review. If the scrum team becomes aware of a new reality, the framework allows and encourages them to adjust course if necessary.
-
-For teams to make these types of decisions, they need to be self-organizing in a high-trust environment. High-performing agile teams working this way achieve a fast flow of value while continuously adjusting course, removing the waste of going in the wrong direction.
-
-### Optimal batch size
-
-To understand the power of DevOps in software development, it helps to understand the economics of batch size. Consider the following U-curve optimization illustration from Donald Reinertsen's _[Principles of Product Development Flow][8]:_
-
-![U-curve optimization illustration of optimal batch size][9]
-
-This can be explained with an analogy about grocery shopping. Suppose you need to buy some eggs and you live 30 minutes from the store. Buying one egg (far left on the illustration) at a time would mean a 30-minute trip each time. This is your _transaction cost_. The _holding cost_ might represent the eggs spoiling and taking up space in your refrigerator over time. The _total cost_ is the _transaction cost_ plus your _holding cost_. This U-curve explains why, for most people, buying a dozen eggs at a time is their _optimal batch size_. If you lived next door to the store, it'd cost you next to nothing to walk there, and you'd probably buy a smaller carton each time to save room in your refrigerator and enjoy fresher eggs.
-
-This U-curve optimization illustration can shed some light on why productivity increases significantly in successful agile transformations. Consider the effect of agile transformation on decision making in an organization. In traditional hierarchical organizations, decision-making authority is centralized. This leads to larger decisions made less frequently by fewer people. An agile methodology will effectively reduce an organization's transaction cost for making decisions by decentralizing the decisions to where the awareness and information is the best known: across the high-trust, self-organizing agile teams.
-
-The following animation shows how reducing transaction cost shifts the optimal batch size to the left. The value to an organization of making faster decisions more frequently can't be overstated.
-
-![U-curve optimization illustration][10]
-
-### Where does DevOps fit in?
-
-Automation is one of the things DevOps is most known for. The previous illustration shows the value of automation in great detail. Through automation, we reduce our transaction costs to nearly zero, essentially getting our testing and deployments for free. This lets us take advantage of smaller and smaller batch sizes of work. Smaller batches of work are easier to understand, commit to, test, review, and know when they are done. These smaller batch sizes also contain less variance and risk, making them easier to deploy and, if something goes wrong, to troubleshoot and recover from. With automation combined with a solid agile practice, we can get our feature development very close to single piece flow, providing value to customers quickly and continuously.
-
-More traditionally, DevOps is understood as a way to knock down the walls of confusion between the dev and ops teams. In this model, development teams develop new features, while operations teams keep the system stable and running smoothly. Friction occurs because new features from development introduce change into the system, increasing the risk of an outage, which the operations team doesn't feel responsible for—but has to deal with anyway. DevOps is not just about getting people to work together; it's more about making more frequent changes safely in a complex environment.
-
-We can look to [Ron Westrum][11] for research about achieving safety in complex organizations. In researching why some organizations are safer than others, he found that an organization's culture is predictive of its safety. He identified three types of culture: Pathological, Bureaucratic, and Generative. He found that the Pathological culture was predictive of less safety and the Generative culture was predictive of more safety (e.g., far fewer plane crashes or accidental hospital deaths in his main areas of research).
-
-![Three types of culture identified by Ron Westrum][12]
-
-Effective DevOps teams achieve a Generative culture with lean and agile practices, showing that speed and safety are complementary, or two sides of the same coin. By shrinking the optimal batch sizes of decisions and features until they are very small, DevOps achieves a faster flow of information and value while removing waste and reducing risk.
-
-In line with Westrum's research, change can happen easily with safety and reliability improving at the same time. When an agile DevOps team is trusted to make its own decisions, we get the tools and techniques DevOps is most known for today: automation and continuous delivery. Through this automation, transaction costs are reduced further than ever, and a near single piece lean flow is achieved, creating the potential for thousands of decisions and releases per day, as we've seen happen in high-performing DevOps organizations.
-
-### Flow, feedback, learning
-
-DevOps doesn't stop there. We've mainly been talking about DevOps achieving a revolutionary flow, but lean and agile practices are further amplified through similar efforts that achieve faster feedback loops and faster learning. In the [_DevOps Handbook_][13], the authors explain in detail how, beyond its fast flow, DevOps achieves telemetry across its entire value stream for fast and continuous feedback. Further, leveraging the [kaizen][14] bursts of lean and the [retrospectives][15] of scrum, high-performing DevOps teams will continuously drive learning and continuous improvement deep into the foundations of their organizations, achieving a lean manufacturing revolution in the software product development industry.
-
-### Start with a DevOps assessment
-
-The first step in leveraging DevOps is to conduct an assessment, either after much study or with the help of a DevOps consultant and coach, across a suite of dimensions consistently found in high-performing DevOps teams. The assessment should identify weak or non-existent team norms that need improvement. Evaluate the assessment's results to find quick wins—focus areas with high chances for success that will produce high-impact improvement. Quick wins are important for gaining the momentum needed to tackle more challenging areas. The teams should generate ideas that can be tried quickly and start to move the needle on the DevOps transformation.
-
-After some time, the team should reassess on the same dimensions to measure improvements and identify new high-impact focus areas, again with fresh ideas from the team. A good coach will consult, train, mentor, and support as needed until the team owns its own continuous improvement and achieves near consistency on all dimensions by continually reassessing, experimenting, and learning.
-
-In the [second part][16] of this article, we'll look at results from a DevOps survey in the Drupal community and see where the quick wins are most likely to be found.
-
-* * *
-
-_Rob Bayliss and Kelly Albrecht will present [DevOps: Why, How, and What][17] and host a follow-up [Birds of a Feather discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
-
-作者:[Kelly Albrecht, Willy-Peter Schaub][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
-[2]: https://opensource.com/resources/devops
-[3]: https://github.com/Netflix/chaosmonkey
-[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
-[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
-[6]: https://youtu.be/5t6GhcvKB8o?t=54
-[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
-[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
-[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif (U-curve optimization illustration of optimal batch size)
-[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif (U-curve optimization illustration)
-[11]: https://en.wikipedia.org/wiki/Ron_Westrum
-[12]: https://opensource.com/sites/default/files/uploads/information_flow.png (Three types of culture identified by Ron Westrum)
-[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
-[14]: https://en.wikipedia.org/wiki/Kaizen
-[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
-[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
-[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
-[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
-[19]: https://events.drupal.org/seattle2019
diff --git a/sources/talk/20190410 Anti-lasers could give us perfect antennas, greater data capacity.md b/sources/talk/20190410 Anti-lasers could give us perfect antennas, greater data capacity.md
new file mode 100644
index 0000000000..2d2f4d5c05
--- /dev/null
+++ b/sources/talk/20190410 Anti-lasers could give us perfect antennas, greater data capacity.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Anti-lasers could give us perfect antennas, greater data capacity)
+[#]: via: (https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Anti-lasers could give us perfect antennas, greater data capacity
+======
+Anti-lasers get close to providing a 100% efficient signal channel for data, say engineers.
+![Guirong Hao / Valery Brozhinsky / Getty Images][1]
+
+Playing laser light backwards could adjust data transmission signals so that they perfectly match receiving antennas. The fine-tuning of signals like this, not achieved with such detail before, could create more capacity for ever-increasing data demand.
+
+"Imagine, for example, that you could adjust a cell phone signal exactly the right way, so that it is perfectly absorbed by the antenna in your phone," says Stefan Rotter of the Institute for Theoretical Physics of Technische Universität Wien (TU Wien) in a [press release][2].
+
+Rotter is talking about “Random Anti-Laser,” a project he has been a part of. The idea behind it is that if one could time-reverse a laser, then the laser (right now considered the best light source ever built) becomes the best available light absorber. Perfect absorption of a signal wave would mean that all of the data-carrying energy is absorbed by the receiving device, thus it becomes 100% efficient.
+
+**[ Related:[What is 5G wireless? How it will change networking as we know it?][3] ]**
+
+“The easiest way to think about this process is in terms of a movie showing a conventional laser sending out laser light, which is played backwards,” the TU Wien article says. The anti-laser is the exact opposite of the laser — instead of sending specific colors perfectly when energy is applied, it receives specific colors perfectly.
+
+Counter-intuitively, it’s the random scattering of light in all directions that’s behind the engineering. The Vienna, Austria, university group, however, performs precise calculations on those scattered, split signals, which lets the researchers harness the light.
+
+### How the anti-laser technology works
+
+The microwave-based experimental device the researchers have built in the lab to prove the idea doesn’t just potentially apply to cell phones; wireless internet of things (IoT) devices would also get more data throughput. How it works: The device consists of an antenna-containing chamber encompassed by cylinders, all arranged haphazardly, the researchers explain. The cylinders distribute an elaborate, arbitrary wave pattern “similar to [throwing] stones in a puddle of water, at which water waves are deflected.”
+
+Measurements then take place to identify exactly how the signals return. The team involved, which also includes collaborators from the University of Nice, France, then “characterize the random structure and calculate the wave front that is completely swallowed by the central antenna at the right absorption strength.” Ninety-nine point eight percent of the signal is absorbed, making it virtually perfect. Data throughput, range, and other variables thus improve.
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**
+
+Achieving perfect antennas has been pretty much only theoretically possible for engineers to date. Reflected energy (RF sent back into the transmitter by antenna inefficiencies) has always been an issue in general. Reflections from surfaces have always been a problem, too.
+
+“Think about a mobile phone signal that is reflected several times before it reaches your cell phone,” Rotter says. It’s not easy to get the tuning right — as the antennas’ physical locations move, reflected surfaces become different.
+
+### Scattering lasers
+
+Scattering, similar to that used in this project, is becoming more important in communications overall. “Waves that are being scattered in a complex way are really all around us,” the group says.
+
+An example is random lasers (on which the group’s anti-laser is based) that, unlike traditional lasers, do not use reflective surfaces but trap scattered light and then “emit a very complicated, system-specific laser field when supplied with energy.” The anti-random-laser developed by Rotter and his group simply reverses that in time:
+
+“Instead of a light source that emits a specific wave depending on its random inner structure, it is also possible to build the perfect absorber”: the anti-random-laser.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/data_cubes_transformation_conversion_by_guirong_hao_gettyimages-1062387214_plus_abstract_binary_by_valerybrozhinsky_gettyimages-865457032_3x2_2400x1600-100790211-large.jpg
+[2]: https://www.tuwien.ac.at/en/news/news_detail/article/126574/
+[3]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md b/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md
new file mode 100644
index 0000000000..5603086a53
--- /dev/null
+++ b/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md
@@ -0,0 +1,60 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud)
+[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Google partners with Intel, HPE and Lenovo for hybrid cloud
+======
+Google boosted its on-premises and cloud connections with Kubernetes and serverless computing.
+![Ilze Lucero \(CC0\)][1]
+
+Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google’s Kubernetes container technology.
+
+At Google’s Next ’19 show this week, Intel and Google said they will collaborate on Google's Anthos, a new reference design based on the second-generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack designed to deliver increased workload portability between public and private cloud environments.
+
+**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can’t be far behind.
+
+Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments – either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.
+
+### What is Google Anthos?
+
+Google formally introduced [Anthos][4] at this year’s show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.
+
+Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VM) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.
+
+Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.
+
+### Serverless environments
+
+Google isn’t stopping with Kubernetes containers; it’s also pushing ahead with serverless environments. [Cloud Run][5] is Google’s implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just aren’t using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.
+
+Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.
+
+Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
+[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world
+[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md b/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md
new file mode 100644
index 0000000000..76f908c68b
--- /dev/null
+++ b/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md
@@ -0,0 +1,60 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems)
+[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+HPE and Nutanix partner for hyperconverged private cloud systems
+======
+Both companies will sell HPE ProLiant appliances with Nutanix software but to different markets.
+![Hewlett Packard Enterprise][1]
+
+Hewlett Packard Enterprise (HPE) has partnered with Nutanix to offer Nutanix’s hyperconverged infrastructure (HCI) software available as a managed private cloud service and on HPE-branded appliances.
+
+As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPE’s GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.
+
+If you want an appliance model where you buy the hardware outright, like in the traditional sense of server sales, you would get the same ProLiant through Nutanix.
+
+**[ Read also:[What is hybrid cloud computing?][2] and [Multicloud mania: what to know][3] ]**
+
+As it is, HPE GreenLake offers multiple cloud offerings to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanix’s free Acropolis hypervisor to its offerings.
+
+“Customers get to choose an alternative to VMware with this,” said Pradeep Kumar, senior vice president and general manager of HPE’s Pointnext consultancy. “They like the Acropolis license model, since it’s license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think it’s our job to offer them both and they can pick and choose.”
+
+Kumar added that the whole Nutanix stack costs 15 to 18% less with Acropolis than a VMware-powered system, since customers save on the hypervisor.
+
+The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in customers’ data centers or co-location facilities. The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.
+
+### HPE GreenLake's private cloud services promise to significantly reduce costs
+
+HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPE’s business in the next few years.
+
+GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from a 30% Capex savings due to eliminating the need for overprovisioning and a 90% reduction in support and professional services costs.
+
+By shifting to an IT as a Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.
+
+The two new offerings from the partnership – HPE GreenLake’s private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software – are expected to be available during the 2019 third quarter, the companies said.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
+[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
+[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md b/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md
new file mode 100644
index 0000000000..b5d5c21ee6
--- /dev/null
+++ b/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software)
+[#]: via: (https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software
+======
+VPN packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies
+![Getty Images][1]
+
+The Department of Homeland Security has issued a warning that some [VPN][2] packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies, allowing nefarious actors an opening to invade and take control over an end user’s system.
+
+The DHS’s Cybersecurity and Infrastructure Security Agency (CISA) [warning][3] comes on the heels of a notice from Carnegie Mellon's CERT that multiple VPN applications store the authentication and/or session cookies insecurely in memory and/or log files.
+
+**[Also see:[What to consider when deploying a next generation firewall][4]. Get regularly scheduled insights by [signing up for Network World newsletters][5]]**
+
+“If an attacker has persistent access to a VPN user's endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” [CERT wrote][6]. “An attacker would then have access to the same applications that the user does through their VPN session.”
+
+According to the CERT warning, the following products and versions store the cookie insecurely in log files:
+
+ * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS ([CVE-2019-1573][7])
+ * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
+
+
+
+The following products and versions store the cookie insecurely in memory:
+
+ * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS.
+ * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
+ * Cisco AnyConnect 4.7.x and prior.
+
+
+
+CERT says that Palo Alto Networks GlobalProtect version 4.1.1 [patches][8] this vulnerability.
+
+In the CERT warning, F5 stated it has been aware of the insecure memory storage since 2013, and it has not yet been patched. More information can be found [here][9]. F5 also stated it has been aware of the insecure log storage since 2017 and fixed it in versions 12.1.3 and 13.1.0 and onward. More information can be found [here][10].
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11] ]**
+
+CERT said it is unaware of any patches at the time of publishing for Cisco AnyConnect and Pulse Secure Connect Secure.
+
+CERT credited the [National Defense ISAC Remote Access Working Group][12] for reporting the vulnerability.
+
+Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/broken-chain_metal_link_breach_security-100777433-large.jpg
+[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
+[3]: https://www.us-cert.gov/ncas/current-activity/2019/04/12/Vulnerability-Multiple-VPN-Applications
+[4]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+[5]: https://www.networkworld.com/newsletters/signup.html
+[6]: https://www.kb.cert.org/vuls/id/192371/
+[7]: https://nvd.nist.gov/vuln/detail/CVE-2019-1573
+[8]: https://securityadvisories.paloaltonetworks.com/Home/Detail/146
+[9]: https://support.f5.com/csp/article/K14969
+[10]: https://support.f5.com/csp/article/K45432295
+[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[12]: https://ndisac.org/workinggroups/
+[13]: https://www.facebook.com/NetworkWorld/
+[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md b/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
new file mode 100644
index 0000000000..e893c86d53
--- /dev/null
+++ b/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Nyansa’s Voyance expands to the IoT)
+[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Nyansa’s Voyance expands to the IoT
+======
+
+![Brandon Mowinkel \(CC0\)][1]
+
+Nyansa announced today that their flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.
+
+Voyance – a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior – has been around for two years now, and Nyansa says that it’s being used to observe a total of 25 million client devices operating across roughly 200 customer networks.
+
+**More on IoT:**
+
+ * [Most powerful Internet of Things companies][3]
+ * [10 Hot IoT startups to watch][4]
+ * [The 6 ways to make money in IoT][5]
+ * [Blockchain, service-centric networking key to IoT success][7]
+ * [Getting grounded in IoT networking and security][8]
+ * [Building IoT-ready networks must become a priority][9]
+ * [What is the Industrial IoT? [And why the stakes are so high]][10]
+
+
+
+It’s a software-only product (available either via public SaaS or private cloud) that works by scanning a customer’s network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.
+
+The process doesn’t happen instantaneously, particularly the creation of the baseline, but it’s designed to be minimally invasive to existing network management frameworks and easy to implement.
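+
+Nyansa hasn’t published its algorithms, but the general shape of baseline-and-flag logic can be sketched in a few lines. The device names, traffic figures, and the simple median test below are illustrative assumptions, not Voyance’s method:
+
+```
+from statistics import median
+
+def flag_outliers(traffic, factor=10.0):
+    """Flag devices sending more than `factor` times the median volume
+    for their device class -- a crude stand-in for a learned baseline."""
+    baseline = median(traffic.values())
+    return [dev for dev, sent in traffic.items() if sent > factor * baseline]
+
+# Hypothetical infusion pumps; one is sending far more data than its peers.
+pumps = {"pump-01": 1.2e6, "pump-02": 1.1e6, "pump-03": 1.3e6,
+         "pump-04": 1.2e6, "pump-05": 9.8e7}
+print(flag_outliers(pumps))  # ['pump-05']
+```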
+
+Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer – Baptist Health, a Florida-based healthcare company that runs four hospitals and several other clinics and practices – said that Voyance IoT has offered a new level of visibility into the business’ complex array of connected diagnostic and treatment machines.
+
+“In the past we didn’t have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now we’re able to do that,” said CISO Thad Phillips.
+
+While spiraling network complexity isn’t an issue confined to the IoT, there’s a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, especially given that many such devices aren’t particularly secure.
+
+“They’re not manufactured by networking vendors or security vendors, so from a performance standpoint, they have a lot of quirks … and on the security side, that’s sort of a big problem there as well,” said Anand Srinivas, Nyansa’s co-founder and CTO.
+
+Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the system’s designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that they’re functioning correctly.
+
+“Anything based on machine learning is going to take a while to make sure it understands your environment and you might have to retrain it,” he said. “There’s always going to be more and more things connecting to IP networks, and it’s just going to be a question of building up a database.”
+
+Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
+[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md b/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md
new file mode 100644
index 0000000000..8a44c56ca7
--- /dev/null
+++ b/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
+[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
+[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )
+
+Two tools to help visualize and simplify your data-driven operations
+======
+Amid the rising complexity of networks and the influx of data, service providers are striving to keep operational complexity under control. Blue Planet’s Kent McNeil explains how they can turn this challenge into a huge opportunity, and in fact reduce operational effort, by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
+![danleap][1]
+
+**Build the picture: Visualize your data**
+
+The Internet of Things (IoT), 5G, smart technology, virtual reality – all these applications guarantee one thing for communications service providers (CSPs): more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.
+
+Customer analytics has gone some way towards fulfilling this need for greater insights, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data, but also device and network data, adding complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data and, ultimately, to leverage it for a better customer experience.
+
+**See the upward graphical trend with graph databases**
+
+Traditional relational databases certainly have their use, but graph databases offer a novel perspective. The visual representation of the relationships between component parts enables CSPs to understand and analyze their characteristics, as well as to act in a timely manner when confronted with any discrepancies.
+
+Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored, but also processed and analyzed. They enable complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.
+
+The use of graph databases has started to become more mainstream, as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. Adoption of graph technology, whether in use or planned, is highest for network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.
+
+Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory in a clear, visualized picture – this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. This need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up. Operational complexity also grows as the influx of generated data produced by IoT will further challenge the scalability of existing operational environments.
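+
+To illustrate why graph-shaped inventory answers such questions naturally, here is a toy sketch using the networkx library rather than a real graph database; the nodes and dependency edges are invented for the example:
+
+```
+import networkx as nx
+
+# Inventory as a directed dependency graph: customer -> service -> port -> router.
+inventory = nx.DiGraph()
+inventory.add_edges_from([
+    ("customer:acme", "service:vpn-17"),
+    ("customer:zenith", "service:vpn-17"),
+    ("service:vpn-17", "port:ge-0/0/3"),
+    ("port:ge-0/0/3", "router:edge-2"),
+])
+
+# "Which customers does router edge-2 ultimately serve?" is a single
+# traversal rather than a chain of relational joins.
+affected = {n for n in nx.ancestors(inventory, "router:edge-2")
+            if n.startswith("customer:")}
+print(affected)  # {'customer:acme', 'customer:zenith'}
+```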
+
+**Change the tide of data with delta-based federation**
+
+New data, updated data, corrected data, deleted data – all needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSPs’ Operational Support Systems (OSS). Many still battle with updating data and rely on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively hone in on only what is required.
+
+Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.
+
+A delta-based federation model ensures that an accurate picture is presented, and only essential changes are conducted reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating, and minimizing the system load and risks. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.
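+
+A minimal sketch of the delta idea, assuming each OSS exposes its inventory as a mapping of record IDs to attributes (the records and the trivial validator below are illustrative):
+
+```
+def compute_delta(previous, current):
+    """Diff two inventory snapshots instead of re-uploading everything."""
+    return {
+        "added":   {k: v for k, v in current.items() if k not in previous},
+        "removed": [k for k in previous if k not in current],
+        "updated": {k: v for k, v in current.items()
+                    if k in previous and previous[k] != v},
+    }
+
+def apply_delta(store, delta, validate):
+    """Apply only changes that pass validation, retaining control."""
+    for k, v in {**delta["added"], **delta["updated"]}.items():
+        if validate(k, v):
+            store[k] = v
+    for k in delta["removed"]:
+        store.pop(k, None)
+
+old = {"port-1": {"speed": "1G"}, "port-2": {"speed": "10G"}}
+new = {"port-1": {"speed": "10G"}, "port-3": {"speed": "100G"}}
+apply_delta(old, compute_delta(old, new), validate=lambda k, v: True)
+print(old)  # only the three changed records were touched
+```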
+
+**Ride the wave**
+
+Gartner predicts 25 billion connected things on a global scale by 2021, and CSPs are already struggling with the current count, which Gartner estimates at 14.2 billion in 2019. Over the last decade, CSPs have faced significant rises in the levels of data consumed as demand for new services and higher bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. Firstly, CSPs have specialist, legacy OSS already in place which they can leverage as a basis for integrating data and implementing optimized systems. Secondly, they can utilize new technologies in database inventory management: graph databases and delta-based federation. The advantages of effectively integrating network data, visualizing it, and creating a clear map of the inter-connections enable CSPs to make critical decisions more quickly and accurately, resulting in more optimized and informed service operations.
+
+[Watch this video to learn more about Blue Planet][2]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all
+
+作者:[Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
+[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4
diff --git a/sources/talk/20190416 What SDN is and where it-s going.md b/sources/talk/20190416 What SDN is and where it-s going.md
new file mode 100644
index 0000000000..381c227b65
--- /dev/null
+++ b/sources/talk/20190416 What SDN is and where it-s going.md
@@ -0,0 +1,146 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What SDN is and where it’s going)
+[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+What SDN is and where it’s going
+======
+Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.
+![seedkin / Getty Images][1]
+
+Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
+
+SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.
+
+**[Learn more about the[difference between SDN and NFV][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**
+
+OpenFlow is only one of the first SDN canons, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches no matter which vendor made the underlying router or switch.
+
+In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.
+
+"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022_][5]*. "*Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."
+
+SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.
+
+IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
+
+In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.
+
+“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”
+
+### **What is SDN?**
+
+The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.
+
+IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent- or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and where needed, direct integration with cloud orchestration platforms.”
+
+The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.
+
+Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].
+
+“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”
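+
+Capuano’s description of a central entity with a whole-network view can be sketched in a few lines. The topology, rule format, and path computation below are illustrative, not any vendor’s controller:
+
+```
+import networkx as nx
+
+# Control plane: the controller holds the global topology.
+topology = nx.Graph()
+topology.add_edges_from([("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")])
+
+# Data plane: each switch only holds the match/action rules pushed to it.
+flow_tables = {sw: {} for sw in topology.nodes}
+
+def install_flow(dst, path):
+    """Turn one globally computed path into hop-by-hop forwarding rules."""
+    for here, nxt in zip(path, path[1:]):
+        flow_tables[here][dst] = f"forward to {nxt}"
+
+path = nx.shortest_path(topology, "s1", "s3")  # e.g. ['s1', 's2', 's3']
+install_flow("h2", path)
+print(flow_tables["s1"])  # {'h2': 'forward to s2'}
+```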
+
+### How does SDN support edge computing, IoT and remote access?
+
+A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments – each of these efforts can be made easier and more cost efficient via a properly configured SDN environment.
+
+Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.
+
+Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components don’t suffer latency.
+
+SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.
+
+“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.
+
+### **How does SDN support intent-based networking?**
+
+Intent-based networking ([IBN][9]) has a variety of components, but basically is about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to ensure what the business wants happens.
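+
+In code terms, the gap between declarative intent and imperative device commands might look like the following sketch; the intent schema, rule tuples, and reconciliation loop are invented for illustration:
+
+```
+# Operators state the outcome; the platform derives and enforces it.
+intent = {"segment": "iot", "may_reach": ["ntp-server"], "default": "deny"}
+
+def compile_intent(intent):
+    """Translate the declarative statement into imperative device rules."""
+    rules = [("allow", intent["segment"], dst) for dst in intent["may_reach"]]
+    rules.append((intent["default"], intent["segment"], "*"))
+    return rules
+
+def reconcile(intent, observed_rules):
+    """Closed loop: re-apply configuration whenever drift is detected."""
+    desired = compile_intent(intent)
+    if observed_rules != desired:
+        print("drift detected, re-applying:", desired)
+        return desired
+    return observed_rules
+
+state = reconcile(intent, observed_rules=[])  # drift: empty device config
+state = reconcile(intent, state)              # second pass: nothing to do
+```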
+
+“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”
+
+IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”
+
+For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.
+
+“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.
+
+While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.
+
+### **How does SDN help customers with security?**
+
+SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low security network that does not touch any sensitive information. Another segment could have much more fine-grained remote access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it.
+
+“For example, if a customer has an IoT group it doesn’t feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge and if you do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper than traditional gear.”
+
+The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco’s Nexus and ACI product lines.
+
+"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.
+
+A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.
+
+“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.
+
+### **What is SDN’s role in cloud computing?**
+
+SDN’s role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together enterprise data center and cloud worlds.
+
+Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.
+
+“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Cisco’s ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”
+
+Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDC’s Casemore.
+
+“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.
+
+### Where does SD-WAN fit in?
+
+The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.
+
+At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.
+
+SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
+
+[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
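+
+To make the idea concrete, here is a minimal sketch of that kind of policy using plain Linux policy routing (iproute2); real SD-WAN products drive rules like this from a central controller, and the prefix, gateway, and interface below are illustrative placeholders only:
+
+```
+# Steer traffic bound for a SaaS provider's published prefix (placeholder:
+# 198.51.100.0/24) out a secondary broadband uplink instead of the default WAN.
+ip route add 198.51.100.0/24 via 203.0.113.1 dev eth1 table 100
+ip rule add to 198.51.100.0/24 table 100
+```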
+
+"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business, said a Network World [article][18] earlier this year.
+
+It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.
+
+IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
+[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
+[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
+[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
+[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
+[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
+[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
+[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
+[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
+[14]: https://www.linkedin.com/in/boblaliberte90/
+[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
+[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
+[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
diff --git a/sources/tech/20160301 How To Set Password Policies In Linux.md b/sources/tech/20160301 How To Set Password Policies In Linux.md
deleted file mode 100644
index 8fb6f000f0..0000000000
--- a/sources/tech/20160301 How To Set Password Policies In Linux.md
+++ /dev/null
@@ -1,356 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (liujing97)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Set Password Policies In Linux)
-[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
-[#]: author: (SK https://www.ostechnix.com/author/sk/)
-
-How To Set Password Policies In Linux
-======
-
-
-Even though Linux is secure by design, there are still plenty of opportunities for a security breach, and one of them is weak passwords. As a system administrator, you must enforce strong passwords for your users, because most system breaches happen due to weak passwords. This tutorial describes how to set password policies such as **password length**, **password complexity**, and **password expiration period** in DEB based systems like Debian, Ubuntu, Linux Mint, and RPM based systems like RHEL, CentOS, Scientific Linux.
-
-### Set password length in DEB based systems
-
-By default, all Linux operating systems require a **minimum password length of 6 characters** for the users. I strongly advise you not to go below this limit. Also, don’t use your real name, parents’/spouse’s/kids’ names, or your date of birth as a password. Even a novice hacker can easily break such passwords in minutes. A good password must always contain more than 6 characters, including a number, a capital letter, and a special character.
-
-Usually, the password and authentication-related configuration files are stored under **/etc/pam.d/** in DEB based operating systems.
-
-To set the minimum password length, edit the **/etc/pam.d/common-password** file:
-
-```
-$ sudo nano /etc/pam.d/common-password
-```
-
-Find the following line:
-
-```
-password [success=2 default=ignore] pam_unix.so obscure sha512
-```
-
-![][2]
-
-And add the option **minlen=8** at the end. Here I set the minimum password length to **8**.
-
-```
-password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
-```
-
-
-
-Save and close the file. Now the users can’t use fewer than 8 characters for their passwords.
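-
-You can quickly confirm that the change took effect:
-
-```
-$ grep minlen /etc/pam.d/common-password
-password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
-```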
-
-### Set password length in RPM based systems
-
-**In RHEL, CentOS, Scientific Linux 7.x** systems, run the following command as the root user to set the password length.
-
-```
-# authconfig --passminlen=8 --update
-```
-
-To view the minimum password length, run:
-
-```
-# grep "^minlen" /etc/security/pwquality.conf
-```
-
-**Sample output:**
-
-```
-minlen = 8
-```
-
-**In RHEL, CentOS, Scientific Linux 6.x** systems, edit **/etc/pam.d/system-auth** file:
-
-```
-# nano /etc/pam.d/system-auth
-```
-
-Find the following line and add **minlen=8** at the end of it:
-
-```
-password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
-```
-
-
-
-As per the above setting, the minimum password length is **8** characters.
-
-### Set password complexity in DEB based systems
-
-This setting enforces how many character classes (upper-case letters, lower-case letters, digits, and other characters) should be in a password.
-
-First, install the password quality checking library using the following command:
-
-```
-$ sudo apt-get install libpam-pwquality
-```
-
-Then, edit **/etc/pam.d/common-password** file:
-
-```
-$ sudo nano /etc/pam.d/common-password
-```
-
-To require at least one **upper-case** letter in the password, add the option **ucredit=-1** at the end of the following line.
-
-```
-password requisite pam_pwquality.so retry=3 ucredit=-1
-```
-
-
-
-To require at least one **lower-case** letter in the password, use **lcredit=-1** as shown below. Similarly, **dcredit=-1** requires at least one digit.
-
-```
-password requisite pam_pwquality.so retry=3 lcredit=-1
-```
-
-To require at least one **other** (special) character in the password, use **ocredit=-1** as shown below.
-
-```
-password requisite pam_pwquality.so retry=3 ocredit=-1
-```
-
-As you can see in the above examples, we have required at least one upper-case letter, one lower-case letter, and one special character in the password. A more negative credit value (for example, **ucredit=-2**) raises the minimum count for that class, while a positive value grants length credit instead of imposing a requirement.
-
-You can also set the minimum number of character classes that the password must contain.
-
-The following example shows the minimum number of required classes of characters for the new password:
-
-```
-password requisite pam_pwquality.so retry=3 minclass=2
-```
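-
-Putting it all together, a single line that enforces a minimum length plus at least one character from each class (an illustrative combination, not a required configuration) would look something like this:
-
-```
-password requisite pam_pwquality.so retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
-```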
-
-### Set password complexity in RPM based systems
-
-**In RHEL 7.x / CentOS 7.x / Scientific Linux 7.x:**
-
-To set at least one lower-case letter in the password, run:
-
-```
-# authconfig --enablereqlower --update
-```
-
-To view the settings, run:
-
-```
-# grep "^lcredit" /etc/security/pwquality.conf
-```
-
-**Sample output:**
-
-```
-lcredit = -1
-```
-
-Similarly, set at least one upper-case letter in the password using the command:
-
-```
-# authconfig --enablerequpper --update
-```
-
-To view the settings:
-
-```
-# grep "^ucredit" /etc/security/pwquality.conf
-```
-
-**Sample output:**
-
-```
-ucredit = -1
-```
-
-To set at least one digit in the password, run:
-
-```
-# authconfig --enablereqdigit --update
-```
-
-To view the setting, run:
-
-```
-# grep "^dcredit" /etc/security/pwquality.conf
-```
-
-**Sample output:**
-
-```
-dcredit = -1
-```
-
-To set at least one other character in the password, run:
-
-```
-# authconfig --enablereqother --update
-```
-
-To view the setting, run:
-
-```
-# grep "^ocredit" /etc/security/pwquality.conf
-```
-
-**Sample output:**
-
-```
-ocredit = -1
-```
-
-In **RHEL 6.x / CentOS 6.x / Scientific Linux 6.x** systems, edit the **/etc/pam.d/system-auth** file as the root user:
-
-```
-# nano /etc/pam.d/system-auth
-```
-
-Find the following line and add the options shown at the end of it:
-
-```
-password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
-```
-
-As per the above setting, the password must have at least 8 characters. In addition, it must also contain at least one upper-case letter, one lower-case letter, one digit, and one other character.
-
-### Set password expiration period in DEB based systems
-
-Now, we are going to set the following policies:
-
- 1. Maximum number of days a password may be used.
- 2. Minimum number of days allowed between password changes.
- 3. Number of days warning given before a password expires.
-
-
-
-To set this policy, edit:
-
-```
-$ sudo nano /etc/login.defs
-```
-
-Set the values as per your requirement.
-
-```
-PASS_MAX_DAYS 100
-PASS_MIN_DAYS 0
-PASS_WARN_AGE 7
-```
-
-
-
-As you can see in the above example, the user must change the password at least once every **100** days, and a warning message will appear **7** days before the password expires.
-
-Be mindful that these settings only affect newly created users.
-
-To set the maximum number of days between password changes for an existing user, run the following command:
-
-```
-$ sudo chage -M <days> <username>
-```
-
-To set the minimum number of days between password changes, run:
-
-```
-$ sudo chage -m <days> <username>
-```
-
-To set the number of warning days before the password expires, run:
-
-```
-$ sudo chage -W <days> <username>
-```
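-
-For example, to make the user **sk** (the account used below) change the password every 90 days with a 7-day warning, with illustrative values:
-
-```
-$ sudo chage -M 90 -W 7 sk
-```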
-
-To display the password aging information for an existing user, run:
-
-```
-$ sudo chage -l sk
-```
-
-Here, **sk** is my username.
-
-**Sample output:**
-
-```
-Last password change : Feb 24, 2017
-Password expires : never
-Password inactive : never
-Account expires : never
-Minimum number of days between password change : 0
-Maximum number of days between password change : 99999
-Number of days of warning before password expires : 7
-```
-
-As you see in the above output, the password never expires.
-
-To change the password expiration settings of an existing user, run:
-
-```
-$ sudo chage -E 24/06/2018 -m 5 -M 90 -I 10 -W 10 sk
-```
-
-The above command will set the account of the user **‘sk’** to expire on **24/06/2018**. Also, the minimum number of days between password changes is set to **5** and the maximum to **90** days. The account will be locked automatically **10 days** after the password expires, and a warning message will be displayed **10 days** before the password expiration.
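-
-If you re-run `sudo chage -l sk` afterwards, you should see something like this (dates are illustrative, assuming the last password change was Feb 24, 2017 as in the earlier output):
-
-```
-Last password change : Feb 24, 2017
-Password expires : May 25, 2017
-Password inactive : Jun 04, 2017
-Account expires : Jun 24, 2018
-Minimum number of days between password change : 5
-Maximum number of days between password change : 90
-Number of days of warning before password expires : 10
-```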
-
-### Set password expiration period in RPM based systems
-
-This is the same as for DEB based systems: edit **/etc/login.defs** for new users, or use **chage** for existing ones, as shown above.
-
-### Forbid previously used passwords in DEB based systems
-
-You can prevent users from setting a password that they have already used in the past. To put it in layman’s terms, users can’t reuse an old password.
-
-To do so, edit the **/etc/pam.d/common-password** file:
-
-```
-$ sudo nano /etc/pam.d/common-password
-```
-
-Find the following line and add the option **remember=5** at the end:
-
-```
-password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
-```
-
-The above policy prevents users from reusing any of their last 5 passwords (the old password hashes are kept in **/etc/security/opasswd**).
-
-### Forbid previously used passwords in RPM based systems
-
-This is the same for both RHEL 6.x and RHEL 7.x and their clones such as CentOS and Scientific Linux.
-
-Edit the **/etc/pam.d/system-auth** file as the root user:
-
-```
-# vi /etc/pam.d/system-auth
-```
-
-Find the following line, and add **remember=5** at the end.
-
-```
-password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
-```
-
-You now know what password policies are in Linux, and how to set different password policies on DEB and RPM based systems.
-
-That’s all for now. I will be here soon with another interesting and useful article. Until then stay tuned with OSTechNix. If you find this tutorial helpful, share it on your social, professional networks and support us.
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[b]: https://github.com/lujun9972
-[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]: http://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_003-2-1.jpg
diff --git a/sources/tech/20161106 Myths about -dev-urandom.md b/sources/tech/20161106 Myths about -dev-urandom.md
deleted file mode 100644
index f88a439e31..0000000000
--- a/sources/tech/20161106 Myths about -dev-urandom.md
+++ /dev/null
@@ -1,290 +0,0 @@
-Moelf translating
-Myths about /dev/urandom
-======
-
-There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still they are false.
-
-I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
-
-### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
-
-Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
-
-### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
-
-Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
-
-### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
-
-Fact: /dev/random has a very nasty problem: it blocks.
-
-### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
-
-Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
-
-And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
-
-### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
-
-Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
-
-Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
-
-### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
-
-Fact: No, it really doesn't. It seems to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
-
-The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
-
-And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
-
-And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
-
-Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
-
-I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
-
-Namely, what is randomness, or better: what kind of randomness am I talking about here?
-
-And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments itself, benefitting many discussions in many venues.
-
-And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
-
-### You're saying I'm stupid!
-
-Emphatically no!
-
-Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and me almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
-
-This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
-
-### True randomness
-
-What does it mean for random numbers to be “truly random”?
-
-I don't want to dive into that issue too deep, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax about their favorite model of randomness, without paying attention to anyone else. Or even making himself understood.
-
-I believe that the “gold standard” for “true randomness” are quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
-
-Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
-
-Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
-
-### Two kinds of security, one that matters
-
-But let's assume you've obtained those “true” random numbers. What are you going to do with them?
-
-You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
-
-Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
-
-You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
-
-Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the One-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
-
-But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
-
-What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because it would take all the computers in the world taken together longer than the universe has existed, so far. That's the level of “insecurity” we're talking about here.
-
-Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
-
-So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
-
-Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
-
-So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
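-
-As a concrete illustration, pulling a 256-bit key out of /dev/urandom is a shell one-liner:
-
-```
-# Read 32 bytes (256 bits) from /dev/urandom and print them as hex:
-head -c 32 /dev/urandom | od -An -tx1
-```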
-
-### Structure of Linux's random number generator
-
-#### An incorrect view
-
-Chances are, your idea of the kernel's random number generator is something similar to this:
-
-![image: mythical structure of the kernel's random number generator][1]
-
-“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it's entering the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
-
-The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
-
-The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
-
-/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
-
-In this view, that seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
-
-Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
-
-Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
-
-#### A better simplification
-
-##### Before Linux 4.8
-
-![image: actual structure of the kernel's random number generator before Linux 4.8][2]
-
-This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
-
-See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
-
-Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
-
-The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
-
-In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I'm sleeping sound because I don't care about the entropy estimate.
-
-So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
-
-##### From Linux 4.8 onward
-
-In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
-
-![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
-
-We will see shortly why that is not a security problem.
-
-### What's wrong with blocking?
-
-Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
-
-That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
-
-I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
-
-But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
-
-Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
-
-In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
-
-It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
-
-But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
-
-### The CSPRNGs are alright
-
-But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
-
-It turns out, that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
-
-If anyone could gain an advantage over brute force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical fundament as CSPRNGs. So don't be afraid.
-
-### What about entropy running low?
-
-It doesn't matter.
-
-The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
-
-Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
-
-### Re-seeding
-
-But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
-
-djb [remarked][4] that more entropy actually can hurt, but his concern is about entropy sources an attacker can influence.
-
-First, ordinary re-seeding cannot hurt. If you've got more randomness just lying around, by all means use it!
-
-There is another reason why re-seeding the random number generator every now and then is important:
-
-Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
-
-You've totally lost now, because the attacker can compute all future outputs from this point on.
-
-But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. So that such a random number generator's design is kind of self-healing.
-
-But this is injecting entropy into the generator's internal state, it has nothing to do with blocking its output.
-
-### The random and urandom man page
-
-The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
-
-> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
-
-Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
-
-The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's just not rational to just take it as a given.
-
-And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
-
-Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
-
-If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
-
-The man page is silly, that's all. At least it tries to redeem itself with this:
-
-> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
-
-Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll be waiting a few seconds typing stuff on your keyboard, that's no problem.
-
-But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
-
-### Orthodoxy
-
-The view espoused here is certainly a tiny minority opinion on the Internet. But ask a real cryptographer: you'll be hard pressed to find someone who sympathizes much with the blocking /dev/random.
-
-Let's take [Daniel Bernstein][5], better known as djb:
-
-> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
->
-> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
->
-> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
->
-> For a cryptographer this doesn't even pass the laugh test.
-
-Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
-
-> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
->
-> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
-
-Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
-
-> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
-
-### Not everything is perfect
-
-/dev/urandom isn't perfect. The problems are twofold:
-
-On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
-
-Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.
-
-FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
-
-In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
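-
-You can still reach the syscall from a script, though. A minimal sketch, assuming Python 3.6 or newer, whose os.getrandom() wraps getrandom(2):
-
-```
-# Blocks until the kernel's pool is initialized, then never blocks again:
-python3 -c 'import os; print(os.getrandom(32).hex())'
-```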
-
-On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting. So you carry over the randomness from the last running of the machine.
-
-Obviously that isn't as good as if you let the shutdown scripts write out the seed, because in that case there would have been much more time to gather entropy. The advantage is obviously that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
-
-And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
-
-Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.
-
-But the solution still isn't using /dev/random everywhere, but properly seeding each and every virtual machine after cloning, restoring a checkpoint, whatever.
-
-### tldr;
-
-Just use /dev/urandom!
-
-
---------------------------------------------------------------------------------
-
-via: https://www.2uo.de/myths-about-urandom/
-
-作者:[Thomas Hühn][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.2uo.de/
-[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
-[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
-[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
-[4]:http://blog.cr.yp.to/20140205-entropy.html
-[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
-[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
-[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
diff --git a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
index 3c61f6dd8f..20c14074c6 100644
--- a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
+++ b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
@@ -1,6 +1,3 @@
-ezio is translating
-
-
In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================
diff --git a/sources/tech/20171117 5 open source fonts ideal for programmers.md b/sources/tech/20171117 5 open source fonts ideal for programmers.md
deleted file mode 100644
index 7bd9c677f6..0000000000
--- a/sources/tech/20171117 5 open source fonts ideal for programmers.md
+++ /dev/null
@@ -1,134 +0,0 @@
-FSSlc translating
-5 open source fonts ideal for programmers
-======
-
-
-
-What is the best programming font? First, you need to consider that not all fonts are created equally. When choosing a font for casual reading, the reader expects the letters to smoothly flow into one another, giving an easy and enjoyable experience. A single character in a standard font is akin to a puzzle piece designed to carefully mesh with every other part of the overall typeface.
-
-When writing code, however, your font requirements are typically more functional in nature. This is why most programmers prefer to use monospaced fonts with fixed-width letters, when given the option. Selecting a font that has distinguishable numbers and punctuation, is aesthetically pleasing, and has a copyright license that meets your needs is also important.
-
-There are certain features that make a font optimal for programming. First, a detailed definition of what makes a monospaced font is in order. Consider the letter "w" as it compares to the letter "i" for a moment. When you are dealing with a font, it is important to think about the whitespace around the letter, as well as the letter itself. In the world of physical books and newspapers, where efficient use of space is often critical, it makes sense to assign less width to the thin "i" than the wide "w."
-
-Inside a terminal, however, you are blessed with no such restrictions, and it can be very useful for every character to share an identical amount of space. The main functional benefit is that you can effectively "guesstimate" how long your code is by casually glancing at a block of text. Secondary benefits include the ability to align characters and punctuation easily, highlighting is much more visually obvious, and optical character recognition on printed sheets is more effective for monospaced fonts than proportional fonts.
-
-In this article we will explore five excellent open source font options that are ideal for programming and writing code.
-
-### 1. FiraCode: The best overall programming font
-
-![FiraCode example][2]
-
-FiraCode, Andrew Lekashman
-
-![FiraCode compared to Fira Mono][4]
-
-FiraCode compared to Fira Mono, [Nikita Prokopov][5] via GitHub
-
-The first font on our list is [FiraCode][5], a programming font that truly goes above and beyond the call of duty. FiraCode is an extension of Fira, the open source font family commissioned by Mozilla. What makes FiraCode different is that it modifies the common symbol combinations or ligatures used in code to be extraordinarily readable. This font family comes in several styles, notably including a Retina option. You can find examples of how it applies to many programming languages on its [GitHub][5] page.
-
-### 2. Inconsolata: Elegant and created by a brilliant designer
-
-![Inconsolata example][7]
-
-Inconsolata, Andrew Lekashman
-
-[Inconsolata][8] is one of the most beautiful monospaced fonts. It has been around since 2006 as an open source and freely available option. The creator, Raph Levien, designed Inconsolata with one basic statement in mind: "monospaced fonts do not have to suck." Two things that stand out about Inconsolata are its extremely clear differences between 0 and O and its well-defined punctuation.
-
-### 3. DejaVu Sans Mono: Standard issue with many Linux distros and huge glyph coverage
-
-![DejaVu Sans Mono example][10]
-
-
-DejaVu Sans Mono, Andrew Lekashman
-
-Inspired by the copyrighted and closed Vera font family used in GNOME, [DejaVu Sans Mono][11] is an extremely popular programming font that comes bundled with nearly every modern Linux distribution. DejaVu comes packed with a whopping 3,310 glyphs in the Book variant, compared to a standard font, which normally rests easy at around 100 glyphs. You'll have no shortage of characters to work with: it has enormous Unicode coverage and is actively growing all the time.
-
-### 4. Source Code Pro: Elegant and readable, created by a small, talented team at Adobe
-
-![Source Code Pro example][13]
-
-
-Source Code Pro, Andrew Lekashman
-
-Designed by Paul Hunt and Teo Tuominen, [Source Code Pro][14] was [produced by Adobe][15] to be one of its first open source fonts. Source Code Pro is notable in that it is extremely readable and has excellent differentiation between potentially confusing characters and punctuation. Source Code Pro is also a font family and comes in seven different styles: Extralight, Light, Regular, Medium, Semibold, Bold, and Black, with italic variants of each.
-
-![Differentiating potentially confusable characters][17]
-
-
-Differentiating potentially confusable characters, [Paul D. Hunt][15] via Adobe Typekit Blog.
-
-![Metacharacters with special meaning in computer languages][19]
-
-
-Metacharacters with special meaning in computer languages, [Paul D. Hunt][15] via Adobe Typekit Blog
-
-### 5. Noto Mono: Enormous language coverage, created by a large team at Google
-
-![Noto Mono example][21]
-
-
-Noto Mono, Andrew Lekashman
-
-The last font on our list is [Noto Mono][22], the monospaced version of the expansive Noto font family by Google. While not specifically designed for programming, Noto Mono is available in 209 languages (including emoji!) and is actively supported and updated. The project is enormous and is an extension of Google's stated mission to organize the world's information. If you want to learn more about it, check out this excellent [video about the font][23].
-
-### Choosing the right font
-
-Whichever typeface you select, you will most likely spend hours each day immersed within it, so make sure it resonates with you on an aesthetic and philosophical level. Choosing the right open source font is an important part of making sure that you have the best possible environment for productivity. Any of these fonts is a fantastic choice, and each option has a powerful feature set that lets it stand out from the rest.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/11/how-select-open-source-programming-font
-
-作者:[Andrew Lekashman][a]
-译者:[FSSlc](https://github.com/FSSlc)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com
-[1]:https://opensource.com/file/377151
-[2]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
-[3]:https://opensource.com/file/377156
-[4]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
-[5]:https://github.com/tonsky/FiraCode
-[6]:https://opensource.com/file/377161
-[7]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
-[8]:http://www.levien.com/type/myfonts/inconsolata.html
-[9]:https://opensource.com/file/377146
-[10]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
-[11]:https://dejavu-fonts.github.io/
-[12]:https://opensource.com/file/377171
-[13]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
-[14]:https://github.com/adobe-fonts/source-code-pro
-[15]:https://blog.typekit.com/2012/09/24/source-code-pro/
-[16]:https://opensource.com/file/377176
-[17]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
-[18]:https://opensource.com/file/377181
-[19]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
-[20]:https://opensource.com/file/377166
-[21]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
-[22]:https://www.google.com/get/noto/#mono-mono
-[23]:https://www.youtube.com/watch?v=AAzvk9HSi84
diff --git a/sources/tech/20171214 Build a game framework with Python using the module Pygame.md b/sources/tech/20171214 Build a game framework with Python using the module Pygame.md
index 704c74e042..1acdd12a7c 100644
--- a/sources/tech/20171214 Build a game framework with Python using the module Pygame.md
+++ b/sources/tech/20171214 Build a game framework with Python using the module Pygame.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md b/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md
deleted file mode 100644
index ef91a88431..0000000000
--- a/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md
+++ /dev/null
@@ -1,174 +0,0 @@
-translating by robsean
-
-12 Best GTK Themes for Ubuntu and other Linux Distributions
-======
-**Brief: Let’s have a look at some of the beautiful GTK themes that you can use not only in Ubuntu but other Linux distributions that use GNOME.**
-
-For those of us that use Ubuntu proper, the move from Unity to Gnome as the default desktop environment has made theming and customizing easier than ever. Gnome has a fairly large tweaking community, and there is no shortage of fantastic GTK themes for users to choose from. With that in mind, I went ahead and found some of my favorite themes that I have come across in recent months. These are what I believe offer some of the best experiences that you can find.
-
-### Best themes for Ubuntu and other Linux distributions
-
-This is not an exhaustive list and may exclude some of the themes you already use and love, but hopefully, you find at least one theme that you enjoy that you did not already know about. All themes present should work on any Gnome 3 setup, Ubuntu or not. I lost some screenshots so I have taken images from the official websites.
-
-The themes listed here are in no particular order.
-
-But before you see the best GNOME themes, you should learn [how to install themes in Ubuntu GNOME][1].
-
-#### 1\. Arc-Ambiance
-
-![][2]
-
-Arc and Arc variant themes have been around for quite some time now, and are widely regarded as some of the best themes you can find. In this example, I have selected Arc-Ambiance because of its modern take on the default Ambiance theme in Ubuntu.
-
-I am a fan of both the Arc theme and the default Ambiance theme, so needless to say, I was pumped when I came across a theme that merged the best of both worlds. If you are a fan of the arc themes but not a fan of this one in particular, Gnome look has plenty of other options that will most certainly suit your taste.
-
-[Arc-Ambiance Theme][3]
-
-#### 2\. Adapta Colorpack
-
-![][4]
-
-The Adapta theme has been one of my favorite flat themes since I first found it. Like Arc, Adapta is widely adopted by many a Linux user. I have selected this color pack because in one download you have several options to choose from. In fact, there are 19 to choose from. Yep. You read that correctly. 19!
-
-So, if you are a fan of the flat/material design language that we see a lot of today, then there is most likely a variant in this theme pack that will satisfy you.
-
-[Adapta Colorpack Theme][5]
-
-#### 3\. Numix Collection
-
-![][6]
-
-Ah, Numix! Oh, the years we have spent together! For those of us that have been theming our DE for the last couple of years, you must have come across the Numix themes or icon packs at some point in time. Numix was probably the first modern theme for Linux that I fell in love with, and I am still in love with it today. And after all these years, it still hasn’t lost its charm.
-
-The gray tone throughout the theme, especially with the default pinkish-red highlight color, makes for a genuinely clean and complete experience. You would be hard pressed to find a theme pack as polished as Numix. And in this offering, you have plenty of options to choose from, so go crazy!
-
-[Numix Collection Theme][7]
-
-#### 4\. Hooli
-
-![][8]
-
-Hooli is a theme that has been out for some time now, but only recently crossed my radar. I am a fan of most flat themes but have usually strayed away from themes that come too close to the material design language. Hooli, like Adapta, takes notes from that design language, but does it in a way that I think sets it apart from the rest. The green highlight color is one of my favorite parts of the theme, and it does a good job of not overpowering the entire theme.
-
-[Hooli Theme][9]
-
-#### 5\. Arrongin/Telinkrin
-
-![][10]
-
-Bonus: Two themes in one! And they are relatively new contenders in the theming realm. They both take notes from Ubuntu’s soon-to-be-finished “[communitheme][11]” and bring it to your desktop today. The only real difference I can find between the offerings is the colors. Arrongin is centered around an Ubuntu-esque orange color, while Telinkrin uses a slightly more KDE Breeze-esque blue. I personally prefer the blue, but both are great options!
-
-[Arrongin/Telinkrin Themes][12]
-
-#### 6\. Gnome-osx
-
-![][13]
-
-I have to admit, usually, when I see that a theme has “osx” or something similar in the title, I don’t expect much. Most Apple inspired themes seem to have so much in common that I can’t really find a reason to use them. There are two themes I can think of that break this mold: the Arc-osx theme and the Gnome-osx theme that we have here.
-
-The reason I like the Gnome-osx theme is because it truly does look at home on the Gnome desktop. It does a great job of blending into the DE without being too flat. So for those of you that enjoy a slightly less flat theme, and like the red, yellow, and green button scheme for the close, minimize, and maximize buttons, then this theme is perfect for you.
-
-[Gnome-osx Theme][14]
-
-#### 7\. Ultimate Maia
-
-![][15]
-
-There was a time when I used Manjaro Gnome. Since then I have reverted back to Ubuntu, but one thing I wish I could have brought with me was the Manjaro theme. If you feel the same about the Manjaro theme as I do, then you are in luck because you can bring it to ANY distro you want that is running Gnome!
-
-The rich green color, the Breeze-esque close, minimize, and maximize buttons, and the overall polish of the theme make for one compelling option. It even offers some other color variants if you are not a fan of the green. But let’s be honest…who isn’t a fan of that Manjaro green color?
-
-[Ultimate Maia Theme][16]
-
-#### 8\. Vimix
-
-![][17]
-
-This was a theme I easily got excited about. It is modern, pulls from the macOS red, yellow, and green buttons without directly copying them, and tones down the vibrancy of the theme, making for one unique alternative to most other themes. It comes with three dark variants and several colors to choose from, so most of us will find something we like.
-
-[Vimix Theme][18]
-
-#### 9\. Ant
-
-![][19]
-
-Like Vimix, Ant pulls inspiration from macOS for the button colors without directly copying the style. Where Vimix tones down the color options, Ant adds a richness to the colors that looks fantastic on my System 76 Galago Pro screen. The variation between the three theme options is pretty dramatic, and though it may not be to everyone’s taste, it is most certainly to mine.
-
-[Ant Theme][20]
-
-#### 10\. Flat Remix
-
-![][21]
-
-If you haven’t noticed by this point, I am a sucker for someone who pays attention to the details in the close, minimize, maximize buttons. The color theme that Flat Remix uses is one I have not seen anywhere else, with a red, blue, and orange color way. Add that on top of a theme that looks almost like a mix between Arc and Adapta, and you have Flat Remix.
-
-I am personally a fan of the dark option, but the light alternative is very nice as well. So if you like subtle transparencies, a cohesive dark theme, and a touch of color here and there, Flat Remix is for you.
-
-[Flat Remix Theme][22]
-
-#### 11\. Paper
-
-![][23]
-
-[Paper][24] has been around for some time now. I remember using it for the first time back in 2014. I would say, at this point, Paper is more known for its icon pack than for its GTK theme, but that doesn't mean that the theme isn't a wonderful option in and of itself. Even though I adored the Paper icons from the beginning, I can't say that I was a huge fan of the Paper theme when I first tried it out.
-
-I felt like the bright colors and fun approach to a theme made for an “immature” experience. Now, years later, Paper has grown on me, to say the least, and the light hearted approach that the theme takes is one I greatly appreciate.
-
-[Paper Theme][25]
-
-#### 12\. Pop
-
-![][26]
-
-Pop is one of the newer offerings on this list. Created by the folks over at [System 76][27], the Pop GTK theme is a fork of the Adapta theme listed earlier and comes with a matching icon pack, which is a fork of the previously mentioned Paper icon pack.
-
-The theme was released soon after System 76 announced that they were releasing [their own distribution,][28] Pop!_OS. You can read my [Pop!_OS review][29] to know more about it. Needless to say, I think Pop is a fantastic theme with a superb amount of polish and offers a fresh feel to any Gnome desktop.
-
-[Pop Theme][30]
-
-#### Conclusion
-
-Obviously, there are way more themes to choose from than we could feature in one article, but these are some of the most complete and polished themes I have used in recent months. If you think we missed any that you really like, or you just really dislike one that I featured above, then feel free to let me know in the comment section below and share why you think your favorite themes are better!
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/best-gtk-themes/
-
-作者:[Phillip Prado][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/phillip/
-[1]:https://itsfoss.com/install-themes-ubuntu/
-[2]:https://itsfoss.com/wp-content/uploads/2018/03/arcambaince-300x225.png
-[3]:https://www.gnome-look.org/p/1193861/
-[4]:https://itsfoss.com/wp-content/uploads/2018/03/adapta-300x169.jpg
-[5]:https://www.gnome-look.org/p/1190851/
-[6]:https://itsfoss.com/wp-content/uploads/2018/03/numix-300x169.png
-[7]:https://www.gnome-look.org/p/1170667/
-[8]:https://itsfoss.com/wp-content/uploads/2018/03/hooli2-800x500.jpg
-[9]:https://www.gnome-look.org/p/1102901/
-[10]:https://itsfoss.com/wp-content/uploads/2018/03/AT-800x590.jpg
-[11]:https://itsfoss.com/ubuntu-community-theme/
-[12]:https://www.gnome-look.org/p/1215199/
-[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
-[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
-[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
-[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
-[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
-[18]:https://www.gnome-look.org/p/1013698/
-[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
-[20]:https://www.opendesktop.org/p/1099856/
-[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
-[22]:https://www.opendesktop.org/p/1214931/
-[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
-[24]:https://itsfoss.com/install-paper-theme-linux/
-[25]:https://snwh.org/paper/download
-[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
-[27]:https://system76.com/
-[28]:https://itsfoss.com/system76-popos-linux/
-[29]:https://itsfoss.com/pop-os-linux-review/
-[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md
diff --git a/sources/tech/20180601 Get Started with Snap Packages in Linux.md b/sources/tech/20180601 Get Started with Snap Packages in Linux.md
index 1693d3c44e..632151832a 100644
--- a/sources/tech/20180601 Get Started with Snap Packages in Linux.md
+++ b/sources/tech/20180601 Get Started with Snap Packages in Linux.md
@@ -1,12 +1,3 @@
-[#]: collector: (lujun9972)
-[#]: translator: (pityonline)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get Started with Snap Packages in Linux)
-[#]: via: (https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages-linux)
-[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
-
Get Started with Snap Packages in Linux
======
@@ -148,7 +139,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
-译者:[pityonline](https://github.com/pityonline)
+译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md
index 487ebd6e7d..581d22b527 100644
--- a/sources/tech/20180629 100 Best Ubuntu Apps.md
+++ b/sources/tech/20180629 100 Best Ubuntu Apps.md
@@ -1,4 +1,3 @@
-DaivdMax2006 is translating
100 Best Ubuntu Apps
======
diff --git a/sources/tech/20180823 Getting started with Sensu monitoring.md b/sources/tech/20180823 Getting started with Sensu monitoring.md
deleted file mode 100644
index 7d0a65e306..0000000000
--- a/sources/tech/20180823 Getting started with Sensu monitoring.md
+++ /dev/null
@@ -1,290 +0,0 @@
-Getting started with Sensu monitoring
-======
-
-
-Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.
-
-If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].
-
-Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.
-
-### Architecture
-
-Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis.
-
-Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides a RESTful interface to monitoring data and core functionality.
-
-[Sensu Client][8] executes checks that are either scheduled by the Sensu Server or defined locally. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface used to communicate with the Sensu API.
-
-![sensu_system.png][11]
-
-### Installing Sensu
-
-#### Prerequisites
-
- * One Linux installation to act as the server node (I used CentOS 7 for this article)
-
- * One or more Linux machines to monitor (clients)
-
-#### Server side
-
-Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:
-```
-$ sudo yum install epel-release -y
-
-```
-
-Then install Redis:
-```
-$ sudo yum install redis -y
-
-```
-
-Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:
-```
-$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
-
-$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
-
-$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
-
-```
-
-Enable and start Redis service:
-```
-$ sudo systemctl enable redis
-$ sudo systemctl start redis
-```
-
-Redis is now installed and ready to be used by Sensu.
-
-Now let’s install Sensu.
-
-First, configure the Sensu repository and install the packages:
-```
-$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
-[sensu]
-name=sensu
-baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
-gpgcheck=0
-enabled=1
-EOF
-
-$ sudo yum install sensu uchiwa -y
-```
-
-Let’s create the bare minimum configuration files for Sensu:
-```
-$ sudo tee /etc/sensu/conf.d/api.json << EOF
-{
- "api": {
- "host": "127.0.0.1",
- "port": 4567
- }
-}
-EOF
-```
-
-That file configures `sensu-api` to listen on localhost on port 4567. Next, configure the Redis connection and the transport:
-```
-$ sudo tee /etc/sensu/conf.d/redis.json << EOF
-{
- "redis": {
- "host": "<IP of the Redis server>",
- "port": 6379,
- "password": "password123"
- }
-}
-EOF
-
-
-$ sudo tee /etc/sensu/conf.d/transport.json << EOF
-{
- "transport": {
- "name": "redis"
- }
-}
-EOF
-```
-
-In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism. These two files will be required on each client machine.
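-
-Since the clients connect straight to the transport, a simple way to get these two files onto each client is to copy them over SSH. A minimal sketch, assuming root SSH access to a hypothetical client named `client1`:
-```
-$ scp /etc/sensu/conf.d/redis.json /etc/sensu/conf.d/transport.json root@client1:/etc/sensu/conf.d/
-```
-
-Now create the Uchiwa configuration: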
-```
-$ sudo tee /etc/sensu/uchiwa.json << EOF
-{
- "sensu": [
- {
- "name": "sensu",
- "host": "127.0.0.1",
- "port": 4567
- }
- ],
- "uchiwa": {
- "host": "0.0.0.0",
- "port": 3000
- }
-}
-EOF
-```
-
-In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on Port 3000. We also configure Uchiwa to use `sensu-api` (already configured).
-
-For security reasons, change the owner of the configuration files you just created:
-```
-$ sudo chown -R sensu:sensu /etc/sensu
-```
-
-Enable and start the Sensu services:
-```
-$ sudo systemctl enable sensu-server sensu-api sensu-client
-$ sudo systemctl start sensu-server sensu-api sensu-client
-$ sudo systemctl enable uchiwa
-$ sudo systemctl start uchiwa
-```
-
-Try accessing the Uchiwa website: http://<server IP>:3000 (substitute your server's IP address).
-
-For production environments, it’s recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.
-
-Sensu is now installed. Now let’s configure the clients.
-
-#### Client side
-
-To add a new client, you will need to enable the Sensu repository on each client machine by creating the file `/etc/yum.repos.d/sensu.repo`:
-```
-$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
-[sensu]
-name=sensu
-baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
-gpgcheck=0
-enabled=1
-EOF
-```
-
-With the repository enabled, install the Sensu package:
-```
-$ sudo yum install sensu -y
-```
-
-To configure `sensu-client`, create the same `redis.json` and `transport.json` created in the server machine, as well as the `client.json` configuration file:
-```
-$ sudo tee /etc/sensu/conf.d/client.json << EOF
-{
- "client": {
- "name": "rhel-client",
- "environment": "development",
- "subscriptions": [
- "frontend"
- ]
- }
-}
-EOF
-```
-
-In the `name` field, specify a name that identifies this client (typically the hostname). The `environment` field can help you filter clients, and `subscriptions` defines which monitoring checks the client will execute.
-
-Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:
-```
-$ sudo systemctl enable sensu-client
-$ sudo systemctl start sensu-client
-```
-
-### Sensu checks
-
-Sensu checks have two components: a plugin and a definition.
-
-Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.
-
-Check definitions let Sensu know how, where, and when to run the plugin.
-
-#### Client side
-
-Let’s install one check plugin on the client machine. Remember, this plugin will be executed on the clients.
-
-Enable EPEL and install `nagios-plugins-http`:
-```
-$ sudo yum install -y epel-release
-$ sudo yum install -y nagios-plugins-http
-```
-
-Now let’s explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail as we don’t have a web server running:
-```
-$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
-connect to address 127.0.0.1 and port 80: Connection refused
-HTTP CRITICAL - Unable to open TCP socket
-```
-
-It failed, as expected. Check the return code of the execution:
-```
-$ echo $?
-2
-
-```
-
-The Nagios check plugin specification defines four return codes for the plugin execution:
-
-| **Plugin return code** | **State** |
-|------------------------|-----------|
-| 0 | OK |
-| 1 | WARNING |
-| 2 | CRITICAL |
-| 3 | UNKNOWN |
-
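-As an illustration of this convention, here is a minimal sketch of a custom check plugin (a hypothetical script, not part of this article's setup) that reports on the 1-minute load average:
-```
-#!/bin/bash
-# check_load_sketch: exit 0/1/2 per the Nagios plugin return codes
-load=$(awk '{print $1}' /proc/loadavg)
-if awk -v l="$load" 'BEGIN{exit !(l > 4)}'; then
-    echo "CRITICAL - load is $load"
-    exit 2
-elif awk -v l="$load" 'BEGIN{exit !(l > 2)}'; then
-    echo "WARNING - load is $load"
-    exit 1
-fi
-echo "OK - load is $load"
-exit 0
-```
-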
-With this information, we can now create the check definition on the server.
-
-#### Server side
-
-On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:
-```
-{
- "checks": {
- "check_http": {
- "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
- "interval": 10,
- "subscribers": [
- "frontend"
- ]
- }
- }
-}
-```
-
-In the `command` field, use the command we tested before. `interval` tells Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` defines the clients on which the check will run.
-
-Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa.
-```
-$ sudo systemctl restart sensu-api sensu-server
-```
-
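-You can also verify from the command line by querying the Sensu API directly; a quick sketch using the standard Sensu Core 1.x endpoints:
-```
-$ curl -s http://127.0.0.1:4567/clients
-$ curl -s http://127.0.0.1:4567/results
-```
-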
-### What’s next?
-
-Sensu is a powerful tool, and this article covers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
-
-作者:[Michael Zamot][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/mzamot
-[1]:https://www.rabbitmq.com/
-[2]:https://redis.io/topics/config
-[3]:https://slack.com/
-[4]:https://en.wikipedia.org/wiki/HipChat
-[5]:http://www.irc.org/
-[6]:https://www.pagerduty.com/
-[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
-[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
-[9]:https://uchiwa.io/#/
-[10]:/file/406576
-[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
-[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
-[13]:https://docs.sensu.io/
-[14]:https://sensu.io/community
diff --git a/sources/tech/20181031 Working with data streams on the Linux command line.md b/sources/tech/20181031 Working with data streams on the Linux command line.md
index 87403558d7..b391b2af0b 100644
--- a/sources/tech/20181031 Working with data streams on the Linux command line.md
+++ b/sources/tech/20181031 Working with data streams on the Linux command line.md
@@ -1,3 +1,4 @@
+liujing97 is translating
Working with data streams on the Linux command line
======
Learn to connect data streams from one utility to another using STDIO.
diff --git a/sources/tech/20190104 Take to the virtual skies with FlightGear.md b/sources/tech/20190104 Take to the virtual skies with FlightGear.md
deleted file mode 100644
index b6122e8aff..0000000000
--- a/sources/tech/20190104 Take to the virtual skies with FlightGear.md
+++ /dev/null
@@ -1,93 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Take to the virtual skies with FlightGear)
-[#]: via: (https://opensource.com/article/19/1/flightgear)
-[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
-
-Take to the virtual skies with FlightGear
-======
-Dreaming of piloting a plane? Try open source flight simulator FlightGear.
-
-
-If you've ever dreamed of piloting a plane, you'll love [FlightGear][1]. It's a full-featured, [open source][2] flight simulator that runs on Linux, MacOS, and Windows.
-
-The FlightGear project began in 1996 due to dissatisfaction with commercial flight simulation programs, which were not scalable. Its goal was to create a sophisticated, robust, extensible, and open flight simulator framework for use in academia and pilot training or by anyone who wants to play with a flight simulation scenario.
-
-### Getting started
-
-FlightGear's hardware requirements are fairly modest, including an accelerated 3D video card that supports OpenGL for smooth framerates. It runs well on my Linux laptop with an i5 processor and only 4GB of RAM. Its documentation includes an [online manual][3]; a [wiki][4] with portals for [users][5] and [developers][6]; and extensive tutorials (such as one for its default aircraft, the [Cessna 172p][7]) to teach you how to operate it.
-
-It's easy to install on both [Fedora][8] and [Ubuntu][9] Linux. Fedora users can consult the [Fedora installation page][10] to get FlightGear running.
-
-On Ubuntu 18.04, I had to install a repository:
-
-```
-$ sudo add-apt-repository ppa:saiarcot895/flightgear
-$ sudo apt-get update
-$ sudo apt-get install flightgear
-```
-
-Once the installation finished, I launched it from the GUI, but you can also launch the application from a terminal by entering:
-
-```
-$ fgfs
-```
-
-### Configuring FlightGear
-
-The menu on the left side of the application window provides configuration options.
-
-
-
-**Summary** returns you to the application's home screen.
-
-**Aircraft** shows the aircraft you have installed and offers the option to install up to 539 other aircraft available in FlightGear's default "hangar." I installed a Cessna 150L, a Piper J-3 Cub, and a Bombardier CRJ-700. Some of the aircraft (including the CRJ-700) have tutorials to teach you how to fly a commercial jet; I found the tutorials informative and accurate.
-
-
-
-To select an aircraft to pilot, highlight it and click on **Fly!** at the bottom of the menu. I chose the default Cessna 172p and found the cockpit depiction extremely accurate.
-
-
-
-The default airport is Honolulu, but you can change it in the **Location** menu by providing your favorite airport's [ICAO airport code][11] identifier. I found some small, local, non-towered airports like Olean and Dunkirk, New York, as well as larger airports including Buffalo, O'Hare, and Raleigh—and could even choose a specific runway.
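-
-If you prefer to skip the GUI, the same choices can be passed to `fgfs` on the command line; a small sketch (the aircraft and airport values below are just examples):
-
-```
-$ fgfs --aircraft=c172p --airport=KBUF
-```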
-
-Under **Environment**, you can adjust the time of day, the season, and the weather. The simulation includes advanced weather modeling and the ability to download current weather from [NOAA][12].
-
-**Settings** provides an option to start the simulation in Paused mode by default. Also in Settings, you can select multi-player mode, which allows you to "fly" with other players on a global network of multi-user servers run by FlightGear supporters. You need a moderately fast internet connection to use this functionality.
-
-The **Add-ons** menu allows you to download aircraft and additional scenery.
-
-### Take flight
-
-To "fly" my Cessna, I used a Logitech joystick that worked well. You can calibrate your joystick using an option in the **File** menu at the top.
-
-Overall, I found the simulation very accurate and think the graphics are great. Try FlightGear yourself — I think you will find it a very fun and complete simulation package.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/flightgear
-
-作者:[Don Watkins][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/don-watkins
-[b]: https://github.com/lujun9972
-[1]: http://home.flightgear.org/
-[2]: http://wiki.flightgear.org/GNU_General_Public_License
-[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
-[4]: http://wiki.flightgear.org/FlightGear_Wiki
-[5]: http://wiki.flightgear.org/Portal:User
-[6]: http://wiki.flightgear.org/Portal:Developer
-[7]: http://wiki.flightgear.org/Cessna_172P
-[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
-[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
-[10]: https://apps.fedoraproject.org/packages/FlightGear/
-[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
-[12]: https://www.noaa.gov/
diff --git a/sources/tech/20190108 How To Understand And Identify File types in Linux.md b/sources/tech/20190108 How To Understand And Identify File types in Linux.md
deleted file mode 100644
index c1c4ca4c0a..0000000000
--- a/sources/tech/20190108 How To Understand And Identify File types in Linux.md
+++ /dev/null
@@ -1,359 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Understand And Identify File types in Linux)
-[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-How To Understand And Identify File types in Linux
-======
-
-As we all know, everything in Linux is a file, including the hard disk, the graphics card, and so on.
-
-When you navigate the Linux filesystem, most of the files you encounter are regular files and directories.
-
-But Linux has other file types as well, for different purposes, and the special ones fall into five categories.
-
-So it is very important to understand the file types in Linux; doing so helps you in many ways.
-
-If you find that hard to believe, just go through the complete article and you will see how important it is.
-
-If you don't understand the file types, you can't confidently make changes to a file.
-
-Making the wrong changes can damage your system badly, so be careful when you do.
-
-Files are fundamental to Linux because all devices and daemons are represented as files in the system.
-
-### How Many Types of Files Are Available in Linux?
-
-To my knowledge, there are seven types of files in Linux, grouped into three major categories. The details are below.
-
- * Regular File
- * Directory File
- * Special Files (this category contains five types of files)
- * Link File
- * Character Device File
- * Socket File
- * Named Pipe File
- * Block File
-
-Refer to the table below for a better understanding of file types in Linux; the symbol is the first character of an `ls -l` listing.
-
-| Symbol | Meaning |
-| ------ | ------- |
-| - | Regular file. The listing starts with a hyphen "-". |
-| d | Directory file. The listing starts with the letter "d". |
-| l | Link file. The listing starts with the letter "l". |
-| c | Character device file. The listing starts with the letter "c". |
-| s | Socket file. The listing starts with the letter "s". |
-| p | Named pipe file. The listing starts with the letter "p". |
-| b | Block file. The listing starts with the letter "b". |
-
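-As a quick cross-check of the table, the `find` command can filter on these same types with its `-type` option; for example:
-
-```
-# find /dev -maxdepth 1 -type c | wc -l    # count character device files
-# find /dev -maxdepth 1 -type b | wc -l    # count block device files
-# find / -maxdepth 1 -type d               # list top-level directories
-```
-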
-### Method-1: Manual Way to Identify File types in Linux
-
-If you have a good knowledge of Linux, you can easily identify file types with the help of the table above.
-
-#### How to view the Regular files in Linux?
-
-Use the command below to view regular files in Linux. Regular files can be found everywhere in the Linux filesystem. In a color-capable terminal listing, regular files appear in `WHITE`.
-
-```
-# ls -la | grep ^-
--rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
--rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
--rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
--rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
--rw-r--r--. 1 root root 26 Dec 27 17:55 liks
--rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
--rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
--rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
--rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
--rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
--rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
-```
-
-#### How to view the Directory files in Linux?
-
-Use the command below to view directory files in Linux. Directory files can be found everywhere in the Linux filesystem. Directories appear in `BLUE`.
-
-```
-# ls -la | grep ^d
-drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
-drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
-drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
-drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
-```
-
-#### How to view the Link files in Linux?
-
-Use the command below to view link files in Linux. Link files can be found everywhere in the Linux filesystem. There are two types of links, soft links and hard links. Link files appear in `LIGHT TURQUOISE`.
-
-```
-# ls -la | grep ^l
-lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
-lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
-```
-
-#### How to view the Character Device files in Linux?
-
-Use the command below to view character device files in Linux. Character device files exist only in specific locations, under the `/dev` directory. They appear in `YELLOW`.
-
-```
-# ls -la | grep ^c
-crw-------. 1 root root 5, 1 Jan 28 14:05 console
-crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
-crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
-crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
-crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
-crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
-```
-
-#### How to view the Block files in Linux?
-
-Use the command below to view block files in Linux. Block files exist only in specific locations, under the `/dev` directory. They appear in `YELLOW`.
-
-```
-# ls -la | grep ^b
-brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
-brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
-brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
-brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
-brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
-```
-
-#### How to view the Socket files in Linux?
-
-Use the command below to view socket files in Linux. Socket files exist only in specific locations. They appear in `PINK`.
-
-```
-# ls -la | grep ^s
-srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
-```
-
-#### How to view the Named Pipe files in Linux?
-
-Use the command below to view named pipe files in Linux. Named pipe files exist only in specific locations. They appear in `YELLOW`.
-
-```
-# ls -la | grep ^p
-prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
-prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
-```
-
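-If no named pipe is at hand, you can create a throwaway one with `mkfifo` and inspect it; the listing will look something like this:
-
-```
-# mkfifo /tmp/pipe-test
-# ls -la /tmp/pipe-test
-prw-r--r--. 1 root root 0 Jan 28 14:10 /tmp/pipe-test
-```
-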
-### Method-2: How to Identify File types in Linux Using file Command?
-
-The file command allows us to determine various file types in Linux. It runs three sets of tests, in this order: filesystem tests, magic tests, and language tests; the first test that succeeds determines the reported file type.
-
-#### How to view the Regular files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a regular file name. The file command reads the given file's contents and reports exactly what kind of file it is.
-
-That is why we see different results for each regular file. See the various results for regular files below.
-
-```
-# file 2daygeek_access.log
-2daygeek_access.log: ASCII text, with very long lines
-
-# file powertop.html
-powertop.html: HTML document, ASCII text, with very long lines
-
-# file 2g-test
-2g-test: JSON data
-
-# file powertop.txt
-powertop.txt: HTML document, UTF-8 Unicode text, with very long lines
-
-# file 2g-test-05-01-2019.tar.gz
-2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
-```
-
-#### How to view the Directory files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a directory name. See the results below.
-
-```
-# file Pictures/
-Pictures/: directory
-```
-
-#### How to view the Link files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a link file name. See the results below.
-
-```
-# file log
-log: symbolic link to /run/systemd/journal/dev-log
-```
-
-#### How to view the Character Device files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a character device file name. See the results below.
-
-```
-# file vcsu
-vcsu: character special (7/64)
-```
-
-#### How to view the Block files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a block file name. See the results below.
-
-```
-# file sda1
-sda1: block special (8/1)
-```
-
-#### How to view the Socket files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a socket file name. See the results below.
-
-```
-# file system_bus_socket
-system_bus_socket: socket
-```
-
-#### How to view the Named Pipe files in Linux Using file Command?
-
-Simply enter the file command in your terminal followed by a named pipe file name. See the results below.
-
-```
-# file pipe-test
-pipe-test: fifo (named pipe)
-```
-
-### Method-3: How to Identify File types in Linux Using stat Command?
-
-The stat command allows us to check the file type and file system status. It gives more information than the file command, showing details about the given file such as size, block count, IO block size, inode number, link count, file permissions, UID, GID, and the access, modify, and change times.
-
-#### How to view the Regular files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a regular file name.
-
-```
-# stat 2daygeek_access.log
- File: 2daygeek_access.log
- Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
-Device: 10301h/66305d Inode: 1727555 Links: 1
-Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
-Access: 2019-01-03 14:05:26.430328867 +0530
-Modify: 2019-01-03 14:05:26.460328868 +0530
-Change: 2019-01-03 14:05:26.460328868 +0530
- Birth: -
-```
-
-#### How to view the Directory files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a directory name. See the results below.
-
-```
-# stat Pictures/
- File: Pictures/
- Size: 4096 Blocks: 8 IO Block: 4096 directory
-Device: 10301h/66305d Inode: 1703982 Links: 3
-Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
-Access: 2018-11-24 03:22:11.090000828 +0530
-Modify: 2019-01-05 18:27:01.546958817 +0530
-Change: 2019-01-05 18:27:01.546958817 +0530
- Birth: -
-```
-
-#### How to view the Link files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a link file name. See the results below.
-
-```
-# stat /dev/log
- File: /dev/log -> /run/systemd/journal/dev-log
- Size: 28 Blocks: 0 IO Block: 4096 symbolic link
-Device: 6h/6d Inode: 278 Links: 1
-Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
-Access: 2019-01-05 16:36:31.033333447 +0530
-Modify: 2019-01-05 16:36:30.766666768 +0530
-Change: 2019-01-05 16:36:30.766666768 +0530
- Birth: -
-```
-
-#### How to view the Character Device files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a character device file name. See the results below.
-
-```
-# stat /dev/vcsu
- File: /dev/vcsu
- Size: 0 Blocks: 0 IO Block: 4096 character special file
-Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
-Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
-Access: 2019-01-05 16:36:31.056666781 +0530
-Modify: 2019-01-05 16:36:31.056666781 +0530
-Change: 2019-01-05 16:36:31.056666781 +0530
- Birth: -
-```
-
-#### How to view the Block files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a block file name. See the results below.
-
-```
-# stat /dev/sda1
- File: /dev/sda1
- Size: 0 Blocks: 0 IO Block: 4096 block special file
-Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
-Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
-Access: 2019-01-05 16:36:31.596666806 +0530
-Modify: 2019-01-05 16:36:31.596666806 +0530
-Change: 2019-01-05 16:36:31.596666806 +0530
- Birth: -
-```
-
-#### How to view the Socket files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a socket file name. See the results below.
-
-```
-# stat /var/run/dbus/system_bus_socket
- File: /var/run/dbus/system_bus_socket
- Size: 0 Blocks: 0 IO Block: 4096 socket
-Device: 15h/21d Inode: 576 Links: 1
-Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
-Access: 2019-01-05 16:36:31.823333482 +0530
-Modify: 2019-01-05 16:36:31.810000149 +0530
-Change: 2019-01-05 16:36:31.810000149 +0530
- Birth: -
-```
-
-#### How to view the Named Pipe files in Linux Using stat Command?
-
-Simply enter the stat command in your terminal followed by a named pipe file name. See the results below.
-
-```
-# stat pipe-test
- File: pipe-test
- Size: 0 Blocks: 0 IO Block: 4096 fifo
-Device: 10301h/66305d Inode: 1705583 Links: 1
-Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
-Access: 2019-01-06 02:00:03.040394731 +0530
-Modify: 2019-01-06 02:00:03.040394731 +0530
-Change: 2019-01-06 02:00:03.040394731 +0530
- Birth: -
-```
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md b/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
deleted file mode 100644
index 366e75846d..0000000000
--- a/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md
+++ /dev/null
@@ -1,159 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
-[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-7 Methods To Identify Disk Partition/FileSystem UUID On Linux
-======
-
-As a Linux administrator, you should know how to check a partition UUID or filesystem UUID.
-
-That is because most Linux systems mount partitions by UUID, which you can verify in the `/etc/fstab` file.
-
-Many utilities are available for checking UUIDs. In this article we will show you how to check a UUID in several ways, so you can choose the one that suits you.
-
-### What Is UUID?
-
-UUID stands for Universally Unique Identifier; it helps a Linux system identify a disk partition by a stable identifier rather than by its block device file.
-
-libuuid is part of the util-linux-ng package (since version 2.15.1 of that package) and is installed by default on Linux systems.
-
-The UUIDs generated by this library can be reasonably expected to be unique within a system, and unique across all systems.
-
-A UUID is a 128-bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS), and UUIDs were later standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).
-
-UUIDs are represented as 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 alphanumeric characters and four hyphens).
-
-For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-
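-To see the format for yourself, you can generate a fresh UUID with the `uuidgen` utility, which also ships with util-linux (the value below is just a sample):
-
-```
-$ uuidgen
-c8f9e5d1-2b3a-4c6d-9e0f-1a2b3c4d5e6f
-```
-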
-Here is a sample of my /etc/fstab file:
-
-```
-# cat /etc/fstab
-
-# /etc/fstab: static file system information.
-#
-# Use 'blkid' to print the universally unique identifier for a device; this may
-# be used with UUID= as a more robust way to name devices that works even if
-# disks are added and removed. See fstab(5).
-#
-#
-UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
-UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
-```
-
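-Since the same `UUID=` syntax works on the command line, you can also mount a filesystem by UUID without knowing its device name; a sketch reusing one of the UUIDs above:
-
-```
-# mount UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f /mnt
-```
-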
-We can check this using the following seven commands.
-
- * **`blkid Command:`** locates/prints block device attributes.
- * **`lsblk Command:`** lists information about all available or the specified block devices.
- * **`hwinfo Command:`** the hardware information tool, another great utility used to probe for the hardware present in the system.
- * **`udevadm Command:`** the udev management tool.
- * **`tune2fs Command:`** adjusts tunable filesystem parameters on ext2/ext3/ext4 filesystems.
- * **`dumpe2fs Command:`** dumps ext2/ext3/ext4 filesystem information.
- * **`Using the by-uuid Path:`** the `/dev/disk/by-uuid` directory contains UUID-named entries that are symlinks to the real block device files.
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using blkid Command?
-
-blkid is a command-line utility to locate/print block device attributes. It uses the libblkid library to get the disk partition UUIDs on a Linux system.
-
-```
-# blkid
-/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
-/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
-/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
-/dev/sdc5: PARTUUID="8cc8f9e5-05"
-```
-
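-When you only need the bare UUID of a single device, for example in a script, blkid can print just that tag:
-
-```
-# blkid -s UUID -o value /dev/sda1
-d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-```
-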
-### How To Check Disk Partition/FileSystem UUID In Linux Using lsblk Command?
-
-lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information.
-
-If the udev db is not available, or lsblk is compiled without udev support, then it tries to read LABELs, UUIDs, and filesystem types from the block device. In this case, root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default.
-
-```
-# lsblk -o name,mountpoint,size,uuid
-NAME MOUNTPOINT SIZE UUID
-sda 30G
-└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-sdb 10G
-sdc 10G
-├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
-├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
-├─sdc4 1K
-└─sdc5 1G
-sdd 10G
-sde 10G
-sr0 1024M
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using the by-uuid Path?
-
-The `/dev/disk/by-uuid` directory contains UUID-named entries that are symlinks to the real block device files.
-
-```
-# ls -lh /dev/disk/by-uuid/
-total 0
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
-lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using hwinfo Command?
-
-**[hwinfo][1]**, the hardware information tool, is another great utility used to probe for the hardware present in the system and display detailed information about various hardware components in a human-readable format.
-
-```
-# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
-/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
-/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
-/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using udevadm Command?
-
-udevadm expects a command and command specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.
-
-```
-# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
-S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using tune2fs Command?
-
-tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems. The current values of these options can be displayed by using the -l option.
-
-```
-# tune2fs -l /dev/sdc1 | grep UUID
-Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
-### How To Check Disk Partition/FileSystem UUID In Linux Using dumpe2fs Command?
-
-dumpe2fs prints the super block and block group information for the filesystem present on the device.
-
-```
-# dumpe2fs /dev/sdc1 | grep UUID
-dumpe2fs 1.43.5 (04-Aug-2017)
-Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
diff --git a/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md
new file mode 100644
index 0000000000..78e0d0ecfd
--- /dev/null
+++ b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ)
+[#]: via: (https://itsfoss.com/story-of-va-linux/)
+[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
+
+VA Linux: The Linux Company That Once Ruled NASDAQ
+======
+
+This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past.
+
+In its time, _VA Linux_ was indeed a crusade to free the world from Microsoft's domination.
+
+In a historic incident in December 1999, the shares of one private firm skyrocketed from just $30 to a whopping $239 within a day of its [IPO][1]! It was a record-breaking development.
+
+The company was _VA Linux_, a firm of only 200 employees built on the idea of deploying Intel hardware with Linux and FOSS, and it had begun a fantastic journey [taking on the likes of Sun and Dell][2].
+
+It traded under the symbol LNUX and gained around 700 percent on its first day of trading. But barely a year later, the [LNUX stock was selling below $9 per share][3].
+
+How did a successful Linux-based company become a subsidiary of [Gamestop][4], a gaming retailer?
+
+Let us look back at the highs and lows of this record-breaking Linux corporation with a brief review of its history.
+
+### How did it all actually begin?
+
+In 1993, a graduate student at Stanford University wanted a powerful workstation but could not afford the expensive [Sun][5] workstations, which sold for around $7,000 per system at that time.
+
+So, he decided to build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486 chip running at just 33 megahertz, he installed Linux and finally had a machine that was twice as fast as Sun's but carried a much lower price tag: $2,000.
+
+That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many at that exciting time in the Stanford campus. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed.
+
+![VA Linux founder, Larry Augustin][9]
+
+> Once software goes into the GPL, you can’t take it back. People can stop contributing, but the code that exists, people can continue to develop on it.
+>
+> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10]
+
+#### Some screenshots of their web domains from the early days
+
+![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11]
+
+![varesearch.com reveals emerging growth | February 16, 1998][12]
+
+![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13]
+
+### The spectacular rise and the devastating fall of VA Linux
+
+VA Research had a big year in 1999, perhaps its biggest, as it acquired several growing companies and competitors while launching many innovative initiatives. The next year, in 2000, it created a subsidiary in Japan named _VA Linux Systems Japan K.K._ The company was at its peak that year.
+
+After the company transitioned completely from hardware to software, stock prices fell drastically from 2002 onward. The drop was driven by slower-than-expected sales growth from new customers in the dot-com sector. In later years the company sold off a few brands, and top employees resigned in 2010.
+
+Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015.
+
+In case you’re curious for a detailed chronicle, I have separately created this [timeline][15], highlighting events year-wise.
+
+![Image Credit: Wikipedia][16]
+
+### What happened to VA Linux afterward?
+
+Geeknet, owned by Gamestop, is now an online retailer for the global geek community under the [ThinkGeek][17] brand.
+
+SourceForge and Slashdot were what still kept them linked with Linux and Open Source until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode.
+
+An [article][18] from 2016 sadly quotes in its final paragraph:
+
+> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.”
+
+Did we note Linux and Gamers? Does Linux really not have anything to do with Gaming? Are these two terms really so far apart? What about [Gaming on Linux][19]? What about [Open Source Games][20]?
+
+How could the stalwarts of _VA Linux_, with years and years of experience in the Linux arena, have contributed to the Linux gaming community? What might have happened had [Valve][21] (currently so [dedicated][22] to Linux gaming) acquired _VA Linux_ instead of Gamestop? Something to ponder.
+
+The seeds of ideas that were planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions in the world of Open Source. At _It’s FOSS,_ our heartfelt salute goes out to those noble ideas!
+
+Want to feel the nostalgia? Use the dates from the [timeline][15] with the [Wayback Machine][23] to check out how previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ looked over the past three decades! You can even check _linux.com_, which was once owned by _VA Linux Systems_.
+
+But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it’s [a different story there][24] and still going strong with the original ideologies of _VA Linux_!
+
+![VA Linux booth circa 2000 | Image Credit: Storem][25]
+
+#### _VA Linux_ Subsidiary Still Operational in Japan!
+
+VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services:
+
+ * Failure Analysis and Support Services: [_VA Quest_][27]
+ * Entrusted Development Service
+ * Consulting Service
+
+
+
+_VA Quest_, in particular, has provided a failure-analysis service since 2005, tracking down and dealing with kernel bugs that get in its customers' way. [Tetsuro Yogo][28] took over as the new President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]!
+
+You can also read about a recent development reported on August 2 last year on this [translated][31] version of a Japanese IT news page. It's an update about _VA Linux_ providing a technical support service for the "[Kubernetes][32]" container management software in Japan.
+
+It's good to know that the 18-year-old subsidiary is still doing well in Japan and that the name _VA Linux_ continues to flourish there even today!
+
+What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below.
+
+I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/story-of-va-linux/
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Initial_public_offering
+[2]: https://www.forbes.com/1999/05/03/feat.html
+[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux
+[4]: https://www.gamestop.com/
+[5]: http://www.sun.com/
+[6]: https://en.wikipedia.org/wiki/Do_it_yourself
+[7]: https://www.urbandictionary.com/define.php?term=FTW
+[8]: https://www.linkedin.com/in/larryaugustin/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1
+[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1
+[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/
+[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1
+[17]: https://www.thinkgeek.com/
+[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux
+[19]: https://itsfoss.com/linux-gaming-distributions/
+[20]: https://en.wikipedia.org/wiki/Open-source_video_game
+[21]: https://www.valvesoftware.com/
+[22]: https://itsfoss.com/steam-play-proton/
+[23]: https://archive.org/web/web.php
+[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text=
+[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1
+[26]: https://www.valinux.co.jp/english/
+[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service
+[28]: https://www.linkedin.com/in/yogo45/
+[29]: https://www.valinux.co.jp/english/about/timeline/
+[30]: https://github.com/vaj
+[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499
+[32]: https://en.wikipedia.org/wiki/Kubernetes
diff --git a/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md
new file mode 100644
index 0000000000..15349fbf32
--- /dev/null
+++ b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game)
+[#]: via: (https://itsfoss.com/crosscode-game/)
+[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/)
+
+CrossCode is an Awesome 16-bit Sci-Fi RPG Game
+======
+
+What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing shy of excellent.
+
+Note: CrossCode is not open source software. We have covered it because it is Linux specific.
+
+![][2]
+
+### Story
+
+You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past.
+
+As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn’t sound too crazy of a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience.
+
+The story unfolds at a satisfying speed and the character’s development is genuinely gratifying — both fictionally and mechanically. The only critique I had was that it felt like the introductory segment took a little too long — dragging the tutorial into the gameplay for quite some time, and keeping the player from getting into the real meat of the game.
+
+All-in-all, CrossCode’s story did not leave me wanting, not even in the slightest. It’s deep, fun, heartwarming, intelligent, and all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone that enjoys a good story, you will need to give CrossCode a look.
+
+![][3]
+
+### Gameplay
+
+Yes, the story is great and all, but if there is one place that CrossCode truly shines, it has to be its gameplay. The game’s mechanics are fast-paced, challenging, intuitive, and downright fun!
+
+You start off with a dodge, block, melee, and ranged attack, each slowly developing over time as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill and hack-n-slash mechanics in a way that doesn't conflict with one another.
+
+The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode’s gameplay truly stand out. Whether you are making your way through one of the four main dungeons, or you are taking a boss head on, you can’t help but periodically stop and think “wow, this game is great!”
+
+Though this has to be the game’s strongest feature, it can also be the game’s biggest downfall. Part of the reason that the story and character progression is so satisfying is because the combat and puzzle mechanics can be incredibly challenging, and that’s putting it lightly.
+
+There are times in which CrossCode’s gameplay feels downright impossible. Bosses take an expert amount of focus, and dungeons require all of the patience you can muster up just to simply finish them.
+
+![][4]
+
+The game requires a type of dexterity I have not quite had to master yet. I mean, sure there are more challenging puzzle games out there, yes there are more difficult platformers, and of course there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven’t found in many other games.
+
+And though there were times I felt the gameplay was flat out punishing, I was constantly reminded that this is simply not the case. Death doesn’t cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game’s most difficult parts to help the player along.
+
+Where other games fall short by giving the player nothing to lose, CrossCode’s forgiving safety nets redeem it amid its rigorous gameplay. It may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of its greatest strengths.
+
+![][5]
+
+### Design
+
+One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.
+
+Being set in a fictional MMO world, the game’s character ensemble is vibrant and distinctive, each character having their own tone and personality. The game’s sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing between intense moments of combat and blissful moments of exploration.
+
+If I had to fault CrossCode in this category, it would have to be the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside the punishing dungeons. The game is beautiful and fluid, but, akin to RPGs of yore (think Zelda games before Breath of the Wild), I wish there was just a little more for me to freely explore.
+
+It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.
+
+![][6]
+
+### Conclusion
+
+In the end, it is obvious how I feel about this game. And just in case you haven’t caught on yet…I love it. It holds a near perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.
+
+Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].
+
+If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you’ll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/crosscode-game/
+
+作者:[Phillip Prado][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/phillip/
+[b]: https://github.com/lujun9972
+[1]: http://www.cross-code.com/en/home
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1
+[7]: https://itsfoss.com/free-linux-games/
+[8]: http://www.radicalfishgames.com/
+[9]: https://store.steampowered.com/app/368340/CrossCode/
+[10]: https://www.gog.com/game/crosscode
+[11]: https://www.humblebundle.com/store/crosscode
+[12]: https://www.humblebundle.com/monthly?partner=itsfoss
+[13]: https://itsfoss.com/affiliate-policy/
diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
index 13b441f85d..7ce1201c4f 100644
--- a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
+++ b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (ustblixin)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
new file mode 100644
index 0000000000..e8722c63cc
--- /dev/null
+++ b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
@@ -0,0 +1,146 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
+[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Installing Kali Linux on VirtualBox: Quickest & Safest Way
+======
+
+_**This tutorial shows you how to install Kali Linux on VirtualBox in Windows and Linux in the quickest way possible.**_
+
+[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
+
+Since it deals with a sensitive topic like hacking, it’s like a double-edged sword. We have discussed it in the detailed Kali Linux review in the past so I am not going to bore you with the same stuff again.
+
+While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option.
+
+With VirtualBox, you can use Kali Linux as a regular application in your Windows/Linux system. It’s almost the same as running VLC or a game in your system.
+
+Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your ‘host system’ (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
+
+![][3]
+
+### How to install Kali Linux on VirtualBox
+
+I’ll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It’s available free of cost.
+
+In this tutorial, we will talk about Kali Linux in particular but you can install almost any other OS whose ISO file exists or a pre-built virtual machine save file is available.
+
+**Note:** _The same steps apply for Windows/Linux running VirtualBox._
+
+As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don’t hate me!) where I try to install Kali Linux in VirtualBox step by step.
+
+And, the best part is – even if you happen to use a Linux distro as your primary OS, the same steps will be applicable!
+
+Wondering, how? Let’s see…
+
+[Subscribe to Our YouTube Channel for More Linux Videos][5]
+
+### Step by Step Guide to install Kali Linux on VirtualBox
+
+_We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine – but why do that when you have an easy alternative?_
+
+#### 1\. Download and install VirtualBox
+
+The first thing you need to do is to download and install VirtualBox from Oracle’s official website.
+
+[Download VirtualBox][6]
+
+Once you download the installer, just double-click on it to install VirtualBox. The process is similar for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well.
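+
+On Ubuntu, you may also be able to install VirtualBox straight from the distribution repositories instead of using Oracle’s installer. A minimal sketch (note that the repository version can lag behind the latest release on Oracle’s site):
+
+```
+# Refresh package lists, then install VirtualBox from the Ubuntu repositories
+sudo apt update
+sudo apt install virtualbox
+```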
+
+#### 2\. Download ready-to-use virtual image of Kali Linux
+
+After installing it successfully, head to [Offensive Security’s download page][8] to download the VM image for VirtualBox. If you would rather use [VMware][9], that is available too.
+
+![][10]
+
+As you can see, the file size is well over 3 GB, so you should either use the torrent option or download it using a [download manager][11].
+
+[Kali Linux Virtual Image][8]
+
+#### 3\. Install Kali Linux on Virtual Box
+
+Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work.
+
+Here’s how to import the VirtualBox image for Kali Linux:
+
+**Step 1** : Launch VirtualBox. You will notice an **Import** button – click on it.
+
+![Click on Import button][12]
+
+**Step 2:** Next, browse to the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with ‘kali linux‘ and end with the .**ova** extension.
+
+![Importing Kali Linux image][13]
+
+Once selected, proceed by clicking on **Next**.
+
+**Step 3** : Now, you will be shown the settings for the virtual machine you are about to import. So, you can customize them or not – that is your choice. It is okay if you go with the default settings.
+
+You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
+
+![Import hard drives as VDI][14]
+
+Here, importing the hard drives as VDI means mounting them virtually, with the configured storage space allocated to them.
+
+After you are done with the settings, hit **Import** and wait for a while.
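+
+If you prefer the command line, VirtualBox’s `VBoxManage` tool can perform the same import. This is a rough sketch; the file and VM names below are illustrative, so substitute the ones from your own download:
+
+```
+# Import the appliance (progress is printed to the terminal)
+VBoxManage import kali-linux-2019.1-vbox-amd64.ova
+
+# Confirm the VM was registered and note its name
+VBoxManage list vms
+
+# Boot the imported VM using the name from the previous command
+VBoxManage startvm "kali-linux-2019.1-vbox-amd64" --type gui
+```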
+
+**Step 4:** You will now see it listed. So, just hit **Start** to launch it.
+
+You might get an error at first about USB 2.0 controller support. You can disable USB 2.0 in the VM settings to resolve it, or just follow the on-screen instructions to install the additional package that fixes it. And, you are done!
+
+![Kali Linux running in VirtualBox][15]
+
+The default username in Kali Linux is root and the default password is toor. You should be able to log in to the system with these credentials.
+
+Do note that you should [update Kali Linux][16] before trying to install new applications or trying to hack your neighbor’s WiFi.
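+
+For reference, updating boils down to the standard Debian-style package commands. A minimal sketch (since you are logged in as root by default, no sudo is needed):
+
+```
+# Refresh the package lists from Kali's repositories
+apt update
+
+# Upgrade all installed packages, pulling in new dependencies as needed
+apt full-upgrade -y
+```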
+
+I hope this guide helps you easily install Kali Linux on VirtualBox. Of course, Kali Linux has a lot of useful tools in it for penetration testing – good luck with that!
+
+**Tip** : Both Kali Linux and Ubuntu are Debian-based. If you face any issues or errors with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet.
+
+### Bonus: Free Kali Linux Guide Book
+
+If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux.
+
+Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux and of Kali Linux, along with configuration and setup. It also has a few chapters on penetration testing and security tools.
+
+Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
+
+[Download Kali Linux Revealed for FREE][17]
+
+Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-kali-linux-virtualbox/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.kali.org/
+[2]: https://itsfoss.com/linux-hacking-penetration-testing/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
+[4]: https://www.virtualbox.org/
+[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[6]: https://www.virtualbox.org/wiki/Downloads
+[7]: https://itsfoss.com/install-virtualbox-ubuntu/
+[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
+[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
+[11]: https://itsfoss.com/4-best-download-managers-for-linux/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
+[16]: https://linuxhandbook.com/update-kali-linux/
+[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf
diff --git a/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md
new file mode 100644
index 0000000000..603ae570eb
--- /dev/null
+++ b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI)
+[#]: via: (https://itsfoss.com/flowblade-video-editor-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI
+======
+
+[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set but the simplicity, flexibility, and open source nature that count.
+
+However, with Flowblade 2.0 – released recently – it is now more powerful and useful. The release brings a lot of new tools along with a complete overhaul of the workflow.
+
+In this article, we shall take a look at what’s new with Flowblade 2.0.
+
+### New Features in Flowblade 2.0
+
+Here are some of the major new changes in the latest release of Flowblade.
+
+#### GUI Updates
+
+![Flowblade 2.0][3]
+
+This was a much needed change. I’m always looking for open source solutions that work as expected along with a great GUI.
+
+So, in this update, you will observe a new custom theme set as the default – and it looks good.
+
+Overall, the panel design and the toolbox have been reworked to look modern. The overhaul extends to small details like the cursor icon changing upon tool selection.
+
+#### Workflow Overhaul
+
+No matter what features you get to utilize, the workflow matters to people who regularly edit videos. So, it has to be intuitive.
+
+With the recent release, they have made sure that you can configure and set the workflow as per your preference. Well, that is definitely flexible because not everyone has the same requirement.
+
+#### New Tools
+
+![Flowblade Video Editor Interface][4]
+
+**Keyframe tool** : This enables editing and adjusting the Volume and Brightness [keyframes][5] on the timeline.
+
+**Multitrim** : A combination of the trim, roll, and slip tools.
+
+**Cut:** Available now as a tool in addition to the traditional cut at the playhead.
+
+**Ripple trim:** Previously a mode of the Trim tool – not often used by many – it is now available as a separate tool.
+
+#### More changes?
+
+In addition to these major changes listed above, they have added some keyframe editing updates and compositors ( _AlphaXOR, Alpha Out, and Alpha_ ) to utilize alpha channel data to combine images.
+
+A lot more tiny changes have taken place as well – you can check those out in the official [changelog][6] on GitHub.
+
+### Installing Flowblade 2.0
+
+If you use a Debian or Ubuntu based Linux distribution, there are .deb binaries available for easily installing Flowblade 2.0; see the sketch after the download link below.
+
+For the rest, you’ll have to [install it using the source code][7].
+
+All the files are available on its GitHub page. You can download them from the page below.
+
+[Download Flowblade 2.0][8]
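+
+If you went the .deb route, a typical install on a Debian or Ubuntu system might look like this (the file name is illustrative; use the one you actually downloaded):
+
+```
+# Install the downloaded package; apt resolves its dependencies automatically
+sudo apt install ./flowblade-2.0.0-1_all.deb
+```
+
+On older systems without local-file support in apt, `sudo dpkg -i <file>.deb` followed by `sudo apt-get install -f` achieves the same result.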
+
+### Wrapping Up
+
+If you are interested in video editing, perhaps you would like to follow the development of [Olive][9], a new open source video editor in development.
+
+Now that you know about the latest changes and additions, what do you think about Flowblade 2.0 as a video editor? Is it good enough for you?
+
+Let us know your thoughts in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/flowblade-video-editor-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/jliljebl/flowblade
+[2]: https://itsfoss.com/best-video-editing-software-linux/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1
+[5]: https://en.wikipedia.org/wiki/Key_frame
+[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md
+[7]: https://itsfoss.com/install-software-from-source-code/
+[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0
+[9]: https://itsfoss.com/olive-video-editor/
diff --git a/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md
new file mode 100644
index 0000000000..7b51459c6b
--- /dev/null
+++ b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Review of Debian System Administrator’s Handbook)
+[#]: via: (https://itsfoss.com/debian-administrators-handbook/)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+Review of Debian System Administrator’s Handbook
+======
+
+_**Debian System Administrator’s Handbook is a free-to-download book that covers all the essential parts of Debian that a sysadmin might need.**_
+
+This has been on my to-do review list for quite some time. The book was started by two French Debian developers, Raphael Hertzog and Roland Mas, to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after that.
+
+### Debian Administrator’s Handbook
+
+![][1]
+
+The [Debian Administrator’s Handbook][2] is targeted at everyone from a newbie who may be looking to understand what the [Debian project][3] is all about to somebody who might be running Debian on a production server.
+
+The latest version of the book covers Debian 8, while the current stable version is Debian 9. But that doesn’t mean the book is outdated and of no use to Debian 9 users. Most of the book is valid for all Debian and Linux users.
+
+Let me give you a quick summary of what this book covers.
+
+#### Section 1 – Debian Project
+
+The first section sets the tone of the book, giving somebody who might be looking into Debian a solid foundation in what the project actually is. Some of it will probably be updated to match the current scenario.
+
+#### Section 2 – Using fictional case studies for different needs
+
+The second section deals with various case scenarios where Debian could be used, the idea being to show how Debian can serve various hierarchical or functional setups. One aspect I felt should have been stressed, or at least mentioned, is the mindshift toward openness that such adoption requires.
+
+#### Section 3 & 4- Setups and Installation
+
+The third section looks into existing setups. I do think it should have stressed more the importance of documenting existing setups and migrating partial services and users before making a full-fledged transition. While all of the above seem like minor points, I have seen many of them come back to bite me during a transition.
+
+Section Four covers the various ways you could install Debian, how the installation process flows, and things to keep in mind before installing a Debian system. Unfortunately, UEFI was not prevalent at that point, so it is not talked about.
+
+#### Section 5 & 6 – Packaging System and Updates
+
+Section Five starts with how a binary package is structured and then goes on to explain how a source package is structured as well. It mentions several gotchas and tricky situations in which a sysadmin can be caught.
+
+Section Six is perhaps where most sysadmins spend most of their time, apart from troubleshooting, which is another chapter altogether. While it starts with many of the most often used sysadmin commands, the point I found most interesting was on page 156, which covers better solver algorithms.
+
+#### Section 7 – Solving Problems and finding Relevant Solutions
+
+Section Seven, on the other hand, walks through various problem scenarios and ways to work through them. In Debian and most GNU/Linux distributions, the keyword is ‘patience’. If you are patient, then many problems in Debian are resolved or can be resolved after a good night’s sleep.
+
+#### Section 8 – Basic Configuration, Network, Accounts, Printing
+
+Section Eight introduces you to the basics of networking and having single or multiple user accounts on a workstation. It goes a bit into user and group configuration and practices, then gives a brief introduction to the bash shell and a brief overview of the [CUPS][4] printing daemon. There is much to explore here.
+
+#### Section 9 – Unix Services
+
+Section 9 starts with an introduction to specific Unix services. While it begins with the controversial [systemd][5], hated and reviled in many quarters, it also covers System V, which is still used by many a sysadmin.
+
+#### Section 10, 11 & 12 – Networking and Administration
+
+Section 10 dives into network infrastructure, covering the basics of Virtual Private Networks (OpenVPN), OpenSSH, PKI credentials and some basics of information security. It also gets into the basics of DNS, DHCP and IPv6, and ends with some tools that can help in troubleshooting network issues.
+
+Section 11 starts with the basic configuration and workflow of a mail server using Postfix. It goes a bit into depth, as there is much to play with. It then moves on to the popular Apache web server, FTP file servers, NFS, and CIFS with Windows shares via Samba. Again, much to explore therein.
+
+Section 12 starts with advanced administration topics such as RAID and LVM and when one is better than the other. It then gets into virtualization with Xen and gives a brief overview of LXC. Again, there is much more to explore than what is shared herein.
+
+![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6]
+
+#### Section 13 – Workstation
+
+Section 13 covers the X server, display managers, window managers, menu management, and the different desktops, i.e. GNOME, KDE, Xfce and others. It does mention LXDE among the others. The one omission I felt, which will probably be addressed in a new release, is [Wayland][7] and [Xwayland][8]; the omission is acknowledged in the book’s conclusion. Again, much to explore in this section as well.
+
+#### Section 14 – Security
+
+Section 14 is somewhat comprehensive on what constitutes security and bits of threat analysis, but stops short, as it states in the introduction of the chapter itself that it’s a vast topic.
+
+#### Section 15 – Creating a Debian package
+
+Section 15 explains the tools and processes used to ‘ _debianize_ ‘ an application so that it becomes part of the Debian archive and available for distribution on the ten-odd hardware architectures that Debian supports.
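+
+To give a flavor of what that involves, the core build step for a source tree that already has a debian/ directory usually comes down to two commands. This is a minimal sketch, not the book’s own material; the package name is a placeholder:
+
+```
+# Install the build dependencies declared for the package
+sudo apt-get build-dep some-package
+
+# From inside the source tree, build binary packages without signing them
+dpkg-buildpackage -us -uc
+```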
+
+### Pros and Cons
+
+Where Raphael and Roland have excelled is in breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focusing on the important matter at hand. The different visual style also indicates that the material is somewhat more important from the author’s point of view.
+
+One of the drawbacks, if I may call it that, is the absolute absence of humor in the book.
+
+### Final Thoughts
+
+I have been [using Debian][9] for a decade, so lots of it was a refresher for me. Some of it is outdated if I look at it from a Buster perspective, but it is invaluable as a historical artifact.
+
+If you are looking to familiarize yourself with Debian or looking to run Debian 8 or 9 as a production server for your business, I wouldn’t be able to recommend a better book than this one.
+
+### Download Debian Administrator’s Handbook
+
+The Debian Handbook has been included in every Debian release since 2012. The [liberation][10] of the Debian Handbook was accomplished in 2012 through a crowdfunding campaign on [Ulule][11].
+
+You can download an electronic version of the Debian Administrator’s Handbook in PDF, ePub or Mobi format from the link below:
+
+[Download Debian Administrator’s Handbook][12]
+
+You can also buy the paperback edition of the book if you want to support the amazing work of the authors.
+
+[Buy the paperback edition][13]
+
+Lastly, if you want to motivate Raphael, you can reward him by donating via his PayPal [account][14].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/debian-administrators-handbook/
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1
+[2]: https://debian-handbook.info/
+[3]: https://www.debian.org/
+[4]: https://www.cups.org
+[5]: https://itsfoss.com/systemd-features/
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1
+[7]: https://wayland.freedesktop.org/
+[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland
+[9]: https://itsfoss.com/reasons-why-i-love-debian/
+[10]: https://debian-handbook.info/liberation/
+[11]: https://www.ulule.com/debian-handbook/
+[12]: https://debian-handbook.info/get/now/
+[13]: https://debian-handbook.info/get/
+[14]: https://raphaelhertzog.com/
diff --git a/sources/tech/20190209 LibreOffice 6.2 is Here- This is the Last Release with 32-bit Binaries.md b/sources/tech/20190209 LibreOffice 6.2 is Here- This is the Last Release with 32-bit Binaries.md
new file mode 100644
index 0000000000..ad4bcee236
--- /dev/null
+++ b/sources/tech/20190209 LibreOffice 6.2 is Here- This is the Last Release with 32-bit Binaries.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries)
+[#]: via: (https://itsfoss.com/libreoffice-drops-32-bit-support/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries
+======
+
+LibreOffice is my favorite office suite as a free and powerful [alternative to Microsoft Office tools on Linux][1]. Even when I use my Windows machine, I prefer to have LibreOffice installed instead of Microsoft Office tools.
+
+Now, with the recent [LibreOffice][2] 6.2 update, there’s a lot of good stuff to talk about, along with some bad news.
+
+### What’s New in LibreOffice 6.2?
+
+Let’s have a quick look at the major new features in the [latest release of LibreOffice][3].
+
+If you like Linux videos, don’t forget to [subscribe to our YouTube channel][4] as well.
+
+#### The new NotebookBar
+
+![][5]
+
+The NotebookBar is a new addition to the interface that is optional and not enabled by default. In order to enable it, go to **View -> User Interface -> Tabbed**.
+
+You can either set it as a tabbed layout or a grouped compact layout.
+
+While it is not mind-blowing, it still counts as a significant user interface update, considering the variety of user preferences.
+
+#### Icon Theme
+
+![][6]
+
+A new set of icons is now available to choose from. I will definitely utilize the new set of icons – they look good!
+
+#### Platform Compatibility
+
+With the new update, the compatibility has been improved across all the platforms (Mac, Windows, and Linux).
+
+#### Performance Improvements
+
+This shouldn’t concern you if you didn’t have any issues. But, still, the more they work on this, the better – it is a win-win for all.
+
+They have removed unnecessary animations, worked on latency reduction, avoided repeated re-layout, and more such things to improve the performance.
+
+#### More fixes and improvements
+
+A lot of bugs have been fixed in this new update along with little tweaks here and there for all the tools (Writer, Calc, Draw, Impress).
+
+To get to know all the technical details, you should check out their [release notes][7].
+
+### The Sad News: Dropping the support for 32-bit binaries
+
+Of course, this is not a feature. But, this was bound to happen – it was anticipated a few months ago. LibreOffice will no longer provide 32-bit binary releases.
+
+This is inevitable. [Ubuntu has dropped 32-bit support][8]. Many other Linux distributions have also stopped supporting 32-bit processors. The number of [Linux distributions still supporting a 32-bit architecture][9] is fast dwindling.
+
+For the future versions of LibreOffice on 32-bit systems, you’ll have to rely on your distribution to provide it to you. You cannot download the binaries anymore.
+
+### Installing LibreOffice 6.2
+
+![][10]
+
+Your Linux distribution should be providing this update sooner or later.
+
+Arch-based Linux users should be getting it already, while Ubuntu and Debian users will have to wait a bit longer.
+
+If you cannot wait, you should download it and [install it from the deb file][11]. Do remove the existing LibreOffice install before using the DEB file; a sketch of the procedure follows the download link below.
+
+[Download LibreOffice 6.2][12]
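+
+For reference, the downloaded archive contains a set of .deb packages. A minimal sketch of the procedure (the archive name is illustrative; use the one you actually downloaded):
+
+```
+# Remove the existing LibreOffice installation first
+sudo apt remove --purge "libreoffice*"
+
+# Extract the downloaded archive and install the bundled packages
+tar -xzf LibreOffice_6.2.0_Linux_x86-64_deb.tar.gz
+cd LibreOffice_6.2.0*_Linux_x86-64_deb/DEBS
+sudo dpkg -i *.deb
+```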
+
+If you don’t want to use the deb file, you may use the official PPA, which should provide LibreOffice 6.2 before Ubuntu does (it doesn’t have the 6.2 release at the moment). It will update your existing LibreOffice install.
+
+```
+sudo add-apt-repository ppa:libreoffice/ppa
+sudo apt update
+sudo apt install libreoffice
+```
+
+### Wrapping Up
+
+LibreOffice 6.2 is definitely a major step toward keeping it a better alternative to Microsoft Office for Linux users.
+
+Do you happen to use LibreOffice? Do these updates matter to you? Let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/libreoffice-drops-32-bit-support/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
+[2]: https://www.libreoffice.org/
+[3]: https://itsfoss.com/libreoffice-6-0-released/
+[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreoffice-tabbed.png?resize=800%2C434&ssl=1
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/Libreoffice-style-elementary.png?ssl=1
+[7]: https://wiki.documentfoundation.org/ReleaseNotes/6.2
+[8]: https://itsfoss.com/ubuntu-drops-32-bit-desktop/
+[9]: https://itsfoss.com/32-bit-os-list/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libre-office-6-2-release.png?resize=800%2C450&ssl=1
+[11]: https://itsfoss.com/install-deb-files-ubuntu/
+[12]: https://www.libreoffice.org/download/download/
diff --git a/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
new file mode 100644
index 0000000000..3b9af595d6
--- /dev/null
+++ b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
+[#]: via: (https://itsfoss.com/earliest-linux-distros/)
+[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
+
+The Earliest Linux Distros: Before Mainstream Distros Became So Popular
+======
+
+In this throwback history article, we’ve tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today.
+
+![][1]
+
+Here, we have tried to explore how the ideas behind popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available.
+
+Linux was initially released in the form of a kernel in 1991. The distros we know today were made possible with the help of numerous collaborators throughout the world who created shells, libraries, compilers and related packages to make it a complete operating system.
+
+### 1\. The first known “distro” by HJ Lu
+
+The way we know Linux distributions today goes back to 1992, when the first known distro-like way to get access to Linux was released by HJ Lu. It consisted of two 5.25” floppy diskettes:
+
+![Linux 0.12 Boot and Root Disks | Photo Credit][2]
+
+ * **LINUX 0.12 BOOT DISK** : The “boot” disk was used to boot the system first.
+ * **LINUX 0.12 ROOT DISK** : The second “root” disk for getting a command prompt for access to the Linux file system after booting.
+
+
+
+To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR) and that was quite a complex process, especially during that era.
+
+Feeling too nostalgic?
+
+You can [install cool-retro-term application][3] that gives you a Linux terminal in the vintage looks of the 90’s computers.
+
+### 2\. MCC Interim Linux
+
+![MCC Linux 0.99.14, 1993 | Image Credit][4]
+
+Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of the Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution aimed at novice users, with a menu-driven installer and end user/programming tools. It also came in the form of a collection of diskettes and could be installed on a system to provide a basic text-based environment.
+
+MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR.
+
+Though it was first released in February 1992, it also became available for download through FTP in November that year.
+
+### 3\. TAMU Linux
+
+![TAMU Linux | Image Credit][5]
+
+TAMU Linux was developed by Aggies at Texas A&M together with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text-based operating system.
+
+### 4\. Softlanding Linux System (SLS)
+
+![SLS Linux 1.05, 1994 | Image Credit][6]
+
+“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
+
+Many of the popular distros the present community is most familiar with evolved via SLS. Two of them are:
+
+ * **Slackware** : One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993 and is based on SLS.
+ * **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
+
+
+
+### 5\. Yggdrasil
+
+![LGX Yggdrasil Fall 1993 | Image Credit][7]
+
+Released in December 1992, Yggdrasil was the first distro to give birth to the idea of live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself to the system hardware as “Plug-and-Play”, which is a standard, well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
+
+![Yggdrasil’s Plug-and-Play Promo | Image Credit][8]
+
+Their motto was “Free Software For The Rest of Us”.
+
+In the late 90s, one very popular distro was [Mandriva][9], first released in 1998, formed by unifying the French _Mandrake Linux_ distribution with the Brazilian _Conectiva Linux_ distribution. Each release received Linux and system software updates for 18 months, while desktop updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva][10].
+
+If you have more nostalgic distros to share from the earliest days of Linux, please share them with us in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/earliest-linux-distros/
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
+[3]: https://itsfoss.com/cool-retro-term/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
+[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
+[10]: https://www.openmandriva.org/
diff --git a/sources/tech/20190220 Decentralized Slack Alternative Riot Releases its First Stable Version.md b/sources/tech/20190220 Decentralized Slack Alternative Riot Releases its First Stable Version.md
new file mode 100644
index 0000000000..92c6ded8c3
--- /dev/null
+++ b/sources/tech/20190220 Decentralized Slack Alternative Riot Releases its First Stable Version.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Decentralized Slack Alternative Riot Releases its First Stable Version)
+[#]: via: (https://itsfoss.com/riot-stable-release/)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+Decentralized Slack Alternative Riot Releases its First Stable Version
+======
+
+Remember [Riot messenger][1]? It’s a decentralized, encrypted open source messaging software based on the [Matrix protocol][2].
+
+I wrote a [detailed tutorial on using Riot on the Linux desktop][3]. The software was in beta back then. The first stable version, Riot 1.0, was released a few days ago. Wondering what’s new?
+
+![][4]
+
+### New Features in Riot 1.0
+
+Let’s look at some of the changes which were introduced in the move to Riot 1.0.
+
+#### New Looks and Branding
+
+![][5]
+
+The first thing that you see is the welcome screen, which has a nice background and a refreshed sky-and-dark-blue logo that is cleaner and clearer than the previous one.
+
+The welcome screen gives you the option to sign in to an existing Riot account on either matrix.org or any other homeserver, or to create an account. There is also the option to talk with the Riot Bot and browse the room directory listing.
+
+#### Changing Homeservers and Making your own homeserver
+
+![Make your own homeserver][6]
+
+As you can see, here you can change the homeserver. The idea of Riot, as shared before, is to offer [decentralized][7] chat services without foregoing the simplicity that centralized services offer. For those who want to run their own homeserver, you need the new [matrix-synapse 0.99.1.1 reference homeserver][8].
+
+You can find an unofficial list of Matrix homeservers [here][9], although it’s far from complete.
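+
+As a rough sketch of what setting up your own homeserver involves, Synapse can be installed into a Python virtual environment and told to generate an initial configuration. The paths and the server name below are placeholders; check the Synapse documentation for the current steps:
+
+```
+# Create an isolated Python environment for Synapse
+virtualenv -p python3 ~/synapse/env
+source ~/synapse/env/bin/activate
+
+# Install the Synapse homeserver from PyPI
+pip install matrix-synapse
+
+# Generate an initial homeserver.yaml for your domain
+python -m synapse.app.homeserver \
+    --server-name example.com \
+    --config-path homeserver.yaml \
+    --generate-config \
+    --report-stats=no
+```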
+
+#### Internationalization and Languages
+
+One of the more interesting changes is that the UI is now i18n-aware and has been translated into Català, Dansk, Deutsch and Spanish, along with English (US), which was the default when I installed it. We can hope to see some more improvements in language support going ahead.
+
+#### Favoriting a channel
+
+![Favoriting a channel in Riot][10]
+
+One of the things that has changed from last time is how you favorite a channel. Now, as you can see, you select the channel, click on the three vertical dots in it, and then favorite it or do whatever else you want with it.
+
+#### Making changes to your profile and Settings
+
+![Riot Different settings you can do. ][11]
+
+Just click the drop-down box beside your avatar to get to the settings. Click through it and you will find a wide variety of settings you can change.
+
+As you can see, there are a lot more choices, and the wording is clearer than before.
+
+#### Encryption and E2E
+
+![Riot encryption screen][12]
+
+One of the big things Riot is known for is encryption, specifically end-to-end encryption. This is still a work in progress.
+
+The new release brings the focus on two enhancements in encryption: key backup and emoji device verification (still in progress).
+
+With Riot 1.0, you can automatically back up your keys on your server. The backup itself is encrypted with a password so that it is stored securely. With this, you’ll never lose access to your encrypted messages, because you won’t lose your encryption key.
+
+You will soon be able to verify your devices with emoji, which is easier than matching long strings, isn’t it?
+
+**In the end**
+
+Using Riot requires a bit of patience, but once you get the hang of it, there is nothing like it. This decentralized messaging app is an important tool in the arsenal of privacy-conscious people.
+
+Riot is part of the continuous effort to keep our data secure and our privacy intact. The new major release makes it even more awesome. What do you think?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/riot-stable-release/
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://about.riot.im/
+[2]: https://matrix.org/blog/home/
+[3]: https://itsfoss.com/riot-desktop/
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-messenger.jpg?ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-im-web-1.0-welcome-screen.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-change-homeservers.jpg?resize=800%2C420&ssl=1
+[7]: https://medium.com/s/story/why-decentralization-matters-5e3f79f7638e
+[8]: https://github.com/matrix-org/synapse/releases/tag/v0.99.1.1
+[9]: https://www.hello-matrix.net/public_servers.php
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-channel-preferences.jpg?resize=800%2C420&ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-settings-1-e1550427251686.png?ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-encryption.jpg?fit=800%2C572&ssl=1
diff --git a/sources/tech/20190221 DevOps for Network Engineers- Linux Foundation-s New Training Course.md b/sources/tech/20190221 DevOps for Network Engineers- Linux Foundation-s New Training Course.md
new file mode 100644
index 0000000000..e99c5e1edf
--- /dev/null
+++ b/sources/tech/20190221 DevOps for Network Engineers- Linux Foundation-s New Training Course.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (DevOps for Network Engineers: Linux Foundation’s New Training Course)
+[#]: via: (https://itsfoss.com/devops-for-network-engineers/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+DevOps for Network Engineers: Linux Foundation’s New Training Course
+======
+
+_**The Linux Foundation has launched a [DevOps course for sysadmins][1] and network engineers. They are also offering a limited-time 40% discount to mark the launch.**_
+
+DevOps is no longer a buzzword. It has become a necessity for any IT company.
+
+The role and responsibility of a sysadmin and a network engineer have changed as well. They are required to have knowledge of the DevOps tools popular in the IT industry.
+
+If you are a sysadmin or a network engineer, you can no longer laugh off DevOps. It’s time to learn new skills to stay relevant in today’s rapidly changing IT industry; otherwise, the ‘automation’ trend might cost you your job.
+
+And who knows it better than the Linux Foundation, the official organization behind the Linux project and the employer of Linux creator Linus Torvalds?
+
+[The Linux Foundation has a number of courses on Linux and related technologies][2] that help you get a job or improve your existing skills at work.
+
+The [latest course offering][1] from the Linux Foundation specifically focuses on sysadmins who would like to familiarize themselves with DevOps tools.
+
+### DevOps for Network Engineers Course
+
+![][3]
+
+[This course][1] is intended for existing sysadmins and network engineers. So you need to have some knowledge of Linux system administration, shell scripting and Python.
+
+The course will help you with:
+
+ * Integrating into a DevOps/Agile environment
+ * Familiarizing with commonly used DevOps tools
+ * Collaborating on projects as DevOps
+ * Confidently working with software and configuration files in version control
+ * Recognizing the roles of SCRUM team members
+ * Confidently applying Agile principles in an organization
+
+
+
+This is the course outline:
+
+ * Chapter 1. Course Introduction
+ * Chapter 2. Modern Project Management
+ * Chapter 3. The DevOps Process: A Network Engineer’s Perspective
+ * Chapter 4. Network Simulation and Testing with [Mininet][4]
+ * Chapter 5. [OpenFlow][5] and [ONOS][6]
+ * Chapter 6. Infrastructure as Code ([Ansible][7] Basics)
+ * Chapter 7. Version Control ([Git][8])
+ * Chapter 8. Continuous Integration and Continuous Delivery ([Jenkins][9])
+ * Chapter 9. Using [Gerrit][10] in DevOps
+ * Chapter 10. Jenkins, Gerrit and Code Review for DevOps
+ * Chapter 11. The DevOps Process and Tools (Review)
+
+
+
+Altogether, you get 25-30 hours of course material. The online course is self-paced and you can access the material for one year from the date of purchase.
+
+_**Unlike most other Linux Foundation courses, this is NOT a video course.**_
+
+There is no certification for this course because it is more focused on learning and improving skills.
+
+#### Get the course at a 40% discount (limited time)
+
+The course costs $299, but since it has just launched, they are offering a 40% discount till March 1st, 2019. You can get the discount by using the **DEVOPSNET** coupon code at checkout.
+
+[DevOps for Network Engineers][1]
+
+By the way, if you are interested in Open Source development, you can benefit from the “[Introduction to Open Source Development, Git, and Linux][11]” video course. You can get a limited time 50% discount using **OSDEV50** code at the checkout.
+
+Staying relevant is absolutely necessary in any industry, not just the IT industry. Learning new skills that are in demand in your industry is perhaps the best way to do so.
+
+What do you think? What are your views on the current automation trend? How would you go about it?
+
+_Disclaimer: This post contains affiliate links. Please read our_ [_affiliate policy_][12] _for more details._
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/devops-for-network-engineers/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: http://shrsl.com/1glcb
+[2]: https://shareasale.com/r.cfm?b=1074561&u=747593&m=59485&urllink=&afftrack=
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/DevOps-for-Network-Engineers-800x450.png?resize=800%2C450&ssl=1
+[4]: http://mininet.org/
+[5]: https://en.wikipedia.org/wiki/OpenFlow
+[6]: https://onosproject.org/
+[7]: https://www.ansible.com/
+[8]: https://itsfoss.com/basic-git-commands-cheat-sheet/
+[9]: https://jenkins.io/
+[10]: https://www.gerritcodereview.com/
+[11]: https://shareasale.com/r.cfm?b=1193750&u=747593&m=59485&urllink=&afftrack=
+[12]: https://itsfoss.com/affiliate-policy/
diff --git a/sources/tech/20190314 Open Source is Eating the Startup Ecosystem- A Guide for Assessing the Value Creation of Startups.md b/sources/tech/20190314 Open Source is Eating the Startup Ecosystem- A Guide for Assessing the Value Creation of Startups.md
new file mode 100644
index 0000000000..b720bd8f96
--- /dev/null
+++ b/sources/tech/20190314 Open Source is Eating the Startup Ecosystem- A Guide for Assessing the Value Creation of Startups.md
@@ -0,0 +1,56 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups)
+[#]: via: (https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS)
+[#]: author: (Ibrahim Haddad https://www.linux.com/USERS/IBRAHIM)
+
+Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups
+======
+
+![Open Source][1]
+
+If you want a deeper understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. Download now.
+
+[Creative Commons Zero][2]
+
+Unsplash
+
+In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.
+
+![Open Source][3]
+
+[The Linux Foundation][4]
+
+As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, etc., all of which are true statements. To that list, I’d like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in some shape or form on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.
+
+Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the [OpenChain project][5]: [Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions][6]. This ebook addresses the basic question of: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.
+
+It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to the open source it uses, the kinds of products it distributes or services it offers, and the design of the products or services themselves.
+
+If you are involved in assessing the open source and compliance practices of organizations, or involved in an M&A transaction focusing on open source due diligence, or simply want to have a deeper level of understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. [Download the Brief][6].
+
+This article originally appeared at the [Linux Foundation.][7]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS
+
+作者:[Ibrahim Haddad][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/USERS/IBRAHIM
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-alexandre-godreau-510220-unsplash.jpg?itok=2udo1XKo (Open Source)
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/assessmentofopensourcepractices_ebook_mockup-768x994.png?itok=qpLKAVGR (Open Source)
+[4]: /LICENSES/CATEGORY/LINUX-FOUNDATION
+[5]: https://www.openchainproject.org/
+[6]: https://www.linuxfoundation.org/open-source-management/2019/03/assessment-open-source-practices/
+[7]: https://www.linuxfoundation.org/blog/2019/03/open-source-is-eating-the-startup-ecosystem-a-guide-for-assessing-the-value-creation-of-startups/
diff --git a/sources/tech/20190315 Mageia Linux Is a Modern Throwback to the Underdog Days.md b/sources/tech/20190315 Mageia Linux Is a Modern Throwback to the Underdog Days.md
new file mode 100644
index 0000000000..78d8741c17
--- /dev/null
+++ b/sources/tech/20190315 Mageia Linux Is a Modern Throwback to the Underdog Days.md
@@ -0,0 +1,125 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Mageia Linux Is a Modern Throwback to the Underdog Days)
+[#]: via: (https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+Mageia Linux Is a Modern Throwback to the Underdog Days
+======
+
+![Welcome to Mageia][1]
+
+The Mageia Welcome App is a boon for new Linux users.
+
+[Used with permission][2]
+
+I’ve been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.
+
+Well, that didn’t happen. In fact, Linux Mandrake didn’t even stand the test of time. It was renamed Mandriva and rebranded. Mandriva retained popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of [OpenMandriva][3], as well as another distribution called [Mageia Linux][4].
+
+Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it’s never faltered. As of this writing, Mageia is listed as number 26 on the [Distrowatch][5] Page Hit Ranking chart and is enjoying release number 6.1.
+
+### What Sets Mageia Apart?
+
+This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If you’ve seen one KDE, GNOME, or Xfce distribution, you’ve seen them all, right? Anyone who's used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It’s not about what you do with the desktop; it’s how you put everything together to improve the user experience.
+
+Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it’s slightly askew from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).
+
+![Installing Mageia][6]
+
+Figure 1: Installing Mageia from the Live instance.
+
+[Used with permission][2]
+
+Once you’ve launched the installation app, it’s fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when presented with the partition choice between Use free space and Custom disk partition (remember, I’m talking about new users here). This type of user might prefer slightly simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:
+
+ * Basic Install
+
+ * Custom Install
+
+The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.
+
+The next potentially confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.
+
+![bootloader][7]
+
+Figure 2: Configuring the Mageia bootloader.
+
+[Used with permission][2]
+
+The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, it could be mistaken for the root user password. It’s not. If you don’t want to password protect GRUB2, leave this field blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.
+
+![bootloader options][8]
+
+Figure 3: Advanced bootloader options can be configured here.
+
+[Used with permission][2]
+
+Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine and remove the installer media; when the machine comes back up, you’ll be prompted to configure both the root user password and a standard user account (Figure 4).
+
+![Configuring your users][9]
+
+Figure 4: Configuring your users.
+
+[Used with permission][2]
+
+And that’s all there is to the Mageia installation.
+
+### Welcome to Mageia
+
+Once you log into Mageia, you’ll be greeted by something every Linux distribution should use—a welcome app (Figure 5).
+
+![welcome app][10]
+
+Figure 5: The Mageia welcome app is a new user’s best friend.
+
+[Used with permission][2]
+
+From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it’s important information for users to have at the ready.
+
+Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can install/update software, configure media sources for installation, configure update frequency, manage/configure hardware, configure network devices (e.g., VPNs, proxies, and more), configure system services, view logs, open an administrator console, create network shares, and so much more. This is as close to the openSUSE YaST tool as you’ll find (without using either SUSE or openSUSE).
+
+![Control Center][11]
+
+Figure 6: The Mageia Control Center is an outstanding system management tool.
+
+[Used with permission][2]
+
+Beyond those two tools, you’ll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you’d be hard-pressed to find another tool you’d need to install to get your work done. It’s that complete a distribution.
+
+### Target Audience
+
+Figuring out the target audience for Mageia Linux is tough. If new users can get past the somewhat confusing installation (which isn’t really that challenging, just slightly misleading), using Mageia Linux is a dream.
+
+The slick, barely modified KDE desktop, combined with the welcome app and control center, makes for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage in the installation, Mageia Linux could be one of the greatest new-user Linux experiences available. Until then, new users should make sure they understand what they’re getting into with the installation portion of this take on the Linux platform.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia-main.jpg?itok=ZmkbMxfM (Welcome to Mageia)
+[2]: /LICENSES/CATEGORY/USED-PERMISSION
+[3]: https://www.openmandriva.org/
+[4]: https://www.mageia.org/en/
+[5]: https://distrowatch.com/
+[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_1.jpg?itok=RYXPU70j (Installing Mageia)
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_2.jpg?itok=m2IPxgA4 (bootloader)
+[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_3.jpg?itok=Bs2PPrMF (bootloader options)
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_4.jpg?itok=YZBIZ0Ua (Configuring your users)
+[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_5.jpg?itok=gYcTfUKv (welcome app)
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_6.jpg?itok=eSl2qpPp (Control Center)
diff --git a/sources/tech/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md b/sources/tech/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md
deleted file mode 100644
index 8d1df5a7c5..0000000000
--- a/sources/tech/20190315 Sweet Home 3D- An open source tool to help you decide on your dream home.md
+++ /dev/null
@@ -1,73 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
-[#]: via: (https://opensource.com/article/19/3/tool-find-home)
-[#]: author: (Jeff Macharyas (Community Moderator) )
-
-Sweet Home 3D: An open source tool to help you decide on your dream home
-======
-
-Interior design application makes it easy to render your favorite house—real or imaginary.
-
-![Houses in a row][1]
-
-I recently accepted a new job in Virginia. Since my wife was working and watching our house in New York until it sold, it was my responsibility to go out and find a new house for us and our cat. A house that she would not see until we moved into it!
-
-I contracted with a real estate agent and looked at a few houses, taking many pictures and writing down illegible notes. At night, I would upload the photos into a Google Drive folder, and my wife and I would review them simultaneously over the phone while I tried to remember whether the room was on the right or the left, whether it had a fan, etc.
-
-Since this was a rather tedious and not very accurate way to present my findings, I went in search of an open source solution to better illustrate what our future dream house would look like that wouldn't hinge on my fuzzy memory and blurry photos.
-
-[Sweet Home 3D][2] did exactly what I wanted it to do. Sweet Home 3D is available on Sourceforge and released under the GNU General Public License. The [website][3] is very informative, and I was able to get it up and running in no time. Sweet Home 3D was developed by Paris-based Emmanuel Puybaret of eTeks.
-
-### Hanging the drywall
-
-I downloaded Sweet Home 3D onto my MacBook Pro and added a PNG version of a flat floorplan of a house to use as a background base map.
-
-From there, it was a simple matter of using the Rooms palette to trace the pattern and set the "real life" dimensions. After I mapped the rooms, I added the walls, which I could customize by color, thickness, height, etc.
-
-![Sweet Home 3D floorplan][5]
-
-Now that I had the "drywall" built, I downloaded various pieces of "furniture" from a large array that includes actual furniture as well as doors, windows, shelves, and more. Each item downloads as a ZIP file, so I created a folder of all my uncompressed pieces. I could customize each piece of furniture, and repetitive items, such as doors, were easy to copy-and-paste into place.
-
-Once I had all my walls and doors and windows in place, I used the application's 3D view to navigate through the house. Drawing upon my photos and memory, I made adjustments to all the objects until I had a close representation of the house. I could have spent more time modifying the house by adding textures, additional furniture, and objects, but I got it to the point I needed.
-
-![Sweet Home 3D floorplan][7]
-
-After I finished, I exported the plan as an OBJ file, which can be opened in a variety of programs, such as [Blender][8] and Preview on the Mac, to spin the house around and examine it from various angles. The Video function was most useful, as I could create a starting point, draw a path through the house, and record the "journey." I exported the video as a MOV file, which I opened and viewed on the Mac using QuickTime.
-
-My wife was able to see (almost) exactly what I saw, and we could even start arranging furniture ahead of the move, too. Now, all I have to do is load up the moving truck and head south.
-
-Sweet Home 3D will also prove useful at my new job. I was looking for a way to improve the map of the college's buildings and was planning to just re-draw it in [Inkscape][9] or Illustrator or something. However, since I have the flat map, I can use Sweet Home 3D to create a 3D version of the floorplan and upload it to our website to make finding the bathrooms so much easier!
-
-### An open source crime scene?
-
-An interesting aside: according to the [Sweet Home 3D blog][10], "the French Forensic Police Office (Scientific Police) recently chose Sweet Home 3D as a tool to design plans [to represent roads and crime scenes]. This is a concrete application of the recommendation of the French government to give the preference to free open source solutions."
-
-This is one more bit of evidence of how open source solutions are being used by citizens and governments to create personal projects, solve crimes, and build worlds.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/tool-find-home
-
-作者:[Jeff Macharyas (Community Moderator)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
-[2]: https://sourceforge.net/projects/sweethome3d/
-[3]: http://www.sweethome3d.com/
-[4]: /file/426441
-[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
-[6]: /file/426451
-[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
-[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
-[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
-[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html
diff --git a/sources/tech/20190317 How To Configure sudo Access In Linux.md b/sources/tech/20190317 How To Configure sudo Access In Linux.md
deleted file mode 100644
index f147c07d55..0000000000
--- a/sources/tech/20190317 How To Configure sudo Access In Linux.md
+++ /dev/null
@@ -1,301 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Configure sudo Access In Linux?)
-[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-How To Configure sudo Access In Linux?
-======
-
-The root user has full control over a Linux system.
-
-root is the most powerful user in a Linux system and can perform any action on it.
-
-Even when other users need to perform administrative actions, you shouldn’t hand out root access, because if they do something wrong there is no way to rectify it.
-
-So what is the solution?
-
-We can grant the corresponding user sudo permission instead.
-
-The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user.
-
-They can perform most administrative operations, though not everything root can do.
-
-### What Is sudo?
-
-sudo is a program that allows a normal user to execute a command as the superuser or another user, as specified by the security policy.
-
-sudo access is controlled by the `/etc/sudoers` file.
-
-### What Is An Advantage Of sudo Users?
-
-sudo is a safe way to run commands on a Linux system, even if you are not familiar with them.
-
- * The system keeps logs in the `/var/log/secure` and `/var/log/auth.log` files, where you can verify what actions were performed by the sudo user.
- * It prompts for a password before each action, giving you a moment to verify the action you are about to perform. If it doesn’t look right, you can safely back out without performing it.
-
-The configuration differs between RHEL-based systems such as Red Hat (RHEL), CentOS, and Oracle Enterprise Linux (OEL), and Debian-based systems such as Debian, Ubuntu, and Linux Mint.
-
-In this article, we will show you how to do this on both families of distributions.
-
-In both cases, it can be done in three ways.
-
- * Add the user to the corresponding group. For RHEL-based systems, add the user to the `wheel` group. For Debian-based systems, add the user to the `sudo` or `admin` group.
- * Add the user to the `/etc/group` file manually.
- * Add the user to the `/etc/sudoers` file using visudo.
-
-### How To Configure sudo Access In RHEL/CentOS/OEL Systems?
-
-It can be done on RHEL-based systems such as Red Hat (RHEL), CentOS, and Oracle Enterprise Linux (OEL) using the following three methods.
-
-### Method-1: How To Grant The Super User Access To A Normal User In Linux Using wheel Group?
-
-wheel is a special group on RHEL-based systems that grants members the privilege to execute restricted commands as the superuser.
-
-Note that the `wheel` group must be enabled in the `/etc/sudoers` file to gain this access.
-
-```
-# grep -i wheel /etc/sudoers
-
-## Allows people in group wheel to run all commands
-%wheel ALL=(ALL) ALL
-# %wheel ALL=(ALL) NOPASSWD: ALL
-```
-
-I assume that you have already created a user account for this. In my case, I’m going to use the `daygeek` user account.
-
-Run the following command to add a user to the wheel group.
-
-```
-# usermod -aG wheel daygeek
-```
-
-We can double-check this by running the following command.
-
-```
-# getent group wheel
-wheel:x:10:daygeek
-```
-
-I’m going to check whether the `daygeek` user can access a file owned by the root user.
-
-```
-$ tail -5 /var/log/secure
-tail: cannot open _/var/log/secure_ for reading: Permission denied
-```
-
-I got an error when I tried to access the `/var/log/secure` file as a normal user. Now I’ll access the same file with sudo; let’s see the magic.
-
-```
-$ sudo tail -5 /var/log/secure
-[sudo] password for daygeek:
-Mar 17 07:01:56 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
-Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
-Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session closed for user root
-Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
-Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
-```
-
-### Method-2: How To Grant The Super User Access To A Normal User In RHEL/CentOS/OEL using /etc/group file?
-
-We can manually add a user to the wheel group by editing the `/etc/group` file.
-
-Just open the file and append the user to the appropriate group line.
-
-```
-$ grep -i wheel /etc/group
-wheel:x:10:daygeek,user1
-```
-
-In this example, I’m going to use the `user1` user account.
-
-I’m going to check whether the `user1` user has sudo access by restarting the `Apache` service on the system. Let’s see the magic.
-
-```
-$ sudo systemctl restart httpd
-[sudo] password for user1:
-
-$ sudo grep -i user1 /var/log/secure
-[sudo] password for user1:
-Mar 17 07:09:47 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
-Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
-Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
-```
-
-### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
-
-sudo access is controlled by the `/etc/sudoers` file, so simply add a user entry to the sudoers file.
-
-Just append the desired user to the /etc/sudoers file using the visudo command.
-
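-As a minimal sketch of that step (using the `user2` account from the example below): running visudo as root opens `/etc/sudoers` in an editor and checks your changes for syntax errors before saving. In the rule itself, the fields mean: from any host (first ALL), acting as any target user (second ALL), run any command (third ALL).
-
-```
-# visudo
-
-# Then append this rule inside the editor:
-user2 ALL=(ALL) ALL
-```
-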
-```
-# grep -i user2 /etc/sudoers
-user2 ALL=(ALL) ALL
-```
-
-In this example, I’m going to use the `user2` user account.
-
-I’m going to check whether the `user2` user has sudo access by restarting the `MariaDB` service on the system. Let’s see the magic.
-
-```
-$ sudo systemctl restart mariadb
-[sudo] password for user2:
-
-$ sudo grep -i mariadb /var/log/secure
-[sudo] password for user2:
-Mar 17 07:23:10 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
-Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/grep -i mariadb /var/log/secure
-```
-
-### How To Configure sudo Access In Debian/Ubuntu Systems?
-
-It can be done on Debian-based systems such as Debian, Ubuntu, and Linux Mint using the following three methods.
-
-### Method-1: How To Grant The Super User Access To A Normal User In Linux Using sudo or admin Groups?
-
-sudo and admin are special groups on Debian-based systems that grant members the privilege to execute restricted commands as the superuser.
-
-Note that the `sudo` or `admin` group must be enabled in the `/etc/sudoers` file to gain this access.
-
-```
-# grep -i 'sudo\|admin' /etc/sudoers
-
-# Members of the admin group may gain root privileges
-%admin ALL=(ALL) ALL
-
-# Allow members of group sudo to execute any command
-%sudo ALL=(ALL:ALL) ALL
-```
-
-I assume that you have already created a user account for this. In my case, I’m going to use the `2gadmin` user account.
-
-Run the following command to add a user to the sudo group.
-
-```
-# usermod -aG sudo 2gadmin
-```
-
-We can double-check this by running the following command.
-
-```
-# getent group sudo
-sudo:x:27:2gadmin
-```
-
-I’m going to check whether the `2gadmin` user can access a file owned by the root user.
-
-```
-$ less /var/log/auth.log
-/var/log/auth.log: Permission denied
-```
-
-I got an error when I tried to access the `/var/log/auth.log` file as a normal user. Now I’ll access the same file with sudo; let’s see the magic.
-
-```
-$ sudo tail -5 /var/log/auth.log
-[sudo] password for 2gadmin:
-Mar 17 20:39:47 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/bin/bash
-Mar 17 20:39:47 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
-Mar 17 20:40:23 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
-Mar 17 20:40:48 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/usr/bin/tail -5 /var/log/auth.log
-Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
-```
-
-Alternatively, we can do the same by adding a user to the `admin` group.
-
-Run the following command to add a user to the admin group.
-
-```
-# usermod -aG admin user1
-```
-
-We can double-check this by running the following command.
-
-```
-# getent group admin
-admin:x:1011:user1
-```
-
-Let’s see the output.
-
-```
-$ sudo tail -2 /var/log/auth.log
-[sudo] password for user1:
-Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/usr/bin/tail -2 /var/log/auth.log
-Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
-```
-
-### Method-2: How To Grant The Super User Access To A Normal User In Debian/Ubuntu using /etc/group file?
-
-We can manually add a user to the sudo or admin group by editing the `/etc/group` file.
-
-Just open the file and append the user to the appropriate group line.
-
-```
-$ grep -i sudo /etc/group
-sudo:x:27:2gadmin,user2
-```
-
-In this example, I’m going to use the `user2` user account.
-
-I’m going to check whether the `user2` user has sudo access by restarting the `Apache` service on the system. Let’s see the magic.
-
-```
-$ sudo systemctl restart apache2
-[sudo] password for user2:
-
-$ sudo tail -f /var/log/auth.log
-[sudo] password for user2:
-Mar 17 21:01:04 Ubuntu18 systemd-logind[559]: New session 22 of user user2.
-Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened for user user2 by (uid=0)
-Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
-```
-
-### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
-
-sudo access is controlled by the `/etc/sudoers` file, so simply add a user entry to the sudoers file.
-
-Just append the desired user to the /etc/sudoers file using the visudo command.
-
-```
-# grep -i user3 /etc/sudoers
-user3 ALL=(ALL:ALL) ALL
-```
-
-In this example, I’m going to use the `user3` user account.
-
-I’m going to check whether the `user3` user has sudo access by restarting the `MariaDB` service on the system. Let’s see the magic.
-
-```
-$ sudo systemctl restart mariadb
-[sudo] password for user3:
-
-$ sudo tail -f /var/log/auth.log
-[sudo] password for user3:
-Mar 17 21:12:32 Ubuntu18 systemd-logind[559]: New session 24 of user user3.
-Mar 17 21:12:49 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
-Mar 17 21:12:49 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
-Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
-Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
-Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
-```
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
diff --git a/sources/tech/20190318 Let-s try dwm - dynamic window manager.md b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md
new file mode 100644
index 0000000000..48f44a33cb
--- /dev/null
+++ b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md
@@ -0,0 +1,150 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Let’s try dwm — dynamic window manager)
+[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
+[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
+
+Let’s try dwm — dynamic window manager
+======
+
+![][1]
+
+If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
+
+You can dynamically choose between tiling, monocle, and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through them using keyboard shortcuts. This article helps you get started with dwm.
+
+## **Installation**
+
+To install dwm on Fedora, run:
+
+```
+$ sudo dnf install dwm dwm-user
+```
+
+The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration which will be explained later in this article.
+
+Additionally, to be able to lock the screen when needed, we’ll also install _slock_ — a simple X display locker.
+
+```
+$ sudo dnf install slock
+```
+
+However, you can use a different one based on your personal preference.
+
+## **Quick start**
+
+To start dwm, choose the _dwm-user_ option on the login screen.
+
+![][2]
+
+After you log in, you’ll see a very simple desktop. In fact, the only thing there will be is a bar at the top listing the nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
+
+### Launching applications
+
+Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There’s also a shortcut _Alt+Shift+Enter_ for opening a terminal.
+
+Now that some apps are running, have a look at the layouts.
+
+### Layouts
+
+There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
+
+The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
+
+![][3]
+
+The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
+
+To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
+
+![][4]
+
+The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
+
+Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
+
+### Workspaces and tags
+
+Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
+
+Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
+
+## **Configuration**
+
+To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry: in Fedora, it’s as simple as editing one file in your home directory, and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
+
+First, you need to copy the file into your home directory using a command similar to the following:
+
+```
+$ mkdir ~/.dwm
+$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
+```
+
+You can get the exact path by running _man dwm-start._
+
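+If you don’t want to look up the exact versioned path, a shell glob can stand in for it. This is a minimal sketch, assuming exactly one dwm source directory exists under /usr/src:
+
+```
+# The glob expands to the installed versioned source directory
+$ cp /usr/src/dwm-*/config.def.h ~/.dwm/config.h
+```
+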
+Second, just edit the _~/.dwm/config.h_ file. As an example, let’s configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
+
+Considering we’ve installed the _slock_ package mentioned earlier in this post, we need to add the following two lines into the file to make it work:
+
+Under the _/* commands */_ comment, add:
+
+```
+static const char *slockcmd[] = { "slock", NULL };
+```
+
+And the following line into _static Key keys[]_ :
+
+```
+{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
+```
+
+In the end, it should look as follows (the added lines are highlighted):
+
+```
+...
+ /* commands */
+ static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
+ static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
+ static const char *termcmd[] = { "st", NULL };
+ static const char *slockcmd[] = { "slock", NULL };
+
+ static Key keys[] = {
+ /* modifier key function argument */
+ { MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
+ { MODKEY, XK_p, spawn, {.v = dmenucmd } },
+ { MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
+ ...
+```
+
+Save the file.
+
+Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, it’s fast enough that you won’t even notice it.
+
+You can try locking your screen now by pressing _Alt+Shift+L_ , then log back in by typing your password and pressing Enter.
+
+## **Conclusion**
+
+If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.
+
+To learn more about dwm, see the project’s homepage at <https://dwm.suckless.org/>.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
+
+作者:[Adam Šamalík][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/asamalik/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
+[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
diff --git a/sources/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md b/sources/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
deleted file mode 100644
index 6277e85bdc..0000000000
--- a/sources/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
+++ /dev/null
@@ -1,188 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?)
-[#]: via: (https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?
-======
-
-We recently wrote an article about checking whether a port is open on a remote Linux server. It helps you check a single server.
-
-If you want to check five servers, that’s no problem: you can use any one of the following commands, such as nc (netcat), nmap, or telnet.
-
-But what if you would like to check 50+ servers?
-
-It’s not easy to check every server by hand; doing so is pointless and you would waste a lot of time unnecessarily.
-
-To overcome this situation, I wrote a small shell script using the nc command that allows us to scan any number of servers for a given port.
-
-If you are looking for a single server scan, you have multiple options. To learn more, simply navigate to the following URL: **[Check Whether A Port Is Open On The Remote Linux System?][1]**
-
-There are two scripts in this tutorial, and both are useful.
-
-Each script serves a different purpose, which you can easily understand from its heading.
-
-Before you read on, ask yourself a few questions; if you don’t know the answers, you will by the end of this article.
-
-How do you check if a port is open on a remote Linux server?
-
-How do you check if a port is open on multiple remote Linux servers?
-
-How do you check if multiple ports are open on multiple remote Linux servers?
-
-### What Is nc (netcat) Command?
-
-nc stands for netcat. Netcat is a simple Unix utility that reads and writes data across network connections, using the TCP or UDP protocol.
-
-It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts.
-
-At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.
-
-Netcat has three main modes of functionality. These are the connect mode, the listen mode, and the tunnel mode.
-
-**Common Syntax for nc (netcat):**
-
-```
-$ nc [-options] [HostName or IP] [PortNumber]
-```
-
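-For instance, here is a quick single-server check using the same flags the scripts below rely on: `-z` scans without sending any data, `-v` prints the result, and `-w3` sets a 3-second timeout. The address is only an example:
-
-```
-# Test whether port 22 is reachable on a single host
-$ nc -zvw3 192.168.1.2 22
-```
-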
-### How To Check If A Port Is Open On Multiple Remote Linux Server?
-
-Use the following shell script if you would like to check whether a given port is open on multiple remote Linux servers.
-
-In my case, we are going to check whether port 22 is open on the following remote servers. Make sure you replace the server list with your own.
-
-Make sure you add your servers to the `server-list.txt` file, one server per line.
-
-```
-# cat server-list.txt
-192.168.1.2
-192.168.1.3
-192.168.1.4
-192.168.1.5
-192.168.1.6
-192.168.1.7
-```
-
-Use the following script to achieve this.
-
-```
-# vi port_scan.sh
-
-#!/bin/sh
-# Loop over every server in server-list.txt and test port 22 on each
-for server in $(cat server-list.txt)
-do
-nc -zvw3 $server 22
-done
-```
-
-Set executable permission on the `port_scan.sh` file.
-
-```
-$ chmod +x port_scan.sh
-```
-
-Finally, run the script.
-
-```
-# sh port_scan.sh
-
-Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
-```
-
-### How To Check If Multiple Ports Are Open On Multiple Remote Linux Server?
-
-Use the following script if you want to check multiple ports on multiple servers.
-
-In my case, we are going to check whether ports 22 and 80 are open on the given servers. Make sure you replace the ports and server names with your own.
-
-Make sure you add the ports to the `port-list.txt` file, one port per line.
-
-```
-# cat port-list.txt
-22
-80
-```
-
-Make sure you add your servers to the `server-list.txt` file, one server per line.
-
-```
-# cat server-list.txt
-192.168.1.2
-192.168.1.3
-192.168.1.4
-192.168.1.5
-192.168.1.6
-192.168.1.7
-```
-
-Use the following script to achieve this.
-
-```
-# vi multiple_port_scan.sh
-
-#!/bin/sh
-# Loop over every server, then every port, and test each combination
-for server in $(cat server-list.txt)
-do
-for port in $(cat port-list.txt)
-do
-nc -zvw3 $server $port
-echo ""
-done
-done
-```
-
-Set executable permission on the `multiple_port_scan.sh` file.
-
-```
-$ chmod +x multiple_port_scan.sh
-```
-
-Finally, run the script.
-
-```
-# sh multiple_port_scan.sh
-Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.2 80 port [tcp/http] succeeded!
-
-Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.3 80 port [tcp/http] succeeded!
-
-Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.4 80 port [tcp/http] succeeded!
-
-Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.5 80 port [tcp/http] succeeded!
-
-Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.6 80 port [tcp/http] succeeded!
-
-Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
-Connection to 192.168.1.7 80 port [tcp/http] succeeded!
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/
diff --git a/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md
new file mode 100644
index 0000000000..52f02edc95
--- /dev/null
+++ b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development)
+[#]: via: (https://itsfoss.com/nvidia-jetson-nano/)
+[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
+
+NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development
+======
+
+At the [GPU Technology Conference][1] NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards which cost between $299 and $1099, the Jetson Nano bears a low cost of $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars to get such a product.
+
+![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4]
+
+### Bringing back AI development from ‘cloud’
+
+In the last few years, we have seen a lot of [advances in AI research][5]. Traditionally AI computing was always done in the cloud, where there was plenty of processing power available.
+
+Recently, there’s been a trend toward shifting this computation away from the cloud and doing it locally. This is called [Edge Computing][6]. Embedded products that could handle the complex calculations required for AI and machine learning used to be sparse, but these days we’re seeing a great explosion in this product segment.
+
+Products like the [SparkFun Edge][7] and [OpenMV Board][8] are good examples. The Jetson Nano is NVIDIA’s latest offering in this market. When connected to your system, it will be able to supply the processing power needed for machine learning and AI tasks without having to rely on the cloud.
+
+This is great for privacy as well as saving on internet bandwidth. It is also more secure since your data always stays on the device itself.
+
+### Jetson Nano focuses on smaller AI projects
+
+![Jetson Nano powered JetBot][9]
+
+While previously released Jetson boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano targets smaller projects: projects where you need processing power that boards like the [Raspberry Pi][12] cannot provide.
+
+Did you know?
+
+NVIDIA’s JetPack SDK provides a ‘complete desktop Linux environment based on Ubuntu 18.04 LTS’. In other words, the Jetson Nano is powered by Ubuntu Linux.
+
+### NVIDIA Jetson Nano Specifications
+
+For $99, you get 472 GFLOPS of processing power from 128 NVIDIA Maxwell architecture CUDA cores, a quad-core ARM A57 processor, 4GB of LPDDR4 RAM, 16GB of on-board storage, and 4K video encode/decode capabilities. The port selection is also pretty decent, with the Nano offering Gigabit Ethernet, a MIPI camera connector, display outputs, and a couple of USB ports (1×3.0, 3×2.0). The full range of specifications can be found [here][13].
+
+CPU | Quad-core ARM® Cortex®-A57 MPCore processor
+---|---
+GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
+RAM | 4 GB 64-bit LPDDR4
+Storage | 16 GB eMMC 5.1 Flash
+Camera | 12 lanes (3×4 or 4×2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps)
+Connectivity | Gigabit Ethernet
+Display Ports | HDMI 2.0 and DP 1.2
+USB Ports | 1 USB 3.0 and 3 USB 2.0
+Other | 1 x1/2/4 PCIE, 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs
+Size | 69.6 mm x 45 mm
+
+Along with good hardware, you get support for the majority of popular AI frameworks like TensorFlow, PyTorch, Keras, etc. It also has support for NVIDIA’s [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX Boards.
+
+“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation’s supercomputer. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing.” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA.
+
+**What do you think of Jetson Nano?**
+
+The availability of Jetson Nano differs from country to country.
+
+The [Intel Neural Compute Stick][17] is another such accelerator, competitively priced at $79. It’s good to see the big manufacturers stirring up competition at these lower price points.
+
+I’m looking forward to getting my hands on the product if possible.
+
+What do you guys think about a product like this? Let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/nvidia-jetson-nano/
+
+作者:[Atharva Lele][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/atharva/
+[b]: https://github.com/lujun9972
+[1]: https://www.nvidia.com/en-us/gtc/
+[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
+[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1
+[5]: https://itsfoss.com/nanotechnology-open-science-ai/
+[6]: https://en.wikipedia.org/wiki/Edge_computing
+[7]: https://www.sparkfun.com/news/2886
+[8]: https://openmv.io/
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1
+[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/
+[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
+[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
+[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications
+[14]: https://developer.nvidia.com/embedded/jetpack
+[15]: https://developer.nvidia.com/deepstream-sdk
+[17]: https://software.intel.com/en-us/movidius-ncs-get-started
diff --git a/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md
new file mode 100644
index 0000000000..f3f1f7c72b
--- /dev/null
+++ b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 10 New Linux SBCs to Watch in 2019)
+[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019)
+[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown)
+
+Top 10 New Linux SBCs to Watch in 2019
+======
+
+![UP Xtreme][1]
+
+Aaeon's Linux-ready UP Xtreme SBC.
+
+[Used with permission][2]
+
+A recent [Global Market Insights report][3] projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don’t need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4].
+
+Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5].
+
+Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.
+
+Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google’s i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.
+
+The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.
+
+**[UP Xtreme][8]** —The latest in Aaeon’s line of community-backed SBCs taps Intel’s 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around -- and possibly the most expensive.
+
+The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon’s new AI Core X modules, which offer Intel’s latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration.
+
+**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module that’s sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.
+
+Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there’s a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.
+
+**[Coral Dev Board][10]** —Google’s very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Google’s Edge TPU AI chip—a stripped-down version of Google’s TPU designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.
+
+The Coral Dev Board combines the Edge TPU chip with NXP’s quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia’s Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.
+
+**[SBC-C43][11]** —Seco’s commercial, industrial temperature SBC-C43 board is the first SBC based on NXP’s high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.
+
+The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.
+
+**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP’s new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you’re limited to HD video resolution.
+
+Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.
+
+**[Pine H64 Model B][13]** —Pine64’s latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.
+
+The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.
+
+**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we’re more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we’ve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.
+
+The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.
+
+**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI’s dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC’s dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.
+
+Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.
+
+**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.
+
+The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board’s expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM camera depth cameras. The SBC runs Linux with ROS (Robot Operating System).
+
+**[Avenger96][19]** —Like Arrow’s AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST’s recently announced [STM32MP15x][20] family. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.
+
+This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It’s unclear whether the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module reside on the Avenger96 module or the carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There’s also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019
+
+作者:[Eric Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/ericstephenbrown
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme)
+[2]: /LICENSES/CATEGORY/USED-PERMISSION
+[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html
+[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond
+[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/
+[6]: https://www.embedded-world.de/en
+[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world
+[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/
+[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/
+[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/
+[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/
+[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/
+[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/
+[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera
+[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/
+[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/
+[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/
+[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/
+[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/
+[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc
diff --git a/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md
new file mode 100644
index 0000000000..2d794f2d29
--- /dev/null
+++ b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to set up Fedora Silverblue as a gaming station)
+[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/)
+[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
+
+How to set up Fedora Silverblue as a gaming station
+======
+
+![][1]
+
+This article gives you a step-by-step guide to turn your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam.
+
+Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers.
+
+### Add the Flathub repository
+
+This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.
+
+First, go to flathub.org and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page.
+
+![Quick setup button on flathub.org/home][3]
+
+This redirects you to flatpak.org/setup, where you should click on the Fedora icon.
+
+![Fedora icon on flatpak.org/setup][4]
+
+Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application.
+
+![Flathub repository file button on flatpak.org/setup/Fedora][5]
+
+The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system.
+
+![Install button in GNOME Software][6]
+
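+If you prefer the terminal, you can add the same repository with a single command. The following is a minimal CLI alternative, using the remote-add invocation Flathub documents for manual setups; it should be equivalent to the graphical steps above:
+
+```
+flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+```
+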
+### Install the Steam flatpak
+
+You can now search for the _Steam_ flatpak in _GNOME Software_. If you can’t find it, try logging out and back in, or rebooting, in case _GNOME Software_ hasn’t read the new metadata yet. That refresh happens automatically the next time you log in.
+
+![Searching for Steam][7]
+
+Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_.
+
+![Steam page in GNOME Software][8]
+
+And now you have the _Steam_ flatpak installed on your system.
+
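+As a terminal alternative, and assuming the Flathub remote added earlier, the same installation can be sketched with the `flatpak` CLI; `com.valvesoftware.Steam` is Steam’s application ID on Flathub:
+
+```
+# Install the Steam flatpak from Flathub, then launch it
+flatpak install flathub com.valvesoftware.Steam
+flatpak run com.valvesoftware.Steam
+```
+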
+### Enable Steam Play in Steam
+
+Now that you have _Steam_ installed, launch it and log in. To play Windows games too, you need to enable _Steam Play_ in _Steam._ To enable it, choose _Steam > Settings_ from the menu in the main window.
+
+![Settings button in Steam][9]
+
+Navigate to the _Steam Play_ section. You should see that the option _Enable Steam Play for supported titles_ is already ticked, but it’s recommended you also tick _Enable Steam Play for all other titles_. There are plenty of games that are actually playable but not yet whitelisted on _Steam_. To see which games are playable, visit [ProtonDB][10] and search for your favorite game, or just look for the games with the most platinum reports.
+
+![Steam Play settings menu on Steam][11]
+
+If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine:
+
+> [Play Windows games on Fedora with Steam Play and Proton][12]
+
+### Appendix
+
+You’re now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10] and report bugs you find on [GitHub][13], because sharing is nice. 🙂
+
+* * *
+
+_Photo by [Hardik Sharma][14] on [Unsplash][15]._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/
+
+作者:[Michal Konečný][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/zlopez/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg
+[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png
+[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png
+[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png
+[10]: https://www.protondb.com/
+[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png
+[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/
+[13]: https://github.com/ValveSoftware/Proton
+[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md
new file mode 100644
index 0000000000..3de297db06
--- /dev/null
+++ b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md
@@ -0,0 +1,50 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity)
+[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/)
+[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/)
+
+Contribute at the Fedora Test Day for Fedora Modularity
+======
+
+![][1]
+
+Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2].
+
+The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images you’ll need to participate. Read on for more information on the test day.
+
+### How do test days work?
+
+A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
+
+To contribute, you only need to be able to do the following things:
+
+ * Download test materials, which include some large files
+ * Read and follow directions step by step
+
+The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day [web application][4]. If you’re available on or around the day of the event, please do some testing and report your results.
+
+Happy testing, and we hope to see you on test day.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/
+
+作者:[Sumantro Mukherjee][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/sumantrom/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png
+[2]: https://docs.fedoraproject.org/en-US/modularity/
+[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day
+[4]: http://testdays.fedorainfracloud.org/events/61
diff --git a/sources/tech/20190325 Getting started with Vim- The basics.md b/sources/tech/20190325 Getting started with Vim- The basics.md
deleted file mode 100644
index 978f9293d0..0000000000
--- a/sources/tech/20190325 Getting started with Vim- The basics.md
+++ /dev/null
@@ -1,222 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (Modrisco)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Getting started with Vim: The basics)
-[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
-[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
-
-Getting started with Vim: The basics
-======
-
-Learn to use Vim enough to get by at work or for a new project.
-
-![Person standing in front of a giant computer screen with numbers, data][1]
-
-I remember the very first time I encountered Vim. I was a university student, and the computers in the computer science department's lab were installed with Ubuntu Linux. While I had been exposed to different Linux variations (like RHEL) even before my college years (Red Hat sold its CDs at Best Buy!), this was the first time I needed to use the Linux operating system regularly, because my classes required me to do so. Once I started using Linux, like many others before and after me, I began to feel like a "real programmer."
-
-![Real Programmers comic][2]
-
-Real Programmers, by [xkcd][3]
-
-Students could use a graphical text editor like [Kate][4], which was installed on the lab computers by default. For students who could use the shell but weren't used to the console-based editor, the popular choice was [Nano][5], which provided good interactive menus and an experience similar to Windows' graphical text editor.
-
-I used Nano sometimes, but I heard awesome things about [Vi/Vim][6] and [Emacs][7] and really wanted to give them a try (mainly because they looked cool, and I was also curious to see what was so great about them). Using Vim for the first time scared me—I did not want to mess anything up! But once I got the hang of it, things became much easier and I could appreciate the editor's powerful capabilities. As for Emacs, well, I sort of gave up, but I'm happy I stuck with Vim.
-
-In this article, I will walk through Vim (based on my personal experience) just enough so you can get by with it as an editor on a Linux system. This will neither make you an expert nor even scratch the surface of many of Vim's powerful capabilities. But the starting point always matters, and I want to make the beginning experience as easy as possible, and you can explore the rest on your own.
-
-### Step 0: Open a console window
-
-Before jumping into Vim, you need to do a little preparation. Open a console terminal from your Linux operating system. (Since Vim is also available on MacOS, Mac users can use these instructions, also.)
-
-Once a terminal window is up, type the **ls** command to list the current directory. Then, type **mkdir Tutorial** to create a new directory called **Tutorial**. Go inside the directory by typing **cd Tutorial**.
-
-![Create a folder][8]
-
-That's it for preparation. Now it's time to move on to the fun part—starting to use Vim.
-
-### Step 1: Create and close a Vim file without saving
-
-Remember when I said I was scared to use Vim at first? Well, the scary part was thinking, "what if I change an existing file and mess things up?" After all, several computer science assignments required me to work on existing files by modifying them. I wanted to know: _How can I open and close a file without saving my changes?_
-
-The good news is you can use the same command to create or open a file in Vim: **vim <FILE_NAME>**, where **<FILE_NAME>** represents the target file name you want to create or modify. Let's create a file named **HelloWorld.java** by typing **vim HelloWorld.java**.
-
-Hello, Vim! Now, here is a very important concept in Vim, possibly the most important to remember: Vim has multiple modes. Here are three you need to know to do Vim basics:
-
-Mode | Description
----|---
-Normal | Default; for navigation and simple editing
-Insert | For explicitly inserting and modifying text
-Command Line | For operations like saving, exiting, etc.
-
-Vim has other modes, like Visual, Select, and Ex-Mode, but Normal, Insert, and Command Line modes are good enough for us.
-
-You are now in Normal mode. If you have text, you can move around with your arrow keys or other navigation keystrokes (which you will see later). To make sure you are in Normal mode, simply hit the **Esc** (Escape) key.
-
-> **Tip:** **Esc** switches to Normal mode. Even though you are already in Normal mode, hit **Esc** just for practice's sake.
-
-Now, this will be interesting. Press **:** (the colon key) followed by **q!** (i.e., **:q!** ). Your screen will look like this:
-
-![Editing Vim][9]
-
-Pressing the colon in Normal mode switches Vim to Command Line mode, and the **:q!** command quits the Vim editor without saving. In other words, you are abandoning all changes. You can also use **ZQ** ; choose whichever option is more convenient.
-
-Once you hit **Enter** , you should no longer be in Vim. Repeat the exercise a few times, just to get the hang of it. Once you've done that, move on to the next section to learn how to make a change to this file.
-
-### Step 2: Make and save modifications in Vim
-
-Reopen the file by typing **vim HelloWorld.java** and pressing the **Enter** key. Insert mode is where you can make changes to a file. First, hit **Esc** to make sure you are in Normal mode, then press **i** to go into Insert mode. (Yes, that is the letter **i**.)
-
-In the lower-left, you should see **\-- INSERT --**. This means you are in Insert mode.
-
-![Vim insert mode][10]
-
-Type some Java code. You can type anything you want, but here is an example for you to follow. Your screen will look like this:
-
-
-```
-public class HelloWorld {
- public static void main([String][11][] args) {
- }
-}
-```
-Very pretty! Notice how the text is highlighted in Java syntax highlight colors. Because you started the file in Java, Vim will detect the syntax color.
-
-Save the file. Hit **Esc** to leave Insert mode and enter Command Line mode. Type **:** and follow that with **x!** (i.e., a colon followed by x and !). Hit **Enter** to save the file. You can also type **wq** to perform the same operation.
-
-Now you know how to enter text using Insert mode and save the file using **:x!** or **:wq**.
-
-### Step 3: Basic navigation in Vim
-
-While you can always use your friendly Up, Down, Left, and Right arrow buttons to move around a file, that would be very difficult in a large file with almost countless lines. It's also helpful to be able to jump around within a line. Although Vim has a ton of awesome navigation features, the first one I want to show you is how to go to a specific line.
-
-Press the **Esc** key to make sure you are in Normal mode, then type **:set number** and hit **Enter** .
-
-Voila! You see line numbers on the left side of each line.
-
-![Showing Line Numbers][12]
-
-OK, you may say, "that's cool, but how do I jump to a line?" Again, make sure you are in Normal mode, then press **:<LINE_NUMBER>**, where **<LINE_NUMBER>** is the number of the line you want to go to, and press **Enter**. Try moving to line 2.
-
-```
-:2
-```
-
-Now move to line 3.
-
-![Jump to line 3][13]
-
-But imagine a scenario where you are dealing with a file that is 1,000 lines long and you want to go to the end of the file. How do you get there? Make sure you are in Normal mode, then type **:$** and press **Enter**.
-
-You will be on the last line!
-
-Now that you know how to jump among the lines, as a bonus, let's learn how to move to the end of a line. Make sure you are on a line with some text, like line 3, and type **$**.
-
-![Go to the last character][14]
-
-You're now at the last character on the line. In this example, the open curly brace is highlighted to show where your cursor moved to, and the closing curly brace is highlighted because it is the opening curly brace's matching character.
-
-That's it for basic navigation in Vim. Wait, don't exit the file, though. Let's move to basic editing in Vim. Feel free to grab a cup of coffee or tea, though.
-
-### Step 4: Basic editing in Vim
-
-Now that you know how to navigate around a file by hopping onto the line you want, you can use that skill to do some basic editing in Vim. Switch to Insert mode. (Remember how to do that, by hitting the **i** key?) Sure, you can edit by using the keyboard to delete or insert characters, but Vim offers much quicker ways to edit files.
-
-Move to line 3, where it shows **public static void main(String[] args) {**. Quickly hit the **d** key twice in succession. Yes, that is **dd**. If you did it successfully, you will see a screen like this, where line 3 is gone, and every following line moved up by one (i.e., line 4 became line 3).
-
-![Deleting A Line][15]
-
-That's the _delete_ command. Don't fear! Hit **u** and you will see the deleted line recovered. Whew. This is the _undo_ command.
-
-![Undoing a change in Vim][16]
-
-The next lesson is learning how to copy and paste text, but first, you need to learn how to highlight text in Vim. Press **v** and move your Left and Right arrow buttons to select and deselect text. This feature is also very useful when you are showing code to others and want to identify the code you want them to see.
-
-![Highlighting text in Vim][17]
-
-Move to line 4, where it says **System.out.println("Hello, Opensource");**. Highlight all of line 4. Done? OK, while line 4 is still highlighted, press **y**. This is called _yank_ mode, and it will copy the text to the clipboard. Next, create a new line underneath by entering **o**. Note that this will put you into Insert mode. Get out of Insert mode by pressing **Esc**, then hit **p**, which stands for _paste_. This will paste the copied text onto the line below.
-
-![Pasting in Vim][18]
-
-As an exercise, repeat these steps but also modify the text on your newly created lines. Also, make sure the lines are aligned well.
-
-> **Hint:** You need to switch back and forth between Insert mode and Command Line mode to accomplish this task.
-
-Once you are finished, save the file with the **x!** command. That's all for basic editing in Vim.
-
-### Step 5: Basic searching in Vim
-
-Imagine your team lead wants you to change a text string in a project. How can you do that quickly? You might want to search for the line using a certain keyword.
-
-Vim's search functionality can be very useful. Go into Command Line mode by (1) pressing the **Esc** key, then (2) pressing the colon **:** key. We can search for a keyword by entering **:/<SEARCH_KEYWORD>**, where **<SEARCH_KEYWORD>** is the text string you want to find. Here we are searching for the keyword string "Hello." In the image below, the colon is missing but required.
-
-![Searching in Vim][19]
-
-However, a keyword can appear more than once, and this may not be the one you want. So, how do you navigate around to find the next match? You simply press the **n** key, which stands for _next_. Make sure that you aren't in Insert mode when you do this!
-
-### Bonus step: Use split mode in Vim
-
-That pretty much covers all the Vim basics. But, as a bonus, I want to show you a cool Vim feature called _split mode_.
-
-Get out of _HelloWorld.java_ and create a new file. In a terminal window, type **vim GoodBye.java** and hit **Enter** to create a new file named _GoodBye.java_.
-
-Enter any text you want; I decided to type "Goodbye." Save the file. (Remember you can use **:x!** or **:wq** in Command Line mode.)
-
-In Command Line mode, type **:split HelloWorld.java** , and see what happens.
-
-![Split mode in Vim][20]
-
-Wow! Look at that! The **split** command created horizontally divided windows with _HelloWorld.java_ above and _GoodBye.java_ below. How can you switch between the windows? Hold **Control** (on a Mac) or **CTRL** (on a PC) then hit **ww** (i.e., **w** twice in succession).
-
-As a final exercise, try to edit _GoodBye.java_ to match the screen below by copying and pasting from _HelloWorld.java_.
-
-![Modify GoodBye.java file in Split Mode][21]
-
-Save both files, and you are done!
-
-> **TIP 1:** If you want to arrange the files vertically, use the command **:vsplit <FILE_NAME>** (instead of **:split <FILE_NAME>**), where **<FILE_NAME>** is the name of the file you want to open in Split mode.
->
-> **TIP 2:** You can open more than two files by calling as many additional **split** or **vsplit** commands as you want. Try it and see how it looks.
-
-### Vim cheat sheet
-
-In this article, you learned how to use Vim just enough to get by for work or a project. But this is just the beginning of your journey to unlock Vim's powerful capabilities. Be sure to check out other great tutorials and tips on Opensource.com.
-
-To make things a little easier, I've summarized everything you've learned into [a handy cheat sheet][22].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/getting-started-vim
-
-作者:[Bryant Son (Red Hat, Community Moderator)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/brson
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
-[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
-[3]: https://xkcd.com/378/
-[4]: https://kate-editor.org
-[5]: https://www.nano-editor.org
-[6]: https://www.vim.org
-[7]: https://www.gnu.org/software/emacs
-[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
-[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
-[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
-[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
-[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
-[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
-[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
-[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
-[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
-[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
-[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
-[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
-[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
-[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
-[22]: https://opensource.com/downloads/cheat-sheet-vim
diff --git a/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md
new file mode 100644
index 0000000000..22f7df8876
--- /dev/null
+++ b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How Open Source Is Accelerating NFV Transformation)
+[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation)
+[#]: author: (Pam Baker https://www.linux.com/users/pambaker)
+
+How Open Source Is Accelerating NFV Transformation
+======
+
+![NFV][1]
+
+In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers.
+
+[Creative Commons Zero][2]
+
+Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels.
+
+In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.
+
+One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.
+
+“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”
+
+Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.
+
+**Linux.com: Why is open source central to innovation in general for telecommunications service providers?**
+
+**Nadeau:** The first reason is that the service providers can be more in control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.
+
+And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.
+
+**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.**
+
+**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.
+
+There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.
+
+NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.
+
+You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.
+
+But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.
+
+**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.**
+
+**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, flexible, and scalable, so operators can really use it as a tool now.
+
+**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?**
+
+**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code, which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot like, if not identical to, their competitors’ businesses.
+
+These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.
+
+_Learn more at [Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation
+
+作者:[Pam Baker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/pambaker
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV)
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/
+[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/
+[5]: https://www.linkedin.com/in/tom-nadeau/
+[6]: https://onseu18.sched.com/event/Fmpr
diff --git a/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md
new file mode 100644
index 0000000000..52c7c925dd
--- /dev/null
+++ b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An inside look at an IIoT-powered smart factory)
+[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+An inside look at an IIoT-powered smart factory
+======
+
+### Despite housing some 50 robots and 50 people, Tempo Automation’s gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant.
+
+![Tempo Automation][1]
+
+As someone who’s spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That’s why I was so interested in [Tempo Automation][2]’s new 42,000-square-foot facility in San Francisco’s trendy Design District.
+
+Frankly, I pictured the company’s facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin’s 1936 classic _Modern Times_, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)
+
+**[ Learn more about the[industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
+
+Not exactly. As you can see from the below pictures, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including solder jet and stencil printers, zone reflow ovens, 3D X-ray devices, and many more.
+
+![Tempo Automation office space][6]
+
+![Tempo Automation factory floor][7]
+
+## How Tempo Automation's 'smart factory' works
+
+On the front end, Tempo’s customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo system’s platform (called Tempocom) creates a manufacturing plan and automatically programs the factory’s machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.
+
+While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.
+
+## Connecting the digital thread
+
+“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo’s co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”
+
+Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.
+
+“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production. This data is then streamed back through the Tempo secure cloud architecture to the customer as a ‘Production Forensics’ report.”
+
+Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.”
+
+## Traditional IoT, too
+
+Of course, the Tempo factory isn’t all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing, the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg
+[2]: http://www.tempoautomation.com/
+[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin
+[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
+[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg
+[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg
+[8]: https://www.linkedin.com/in/shashanksamala/
+[9]: https://aws.amazon.com/govcloud-us/
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md
new file mode 100644
index 0000000000..803b6a993d
--- /dev/null
+++ b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology)
+[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all)
+[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/)
+
+Changes in SD-WAN Purchase Drivers Show Maturity of the Technology
+======
+
+![istock][1]
+
+[SD-WANs][2] have been available now for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature, and customers were dipping their toes in the water first as a test. Recently, however, there are signs that the market is maturing, which also happens to coincide with an acceleration of the market.
+
+Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities, and direct cloud connectivity—all of which require an architectural change.
+
+This is in stark contrast to the approach of early adopters, who looked for opex reductions and capex savings, achieving them in the past by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings now ranks tied for fifth place among the purchase drivers of SD-WAN, and that reduced capex is last, indicating that cost savings no longer carry the same level of importance as they did with early adopters.
+
+The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN.
+
+With [SD-WAN][3], the “software defined” indicates that the control plane has been separated from the data plane, enabling the control plane to be abstracted away from the hardware and allowing centralized, distributed, and hybrid control architectures, working alongside the centralized management of those architectures. This provides many benefits, the biggest of which is to make WAN provisioning easier.
+
+![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4]
+
+With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network.
+
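+To make the call-home pattern concrete, here is a deliberately generic sketch, not any vendor’s actual API: the orchestrator URL, endpoint path, config location, and agent service name are all hypothetical, and a real appliance would authenticate itself (for example, with device certificates) before pulling configuration:
+
+```
+#!/bin/sh
+# Hypothetical zero-touch-provisioning call-home sketch (illustrative only).
+# On boot, the appliance identifies itself and pulls its configuration from
+# a central orchestrator, so no on-site manual setup is needed.
+SERIAL=$(cat /sys/class/dmi/id/product_serial)   # device identity
+ORCH="https://orchestrator.example.com"          # assumed central controller
+curl -fsSL "$ORCH/v1/devices/$SERIAL/config" -o /etc/sdwan/config.json \
+  && systemctl restart sdwan-agent               # apply the pulled configuration
+```
+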
+Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.”
+
+At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation.
+
+### Mainstream buyers set new expectations for SD-WAN
+
+With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation now ranks tied in fourth place in what buyers look for from an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot be achieved at the expense of security, vendor stability, or quality without putting operations at risk.
+
+It’s not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds.
+
+Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology.
+
+![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6]
+
+### The bottom line
+
+The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet calls emphasizing greater simplicity and risk reduction, as well as bring about features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all
+
+作者:[Cliff Grossner][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Cliff-Grossner/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg
+[2]: https://www.silver-peak.com/sd-wan
+[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg
+[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor
+[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg
diff --git a/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md
new file mode 100644
index 0000000000..babc54c0f7
--- /dev/null
+++ b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md
@@ -0,0 +1,52 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Today’s Retailer is Turning to the Edge for CX)
+[#]: via: (https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all)
+[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/)
+
+Today’s Retailer is Turning to the Edge for CX
+======
+
+### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census.
+
+![iStock][1]
+
+Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That’s putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3].
+
+With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4].
+
+So what new and innovative technologies are edge environments supporting? Here’s where retail is headed with customer service and how edge computing will help them get there.
+
+**Face forward** : Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take CaliBurger, for example. The restaurant chain recently tested out self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer’s most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience.
+
+**Customer experience on display** : Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media’s projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point-of-decision.
+
+**Backroom access** : Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages.
+
+**Robots to the rescue** : Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers.
+
+These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible.
+
+To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all
+
+作者:[Cindy Waxer][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Cindy-Waxer/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg
+[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales
+[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/
+[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds
+[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/sources/tech/20190326 Using Square Brackets in Bash- Part 1.md b/sources/tech/20190326 Using Square Brackets in Bash- Part 1.md
deleted file mode 100644
index ea54fdabed..0000000000
--- a/sources/tech/20190326 Using Square Brackets in Bash- Part 1.md
+++ /dev/null
@@ -1,154 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Using Square Brackets in Bash: Part 1)
-[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
-[#]: author: (Paul Brown https://www.linux.com/users/bro66)
-
-Using Square Brackets in Bash: Part 1
-======
-
-![square brackets][1]
-
-This tutorial tackles square brackets and how they are used in different contexts at the command line.
-
-[Creative Commons Zero][2]
-
-After taking a look at [how curly braces (`{}`) work on the command line][3], now it’s time to tackle brackets (`[]`) and see how they are used in different contexts.
-
-### Globbing
-
-The first and easiest use of square brackets is in _globbing_. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type, say, you wanted to list JPEGs, but not PNGs:
-
-```
-ls *.jpg
-```
-
-Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.
-
-In the example above, the asterisk means " _zero or more characters_ ". There is another globbing wildcard, `?`, which means " _exactly one character_ ", so, while
-
-```
-ls d*k*
-```
-
-will list files called _darkly_ and _ducky_ (and _dark_ and _duck_ -- remember `*` can also be zero characters),
-
-```
-ls d*k?
-```
-
-will not list _darkly_ (or _dark_ or _duck_ ), but it will list _ducky_.
-
-Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, `cd` into it and create a bunch of files like this:
-
-```
-touch file0{0..9}{0..9}
-```
-
-(If you don't know why that works, [take a look at the last installment that explains curly braces `{}`][3]).
-
-This will create files _file000_ , _file001_ , _file002_ , etc., through _file097_ , _file098_ and _file099_.
-
-Then, to list the files in the 70s and 80s, you can do this:
-
-```
-ls file0[78]?
-```
-
-To list _file022_ , _file027_ , _file028_ , _file052_ , _file057_ , _file058_ , _file092_ , _file097_ , and _file098_ you can do this:
-
-```
-ls file0[259][278]
-```
-
-Of course, you can use globbing (and square brackets for sets) for more than just `ls`. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.
-
-Let's say you want to create duplicates of files _file010_ through _file029_ and call the copies _archive010_, _archive011_, _archive012_, etc.
-
-You can't do:
-
-```
-cp file0[12]? archive0[12]?
-```
-
-Because globbing is for matching against existing files and directories and the _archive..._ files don't exist yet.
-
-Doing this:
-
-```
-cp file0[12]? archive0[1..2][0..9]
-```
-
-won't work either, because `cp` doesn't let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:
-
-```
-mkdir archive
-
-cp file0[12]? archive
-```
-
-would work, but it would copy the files, using their same names, into a directory called _archive/_. This is not what you set out to do.
-
-However, if you look back at [the article on curly braces (`{}`)][3], you will remember how you can use `%` to lop off the end of a string contained in a variable.
-
-Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of `%`, you use `#`.
-
-For practice, you can try this:
-
-```
-myvar="Hello World"
-
-echo Goodbye Cruel ${myvar#Hello}
-```
-
-It prints " _Goodbye Cruel World_ " because `#Hello` gets rid of the _Hello_ part at the beginning of the string stored in `myvar`.
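-
-For symmetry, here is a quick sketch of the `%` version mentioned above (standard Bash parameter expansion, added here as an extra example rather than taken from the original article):
-
-```
-myvar="Hello World"
-
-echo "${myvar%World}Cruel World"
-```
-
-Here `%World` trims _World_ from the end of the string, just as `#Hello` trims _Hello_ from the beginning, so this prints " _Hello Cruel World_ ".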
-
-You can use this feature alongside your globbing tools to make your _archive_ duplicates:
-
-```
-for i in file0[12]?; \
-do \
-  cp "$i" archive"${i#file}"; \
-done
-```
-
-The first line tells the Bash interpreter that you want to loop through all the files that contain the string _file0_ followed by the digits _1_ or _2_, and then one other character, which can be anything. The second line, `do`, indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.
-
-Line 3 is where the actual copying happens, and you use the contents of the loop variable _`i`_ twice: first, straight out, as the first parameter of the `cp` command, and then you add _archive_ to its contents, while at the same time cutting off _file_. So, if _`i`_ contains, say, _file019_...
-
-```
-"archive" + "file019" - "file" = "archive019"
-```
-
-the `cp` line is expanded to this:
-
-```
-cp file019 archive019
-```
-
-Finally, notice how you can use the backslash `\` to split a chain of commands over several lines for clarity.
-
-In part two, we’ll look at more ways to use square brackets. Stay tuned.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
-
-作者:[Paul Brown][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/bro66
-[b]: https://github.com/lujun9972
-[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd (square brackets)
-[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
-[3]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
diff --git a/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md
new file mode 100644
index 0000000000..2a0dde5fb3
--- /dev/null
+++ b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies)
+[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco forms VC firm looking to weaponize fledgling technology companies
+======
+
+### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests.
+
+![BrianaJackson / Getty][1]
+
+Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market.
+
+Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies.
+
+**[ Now see[7 free network tools you must have][3]. ]**
+
+Of Decibel, Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.”
+
+“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.”
+
+“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.”
+
+Indeed, Cisco already has an investment arm, Cisco Investments, that focuses on later-stage startups, the company says. Cisco said this arm invests $200 to $300 million annually, and it will continue its charter of investing in and partnering with best-in-class companies in core and adjacent markets.
+
+Cisco didn’t talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasn’t closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote.
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]**
+
+Decibel does plan to invest anywhere from $5 million to $15 million in each startup in its portfolio, Cisco says.
+
+“Cisco has a culture of leveraging both internal and external innovation – accelerating our rich internal development capabilities by our ability to also partner, invest and acquire,” Salvagno said.
+
+He said the company recognizes that significant innovation happens outside of the walls of Cisco. Cisco has acquired more than 200 companies, and more than one in eight Cisco employees joined through an acquisition. "We have a deep bench of acquired founders, many of which play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg
+[2]: https://twitter.com/jonsakoda
+[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
+[4]: https://www.decibel.vc/the-blast/announcingdecibel
+[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund
+[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html
+[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml
+[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html
+[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md b/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md
new file mode 100644
index 0000000000..f1d9d3564f
--- /dev/null
+++ b/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (HPE introduces hybrid cloud consulting business)
+[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+HPE introduces hybrid cloud consulting business
+======
+
+### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems.
+
+![Hewlett Packard Enterprise][1]
+
+Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly.
+
+Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on several recent acquisitions: British cloud consultancy RedPixie, Amazon Web Services (AWS) specialists Cloud Technology Partners, and the automated discovery capabilities of Irish startup iQuate.
+
+Right Mix Advisor gathers data points from the company’s entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers.
+
+**[ Read also:[What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
+
+HPE Pointnext consultants then work with the client’s IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPE’s main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Technology. Pointnext now has 25,000 consultants in 88 countries.
+
+In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using a manual process. HPE has found that migrating the right workloads to the right mix of hybrid cloud can typically result in 40 percent total cost of ownership savings.
+
+Although HPE has thrown its weight behind AWS, that doesn’t mean it doesn’t support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google or Ali Cloud.
+
+“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote.
+
+Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: About 50 percent of apps are suitable for migration to the cloud; for about 30 percent, migration is not worth the effort; and the remaining 20 percent should be retired.
+
+“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on what’s most impactful right now to meet your business goals – the 10 things you can do on Monday morning that you can be confident will really help your business.”
+
+HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
+[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
+[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md
new file mode 100644
index 0000000000..f7c49381f4
--- /dev/null
+++ b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md
@@ -0,0 +1,126 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms)
+[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all)
+[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/)
+
+Identifying exceptional user experience (UX) in IoT platforms
+======
+
+### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas.
+
+![Leo Wolfert / Getty Images][1]
+
+Enterprises are inundated with information about IoT platforms’ features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users.
+
+Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms – preferably by [testing a variety of IoT platforms][2] – before making a purchase decision.
+
+However, it is a lot harder to gauge the quality of an IoT platform UX than itemizing an IoT platform’s features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform.
+
+[RELATED: Storage tank operator turns to IoT for energy savings][3]
+
+One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad.
+
+In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas.
+
+## Persona 1: platform administrator
+
+A platform administrator’s primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform.
+
+Typical platform administrator tasks include
+
+ * configuration of the on-platform data visualization and data aggregation tools
+ * configuration of available device management functionality or execution of in-bulk device management tasks
+ * configuration and creation of on-platform complex event processing (CEP) workflows
+ * management and configuration of platform service orchestration
+
+
+
+Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred.
+
+## Persona 2: platform operator
+
+A platform operator’s primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business rather than IoT domain tasks.
+
+Typical platform operator tasks include
+
+ * visualizing and aggregating on-platform data to view key business KPIs
+ * using device management functionality on a per-device basis
+ * creating, managing, and monitoring per-device and per-location event processing rules
+ * executing self-service administrative tasks, such as enrolling downstream operators
+
+
+
+Enterprises should pick IoT platforms centered on excellent ease-of-use for a business user. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative rather than SQL or other programmatic interfaces.
+
+## Persona 3: hardware and systems developer
+
+A hardware and systems developer’s primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific multipoint control units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services.
+
+Typical hardware and systems developer tasks include
+
+ * designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs
+ * updating firmware or software packages over deployment lifecycles
+ * integrating manufacturer-specific protocols adapters into either IoT assets or the northbound platform
+
+
+
+Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully-featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware.
+
+## Persona 4: platform and backend developer
+
+A platform and backend developer’s primary role is to execute customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications.
+
+Typical platform and backend developer tasks include
+
+ * integrating streaming data from the IoT platform into external systems and applications
+ * configuring inbound and outbound platform actions and interactions with external systems
+ * configuring complex code-based event processing capabilities beyond the scope of a platform administrator’s knowledge or ability
+ * debugging low-level platform functionalities that require coding to detect or resolve
+
+
+
+Enterprises should pick excellent IoT platforms that provide access to well-documented and well-featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs.
+
+## Persona 5: user interface and experience (UI/UX) developer
+
+A UI/UX developer’s primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive.
+
+Typical UI/UX developer tasks include
+
+ * building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices
+ * designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases
+ * ensuring good user experience for customers over the lifetime of an IoT implementation
+
+
+
+Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become accustomed with platform paradigms and best practices.
+
+Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to the market thereby increasing customer satisfaction and retention.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all
+
+作者:[Steven Hilton][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Steven-Hilton/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
+[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/
+[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb
+[4]: /contributor-network/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md
new file mode 100644
index 0000000000..016c5151fb
--- /dev/null
+++ b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS)
+[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS
+======
+
+### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company.
+
+![Getty Images][1]
+
+Much of what’s exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own.
+
+Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device.
+
+**[ Check out our[corporate guide to addressing IoT security][2]. ]**
+
+The technology is called non-intrusive load monitoring (NILM). It sits directly on the electrical circuits of a given building, vehicle or other piece of infrastructure, identifies devices based on their power usage, and sends alerts when there are irregularities.
+
+It seems likely to make IIoT-related waves once it’s out of testing and onto the market.
+
+NILM was recently tested, said MIT’s news service, on a U.S. Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.”
+
+Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ship’s diesel engines known as a jacket water heater.
+
+“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system.
+
+It’s easy to see why NILM could prove to be an attractive technology for IIoT use in the future. It sounds as though it’s very simple to install, can operate without any kind of Internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems) and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like?
+
+**Volkswagen teams up with Amazon**
+
+AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality.
+
+Real-time data from all 122 of VW’s manufacturing plants around the world will be available to the system; everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the company’s “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too.
+
+The German carmaker clearly believes that AWS’s technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4]
+
+**IoT-in-a-box**
+
+IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work.
+
+Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the company’s core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the company’s devices, also includes a built-in management layer, allowing for simplified configuration and monitoring.
+
+OK, so it’s not quite all-in-one, but it’s still an impressive level of integration, particularly from a company that many might not have heard of before. It’s also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
+[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
+[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780
+[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190327 Setting kernel command line arguments with Fedora 30.md b/sources/tech/20190327 Setting kernel command line arguments with Fedora 30.md
deleted file mode 100644
index caa8c1db59..0000000000
--- a/sources/tech/20190327 Setting kernel command line arguments with Fedora 30.md
+++ /dev/null
@@ -1,85 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Setting kernel command line arguments with Fedora 30)
-[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
-[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
-
-Setting kernel command line arguments with Fedora 30
-======
-
-![][1]
-
-Adding options to the kernel command line is a common task when debugging or experimenting with the kernel. The upcoming Fedora 30 release made a change to use Bootloader Spec ([BLS][2]). Depending on how you are used to modifying kernel command line options, your workflow may now change. Read on for more information.
-
-To determine if your system is running with BLS or the older layout, look in the file `/etc/default/grub`.
-
-If you see `GRUB_ENABLE_BLSCFG=true` in there, you are running with the BLS setup and you may need to change how you set kernel command line arguments.
-
-If you only want to modify a single kernel entry (for example, to temporarily work around a display problem), you can use a grubby command:
-
-```
-$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
-```
-
-To remove a kernel argument, you can use the `--remove-args` argument to grubby:
-
-```
-$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
-```
-
-If there is an option that should be added to every kernel command line (for example, you always want to disable the use of the rdrand instruction for random number generation) you can run a grubby command:
-
-```
-$ grubby --update-kernel=ALL --args="nordrand"
-```
-
-This will update the command line of all kernel entries and save the option to the saved kernel command line for future entries.
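-
-To double-check the result, you can list every boot entry along with its arguments (an extra aside, not from the original article; grubby's `--info` flag prints the title, kernel path, and arguments for each entry):
-
-```
-$ grubby --info=ALL
-```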
-
-If you later want to remove the option from all kernels, you can again use `--remove-args` with `--update-kernel=ALL`:
-
-```
-$ grubby --update-kernel=ALL --remove-args="nordrand"
-```
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/
-
-作者:[Laura Abbott][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/makes-fedora-kernel/
-[b]: https://github.com/lujun9972
-[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-kernel-1-816x345.jpg
-[2]: https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
diff --git a/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md
new file mode 100644
index 0000000000..3dfb93eec7
--- /dev/null
+++ b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (As memory prices plummet, PCIe is poised to overtake SATA for SSDs)
+[#]: via: (https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+As memory prices plummet, PCIe is poised to overtake SATA for SSDs
+======
+
+### Taiwan vendors believe PCIe and SATA will achieve price and market share parity by year's end.
+
+![Intel SSD DC P6400 Series][1]
+
+A collapse in price for NAND flash memory and a shrinking gap between the prices of PCI Express-based and SATA-based [solid-state drives][2] (SSDs) means the shift to PCI Express SSDs will accelerate in 2019, with the newer, faster format replacing the old by year's end.
+
+According to the Taiwanese tech publication DigiTimes (the stories are now archived and unavailable without a subscription), falling NAND flash prices continue to drag down SSD prices, which will drive the adoption of SSDs in enterprise and data-center applications. This, in turn, will further drive the adoption of PCIe drives, which are a superior format to SATA.
+
+**[ Read also:[Backup vs. archive: Why it’s important to know the difference][3] ]**
+
+## SATA vs. PCI Express
+
+SATA was introduced in 2001 as a replacement for the IDE interface, which had a much larger cable and slower transfer speeds. But SATA is a legacy HDD connection and not fast enough for NAND flash memory.
+
+I used to review SSDs, and benchmarking was always the same story: drives scored within a few milliseconds of each other regardless of the memory used, because the SATA interface was the bottleneck. A SATA SSD is like a one-lane highway with a speed limit: no matter how fast the flash behind it, the interface caps the throughput.
+
+PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format. It comes in two physical formats: an [add-in card][4] that plugs into a PCIe slot and M.2, which is about the size of a [stick of gum][5] and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices.
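+
+If you are curious which interface your own drives use, one quick check on Linux (a generic example, not from the article) is lsblk's transport column:
+
+```
+# SATA devices report "sata" and PCIe/NVMe devices report "nvme" in the TRAN column
+lsblk -d -o NAME,TRAN,MODEL
+```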
+
+There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore’s Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market.
+
+“The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn’t cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price diff between the two,” he said.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][6] ]**
+
+DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019, and PCIe SSDs are expected to emerge as a new mainstream offering by the end of 2019 with a market share of 50 percent, matching SATA SSDs.
+
+## SSD and NAND memory prices already falling
+
+Market sources told DigiTimes that the unit price of a 512GB PCIe SSD fell 11 percent sequentially in the first quarter of 2019, while SATA SSDs dropped 9 percent. They added that the current average unit price for 512GB SSDs is now equal to that of 256GB SSDs from one year ago, with prices continuing to drop.
+
+According to DRAMeXchange, NAND flash contract prices will continue falling but at a slower rate in the second quarter of 2019. Memory makers are cutting production to avoid losing any more profits.
+
+“We’re in a price collapse. For over a year I’ve been saying the destination for NAND is 8 cents per gigabyte, and some spot markets are 6 cents. It was 30 cents a year ago. Contract pricing is around 15 cents now, it had been 25 to 27 cents last year,” said Handy.
+
+A contract price is what it sounds like: a memory maker like Samsung or Micron signs a contract with an SSD maker like Toshiba or Kingston for X amount at Y cents per gigabyte. Spot prices are deals struck at the end of a quarter (like now), when a vendor anxious to unload excess inventory holds a fire sale for a drive maker that needs supply on short notice.
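+
+To put those prices in perspective, a quick back-of-the-envelope sketch (my arithmetic, using the per-gigabyte figures Handy cites):
+
+```
+# Rough NAND cost for a 512GB drive at the cited prices (integer cents per GB)
+echo "contract now (15 c/GB): ~$((512 * 15 / 100)) USD"   # ~76 USD
+echo "spot now      (6 c/GB): ~$((512 * 6 / 100)) USD"    # ~30 USD
+echo "a year ago   (30 c/GB): ~$((512 * 30 / 100)) USD"   # ~153 USD
+```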
+
+DigiTimes’s contacts aren’t the only ones who foresee this. Handy was at a Samsung analyst event a few months back where the company presented its projection that PCIe SSDs would outsell SATA by the end of this year, and not just in the enterprise but everywhere.
+
+**More about backup and recovery:**
+
+ * [Backup vs. archive: Why it’s important to know the difference][3]
+ * [How to pick an off-site data-backup method][7]
+ * [Tape vs. disk storage: Why isn’t tape dead yet?][8]
+ * [The correct levels of backup save time, bandwidth, space][9]
+
+
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/12/intel-ssd-p4600-series1-100782098-large.jpg
+[2]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
+[3]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
+[4]: https://www.newegg.com/Product/Product.aspx?Item=N82E16820249107
+[5]: https://www.newegg.com/Product/Product.aspx?Item=20-156-199&cm_sp=SearchSuccess-_-INFOCARD-_-m.2+-_-20-156-199-_-2&Description=m.2+
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
+[8]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
+[9]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md
new file mode 100644
index 0000000000..bae14a2f5c
--- /dev/null
+++ b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Can Better Task Stealing Make Linux Faster?)
+[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster)
+[#]: author: (Oracle )
+
+Can Better Task Stealing Make Linux Faster?
+======
+
+_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._
+
+### Load balancing via scalable task stealing
+
+The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing 10's to 100's of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.
+
+I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.
+
+### Results
+
+Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:
+
+ * %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap.
+ * steal - number of times a task is stolen from another CPU.
+
+Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.
+
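+For reference, a typical invocation of the benchmark (assuming the commonly packaged rt-tests hackbench, where each group defaults to 40 tasks -- 20 senders plus 20 receivers) looks like:
+
+```
+# Run the scheduler messaging benchmark with 8 groups (8 x 40 tasks); prints elapsed time
+hackbench -g 8
+```
+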
+![load balancing][1]
+
+[Used with permission][2]
+
+CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:
+
+![][3]
+
+Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.
+
+### The code
+
+As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253][4]. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using
+
+```
+# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
+Yes
+```
+
+If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250][5]. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with kernel parameter sched_steal_node_limit=8 (or larger).
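+
+Assuming the patch exposes STEAL through the usual sched_features debug interface (which the grep check above suggests, though this is my inference rather than something stated in the post), you could toggle it at runtime like any other scheduler feature:
+
+```
+# Disable task stealing at runtime (assumes STEAL is a standard sched_features flag)
+echo NO_STEAL > /sys/kernel/debug/sched_features
+# Re-enable it
+echo STEAL > /sys/kernel/debug/sched_features
+```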
+
+### Future work
+
+After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:
+
+ * If stealing within the last-level cache does not find a candidate, steal across LLC's and NUMA nodes.
+ * Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
+ * Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
+ * Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.
+
+
+
+_This article originally appeared at [Oracle Developers Blog][6]._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster
+
+作者:[Oracle][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing)
+[2]: /LICENSES/CATEGORY/USED-PERMISSION
+[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png
+[4]: https://lkml.org/lkml/2018/12/6/1253
+[5]: https://lkml.org/lkml/2018/12/6/1250
+[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster
diff --git a/sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md b/sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md
new file mode 100644
index 0000000000..27370bf294
--- /dev/null
+++ b/sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco warns of two security patches that don’t work, issues 17 new ones for IOS flaws)
+[#]: via: (https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco warns of two security patches that don’t work, issues 17 new ones for IOS flaws
+======
+
+### Cisco is issuing 17 new fixes for security problems with IOS and IOS/XE software that runs most of its routers and switches, while it has no patch yet to replace flawed patches for its RV320 and RV325 routers.
+
+![Marisa9 / Getty][1]
+
+Cisco has dropped [17 security advisories describing 19 vulnerabilities][2] in the software that runs most of its routers and switches, IOS and IOS/XE.
+
+The company also announced that two previously issued patches for its RV320 and RV325 Dual Gigabit WAN VPN Routers were “incomplete” and would need to be redone and reissued.
+
+**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
+
+Cisco rates both those router vulnerabilities as “High” and describes the problems like this:
+
+ * [One vulnerability][5] is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as _root_.
+ * The [second exposure][6] is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information.
+
+
+
+Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but it is working on a complete fix for both.
+
+On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software. Some of the security bugs, which are all rated as “High”, include:
+
+ * [A vulnerability][7] in the web UI of Cisco IOS XE Software could let an unauthenticated, remote attacker access sensitive configuration information.
+ * [A vulnerability][8] in Cisco IOS XE Software could let an authenticated, local attacker inject arbitrary commands that are executed with elevated privileges. The vulnerability is due to insufficient input validation of commands supplied by the user. An attacker could exploit this vulnerability by authenticating to a device and submitting crafted input to the affected commands.
+ * [A weakness][9] in the ingress traffic validation of Cisco IOS XE Software for Cisco Aggregation Services Router (ASR) 900 Route Switch Processor 3 could let an unauthenticated, adjacent attacker trigger a reload of an affected device, resulting in a denial of service (DoS) condition, Cisco said. The vulnerability exists because the software insufficiently validates ingress traffic on the ASIC used on the RSP3 platform. An attacker could exploit this vulnerability by sending a malformed OSPF version 2 message to an affected device.
+ * A problem in the [authorization subsystem][10] of Cisco IOS XE Software could allow an authenticated but unprivileged (level 1), remote attacker to run privileged Cisco IOS commands by using the web UI. The vulnerability is due to improper validation of user privileges of web UI users. An attacker could exploit this vulnerability by submitting a malicious payload to a specific endpoint in the web UI, Cisco said.
+ * A vulnerability in the [Cluster Management Protocol][11] (CMP) processing code in Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, adjacent attacker to trigger a DoS condition on an affected device. The vulnerability is due to insufficient input validation when processing CMP management packets, Cisco said.
+
+
+
+Cisco has released free software updates that address the vulnerabilities described in these advisories and [directs users to their software agreements][12] to find out how they can download the fixes.
+
+Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/woman-with-hands-over-face_mistake_oops_embarrassed_shy-by-marisa9-getty-100787990-large.jpg
+[2]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-71135
+[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-inject
+[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-info
+[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xeid
+[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xecmd
+[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-rsp3-ospf
+[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-iosxe-privesc
+[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-cmp-dos
+[12]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
+[13]: https://www.facebook.com/NetworkWorld/
+[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md
new file mode 100644
index 0000000000..1ae1222f6e
--- /dev/null
+++ b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment)
+[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment
+======
+
+### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But that’s only a first step. The data collected by that equipment must also be considered.
+
+![Thinkstock][1]
+
+There’s a surprising battle being fought on America’s farms between farmers and the companies that sell them tractors, combines, and other farm equipment. The outcome of that fight could have far-reaching implications for the internet of things (IoT) — and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle.
+
+## Right to repair farm equipment
+
+Here’s the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment — and to the increasingly critical data gathered by the IoT capabilities built into it.
+
+**[ Also read:[Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+[Warren’s proposal reportedly][4] calls for making all diagnostic tools and manuals freely available to equipment owners and independent repair shops — not just vendors and their authorized agents — and focuses solely on farm equipment.
+
+That’s a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue.
+
+## Part of a much bigger IoT data issue
+
+But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesn’t address the critical data gathered by IoT sensors in everything ranging from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment.
+
+And as many farmers can tell you, this isn’t some academic argument. That data has real value — not to mention privacy implications. For farmers, it’s GPS-equipped smart sensors tracking everything — from temperature to moisture to soil acidity — that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products.
+
+The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data, and give them the option to share that data with the equipment vendors — if they so choose and on their own terms.
+
+At the very least, users need clear statements of the rules, so they know exactly what they’re getting — and not getting — when they buy IoT-enhanced devices and equipment. And if they’re being honest, most equipment vendors would likely admit that clear rules would benefit them as well by creating a level playing field, reducing potential liabilities and helping to avoid making customers unhappy.
+
+Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right-to-repair and the associated right to own your own data seems like a more effective approach.
+
+**[ Now read this:[Big trouble down on the IoT farm][6] ]**
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg
+[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech
+[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html
+[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md
new file mode 100644
index 0000000000..0400f4db04
--- /dev/null
+++ b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Microsoft introduces Azure Stack for HCI)
+[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Microsoft introduces Azure Stack for HCI
+======
+
+### Azure Stack already works with your existing hardware, so Microsoft is covering the bases with a turnkey solution.
+
+![Thinkstock/Microsoft][1]
+
+Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware.
+
+[Azure Stack][3] is an on-premises version of Microsoft’s Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than into Microsoft’s data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsoft’s cloud service is easy.
+
+HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. It’s designed to reduce the complexity of deployment and ongoing management, since hardware and software ship fully integrated.
+
+**[ Read also:[12 most powerful hyperconverged infrastructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
+
+It makes sense for Microsoft to take this step. Azure Stack was ideal for enterprises repurposing existing hardware. Now you can deploy a whole new, purpose-built hardware configuration to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking.
+
+The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery.
+
+“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7].
+
+It’s not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), which was built on the Hyper-V hypervisor, and Microsoft says as much in a [FAQ][8] that accompanies the announcement.
+
+"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said.
+
+Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10].
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg
+[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html
+[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html
+[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
+[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll
+[6]: https://www.networkworld.com/newsletters/signup.html
+[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/
+[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/
+[9]: https://www.openstack.org/
+[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md
new file mode 100644
index 0000000000..ce38f54f79
--- /dev/null
+++ b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks)
+[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Motorola taps freed-up wireless spectrum for enterprise LTE networks
+======
+
+### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level, voice handheld devices and fast, private data networks.
+
+![Jiraroj Praditcharoenkul / Getty Images][1]
+
+In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2].
+
+The company sees the product being deployed in traditional on-the-move voice communications environments, such as factories and warehouses, that increasingly need data as well. One example could be a shop floor where a repair manual, complete with video demonstration, is delivered electronically. The video could even be two-way.
+
+**[ Also read:[Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That’s a swath of radio bandwidth that’s being released by the Federal Communications Commission (FCC) in the 3.5GHz band. It’s a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise.
+
+The CBRS band marks the first time broadband spectrum has been made publicly available, [Motorola explains in a white paper][5] (pdf) — organizations don’t have to buy licenses, yet they can get access to useful spectrum. [The FCC has proposed a tiered sharing system][6] in which auction winners get priority access licenses but others have some access, too. The non-prioritized open access could be used by any enterprise for whatever it wants — internet of things (IoT) deployments or private networks.
+
+## Motorola's pitch for using a private broadband network
+
+Why a private broadband network and not simply cell phones? One giveaway line is in Motorola’s promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there’s a market of companies that want to run entire business communications systems — data and voice — without involvement from possibly nosy Mobile Network Operator phone companies. [I’ve written before about how control over security is prompting large industrials to explore private networks][7] more. In this case, though, Motorola manages the network for the enterprise.
+
+Motorola also points to potentially limited or intermittent onsite coverage and congestion on public, commercial, single-platform voice and data networks. That’s particularly the case in factories, [Motorola says in an ebook][8]. Heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, it claims, and traditional Land Mobile Radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don’t handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, you need a more capable setup that pairs voice with fast data communications.
+
+## Industrial IoT uses for Motorola's Nitro network
+
+Industrial IoT will be another beneficiary, Motorola says. It says its CBRS Nitro network could provide instant notifications of equipment failures that traditional products can’t. It also suggests supplementing fixed security cameras by taking “photos and videos of broken machines and sending real-time video to an expert.”
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]**
+
+Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks.
+
+The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me.
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg
+[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html
+[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf
+[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html
+[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
+[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2
+[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md
new file mode 100644
index 0000000000..f62317ae54
--- /dev/null
+++ b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md
@@ -0,0 +1,48 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Robots in Retail are Real… and so is Edge Computing)
+[#]: via: (https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all)
+[#]: author: (Wendy Torell https://www.networkworld.com/author/Wendy-Torell/)
+
+Robots in Retail are Real… and so is Edge Computing
+======
+
+### Plenty of articles have touted the promise of edge computing technologies like AI and robotics in retail brick & mortar, but it took a weekend grocery run for me to meet an actual robot in a store.
+
+![Getty][1]
+
+I’ve seen plenty of articles touting the promise of [edge computing][2] technologies like AI and robotics in retail brick & mortar, but it wasn’t until this past weekend that I had my first encounter with an actual robot in a retail store. I was doing my usual weekly grocery shopping at my local Stop & Shop, and who comes strolling down the aisle, but…. Marty… the autonomous robot. He was friendly looking with his big googly eyes and was wearing a sign that explained he was there for safety, and that he was monitoring the aisles to report spills, debris, and other hazards to employees to improve my shopping experience. He caught the attention of most of the shoppers.
+
+At the National Retail Federation conference in NY that I attended in January, this was the topic of one of the [panel sessions][3]. It all makes sense… a positive customer experience is critical to retail success. But employee-to-customer (human-to-human) interaction has also been proven important. That’s where Marty comes in… to free up resources spent on tedious, time-consuming tasks so that personnel can spend more time directly helping customers.
+
+**Use cases for robots in stores**
+
+Robots have been utilized by retailers on manufacturing floors and in distribution warehouses to improve productivity and optimize business processes along the supply chain. But it is only more recently that we’re seeing them make their way into the retail storefront, where they are in contact with customers. Alerting employees to hazards in the aisles is just one of many use cases for the robots. They can also be used to scan and restock shelves, or as general information sources and greeters that guide your shopping experience when you enter the store. But how does a retailer justify the investment in this type of technology? Determining your ROI isn’t as cut and dried as in a warehouse environment, for example, where costs are directly tied to the number of staff, time to complete tasks, and so on… I guess time will tell for the retailers that are giving it a go.
+
+**What does it mean for the on-premise IT equipment ([micro data center][4])?**
+
+Robotics are one of the many ways retail stores are being digitized. Video analytics is another big one, used to analyze facial expressions for customer satisfaction, obtain customer demographics as input to product development, or ensure queue lines don’t get too long. My colleague, Patrick Donovan, wrote a detailed [blog post][5] about our trip to NRF and the impact on the physical infrastructure in the stores. In a nutshell, the on-premise equipment is becoming more mission critical, more integrated with business applications in the cloud, and more tied to positive customer experiences… and with that comes the need for a more secure, more available, more manageable edge. But this is easier said than done in an environment that generally has no IT staff on-premise, and with hundreds or potentially thousands of stores spread out geographically. So how do we address this?
+
+We answer this question in a white paper that Patrick and I are currently writing, titled “An Integrated Ecosystem to Solve Edge Computing Infrastructure Challenges”. Here’s a hint: (1) an integrated ecosystem of partners, and (2) an integrated micro data center that emerges from that ecosystem. I’ll be sure to comment on this blog with the link when the white paper becomes publicly available! In the meantime, explore our [edge computing][2] landing page to learn more.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all
+
+作者:[Wendy Torell][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Wendy-Torell/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/gettyimages-828488368-1060x445-100792228-large.jpg
+[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
+[3]: https://stores.org/2019/01/15/why-is-there-a-robot-in-my-store/
+[4]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
+[5]: https://blog.apc.com/2019/02/06/4-thoughts-edge-computing-infrastructure-retail-sector/
diff --git a/sources/tech/20190329 How to manage your Linux environment.md b/sources/tech/20190329 How to manage your Linux environment.md
new file mode 100644
index 0000000000..2c4ca113e3
--- /dev/null
+++ b/sources/tech/20190329 How to manage your Linux environment.md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to manage your Linux environment)
+[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to manage your Linux environment
+======
+
+### Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter.
+
+![IIP Photo Archive \(CC BY 2.0\)][1]
+
+The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they're located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking.
+
+Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that sit in your home directory. The system-wide settings take effect when you log in, and the local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. For bash users, these include the following system files:
+
+```
+/etc/environment
+/etc/bash.bashrc
+/etc/profile
+```
+
+And some of these local files:
+
+```
+~/.bashrc
+~/.profile -- not read if ~/.bash_profile or ~/.bash_login exists
+~/.bash_profile
+~/.bash_login
+```
+
+You can modify any of the four local files that exist, since they sit in your home directory and belong to you.
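+
+For example, a few typical lines you might add to your ~/.bashrc (the particular values here are just illustrations) could look like this:
+
+```
+# Put a personal bin directory first on the search path
+export PATH="$HOME/bin:$PATH"
+
+# A convenient shorthand for long file listings
+alias ll='ls -l'
+
+# Remember more command history
+HISTSIZE=2000
+```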
+
+**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
+
+### Viewing your Linux environment settings
+
+To view your environment settings, use the **env** command. Your output will likely look similar to this:
+
+```
+$ env
+LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;
+01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:
+*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:
+*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
+*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;
+31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:
+*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
+*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:
+*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:
+*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
+*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:
+*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:
+*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:
+*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
+*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:
+*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:
+*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:
+*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36:
+SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22
+LESSCLOSE=/usr/bin/lesspipe %s %s
+LANG=en_US.UTF-8
+OLDPWD=/home/shs
+XDG_SESSION_ID=2253
+USER=shs
+PWD=/home/shs
+HOME=/home/shs
+SSH_CLIENT=192.168.0.21 34975 22
+XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
+SSH_TTY=/dev/pts/0
+MAIL=/var/mail/shs
+TERM=xterm
+SHELL=/bin/bash
+SHLVL=1
+LOGNAME=shs
+DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
+XDG_RUNTIME_DIR=/run/user/1000
+PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+LESSOPEN=| /usr/bin/lesspipe %s
+_=/usr/bin/env
+```
+
+While you're likely to get a _lot_ of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like `*.tar=01;31:`, this tells you that tar files will be displayed in a file listing in red, while `*.jpg=01;35:` tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about how these colors are defined and how to customize them at [Customizing your colors on the Linux command line][3].
+
+One easy way to turn colors off when you prefer a simpler display is to use a command such as this one:
+
+```
+$ ls -l --color=never
+```
+
+That command could easily be turned into an alias:
+
+```
+$ alias ll2='ls -l --color=never'
+```
+
+You can also display individual settings using the **echo** command. In this command, we display the number of commands that will be remembered in our history buffer:
+
+```
+$ echo $HISTSIZE
+1000
+```
+
+Your last location in the file system is also remembered. If you’ve moved around, you’ll see both your current and previous directories:
+
+```
+PWD=/home/shs
+OLDPWD=/tmp
+```
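+
+That OLDPWD setting is what makes the handy **cd -** command work; it takes you back to your previous directory and prints its name:
+
+```
+$ pwd
+/home/shs
+$ cd /tmp
+$ cd -
+/home/shs
+```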
+
+### Making changes
+
+You can make changes to environment settings with a command like the one below, but add a line such as "HISTSIZE=1234" to your ~/.bashrc file if you want the setting to persist.
+
+```
+$ export HISTSIZE=1234
+```
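+
+One quick way to make that change permanent is to append the same line to the end of your ~/.bashrc file:
+
+```
+$ echo 'export HISTSIZE=1234' >> ~/.bashrc
+```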
+
+### What it means to "export" a variable
+
+Exporting a variable makes the setting available to your shell and any subshells. By default, user-defined variables are local and are not passed along to new processes such as subshells and scripts. The export command makes variables available to child processes.
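+
+You can see the difference for yourself by running a quick subshell test (the variable names here are arbitrary); only the exported variable survives the trip:
+
+```
+$ PRIVATE="local only"
+$ export SHARED="exported"
+$ bash -c 'echo "PRIVATE=$PRIVATE SHARED=$SHARED"'
+PRIVATE= SHARED=exported
+```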
+
+### Adding and removing variables
+
+You can create new variables and make them available to you on the command line and in subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file.
+
+```
+$ export MSG="Hello, World!"
+```
+
+You can unset a variable if you need to by using the **unset** command:
+
+```
+$ unset MSG
+```
+
+If the variable is defined in one of your startup files, you can easily restore it by sourcing that file. For example:
+
+```
+$ echo $MSG
+Hello, World!
+$ unset MSG
+$ echo $MSG
+
+$ . ~/.bashrc
+$ echo $MSG
+Hello, World!
+```
+
+### Wrap-up
+
+User accounts are set up with an appropriate set of startup files for creating a useful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins).
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
+[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190329 Russia demands access to VPN providers- servers.md b/sources/tech/20190329 Russia demands access to VPN providers- servers.md
new file mode 100644
index 0000000000..0c950eb04f
--- /dev/null
+++ b/sources/tech/20190329 Russia demands access to VPN providers- servers.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Russia demands access to VPN providers’ servers)
+[#]: via: (https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all)
+[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
+
+Russia demands access to VPN providers’ servers
+======
+
+### 10 VPN service providers have been ordered to link their servers in Russia to the state censorship agency by April 26
+
+![Getty Images][1]
+
+The Russian censorship agency Roskomnadzor has ordered 10 [VPN][2] service providers to link their servers in Russia to its network in order to stop users within the country from reaching banned sites.
+
+If they fail to comply, their services will be blocked, according to a machine translation of the order.
+
+[RELATED: Best VPN routers for small business][3]
+
+The 10 VPN providers are ExpressVPN, HideMyAss!, Hola VPN, IPVanish, Kaspersky Secure Connection, KeepSolid, NordVPN, OpenVPN, TorGuard, and VyprVPN.
+
+In response, at least five of the 10 – ExpressVPN, IPVanish, KeepSolid, NordVPN and TorGuard – say they are tearing down their servers in Russia but will continue to offer their services to Russian customers who can reach the providers’ servers located outside of Russia. A sixth provider, Kaspersky Lab, which is based in Moscow, says it will comply with the order. The other four could not be reached for this article.
+
+IPVanish characterized the order as another phase of “Russia’s censorship agenda” dating back to 2017 when the government enacted a law forbidding the use of VPNs to access blocked Web sites.
+
+“Up until recently, however, they had done little to enforce such rules,” IPVanish [says in its blog][4]. “These new demands mark a significant escalation.”
+
+The reactions of those not complying are similar. TorGuard says it has taken steps to remove all its physical servers from Russia. It is also cutting off its business with data centers in the region.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
+
+“We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred,” [TorGuard says in its blog][6]. “We do not store any logs so even if servers were compromised it would be impossible for customer’s data to be exposed.”
+
+TorGuard says it is deploying more servers in adjacent countries to protect fast download speeds for customers in the region.
+
+IPVanish says it has faced similar demands from Russia before and responded similarly. In 2016, a new Russian law required online service providers to store customers’ private data for a year. “In response, [we removed all physical server presence in Russia][7], while still offering Russians encrypted connections via servers outside of Russian borders,” the company says. “That decision was made in accordance with our strict zero-logs policy.”
+
+KeepSolid says it had no servers in Russia, but it will not comply with the order to link with Roskomnadzor's network. KeepSolid says it will [draw on its experience dealing with the Great Firewall of China][8] to fight the Russian censorship attempt. "Our team developed a special [KeepSolid Wise protocol][9] which is designed for use in countries where the use of VPN is blocked," a spokesperson for the company said in an email statement.
+
+NordVPN says it’s shutting down all its Russian servers, and all of them will be shredded as of April 1. [The company says in a blog][10] that some of its customers who connected to its Russian servers without using the NordVPN application will have to reconfigure their devices to ensure their security. Customers using the app won’t have to do anything differently because the option to connect to Russia via the app has been removed.
+
+ExpressVPN is also not complying with the order. "As a matter of principle, ExpressVPN will never cooperate with efforts to censor the internet by any country," said the company's vice president Harold Li in an email, but he said that blocking traffic will be ineffective. "We expect that Russian internet users will still be able to find means of accessing the sites and services they want, albeit perhaps with some additional effort."
+
+Kaspersky Lab says it will comply with the Russian order and answered emailed questions about its reaction with this written statement:
+
+“Kaspersky Lab is aware of the new requirements from Russian regulators for VPN providers operating in the country. These requirements oblige VPN providers to restrict access to a number of websites that were listed and prohibited by the Russian Government in the country’s territory. As a responsible company, Kaspersky Lab complies with the laws of all the countries where it operates, including Russia. At the same time, the new requirements don’t affect the main purpose of Kaspersky Secure Connection which protects user privacy and ensures confidentiality and protection against data interception, for example, when using open Wi-Fi networks, making online payments at cafes, airports or hotels. Additionally, the new requirements are relevant to VPN use only in Russian territory and do not concern users in other countries.”
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all
+
+作者:[Tim Greene][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Tim-Greene/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/ipsecurity-protocols-network-security-vpn-100775457-large.jpg
+[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
+[3]: http://www.networkworld.com/article/3002228/router/best-vpn-routers-for-small-business.html#tk.nww-fsb
+[4]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[6]: https://torguard.net/blog/why-torguard-has-removed-all-russian-servers/
+[7]: https://blog.ipvanish.com/ipvanish-removes-russian-vpn-servers-from-moscow/
+[8]: https://www.vpnunlimitedapp.com/blog/what-roskomnadzor-demands-from-vpns/
+[9]: https://www.vpnunlimitedapp.com/blog/keepsolid-wise-a-smart-solution-to-get-total-online-freedom/
+[10]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190401 3 cool text-based email clients.md b/sources/tech/20190401 3 cool text-based email clients.md
deleted file mode 100644
index e35d61e89d..0000000000
--- a/sources/tech/20190401 3 cool text-based email clients.md
+++ /dev/null
@@ -1,70 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (3 cool text-based email clients)
-[#]: via: (https://fedoramagazine.org/3-cool-text-based-email-clients/)
-[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
-
-3 cool text-based email clients
-======
-
-![][1]
-
-Writing and receiving email is a big part of everyone’s daily routine and choosing an email client is usually a major decision. The Fedora OS provides a large choice of email clients and among these are text-based email applications.
-
-### Mutt
-
-Mutt is probably one of the most popular text-based email clients. It supports all the common features that one would expect from an email client. Color coding, mail threading, POP3, and IMAP are all supported by Mutt. But one of its best features is it’s highly configurable. Indeed, the user can easily change the keybindings, and create macros to adapt the tool to a particular workflow.
-
-To give Mutt a try, install it [using sudo][2] and dnf:
-
-```
-$ sudo dnf install mutt
-```
-
-To help newcomers get started, Mutt has a very comprehensive [wiki][3] full of macro examples and configuration tricks.
-
-### Alpine
-
-Alpine is also among the most popular text-based email clients. It’s more beginner friendly than Mutt, and you can configure most of Alpine via the application itself — no need to edit a configuration file. One powerful feature of Alpine is the ability to score emails. This is particularly interesting for users that are registered to a high volume mailing list like Fedora’s [devel list][4]. Using scores, Alpine can sort the email based on the user’s interests, showing emails with a high score first.
-
-Alpine is also available to install from Fedora’s repository using dnf.
-
-```
-$ sudo dnf install alpine
-```
-
-While using Alpine, you can easily access the documentation by pressing the _Ctrl+G_ key combination.
-
-### nmh
-
-nmh (new Mail Handling) follows the UNIX tools philosophy. It provides a collection of single purpose programs to send, receive, save, retrieve, and manipulate e-mail messages. This lets you swap the _nmh_ command with other programs, or create scripts around _nmh_ to create more customized tools. For example, you can use Mutt with nmh.
-
-nmh can be easily installed using dnf.
-
-```
-$ sudo dnf install nmh
-```
-
-To learn more about nmh and mail handling in general you can read this GPL licenced [book][5].
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/3-cool-text-based-email-clients/
-
-作者:[Clément Verna][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/cverna/
-[b]: https://github.com/lujun9972
-[1]: https://fedoramagazine.org/wp-content/uploads/2018/07/email-clients-816x345.png
-[2]: https://fedoramagazine.org/howto-use-sudo/
-[3]: https://gitlab.com/muttmua/mutt/wikis/home
-[4]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
-[5]: https://rand-mh.sourceforge.io/book/
diff --git a/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md
new file mode 100644
index 0000000000..777108f639
--- /dev/null
+++ b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Meta Networks builds user security into its Network-as-a-Service)
+[#]: via: (https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all)
+[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
+
+Meta Networks builds user security into its Network-as-a-Service
+======
+
+### Meta Networks has a unique approach to the security of its Network-as-a-Service. A tight security perimeter is built around every user and the specific resources each person needs to access.
+
+![MF3d / Getty Images][1]
+
+Network-as-a-Service (NaaS) is growing in popularity and availability for those organizations that don’t want to host their own LAN or WAN, or that want to complement or replace their traditional network with something far easier to manage.
+
+With NaaS, a service provider creates a multi-tenant wide area network comprised of geographically dispersed points of presence (PoPs) connected via high-speed Tier 1 carrier links that create the network backbone. The PoPs peer with cloud services to facilitate customer access to cloud applications such as SaaS offerings, as well as to infrastructure services from the likes of Amazon, Google and Microsoft. User organizations connect to the network from whatever facilities they have — data centers, branch offices, or even individual client devices — typically via SD-WAN appliances and/or VPNs.
+
+Numerous service providers now offer Network-as-a-Service. As the network backbone and the PoPs become more of a commodity, the providers are distinguishing themselves on other value-added services, such as integrated security or WAN optimization.
+
+**[ Also read:[What to consider when deploying a next generation firewall][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3]. ]**
+
+Ever since its launch about a year ago, [Meta Networks][4] has staked out security as its primary value-add. What’s different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative.
+
+Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user. No access is possible unless it is explicitly granted, and it’s continuously verified at the packet level. This model effectively provides dynamically provisioned secure network segmentation.
+
+## SDP tightly controls access to specific resources
+
+This approach works very well when a company wants to securely connect employees, contractors, and external partners to specific resources on the network. For example, one of Meta Networks’ customers is Via Transportation, a New York-based company that has a ride-sharing platform. The company operates its own ride-sharing services in various cities in North America and Europe, and it licenses its technology to other transit systems around the world.
+
+Via’s operations are completely cloud-native, and so it has no legacy-style site-based WAN to connect its 400-plus employees and contractors to their cloud-based applications. Via’s partners, primarily transportation operators in different cities and countries, also need controlled access to specific portions of Via’s software platform to manage rideshares. Giving each group of users access to the applications they need — and _only_ to the ones they specifically need – was a challenge using a VPN. Using the Meta NaaS instead gives Via more granular control over who has what access.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
+
+Via’s employees with managed devices connect to the Meta NaaS using client software on the device, and they are authenticated using Okta and a certificate. Contractors and customers with unmanaged devices use a browser-based access solution from Meta that doesn’t require installation or setup. New users can be on-boarded quickly and assigned granular access policies based on their role. Integration with Okta provides information that facilitates identity-based access policies. Once users connect to the network, they can see only the applications and network resources that their policy allows; everything else is invisible to them under the SDP architecture.
+
+For Via, there are several benefits to the Meta NaaS approach. First and foremost, the company doesn’t have to own or operate its own WAN infrastructure. Everything is a managed service located in the cloud — the same business model that Via itself espouses. Next, this solution scales easily to support the company’s growth. Meta’s security integrates with Via’s existing identity management system, so identities and access policies can be centrally managed. And finally, the software-defined perimeter hides resources from unauthorized users, creating security by obscurity.
+
+## Tightening security even further
+
+Meta Networks further tightens the security around the user by doing device posture checks — “NAC lite,” if you will. A customer can define the criteria that devices have to meet before they are allowed to connect to the NaaS. For example, the check could be whether a security certificate is installed, if a registry key is set to a specific value, or if anti-virus software is installed and running. It’s one more way to enforce company policies on network access.
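+
+As a rough illustration of the kind of posture check being described, here is a hypothetical client-side sketch (the certificate path and anti-virus process name are invented for the example, and this is not Meta's actual implementation):
+
+```
+# Hypothetical posture check: allow the connection only if a corporate
+# certificate is present and anti-virus software is running.
+if [ -f /etc/pki/corp-device-cert.pem ] && pgrep -x clamd > /dev/null; then
+    echo "posture OK: device may connect"
+else
+    echo "posture check failed: access denied" >&2
+    exit 1
+fi
+```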
+
+When end users use the browser-based method to connect to the Meta NaaS, all activity is recorded in a rich log so that everything can be audited, but also to set alerts and look for anomalies. This data can be exported to a SIEM if desired, but Meta has its own notification and alert system for security incidents.
+
+Meta Networks recently implemented some new features around management, including smart groups and support for the System for Cross-Domain Identity Management (SCIM) protocol. The smart groups feature provides the means to add an extra notation or tag to elements such as devices, services, network subnets or segments, and basically everything that’s in the system. These tags can then be applied to policy. For example, a customer could label some of their services as a production, staging, or development environment. Then a policy could be implemented to say that only sales people can access the production environment. Smart groups are just one more way to get even more granular about policy.
+
+The SCIM support makes on-boarding new users simple. SCIM is a protocol that is used to synchronize and provision users and identities from a third-party identity provider such as Okta, Azure AD, or OneLogin. A customer can use SCIM to provision all the users from the IdP into the Meta system, synchronize in real time the groups and attributes, and then use that information to build the access policies inside Meta NaaS.
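+
+For a sense of what SCIM provisioning looks like on the wire, here is a minimal sketch of the standard SCIM 2.0 (RFC 7644) request an identity provider sends to a service's SCIM endpoint to create a user. The endpoint URL, token, and attribute values are hypothetical; Meta's actual integration details aren't documented here:
+
+```
+# Hypothetical SCIM endpoint and bearer token; the JSON body follows
+# the standard SCIM 2.0 core user schema.
+$ curl -X POST https://naas.example.com/scim/v2/Users \
+    -H "Content-Type: application/scim+json" \
+    -H "Authorization: Bearer $TOKEN" \
+    -d '{
+          "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
+          "userName": "jdoe@example.com",
+          "name": { "givenName": "Jane", "familyName": "Doe" },
+          "active": true
+        }'
+```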
+
+These and other security features fit into Meta Networks’ vision that the security perimeter goes with you no matter where you are, and the perimeter includes everything that was formerly delivered through the data center. It is delivered through the cloud to your client device with always-on security. It’s a broad approach to SDP and a unique approach to NaaS.
+
+**Reviews: 4 free, open-source network monitoring tools**
+
+ * [Icinga: Enterprise-grade, open-source network-monitoring that scales][6]
+ * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][7]
+ * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][8]
+ * [Zabbix delivers effective no-frills network monitoring][9]
+
+
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all
+
+作者:[Linda Musthaler][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Linda-Musthaler/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/firewall_network-security_lock_padlock_cyber-security-100776989-large.jpg
+[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.metanetworks.com/
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[6]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
+[7]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
+[8]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
+[9]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md
new file mode 100644
index 0000000000..8177390648
--- /dev/null
+++ b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge)
+[#]: via: (https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge
+======
+
+![istock][1]
+
+We’re now nearing the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the *Top Ten Reasons to Think Outside the Router*. Click for the [#3][3], [#4][4], [#5][5], [#6][6], [#7][7], [#8][8], [#9][9] and [#10][10] reasons to retire traditional branch routers.
+
+_The #2 reason it’s time to retire branch routers: conventional router-centric WAN architectures are rigid and complex to manage!_
+
+### **Challenges of conventional WAN edge architecture**
+
+A conventional WAN edge architecture consists of a disparate array of devices, including routers, firewalls, WAN optimization appliances, wireless controllers and so on. This architecture was born in the era when applications were hosted exclusively in the data center. With this model, deploying new applications, provisioning new policies or making policy changes has become an arduous and time-consuming task. Configuration, deployment and management require specialized on-premises IT expertise to manually program and configure each device with its own management interface, often using an arcane CLI. This process has hit the wall in the cloud era, proving too slow, complex, error-prone, costly and inefficient.
+
+As cloud-first enterprises increasingly migrate applications and infrastructure to the cloud, the traditional WAN architecture is no longer efficient. IT is now faced with a new set of challenges when it comes to connecting users securely and directly to the applications that run their businesses:
+
+ * How do you manage and consistently apply QoS and security policies across the distributed enterprise?
+ * How do you intelligently automate traffic steering across multiple WAN transport services based on application type and unique requirements?
+ * How do you deliver the highest quality of experiences to users when running applications over broadband, especially voice and video?
+ * How do you quickly respond to continuously changing business requirements?
+
+
+
+These are just some of the new challenges facing IT teams in the cloud era. To be successful, enterprises will need to shift toward a business-first networking model where top-down business intent drives how the network behaves. And they would be well served to deploy a business-driven unified [SD-WAN][11] edge platform to transform their networks from a business constraint to a business accelerant.
+
+### **Shifting toward a business-driven WAN edge platform**
+
+A business-driven WAN edge platform is designed to enable enterprises to realize the full transformation promise of the cloud. It is a model where top-down business intent is the driver, not bottom-up technology constraints. It’s outcome-oriented, utilizing automation, artificial intelligence (AI) and machine learning to get smarter every day. Through this continuous adaptation, and the ability to improve the performance of underlying transport and applications, it delivers the highest quality of experience to end users. This is in stark contrast to the router-centric model, where application policies must be shoe-horned to fit within the constraints of the network. A business-driven, top-down approach continuously stays in compliance with business intent and centrally defined security policies.
+
+### **A unified platform for simplifying and consolidating the WAN Edge**
+
+Achieving a business-driven architecture requires a unified platform, designed from the ground up as one system, uniting [SD-WAN][12], [firewall][13], [segmentation][14], [routing][15], [WAN optimization][16], and application visibility and control in a single platform. Furthermore, it requires [centralized orchestration][17] with complete observability of the entire wide area network through a single pane of glass.
+
+The use case “[Simplifying WAN Architecture][18]” describes in detail key capabilities of the Silver Peak [Unity EdgeConnect™][19] SD-WAN edge platform. It illustrates how EdgeConnect enables enterprises to simplify branch office WAN edge infrastructure and streamline deployment, configuration and ongoing management.
+
+![][20]
+
+### **Business and IT outcomes of a business-driven SD-WAN**
+
+ * Accelerates deployment, leveraging consistent hardware, software, cloud delivery models
+ * Saves up to 40 percent on hardware, software, installation, management and maintenance costs when replacing traditional routers
+ * Protects existing investment in security through simplified service chaining with our broadest ecosystem partners: [Check Point][21], [Forcepoint][22], [McAfee][23], [OPAQ][24], [Palo Alto Networks][25], [Symantec][26] and [Zscaler][27].
+ * Reduces footprint by 75 percent as it unifies network functions into a single platform
+ * Saves more than 50 percent on WAN optimization costs by selectively applying it when and where it is needed, on an application-by-application basis
+ * Accelerates time-to-resolution of application or network performance bottlenecks from days to minutes with simple, visual application and WAN analytics
+
+
+
+Calculate your [ROI][28] today and learn why the time is now to [think outside the router][29] and deploy the business-driven Silver Peak EdgeConnect SD-WAN edge platform!
+
+![][30]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/silverpeak_main-100792490-large.jpg
+[2]: https://www.silver-peak.com/why-silver-peak
+[3]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
+[4]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
+[5]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
+[6]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
+[7]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
+[8]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
+[9]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
+[10]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
+[11]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[12]: https://www.silver-peak.com/sd-wan
+[13]: https://www.silver-peak.com/products/unity-edge-connect/orchestrated-security-policies
+[14]: https://www.silver-peak.com/resource-center/centrally-orchestrated-end-end-segmentation
+[15]: https://www.silver-peak.com/products/unity-edge-connect/bgp-routing
+[16]: https://www.silver-peak.com/products/unity-boost
+[17]: https://www.silver-peak.com/products/unity-orchestrator
+[18]: https://www.silver-peak.com/use-cases/simplifying-wan-architecture
+[19]: https://www.silver-peak.com/products/unity-edge-connect
+[20]: https://images.idgesg.net/images/article/2019/04/sp_linkthrough-copy-100792505-large.jpg
+[21]: https://www.silver-peak.com/resource-center/check-point-silver-peak-securing-internet-sd-wan
+[22]: https://www.silver-peak.com/company/tech-partners/forcepoint
+[23]: https://www.silver-peak.com/company/tech-partners/mcafee
+[24]: https://www.silver-peak.com/company/tech-partners/opaq-networks
+[25]: https://www.silver-peak.com/resource-center/palo-alto-networks-and-silver-peak
+[26]: https://www.silver-peak.com/company/tech-partners/symantec
+[27]: https://www.silver-peak.com/resource-center/zscaler-and-silver-peak-solution-brief
+[28]: https://www.silver-peak.com/sd-wan-interactive-roi-calculator
+[29]: https://www.silver-peak.com/think-outside-router
+[30]: https://images.idgesg.net/images/article/2019/04/roi-100792506-large.jpg
diff --git a/sources/tech/20190401 What is 5G- How is it better than 4G.md b/sources/tech/20190401 What is 5G- How is it better than 4G.md
new file mode 100644
index 0000000000..f4ad51b8ae
--- /dev/null
+++ b/sources/tech/20190401 What is 5G- How is it better than 4G.md
@@ -0,0 +1,171 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What is 5G? How is it better than 4G?)
+[#]: via: (https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all)
+[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
+
+What is 5G? How is it better than 4G?
+======
+
+### 5G networks will boost wireless throughput by a factor of 10 and may replace wired broadband. But when will they be available, and why are 5G and IoT so linked together?
+
+![Thinkstock][1]
+
+[5G wireless][2] is an umbrella term to describe a set of standards and technologies for a radically faster wireless internet that ideally is up to 20 times faster with 120 times less latency than 4G, setting the stage for IoT networking advances and support for new high-bandwidth applications.
+
+## What is 5G? Technology or buzzword?
+
+It will be years before the technology reaches its full potential worldwide, but meanwhile some 5G network services are being rolled out today. 5G is as much a marketing buzzword as a technical term, and not all services marketed as 5G are standard.
+
+**[From Mobile World Congress:[The time of 5G is almost here][3].]**
+
+## 5G speed vs 4G
+
+With every new generation of wireless technology, the biggest appeal is increased speed. 5G networks have potential peak download speeds of [20 Gbps, with 10 Gbps being seen as typical][4]. That's not just faster than current 4G networks, which top out at around 1 Gbps, but also faster than the cable internet connections that deliver broadband to many people's homes. 5G offers network speeds that rival optical-fiber connections.
+
+Throughput isn't 5G's only important speed improvement; it also features a huge reduction in network latency. That's an important distinction: throughput determines how long it takes to download a large file, while latency is the sum of network bottlenecks and delays that slow down responses in back-and-forth communication.
+
+Latency can be difficult to quantify because it varies based on myriad network conditions, but 5G networks are capable of latency rates of less than a millisecond in ideal conditions. Overall, 5G latency will be lower than 4G's by a factor of 60 to 120. That will make possible a number of applications, such as virtual reality, that today's delays render impractical.
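+
+A quick back-of-the-envelope sketch makes the distinction concrete. The throughput figures below come from this article; the ~60 ms 4G latency is an assumption consistent with the 60-to-120x improvement just cited:
+
+```
+def download_seconds(size_gigabytes: float, link_gbps: float) -> float:
+    """Throughput-bound: time to move one large file."""
+    return size_gigabytes * 8 / link_gbps
+
+def chatty_app_seconds(round_trips: int, latency_ms: float) -> float:
+    """Latency-bound: time spent waiting on back-and-forth exchanges."""
+    return round_trips * latency_ms / 1000
+
+print(download_seconds(5, 1))       # 4G at 1 Gbps: ~40 s for a 5 GB file
+print(download_seconds(5, 10))      # 5G at 10 Gbps: ~4 s
+print(chatty_app_seconds(100, 60))  # 100 round trips at ~60 ms (4G): 6 s
+print(chatty_app_seconds(100, 1))   # 100 round trips at ~1 ms (5G): 0.1 s
+```
+
+Faster links barely help the chatty, latency-bound case; only a lower round-trip time does, which is why applications such as virtual reality care more about the latency reduction than the headline throughput.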
+
+## 5G technology
+
+The technology underpinnings of 5G are defined by a series of standards that have been in the works for the better part of a decade. One of the most important of these is 5G New Radio, or 5G NR, formalized by the 3rd Generation Partnership Project, a standards organization that develops protocols for mobile telephony. 5G NR will dictate many of the ways in which consumer 5G devices will operate, and was [finalized in June of 2018][5].
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
+
+A number of individual technologies have come together to make the speed and latency improvements of 5G possible, and below are some of the most important.
+
+## Millimeter waves
+
+5G networks will for the most part use frequencies in the 30 to 300 GHz range. (Wavelengths at these frequencies are between 1 and 10 millimeters, thus the name.) This high-frequency band can [carry much more information per unit of time than the lower-frequency signals][7] currently used by 4G LTE, which is generally below 1 GHz, or Wi-Fi, which tops out at 6 GHz.
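+
+The name follows directly from the relationship between wavelength and frequency (wavelength = c / f); a quick check of the figures above:
+
+```
+C = 3e8  # speed of light, m/s
+
+def wavelength_mm(freq_hz: float) -> float:
+    return C / freq_hz * 1000  # metres -> millimetres
+
+print(wavelength_mm(30e9))   # 10.0 mm: low end of the 5G band
+print(wavelength_mm(300e9))  # 1.0 mm: high end of the 5G band
+print(wavelength_mm(1e9))    # 300.0 mm: typical 4G LTE carrier
+print(wavelength_mm(6e9))    # 50.0 mm: top of the Wi-Fi range
+```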
+
+Millimeter-wave technology has traditionally been expensive and difficult to deploy. Technical advances have overcome those difficulties, which is part of what's made 5G possible today.
+
+## Small cells
+
+One drawback of millimeter wave transmission is that it's more prone to interference than Wi-Fi or 4G signals as they pass through physical objects.
+
+To overcome this, the model for 5G infrastructure will be different from 4G's. Instead of the large cellular-antenna masts we've come to accept as part of the landscape, 5G networks will be powered by [much smaller base stations spread throughout cities about 250 meters apart][8], creating cells of service that are also smaller.
+
+These 5G base stations have lower power requirements than those for 4G and can be attached to buildings and utility poles more easily.
+
+## Massive MIMO
+
+Despite 5G base stations being much smaller than their 4G counterparts, they pack in many more antennas. These antennas are [multiple-input multiple-output (MIMO)][9], meaning that they can handle multiple two-way conversations over the same data signal simultaneously. In this way, 5G networks can handle more than [20 times as many conversations as 4G networks][10].
+
+Massive MIMO promises to [radically improve on base station capacity limits][11], allowing individual base stations to have conversations with many more devices. This in particular is why 5G may drive wider adoption of IoT: in theory, many more internet-connected wireless gadgets can be deployed in the same space without overwhelming the network.
+
+## Beamforming
+
+Making sure all these conversations go back and forth to the right places is tricky, especially with the aforementioned problems millimeter-wave signals have with interference. To overcome those issues, 5G stations deploy advanced beamforming techniques, which use constructive and destructive radio interference to make signals directional rather than broadcast. That effectively boosts signal strength and range in a particular direction.
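+
+A toy model shows the principle. For a uniform linear array of antennas, applying a progressive phase shift to each element makes the emissions add constructively in one chosen direction and cancel elsewhere. This is a simplified sketch; the array size, spacing, and steering angle are arbitrary assumptions:
+
+```
+import numpy as np
+
+N = 8                # antenna elements
+d_over_lambda = 0.5  # element spacing of half a wavelength
+steer_deg = 30       # direction we want to favour
+
+# Progressive per-element phase shift that aligns emissions toward steer_deg.
+n = np.arange(N)
+phase = -2 * np.pi * n * d_over_lambda * np.sin(np.radians(steer_deg))
+
+def array_gain(direction_deg: float) -> float:
+    """Magnitude of the summed signal seen from a given direction."""
+    geo = 2 * np.pi * n * d_over_lambda * np.sin(np.radians(direction_deg))
+    return abs(np.sum(np.exp(1j * (geo + phase))))
+
+print(array_gain(30))  # ~8.0: all 8 elements add in phase toward the target
+print(array_gain(0))   # ~0.0: off-target contributions largely cancel
+```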
+
+## 5G availability
+
+The first commercial 5G network was [rolled out in Qatar in May 2018][12]. Since then, networks have been popping up across the world, from Argentina to Vietnam. [Lifewire has a good, frequently updated list][13].
+
+One thing to keep in mind, though, is that not all 5G networks deliver on all the technology's promises yet. Some early 5G offerings piggyback on existing 4G infrastructure, which reduces the potential speed gains; other services dubbed 5G for marketing purposes don't even comply with the standard. A closer look at offerings from U.S. wireless carriers will demonstrate some of the pitfalls.
+
+## Wireless carriers and 5G
+
+Technically, 5G is available in the U.S. today. But the caveats involved in that statement vary from carrier to carrier, demonstrating the long road that still lies ahead before 5G becomes omnipresent.
+
+Verizon is making probably the biggest early 5G push. It announced [5G Home][14] in parts of four cities in October of 2018, a service that requires using a special 5G hotspot to connect to the network and feed it to your other devices via Wi-Fi.
+
+Verizon planned an April rollout of a [mobile service in Minneapolis and Chicago][15], which will spread to other cities over the course of the year. Accessing the 5G network will cost customers an extra monthly fee plus what they’ll have to spend on a phone that can actually connect to it (more on that in a moment). As an added wrinkle, Verizon is deploying what it calls [5G TF][16], which doesn't match up with the 5G NR standard.
+
+AT&T [announced the availability of 5G in 12 U.S. cities in December 2018][17], with nine more coming by the end of 2019, but even in those cities, availability is limited to the downtown areas. To use the network requires a special Netgear hotspot that connects to the service, then provides a Wi-Fi signal to phones and other devices.
+
+Meanwhile, AT&T is also rolling out speed boosts to its 4G network, which it's dubbed 5GE even though these improvements aren't related to 5G networking. ([This is causing backlash][18].)
+
+Sprint will have 5G service in parts of four cities by May of 2019, and five more by the end of the year. But while Sprint's 5G offering makes use of massive MIMO cells, they [aren't using millimeter-wave signals][19], meaning that Sprint users won't see as much of a speed boost as customers of other carriers.
+
+T-Mobile is pursuing a similar model, and it [won't roll out its service until the end of 2019][20] because there won't be any phones to connect to it.
+
+One kink that might stop a rapid spread of 5G is the need to spread out all those small-cell base stations. Their small size and low power requirements make them easier to deploy than current 4G tech in a technical sense, but that doesn't mean it's simple to convince governments and property owners to install dozens of them everywhere. Verizon actually set up a [website that you can use to petition your local elected officials][21] to speed up 5G base station deployment.
+
+## 5G phones: When available? When to buy?
+
+The first major 5G phone to be announced is the Samsung Galaxy S10 5G, which should be available by the end of the summer of 2019. You can also order a "[Moto Mod][22]" from Verizon, which [transforms Moto Z3 phones into 5G-compatible devices][23].
+
+But unless you can't resist the lure of being an early adopter, you may wish to hold off for a bit; some of the quirks and looming questions about carrier rollout may mean that you end up with a phone that [isn't compatible with your carrier's entire 5G network][24].
+
+One laggard that may surprise you is Apple: analysts believe that there won't be a [5G-compatible iPhone until 2020 at the earliest][25]. But this isn't out of character for the company; Apple [also lagged behind Samsung in releasing 4G-compatible phones][26] back in 2012.
+
+Still, the 5G flood is coming. 5G-compatible devices [dominated Barcelona's Mobile World Congress in 2019][3], so expect to have a lot more choice on the horizon.
+
+## Why are people talking about 6G already?
+
+Some experts say [5G won’t be able to meet the latency and reliability targets][27] it is shooting for. These skeptics are already looking ahead to 6G, which they say will try to address these projected shortcomings.
+
+There is [a group that is researching new technologies that can be rolled into 6G][28] that calls itself the Center for Converged TeraHertz Communications and Sensing (ComSenTer). Part of the spec they’re working on calls for 100 Gbps speeds for every device.
+
+In addition to improving reliability and boosting speed, 6G is also trying to enable thousands of simultaneous connections. If successful, this feature could help to network IoT devices, which can be deployed in the thousands as sensors in a variety of industrial settings.
+
+Even in its embryonic form, 6G may already be facing security concerns due to the newly discovered [potential for man-in-the-middle attacks in terahertz-based networks][29]. The good news is that there’s plenty of time to find solutions to the problem. 6G networks aren’t expected to start rolling out until 2030.
+
+**More about 5G networks:**
+
+ * [How enterprises can prep for 5G networks][30]
+ * [5G vs 4G: How speed, latency and apps support differ][31]
+ * [Private 5G networks are coming][32]
+ * [5G and 6G wireless have security issues][33]
+ * [How millimeter-wave wireless could help support 5G and IoT][34]
+
+
+
+Join the Network World communities on [Facebook][35] and [LinkedIn][36] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all
+
+作者:[Josh Fruhlinger][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/04/5g-100718139-large.jpg
+[2]: https://www.networkworld.com/article/3203489/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
+[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
+[5]: https://www.theverge.com/2018/6/15/17467734/5g-nr-standard-3gpp-standalone-finished
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[7]: https://www.networkworld.com/article/3291323/millimeter-wave-wireless-could-help-support-5g-and-iot.html
+[8]: https://spectrum.ieee.org/video/telecom/wireless/5g-bytes-small-cells-explained
+[9]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
+[10]: https://spectrum.ieee.org/tech-talk/telecom/wireless/5g-researchers-achieve-new-spectrum-efficiency-record
+[11]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html
+[12]: https://venturebeat.com/2018/05/14/worlds-first-commercial-5g-network-launches-in-qatar/
+[13]: https://www.lifewire.com/5g-availability-world-4156244
+[14]: https://www.digitaltrends.com/computing/verizon-5g-home-promises-up-to-gigabit-internet-speeds-for-50/
+[15]: https://lifehacker.com/heres-your-cheat-sheet-for-verizons-new-5g-data-plans-1833278817
+[16]: https://www.theverge.com/2018/10/2/17927712/verizon-5g-home-internet-real-speed-meaning
+[17]: https://www.cnn.com/2018/12/18/tech/5g-mobile-att/index.html
+[18]: https://www.networkworld.com/article/3339720/like-4g-before-it-5g-is-being-hyped.html?nsdr=true
+[19]: https://www.digitaltrends.com/mobile/sprint-5g-rollout/
+[20]: https://www.cnet.com/news/t-mobile-delays-full-600-mhz-5g-launch-until-second-half/
+[21]: https://lets5g.com/
+[22]: https://www.verizonwireless.com/support/5g-moto-mod-faqs/?AID=11365093&SID=100098X1555750Xbc2e857934b22ebca1a0570d5ba93b7c&vendorid=CJM&PUBID=7105813&cjevent=2e2150cb478c11e98183013b0a1c0e0c
+[23]: https://www.digitaltrends.com/cell-phone-reviews/moto-z3-review/
+[24]: https://www.businessinsider.com/samsung-galaxy-s10-5g-which-us-cities-have-5g-networks-2019-2
+[25]: https://www.cnet.com/news/why-apples-in-no-rush-to-sell-you-a-5g-iphone/
+[26]: https://mashable.com/2012/09/09/iphone-5-4g-lte/#hYyQUelYo8qq
+[27]: https://www.networkworld.com/article/3305359/6g-will-achieve-terabits-per-second-speeds.html
+[28]: https://www.networkworld.com/article/3285112/get-ready-for-upcoming-6g-wireless-too.html
+[29]: https://www.networkworld.com/article/3315626/5g-and-6g-wireless-technologies-have-security-issues.html
+[30]: https://%20https//www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
+[31]: https://%20https//www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html
+[32]: https://%20https//www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html
+[33]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html
+[34]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html
+[35]: https://www.facebook.com/NetworkWorld/
+[36]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md
new file mode 100644
index 0000000000..38cbc70e94
--- /dev/null
+++ b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 Essentials for Achieving Resiliency at the Edge)
+[#]: via: (https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all)
+[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
+
+3 Essentials for Achieving Resiliency at the Edge
+======
+
+### Edge computing requires different thinking and management to ensure the always-on availability that users have come to demand.
+
+![iStock][1]
+
+> “The IT industry has done a good job of making robust data centers that are highly manageable, highly secure, with redundant systems,” [says Kevin Brown][2], SVP Innovation and CTO for Schneider Electric’s Secure Power Division.
+
+However, he continues, companies then connect these data centers to messy edge closets and server rooms, which over time have become “micro mission-critical data centers” in their own right — making system availability vital. If these sites are not designed and managed correctly, the results can be disastrous: users lose access to business-critical applications.
+
+To avoid unacceptable downtime, companies should incorporate three essential ingredients into their edge computing deployments: remote management, physical security, and rapid deployments.
+
+**Remote management**
+
+Depending on the company’s size, staff could be managing several — or many — edge sites. Not only is this time-consuming and costly, it’s also complex, especially if protocols differ from site to site.
+
+While some organizations might deploy traditional remote monitoring technology to manage these sites, it’s important to note that these tools don’t provide real-time status updates, are largely reactive rather than proactive, and are sometimes limited in terms of data output.
+
+Coupled with the need to overcome these limitations, the economics of managing edge sites necessitate that organizations consider a digital, or cloud-based, solution. In addition to cost savings, these platforms provide:
+
+ * Simplification in monitoring across edge sites
+ * Real-time visibility, right down to any device on the network
+ * Predictive analytics, including data-driven intelligence and recommendations to ensure proactive service delivery
+
+
+
+**Physical security**
+
+Small, local edge computing sites are often situated within larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. And sometimes they’re set up on the fly for a time-sensitive project.
+
+However, when there is no dedicated location and open racks are unsecured, the risks of malicious and accidental incidents escalate.
+
+To prevent unauthorized access to IT equipment at edge computing sites, proper physical security is critical and requires:
+
+ * Physical space monitoring, with environmental sensors for temperature and humidity
+ * Access control, with biometric sensors as an option
+ * Audio and video surveillance and monitoring with recording
+ * Installation of IT equipment within a secure enclosure, where possible
+
+
+
+**Rapid deployments**
+
+The [benefits of edge computing][3] are significant, especially the ability to bring bandwidth-intensive computing closer to the user, which leads to faster speed to market and greater productivity.
+
+Create a holistic plan that will enable the company to quickly deploy edge sites, while ensuring resiliency and reliability. That means having a standardized, repeatable process including:
+
+ * Pre-configured, integrated equipment that combines server, storage, networking, and software in a single enclosure — a prefabricated micro data center, if you will
+ * Designs that specify supporting racks, UPSs, PDUs, cable management, airflow practices, and cooling systems
+
+
+
+These best practices, as well as a balanced, systematic approach to edge computing deployments, will ensure the always-on availability that today’s employees and users have come to expect.
+
+Learn how to enable resiliency within your edge computing deployment at [APC.com][4].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all
+
+作者:[Anne Taylor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Anne-Taylor/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-900882382-100792635-large.jpg
+[2]: https://www.youtube.com/watch?v=IfsCTFSH6Jc
+[3]: https://www.networkworld.com/article/3342455/how-edge-computing-will-bring-business-to-the-next-level.html
+[4]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/sources/tech/20190402 5G- A deep dive into fast, new wireless.md b/sources/tech/20190402 5G- A deep dive into fast, new wireless.md
new file mode 100644
index 0000000000..f3941b3dde
--- /dev/null
+++ b/sources/tech/20190402 5G- A deep dive into fast, new wireless.md
@@ -0,0 +1,70 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5G: A deep dive into fast, new wireless)
+[#]: via: (https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all)
+[#]: author: (Craig Mathias https://www.networkworld.com/author/Craig-Mathias/)
+
+5G: A deep dive into fast, new wireless
+======
+
+### 5G wireless networks are just about ready for prime time, overcoming backhaul and backward-compatibility issues, and promising the possibility of all-mobile networking through enhanced throughput.
+
+The next step in the evolution of wireless WAN communications - [5G networks][1] - is about to hit the front pages, and for good reason: it will complete the evolution of cellular from wireline augmentation to wireline replacement, and strategically from mobile-first to mobile-only.
+
+So it’s not too early to start at least some basic planning to understand how 5G will fit into and benefit IT plans across organizations of all sizes, industries and missions.
+
+**[ From Mobile World Congress:[The time of 5G is almost here][2] ]**
+
+5G will of course provide end-users with the additional throughput, capacity, and other elements to address the continuing and dramatic growth in geographic availability, user base, range of subscriber devices, demand for capacity, and application requirements. But it will also enable service providers to benefit from new opportunities in overall strategy, service offerings and broadened marketplace presence.
+
+A look at the key features you can expect in 5G wireless:
+
+![A look at the key features you can expect in 5G wireless.][3]
+
+This article explores the technologies and market drivers behind 5G, with an emphasis on what 5G means to enterprise and organizational IT.
+
+While 5G remains an imprecise term today, key objectives for the development of the advances required have become clear. These are as follows:
+
+## 5G speeds
+
+As is the case with Wi-Fi, major advances in cellular are first and foremost defined by new upper-bound _throughput_ numbers. The magic number here for 5G is in fact a _floor_ of 1 Gbps, with numbers as high as 10 Gbps mentioned by some. However, and again as is the case with Wi-Fi, it’s important to think more in terms of overall individual-cell and system-wide _capacity_. We believe, then, that per-user throughput of 50 Mbps is a more reasonable – but clearly still remarkable – working assumption, with up to 300 Mbps peak throughput realized in some deployments over the next five years. The possibility of reaching higher throughput than that exceeds our planning horizon, but such is, well, possible.
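+
+A rough capacity sketch using the working numbers above (real cells share spectrum far less evenly, so treat this as an upper bound):
+
+```
+def users_per_cell(cell_gbps: float, per_user_mbps: float = 50) -> int:
+    """How many full-rate users a cell's aggregate capacity could carry."""
+    return int(cell_gbps * 1000 // per_user_mbps)
+
+print(users_per_cell(1))   # 20 concurrent 50 Mbps users on a 1 Gbps cell
+print(users_per_cell(10))  # 200 on a 10 Gbps cell
+```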
+
+## Reduced latency
+
+Perhaps even more important than throughput, though, is a reduction in the round-trip time for each packet. Reducing latency is important for voice, which will most certainly be all-IP in 5G implementations, video, and, again, in improving overall capacity. The over-the-air latency goal for 5G is less than 10ms, with 1ms possible in some defined classes of service.
+
+## 5G network management and OSS
+
+Operators are always seeking to reduce overhead and operating expense, so enhancements to both system management and operational support systems (OSS) – yielding improvements in reliability, availability, serviceability, resilience, consistency, analytics capabilities, and operational efficiency – are all expected. The benefits of these will, in most cases, be transparent to end-users.
+
+## Mobility and 5G technology
+
+Very-high-speed user mobility, at up to hundreds of kilometers per hour, will be supported, thus serving users on all modes of transportation. Regulatory and situation-dependent restrictions – most notably, on aircraft – will still apply.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all
+
+作者:[Craig Mathias][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Craig-Mathias/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[3]: https://images.idgesg.net/images/article/2017/06/2017_nw_5g_wireless_key_features-100727485-large.jpg
diff --git a/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md
new file mode 100644
index 0000000000..686a2be6a4
--- /dev/null
+++ b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads)
+[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all)
+[#]: author: (Marc Ferranti https://www.networkworld.com)
+
+Intel's Agilex FPGA family targets data-intensive workloads
+======
+Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads
+![Intel][1]
+
+After teasing out details about the technology for a year and a half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads.
+
+The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data.
+
+**Learn about edge networking**
+
+ * [How edge networking and IoT will reshape data centers][2]
+ * [Edge computing best practices][3]
+ * [How edge computing can help secure the IoT][4]
+
+
+
+FPGAs, or field programmable gate arrays, are built around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing – and even reprogrammed after being deployed in devices – to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance.
+
+### Accelerated computing takes center stage
+
+CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic.
+
+"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that we’re in the largest adoption phase for FPGAs in our history."
+
+The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intel’s $16.7 billion 2015 acquisition of Altera.][5] It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says.
+
+HyperFlex architecture includes additional registers – places on a processor that temporarily hold data – called Hyper-Registers, located everywhere throughout the core fabric to enhance bandwidth as well as area and power efficiency.
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
+
+### Memory coherency is key
+
+Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory.
+
+"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy.
+
+The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration.
+
+### 'Any-to-any' integration is crucial for the data center
+
+The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs – special-function processors that are not reprogrammable – alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads.
+
+Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line.
+
+Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers.
+
+Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGAs."
+
+Intel's Quartus Prime design tool will be available for Agilex hardware developers but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December.
+
+"OneAPI is is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications."
+
+![][8]
+
+Intel plans three different product lines in the Agilex family – from low to high end, the F-, I- and M-series – aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5 and different types of memory, including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer transceiver data rates of up to 112 Gbps and a greater mix of arithmetic precision for AI, including the bfloat16 number format.
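+
+The bfloat16 format is worth a brief illustration: it keeps float32’s 8-bit exponent, and therefore the same dynamic range, but only 7 mantissa bits, which is often enough precision for AI workloads. A minimal sketch of the idea (truncating the low bits; real hardware typically rounds rather than truncates):
+
+```
+import numpy as np
+
+def to_bfloat16(x: np.ndarray) -> np.ndarray:
+    """Drop float32 values to bfloat16 precision (top 16 bits kept:
+    1 sign bit, 8 exponent bits, 7 mantissa bits); returned as float32."""
+    bits = x.astype(np.float32).view(np.uint32)
+    return (bits & 0xFFFF0000).view(np.float32)
+
+x = np.array([3.14159265, 1e-20, 65504.0], dtype=np.float32)
+print(to_bfloat16(x))  # float32 range preserved, ~2-3 decimal digits kept
+```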
+
+In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks and IoT devices, as well as for deep packet inspection. In addition, they can be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency.
+
+### Agilex will compete with Xilinx's ACAPs
+
+Agilex will likely vie with Xilinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xilinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead.
+
+Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. Rather than CXL, though, the Versal family uses the Cache Coherent Interconnect for Accelerators (CCIX) fabric.
+
+Neither Intel nor Xilinx has yet announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses.
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all
+
+作者:[Marc Ferranti][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg
+[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html
+[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg
+[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html
+[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190402 Parallel computation in Python with Dask.md b/sources/tech/20190402 Parallel computation in Python with Dask.md
deleted file mode 100644
index 81a0bcb41f..0000000000
--- a/sources/tech/20190402 Parallel computation in Python with Dask.md
+++ /dev/null
@@ -1,71 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Parallel computation in Python with Dask)
-[#]: via: (https://opensource.com/article/19/4/parallel-computation-python-dask)
-[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
-
-Parallel computation in Python with Dask
-======
-The Dask library scales Python computation to multiple cores or even to
-multiple machines.
-![Pair programming][1]
-
-One frequent complaint about Python performance is the [global interpreter lock][2] (GIL). Because of GIL, only one thread can execute Python byte code at a time. As a consequence, using threads does not speed up computation—even on modern, multi-core machines.
-
-But when you need to parallelize to many cores, you don't need to stop using Python: the **[Dask][3]** library will scale computation to multiple cores or even to multiple machines. Some setups configure Dask on thousands of machines, each with multiple cores; while there are scaling limits, they are not easy to hit.
-
-While Dask has many built-in array operations, as an example of something not built-in, we can calculate the [skewness][4]:
-```
-import numpy
-import dask
-from dask import array as darray
-
-arr = darray.from_array(numpy.array(my_data), chunks=(1000,))
-mean = darray.mean(arr)
-stddev = darray.std(arr)
-unnormalized_moment = darray.mean(arr * arr * arr)
-## See formula in wikipedia:
-skewness = ((unnormalized_moment - (3 * mean * stddev ** 2) - mean ** 3) /
- stddev ** 3)
-```
-
-Notice that each operation will use as many cores as needed. This will parallelize across all cores, even when calculating across billions of elements.
-
-Of course, it is not always the case that our operations can be parallelized by the library; sometimes we need to implement parallelism on our own.
-
-For that, Dask has a "delayed" functionality:
-```
-import dask
-
-def is_palindrome(s):
- return s == s[::-1]
-
-palindromes = [dask.delayed(is_palindrome)(s) for s in string_list]
-total = dask.delayed(sum)(palindromes)
-result = total.compute()
-```
-
-This will calculate whether strings are palindromes in parallel and will return a count of the palindromic ones.
-
-While Dask was created for data scientists, it is by no means limited to data science. Whenever we need to parallelize tasks in Python, we can turn to Dask—GIL or no GIL.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/parallel-computation-python-dask
-
-作者:[Moshe Zadka (Community Moderator)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/moshez
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
-[2]: https://wiki.python.org/moin/GlobalInterpreterLock
-[3]: https://github.com/dask/dask
-[4]: https://en.wikipedia.org/wiki/Skewness#Definition
diff --git a/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md
new file mode 100644
index 0000000000..29a73998d7
--- /dev/null
+++ b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md
@@ -0,0 +1,90 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option)
+[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+When Wi-Fi is mission-critical, a mixed-channel architecture is the best option
+======
+
+### Multi-channel is the norm for Wi-Fi today, but it’s not always the best choice. Single-channel and hybrid APs offer compelling alternatives when reliable Wi-Fi is a must.
+
+![Getty Images][1]
+
+I’ve worked with a number of companies that have implemented digital projects only to see them fail. The ideation was correct, the implementation was sound, and the market opportunity was there. The weak link? The Wi-Fi network.
+
+For example, a large hospital wanted to improve clinician response times to patient alarms by having telemetry information sent to mobile devices. Without the system, the only way a nurse would know about a patient alarm was from an audible alert. And with all the background noise, it’s often tough to discern where noises are coming from. The problem was that the hospital’s Wi-Fi network had not been upgraded in years, which caused messages to be significantly delayed, often taking four to five minutes to arrive. The long delivery times caused a lack of confidence in the system, so many clinicians stopped using it and went back to manual alerting. As a result, the project was considered a failure.
+
+I’ve seen similar examples in manufacturing, K-12 education, entertainment, and other industries. Businesses are competing on the basis of customer experience, and that’s driven from the ever-expanding, ubiquitous wireless edge. Great Wi-Fi doesn’t necessarily mean market leadership, but bad Wi-Fi will have a negative impact on customers and employees. And in today’s competitive climate, that’s a recipe for disaster.
+
+**[ Read also:[Wi-Fi site-survey tips: How to avoid interference, dead spots][2] ]**
+
+## Wi-Fi performance historically inconsistent
+
+The problem with Wi-Fi is that it’s inherently flaky. I’m sure everyone reading this has experienced the typical flaws with failed downloads, dropped connections, inconsistent performance, and lengthy wait times to connect to public hot spots.
+
+Picture sitting in a conference prior to a keynote address and being able to tweet, send email, browse the web, and do other things with no problem. Then the keynote speaker comes on stage and the entire audience starts snapping pics, uploading those pictures, and streaming things – and the Wi-Fi stops working. I find this to be the norm more than the exception, underscoring the need for [no-compromise Wi-Fi][3].
+
+The question for network professionals is how to get to a place where the Wi-Fi is rock solid 100% of the time. Some say that just beefing up the existing network will do that, and it might, but in some cases, the type of Wi-Fi might not be appropriate.
+
+The most commonly deployed type of Wi-Fi is multi-channel, also known as micro-cell, where each client connects to the access point (AP) using a radio channel. A high-quality experience is based on two things: good signal strength and minimal interference. Several things can cause interference, such as APs being too close, layout issues, or interference from other equipment. To minimize interference, businesses invest a significant amount of time and money in [site surveys to plan the optimal channel map][2], but even when that's done well, Wi-Fi glitches can still happen.
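+
+To see why a channel map takes planning, consider a deliberately simplified sketch: assign each AP one of the three non-overlapping 2.4 GHz channels so that no two APs within interference range share a channel. This is a toy greedy assignment over a hypothetical adjacency map, not how a commercial survey tool works:
+
+```
+CHANNELS = [1, 6, 11]  # the non-overlapping 2.4 GHz channels
+
+# Hypothetical adjacency: which APs are within interference range.
+neighbors = {
+    "AP1": ["AP2", "AP3"],
+    "AP2": ["AP1", "AP3"],
+    "AP3": ["AP1", "AP2", "AP4"],
+    "AP4": ["AP3"],
+}
+
+def plan_channels(graph: dict) -> dict:
+    plan = {}
+    for ap in graph:  # greedy graph colouring
+        used = {plan[n] for n in graph[ap] if n in plan}
+        free = [c for c in CHANNELS if c not in used]
+        if not free:
+            raise ValueError(f"{ap}: APs too dense for three channels")
+        plan[ap] = free[0]
+    return plan
+
+print(plan_channels(neighbors))
+# {'AP1': 1, 'AP2': 6, 'AP3': 11, 'AP4': 1}
+```
+
+Real surveys must also weigh signal strength, wall attenuation, and non-Wi-Fi noise sources, which is why they remain expensive and why glitches slip through anyway.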
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**
+
+## Multi-channel Wi-Fi not always the best choice
+
+For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which multiple Wi-Fi networks transmit on the same channel and interfere with one another. Another example is a hospital where many campus workers move between APs. Each client also tries to connect to the best AP as its user moves, continually disconnecting and reconnecting, which results in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices, and multi-channel can struggle to keep up.
+
+## Single channel Wi-Fi offers better reliability but with a performance hit
+
+What’s a network manager to do? Is inconsistent Wi-Fi just a fait accompli? Multi-channel is the norm, but it isn’t designed for dynamic physical environments or those where reliable connectivity is a must.
+
+Several years ago an alternative architecture was proposed to solve these problems. As the name suggests, “single channel” Wi-Fi uses a single radio channel for all APs in the network. Think of this as a single Wi-Fi fabric that operates on one channel. With this architecture, the placement of APs is irrelevant because they all utilize the same channel, so they won’t interfere with one another. This has an obvious simplicity advantage: if coverage is poor, there’s no need for another expensive site survey. Instead, just drop in APs where they are needed.
+
+One of the disadvantages of single-channel is that aggregate network throughput is lower than multi-channel's, because only one channel can be used. This might be fine in environments where reliability trumps performance, but many organizations want both.
+
+## Hybrid APs offer the best of both worlds
+
+There has been recent innovation from the manufacturers of single-channel systems that mixes channel architectures, creating a “best of both worlds” deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously. That means some clients can be assigned to multi-channel for maximum throughput, while others use single-channel for a seamless roaming experience.
+
+A practical use-case of such a mix might be a logistics facility where the office staff uses multi-channel, but the fork-lift operators use single-channel for continuous connectivity as they move throughout the warehouse.
+
+Wi-Fi was once a network of convenience, but now it is perhaps the most mission-critical of all networks. A traditional multi-channel system might work, but due diligence should be done to see how it functions under a heavy load. IT leaders need to understand how important Wi-Fi is to digital transformation initiatives, do the proper testing to ensure it’s not the weak link in the infrastructure chain, and choose the best technology for today’s environment.
+
+**Reviews: 4 free, open-source network monitoring tools:**
+
+ * [Icinga: Enterprise-grade, open-source network-monitoring that scales][5]
+ * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][6]
+ * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][7]
+ * [Zabbix delivers effective no-frills network monitoring][8]
+
+
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg
+[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html
+[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
+[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
+[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
+[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190402 Zero-trust- microsegmentation networking.md b/sources/tech/20190402 Zero-trust- microsegmentation networking.md
new file mode 100644
index 0000000000..864bd8eea4
--- /dev/null
+++ b/sources/tech/20190402 Zero-trust- microsegmentation networking.md
@@ -0,0 +1,137 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Zero-trust: microsegmentation networking)
+[#]: via: (https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Zero-trust: microsegmentation networking
+======
+
+### Microsegmentation gives administrators the control to set granular policies in order to protect the application environment.
+
+![Aaron Burson \(CC0\)][1]
+
+The transformation to the digital age has introduced significant changes to cloud and data center environments, compelling organizations to innovate more quickly than ever before. This, however, brings both advantages and disadvantages.
+
+The network and security need to keep up with this rapid pace of change. If you cannot keep pace with the [digital age][2], bad actors will ultimately become a hazard. Therefore, organizations must move to a [zero-trust environment][3]: default deny, with least-privilege access. In today’s evolving digital world, this is the key to success.
+
+Ideally, a comprehensive solution must provide protection across all platforms, including legacy servers, VMs, and services in public clouds, whether on-premises or off-premises, hosted, managed, or self-managed. We are going to stay hybrid for a long time, so we need to equip our architecture with [zero-trust][4].
+
+**[ Don’t miss [customer reviews of top remote access tools][5] and see [the most powerful IoT companies][6]. | Get daily insights by [signing up for Network World newsletters][7]. ]**
+
+We need the ability to support all of these hybrid environments, with analysis at the process, data-flow, and infrastructure levels. As a matter of fact, there is never just one element to analyze within a network in order to create an effective security posture.
+
+Adequately securing such an environment requires a solution with several key components: appropriate visibility, microsegmentation, and breach detection. Let's learn more about one of these primary elements: zero-trust microsegmentation networking.
+
+There are a variety of microsegmentation vendors, all with competing platforms: SDN-based, container-centric, and network-appliance-based (physical or virtual), to name just a few.
+
+## What is microsegmentation?
+
+Microsegmentation is the ability to put a wrapper around the access control for each component of an application. Gone are the days when we could simply impose a block on source/destination addresses and port numbers, or higher up the stack on protocols such as HTTP or HTTPS.
+
+As communication patterns become more complex, isolating the communication flows between entities by following microsegmentation principles has become a necessity.
+
+## Why is microsegmentation important?
+
+Microsegmentation gives administrators the control to set granular policies in order to protect the application environment. It defines the rules and policies for how an application can communicate within its tier. The policies are granular (far more granular than what we had before) and restrict communication to only those hosts that are explicitly allowed to talk to each other.
+
+Ultimately, this reduces the available attack surface and locks down the bad actors’ ability to move laterally within the application infrastructure. Why? Because it governs the application’s activity at a granular level, thereby improving the entire security posture. Traditional zone-based networking no longer cuts it in today’s [digital world][8].
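+
+To make the principle concrete, here is a minimal sketch in Python of a default-deny policy table. It is a hypothetical illustration, not any vendor’s engine, and the tier names and services are invented: a flow that is not explicitly listed simply cannot happen.
+
+```
+# Hypothetical default-deny policy: only the listed tier-to-tier flows,
+# on the listed services, are permitted; everything else is dropped.
+ALLOWED_FLOWS = {
+    ("web", "app"): {"tcp/8080"},
+    ("app", "db"):  {"tcp/5432"},
+}
+
+def is_allowed(src_tier: str, dst_tier: str, service: str) -> bool:
+    """Return True only for an explicitly whitelisted flow (least privilege)."""
+    return service in ALLOWED_FLOWS.get((src_tier, dst_tier), set())
+
+print(is_allowed("web", "app", "tcp/8080"))  # True: a sanctioned flow
+print(is_allowed("web", "db", "tcp/5432"))   # False: lateral movement denied
+```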
+
+## General networking
+
+Let's start with the basics. We all know that with security, you are only as strong as your weakest link. As a result, enterprises have begun to further segment networks into microsegments. Some call them nanosegments.
+
+But first, let’s recap on what we actually started with in the initial stage: nothing! We had IP addresses that were used for connectivity, but unfortunately they have no built-in authentication mechanism. Why? Because it wasn’t a requirement back then.
+
+Network connectivity based on network routing protocols was primarily used for sharing resources. A printer, 30 years ago, could cost the same as a house, so connectivity and the sharing of resources were important. The authentication of the communication endpoints was not considered significant.
+
+## Broadcast domains
+
+As networks grew in size, virtual LANs (VLANs) were introduced to divide the broadcast domains and improve network performance. A broadcast domain is a logical division of a computer network. All nodes can reach each other by sending a broadcast at the data link layer. When the broadcast domain swells, the network performance takes a hit.
+
+Over time, the role of the VLAN grew to include use as a security tool, but that was never its purpose. VLANs were meant to improve performance, not to isolate resources. The problem with VLANs is that there is no intra-VLAN filtering; they grant a very broad level of access and trust. If bad actors gain access to one segment in the zone, they should not be able to compromise another device within that zone, but with VLANs this is a strong possibility.
+
+Hence, a VLAN offers the bad actor a pretty large attack surface to play with and to move across laterally without inspection. Such lateral movements are really hard to detect with traditional architectures.
+
+Therefore, enterprises were forced to switch to microsegmentation, which further segments networks within the zone. However, virtualization complicates the segmentation process: a virtualized server may have only a single physical network port, yet it supports numerous logical networks where services and applications reside across multiple security zones.
+
+Thus, microsegmentation needs to work at both the physical network layer and within the virtualized networking layer. As you are aware, traffic patterns have changed, and the good thing about microsegmentation is that it controls both the north-south and the east-west movement of traffic, further limiting the size of broadcast domains.
+
+## Microsegmentation – a multi-stage process
+
+Implementing microsegmentation is a multi-stage process, and certain prerequisites must be met before implementation. First, you need to fully understand the communication patterns and map the flows and all the application dependencies.
+
+Only once this is done can you enable microsegmentation in a platform-agnostic manner across all the environments. Segmenting your network appropriately creates a dark network until the administrator turns on the lights: authentication is performed first, and only then is access granted to the communicating entities, operating with zero trust and least-privilege access.
+
+When entities connect, they need to run through a number of technologies in order to be fully connected. Microsegmentation is not a one-off check; it is a continuous process of making sure that both entities are doing what they are supposed to do.
+
+This ensures that everyone is doing what they are entitled to do. You want to reduce the unnecessary cross-talk to an absolute minimum and only allow communication that is a complete necessity.
+
+## How do you implement microsegmentation?
+
+First, you need strong visibility, not just at the traffic-flow level but also at the process and data contextual level. Without granular application visibility, it’s impossible to map and fully understand what constitutes normal traffic flows and irregular application communication patterns.
+
+Visibility cannot be mapped out manually, as there could be hundreds of workloads, so an automated approach must be taken; manual mapping is error-prone and inefficient. The visibility also needs to be in real time: a static snapshot of the application architecture, even one down to the process level, will not tell you anything about which behaviors are sanctioned or unsanctioned.
+
+You also need to make sure that you are not under-segmenting, as we did in the old days. Primarily, microsegmentation must manage communication workflows all the way up to Layer 7 of the Open Systems Interconnection (OSI) model. Layer 4 microsegmentation focuses only on the transport layer; if you segment the network only at Layer 4, you are widening your attack surface and opening the network to compromise.
+
+Segmenting right up to the application layer means you are locking down lateral movements, open ports, and protocols. It enables you to restrict access by source and destination process rather than by source and destination port numbers.
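+
+As a rough illustration of the difference, the Python sketch below keys a rule on the communicating processes and even the HTTP verb rather than on port numbers. The process names and verbs are invented for the example; real products express this in their own policy languages.
+
+```
+# Hypothetical Layer 7 rules: identity is the process (and the HTTP verb),
+# not a port number that any process could squat on.
+L7_RULES = [
+    {"src": "nginx", "dst": "order-api", "methods": {"GET", "POST"}},
+]
+
+def l7_allowed(src_process: str, dst_process: str, method: str) -> bool:
+    return any(rule["src"] == src_process
+               and rule["dst"] == dst_process
+               and method in rule["methods"]
+               for rule in L7_RULES)
+
+print(l7_allowed("nginx", "order-api", "POST"))    # True
+print(l7_allowed("nginx", "order-api", "DELETE"))  # False: verb not granted
+```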
+
+## Security issues with hybrid cloud
+
+Since the [network perimeter][9] has been removed, it has become difficult to bolt on the traditional security tools. Traditionally, we could position a static perimeter around the network infrastructure, but this is not an available option today: we have a mixture of containerized applications and, for example, legacy database servers, with the legacy world communicating with the containerized one.
+
+Hybrid cloud enables organizations to use different types of cloud architectures, including on-premises systems and new technologies such as containers. We are going to have hybrid clouds for a long time to come, and they will change the way we think about networking; hybrid forces organizations to rethink their network architectures.
+
+When you attach the microsegmentation policies to the workload itself, the policies travel with the workload; it no longer matters whether the entity moves on-premises or to the cloud. If the workload scales up, down, or horizontally, the policy goes with it. And if you go deeper than the workload, to the process level, you can set even more granular microsegmentation controls.
+
+## Identity
+
+However, this is the point where identity becomes a challenge. If things are scaling and becoming dynamic, you can’t tie policies to IP addresses. Rather than using IP addresses as the basis for microsegmentation, policies are based on logical (not physical) attributes.
+
+With microsegmentation, the workload identity is based on logical attributes, such as multi-factor authentication (MFA), a transport layer security (TLS) certificate, the application service, or a logical label associated with the workload.
+
+These are what are known as logical attributes. Ultimately the policies map to IP addresses, but they are defined using the logical attributes, not the physical ones. As we progress in this technological era, the IP address is less relevant; named data networking is a perfect example.
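+
+To make that concrete, here is a small Python sketch, with an invented inventory, of how a label-based policy might be resolved to whatever IP addresses the matching workloads currently hold:
+
+```
+# Hypothetical inventory, refreshed as workloads move, scale, or respawn.
+WORKLOAD_LABELS = {
+    "10.0.1.7":  {"role": "payments", "env": "prod"},
+    "10.0.2.31": {"role": "payments", "env": "prod"},
+    "10.0.3.9":  {"role": "reporting", "env": "dev"},
+}
+
+def ips_for(selector: dict) -> set:
+    """Resolve a logical selector to the IPs that currently satisfy it."""
+    return {ip for ip, labels in WORKLOAD_LABELS.items()
+            if all(labels.get(k) == v for k, v in selector.items())}
+
+# The policy itself never names an address, so it survives scaling and moves.
+print(ips_for({"role": "payments", "env": "prod"}))
+```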
+
+TLS certificates are another identity method for microsegmentation. If the traffic is encrypted with a different TLS certificate or comes from an invalid source, it automatically gets dropped, even if it comes from the right location. It gets blocked because it does not have the right identity.
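+
+A minimal sketch of that behavior, using Python’s standard ssl module and assumed certificate file paths, might look like this: a peer that cannot present a certificate signed by the expected CA is rejected during the handshake, before any application traffic flows.
+
+```
+import socket
+import ssl
+
+# Sketch only: the certificate file paths are assumptions for the example.
+def identity_checked_listener(host: str = "0.0.0.0", port: int = 8443):
+    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
+    ctx.load_verify_locations(cafile="trusted-ca.pem")
+    ctx.verify_mode = ssl.CERT_REQUIRED  # wrong or missing cert: handshake fails
+    sock = socket.create_server((host, port))
+    return ctx.wrap_socket(sock, server_side=True)
+```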
+
+You can even extend that further and look inside the actual payload. If an entity is allowed only to perform a hypertext transfer protocol (HTTP) POST to a record and it tries to perform any other operation, it will get blocked.
+
+## Policy enforcement
+
+Practically, all of these policies can be implemented and enforced in different places throughout the network. However, if you enforce in only one place, that point in the network can become compromised and become an entry door for the bad actor. If you instead enforce at, say, 10 different network points, then even if 2 of them are subverted, the other 8 will still protect you.
+
+Zero-trust microsegmentation ensures that you can enforce in different points throughout the network and also with different mechanics.
+
+**This article is published as part of the IDG Contributor Network. [Want to Join?][10]**
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/hive-structured_windows_architecture_connections_connectivity_network_lincoln_park_pavilion_chicago_by_aaron_burson_cc0_via_unsplash_1200x800-100765880-large.jpg
+[2]: https://youtu.be/AnMQH_noNDo
+[3]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
+[4]: https://network-insight.net/2018/09/embrace-zero-trust-networking/
+[5]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
+[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
+[7]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[8]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/
+[9]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
+[10]: /contributor-network/signup.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190403 5 useful open source log analysis tools.md b/sources/tech/20190403 5 useful open source log analysis tools.md
deleted file mode 100644
index 72522edd3d..0000000000
--- a/sources/tech/20190403 5 useful open source log analysis tools.md
+++ /dev/null
@@ -1,124 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 useful open source log analysis tools)
-[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
-[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
-
-5 useful open source log analysis tools
-======
-Monitoring network activity is as important as it is tedious. These
-tools can make it easier.
-![People work on a computer server][1]
-
-Monitoring network activity can be a tedious job, but there are good reasons to do it. For one, it allows you to find and investigate suspicious logins on workstations, devices connected to networks, and servers while identifying sources of administrator abuse. You can also trace software installations and data transfers to identify potential issues in real time rather than after the damage is done.
-
-Those logs also go a long way towards keeping your company in compliance with the [General Data Protection Regulation][2] (GDPR) that applies to any entity operating within the European Union. If you have a website that is viewable in the EU, you qualify.
-
-Logging—both tracking and analysis—should be a fundamental process in any monitoring infrastructure. A transaction log file is necessary to recover a SQL server database from disaster. Further, by tracking log files, DevOps teams and database administrators (DBAs) can maintain optimum database performance or find evidence of unauthorized activity in the case of a cyber attack. For this reason, it's important to regularly monitor and analyze system logs. It's a reliable way to re-create the chain of events that led up to whatever problem has arisen.
-
-There are quite a few open source log trackers and analysis tools available today, making choosing the right resources for activity logs easier than you think. The free and open source software community offers log designs that work with all sorts of sites and just about any operating system. Here are five of the best I've used, in no particular order.
-
-### Graylog
-
-[Graylog][3] started in Germany in 2011 and is now offered as either an open source tool or a commercial solution. It is designed to be a centralized log management system that receives data streams from various servers or endpoints and allows you to browse or analyze that information quickly.
-
-![Graylog screenshot][4]
-
-Graylog has built a positive reputation among system administrators because of its ease in scalability. Most web projects start small but can grow exponentially. Graylog can balance loads across a network of backend servers and handle several terabytes of log data each day.
-
-IT administrators will find Graylog's frontend interface to be easy to use and robust in its functionality. Graylog is built around the concept of dashboards, which allows you to choose which metrics or data sources you find most valuable and quickly see trends over time.
-
-When a security or performance incident occurs, IT administrators want to be able to trace the symptoms to a root cause as fast as possible. Search functionality in Graylog makes this easy. It has built-in fault tolerance that can run multi-threaded searches so you can analyze several potential threats together.
-
-### Nagios
-
-[Nagios][5] started with a single developer back in 1999 and has since evolved into one of the most reliable open source tools for managing log data. The current version of Nagios can integrate with servers running Microsoft Windows, Linux, or Unix.
-
-![Nagios Core][6]
-
-Its primary product is a log server, which aims to simplify data collection and make information more accessible to system administrators. The Nagios log server engine will capture data in real-time and feed it into a powerful search tool. Integrating with a new endpoint or application is easy thanks to the built-in setup wizard.
-
-Nagios is most often used in organizations that need to monitor the security of their local network. It can audit a range of network-related events and help automate the distribution of alerts. Nagios can even be configured to run predefined scripts if a certain condition is met, allowing you to resolve issues before a human has to get involved.
-
-As part of network auditing, Nagios will filter log data based on the geographic location where it originates. That means you can build comprehensive dashboards with mapping technology to understand how your web traffic is flowing.
-
-### Elastic Stack (the "ELK Stack")
-
-[Elastic Stack][7], often called the ELK Stack, is one of the most popular open source tools among organizations that need to sift through large sets of data and make sense of their system logs (and it's a personal favorite, too).
-
-![ELK Stack][8]
-
-Its primary offering is made up of three separate products: Elasticsearch, Kibana, and Logstash:
-
- * As its name suggests, _**Elasticsearch**_ is designed to help users find matches within datasets using a wide range of query languages and types. Speed is this tool's number one advantage. It can be expanded into clusters of hundreds of server nodes to handle petabytes of data with ease.
-
- * _**Kibana**_ is a visualization tool that runs alongside Elasticsearch to allow users to analyze their data and build powerful reports. When you first install the Kibana engine on your server cluster, you will gain access to an interface that shows statistics, graphs, and even animations of your data.
-
- * The final piece of ELK Stack is _**Logstash**_ , which acts as a purely server-side pipeline into the Elasticsearch database. You can integrate Logstash with a variety of coding languages and APIs so that information from your websites and mobile applications will be fed directly into your powerful Elastic Stalk search engine.
-
-
-
-
-A unique feature of ELK Stack is that it allows you to monitor applications built on open source installations of WordPress. In contrast to most out-of-the-box security audit log tools that [track admin and PHP logs][9] but little else, ELK Stack can sift through web server and database logs.
-
-Poor log tracking and database management are one of the [most common causes of poor website performance][10]. Failure to regularly check, optimize, and empty database logs can not only slow down a site but could lead to a complete crash as well. Thus, the ELK Stack is an excellent tool for every WordPress developer's toolkit.
-
-### LOGalyze
-
-[LOGalyze][11] is an organization based in Hungary that builds open source tools for system administrators and security experts to help them manage server logs and turn them into useful data points. Its primary product is available as a free download for either personal or commercial use.
-
-![LOGalyze][12]
-
-LOGalyze is designed to work as a massive pipeline in which multiple servers, applications, and network devices can feed information using the Simple Object Access Protocol (SOAP) method. It provides a frontend interface where administrators can log in to monitor the collection of data and start analyzing it.
-
-From within the LOGalyze web interface, you can run dynamic reports and export them into Excel files, PDFs, or other formats. These reports can be based on multi-dimensional statistics managed by the LOGalyze backend. It can even combine data fields across servers or applications to help you spot trends in performance.
-
-LOGalyze is designed to be installed and configured in less than an hour. It has prebuilt functionality that allows it to gather audit data in formats required by regulatory acts. For example, LOGalyze can easily run different HIPAA reports to ensure your organization is adhering to health regulations and remaining compliant.
-
-### Fluentd
-
-If your organization has data sources living in many different locations and environments, your goal should be to centralize them as much as possible. Otherwise, you will struggle to monitor performance and protect against security threats.
-
-[Fluentd][13] is a robust solution for data collection and is entirely open source. It does not offer a full frontend interface but instead acts as a collection layer to help organize different pipelines. Fluentd is used by some of the largest companies worldwide but can be implemented in smaller organizations as well.
-
-![Fluentd architecture][14]
-
-The biggest benefit of Fluentd is its compatibility with the most common technology tools available today. For example, you can use Fluentd to gather data from web servers like Apache, sensors from smart devices, and dynamic records from MongoDB. What you do with that data is entirely up to you.
-
-Fluentd is based around the JSON data format and can be used in conjunction with [more than 500 plugins][15] created by reputable developers. This allows you to extend your logging data into other applications and drive better analysis from it with minimal manual effort.
-
-### The bottom line
-
-If you aren't already using activity logs for security reasons, governmental compliance, and measuring productivity, commit to changing that. There are plenty of plugins on the market that are designed to work with multiple environments and platforms, even on your internal network. Don't wait for a serious incident to justify taking a proactive approach to logs maintenance and oversight.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/log-analysis-tools
-
-作者:[Sam Bocetta][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/sambocetta
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server)
-[2]: https://opensource.com/article/18/4/gdpr-impact
-[3]: https://www.graylog.org/products/open-source
-[4]: https://opensource.com/sites/default/files/uploads/graylog-data.png (Graylog screenshot)
-[5]: https://www.nagios.org/downloads/
-[6]: https://opensource.com/sites/default/files/uploads/nagios_core_4.0.8.png (Nagios Core)
-[7]: https://www.elastic.co/products
-[8]: https://opensource.com/sites/default/files/uploads/elk-stack.png (ELK Stack)
-[9]: https://www.wpsecurityauditlog.com/benefits-wordpress-activity-log/
-[10]: https://websitesetup.org/how-to-speed-up-wordpress/
-[11]: http://www.logalyze.com/
-[12]: https://opensource.com/sites/default/files/uploads/logalyze.jpg (LOGalyze)
-[13]: https://www.fluentd.org/
-[14]: https://opensource.com/sites/default/files/uploads/fluentd-architecture.png (Fluentd architecture)
-[15]: https://opensource.com/article/18/9/open-source-log-aggregation-tools
diff --git a/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md
new file mode 100644
index 0000000000..826cd9d413
--- /dev/null
+++ b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel unveils an epic response to AMD’s server push)
+[#]: via: (https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Intel unveils an epic response to AMD’s server push
+======
+
+### Intel introduced more than 50 new Xeon Scalable Processors for servers that cover a variety of workloads.
+
+![Intel][1]
+
+Intel on Tuesday introduced its second-generation Xeon Scalable Processors for servers, developed under the codename Cascade Lake, and it’s clear AMD has lit a fire under a once complacent company.
+
+These new Xeon SP processors max out at 28 cores and 56 threads, a bit shy of AMD’s Epyc server processors with their 32 cores and 64 threads, but independent benchmarks are still to come and may show Intel with a lead in single-core performance.
+
+And for absolute overkill, there is the Xeon SP Platinum 9200 Series, which sports 56 cores and 112 threads. It will also require up to 400W of power, more than twice what the high-end Xeons usually consume.
+
+**[ Now read: [What is quantum computing (and why enterprises should care)][2] ]**
+
+The new processors were unveiled at a big event at Intel’s headquarters in Santa Clara, California, and live-streamed on the web. [Newly minted CEO][3] Bob Swan kicked off the event, saying the new processors were the “first truly data-centric portfolio for our customers.”
+
+“For the last several years, we have embarked on a journey to transform from a PC-centric company to a data-centric computing company and build the silicon processors with our partners to help our customers prosper and grow in an increasingly data-centric world,” he added.
+
+He also said the move to a data-centric world isn’t just CPUs, but a suite of accelerant technologies, including the [Agilex FPGA processors][4], Optane memory, and more.
+
+This launch is the largest Xeon launch in the company’s history, with more than 50 processor designs across the Xeon 8200 and 9200 lines. While something like that can lead to confusion, many of these are specific to certain workloads instead of general-purpose processors.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
+
+Cascade Lake chips are the replacement for the previous Skylake platform, and the mainstream Cascade Lake chips have the same architecture as the Purley motherboard used by Skylake. Like the current Xeon Scalable processors, they have up to 28 cores with up to 38.5 MB of L3 cache, but speeds and feeds have been bumped up.
+
+The Cascade Lake generation supports the new UPI (Ultra Path Interconnect) high-speed interconnect, up to six memory channels, AVX-512 support, and up to 48 PCIe lanes. Memory capacity has been doubled from 768GB to 1.5TB per socket. The chips work in the same socket as Purley motherboards and are built on a 14nm manufacturing process.
+
+Some of the new Xeons, however, can access up to 4.5TB of memory per processor: 1.5TB of memory and 3TB of Optane memory, the new persistent memory that sits between DRAM and NAND flash memory and acts as a massive cache for both.
+
+## Built-in fixes for Meltdown and Spectre vulnerabilities
+
+Most important, though, is that these new Xeons have built-in fixes for the Meltdown and Spectre vulnerabilities. There are existing fixes for the exploits, but they reduce performance by an amount that varies with the workload. A slide Intel showed at the event indicates the company is using a combination of firmware and software mitigations.
+
+New features also include Intel Deep Learning Boost (DL Boost), a technology developed to accelerate vector computing that Intel said makes this the first CPU with built-in inference acceleration for AI workloads. It works with the AVX-512 extension, which should make it ideal for machine learning scenarios.
+
+Most of the new Xeons are available now, except for the 9200 Platinum, which is coming in the next few months. Many Intel partners, including Dell, Cray, Cisco, and Supermicro, have new products, with Supermicro alone launching more than 100 new products built around Cascade Lake.
+
+## Intel also rolls out Xeon D-1600 series processors
+
+In addition to its hot-rod Xeons, Intel also rolled out the Xeon D-1600 series processors, a low-power variant based on a completely different architecture. Xeon D-1600 series processors are designed for space- and/or power-constrained environments, such as edge network devices and base stations.
+
+Along with the new Xeons and FPGA chips, Intel also announced the Intel Ethernet 800 series adapter, which supports 25, 50 and 100 Gigabit transfer speeds.
+
+Thank you, AMD. This is what competition looks like.
+
+Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/intel-xeon-family-1-100792811-large.jpg
+[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
+[3]: https://www.networkworld.com/article/3336921/intel-promotes-swan-to-ceo-bumps-off-itanium-and-eyes-mellanox.html
+[4]: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md
new file mode 100644
index 0000000000..72d566a7d0
--- /dev/null
+++ b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh)
+[#]: via: (https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh
+======
+
+![istock][1]
+
+We’re now at the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the _Top Ten Reasons to Think Outside the Router._ Click for the [#2][3], [#3][4], [#4][5], [#5][6], [#6][7], [#7][8], [#8][9], [#9][10] and [#10][11] reasons to retire traditional branch routers.
+
+_**The #1 reason it’s time to retire conventional routers at the branch: your branch routers are coming due for a refresh – the perfect time to evaluate new options.**_
+
+Your WAN architecture is due for a branch router refresh! You’re under immense pressure to advance your organization’s digital transformation initiatives and deliver a high quality of experience to your users and customers. Your applications – at least SaaS apps – are all cloud-based. You know you need to move more quickly to keep pace with changing business requirements to realize the transformational promise of the cloud. And, you’re dealing with shifting traffic patterns and an insatiable appetite for more bandwidth at branch sites to support your users and applications. Finally, you know your IT budget for networking isn’t going to increase.
+
+_So, what’s next?_ You really only have three options when it comes to refreshing your WAN. You can continue to try and stretch your conventional router-centric model. You can choose a basic [SD-WAN][12] model that may or may not be good enough. Or you can take a new approach and deploy a business-driven SD-WAN edge platform.
+
+### **The pitfalls of a router-centric model**
+
+![][13]
+
+The router-centric approach worked well when enterprise applications were hosted in the data center, before the advent of the cloud, and all traffic was routed directly from branch offices to the data center. With the emergence of the cloud, businesses were forced to conform to the constraints of the network when deploying new applications or making network changes. This is a bottom-up, device-centric approach in which the network becomes a bottleneck to the business.
+
+A router-centric approach requires manual device-by-device configuration, resulting in endless hours of manual programming and making it extremely difficult for network administrators to scale without major challenges in configuration, outages, and troubleshooting. Any change that arises when deploying a new application or modifying a QoS or security policy once again requires manually programming every router at every branch across the network. Re-programming is time-consuming and relies on a complex, cumbersome CLI, further adding to the inefficiencies of the model. In short, the router-centric WAN has hit the wall.
+
+### **Basic SD-WAN, a step in the right direction**
+
+![][14]
+
+In this model, businesses realize the benefit of foundational features, but it falls short of the goal of a fully automated, business-driven network. A basic SD-WAN approach cannot provide what the business really needs, including the ability to deliver the best quality of experience for users.
+
+Basic SD-WAN features include the ability to use multiple forms of transport, path selection, centralized management, zero-touch provisioning, and encrypted VPN overlays. However, a basic SD-WAN is lacking in many areas:
+
+ * Limited end-to-end orchestration of WAN edge network functions
+ * Rudimentary path selection with traffic steering limited to pre-defined rules
+ * Long fail-over times in response to WAN transport outages
+ * Inability to use links when they experience brownouts due to link congestion or packet loss
+ * Fixed application definitions and manually scripted ACLs to control traffic steering across the internet
+
+
+
+### **The solution: shift to a business-first networking model**
+
+![][15]
+
+In this model, the network enables the business. The WAN is transformed into a business accelerant that is fully automated and continuous, giving every application the resources it truly needs while delivering 10x the bandwidth for the same budget, ultimately achieving the highest quality of experience for users and IT alike. With a business-first networking model, the network functions (SD-WAN, firewall, segmentation, routing, WAN optimization, and application visibility and control) are unified in a single platform and are centrally orchestrated and managed. Top-down business intent is the driver, enabling businesses to unlock the full transformational promise of the cloud.
+
+The business-driven [Silver Peak® EdgeConnect™ SD-WAN][16] edge platform was built for the cloud, enabling enterprises to liberate their applications from the constraints of existing WAN approaches. EdgeConnect offers the following advanced capabilities:
+
+1\. Automates traffic steering and security policy enforcement based on business intent instead of TCP/IP addresses, delivering the highest Quality of Experience for users
+
+2\. Actively embraces broadband to increase application performance and availability while lowering costs
+
+3\. Securely and directly connects branch users to SaaS and IaaS cloud services
+
+4\. Increases operational efficiency while increasing business agility and time-to-market via centralized orchestration
+
+Silver Peak has more than 1,000 enterprise customer deployments across a range of vertical industries. Bentley Systems, [Nuffield Health][17] and [Solis Mammography][18] have all realized tangible business outcomes from their EdgeConnect deployments.
+
+![][19]
+
+Learn why the time is now to [think outside the router][20]!
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-478729482-100792542-large.jpg
+[2]: https://www.silver-peak.com/why-silver-peak
+[3]: http://blog.silver-peak.com/think-outside-the-router-reason-2-simplify-and-consolidate-the-wan-edge
+[4]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
+[5]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
+[6]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
+[7]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
+[8]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
+[9]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
+[10]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
+[11]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
+[12]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[13]: https://images.idgesg.net/images/article/2019/04/1_router-centric-vs-business-first-100792538-medium.jpg
+[14]: https://images.idgesg.net/images/article/2019/04/2_basic-sd-wan-vs-business-first-100792539-medium.jpg
+[15]: https://images.idgesg.net/images/article/2019/04/3_bus-first-networking-model-100792540-large.jpg
+[16]: https://www.silver-peak.com/products/unity-edge-connect
+[17]: https://www.silver-peak.com/resource-center/nuffield-health-deploys-uk-wide-sd-wan-silver-peak
+[18]: https://www.silver-peak.com/resource-center/national-leader-mammography-services-accelerates-access-life-critical-scans
+[19]: https://images.idgesg.net/images/article/2019/04/4_real-world-business-outcomes-100792541-large.jpg
+[20]: https://www.silver-peak.com/think-outside-router
diff --git a/sources/tech/20190404 9 features developers should know about Selenium IDE.md b/sources/tech/20190404 9 features developers should know about Selenium IDE.md
new file mode 100644
index 0000000000..b099da68e2
--- /dev/null
+++ b/sources/tech/20190404 9 features developers should know about Selenium IDE.md
@@ -0,0 +1,158 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (9 features developers should know about Selenium IDE)
+[#]: via: (https://opensource.com/article/19/4/features-selenium-ide)
+[#]: author: (Al Sargent https://opensource.com/users/alsargent)
+
+9 features developers should know about Selenium IDE
+======
+The new Selenium IDE brings the benefits of functional test automation
+to many IT professionals—and to frontend developers specifically.
+![magnifying glass on computer screen][1]
+
+There has long been a stigma associated with using record-and-playback tools for testing rather than scripted QA automation tools like [Selenium Webdriver][2], [Cypress][3], and [WebdriverIO][4].
+
+Record-and-playback tools are perceived to suffer from many issues, including a lack of cross-browser support, no way to run scripts in parallel or from CI build scripts, poor support for responsive web apps, and no way to quickly diagnose frontend bugs.
+
+Needless to say, it's been somewhat of a rough road for these tools, and after Selenium IDE [went end-of-life][5] in 2017, many thought the road for record and playback would end altogether.
+
+Well, it turns out this perception was wrong. Not long after the Selenium IDE project was discontinued, my colleagues at [Applitools approached the Selenium open source community][6] to see how they could help.
+
+Since then, much of Selenium IDE's code has been revamped. The code is now freely available on GitHub under an Apache 2.0 license, managed by the Selenium community, and supported by [two full-time engineers][7], one of whom literally wrote the book on [Selenium testing][8].
+
+![Selenium IDE's GitHub repository][9]
+
+The new Selenium IDE brings the benefits of functional test automation to many IT professionals—and to frontend developers specifically. Here are nine things developers should know about the new Selenium IDE.
+
+### 1\. Selenium IDE is now cross-browser
+
+When the record-and-playback tool first came out in 2006, Firefox was the shiny new browser it hitched its wagon to, and it remained that way for a decade. No more! Selenium IDE is now available as a [Google Chrome Extension][10] and [Firefox Add-on][11].
+
+Even better, Selenium IDE can run its tests on Selenium WebDriver servers by using Selenium IDE's new command-line test runner, [SIDE Runner][12]. SIDE Runner blends elements of Selenium IDE and Selenium Webdriver. It takes a Selenium IDE script, saved as a [**.side** file][13], and runs it using browser drivers such as [ChromeDriver][14], [EdgeDriver][15], Firefox's [Geckodriver][16], [IEDriver][17], and [SafariDriver][18].
+
+SIDE Runner and the other drivers above are available as [straightforward npm installs][12]. Here's what it looks like in action.
+
+![SIDE Runner][19]
+
+### 2\. No more brittle functional tests
+
+For years, brittle tests have been an issue for functional tests—whether you record them or code them by hand. Now that developers are releasing new features more frequently, their user interface (UI) code is constantly changing as well. When a UI changes, object locators often change, too.
+
+Selenium IDE fixes that by capturing multiple object locators when you record your script. During playback, if Selenium IDE can't find one locator, it tries each of the other locators until it finds one that works. Your test will fail only if none of the locators work. This doesn't guarantee scripts will always play back, but it does insulate scripts against numerous changes. As you can see below, Selenium IDE captures linkText, an xPath expression, and CSS-based locators.
+
+![Selenium IDE captures linkText, an xPath expression, and CSS-based locators][20]
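+
+For comparison, here is roughly what that fallback strategy looks like if you hand-code it with the Python WebDriver bindings; the locators are invented for the example, and Selenium IDE records and replays the equivalent for you automatically.
+
+```
+from selenium.common.exceptions import NoSuchElementException
+from selenium.webdriver.common.by import By
+
+# Each entry is one recorded way of finding the same element.
+FALLBACK_LOCATORS = [
+    (By.LINK_TEXT, "Sign in"),
+    (By.XPATH, "//a[@id='signin']"),
+    (By.CSS_SELECTOR, "a#signin"),
+]
+
+def find_with_fallback(driver):
+    """Try each locator in turn; fail only if none of them matches."""
+    for how, what in FALLBACK_LOCATORS:
+        try:
+            return driver.find_element(how, what)
+        except NoSuchElementException:
+            continue
+    raise NoSuchElementException("none of the recorded locators matched")
+```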
+
+### 3\. Conditional logic to handle UI features
+
+When testing web apps, scripts have to handle intermittent UI elements that can randomly appear in your app. These come in the form of cookie notices, popups for special offers, quote requests, newsletter subscriptions, paywall notifications, adblocker requests, and more.
+
+Conditional logic is a great way to handle these intermittent UI features. Developers can easily insert conditional logic—also called control flow—into Selenium IDE scripts. [Here are details][21] and how it looks.
+
+![Selenium IDE's Conditional logic][22]
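+
+The hand-coded equivalent of such a guard, again in the Python WebDriver bindings with an invented selector, looks like this; in a Selenium IDE script the same check is expressed with the control-flow commands instead.
+
+```
+from selenium.common.exceptions import NoSuchElementException
+from selenium.webdriver.common.by import By
+
+def dismiss_if_present(driver, css_selector: str = "#cookie-banner .accept"):
+    """Click an intermittent UI element if it appeared; carry on if not."""
+    try:
+        driver.find_element(By.CSS_SELECTOR, css_selector).click()
+    except NoSuchElementException:
+        pass  # the banner didn't appear on this run; the test continues
+```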
+
+### 4\. Support for embedded code
+
+As broad as the new [Selenium IDE API][23] is, it doesn’t do everything. For this reason, Selenium IDE has **[execute script][24]** and **[execute async script][25]** commands that let your script call a JavaScript snippet.
+
+This provides developers with a tremendous amount of flexibility to take advantage of JavaScript's flexibility and wide range of libraries. To use it, click on the test step where you want JavaScript to run, choose **Insert New Command** , and enter **execute script** or **execute async script** in the command field, as shown below.
+
+![Selenium IDE's command line][26]
+
+### 5\. Selenium IDE runs from CI build scripts
+
+Because SIDE Runner is called from the command line, you can easily fit it into CI build scripts, so long as the CI server can call **selenium-side-runner** and upload the **.side** file (the test script) as a build artifact. For example, here’s how to upload an input file in [Jenkins][27], [Travis][28], and [CircleCI][29].
+
+This means Selenium IDE can be better integrated into the software development technology stack. In addition, the scripts created by less-technical QA team members—including business analysts—can run with every build. This helps better align QA with the developer so fewer bugs escape into production.
+
+### 6\. Support for third-party plugins
+
+Imagine companies building plugins to have Selenium IDE do all kinds of things, like uploading scripts to a functional testing cloud, a load testing cloud, or a production application monitoring service.
+
+Plenty of companies have integrated Selenium Webdriver into their offerings, and I bet the same will happen with Selenium IDE. You can also [build your own Selenium IDE plugin][30].
+
+### 7\. Visual UI testing
+
+Speaking of new plugins, Applitools introduced a new Selenium IDE plugin to add artificial intelligence-powered visual validations to the equation. It’s available through the [Chrome][31] and [Firefox][32] stores via a three-second install; just plug in the Applitools API key and go.
+
+Visual checkpoints are a great way to ensure a UI renders correctly. Rather than a bunch of assert statements on all the UI elements—which would be a pain to maintain—one visual checkpoint checks all your page elements.
+
+Best of all, visual AI looks at a web app the same way a human does, ignoring minor differences. This means fewer fake bugs to frustrate a development team.
+
+### 8\. Visually test responsive web apps
+
+When testing the visual layout of [responsive web apps][33], it’s best to do it on a wide range of screen sizes (also called viewports) to ensure nothing appears out of whack. It’s all too easy for responsive web bugs to creep in, and when they do, the problems can range from merely cosmetic to business-stopping.
+
+When you use visual UI testing for Selenium IDE, you can visually test your webpages on the Applitools [Visual Grid][34], which has more than 100 combinations of browsers, emulated devices, and viewport sizes.
+
+Once tests run on the Visual Grid, developers can easily check the test results on all the various combinations.
+
+![Selenium IDE's Visual Grid][35]
+
+### 9\. Responsive web bugs have nowhere to hide
+
+Selenium IDE can help pinpoint the cause of frontend bugs. Every Selenium IDE script that's run with the Visual Grid can be analyzed with Applitools' [Root Cause Analysis][36]. It's no longer enough to find a bug—developers also need to fix it.
+
+When a visual bug is discovered, it can be clicked on and just the relevant (not all) Document Object Model (DOM) and CSS differences will be displayed.
+
+![Finding visual bugs][37]
+
+In summary, like many emerging technologies in software development, Selenium IDE is part of a larger trend: making life easier and simpler for technical professionals so they can spend more time and effort creating code and getting even faster feedback.
+
+* * *
+
+_This article is based on[16 reasons why to use Selenium IDE in 2019 (and 2 why not)][38] originally published on the Applitools blog._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/features-selenium-ide
+
+作者:[Al Sargent][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alsargent
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
+[2]: https://www.seleniumhq.org/projects/webdriver/
+[3]: https://www.cypress.io/
+[4]: https://webdriver.io/
+[5]: https://seleniumhq.wordpress.com/2017/08/09/firefox-55-and-selenium-ide/
+[6]: https://seleniumhq.wordpress.com/2018/08/06/selenium-ide-tng/
+[7]: https://github.com/SeleniumHQ/selenium-ide/graphs/contributors
+[8]: http://davehaeffner.com/
+[9]: https://opensource.com/sites/default/files/uploads/selenium_ide_github_graphic_1.png (Selenium IDE's GitHub repository)
+[10]: https://chrome.google.com/webstore/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd
+[11]: https://addons.mozilla.org/en-US/firefox/addon/selenium-ide/
+[12]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/
+[13]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/#launching-the-runner
+[14]: http://chromedriver.chromium.org/
+[15]: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
+[16]: https://github.com/mozilla/geckodriver
+[17]: https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver
+[18]: https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
+[19]: https://opensource.com/sites/default/files/uploads/selenium_ide_side_runner_2.png (SIDE Runner)
+[20]: https://opensource.com/sites/default/files/uploads/selenium_ide_linktext_3.png (Selenium IDE captures linkText, an xPath expression, and CSS-based locators)
+[21]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/control-flow/
+[22]: https://opensource.com/sites/default/files/uploads/selenium_ide_conditional_logic_4.png (Selenium IDE's Conditional logic)
+[23]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/
+[24]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-script
+[25]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-async-script
+[26]: https://opensource.com/sites/default/files/uploads/selenium_ide_command_line_5.png (Selenium IDE's command line)
+[27]: https://stackoverflow.com/questions/27491789/how-to-upload-a-generic-file-into-a-jenkins-job
+[28]: https://docs.travis-ci.com/user/uploading-artifacts/
+[29]: https://circleci.com/docs/2.0/artifacts/
+[30]: https://www.seleniumhq.org/selenium-ide/docs/en/plugins/plugins-getting-started/
+[31]: https://chrome.google.com/webstore/detail/applitools-for-selenium-i/fbnkflkahhlmhdgkddaafgnnokifobik
+[32]: https://addons.mozilla.org/en-GB/firefox/addon/applitools-for-selenium-ide/
+[33]: https://en.wikipedia.org/wiki/Responsive_web_design
+[34]: https://applitools.com/visualgrid
+[35]: https://opensource.com/sites/default/files/uploads/selenium_ide_visual_grid_6.png (Selenium IDE's Visual Grid)
+[36]: https://applitools.com/root-cause-analysis
+[37]: https://opensource.com/sites/default/files/uploads/seleniumice_rootcauseanalysis_7.png (Finding visual bugs)
+[38]: https://applitools.com/blog/why-selenium-ide-2019
diff --git a/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md
new file mode 100644
index 0000000000..b2f8a59ab4
--- /dev/null
+++ b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them)
+[#]: via: (https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all)
+[#]: author: (Rob McKernan https://www.networkworld.com/author/Rob-McKernan/)
+
+Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them
+======
+
+### Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of edge computing technology
+
+![Getty Images][1]
+
+Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of [edge computing][2] technology to make their businesses more efficient, innovative and profitable. In the process, they’re coming face to face with challenges ranging from time to market to reliability of IT infrastructure.
+
+It’s a complex problem, especially when you consider the scope of what digital transformation entails. “Digital transformation is not simply a list of IT projects, it involves completely rethinking how an organization uses technology to pursue new revenue streams, products, services, and business models,” as the [research firm IDC says][3].
+
+Companies will be spending more than $650 billion per year on digital transformation efforts by 2024, a CAGR of more than 18.5% from 2018, according to the research firm [Market Research Engine][4].
+
+The drivers behind all that spending include Internet of Things (IoT) technology, which involves collecting data from machines and sensors covering every aspect of the organization. That is contributing to Big Data – the treasure trove of data that companies mine to find the keys to efficiency, opportunity and more. Artificial intelligence and machine learning are crucial to that effort, helping companies make sense of the mountains of data they’re creating and consuming, and to find opportunities.
+
+**Requirements for Edge Computing**
+
+All of these trends are creating the need for more and more compute power and data storage. And much of it needs to be close to the source of the data, and to those employees who are working with it. In other words, it’s driving the need for companies to build edge data centers or edge computing sites.
+
+Physically, these edge computing sites bear little resemblance to large, centralized data centers, but they have many of the same requirements in terms of performance, reliability, efficiency and security. Given they are typically in locations with few if any IT personnel, the data centers must have a high degree of automation and remote management capabilities. And to meet business requirements, they must be built quickly.
+
+**Answering the Call at the Edge**
+
+These are complex requirements, but if companies are to meet time-to-market goals and deal with the lack of IT personnel at the edge, they demand simple solutions.
+
+One solution is integration. We’re seeing this already in the IT space, with vendors delivering hyper-converged infrastructure that combines servers, storage, networking and software that is tightly integrated and delivered in a single enclosure. This saves IT groups valuable time in terms of procuring and configuring equipment and makes it far easier to manage over the long term.
+
+Now we’re seeing the same strategy applied to edge data centers. Prefabricated, modular data centers are an ideal solution for delivering edge data center capacity quickly and reliably. All the required infrastructure – power, cooling, racks, UPSs – can be configured and installed in a factory and delivered as a single, modular unit to the data center site (or multiple modules, depending on requirements).
+
+Given they’re built in a factory under controlled conditions, modular data centers are more reliable over the long haul. They can be configured with management software built-in, enabling remote management capabilities and a high degree of automation. And they can be delivered in weeks or months, not years – and in whatever size is required, including small “micro” data centers.
+
+Few companies, however, have all the components required to deliver a complete, functional data center, not to mention the expertise required to install and configure it. So, it takes effective partnerships to deliver complete edge data center solutions.
+
+**Tech Data Partnership Delivers at the Edge**
+
+APC by Schneider Electric has a long history of partnering to deliver complete solutions that address customer needs. Of the thousands of partnerships it has established over the years, the [25-year partnership][5] with [Tech Data][6] is particularly relevant for the digital transformation era.
+
+Tech Data is a $36.8 billion, Fortune 100 company that has established itself as the world’s leading end-to-end IT distributor. Power and physical infrastructure specialists from Tech Data team up with their counterparts from APC to deliver innovative solutions, including modular and [micro data centers][7]. Many of these solutions are pre-certified by major alliance partners, including IBM, HPE, Cisco, Nutanix, Dell EMC and others.
+
+To learn more, [access the full story][8] that explains how the Tech Data and APC partnership helps deliver [Certainty in a Connected World][9] and effective edge computing solutions that meet today’s time-to-market requirements.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all
+
+作者:[Rob McKernan][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rob-McKernan/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/gettyimages-494323751-942x445-100792905-large.jpg
+[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
+[3]: https://www.idc.com/getdoc.jsp?containerId=US43985717
+[4]: https://www.marketresearchengine.com/digital-transformation-market
+[5]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/full-resource.jsp
+[6]: https://www.techdata.com/
+[7]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
+[8]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/index.jsp
+[9]: https://www.apc.com/us/en/who-we-are/certainty-in-a-connected-world.jsp
diff --git a/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md
new file mode 100644
index 0000000000..3ec4b4600e
--- /dev/null
+++ b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel formally launches Optane for data center memory caching)
+[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Intel formally launches Optane for data center memory caching
+======
+
+### Intel formally launched the Optane persistent memory product line, which includes 3D Xpoint memory technology. The Intel-only solution is meant to sit between DRAM and NAND and to speed up performance.
+
+![Intel][1]
+
+As part of its [massive data center event][2] on Tuesday, Intel formally launched the Optane persistent memory product line. It had been out for a while, but the current generation of Xeon server processors could not fully utilize it. The new Xeon 8200 and 9200 lines take full advantage of it.
+
+And since Optane is an Intel product (co-developed with Micron), that means AMD and Arm server processors are out of luck.
+
+As I have [stated in the past][3], Optane DC Persistent Memory uses 3D Xpoint memory technology that Intel developed with Micron Technology. 3D Xpoint is a non-volatile memory type that is much faster than solid-state drives (SSD), almost at the speed of DRAM, but it has the persistence of NAND flash.
+
+**[ Read also:[Why NVMe? Users weigh benefits of NVMe-accelerated flash storage][4] and [IDC’s top 10 data center predictions][5] | Get regularly scheduled insights [Sign up for Network World newsletters][6] ]**
+
+The first 3D Xpoint products were SSDs called Intel’s ["ruler,"][7] because they were designed in a long, thin format similar to the shape of a ruler. They were designed that way to fit in 1U server chassis. As part of Tuesday’s announcement, Intel introduced the new Intel SSD D5-P4326 'Ruler' SSD, using quad-level cell (QLC) 3D NAND memory, with up to 1PB of storage in a 1U design.
+
+Optane DC Persistent Memory will be available in DIMM capacities from 128GB up to 512GB initially. That’s two to four times what you can get with DRAM, said Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, who keynoted the event.
+
+“We expect system capacity in a server system to scale to 4.5 terabytes per socket or 36 TB in an 8-socket system. That’s three times larger than what we were able to do with the first-generation of Xeon Scalable,” he said.
+
+## Intel Optane memory uses and speed
+
+Optane runs in two different modes: Memory Mode and App Direct Mode. Memory Mode is what I have been describing, where Optane memory exists “above” the DRAM and acts as a cache. In App Direct Mode, the DRAM and Optane DC Persistent Memory are pooled together to maximize the total capacity. This kind of configuration isn’t ideal for every workload, so it should be used in applications that are not latency-sensitive. The primary use case for Optane, as Intel is promoting it, is Memory Mode.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**
+
+When 3D Xpoint was initially announced a few years back, Intel claimed it was 1,000 times faster than NAND, with 1,000 times the endurance and 10 times the density potential of DRAM. Well, that was a little exaggerated, but it does have some intriguing elements.
+
+Optane memory, when accessed in contiguous 256B chunks (four cache lines at a time), can achieve read speeds of 8.3GB/sec and write speeds of 3.0GB/sec. Compare that with the read/write speeds of roughly 500MB/sec for a SATA SSD, and you can see the performance gain. Optane, remember, is feeding memory, so it caches frequently accessed SSD content.
+
+This is the key takeaway of Optane DC. It will keep very large data sets very close to memory, and hence the CPU, with low latency while at the same time minimizing the need to access the slower storage subsystem, whether it’s SSD or HDD. It now offers the possibility of putting multiple terabytes of data very close to the CPU for much faster access.
+
+## One challenge with Optane memory
+
+The only real challenge is that Optane goes into DIMM slots, which is where memory goes. Now some motherboards come with as many as 16 DIMM slots per CPU socket, but that’s still board real estate that the customer and OEM provider will need to balance out: Optane vs. memory. There are some Optane drives in PCI Express format, which alleviate the memory crowding on the motherboard.
+
+3D Xpoint also offers higher endurance than traditional NAND flash memory due to the way it writes data. Intel promises a five-year warranty with its Optane, while a lot of SSDs offer only three years.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg
+[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html
+[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html
+[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html
+[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
+[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190404 Running LEDs in reverse could cool computers.md b/sources/tech/20190404 Running LEDs in reverse could cool computers.md
new file mode 100644
index 0000000000..2eb3c66c6b
--- /dev/null
+++ b/sources/tech/20190404 Running LEDs in reverse could cool computers.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Running LEDs in reverse could cool computers)
+[#]: via: (https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Running LEDs in reverse could cool computers
+======
+
+### The miniaturization of electronics is reaching its limits in part because of heat management. Many are now aggressively trying to solve the problem. A kind of reverse-running LED is one avenue being explored.
+
+![monsitj / Getty Images][1]
+
+The quest to find more efficient methods for cooling computers is almost as high on scientists’ agendas as the desire to discover better battery chemistries.
+
+More cooling is crucial for reducing costs. It would also allow more powerful processing to take place in smaller spaces, where the limited processing capacity should be spent crunching numbers rather than generating waste heat. It would prevent heat-caused breakdowns, thereby extending component longevity, and it would promote eco-friendly data centers: less heat means less impact on the environment.
+
+Removing heat from microprocessors is one angle scientists have been exploring, and they think they have come up with a simple, but unusual and counter-intuitive, solution. They say that running a variant of a Light Emitting Diode (LED) with its electrodes reversed forces the component to act as if it were at an unusually low temperature. Placed next to warmer electronics, with a nanoscale gap introduced between the two, the reverse-biased LED then sucks out the heat.
+
+**[ Read also:[IDC’s top 10 data center predictions][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+“Once the LED is reverse biased, it began acting as a very low temperature object, absorbing photons,” says Edgar Meyhofer, professor of mechanical engineering at University of Michigan, in a [press release][4] announcing the breakthrough. “At the same time, the gap prevents heat from traveling back, resulting in a cooling effect.”
+
+The researchers say the LED and the adjacent electrical device (in this case a calorimeter, usually used for measuring heat energy) have to be extremely close. They say they’ve been able to demonstrate cooling of six watts per square meter. That’s about the power of sunshine on the earth’s surface, they explain.
+
+Internet of things (IoT) devices and smartphones could be among those electronics that would ultimately benefit from the LED modification. Both kinds of devices require increasing computing power to be squashed into smaller spaces.
+
+“Removing the heat from the microprocessor is beginning to limit how much power can be squeezed into a given space,” the University of Michigan announcement says.
+
+### Materials science and cooling computers
+
+[I’ve written before about new forms of computer cooling][5]. Exotic materials, products of materials science research, are among the ideas being explored. Sodium bismuthide (Na3Bi) could be used in transistor design, the U.S. Department of Energy’s Lawrence Berkeley National Laboratory says. The new substance carries a charge and, importantly, is tunable; however, it doesn’t need to be chilled the way superconductors currently do.
+
+In fact, that’s a problem with superconductors. They unfortunately need more cooling than most electronics: the technology’s electrical resistance can only be eliminated through extreme cooling.
+
+Separately, [researchers in Germany at the University of Konstanz][6] say they soon will have superconductor-driven computers without waste heat. They plan to use electron spin — a new physical dimension in electrons that could create efficiency gains. The method “significantly reduces the energy consumption of computing centers,” the university said in a press release last year.
+
+Another way to reduce heat could be [to replace traditional heatsinks with spirals and mazes][7] embedded on microprocessors. Minuscule channels printed on the chip itself could provide paths for coolant to travel, say scientists from Binghamton University in yet another separate effort.
+
+“The miniaturization of the semiconductor technology is approaching its physical limits,” the University of Konstanz says. Heat management is very much on scientists’ agenda now. It’s “one of the big challenges in miniaturization."
+
+Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-944444446_3x2-100787357-large.jpg
+[2]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
+[3]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[4]: https://news.umich.edu/running-an-led-in-reverse-could-cool-future-computers/
+[5]: https://www.networkworld.com/article/3326831/computers-could-soon-run-cold-no-heat-generated.html
+[6]: https://www.uni-konstanz.de/en/university/news-and-media/current-announcements/news/news-in-detail/Supercomputer-ohne-Abwaerme/
+[7]: https://www.networkworld.com/article/3322956/chip-cooling-breakthrough-will-reduce-data-center-power-costs.html
+[8]: https://www.facebook.com/NetworkWorld/
+[9]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md
new file mode 100644
index 0000000000..f5915aebe7
--- /dev/null
+++ b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why blockchain (might be) coming to an IoT implementation near you)
+[#]: via: (https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Why blockchain (might be) coming to an IoT implementation near you
+======
+
+![MF3D / Getty Images][1]
+
+Companies have found that IoT partners well with a host of other popular enterprise computing technologies of late, and blockchain – the innovative system of distributed trust most famous for underpinning cryptocurrencies – is no exception. Yet while the two phenomena can be complementary in certain circumstances, those expecting an explosion of blockchain-enabled IoT technologies probably shouldn’t hold their breath.
+
+Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes – which can be largely anything with a CPU in it – communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain.
+
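+To make that hash-linking concrete, here is a minimal shell sketch (the file names and "transactions" are hypothetical; only the standard sha256sum and cut utilities are used) of how each block can commit to the contents of the block before it:
+
+```
+# a "genesis" block holding some data
+$ echo "alice pays bob 5" > block0
+
+# block1 starts with the hash of block0, then adds its own data
+$ sha256sum block0 | cut -d' ' -f1 > block1
+$ echo "bob pays carol 2" >> block1
+
+# tampering with block0 changes its hash...
+$ echo "alice pays bob 500" > block0
+
+# ...so it no longer matches the hash recorded in block1
+$ sha256sum block0 | cut -d' ' -f1
+$ head -n1 block1
+```
+
+Real blockchains layer consensus, signatures, and peer-to-peer replication on top, but the tamper-evidence described above comes from exactly this kind of hash chaining.
+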
+**[ Also see[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]**
+
+The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina – Greensboro. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri.
+
+That’s a powerful security technique – absent a bad actor successfully controlling all of the nodes on a given blockchain (the [famous “51% attack][4]”), the data protected by that blockchain can’t be falsified or otherwise fiddled with. So it should be no surprise that the use of blockchain is an attractive option to companies in some corners of the IoT world.
+
+Part of the reason for that, over and above the bare fact of blockchain’s ability to securely distribute trusted information across a network, is its place in the technology stack, according to Jay Fallah, CTO and co-founder of NXMLabs, an IoT security startup.
+
+“Blockchain stands at a very interesting intersection. Computing has accelerated in the last 15 years [in terms of] storage, CPU, etc, but networking hasn’t changed that much until recently,” he said. “[Blockchain]’s not a network technology, it’s not a data technology, it’s both.”
+
+### Blockchain and IoT
+
+Where blockchain makes sense as a part of the IoT world depends on who you speak to and what they are selling, but the closest thing to a general summation may have come from Allison Clift-Jennings, CEO of enterprise blockchain vendor Filament.
+
+“Anywhere where you've got people who are kind of wanting to trust each other, and have very archaic ways of doing it, that is usually a good place to start with use cases,” she said.
+
+One example, culled directly from Filament’s own customer base, is used car sales. Filament’s working with “a major Detroit automaker” to create a trusted-vehicle history platform, based on a device that plugs into the diagnostic port of a used car, pulls information from there, and writes that data to a blockchain. Just like that, there’s an immutable record of a used car’s history, including whether its airbags have ever been deployed, whether it’s been flooded, and so on. No unscrupulous used car lot or duplicitous former owner could change the data, and even unplugging the device would mean that there’s a suspicious blank period in the records.
+
+Most of present-day blockchain IoT implementation is about trust and the validation of data, according to Elvira Wallis, senior vice president and global head of IoT at SAP.
+
+“Most of the use cases that we have come across are in the realm of tracking and tracing items,” she said, giving the example of a farm-to-fork tracking system for high-end foodstuffs, using blockchain nodes mounted on crates and trucks, allowing for the creation of an un-fudgeable record of an item’s passage through transport infrastructure. (e.g., how long has this steak been refrigerated at such-and-such a temperature, how far has it traveled today, and so on.)
+
+### Is using blockchain with IoT a good idea?
+
+Different vendors sell different blockchain-based products for different use cases, which use different implementations of blockchain technology, some of which don’t bear much resemblance to the classic, linear, mined-transaction blockchain used in cryptocurrency.
+
+That means it’s a capability that you’d buy from a vendor for a specific use case, at this point. Few client organizations have the in-house expertise to implement a blockchain security system, according to 451 Research senior analyst Csilla Zsigri.
+
+The idea with any intelligent application of blockchain technology is to play to its strengths, she said, creating a trusted platform for critical information.
+
+“That’s where I see it really adding value, just in adding a layer of trust and validation,” said Zsigri.
+
+Yet while the basic idea of blockchain-enabled IoT applications is fairly well understood, it’s not applicable to every IoT use case, experts agree. Applying blockchain to non-transactional systems – although there are exceptions, including NXM Labs’ blockchain-based configuration product for IoT devices – isn’t usually the right move.
+
+If there isn’t a need to share data between two different parties – as opposed to simply moving data from sensor to back-end – blockchain doesn’t generally make sense, since it doesn’t really do anything for the key value-add present in most IoT implementations today: data analysis.
+
+“We’re still in kind of the early dial-up era of blockchain today,” said Clift-Jennings. “It’s slower than a typical database, it often isn't even readable, it often doesn't have a query engine tied to it. You don't really get privacy, by nature of it.”
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_mf3d_gettyimages-941175690_2400x1600-100788434-large.jpg
+[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[4]: https://bitcoinist.com/51-percent-attack-hackers-steals-18-million-bitcoin-gold-btg-tokens/
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190405 5 open source tools for teaching young children to read.md b/sources/tech/20190405 5 open source tools for teaching young children to read.md
new file mode 100644
index 0000000000..c3a1fe82c8
--- /dev/null
+++ b/sources/tech/20190405 5 open source tools for teaching young children to read.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 open source tools for teaching young children to read)
+[#]: via: (https://opensource.com/article/19/4/early-literacy-tools)
+[#]: author: (Laura B. Janusek https://opensource.com/users/lbjanusek)
+
+5 open source tools for teaching young children to read
+======
+Early literacy apps give kids a foundation in letter recognition,
+alphabet sequencing, word finding, and more.
+![][1]
+
+Anyone who sees a child using a tablet or smartphone observes their seemingly innate ability to scroll through apps and swipe through screens, flexing those "digital native" muscles. According to [Common Sense Media][2], the percentage of US households in which 0- to 8-year-olds have access to a smartphone has grown from 52% in 2011 to 98% in 2017. While the debates around age guidelines and screen time surge, it's hard to deny that children are developing familiarity and skills with technology at an unprecedented rate.
+
+This rise in early technical literacy may be astonishing, but what about _traditional_ literacy, the good old-fashioned ability to read? What does the intersection of early literacy development and early tech use look like? Let's explore some open source tools for early learners that may help develop both of these critical skill sets.
+
+### Balancing risks and rewards
+
+But first, a disclaimer: Guidelines for technology use, especially for young children, are [constantly changing][3]. Organizations like the American Academy of Pediatrics, Common Sense Media, Zero to Three, and PBS Kids are continually conducting research and publishing recommendations. One position that all of these and other organizations can agree on is that plopping a child in front of a screen with unmonitored content for unlimited periods of time is highly inadvisable.
+
+Even setting kids up with educational content or tools for extended periods of time may have risks. And on the flip side, research on the benefits of education technologies is often limited or unavailable. In short, there are many cases in which we don't know for certain if educational technology use at a young age is beneficial, detrimental, or simply neutral.
+
+But if screen time is available to your child or student, it's logical to infer that educational resources would be preferable over simpler pop-the-bubble or slice-the-fruit games or platforms that could house inappropriate content or online predators. While we may not be able to prove that education apps will make a child's test scores soar, we can at least take comfort in their generally being safer and more age-appropriate than the internet at large.
+
+That said, if you're open to exploring early-education technologies, there are many reasons to look to open source options. Open source technologies are not only free but open to collaborative improvement. In many cases, they are created by developers who are educators or parents themselves, and they're a great way to avoid in-app purchases, advertisements, and paid upgrades. Open source programs can often be downloaded and installed on your device and accessed without an internet connection. Plus, the idea of [open source in education][4] is a growing trend, and there are countless resources to [learn more][5] about the concept.
+
+But for now, let's check out some open source tools for early literacy in action!
+
+### Childsplay
+
+![Childsplay screenshot][6]
+
+Let's start simple. [Childsplay][7], licensed under the GPLv2, is the most basic of the resources on this list. It's a compilation of just over a dozen educational games for young learners, four of which are specific to letter recognition, including memory games and an activity where the learner identifies a spoken letter.
+
+### eduActiv8
+
+![eduActiv8 screenshot][8]
+
+[eduActiv8][9] started in 2011 as a personal project for the developer's son, "whose thirst for learning and knowledge inspired the creation of this educational program." It includes activities for building basic math and early literacy skills, including a variety of spelling, matching, and listening activities. Games include filling in missing letters in the alphabet, unscrambling letters to form a word, matching words to images, and completing mazes by connecting letters in the correct order. eduActiv8 was written in [Python][10] and is available under the GPLv3.
+
+### GCompris
+
+![GCompris screenshot][11]
+
+[GCompris][12] is an open source behemoth (licensed under the GPLv3) of early educational activities. A French software engineer started it in 2000, and it now includes over 130 educational games in nearly 20 languages. Tailored for learners under age 10, it includes activities for letter recognition and drawing, alphabet sequencing, vocabulary building, and games like hangman to identify missing letters in words, plus activities for learning braille. It also includes games in math and music, plus classics from tic-tac-toe to chess.
+
+### Feed the Monster
+
+![Feed the Monster screenshot][13]
+
+The quality of the playful "monster" graphics in [Feed the Monster][14] definitely sets it apart from the others on this list, plus it supports nearly 40 languages! The app includes activities for sorting letters to form words, memory games to match words to images, and letter-tracing writing activities. The app is developed by Curious Learning, which states: "We create, localize, distribute, and optimize open source mobile software so every child can learn to read." While Feed the Monster's offerings are geared toward early readers, Curious Learning's roadmap suggests it's headed towards a more robust personalized literacy platform growing on a foundation of research with MIT, Tufts, and Georgia State University.
+
+### Syntax Untangler
+
+![Syntax Untangler screenshot][15]
+
+[Syntax Untangler][16] is the outlier of this group. Developed by a technologist at the University of Wisconsin–Madison under the GPLv2, the application is "particularly designed for training language learners to recognize and parse linguistic features." Examples show the software being used for foreign language learning, but anyone can use it to create language identification games, including games for early literacy activities like letter recognition. It could also be applied to later literacy skills, like identifying parts of speech in complex sentences or literary techniques in poetry or fiction.
+
+### Wrapping up
+
+Access to [literary environments][17] has been shown to impact literacy and attitudes towards reading. Why not strive to create a digital literary environment for our kids by filling our devices with educational technologies, just like our shelves are filled with books?
+
+Now it's your turn! What open source literacy tools have you used? Comment below to share.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/early-literacy-tools
+
+作者:[Laura B. Janusek][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lbjanusek
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa
+[2]: https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-kids-age-zero-to-eight-2017?action
+[3]: https://www.businessinsider.com/smartphone-use-young-kids-toddlers-limits-science-2018-3
+[4]: /article/18/1/best-open-education
+[5]: https://opensource.com/resources/open-source-education
+[6]: https://opensource.com/sites/default/files/uploads/cp_flashcards.gif (Childsplay screenshot)
+[7]: http://www.childsplay.mobi/
+[8]: https://opensource.com/sites/default/files/uploads/eduactiv8.jpg (eduActiv8 screenshot)
+[9]: https://www.eduactiv8.org/
+[10]: /article/17/11/5-approaches-learning-python
+[11]: https://opensource.com/sites/default/files/uploads/gcompris2.png (GCompris screenshot)
+[12]: https://gcompris.net/index-en.html
+[13]: https://opensource.com/sites/default/files/uploads/feedthemonster.png (Feed the Monster screenshot)
+[14]: https://www.curiouslearning.org/
+[15]: https://opensource.com/sites/default/files/uploads/syntaxuntangler.png (Syntax Untangler screenshot)
+[16]: https://courses.dcs.wisc.edu/untangler/
+[17]: http://www.jstor.org/stable/41386459
diff --git a/sources/tech/20190405 File sharing with Git.md b/sources/tech/20190405 File sharing with Git.md
new file mode 100644
index 0000000000..6b51d11600
--- /dev/null
+++ b/sources/tech/20190405 File sharing with Git.md
@@ -0,0 +1,234 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (File sharing with Git)
+[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
+[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+
+File sharing with Git
+======
+SparkleShare is an open source, Git-based, Dropbox-style file sharing
+application. Learn more in our series about little-known uses of Git.
+![][1]
+
+[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at SparkleShare, which uses Git as the backbone for file sharing.
+
+### Git for file sharing
+
+One of the nice things about Git is that it's inherently distributed. It's built to share. Even if you're sharing a repository just with other computers on your own network, Git brings transparency to the act of getting files from a shared location.
+
+As interfaces go, Git is pretty simple. It varies from user to user, but the common incantation when sitting down to get some work done is just **git pull** or maybe the slightly more complex **git pull && git checkout -b my-branch**. Still, for some people, the idea of _entering a command_ into their computer at all is confusing or bothersome. Computers are meant to make life easy, and computers are good at repetitious tasks, and so there are easier ways to share files with Git.
+
+### SparkleShare
+
+The [SparkleShare][3] project is a cross-platform, open source, Dropbox-style file sharing application based on Git. It automates all Git commands, triggering the add, commit, push, and pull processes with the simple act of dragging-and-dropping a file into a specially designated SparkleShare directory. Because it is based on Git, you get fast, diff-based pushes and pulls, and you inherit all the benefits of Git version control and backend infrastructure (like Git hooks). It can be entirely self-hosted, or you can use it with Git hosting services like [GitLab][4], GitHub, Bitbucket, and others. Furthermore, because it's basically just a frontend to Git, you can access your SparkleShare files on devices that may not have a SparkleShare client but do have Git clients.
+
+Just as you get all the benefits of Git, you also get all the usual Git restrictions: It's impractical to use SparkleShare to store hundreds of photos and music and videos because Git is designed and optimized for text. Git certainly has the capability to store large files of binary data but it is designed to track history, so once a file is added to it, it's nearly impossible to completely remove it. This somewhat limits the usefulness of SparkleShare for some people, but it makes it ideal for many workflows, including [calendaring][5].
+
+#### Installing SparkleShare
+
+SparkleShare is cross-platform, with installers for Windows and Mac available from its [website][6]. For Linux, there's a [Flatpak][7] in your software installer, or you can run these commands in a terminal:
+
+
+```
+$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+$ sudo flatpak install flathub org.sparkleshare.SparkleShare
+```
+
+### Creating a Git repository
+
+SparkleShare isn't software-as-a-service (SaaS). You run SparkleShare on your computer to communicate with a Git repository—SparkleShare doesn't store your data. If you don't have a Git repository to sync a folder with yet, you must create one before launching SparkleShare. You have three options: hosted Git, self-hosted Git, or self-hosted SparkleShare.
+
+#### Git hosting
+
+SparkleShare can use any Git repository you can access for storage, so if you have or create an account with GitLab or any other hosting service, it can become the backend for your SparkleShare. For example, the open source [Notabug.org][8] service is a Git hosting service like GitHub and GitLab, but unique enough to prove SparkleShare's flexibility. Creating a new repository differs from host to host depending on the user interface, but all of the major ones follow the same general model.
+
+First, locate the button in your hosting service to create a new project or repository and click on it to begin. Then step through the repository creation process, providing a name for your repository, privacy level (repositories often default to being public), and whether or not to initialize the repository with a README file. Whether you need a README or not, enable an initial README file. Starting a repository with a file isn't strictly necessary, but it forces the Git host to instantiate a **master** branch in the repository, which helps ensure that frontend applications like SparkleShare have a branch to commit and push to. It's also useful for you to see a file, even if it's an almost empty README file, to confirm that you have connected.
+
+![Creating a Git repository][9]
+
+Once you've created a repository, obtain the URL it uses for SSH clones. You can get this URL the same way anyone gets any URL for a Git project: navigate to the page of the repository and look for the **Clone** button or field.
+
+![Cloning a URL on GitHub][10]
+
+Cloning a GitHub URL.
+
+![Cloning a URL on GitLab][11]
+
+Cloning a GitLab URL.
+
+This is the address SparkleShare uses to reach your data, so make note of it. Your Git repository is now configured.
+
+#### Self-hosted Git
+
+You can use SparkleShare to access a Git repository on any computer you have access to. No special setup is required, aside from a bare Git repository. However, if you want to give access to your Git repository to anyone else, then you should run a Git manager like [Gitolite][12] or SparkleShare's own Dazzle server to help you manage SSH keys and accounts. At the very least, create a user specific to Git so that users with access to your Git repository don't also automatically gain access to the rest of your server.
+
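+If you don't already have such a user, creating one takes a moment. Here's a minimal sketch (adjust to your distribution's conventions; once setup is complete, you can tighten the account's shell to **git-shell** to block interactive logins):
+
+```
+# create a dedicated account for serving Git repositories
+$ sudo useradd --create-home git
+$ sudo passwd git
+```
+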
+Log into your server as the Git user (or yourself, if you're very good at managing user and group permissions) and create a repository:
+
+
+```
+$ mkdir ~/sparkly.git
+$ cd ~/sparkly.git
+$ git init --bare .
+```
+
+Your Git repository is now configured.
+
+#### Dazzle
+
+SparkleShare's developers provide a Git management system called [Dazzle][13] to help you self-host Git repositories.
+
+On your server, download the Dazzle application to some location in your path:
+
+
+```
+$ curl https://raw.githubusercontent.com/hbons/Dazzle/master/dazzle.sh \
+--output ~/bin/dazzle
+$ chmod +x ~/bin/dazzle
+```
+
+Dazzle sets up a user specific to Git and SparkleShare and also implements access rights based on keys generated by the SparkleShare application. For now, just set up a project:
+
+
+```
+$ dazzle create sparkly
+```
+
+Your server is now configured as a SparkleShare host.
+
+### Configuring SparkleShare
+
+When you launch SparkleShare for the first time, you are prompted to configure what server you want SparkleShare to use for storage. This process may feel like a first-run setup wizard, but it's actually the usual process for setting up a new shared location within SparkleShare. Unlike many shared drive applications, with SparkleShare you can have several locations configured at once. The first shared location you configure isn't any more significant than any shared location you may set up later, and you're not signing up with SparkleShare or any other service. You're just pointing SparkleShare at a Git repository so that it knows what to keep your first SparkleShare folder in sync with.
+
+On the first screen, identify yourself by whatever means you want on record in the Git commits that SparkleShare makes on your behalf. You can use anything, even fake information that resolves to nothing. It's purely for the commit messages, which you may never even see if you have no interest in reviewing the Git backend processes.
+
+The next screen prompts you to choose your hosting type. If you are using GitLab, GitHub, Planio, or Bitbucket, then select the appropriate one. For anything else, select **Own server**.
+
+![Choosing a Sparkleshare host][14]
+
+At the bottom of this screen, you must enter the SSH clone URL. If you're self-hosting, the address is something like **ssh://git@example.com** and the remote path is the absolute path to the Git repository you created for this purpose.
+
+Based on my self-hosted examples above, the address to my imaginary server is **ssh://git@example.com:22122** (the **:22122** indicates a nonstandard SSH port) and the remote path is **/home/git/sparkly.git**.
+
+If I use my Notabug.org account instead, the address from the example above is **[git@notabug.org][15]** and the path is **seth/sparkly.git**.
+
+SparkleShare will fail the first time it attempts to connect to the host because you have not yet copied the SparkleShare client ID (an SSH key specific to the SparkleShare application) to the Git host. This is expected, so don't cancel the process. Leave the SparkleShare setup window open and obtain the client ID from the SparkleShare icon in your system tray. Then copy the client ID to your clipboard so you can add it to your Git host.
+
+![Getting the client ID from Sparkleshare][16]
+
+#### Adding your client ID to a hosted Git account
+
+Minor UI differences aside, adding an SSH key (which is all the client ID is) is basically the same process on any hosting service. In your Git host's web dashboard, navigate to your user settings and find the **SSH Keys** category. Click the **Add New Key** button (or similar) and paste the contents of your SparkleShare client ID.
+
+![Adding an SSH key][17]
+
+Save the key. If you want someone else, such as collaborators or family members, to be able to access this same repository, they must provide you with their SparkleShare client ID so you can add it to your account.
+
+#### Adding your client ID to a self-hosted Git account
+
+A SparkleShare client ID is just an SSH key, so copy and paste it into your Git user's **~/.ssh/authorized_keys** file.
+
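+For example, logged in as the Git user on the server (a sketch; **clientid.pub** is a stand-in for wherever you saved the copied ID):
+
+```
+# OpenSSH ignores keys kept in files or directories with loose permissions
+$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
+$ cat clientid.pub >> ~/.ssh/authorized_keys
+$ chmod 600 ~/.ssh/authorized_keys
+```
+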
+#### Adding your client ID with Dazzle
+
+If you are using Dazzle to manage your SparkleShare projects, add a client ID with this command:
+
+
+```
+$ dazzle link
+```
+
+When Dazzle prompts you for the ID, paste in the client ID found in the SparkleShare menu.
+
+### Using SparkleShare
+
+Once you've added your client ID to your Git host, click the **Retry** button in the SparkleShare window to finish setup. When it's finished cloning your repository, you can close the SparkleShare setup window, and you'll find a new **SparkleShare** folder in your home directory. If you set up a Git repository with a hosting service and chose to include a README or license file, you can see them in your SparkleShare directory.
+
+![Sparkleshare file manager][18]
+
+Otherwise, there are some hidden directories, which you can see by revealing hidden directories in your file manager.
+
+![Showing hidden files in GNOME][19]
+
+You use SparkleShare the same way you use any directory on your computer: you put files into it. Anytime a file or directory is placed into a SparkleShare folder, it's copied in the background to your Git repository.
+
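+Because each SparkleShare folder is an ordinary Git clone underneath, you can inspect the commits SparkleShare makes on your behalf with everyday Git commands (a sketch, assuming your project is named **sparkly**):
+
+```
+$ cd ~/SparkleShare/sparkly
+$ git log --oneline -5
+```
+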
+#### Excluding certain files
+
+Since Git is designed to remember _everything_ , you may want to exclude specific file types from ever being recorded. There are a few reasons to manage excluded files. By defining files that are off limits for SparkleShare, you can avoid accidental copying of large files. You can also design a scheme for yourself that enables you to store files that logically belong together (MIDI files with their **.flac** exports, for instance) in one directory, but manually back up the large files yourself while letting SparkleShare back up the text-based files.
+
+If you can't see hidden files in your system's file manager, then reveal them. Navigate to your SparkleShare folder, then to the directory representing your repository, locate a file called **.gitignore**, and open it in a text editor. You can enter file extensions or file names, one per line, into **.gitignore**, and any file matching what you list will be (as the file name suggests) ignored.
+
+
+```
+Thumbs.db
+$RECYCLE.BIN/
+.DS_Store
+._*
+.fseventsd
+.Spotlight-V100
+.Trashes
+.directory
+.Trash-*
+*.wav
+*.ogg
+*.flac
+*.mp3
+*.m4a
+*.opus
+*.jpg
+*.png
+*.mp4
+*.mov
+*.mkv
+*.avi
+*.pdf
+*.djvu
+*.epub
+*.ods
+*.odt
+*.cbz
+```
+
+You know the types of files you encounter most often, so concentrate on the ones most likely to sneak their way into your SparkleShare directory. If you want to exercise a little overkill, you can find good collections of **.gitignore** files on Notabug.org and also on the internet at large.
+
+With those entries in your **.gitignore** file, you can place large files that you don't want sent to your Git host in your SparkleShare directory, and SparkleShare will ignore them entirely. Of course, that means it's up to you to make sure they get onto a backup or distributed to your SparkleShare collaborators through some other means.
+
+### Automation
+
+[Automation][20] is part of the silent agreement we have with computers: they do the repetitious, boring stuff that we humans either aren't very good at doing or aren't very good at remembering. SparkleShare is a nice, simple way to automate the routine distribution of data. It isn't right for every Git repository, by any means. It doesn't have an interface for advanced Git functions; it doesn't have a pause button or a manual override. And that's OK because its scope is intentionally limited. SparkleShare does what SparkleShare sets out to do, it does it well, and it's one Git repository you won't have to think about.
+
+If you have a use for that kind of steady, invisible automation, give SparkleShare a try.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/file-sharing-git
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
+[2]: https://git-scm.com/
+[3]: http://www.sparkleshare.org/
+[4]: http://gitlab.com
+[5]: https://opensource.com/article/19/4/calendar-git
+[6]: http://sparkleshare.org
+[7]: /business/16/8/flatpak
+[8]: http://notabug.org
+[9]: https://opensource.com/sites/default/files/uploads/git-new-repo.jpg (Creating a Git repository)
+[10]: https://opensource.com/sites/default/files/uploads/github-clone-url.jpg (Cloning a URL on GitHub)
+[11]: https://opensource.com/sites/default/files/uploads/gitlab-clone-url.jpg (Cloning a URL on GitLab)
+[12]: http://gitolite.org
+[13]: https://github.com/hbons/Dazzle
+[14]: https://opensource.com/sites/default/files/uploads/sparkleshare-host.jpg (Choosing a Sparkleshare host)
+[15]: mailto:git@notabug.org
+[16]: https://opensource.com/sites/default/files/uploads/sparkleshare-clientid.jpg (Getting the client ID from Sparkleshare)
+[17]: https://opensource.com/sites/default/files/uploads/git-ssh-key.jpg (Adding an SSH key)
+[18]: https://opensource.com/sites/default/files/uploads/sparkleshare-file-manager.jpg (Sparkleshare file manager)
+[19]: https://opensource.com/sites/default/files/uploads/gnome-show-hidden-files.jpg (Showing hidden files in GNOME)
+[20]: /downloads/ansible-quickstart
diff --git a/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md b/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md
new file mode 100644
index 0000000000..6ee1633f9d
--- /dev/null
+++ b/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md
@@ -0,0 +1,190 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Authenticate a Linux Desktop to Your OpenLDAP Server)
+[#]: via: (https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+How to Authenticate a Linux Desktop to Your OpenLDAP Server
+======
+
+![][1]
+
+[Creative Commons Zero][2]
+
+In this final part of our three-part series, we reach the conclusion everyone has been waiting for. The ultimate goal of using LDAP (in many cases) is enabling desktop authentication. With this setup, admins are better able to manage and control user accounts and logins. After all, Active Directory admins shouldn’t have all the fun, right?
+
+With OpenLDAP, you can manage your users on a centralized directory server and connect the authentication of every Linux desktop on your network to that server. And since you already have [OpenLDAP][3] and the [LDAP Account Manager][4] set up and running, the hard work is out of the way. At this point, there are just a few quick steps to enable those Linux desktops to authenticate against that server.
+
+I’m going to walk you through this process, using the Ubuntu Desktop 18.04 to demonstrate. If your desktop distribution is different, you’ll only have to modify the installation steps, as the configurations should be similar.
+
+**What You’ll Need**
+
+Obviously you’ll need the OpenLDAP server up and running. You’ll also need user accounts created on the LDAP directory tree, and a user account on the client machines with sudo privileges. With those pieces out of the way, let’s get those desktops authenticating.
+
+**Installation**
+
+The first thing we must do is install the necessary client software. This will be done on all the desktop machines that require authentication with the LDAP server. Open a terminal window on one of the desktop machines and issue the following command:
+
+```
+sudo apt-get install libnss-ldap libpam-ldap ldap-utils nscd -y
+```
+
+During the installation, you will be asked to enter the LDAP server URI (**Figure 1**).
+
+![][5]
+
+Figure 1: Configuring the LDAP server URI for the client.
+
+[Used with permission][6]
+
+The LDAP URI is the address of the OpenLDAP server, in the form ldap://SERVER_IP (Where SERVER_IP is the IP address of the OpenLDAP server). Type that address, tab to OK, and press Enter on your keyboard.
+
+In the next window (**Figure 2**), you are required to enter the Distinguished Name of the OpenLDAP server. This will be in the form dc=example,dc=com.
+
+![][7]
+
+Figure 2: Configuring the DN of your OpenLDAP server.
+
+[Used with permission][6]
+
+If you’re unsure of what your OpenLDAP DN is, log into the LDAP Account Manager, click Tree View, and you’ll see the DN listed in the left pane (**Figure 3**).
+
+![][8]
+
+Figure 3: Locating your OpenLDAP DN with LAM.
+
+[Used with permission][6]
+
+The next few configuration windows will require the following information:
+
+ * Specify LDAP version (select 3)
+
+ * Make local root Database admin (select Yes)
+
+ * Does the LDAP database require login (select No)
+
+ * Specify the LDAP admin account suffix (this will be in the form cn=admin,dc=example,dc=com)
+
+ * Specify password for LDAP admin account (this will be the password for the LDAP admin user)
+
+
+
+
+Once you’ve answered the above questions, the installation of the necessary bits is complete.
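+
+With the client bits installed (including ldap-utils), you can verify the URI and DN you just supplied before moving on. A quick sketch; substitute your server's IP address and your own DN:
+
+```
+# anonymous simple bind; a reachable, correctly configured server returns entries
+$ ldapsearch -x -H ldap://SERVER_IP -b dc=example,dc=com
+```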
+
+**Configuring the LDAP Client**
+
+Now it’s time to configure the client to authenticate against the OpenLDAP server. This is not nearly as hard as you might think.
+
+First, we must configure nsswitch. Open the configuration file with the command:
+
+```
+sudo nano /etc/nsswitch.conf
+```
+
+In that file, add ldap at the end of the following lines:
+
+```
+passwd: compat systemd
+group: compat systemd
+shadow: files
+```
+
+These configuration entries should now look like:
+
+```
+passwd: compat systemd ldap
+group: compat systemd ldap
+shadow: files ldap
+```
+
+At the end of this section, add the following line:
+
+```
+gshadow: files
+```
+
+The entire section should now look like:
+
+```
+passwd: compat systemd ldap
+group: compat systemd ldap
+shadow: files ldap
+gshadow: files
+```
+
+Save and close that file.
+
+Now we need to configure PAM for LDAP authentication. Issue the command:
+
+```
+sudo nano /etc/pam.d/common-password
+```
+
+Remove use_authtok from the following line:
+
+```
+password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
+```
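+
+After the edit, the line should read:
+
+```
+password [success=1 user_unknown=ignore default=die] pam_ldap.so try_first_pass
+```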
+
+Save and close that file.
+
+There’s one more PAM configuration to take care of. Issue the command:
+
+```
+sudo nano /etc/pam.d/common-session
+```
+
+At the end of that file, add the following:
+
+```
+session optional pam_mkhomedir.so skel=/etc/skel umask=077
+```
+
+The above line will create the default home directory (upon first login) on the Linux desktop for any LDAP user who doesn't have a local account on the machine. Save and close that file.
+
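+Before rebooting, it's worth checking that NSS can now see directory accounts (a sketch; **jdoe** stands in for any user that exists in your LDAP tree):
+
+```
+# restart the name service cache daemon, then look up an LDAP user
+$ sudo systemctl restart nscd
+$ getent passwd jdoe
+```
+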
+**Logging In**
+
+Reboot the client machine. When the login screen is presented, attempt to log in with a user on your OpenLDAP server. The user account should authenticate and present you with a desktop. You are good to go.
+
+Make sure to configure every single Linux desktop on your network in the same fashion, so they too can authenticate against the OpenLDAP directory tree. By doing this, any user in the tree will be able to log into any configured Linux desktop machine on your network.
+
+You now have an OpenLDAP server running, with the LDAP Account Manager installed for easy account management, and your Linux clients authenticating against that LDAP server.
+
+And that, my friends, is all there is to it.
+
+We’re done.
+
+Keep using Linux.
+
+It’s been an honor.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cyber-3400789_1280_0.jpg?itok=YiinDnTw
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804
+[4]: https://www.linux.com/blog/learn/2019/3/how-install-ldap-account-manager-ubuntu-server-1804
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_1.jpg?itok=DgYT8iY1
+[6]: /LICENSES/CATEGORY/USED-PERMISSION
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_2.jpg?itok=CXITs7_J
+[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_3.jpg?itok=HmhiYj7J
diff --git a/sources/tech/20190406 Run a server with Git.md b/sources/tech/20190406 Run a server with Git.md
new file mode 100644
index 0000000000..650d5672af
--- /dev/null
+++ b/sources/tech/20190406 Run a server with Git.md
@@ -0,0 +1,240 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Run a server with Git)
+[#]: via: (https://opensource.com/article/19/4/server-administration-git)
+[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth/users/seth)
+
+Run a server with Git
+======
+Thanks to Gitolite, you can manage a Git server with Git. Learn how in
+our series about little-known Git uses.
+![computer servers processing data][1]
+
+As I've tried to demonstrate in this series leading up to Git's 14th anniversary on April 7, [Git][2] can do a wide range of things beyond tracking source code. Believe it or not, Git can even manage your Git server, so you can, more or less, run a Git server with Git itself.
+
+Of course, this involves a lot of components beyond everyday Git, not the least of which is [Gitolite][3], the backend application managing the fiddly bits that you configure using Git. The great thing about Gitolite is that, because it uses Git as its frontend interface, it's easy to integrate Git server administration within the rest of your Git-based workflow. Gitolite provides precise control over who can access specific repositories on your server and what permissions they have. You can manage that sort of thing yourself with the usual Linux system tools, but it takes a lot of work if you have more than just one or two repos across a half-dozen users.
+
+Gitolite's developers have done the hard work to make it easy for you to provide many users with access to your Git server without giving them access to your entire environment—and you can do it all with Git.
+
+What Gitolite is _not_ is a GUI admin and user panel. That sort of experience is available with the excellent [Gitea][4] project, but this article focuses on the simple elegance and comforting familiarity of Gitolite.
+
+### Install Gitolite
+
+Assuming your Git server runs Linux, you can install Gitolite with your package manager ( **yum** on CentOS and RHEL, **apt** on Debian and Ubuntu, **zypper** on OpenSUSE, and so on). For example, on RHEL:
+
+
+```
+$ sudo yum install gitolite3
+```
+
+Many repositories still have older versions of Gitolite for legacy support, but the current version is version 3.
+
+You must have passwordless SSH access to your server. You can use a password to log in if you prefer, but Gitolite relies on SSH keys, so you must configure the option to log in with keys. If you don't know how to configure a server for passwordless SSH access, go learn how to do that first (the [Setting up SSH key authentication][5] section of Steve Ovens's Ansible article explains it well). It's an essential part of secure server administration—as well as of running Gitolite.
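+
+If you need a quick reminder, a minimal sketch with OpenSSH looks like this (the user and host names are placeholders for your own):
+
+```
+$ ssh-keygen -t ed25519
+$ ssh-copy-id you@example.com
+```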
+
+### Configure a Git user
+
+Without Gitolite, if a person requests access to a Git repository you host on a server, you have to provide that person with a user account. Git provides a special shell, the **git-shell** , which is an ultra-specific shell that performs only Git tasks. This lets you have users who can access your server only through the filter of a very limited shell environment.
+
+That solution works, but it usually means a user gains access to all repositories on your server unless you have a very good schema for group permissions and maintain those permissions strictly whenever a new repository is created. It also requires a lot of manual configuration at the system level, an area usually reserved for a specific tier of sysadmins and not necessarily the person usually in charge of Git repositories.
+
+Gitolite sidesteps this issue entirely by designating one username for every person who needs access to any repository. By default, the username is **git** , and because Gitolite's documentation assumes that's what is used, it's a good default to keep when you're learning the tool. It's also a well-known convention for anyone who's ever used GitLab or GitHub or any other Git hosting service.
+
+Gitolite calls this user the _hosting user_. Create an account on your server to act as the hosting user (I'll stick with **git** because that's the convention):
+
+
+```
+$ sudo adduser --create-home git
+```
+
+For you to control the **git** user account, it must have a valid public SSH key that belongs to you. You should already have this set up, so **cp** your public key ( _not your private key_ ) to the **git** user's home directory:
+
+
+```
+$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
+$ sudo chown git:git /home/git/id_ed25519.pub
+```
+
+If your public key doesn't end with the extension **.pub** , Gitolite will not use it, so rename the file accordingly. Change to that user account to run Gitolite's setup:
+
+
+```
+$ sudo su - git
+$ gitolite setup --pubkey id_ed25519.pub
+```
+
+After the setup script runs, the **git** user's home directory will have a **repositories** directory, which (for now) contains the directories **gitolite-admin.git** and **testing.git**. That's all the setup the server requires, so log out.
+
+### Use Gitolite
+
+Managing Gitolite is a matter of editing text files in a Git repository, specifically **gitolite-admin.git**. You won't SSH into your server for Git administration, and Gitolite encourages you not to try. The repositories you and your users store on the Gitolite server are _bare_ repositories, so it's best to stay out of them.
+
+
+```
+$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
+$ cd gitolite-admin.git
+$ ls -1
+conf
+keydir
+```
+
+The **conf** directory in this repository contains a file called **gitolite.conf**. Open it in a text editor or use **cat** to view its contents:
+
+
+```
+repo gitolite-admin
+RW+ = id_ed25519
+
+repo testing
+RW+ = @all
+```
+
+You may have an idea of what this configuration file does: **gitolite-admin** represents this repository, and the owner of the **id_ed25519** key has read, write, and Git administrative privileges. In other words, rather than mapping users to normal local Unix users (because all your users log in using the **git** hosting user identity), Gitolite maps users to SSH keys listed in the **keydir** directory.
+
+The **testing.git** repository gives full permissions to everyone with access to the server using special group notation.
+
+#### Add users
+
+If you want to add a user called **alice** to your Git server, the person Alice must send you her public SSH key. Gitolite uses whatever is to the left of the **.pub** extension as the identifier for your Git users. Rather than using the default key name values, give keys a name indicative of the key owner. If a user has more than one key (e.g., one for her laptop, one for her desktop), you can use subdirectories to avoid file name collisions. For instance, the key Alice uses from her laptop might come to you as the default **id_rsa.pub** , so rename it **alice.pub** or similar (or let the users name the key according to their local user accounts on their computers), and place it into the **gitolite-admin.git/keydir/work/laptop/** directory. If she sends you another key from her desktop, name it **alice.pub** (the same as the previous one) and add it to **keydir/work/desktop/**. Another key might go into **keydir/home/desktop/** , and so on. Gitolite recursively searches **keydir** for a **.pub** file matching a repository "user" and treats any match as the same identity.
+
+When you add keys to the **keydir** directory, you must commit them back to your server. This is such an easy thing to forget that there's a real argument here for using an automated Git application like [**Sparkleshare**][7] so any change is committed back to your Gitolite admin immediately. The first time you forget to commit and push—and waste three hours of your time and your user's time troubleshooting—you'll see that Gitolite is the perfect justification for using Sparkleshare.
+
+
+```
+$ git add keydir
+$ git commit -m 'added alice.pub'
+$ git push origin HEAD
+```
+
+Alice, by default, gains access to the **testing.git** repository, so she can test connectivity and functionality with that.
+
+#### Set permissions
+
+As with users, directory permissions and groups are abstracted away from the normal Unix tools you might be used to (or find information about online). Permissions to projects are granted in the **gitolite.conf** file in the **gitolite-admin.git/conf** directory. There are four levels of permissions:
+
+ * **R** allows read-only. A user with **R** permissions on a repository may clone it, and that's all.
+ * **RW** allows a user to perform a fast-forward push of a branch, create new branches, and create new tags. More or less, this one feels like a "normal" Git repository to most users.
+ * **RW+** allows Git actions that are potentially destructive. A user can perform normal fast-forward pushes, as well as rewind pushes, do rebases, and delete branches and tags. This may or may not be something you want to grant to all contributors on a project.
+ * **-** explicitly denies access to a repository. This is essentially the same as a user not being listed in the repository's configuration.
+
+
+
+Create a new repository or modify an existing repository's permissions by adjusting **gitolite.conf**. For instance, to give Alice permissions to administrate a new repository called **widgets.git** :
+
+
+```
+repo gitolite-admin
+RW+ = id_ed25519
+
+repo testing
+RW+ = @all
+
+repo widgets
+RW+ = alice
+```
+
+Now Alice—and Alice alone—can clone the repo:
+
+
+```
+[alice]$ git clone git@example.com:widgets.git
+Cloning into 'widgets'...
+warning: You appear to have cloned an empty repository.
+```
+
+On her initial push, Alice must use the **-u** option to send her branch to the empty repository (as she would have to do with any Git host).
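+
+For example, assuming her branch is **master** and the remote has the default name **origin** :
+
+```
+[alice]$ git push -u origin master
+```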
+
+To make user management easier, you can define groups of repositories:
+
+
+```
+@qtrepo = widgets
+@qtrepo = games
+
+repo gitolite-admin
+RW+ = id_ed25519
+
+repo testing
+RW+ = @all
+
+repo @qtrepo
+RW+ = alice
+```
+
+Just as you can create group repositories, you can group users. One user group exists by default: **@all**. As you might expect, it includes all users, without exception. You can create your own:
+
+
+```
+@qtrepo = widgets
+@qtrepo = games
+
+@developers = alice bob
+
+repo gitolite-admin
+RW+ = id_ed25519
+
+repo testing
+RW+ = @all
+
+repo @qtrepo
+RW+ = @developers
+```
+
+As with adding or modifying key files, any change to the **gitolite.conf** file must be committed and pushed to take effect.
+
+### Create a repository
+
+By default, Gitolite assumes repository creation happens from the top down. For instance, a project manager with access to the Git server creates a project repository and, through the Gitolite administration repo, adds developers.
+
+In practice, you might prefer to grant users permission to create repositories. Gitolite calls these "wild repos" (I'm not sure whether that's commentary on how the repos come into being or a reference to the wildcard characters required by the configuration file to let it happen). Here's an example:
+
+
+```
+@managers = alice bob
+
+repo foo/CREATOR/[a-z]..*
+C = @managers
+RW+ = CREATOR
+RW = WRITERS
+R = READERS
+```
+
+The first line defines a group of users: the group is called **@managers** and contains users **alice** and **bob**. The next line sets up a wildcard allowing repositories that do not yet exist to be created in a directory called **foo** followed by a subdirectory named for the user creating the repo. For example:
+
+
+```
+[alice]$ git clone git@example.com:foo/alice/cool-app.git
+Cloning into 'cool-app'...
+Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.git
+warning: You appear to have cloned an empty repository.
+```
+
+There are some mechanisms for the creator of a wild repo to define who can read and write to their repository, but they're limited in scope. For the most part, Gitolite assumes that a specific set of users governs project permission. One solution is to grant all users access to **gitolite-admin** using a Git hook to require manager approval to merge changes into the master branch.
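+
+As a hedged sketch of one alternative that needs no custom hook at all: Gitolite permissions can be scoped to branches by placing a refex between the permission and the equals sign, so you could open up **gitolite-admin** to everyone while reserving **master** for the **@managers** group defined above (check the documentation before relying on this):
+
+```
+repo gitolite-admin
+RW+ master = @managers
+RW dev/ = @all
+R = @all
+```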
+
+### Learn more
+
+Gitolite has many more features than what this introductory article covers, so try it out. The [documentation][8] is excellent, and once you read through it, you can customize your Gitolite server to provide your users whatever level of control you are comfortable with. Gitolite is a low-maintenance, simple system that you can install, set up, and then more or less forget about.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/server-administration-git
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
+[2]: https://git-scm.com/
+[3]: http://gitolite.com
+[4]: http://gitea.io
+[5]: Setting%20up%20SSH%20key%20authentication
+[6]: mailto:git@example.com
+[7]: https://opensource.com/article/19/4/file-sharing-git
+[8]: http://gitolite.com/gitolite/quick_install.html
diff --git a/sources/tech/20190407 Manage multimedia files with Git.md b/sources/tech/20190407 Manage multimedia files with Git.md
new file mode 100644
index 0000000000..81bc0d02ca
--- /dev/null
+++ b/sources/tech/20190407 Manage multimedia files with Git.md
@@ -0,0 +1,247 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Manage multimedia files with Git)
+[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
+[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+
+Manage multimedia files with Git
+======
+Learn how to use Git to track large multimedia files in your projects in
+the final article in our series on little-known uses of Git.
+![video editing dashboard][1]
+
+Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
+
+In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
+
+### The problem with managing multimedia files with Git
+
+It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git:
+
+
+```
+$ du -hs
+108K .
+$ cp ~/photos/dandelion.tif .
+$ git add dandelion.tif
+$ git commit -m 'added a photo'
+[master (root-commit) fa6caa7] added a photo
+1 file changed, 0 insertions(+), 0 deletions(-)
+create mode 100644 dandelion.tif
+$ du -hs
+1.8M .
+```
+
+Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
+
+
+```
+$ git rm dandelion.tif
+$ git commit -m 'deleted a photo'
+$ du -hs
+828K .
+```
+
+You can see the problem here: removing a large file after it's been committed leaves the repository roughly eight times larger than its original, barren state (it went from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
+
+The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
+
+### Git-portal
+
+In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
+
+So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
+
+![Graphic showing relationship between art assets and Git][2]
+
+A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
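+
+Conceptually, the swap is nothing more exotic than what you could do by hand (the file name here is hypothetical; the hooks simply automate this):
+
+```
+$ mv opening-title.mp4 _portal/opening-title.mp4
+$ ln -s _portal/opening-title.mp4 opening-title.mp4
+```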
+
+Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
+
+Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
+
+#### Install Git-portal
+
+There are RPM packages for Git-portal that you can download and install.
+
+Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
+
+
+```
+$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
+$ cd git-portal.clone
+$ ./configure
+$ make
+$ sudo make install
+```
+
+#### Use Git-portal
+
+Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
+
+
+```
+$ mkdir bigproject.git
+$ cd !$
+$ git init
+$ git-portal init
+```
+
+Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
+
+Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
+
+
+```
+$ ls -1
+_portal
+song.1.qtr
+song.qtr
+song-Track_1-1.mid
+song-Track_1-3.mid
+song-Track_2-1.mid
+$ git add song*qtr
+$ git-portal add song-Track*mid
+$ git add song-Track*mid
+```
+
+If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected:
+
+
+```
+$ ls -lG
+[...] _portal/
+[...] song.1.qtr
+[...] song.qtr
+[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
+[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
+[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
+```
+
+As with Git, you can also add a directory of files:
+
+
+```
+$ cp -r ~/synth-presets/yoshimi .
+$ git-portal add yoshimi
+Directories cannot go through the portal. Sending files instead.
+$ ls -lG yoshimi
+[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
+```
+
+Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** :
+
+
+```
+$ ls
+_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
+song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
+$ git-portal rm song-Track_1-3.mid
+rm 'song-Track_1-3.mid'
+$ ls _portal/
+song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
+```
+
+If you forget to use Git-portal, then you have to remove the portal file manually:
+
+
+```
+$ git rm song-Track_1-1.mid
+rm 'song-Track_1-1.mid'
+$ ls _portal/
+song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
+$ trash _portal/song-Track_1-1.mid
+```
+
+Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
+
+
+```
+$ mkdir foo
+$ mv yoshimi foo
+$ git-portal status
+bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
+bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
+```
+
+If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
+
+#### Add Git-portal remotes
+
+Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
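+
+Conceptually, the synchronization amounts to something like the following rsync invocation (the exact flags Git-portal uses may differ; the remote path matches the example below):
+
+```
+$ rsync -av _portal/ seth@example.com:/home/seth/git/bigproject_portal/
+```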
+
+If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
+
+
+```
+$ git remote add origin [git@gitdawg.com][6]:seth/bigproject.git
+$ git remote -v
+origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
+origin [git@gitdawg.com][6]:seth/bigproject.git (push)
+```
+
+The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
+
+
+```
+$ git remote add _portal [seth@example.com][7]:/home/seth/git/bigproject_portal
+$ git remote -v
+origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
+origin [git@gitdawg.com][6]:seth/bigproject.git (push)
+_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (fetch)
+_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (push)
+```
+
+You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]** , or you can use **rrsync** (i.e., restricted rsync).
+
+Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
+
+
+```
+$ git push origin HEAD
+master destination detected
+Syncing _portal content...
+sending incremental file list
+sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
+total size is 60,358,015 speedup is 6,474.10
+Syncing _portal content to example.com:/home/seth/git/bigproject_portal
+```
+
+If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
+
+### Other options
+
+If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
+
+A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
+
+If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/manage-multimedia-files-git
+
+作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
+[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
+[3]: http://gitlab.com/slackermedia/git-portal.git
+[4]: https://www.apress.com/gp/book/9781484241691
+[5]: http://mixedsignals.ml
+[6]: mailto:git@gitdawg.com
+[7]: mailto:seth@example.com
+[8]: https://opensource.com/article/19/4/server-administration-git
+[9]: https://git-lfs.github.com/
+[10]: https://git-annex.branchable.com/
+[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
+[12]: https://git-annex.branchable.com/walkthrough/
diff --git a/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md b/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md
new file mode 100644
index 0000000000..10e073a029
--- /dev/null
+++ b/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md
@@ -0,0 +1,123 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What it means to be Cloud-Native approach — the CNCF way)
+[#]: via: (https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923)
+[#]: author: (Sonu Jose https://medium.com/@sonujose993)
+
+What it means to be Cloud-Native approach — the CNCF way
+======
+
+
+
+When discussing digital transformation and modern application development, cloud-native is a term that comes up frequently. But what does it actually mean to be cloud-native? This blog is all about giving a good understanding of the cloud-native approach and the ways to achieve it, the CNCF way.
+
+Michael Dell once said that “the cloud isn’t a place, it’s a way of doing IT”. He was right, and the same can be said of cloud-native.
+
+Cloud-native is an approach to building and running applications that exploit the advantages of the cloud computing delivery model. Cloud-native is about how applications are created and deployed, not where. … It’s appropriate for both public and private clouds.
+
+Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.
+
+### CNCF (Cloud native computing foundation)
+
+Google has been using containers for many years, and it led the Kubernetes project, which is a leading container orchestration platform. But Google alone couldn’t really change the broad perspective in the industry around modern applications, so there was a huge need for industry leaders to come together and solve the major problems facing the modern approach. In order to achieve this broader vision, Google donated Kubernetes to the Cloud Native Computing Foundation, and this led to the birth of the CNCF in 2015.
+
+
+
+The Cloud Native Computing Foundation was created within the Linux Foundation to build and manage platforms and solutions for modern application development. It really is a home for amazing projects that enable modern application development. CNCF defines cloud-native as “scalable applications” running in “modern dynamic environments” that use technologies such as containers, microservices, and declarative APIs. Kubernetes is the world’s most popular container-orchestration platform and the first CNCF project.
+
+### The approach…
+
+CNCF created a trail map to help people better understand the cloud-native approach, and the discussion in this article is based on it. The newest version of the landscape is available at https://landscape.cncf.io/
+
+The Cloud Native Trail Map is CNCF’s recommended path through the cloud-native landscape. It doesn’t define one specific path to digital transformation; rather, there are many possible paths you can follow to align with this concept based on your business scenario. It is just a trail to simplify the journey to cloud-native.
+
+
+Let's start discussing the steps defined in this trail map.
+
+### 1. CONTAINERIZATION
+
+![][1]
+
+You can’t do cloud-native without containerizing your application. It doesn’t matter what size the application is; any type of application will do. **A container is a standard unit of software that packages up the code and all its dependencies** so the application runs quickly and reliably from one computing environment to another. Docker is the most popular platform for containerization. A **Docker container** image is a lightweight, standalone, executable package of software that includes everything needed to run an application.
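+
+For example, with a Dockerfile already written for your application, containerizing and running it is a matter of two commands (the image name and port are hypothetical):
+
+```
+$ docker build -t myapp:1.0 .
+$ docker run -d -p 8080:8080 myapp:1.0
+```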
+
+### 2. CI/CD
+
+![][2]
+
+Set up Continuous Integration/Continuous Delivery (CI/CD) so that changes to your source code automatically result in a new container being built, tested, and deployed to staging and, eventually, perhaps to production. The next things to set up are automated rollouts and rollbacks, as well as testing. There are a lot of platforms for CI/CD: **Jenkins, VSTS, Azure DevOps** , TeamCity, JFrog, Spinnaker, etc.
+
+### 3. ORCHESTRATION
+
+![][3]
+
+Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks. **Kubernetes** is the market-leading orchestration solution; there are other orchestrators like Docker Swarm, Mesos, etc. **Helm Charts** help you define, install, and upgrade even the most complex Kubernetes applications.
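+
+As a small taste of orchestration in practice, here is a hypothetical session against a running cluster (the image name is a placeholder; the Helm line uses Helm 3 syntax):
+
+```
+$ kubectl create deployment myapp --image=myapp:1.0
+$ kubectl scale deployment myapp --replicas=3
+$ helm install myapp ./myapp-chart
+```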
+
+### 4. OBSERVABILITY & ANALYSIS
+
+Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Kubernetes also provides detailed information about an application’s resource usage, which allows you to evaluate your application’s performance and see where bottlenecks can be removed to improve overall performance.
+
+![][4]
+
+Pick solutions for monitoring, logging, and tracing. Consider the CNCF projects Prometheus for monitoring, Fluentd for logging, and Jaeger for tracing. For tracing, look for an OpenTracing-compatible implementation like Jaeger.
+
+### 5. SERVICE MESH
+
+As its name says, a service mesh is all about connecting services: the **discovery of services** , **health checking, routing** , and **monitoring ingress** from the internet. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
+
+![][5]
+
+**Istio** provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. **CoreDNS** is a fast and flexible tool that is useful for service discovery. **Envoy** and **Linkerd** each enable service mesh architectures.
+
+### 6. NETWORKING AND POLICY
+
+It is really important to enable more flexible networking layers. To do so, use a CNI-compliant network project like Calico, Flannel, or Weave Net. Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering.
+
+### 7. DISTRIBUTED DATABASE
+
+A distributed database is a database in which not all storage devices are attached to a common processor. It may be stored in multiple computers located in the same physical location, or it may be dispersed over a network of interconnected computers.
+
+![][6]
+
+When you need more resiliency and scalability than you can get from a single database, **Vitess** is a good option for running MySQL at scale through sharding. Rook is a storage orchestrator that integrates a diverse set of storage solutions into Kubernetes. Serving as the “brain” of Kubernetes, etcd provides a reliable way to store data across a cluster of machines.
+
+### 8. MESSAGING
+
+When you need higher performance than JSON-REST, consider using gRPC or NATS. gRPC is a universal RPC framework. NATS is a multi-modal messaging system that includes request/reply, pub/sub, and load-balanced queues. It is also applicable to newer use cases like IoT.
+
+### 9. CONTAINER REGISTRY & RUNTIMES
+
+A container registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. There are many container registries available on the market: Docker Hub, Azure Container Registry, Harbor, Nexus Repository, Amazon Elastic Container Registry, and many more.
+
+![][7]
+
+Container runtime **containerd** is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond.
+
+### 10. SOFTWARE DISTRIBUTION
+
+If you need to do secure software distribution, evaluate Notary, an implementation of The Update Framework (TUF).
+
+TUF provides a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems. The framework should enable applications to be secure against all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.
+
+--------------------------------------------------------------------------------
+
+via: https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923
+
+作者:[Sonu Jose][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://medium.com/@sonujose993
+[b]: https://github.com/lujun9972
+[1]: https://cdn-images-1.medium.com/max/1200/1*glD7bNJG3SlO0_xNmSGPcQ.png
+[2]: https://cdn-images-1.medium.com/max/1600/1*qOno8YNzmwimlaL9j2fSbA.png
+[3]: https://cdn-images-1.medium.com/max/1200/1*fw8YJnfF32dWsX_beQpWOw.png
+[4]: https://cdn-images-1.medium.com/max/1600/1*sbjPYNq76s9lR7D_FK4ltg.png
+[5]: https://cdn-images-1.medium.com/max/1600/1*kUFBuGfjZSS-n-32CCjtwQ.png
+[6]: https://cdn-images-1.medium.com/max/1600/1*4OGiB3HHQZBFsALjaRb9pA.jpeg
+[7]: https://cdn-images-1.medium.com/max/1600/1*VMCJN41mGZs4p2lQHD0nDw.png
diff --git a/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md
new file mode 100644
index 0000000000..e5f772e8ca
--- /dev/null
+++ b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md
@@ -0,0 +1,352 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A beginner's guide to building DevOps pipelines with open source tools)
+[#]: via: (https://opensource.com/article/19/4/devops-pipeline)
+[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)
+
+A beginner's guide to building DevOps pipelines with open source tools
+======
+If you're new to DevOps, check out this five-step process for building
+your first pipeline.
+![Shaking hands, networking][1]
+
+DevOps has become the default answer to fixing software development processes that are slow, siloed, or otherwise dysfunctional. But that doesn't mean very much when you're new to DevOps and aren't sure where to begin. This article explores what a DevOps pipeline is and offers a five-step process to create one. While this tutorial is not comprehensive, it should give you a foundation to start on and expand later. But first, a story.
+
+### My DevOps journey
+
+I used to work for the cloud team at Citi Group, developing an Infrastructure-as-a-Service (IaaS) web application to manage Citi's cloud infrastructure, but I was always interested in figuring out ways to make the development pipeline more efficient and bring positive cultural change to the development team. I found my answer in _[The Phoenix Project][2]_, a book recommended by Greg Lavender, the CTO of Citi's cloud architecture and infrastructure engineering. The book reads like a novel while it explains DevOps principles.
+
+A table at the back of the book shows how often different companies deploy to the release environment:
+
+Company | Deployment Frequency
+---|---
+Amazon | 23,000 per day
+Google | 5,500 per day
+Netflix | 500 per day
+Facebook | 1 per day
+Twitter | 3 per week
+Typical enterprise | 1 every 9 months
+
+How are the frequency rates of Amazon, Google, and Netflix even possible? It's because these companies have figured out how to make a nearly perfect DevOps pipeline.
+
+This definitely wasn't the case before we implemented DevOps at Citi. Back then, my team had different staged environments, but deployments to the development server were very manual. All developers had access to just one development server based on IBM WebSphere Application Server Community Edition. The problem was the server went down whenever multiple users simultaneously tried to make deployments, so the developers had to let each other know whenever they were about to make a deployment, which was quite a pain. In addition, there were problems with low code test coverages, cumbersome manual deployment processes, and no way to track code deployments with a defined task or a user story.
+
+I realized something had to be done, and I found a colleague who felt the same way. We decided to collaborate to build an initial DevOps pipeline—he set up a virtual machine and a Tomcat application server while I worked on Jenkins, integrating with Atlassian Jira and BitBucket, and code testing coverages. This side project was hugely successful: we almost fully automated the development pipeline, we achieved nearly 100% uptime on our development server, we could track and improve code testing coverage, and the Git branch could be associated with the deployment and Jira task. And most of the tools we used to construct our DevOps pipeline were open source.
+
+I now realize how rudimentary our DevOps pipeline was, as we didn't take advantage of advanced configurations like Jenkins files or Ansible. However, this simple process worked well, maybe due to the [Pareto][3] principle (also known as the 80/20 rule).
+
+### A brief introduction to DevOps and the CI/CD pipeline
+
+If you ask several people, "What is DevOps?" you'll probably get several different answers. DevOps, like agile, has evolved to encompass many different disciplines, but most people will agree on a few things: DevOps is a software development practice or a software development lifecycle (SDLC), and its central tenet is cultural change, where developers and non-developers all breathe in an environment where formerly manual things are automated; everyone does what they are best at; the number of deployments per period increases; throughput increases; and flexibility improves.
+
+While having the right software tools is not the only thing you need to achieve a DevOps environment, some tools are necessary. A key one is continuous integration and continuous deployment (CI/CD). This pipeline is where the environments have different stages (e.g., DEV, INT, TST, QA, UAT, STG, PROD), manual things are automated, and developers can achieve high-quality code, flexibility, and numerous deployments.
+
+This article describes a five-step approach to creating a DevOps pipeline, like the one in the following diagram, using open source tools.
+
+![Complete DevOps pipeline][4]
+
+Without further ado, let's get started.
+
+### Step 1: CI/CD framework
+
+The first thing you need is a CI/CD tool. Jenkins, an open source, Java-based CI/CD tool based on the MIT License, is the tool that popularized the DevOps movement and has become the de facto standard.
+
+So, what is Jenkins? Imagine it as some sort of magical universal remote control that can talk to many, many different services and tools and orchestrate them. On its own, a CI/CD tool like Jenkins is useless, but it becomes more powerful as it plugs into different tools and services.
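+
+If you want to try Jenkins locally, one low-effort sketch is to run the official container image (the ports are Jenkins' defaults):
+
+```
+$ docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
+```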
+
+Jenkins is just one of many open source CI/CD tools that you can leverage to build a DevOps pipeline.
+
+Name | License
+---|---
+[Jenkins][5] | Creative Commons and MIT
+[Travis CI][6] | MIT
+[CruiseControl][7] | BSD
+[Buildbot][8] | GPL
+[Apache Gump][9] | Apache 2.0
+[Cabie][10] | GNU
+
+Here's what a DevOps process looks like with a CI/CD tool.
+
+![CI/CD tool][11]
+
+You have a CI/CD tool running on your localhost, but there is not much you can do with it at the moment. Let's follow the next step of the DevOps journey.
+
+### Step 2: Source control management
+
+The best (and probably the easiest) way to verify that your CI/CD tool can perform some magic is by integrating with a source control management (SCM) tool. Why do you need source control? Suppose you are developing an application. Whenever you build an application, you are programming—whether you are using Java, Python, C++, Go, Ruby, JavaScript, or any of the gazillion programming languages out there. The code you write is called source code. In the beginning, especially when you are working alone, it's probably OK to put everything in your local directory. But when the project gets bigger and you invite others to collaborate, you need a way to avoid merge conflicts while effectively sharing the code modifications. You also need a way to recover a previous version—and the process of making a backup and copying-and-pasting gets old. You (and your teammates) want something better.
+
+This is where SCM becomes almost a necessity. An SCM tool helps by storing your code in repositories, versioning your code, and coordinating among project members.
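+
+If you have never used an SCM tool, the core Git workflow is only a few commands (the repository name and remote URL are placeholders):
+
+```
+$ git init myproject
+$ cd myproject
+$ git add .
+$ git commit -m 'initial commit'
+$ git remote add origin git@example.com:me/myproject.git
+$ git push -u origin master
+```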
+
+Although there are many SCM tools out there, Git is the standard and rightly so. I highly recommend using Git, but there are other open source options if you prefer.
+
+Name | License
+---|---
+[Git][12] | GPLv2 & LGPL v2.1
+[Subversion][13] | Apache 2.0
+[Concurrent Versions System][14] (CVS) | GNU
+[Vesta][15] | LGPL
+[Mercurial][16] | GNU GPL v2+
+
+Here's what the DevOps pipeline looks like with the addition of SCM.
+
+![Source control management][17]
+
+The CI/CD tool can automate the tasks of checking in and checking out source code and collaborating across members. Not bad, eh? But how can you make this into a working application so billions of people can use and appreciate it?
+
+### Step 3: Build automation tool
+
+Excellent! You can check out the code and commit your changes to the source control, and you can invite your friends to collaborate on the source control development. But you haven't yet built an application. To make it a web application, it has to be compiled and put into a deployable package format or run as an executable. (Note that an interpreted programming language like JavaScript or PHP doesn't need to be compiled.)
+
+Enter the build automation tool. No matter which build tool you decide to use, all build automation tools have a shared goal: to build the source code into some desired format and to automate the task of cleaning, compiling, testing, and deploying to a certain location. The build tools will differ depending on your programming language, but here are some common open source options to consider.
+
+Name | License | Programming Language
+---|---|---
+[Maven][18] | Apache 2.0 | Java
+[Ant][19] | Apache 2.0 | Java
+[Gradle][20] | Apache 2.0 | Java
+[Bazel][21] | Apache 2.0 | Java
+[Make][22] | GNU | N/A
+[Grunt][23] | MIT | JavaScript
+[Gulp][24] | MIT | JavaScript
+[Buildr][25] | Apache | Ruby
+[Rake][26] | MIT | Ruby
+[A-A-P][27] | GNU | Python
+[SCons][28] | MIT | Python
+[BitBake][29] | GPLv2 | Python
+[Cake][30] | MIT | C#
+[ASDF][31] | Expat (MIT) | LISP
+[Cabal][32] | BSD | Haskell
+
+Awesome! You can put your build automation tool configuration files into your source control management and let your CI/CD tool build it.
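+
+The build itself is typically a one-liner that your CI/CD tool runs for you; for example (pick the line matching your build tool):
+
+```
+$ mvn clean package
+$ gradle build
+$ make
+```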
+
+![Build automation tool][33]
+
+Everything is good, right? But where can you deploy it?
+
+### Step 4: Web application server
+
+So far, you have a packaged file that might be executable or deployable. For any application to be truly useful, it has to provide some kind of a service or an interface, but you need a vessel to host your application.
+
+For a web application, a web application server is that vessel. An application server offers an environment where the programming logic inside the deployable package can run, render an interface, and offer web services by opening sockets to the outside world. You need an HTTP server as well as some other environment (like a virtual machine) to install your application server. For now, let's assume you will learn about this along the way (although I will discuss containers below).
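+
+As a sketch, deploying a packaged Java web application to Tomcat can be as simple as copying the WAR into Tomcat's **webapps** directory, which it watches for deployments (the install path is hypothetical):
+
+```
+$ cp target/myapp.war /opt/tomcat/webapps/
+$ /opt/tomcat/bin/startup.sh
+```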
+
+There are a number of open source web application servers available.
+
+Name | License | Programming Language
+---|---|---
+[Tomcat][34] | Apache 2.0 | Java
+[Jetty][35] | Apache 2.0 | Java
+[WildFly][36] | GNU Lesser Public | Java
+[GlassFish][37] | CDDL & GNU Lesser Public | Java
+[Django][38] | 3-Clause BSD | Python
+[Tornado][39] | Apache 2.0 | Python
+[Gunicorn][40] | MIT | Python
+[Python Paste][41] | MIT | Python
+[Rails][42] | MIT | Ruby
+[Node.js][43] | MIT | JavaScript
+
+Now the DevOps pipeline is almost usable. Good job!
+
+![Web application server][44]
+
+Although it's possible to stop here and integrate further on your own, code quality is an important thing for an application developer to be concerned about.
+
+### Step 5: Code testing coverage
+
+Implementing code testing can be another cumbersome requirement, but developers need to catch any errors in an application early on and improve the code quality to ensure end users are satisfied. Luckily, there are many open source tools available to test your code and suggest ways to improve its quality. Even better, most CI/CD tools can plug into these tools and automate the process.
+
+There are two parts to code testing: _code testing frameworks_ that help write and run the tests, and _code quality suggestion tools_ that help improve code quality.
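+
+As with builds, test runs are usually single commands a CI/CD tool can invoke; for example, with Pytest plus the pytest-cov plugin (the package and directory names are hypothetical):
+
+```
+$ pytest --cov=myapp tests/
+```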
+
+#### Code test frameworks
+
+Name | License | Programming Language
+---|---|---
+[JUnit][45] | Eclipse Public License | Java
+[EasyMock][46] | Apache | Java
+[Mockito][47] | MIT | Java
+[PowerMock][48] | Apache 2.0 | Java
+[Pytest][49] | MIT | Python
+[Hypothesis][50] | Mozilla | Python
+[Tox][51] | MIT | Python
+
+#### Code quality suggestion tools
+
+Name | License | Programming Language
+---|---|---
+[Cobertura][52] | GNU | Java
+[CodeCover][53] | Eclipse Public (EPL) | Java
+[Coverage.py][54] | Apache 2.0 | Python
+[Emma][55] | Common Public License | Java
+[JaCoCo][56] | Eclipse Public License | Java
+[Hypothesis][50] | Mozilla | Python
+[Tox][51] | MIT | Python
+[Jasmine][57] | MIT | JavaScript
+[Karma][58] | MIT | JavaScript
+[Mocha][59] | MIT | JavaScript
+[Jest][60] | MIT | JavaScript
+
+Note that most of the tools and frameworks mentioned above are written for Java, Python, and JavaScript, since C++ and C# are proprietary programming languages (although GCC is open source).
+
+Now that you've implemented code testing coverage tools, your DevOps pipeline should resemble the DevOps pipeline diagram shown at the beginning of this tutorial.
+
+### Optional steps
+
+#### Containers
+
+As I mentioned above, you can host your application server on a virtual machine or a server, but containers are a popular solution.
+
+[What are containers][61]? The short explanation is that a VM needs the huge footprint of an operating system, which overwhelms the application size, while a container just needs a few libraries and configurations to run the application. There are clearly still important uses for a VM, but a container is a lightweight solution for hosting an application, including an application server.
+
+Although there are other options for containers, Docker and Kubernetes are the most popular.
+
+Name | License
+---|---
+[Docker][62] | Apache 2.0
+[Kubernetes][63] | Apache 2.0
+
+To learn more, check out these other [Opensource.com][64] articles about Docker and Kubernetes:
+
+ * [What Is Docker?][65]
+ * [An introduction to Docker][66]
+ * [What is Kubernetes?][67]
+ * [From 0 to Kubernetes][68]
+
+
+
+#### Middleware automation tools
+
+Our DevOps pipeline mostly focused on collaboratively building and deploying an application, but there are many other things you can do with DevOps tools. One of them is leveraging Infrastructure as Code (IaC) tools, which are also known as middleware automation tools. These tools help automate the installation, management, and other tasks for middleware software. For example, an automation tool can pull applications, like a web application server, database, and monitoring tool, with the right configurations and deploy them to the application server.
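+
+With Ansible, for instance, checking connectivity and applying a playbook are each one command (the inventory and playbook names are placeholders):
+
+```
+$ ansible all -i inventory.ini -m ping
+$ ansible-playbook -i inventory.ini site.yml
+```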
+
+Here are several open source middleware automation tools to consider:
+
+Name | License
+---|---
+[Ansible][69] | GNU Public
+[SaltStack][70] | Apache 2.0
+[Chef][71] | Apache 2.0
+[Puppet][72] | Apache or GPL
+
+For more on middleware automation tools, check out these other [Opensource.com][64] articles:
+
+ * [A quickstart guide to Ansible][73]
+ * [Automating deployment strategies with Ansible][74]
+ * [Top 5 configuration management tools][75]
+
+
+
+### Where can you go from here?
+
+This is just the tip of the iceberg for what a complete DevOps pipeline can look like. Start with a CI/CD tool and explore what else you can automate to make your team's job easier. Also, look into [open source communication tools][76] that can help your team work better together.
+
+For more insight, here are some very good introductory articles about DevOps:
+
+ * [What is DevOps][77]
+ * [5 things to master to be a DevOps engineer][78]
+ * [DevOps is for everyone][79]
+ * [Getting started with predictive analytics in DevOps][80]
+
+
+
+Integrating DevOps with open source agile tools is also a good idea:
+
+ * [What is agile?][81]
+ * [4 steps to becoming an awesome agile developer][82]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/devops-pipeline
+
+作者:[Bryant Son (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/network_team_career_hand.png?itok=_ztl2lk_ (Shaking hands, networking)
+[2]: https://www.amazon.com/dp/B078Y98RG8/
+[3]: https://en.wikipedia.org/wiki/Pareto_principle
+[4]: https://opensource.com/sites/default/files/uploads/1_finaldevopspipeline.jpg (Complete DevOps pipeline)
+[5]: https://github.com/jenkinsci/jenkins
+[6]: https://github.com/travis-ci/travis-ci
+[7]: http://cruisecontrol.sourceforge.net
+[8]: https://github.com/buildbot/buildbot
+[9]: https://gump.apache.org
+[10]: http://cabie.tigris.org
+[11]: https://opensource.com/sites/default/files/uploads/2_runningjenkins.jpg (CI/CD tool)
+[12]: https://git-scm.com
+[13]: https://subversion.apache.org
+[14]: http://savannah.nongnu.org/projects/cvs
+[15]: http://www.vestasys.org
+[16]: https://www.mercurial-scm.org
+[17]: https://opensource.com/sites/default/files/uploads/3_sourcecontrolmanagement.jpg (Source control management)
+[18]: https://maven.apache.org
+[19]: https://ant.apache.org
+[20]: https://gradle.org/
+[21]: https://bazel.build
+[22]: https://www.gnu.org/software/make
+[23]: https://gruntjs.com
+[24]: https://gulpjs.com
+[25]: http://buildr.apache.org
+[26]: https://github.com/ruby/rake
+[27]: http://www.a-a-p.org
+[28]: https://www.scons.org
+[29]: https://www.yoctoproject.org/software-item/bitbake
+[30]: https://github.com/cake-build/cake
+[31]: https://common-lisp.net/project/asdf
+[32]: https://www.haskell.org/cabal
+[33]: https://opensource.com/sites/default/files/uploads/4_buildtools.jpg (Build automation tool)
+[34]: https://tomcat.apache.org
+[35]: https://www.eclipse.org/jetty/
+[36]: http://wildfly.org
+[37]: https://javaee.github.io/glassfish
+[38]: https://www.djangoproject.com/
+[39]: http://www.tornadoweb.org/en/stable
+[40]: https://gunicorn.org
+[41]: https://github.com/cdent/paste
+[42]: https://rubyonrails.org
+[43]: https://nodejs.org/en
+[44]: https://opensource.com/sites/default/files/uploads/5_applicationserver.jpg (Web application server)
+[45]: https://junit.org/junit5
+[46]: http://easymock.org
+[47]: https://site.mockito.org
+[48]: https://github.com/powermock/powermock
+[49]: https://docs.pytest.org
+[50]: https://hypothesis.works
+[51]: https://github.com/tox-dev/tox
+[52]: http://cobertura.github.io/cobertura
+[53]: http://codecover.org/
+[54]: https://github.com/nedbat/coveragepy
+[55]: http://emma.sourceforge.net
+[56]: https://github.com/jacoco/jacoco
+[57]: https://jasmine.github.io
+[58]: https://github.com/karma-runner/karma
+[59]: https://github.com/mochajs/mocha
+[60]: https://jestjs.io
+[61]: /resources/what-are-linux-containers
+[62]: https://www.docker.com
+[63]: https://kubernetes.io
+[64]: http://Opensource.com
+[65]: https://opensource.com/resources/what-docker
+[66]: https://opensource.com/business/15/1/introduction-docker
+[67]: https://opensource.com/resources/what-is-kubernetes
+[68]: https://opensource.com/article/17/11/kubernetes-lightning-talk
+[69]: https://www.ansible.com
+[70]: https://www.saltstack.com
+[71]: https://www.chef.io
+[72]: https://puppet.com
+[73]: https://opensource.com/article/19/2/quickstart-guide-ansible
+[74]: https://opensource.com/article/19/1/automating-deployment-strategies-ansible
+[75]: https://opensource.com/article/18/12/configuration-management-tools
+[76]: https://opensource.com/alternatives/slack
+[77]: https://opensource.com/resources/devops
+[78]: https://opensource.com/article/19/2/master-devops-engineer
+[79]: https://opensource.com/article/18/11/how-non-engineer-got-devops
+[80]: https://opensource.com/article/19/1/getting-started-predictive-analytics-devops
+[81]: https://opensource.com/article/18/10/what-agile
+[82]: https://opensource.com/article/19/2/steps-agile-developer
diff --git a/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md b/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md
new file mode 100644
index 0000000000..4ec5b372e0
--- /dev/null
+++ b/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Beyond SD-WAN: VMware’s vision for the network edge)
+[#]: via: (https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all)
+[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
+
+Beyond SD-WAN: VMware’s vision for the network edge
+======
+Under the ownership of VMware, the VeloCloud Business Unit is greatly expanding its vision of what an SD-WAN should be. VMware calls the strategy “the network edge.”
+![istock][1]
+
+VeloCloud has been a Business Unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together to build a cohesive offering. In January, Neal Weinberg provided [an overview of where VMware is headed with its reinvention][2]. Now let’s look at it from the VeloCloud [SD-WAN][3] perspective.
+
+I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it’s all possible because of the complementary products that VMware brings to VeloCloud’s table.
+
+**[ Read also:[Edge computing is the place to address a host of IoT security concerns][4] ]**
+
+It all starts with this architecture chart that shows the VMware vision for the network edge.
+
+![][5]
+
+The left side of the chart shows that in the branch office, you can put an edge device that can be either a VeloCloud hardware appliance or VeloCloud software running on some third-party hardware. Then the right side of the chart shows where the workloads are — the traditional data center, the public cloud, and SaaS applications. You can put one or more edge devices there and then you have the classic hub-and-spoke model with the VeloCloud SD-WAN running on top.
+
+In the middle of the diagram are the gateways, which are a differentiator and a unique benefit of VeloCloud.
+
+“If you have applications in the public cloud or SaaS, then you can use our gateways instead of spinning up individual edges at each of the applications,” Uppal said. “Those gateways really perform a multi-tenanted edge function. So, instead of locating an individual edge at every termination point at the cloud, you basically go from an edge in the branch to a gateway in the cloud, and then from that gateway you go to your final destination. We've engineered it so that the gateways are close to where the end applications are — typically within five milliseconds.”
+
+Going back to the architecture diagram, there are two clouds in the middle of the chart. The left-hand cloud is the over-the-top (OTT) service run by VeloCloud. It uses 800 gateways deployed over 30 points of presence (PoPs) around the world. The right-hand cloud is the telco cloud, which deploys gateways as network-based services. VeloCloud has several telco partners that take the same VeloCloud gateways and deploy them in their cloud.
+
+“Between a telco service, a cloud service, and hub and spoke on premise, we essentially have covered all the bases in terms of how enterprises would want to consume software-defined WAN. This flexibility is part of the reason why we've been successful in this market,” Uppal said.
+
+Where is VeloCloud going with this strategy? Again, looking at the architecture chart, the “vision” pieces are labeled 1 through 5. Let’s look at each of those areas.
+
+### Edge compute
+
+Starting with number 1 on the left-hand side of the diagram, there is the expansion from the edge itself going deeper into the branch by crossing over a LAN or a Wi-Fi boundary to get to where the individual users and IoT “things” are. This approach uses the same VeloCloud platform to spin up [compute at the edge][6], which can be either a container or a virtual machine (VM).
+
+“Of course, VMware is very strong in compute in the data center. Our CEO recently articulated the VMware edge story, which is compute edge and device edge. When you combine it with the network edge, which is VeloCloud, then you have a full edge solution,” Uppal explained. “So, this first piece that you see is our foray into getting deeper into the branch all the way up to the individual users and things and combining compute functions on to the VeloCloud solution. There's been a lot of talk about edge compute and we do know that the pendulum is swinging back, but one of the major challenges is how to manage it all. VMware has strong technology in the data center space that we are bringing to bear out there at the edge.”
+
+### 5G underlay intelligence
+
+The next piece, number 2 on the diagram, is [5G][7]. At the Mobile World Congress, VMware and AT&T announced they are bringing out SD-WAN running over 5G. The idea here is that 5G should give you a low-latency connection and you get on-demand control, so you can tell 5G on the fly that you want this type of connection. Once that is done, the right network slices would be put in place and then you can get a connection according to the specifications that you asked for.
+
+“We as VeloCloud would measure the underlay continuously. It's like a speed test on steroids. We would measure bandwidth, packet loss, jitter and latency continuously with low overhead because we piggyback on real user traffic. And then on the basis of that measurement, we would steer the traffic one way or another,” Uppal said. “For example, your real-time voice is important, so let's pick the best performing network at that instant of time, which might change in the next instant, so that's why we have to make that decision on a per-packet basis.”
+
+Uppal continued, “What 5G allows us to do is to look at that underlay as not just being one underlay, but it could be several different underlays, and it's programmable so you could ask it for a type of underlay. That is actually pretty revolutionary — that we would run an overlay with the intelligence of SD-WAN counting on the underlay intelligence of 5G.
+
+“We are working pretty closely with our partner AT&T in this space. We are talking about the business aspect of 5G being used as a transport mechanism for enterprise data, rather than consumer phones having 5G on them. This is available from AT&T today in a handful of cities. So as 5G becomes more ubiquitous, you'll begin to see it deployed more and more. Then we will do an Ethernet or Wi-Fi handoff to the hotspot, and from then on, we'll jump onto the 5G network for the SD-WAN. Then the next phase of that will be 5G natively on our devices, which is what we are working on today.”
+
+### Gateway federation
+
+The third part of the vision is gateway federation, some of which is available today. The left-hand cloud in the diagram, which is the OTT service, should be able to interoperate gateway to gateway with the cloud on the right-hand side, which is the network-based service. For example, if you have a telco cloud of gateways but those gateways don't reach out into areas where the telco doesn’t have a presence, then you can reuse VeloCloud gateways that are sitting in other locations. A gateway would federate with another gateway, so it would extend the telco’s network beyond the facilities that they own. That's the first step of gateway federation, which is available from VeloCloud today.
+
+Uppal said the next step is a telco-to-telco federation. “There's a lot of interest from folks in the industry on how to get that federation done. We're working with the Metro Ethernet Forum (MEF) on that,” he said.
+
+### SD-WAN as a platform
+
+The next piece of the vision is SD-WAN as a platform. VeloCloud already incorporates security services into its SD-WAN platform in the form of [virtual network functions][8] (VNFs) from Palo Alto, Check Point Software, and other partners. Deploying a service as a VNF eliminates having separate hardware on the network. Now the company is starting to bring more services onto its platform.
+
+“Analytics is the area we are bringing in next,” Uppal said. “We partnered with SevOne and Plixer so that they can take analytics that we are providing, correlate them with other analytics that they have and then come up with inferences on whether things worked correctly or not, or to check for anomalous behavior.”
+
+Two additional areas that VeloCloud is working on are unified communications as a service (UCaaS) and universal customer premises equipment (uCPE).
+
+“We announced that we are working with RingCentral in the UCaaS space, and with ADVA and Telco Systems for uCPE. We have our own uCPE offering today but with a limited number of VNFs, so ADVA and Telco Systems will help us expand those capabilities,” Uppal explained. “With SD-WAN becoming a platform for on-premise deployments, you can virtualize functions and manage them from the same place, whether they're VNF-type of functions or compute-type of functions. This is an important direction that we are moving towards.”
+
+### Hybrid and multi-cloud integration
+
+The final piece of the strategy is hybrid and multi-cloud integration. Since its inception, VeloCloud has had gateways to facilitate access to specific applications running in the cloud. These gateways provide a secure end-to-end connection and an ROI advantage.
+
+Recognizing that workloads have expanded to multi-cloud and hybrid cloud, VeloCloud is broadening this approach utilizing VMware’s relationships with Microsoft, Amazon, and Google and offerings on Azure, Amazon Web Services, and Google Cloud, respectively. From a networking standpoint, you can get the same consistency of access using VeloCloud because you can decide from the gateway whichever direction you want to go. That direction will be chosen — and services added — based on your business policy.
+
+“We think this is the next hurdle in terms of deployment of SD-WAN, and once that is solved, people are going to deploy a lot more for hybrid and multi-cloud,” said Uppal. “We want to be the first ones out of the gate to get that done.”
+
+Uppal further said, “These five areas are where we see our SD-WAN headed, and we call this a network edge because it's beyond just the traditional SD-WAN functions. It includes edge computing, SD-WAN becoming a broader platform, integrating with hybrid multi cloud — these are all aspects of features that go way beyond just the narrower definition of SD-WAN.”
+
+**More about edge networking:**
+
+ * [How edge networking and IoT will reshape data centers][9]
+ * [Edge computing best practices][10]
+ * [How edge computing can help secure the IoT][11]
+
+
+
+Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all
+
+作者:[Linda Musthaler][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Linda-Musthaler/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/01/istock-864405678-100747484-large.jpg
+[2]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html
+[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[4]: https://www.networkworld.com/article/3307859/edge-computing-helps-a-lot-of-iot-security-problems-by-getting-it-involved.html
+[5]: https://images.idgesg.net/images/article/2019/04/vmware-vision-for-network-edge-100793086-large.jpg
+[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[7]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[8]: https://www.networkworld.com/article/3206709/what-s-the-difference-between-sdn-and-nfv.html
+[9]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[10]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[11]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[12]: https://www.facebook.com/NetworkWorld/
+[13]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190408 Getting started with Python-s cryptography library.md b/sources/tech/20190408 Getting started with Python-s cryptography library.md
new file mode 100644
index 0000000000..63eab6104f
--- /dev/null
+++ b/sources/tech/20190408 Getting started with Python-s cryptography library.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Python's cryptography library)
+[#]: via: (https://opensource.com/article/19/4/cryptography-python)
+[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+
+Getting started with Python's cryptography library
+======
+Encrypt your data and keep it safe from attackers.
+![lock on world map][1]
+
+The first rule of cryptography club is: never _invent_ a cryptography system yourself. The second rule of cryptography club is: never _implement_ a cryptography system yourself: many real-world holes are found in the _implementation_ phase of a cryptosystem as well as in the design.
+
+One useful library for cryptographic primitives in Python is called simply [**cryptography**][2]. It has both "secure" primitives and a "hazmat" layer. The "hazmat" layer requires care and knowledge of cryptography, and it is easy to introduce security holes using it. We will not cover anything in the "hazmat" layer in this introductory article!
+
+The most useful high-level secure primitive in **cryptography** is the Fernet implementation. Fernet is a standard for encrypting buffers in a way that follows best-practices cryptography. It is not suitable for very big files—anything in the gigabyte range and above—since it requires you to load the whole buffer that you want to encrypt or decrypt into memory at once.
+
+Fernet supports _symmetric_, or _secret key_, cryptography: the same key is used for encryption and decryption, and therefore must be kept safe.
+
+Generating a key is easy:
+
+
+```
+>>> from cryptography import fernet
+>>> k = fernet.Fernet.generate_key()
+>>> type(k)
+<class 'bytes'>
+```
+
+Those bytes can be written to a file with appropriate permissions, ideally on a secure machine.
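+
+For example, here is one minimal way to do that (assuming a POSIX system; the file name is an illustrative choice), creating the file with owner-only permissions:
+
+```
+>>> import os
+>>> fd = os.open("fernet.key", os.O_WRONLY | os.O_CREAT, 0o600)
+>>> os.write(fd, k)
+44
+>>> os.close(fd)
+```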
+
+Once you have key material, encrypting is easy as well:
+
+
+```
+>>> frn = fernet.Fernet(k)
+>>> encrypted = frn.encrypt(b"x marks the spot")
+>>> encrypted[:10]
+b'gAAAAABb1'
+```
+
+You will get slightly different values if you encrypt on your machine. Not only because (I hope) you generated a different key from me, but because Fernet concatenates the value to be encrypted with some randomly generated buffer. This is one of the "best practices" I alluded to earlier: it will prevent an adversary from being able to tell which encrypted values are identical, which is sometimes an important part of an attack.
+
+Decryption is equally simple:
+
+
+```
+>>> frn = fernet.Fernet(k)
+>>> frn.decrypt(encrypted)
+b'x marks the spot'
+```
+
+Note that this only encrypts and decrypts _byte strings_. In order to encrypt and decrypt _text strings_, they will need to be encoded and decoded, usually with [UTF-8][3].
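+
+For example, continuing with the _frn_ object from above:
+
+```
+>>> token = frn.encrypt("x marks the spot".encode("utf-8"))
+>>> frn.decrypt(token).decode("utf-8")
+'x marks the spot'
+```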
+
+One of the most interesting advances in cryptography in the mid-20th century was _public key_ cryptography. It allows the encryption key to be published while the _decryption key_ is kept secret. It can, for example, be used to store API keys to be used by a server: the server is the only thing with access to the decryption key, but anyone can add to the store by using the public encryption key.
+
+While **cryptography** does not have any public key cryptographic _secure_ primitives, the [**PyNaCl**][4] library does. PyNaCl wraps the [**NaCl**][5] encryption system invented by Daniel J. Bernstein and offers some nice ways to use it.
+
+NaCl always _encrypts_ and _signs_ or _decrypts_ and _verifies signatures_ simultaneously. This is a way to prevent malleability-based attacks, where an adversary modifies the encrypted value.
+
+Encryption is done with a public key, while signing is done with a secret key:
+
+
+```
+>>> from nacl.public import PrivateKey, PublicKey, Box
+>>> source = PrivateKey.generate()
+>>> with open("target.pubkey", "rb") as fpin:
+... target_public_key = PublicKey(fpin.read())
+>>> enc_box = Box(source, target_public_key)
+>>> result = enc_box.encrypt(b"x marks the spot")
+>>> result[:4]
+b'\xe2\x1c0\xa4'
+```
+
+Decryption reverses the roles: it needs the private key for decryption and the public key to verify the signature:
+
+
+```
+>>> from nacl.public import PrivateKey, PublicKey, Box
+>>> with open("source.pubkey", "rb") as fpin:
+... source_public_key = PublicKey(fpin.read())
+>>> with open("target.private_key", "rb") as fpin:
+... target = PrivateKey(fpin.read())
+>>> dec_box = Box(target, source_public_key)
+>>> dec_box.decrypt(result)
+b'x marks the spot'
+```
+
+The [**PocketProtector**][6] library builds on top of PyNaCl and contains a complete secrets management solution.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/cryptography-python
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq (lock on world map)
+[2]: https://cryptography.io/en/latest/
+[3]: https://en.wikipedia.org/wiki/UTF-8
+[4]: https://pynacl.readthedocs.io/en/stable/
+[5]: https://nacl.cr.yp.to/
+[6]: https://github.com/SimpleLegal/pocket_protector/blob/master/USER_GUIDE.md
diff --git a/sources/tech/20190408 How to quickly deploy, run Linux applications as unikernels.md b/sources/tech/20190408 How to quickly deploy, run Linux applications as unikernels.md
new file mode 100644
index 0000000000..6d65eaf369
--- /dev/null
+++ b/sources/tech/20190408 How to quickly deploy, run Linux applications as unikernels.md
@@ -0,0 +1,84 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
+[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to quickly deploy, run Linux applications as unikernels
+======
+Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.
+![Marcho Verch \(CC BY 2.0\)][1]
+
+Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.
+
+### What are unikernels?
+
+A unikernel is a very specialized single-address-space machine image that is similar to the kind of cloud applications that have come to dominate so much of the internet, but it is considerably smaller and single-purpose. It is lightweight, providing only the resources needed. It loads very quickly and is considerably more secure, having a very limited attack surface. Any drivers, I/O routines and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present. And unikernels will often run 10 to 20 times faster than a container.
+
+**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
+
+Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can't try to grab the system's /etc/passwd or /etc/shadow files because these files don't exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don't need, thereby removing vulnerabilities and improving performance many times over.
+
+In short, unikernels:
+
+ * Provide improved security (e.g., making shell code exploits impossible)
+ * Have much smaller footprints than standard cloud apps
+ * Are highly optimized
+ * Boot extremely quickly
+
+
+
+### Are there any downsides to unikernels?
+
+The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant step. Trimming down applications to just what is needed and then producing a tight, smoothly running application can be complex because of the application's low-level nature. In the past, you pretty much had to be a systems developer or a low-level programmer to generate them.
+
+### How is this changing?
+
+Just recently (March 24, 2019) [NanoVMs][3] announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster, more safely and with less cost and overhead.
+
+### What is NanoVMs OPS?
+
+NanoVMs is a unikernel tool for developers. It allows you to run all sorts of enterprise-class software yet still have extremely tight control over how it works.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
+
+Other benefits associated with OPS include:
+
+ * Developers need no prior experience or knowledge to build unikernels.
+ * The tool can be used to build and run unikernels locally on a laptop.
+ * No accounts need to be created and only a single download and one command is required to execute OPS.
+
+
+
+An intro to NanoVMs is available on [NanoVMs on YouTube][5]. You can also check out the company's [LinkedIn page][6] and can read about NanoVMs security [here][7].
+
+Here is some information on how to [get started][8].
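+
+As a rough sketch of the workflow (the exact commands may have changed, so treat this as an assumption and defer to the getting-started guide linked above), running an existing Linux binary as a unikernel looks something like this:
+
+```
+# Download and install the OPS tool
+$ curl https://ops.city/get.sh -sSfL | sh
+
+# Build and boot an existing Linux ELF binary as a unikernel
+$ ops run myprogram
+```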
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
+[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[3]: https://nanovms.com/
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
+[6]: https://www.linkedin.com/company/nanovms/
+[7]: https://nanovms.com/security
+[8]: https://nanovms.gitbook.io/ops/getting_started
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md b/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md
new file mode 100644
index 0000000000..b0e1948ff4
--- /dev/null
+++ b/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (InitRAMFS, Dracut, and the Dracut Emergency Shell)
+[#]: via: (https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+InitRAMFS, Dracut, and the Dracut Emergency Shell
+======
+
+![][1]
+
+The [Linux startup process][2] goes through several stages before reaching the final [graphical or multi-user target][3]. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded.
+
+This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated.
+
+### The InitRAMFS
+
+[Initramfs][4] stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed.
+
+![A Linux Boot Directory][5]
+
+By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. This default can be changed by modifying the value of the _installonly_limit_ setting in the /etc/dnf/dnf.conf file.
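+
+For example, you can check the current value on your own system; the value shown below is the default at the time of writing and may differ on your machine:
+
+```
+$ grep installonly_limit /etc/dnf/dnf.conf
+installonly_limit=3
+```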
+
+You can use the _lsinitrd_ command to list the contents of your initramfs archive:
+
+![The LsInitRD Command][6]
+
+The above screenshot shows that my initramfs archive contains the _nouveau_ GPU driver. The _modinfo_ command tells me that the nouveau driver supports several models of NVIDIA video cards. The _lspci_ command shows that there is an NVIDIA GeForce video card in my computer’s PCI slot. There are also several basic Unix commands included in the archive such as _cat_ and _cp_.
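+
+If you want to repeat those checks on your own machine, commands along these lines should work (output omitted here, since it will vary from system to system):
+
+```
+$ lsinitrd | grep nouveau
+$ modinfo nouveau | head
+$ lspci | grep -i vga
+```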
+
+By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot.
+
+### The Dracut Command
+
+The _dracut_ command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command:
+
+```
+# dracut --force --no-hostonly
+```
+
+The _force_ parameter tells dracut that it is OK to overwrite the existing initramfs archive. The _no-hostonly_ parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs.
+
+By default dracut operates on the initramfs for the currently-running kernel. You can use the _uname_ command to display which version of the Linux kernel you are currently running:
+
+```
+$ uname -r
+5.0.5-200.fc29.x86_64
+```
+
+Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer:
+
+```
+# dracut --force
+```
+
+There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer:
+
+```
+$ man dracut
+```
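+
+As a minimal sketch of such a customization (the driver and file names here are illustrative, reusing the _nouveau_ driver mentioned earlier), you could ensure a driver is always included in future initramfs archives like this:
+
+```
+# echo 'add_drivers+=" nouveau "' > /etc/dracut.conf.d/mydrivers.conf
+# dracut --force
+```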
+
+### The Dracut Emergency Shell
+
+![The Dracut Emergency Shell][7]
+
+Sometimes something goes wrong during the initramfs stage of your computer’s boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process.
+
+As a somewhat contrived example, let’s suppose that I accidentally deleted an important kernel parameter in my boot loader configuration:
+
+```
+# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg
+```
+
+The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell.
+
+From the emergency shell, I can enter _journalctl_ and then use the **Space** key to page down through the startup logs. Near the end of the log I see a warning that reads “/dev/mapper/fedora-root does not exist”. I can then use the _ls_ command to find out what does exist:
+
+```
+# ls /dev/mapper
+control fedora-swap
+```
+
+Hmm, the fedora-root LVM volume appears to be missing. Let’s see what I can find with the lvm command:
+
+```
+# lvm lvscan
+ACTIVE '/dev/fedora/swap' [3.85 GiB] inherit
+inactive '/dev/fedora/home' [22.85 GiB] inherit
+inactive '/dev/fedora/root' [46.80 GiB] inherit
+```
+
+Ah ha! There’s my root partition. It’s just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process:
+
+```
+# lvm lvchange -a y fedora/root
+# exit
+```
+
+![The Fedora Login Screen][8]
+
+The above example only demonstrates the basic concept. You can check the [troubleshooting section][9] of the [dracut guide][10] for a few more examples.
+
+It is possible to access the dracut emergency shell manually by adding the _rd.break_ parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started.
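+
+For example, at the GRUB menu you can press **e** to edit the boot entry and append _rd.break_ to the line that starts with “linux”. The exact line varies by system; on this article’s example system it would end up looking something like this (illustrative only):
+
+```
+linux16 /vmlinuz-5.0.5-200.fc29.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.break
+```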
+
+Check the _dracut.kernel_ man page for details about what kernel options your version of dracut supports:
+
+```
+$ man dracut.kernel
+```
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-816x345.png
+[2]: https://en.wikipedia.org/wiki/Linux_startup_process
+[3]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-managing_services_with_systemd-targets
+[4]: https://en.wikipedia.org/wiki/Initial_ramdisk
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/boot.jpg
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/04/lsinitrd.jpg
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-shell.jpg
+[8]: https://fedoramagazine.org/wp-content/uploads/2019/04/fedora-login-1024x768.jpg
+[9]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html#_troubleshooting
+[10]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html
diff --git a/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md b/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md
new file mode 100644
index 0000000000..ca0d81d89a
--- /dev/null
+++ b/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 1)
+[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1)
+[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
+
+Linux Server Hardening Using Idempotency with Ansible: Part 1
+======
+
+![][1]
+
+[Creative Commons Zero][2]
+
+I think it’s safe to say that the need to frequently update the packages on our machines has been firmly drilled into us. To ensure the use of the latest features and also to keep security bugs to a minimum, skilled engineers and even desktop users are well-versed in the need to update their software.
+
+Hardware, software and SaaS (Software as a Service) vendors have also firmly embedded the word “firewall” into our vocabulary for both domestic and industrial uses to protect our computers. In my experience, however, even within potentially more sensitive commercial environments, few engineers actively tweak the operating system (OS) they’re working on, to any great extent at least, to bolster security.
+
+Standard fare on Linux systems, for example, might mean looking at configuring a larger swap file to cope with your hungry application’s demands. Or, maybe adding a separate volume to your server for extra disk space, specifying a more performant CPU at launch time, installing a few of your favorite DevOps tools, or chucking a couple of certificates onto the filesystem for each new server you build. This isn’t quite the same thing.
+
+### Improve your Security Posture
+
+What I am specifically referring to is a mixture of compliance and security, I suppose. In short, there’s a surprisingly large number of areas in which a default OS can improve its security posture. We can agree that tweaking certain aspects of an OS is a little riskier than tweaking others. Consider your network stack, for example. Imagine that, completely out of the blue, your server’s networking suddenly does something unexpected and causes you troubleshooting headaches or even some downtime. This might happen because a new application or updated package suddenly expects routing to behave in a less-common way or needs a specific protocol enabled to function correctly.
+
+However, there are many changes that you can make to your servers without suffering any sleepless nights. The version and flavor of an OS help determine which changes you might comfortably make and to what extent. Most importantly, though, what’s good for the goose is rarely good for the gander. In other words, every single server estate has different requirements, both broad and subtle, which makes each use case unique. And don’t forget that a database server also has very different needs from a web server, so you can have a number of differing needs even within one small cluster of servers.
+
+Over the last few years I’ve introduced these hardening and compliance tweaks more than a handful of times across varying server estates in my DevSecOps roles. The OSs have included: Debian, Red Hat Enterprise Linux (RHEL) and their respective derivatives (including what I suspect will be the increasingly popular RHEL derivative, Amazon Linux). There have been times that, admittedly including a multitude of relatively tiny tweaks, the number of changes to a standard server build was into the hundreds. It all depended on the time permitted for the work, the appetite for any risks and the generic or specific nature of the OS tweaks.
+
+In this article, we’ll discuss the theory around something called idempotency which, in hand with an automation tool such as Ansible, can provide the ongoing improvements to your server estate’s security posture. For good measure we’ll also look at a number of Ansible playbook examples and additionally refer to online resources so that you can introduce idempotency to a server estate near you.
+
+### Say What?
+
+In simple terms the word “idempotent” just means returning something back to how it was prior to a change. It can also mean that lots of things you wanted to be the same, for consistency, are exactly the same, too.
+
+Picture that in action for a moment on a server estate; we’ll use AWS (Amazon Web Services) as our example. You create a new server image (Amazon Machine Images == AMIs) precisely how you want it, with compliance and hardening introduced, custom packages, the removal of unwanted packages, SSH keys, user accounts, etc., and then spin up twenty servers using that AMI.
+
+You know for certain that all the servers, at least at the time that they are launched, are absolutely identical. Trust me when I say that this is a “good thing” ™. The lack of what’s known as “config drift” means that if one package on a server needs to be updated for security reasons, then all the servers need that package updated too. Or if there’s a typo in a config file that’s breaking an application, then it affects all servers equally. There’s less administrative overhead, less security risk and greater levels of predictability in terms of achieving better uptime.
+
+What about config drift from a security perspective? As you’ve guessed it’s definitely not welcome. That’s because engineers making manual changes to a “base OS build” can only lead to heartache and stress. The predictability of how a system is working suffers greatly as a result and servers running unique config become less reliable. These server systems are known as “snowflakes” as they’re unique but far less beautiful than actual snow.
+
+Equally an attacker might have managed to breach one aspect, component or service on a server but not all of its facets. By rewriting our base config again and again we’re able to, with 100% certainty (if it’s set up correctly), predict exactly what a server will look like and therefore how it will perform. Using various tools you can also trigger alarms if changes are detected to request that a pair of human eyes have a look to see if it’s a serious issue and then adjust the base config if needed.
+
+To make our machines idempotent we might overwrite our config changes every 20 or 30 minutes, for example. When it comes to running servers, that in essence, is what is meant by idempotency.
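+
+As a concrete sketch (a hypothetical example, not taken from a real estate), an idempotent Ansible task describes the desired end state, so running it every twenty or thirty minutes reverts any drift and does nothing when the state is already correct:
+
+```
+---
+# hardening.yml - minimal illustrative hardening playbook
+- hosts: all
+  become: yes
+  tasks:
+    - name: Ensure root cannot log in over SSH
+      lineinfile:
+        path: /etc/ssh/sshd_config
+        regexp: '^#?PermitRootLogin'
+        line: 'PermitRootLogin no'
+      notify: restart sshd
+
+  handlers:
+    - name: restart sshd
+      service:
+        name: sshd
+        state: restarted
+```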
+
+### Central Station
+
+My mechanism of choice for repeatedly writing config across a large number of servers is running Ansible playbooks. It’s relatively easy to implement and removes the all-too-painful additional logic required when using shell scripts. Of the popular configuration management tools, I have seen Puppet used successfully on a large government estate in an idempotent manner, but I prefer Ansible due to its more logical syntax (to my mind at least) and its readily available documentation.
+
+Before we look at some simple Ansible examples of hardening an OS with idempotency in mind we should explore how to trigger our Ansible playbooks.
+
+This is a larger area for debate than you might first imagine. Say, for example, you have a nicely segmented server estate with production servers being carefully locked away from development servers, sitting behind a production-grade firewall. Consider the other servers on the estate, belonging to staging (pre-production) or other development environments, intentionally having different access permissions for security reasons.
+
+If you’re going to run a centralized server that has superuser permissions (which are required to make privileged changes to your core system files) then that server will need to have high-level access permissions potentially across your entire server estate. It must therefore be guarded very closely.
+
+You will also want to test your playbooks against development environments (in plural) to test their efficacy, which means you’ll probably need two all-powerful centralised Ansible servers, one for production and one for the multiple development environments.
+
+The actual approach of how to achieve other logistical issues is up for debate and I’ve heard it discussed a few times. Bear in mind that Ansible runs using plain, old SSH keys (a feature that other configuration management tools have started to copy over time) but ideally you want a mechanism for keeping non-privileged keys on your centralised servers so you’re not logging in as the “root” user across the estate every twenty or thirty minutes.
+
+From a network perspective I like the idea of having firewalling in place to enforce one-way traffic only into the environment that you’re affecting. This protects your centralised host so that a compromised server can’t attack that main Ansible host easily and then as a result gain access to precious SSH keys in order to damage the whole estate.
+
+Speaking of which, are servers actually needed for a task like this? What about using AWS Lambda to execute your playbooks? A serverless approach still needs to be secured carefully but unquestionably helps to limit the attack surface and also potentially reduces administrative responsibilities.
+
+I suspect how this all-powerful server is architected and deployed is always going to be contentious, and there will never be a one-size-fits-all approach; instead, a unique, bespoke solution will be required for every server estate.
+
+### How Now, Brown Cow
+
+It’s important to think about how often you run your Ansible and also how to prepare for your first execution of the playbook. Let’s get the frequency of execution out of the way first as it’s the easiest to change in the future.
+
+My preference would be every twenty minutes (three times an hour), or failing that every thirty minutes. If we include enough detail in our configuration, then our playbooks might prevent an attacker gaining a foothold on a system, as the original configuration overwrites any altered config; the shorter the window before the config is rewritten, the better, which is why twenty minutes seems more appropriate to my mind.
+
+Again, this is an aspect you need to have a think about. You might, for example, be dumping small config databases locally onto a filesystem every sixty minutes, and that scheduled job might add an extra little bit of undesirable load to your server, meaning you have to schedule around it.
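+
+On a traditional central Ansible server, the scheduling itself can be as simple as a cron entry (the paths, user name and file names below are illustrative assumptions):
+
+```
+# /etc/cron.d/ansible-hardening - rewrite the hardened config every twenty minutes
+*/20 * * * * ansible ansible-playbook -i /etc/ansible/hosts /opt/playbooks/hardening.yml
+```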
+
+Next time, we’ll take a look at some specific changes that can be made to various systems.
+
+_Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website:[https://www.devsecops.cc][3]_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1
+
+作者:[Chris Binnie][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/chrisbinnie
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/geometric-1732847_1280.jpg?itok=YRux0Tua
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.devsecops.cc/
diff --git a/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md b/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md
new file mode 100644
index 0000000000..9844c3d3bf
--- /dev/null
+++ b/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md
@@ -0,0 +1,129 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Performance-Based Routing (PBR) – The gold rush for SD-WAN)
+[#]: via: (https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Performance-Based Routing (PBR) – The gold rush for SD-WAN
+======
+The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off.
+![Getty Images][1]
+
+BGP (Border Gateway Protocol) is considered the glue of the internet. Looking far ahead, however, there’s a question that still remains unanswered. Will BGP ever have the ability to route on the best path rather than just the shortest path?
+
+There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.
+
+The problem with BGP is that it's not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance.
+
+[The time of 5G is almost here][2]
+
+Also, BGP changes paths only in reaction to changes in the policy or the set of available routes. It traditionally permits the use of only one path to reach a destination. Hence, traditional routing falls short: it always looks for the shortest path, which may not be the best path.
+
+### Blackout and brownouts
+
+As a matter of fact, we live in a world where we have more brownouts than blackouts. However, BGP was originally designed to detect only blackouts, i.e. the events wherein a link fails outright and traffic must be rerouted to another link. In a world where brownouts can last from 10 milliseconds to 10 seconds, you ought to be able to detect the failure in sub-seconds and re-route to a better path.
+
+This triggered my curiosity to dig out some of the real and significant reasons why [SD-WAN][3] was introduced. We all know it saves cost and does many other things, but were the inefficiencies in routing one of the main reasons? I decided to sit down with [Sorell][4] to discuss the need for performance-based routing (PBR).
+
+### SD-WAN is taking off
+
+The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path.
+
+Originally, we didn't have real-time traffic, such as voice and video, which is latency- and jitter-sensitive. Besides, we also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true.
+
+### Introduction of new protocols
+
+To overcome the drawbacks of traditional routing, we have had the onset of new protocols, such as [IPv6 segment routing][5] and named data networking, along with specific SD-WAN vendor mechanisms that improve routing.
+
+For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation which could be a combination of GRE, UDP, Ethernet, MPLS, [VxLAN][6] and IPsec. IPv6 segment routing implements a stack of segments (IPv6 address list) inserted in every packet and the named data networking can be distributed with routing protocols.
+
+Another critical requirement is the hop-by-hop payload encryption. You should be able to encrypt payloads for sessions that do not have transport layer encryption. Re-encrypting data can be expensive; it fragments the packets and further complicates the networks. Therefore, avoiding double encryption is also a must.
+
+The SD-WAN overlays furnish an all-or-nothing approach with [IPsec][7]. IPv6 segment routing requires application-layer security, which is provided by [IPsec][8], while named data networking can offer security natively since it’s object-based.
+
+### The various SD-WAN solutions
+
+The above are some of the new protocols available and some of the technologies that the SD-WAN vendors offer. Different vendors will have different mechanisms to implement PBR, and they give PBR different names, such as “application-aware routing.”
+
+SD-WAN vendors are using many factors to influence the routing decision. They are not just making routing decisions on the number of hops or links the way traditional routing does by default. They monitor how the link is performing and do not just evaluate if the link is up or down.
+
+They are using a variety of mechanisms to perform PBR. For example, some are adding timestamps to every packet, whereas others are adding sequence numbers to the packets over and above what you would get in a transmission control protocol (TCP) sequence number.
+
+Another option is the use of the domain name system (DNS) and [transport layer security][9] (TLS) certificates to automatically identify the application and then based on the identity of the application; they have default classes for it. However, others use timestamps by adding a proprietary label. This is the same as adding a sequence number to the packets, but the sequence number is at Layer 3 instead of Layer 4.
+
+I can tie all my applications and sequence numbers and then use the network time protocol (NTP) to identify latency, jitter and dropped packets. Running NTP on both ends enables the identification of end-to-end vs hop-by-hop performance.
+
+Some vendors use an internet control message protocol (ICMP) or bidirectional forwarding detection (BFD). Hence, instead of adding a label to every packet which can introduce overhead, they are doing a sampling for every quarter or half a second.
+
+Realistically, it is yet to be determined which technology is the best to use, but what is consistent is that these mechanisms are examining elements such as the latency, dropped packets and jitter on the links. Essentially, different vendors are using different technologies to choose the best path, but the end result is still the same.
+
+With these approaches, one can, for example, identify a WebEx session and since a WebEx session has voice and video, can create that session as a high-priority session. All packets associated with the WebEx sessions get placed in a high-value queue.
+
+The rules are set to say, “I want my WebEx session to go over the multiprotocol label switching (MPLS) link instead of a slow LTE link.” Hence, if your MPLS link faces latency or jitter problems, it automatically reroutes the flow to a better alternate path.
+
+### Problems with TCP
+
+One critical problem that surfaces today due to the transmission control protocol (TCP) and adaptive codecs is called waves. Let’s say you have 30 file transfers across a link; to carry out the file transfers, the TCP window size will grow to a point where the link gets maxed out. The router will start to drop packets, followed by a reduced TCP window size. As a result, the bandwidth shrinks; then, while packets are not being dropped, the window size increases again until it hits the threshold and, eventually, the packets start getting dropped again.
+
+This can be a continuous process, happening again and again. With all these waves obstructing efficiency, we need products like wide area network (WAN) optimization to manage multiple TCP flows. Why? Because TCP is only aware of the single flow that it controls; it is not aware of the other flows moving across the path. Primarily, the TCP window size is only aware of one single file transfer.
+
+### Problems with adaptive codecs
+
+An adaptive codec will use upward of 6 megabytes of video if the link is clean, but as soon as it starts to drop packets, the adaptive codec will send more packets for forward error correction. Therefore, it makes the problem even worse before it backs off to change the frame rate and resolution.
+
+An adaptive codec is the opposite of a fixed codec, which will always send out a fixed packet size. Adaptive codecs are the standard used in WebRTC and can vary the jitter buffer size and the frequency of packets based on the network conditions.
+
+Adaptive codecs work better over Internet connections that have higher loss and jitter rates than, for example, more stable links such as MPLS. This is the reason why real-time voice and video do not use TCP: if the packet gets dropped, there is no point in sending a new packet. Logically, having the additional headers of TCP does not buy you anything.
+
+QUIC, on the other hand, can take a single flow and run it across multiple network-flows. This helps the video applications in rebuffering and improves throughput. In addition, it helps in boosting the response for bandwidth-intensive applications.
+
+### The introduction of new technologies
+
+With the introduction of [edge computing][10], augmented reality (AR), virtual reality (VR), real-time driving applications, [IoT sensors][11] on critical systems and other hypersensitive latency applications, PBR becomes a necessity.
+
+With AR you want the computing to be accomplished within 5 to 10 milliseconds of the endpoint. In the world of brownouts and path congestion, you need to pick a better path much more quickly. Also, service providers (SPs) are rolling out 5G networks and announcing the use of different routing protocols that are being used as PBR. So the future looks bright for PBR.
+
+As voice and video, edge computing, and virtual reality gain more presence in the market, PBR will become more popular. Even Facebook and Google are putting PBR inside their internal networks. Over time it will have a role in all networks, particularly at the Internet exchange points, both private and public.
+
+### Internet exchange points
+
+Back in the early ’90s, there were only 4 Internet exchange points in the US and 9 across the world overall. Now we have more than 3,000, where different providers come together and exchange Internet traffic.
+
+When BGP was first rolled out in the mid-’90s, the Internet exchange points were located far apart, so the concept of the shortest path held truer than it does today, when the Internet is highly distributed.
+
+The Internet architecture will change as different service providers move to software-defined networking and update the routing protocols they use. For the foreseeable future, however, the core Internet exchanges will still use BGP.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][12]**
+
+Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/smart-city_iot_digital-transformation_networking_wireless_city-scape_skyline-100777499-large.jpg
+[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[3]: https://network-insight.net/2017/08/sd-wan-networks-scalpel/
+[4]: https://techvisionresearch.com/
+[5]: https://network-insight.net/2015/07/segment-routing-introduction/
+[6]: https://youtu.be/5XtkCSfRy3c
+[7]: https://network-insight.net/2015/01/design-guide-ipsec-fault-tolerance/
+[8]: https://network-insight.net/2015/01/ipsec-virtual-private-network-vpn-overview/
+[9]: https://network-insight.net/2015/10/back-to-basics-ssl-security/
+[10]: https://youtu.be/5mbPiKd_TFc
+[11]: https://network-insight.net/2016/11/internet-of-things-iot-networking/
+[12]: /contributor-network/signup.html
+[13]: https://www.facebook.com/NetworkWorld/
+[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190409 5 Linux rookie mistakes.md b/sources/tech/20190409 5 Linux rookie mistakes.md
new file mode 100644
index 0000000000..ae7a0a2969
--- /dev/null
+++ b/sources/tech/20190409 5 Linux rookie mistakes.md
@@ -0,0 +1,54 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 Linux rookie mistakes)
+[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
+[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
+
+5 Linux rookie mistakes
+======
+Linux enthusiasts share some of the biggest mistakes they made.
+![magnifying glass on computer screen, finding a bug in the code][1]
+
+It's smart to learn new skills throughout your life—it keeps your mind nimble and makes you more competitive in the job market. But some skills are harder to learn than others, especially those where small rookie mistakes can cost you a lot of time and trouble when you're trying to fix them.
+
+Take learning [Linux][2], for example. If you're used to working in a Windows or MacOS graphical interface, moving to Linux, with its unfamiliar commands typed into a terminal, can have a big learning curve. But the rewards are worth it, as the millions and millions of people who have gone before you have proven.
+
+That said, the journey won't be without pitfalls. We asked some Linux enthusiasts to think back to when they first started using Linux and tell us about the biggest mistakes they made.
+
+"Don't go into [any sort of command line interface (CLI) work] with an expectation that commands work in rational or consistent ways, as that is likely to lead to frustration. This is not due to poor design choices—though it can feel like it when you're banging your head against the proverbial desk—but instead reflects the fact that these systems have evolved and been added onto through generations of software and OS evolution. Go with the flow, write down or memorize the commands you need, and (try not to) get frustrated when [things aren't what you'd expect][3]." _—[Gina Likins][4]_
+
+"As easy as it might be to just copy and paste commands to make the thing go, read the command first and at least have a general understanding of the actions that are about to be performed. Especially if there is a pipe command. Double especially if there is more than one. There are a lot of destructive commands that look innocuous until you realize what they can do (e.g., **rm** , **dd** ), and you don't want to accidentally destroy things. (Ask me how I know.)" _—[Katie McLaughlin][5]_
+
+"Early on in my Linux journey, I wasn't as aware of the importance of knowing where you are in the filesystem. I was deleting some file in what I thought was my home directory, and I entered **sudo rm -rf *** and deleted all of the boot files on my system. Now, I frequently use **pwd** to ensure that I am where I think I am before issuing such commands. Fortunately for me, I was able to boot my wounded laptop with a USB drive and recover my files." _—[Don Watkins][6]_
+
+"Do not reset permissions on the entire file system to [777][7] because you think 'permissions are hard to understand' and you want an application to have access to something." _—[Matthew Helmke][8]_
+
+"I was removing a package from my system, and I did not check what other packages it was dependent upon. I just let it remove whatever it wanted and ended up causing some of my important programs to crash and become unavailable." _—[Kedar Vijay Kulkarni][9]_
+
+What mistakes have you made while learning to use Linux? Share them in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/linux-rookie-mistakes
+
+作者:[Jen Wike Huger (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
+[2]: https://opensource.com/resources/linux
+[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/
+[4]: https://opensource.com/users/lintqueen
+[5]: https://opensource.com/users/glasnt
+[6]: https://opensource.com/users/don-watkins
+[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
+[8]: https://twitter.com/matthewhelmke
+[9]: https://opensource.com/users/kkulkarn
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
new file mode 100644
index 0000000000..15378c29b8
--- /dev/null
+++ b/sources/tech/20190409 5 open source mobile apps.md
@@ -0,0 +1,131 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 open source mobile apps)
+[#]: via: (https://opensource.com/article/19/4/mobile-apps)
+[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
+
+5 open source mobile apps
+======
+You can count on these apps to meet your needs for productivity,
+communication, and entertainment.
+![][1]
+
+Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid.
+
+Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go.
+
+### MPDroid
+
+_An Android controller for the Music Player Daemon (MPD)_
+
+![MPDroid][2]
+
+MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work…
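+
+Because MPD is just a network service, any client that speaks its protocol can drive it. For instance, from a shell you could use the `mpc` command-line client (a quick sketch; the hostname is an assumption):
+
+```
+$ mpc --host mpd-server.local play      # start playback on the MPD server
+$ mpc --host mpd-server.local playlist  # list what is currently queued
+```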
+
+MPDroid is available on [Google Play][4] and [F-Droid][5].
+
+### RadioDroid
+
+_An Android internet radio tuner that I use standalone and with Chromecast_
+
+_![RadioDroid][6]_
+
+RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under.
+
+RadioDroid is available on [Google Play][8] and [F-Droid][9].
+
+### Signal
+
+_A secure messaging client for Android, iOS, and desktop_
+
+_![Signal][10]_
+
+If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like?
+
+Signal is available for [Android][12], [iOS][13], and [desktop][14].
+
+### ConnectBot
+
+_Android SSH client_
+
+_![ConnectBot][15]_
+
+Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone.
+
+ConnectBot is available on [Google Play][17].
+
+### Termux
+
+_Android terminal emulator with many familiar utilities_
+
+_![Termux][18]_
+
+Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution. If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot.
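+
+That round trip looks roughly like this (the package names are real Termux packages, but the host and file names are assumptions):
+
+```
+$ pkg install openssh gawk                  # inside Termux
+$ scp user@myserver:~/logs/data.csv .       # pull the file over SSH
+$ awk -F, '{sum += $2} END {print sum}' data.csv
+$ scp data.csv user@myserver:~/processed/   # push the result back
+```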
+
+Termux is available on [Google Play][20] and [F-Droid][21].
+
+* * *
+
+What are your favorite open source mobile apps for work or fun? Please share them in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/mobile-apps
+
+作者:[Chris Hermansen (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
+[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid)
+[3]: https://opensource.com/article/17/4/fun-new-gadget
+[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US
+[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/
+[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid)
+[7]: https://www.internet-radio.com/
+[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
+[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
+[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal)
+[11]: https://opensource.com/article/19/3/open-messenger-client
+[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms
+[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8
+[14]: https://signal.org/download/
+[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot)
+[16]: https://connectbot.org/
+[17]: https://play.google.com/store/apps/details?id=org.connectbot
+[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux)
+[19]: https://termux.com/
+[20]: https://play.google.com/store/apps/details?id=com.termux
+[21]: https://f-droid.org/packages/com.termux/
diff --git a/sources/tech/20190409 AI Ops- Let the data talk.md b/sources/tech/20190409 AI Ops- Let the data talk.md
new file mode 100644
index 0000000000..2b3d57ef17
--- /dev/null
+++ b/sources/tech/20190409 AI Ops- Let the data talk.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (AI Ops: Let the data talk)
+[#]: via: (https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all)
+[#]: author: (Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena )
+
+AI Ops: Let the data talk
+======
+The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet’s Marie Fiala details the conversation.
+![metamorworks][1]
+
+![Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][2]
+
+Do we need perfect data? Or is ‘good enough’ data good enough? Certainly, there is a need to find a pragmatic approach or else one could get stalled in analysis-paralysis. Is closed-loop automation the end goal? Or is human-guided open loop automation desired? If the quality of data defines the quality of the process, then for closed-loop automation of critical business processes, one needs near-perfect data. Is that achievable?
+
+These issues were discussed and debated at the recent FutureNet conference in London, where the show focused on solving network operators’ toughest challenges. Industry presenters and panelists stayed true to the themes of AI and automation, all touting the necessity of these interlinked software technologies, yet there were varied opinions on approaches. Network and service providers such as BT, Colt, Deutsche Telekom, KPN, Orange, Telecom Italia, Telefonica, Telenor, Telia, Telus, Turk Telkom, and Vodafone weighed in on the discussion.
+
+**Catalysts for AI-powered analytics**
+
+On one point, most service providers were in agreement: there is a need to identify a specific business use case with measurable ROI, as an initial validation point when introducing AI-powered analytics into operations.
+
+Host operator, Vodafone, positioned 5G as the catalyst. With the advent of 5G technology supporting 100x connections, 10Gbps super-bandwidth, and ultra-low <10ms latency, the volume, velocity and variety of data is exploding. It’s a virtuous cycle – 5G technologies generate a plethora of data, and conversely, a 5G network requires data-driven automation to function accurately and optimally (how else can virtualized network functions be managed in real-time?).
+
+![5G as catalyst for digitalisation][3]
+
+Another operator stated that the ‘AI gateway for telecom’ is the customer experience domain, citing how agents can use analytics to better serve the customer base. For another operator, capacity planning is the killer use case: first leverage AI to understand what’s going on in your network, then use predictive AI for planning so that you can make smarter investment decisions. Another point of view was that service assurance is the area where the most benefits from AI will be realized. There was even mention of ‘AI as a business’ by enabling the creation of new services, such as home assistants. At the broadest level, it was noted that AI allows network operators to remain relevant in the eyes of customers.
+
+**The human side of AI and automation**
+
+When it comes to implementation, the significant human impact of AI and automation was not overlooked. Across the board, service providers acknowledged that a new skillset is needed in network operations centers. Network engineers have to upskill to become data scientists and DevOps developers in order to best leverage the new AI-driven software tools.
+
+Furthermore, it is a challenge to recruit specialist AI experts, especially since web-scale providers are also vying for the same talent. On the flip side of the dire need for new skills, there is also a shortage of qualified experts in legacy technologies. Operators need automated, zero-touch management before the workforce retires!
+
+![FutureNet panelists discuss how automated AI can be leveraged as a competitive differentiator][4]
+
+**The ROI of AI**
+
+In many cases, the approach to AI has been a technology-driven ‘Field of Dreams’: build it and they will come. A strategic decision was made to hire experts, build data lakes, and collect data, and only then was a business case with positive returns discovered. In other cases, the business use case came first. But no matter the approach, the ROI was significant.
+
+These positive results are spurring determination for continued research to uncover ever more areas where AI can deliver tangible benefits. This is however no easy task – one operator highlighted that data collection takes 80% of the effort, with the remaining 20% spent on development of algorithms. For AI to really proliferate throughout all aspects of operations, that trend needs to be reversed. It needs to be relatively easy and quick to collect massive amounts of heterogeneous data, aggregate it, and correlate it. This would allow investment to be overwhelmingly applied to the development of predictive and prescriptive analytics tailored to specific use cases, and to enacting intelligent closed-loop automation. Only then will data be able to truly talk – and tell us what we haven’t even thought of yet.
+
+[Discover Intelligent Automation at Blue Planet][5]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all
+
+作者:[Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-957627892-100793278-large.jpg
+[2]: https://images.idgesg.net/images/article/2019/04/marla-100793273-small.jpg
+[3]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-1-100793275-large.jpg
+[4]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-2-100793276-large.jpg
+[5]: https://www.blueplanet.com/resources/Intelligent-Automation-Driving-Digital-Automation-for-Service-Providers.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVision&utm_medium=newsletter
diff --git a/sources/tech/20190409 Anbox - Easy Way To Run Android Apps On Linux.md b/sources/tech/20190409 Anbox - Easy Way To Run Android Apps On Linux.md
new file mode 100644
index 0000000000..c7b0ba82c8
--- /dev/null
+++ b/sources/tech/20190409 Anbox - Easy Way To Run Android Apps On Linux.md
@@ -0,0 +1,182 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Anbox – Easy Way To Run Android Apps On Linux)
+[#]: via: (https://www.2daygeek.com/anbox-best-android-emulator-for-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Anbox – Easy Way To Run Android Apps On Linux
+======
+
+Android emulators allow us to run our favorite Android apps or games directly on a Linux system.
+
+There are many Android emulators available for Linux, and we have covered a few of them in the past.
+
+You can review those by navigating to the following URLs.
+
+ * [How To Install Official Android Emulator (SDK) On Linux][1]
+ * [How To Install GenyMotion (Android Emulator) On Linux][2]
+
+
+
+Today we are going to discuss the Anbox Android emulator.
+
+### What Is Anbox?
+
+Anbox stands for Android in a box. Anbox is a container-based approach to boot a full Android system on a regular GNU/Linux system.
+
+It’s a new and modern emulator compared to the others.
+
+Since Anbox places the core Android OS into a container using Linux namespaces, there is no slowness while accessing the installed applications. In other words, Anbox lets you run Android on your Linux system without the overhead of virtualization.
+
+There is no direct access to any hardware from the Android container; all hardware access goes through the Anbox daemon on the host.
+
+Each application opens in a separate window, just like other native system applications, and shows up in the launcher.
+
+### How To Install Anbox In Linux?
+
+The Anbox application is available as a snap package, so make sure you have enabled snap support on your system.
+
+The Anbox package was recently added to the Ubuntu (Cosmic) and Debian (Buster) repositories. If you are running one of these versions, you can easily install it with the official distribution package manager. Otherwise, go with the snap package installation.
+
+Make sure the necessary kernel modules are installed on your system for Anbox to work. Ubuntu-based users can install them from the following PPA.
+
+```
+$ sudo add-apt-repository ppa:morphis/anbox-support
+$ sudo apt update
+$ sudo apt install linux-headers-generic anbox-modules-dkms
+```
+
+After you install the `anbox-modules-dkms` package, you have to manually load the kernel modules, or a system reboot is required.
+
+```
+$ sudo modprobe ashmem_linux
+$ sudo modprobe binder_linux
+```
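+
+You can quickly verify that both modules loaded before going further:
+
+```
+$ lsmod | grep -e ashmem_linux -e binder_linux
+# both module names should appear in the output
+```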
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install anbox.
+
+```
+$ sudo apt install anbox
+```
+
+For Arch Linux based systems, we always get packages from the AUR repository, so use any of the **[AUR helpers][5]** to install it. I prefer the **[Yay utility][6]**.
+
+```
+$ yay -S anbox-git
+```
+
+If not, you can **[install and configure snaps in Linux][7]** by navigating to the following article. You can skip this step if snap support is already set up on your system.
+
+```
+$ sudo snap install --devmode --beta anbox
+```
+
+### Prerequisites For Anbox
+
+By default, Anbox doesn’t ship with the Google Play Store.
+
+Hence, we need to manually download each application (APK) and install it using Android Debug Bridge (ADB).
+
+The ADB tool is readily available in most distributions’ repositories, so we can easily install it.
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ADB.
+
+```
+$ sudo apt install android-tools-adb
+```
+
+For **`Fedora`** system, use **[DNF Command][8]** to install ADB.
+
+```
+$ sudo dnf install android-tools
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install ADB.
+
+```
+$ sudo pacman -S android-tools
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][10]** to install ADB.
+
+```
+$ sudo zypper install android-tools
+```
+
+### Where To Download The Android Apps?
+
+Since you can’t use the Play Store, you have to download APK packages from trusted sites like [APKMirror][11] and then install them manually.
+
+### How To Launch Anbox?
+
+Anbox can be launched from the Dash. This is how the default Anbox looks.
+![][13]
+
+### How To Push The Apps Into Anbox?
+
+As I mentioned previously, we need to install apps manually. For testing purposes, we are going to install the `YouTube` and `Firefox` apps.
+
+First, you need to start the ADB server. To do so, run the following command.
+
+```
+$ adb devices
+```
+
+We have already downloaded the `YouTube` and `Firefox` APKs, which we will now install.
+
+**Common Syntax:**
+
+```
+$ adb install Name-Of-Your-Application.apk
+```
+
+Installing YouTube and Firefox app.
+
+```
+$ adb install 'com.google.android.youtube_14.13.54-1413542800_minAPI19(x86_64)(nodpi)_apkmirror.com.apk'
+Success
+
+$ adb install 'org.mozilla.focus_9.0-330191219_minAPI21(x86)(nodpi)_apkmirror.com.apk'
+Success
+```
+
+I have installed `YouTube` and `Firefox` in my Anbox. See the screenshot below.
+![][14]
+
+As mentioned at the beginning of the article, Anbox opens each app in its own window. Here, I’m opening Firefox and accessing the **[2daygeek.com][15]** website.
+![][16]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/anbox-best-android-emulator-for-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/install-configure-sdk-android-emulator-on-linux/
+[2]: https://www.2daygeek.com/install-genymotion-android-emulator-on-ubuntu-debian-fedora-arch-linux/
+[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/category/aur-helper/
+[6]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/
+[7]: https://www.2daygeek.com/linux-snap-package-manager-ubuntu/
+[8]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[11]: https://www.apkmirror.com/
+[12]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[13]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-1.jpg
+[14]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-2.jpg
+[15]: https://www.2daygeek.com/
+[16]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-3.jpg
diff --git a/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md b/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
new file mode 100644
index 0000000000..9f7eb5f66e
--- /dev/null
+++ b/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
@@ -0,0 +1,225 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Install And Configure Chrony As NTP Client?)
+[#]: via: (https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Install And Configure Chrony As NTP Client?
+======
+
+The NTP server and NTP client allow us to sync the clock across the network.
+
+We had written an article about **[NTP server and NTP client installation and configuration][1]** in the past.
+
+If you would like to check these, navigate to the above URL.
+
+### What Is Chrony Client?
+
+Chrony is a replacement for the classic NTP client.
+
+It can synchronize the system clock faster and with better time accuracy, and it is particularly useful for systems that are not online all the time.
+
+chronyd is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving.
+
+It can perform well even when the network is congested for longer periods of time.
+
+It supports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks.
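+
+Hardware timestamping, for example, is enabled with a single directive in the chrony configuration (a minimal sketch; it assumes a reasonably recent chrony and a NIC that actually supports hardware timestamps):
+
+```
+# /etc/chrony.conf (or /etc/chrony/chrony.conf on Debian-based systems)
+hwtimestamp *    # request hardware timestamping on all capable interfaces
+```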
+
+It provides the following two components.
+
+ * **`chronyc:`** Command line interface for chrony.
+ * **`chronyd:`** Chrony daemon service.
+
+
+
+### How To Install And Configure Chrony In Linux?
+
+The package is available in most distributions’ official repositories, so use the package manager to install it.
+
+For **`Fedora`** system, use **[DNF Command][2]** to install chrony.
+
+```
+$ sudo dnf install chrony
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install chrony.
+
+```
+$ sudo apt install chrony
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install chrony.
+
+```
+$ sudo pacman -S chrony
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install chrony.
+
+```
+$ sudo yum install chrony
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install chrony.
+
+```
+$ sudo zypper install chrony
+```
+
+In this article, we are going to use the following setup to test this.
+
+ * **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
+ * **`Chrony Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
+
+
+
+Navigate to the following URL for **[NTP server installation and configuration in Linux][1]**.
+
+I have installed and configured the NTP server on `CentOS7.2daygeek.com`, so point all the client machines at it, along with any other settings you require.
+
+The `chrony.conf` file is placed in different locations depending on your distribution.
+
+For RHEL based systems, it’s located at `/etc/chrony.conf`.
+
+For Debian based systems, it’s located at `/etc/chrony/chrony.conf`.
+
+```
+# vi /etc/chrony/chrony.conf
+
+server CentOS7.2daygeek.com prefer iburst
+keyfile /etc/chrony/chrony.keys
+driftfile /var/lib/chrony/chrony.drift
+logdir /var/log/chrony
+maxupdateskew 100.0
+makestep 1 3
+cmdallow 192.168.1.0/24
+```
+
+Bounce the Chrony service once you update the configuration.
+
+For sysvinit systems (on RHEL-based systems, the service is named `chronyd` instead of `chrony`):
+
+```
+# service chrony restart
+
+# chkconfig chrony on
+```
+
+For systemd systems (on RHEL-based systems, the service is named `chronyd` instead of `chrony`):
+
+```
+# systemctl restart chrony
+
+# systemctl enable chrony
+```
+
+Use the following commands, such as tracking, sources and sourcestats, to check the chrony synchronization details.
+
+To check chrony tracking status.
+
+```
+# chronyc tracking
+Reference ID : C0A80105 (CentOS7.2daygeek.com)
+Stratum : 3
+Ref time (UTC) : Thu Mar 28 05:57:27 2019
+System time : 0.000002545 seconds slow of NTP time
+Last offset : +0.001194361 seconds
+RMS offset : 0.001194361 seconds
+Frequency : 1.650 ppm fast
+Residual freq : +184.101 ppm
+Skew : 2.962 ppm
+Root delay : 0.107966967 seconds
+Root dispersion : 1.060455322 seconds
+Update interval : 2.0 seconds
+Leap status : Normal
+```
+
+Run the sources command to display information about the current time sources.
+
+```
+# chronyc sources
+210 Number of sources = 1
+MS Name/IP address Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
+```
+
+The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd.
+
+```
+# chronyc sourcestats
+210 Number of sources = 1
+Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
+==============================================================================
+CentOS7.2daygeek.com 5 3 71 -97.314 78.754 -469us 441us
+```
+
+When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command.
+
+```
+# chronyc ntpdata
+
+Remote address : 192.168.1.5 (C0A80105)
+Remote port : 123
+Local address : 192.168.1.3 (C0A80103)
+Leap status : Normal
+Version : 4
+Mode : Server
+Stratum : 2
+Poll interval : 6 (64 seconds)
+Precision : -23 (0.000000119 seconds)
+Root delay : 0.108994 seconds
+Root dispersion : 0.076523 seconds
+Reference ID : 85F3EEF4 ()
+Reference time : Thu Mar 28 06:43:35 2019
+Offset : +0.000160221 seconds
+Peer delay : 0.000664478 seconds
+Peer dispersion : 0.000000178 seconds
+Response time : 0.000243252 seconds
+Jitter asymmetry: +0.00
+NTP tests : 111 111 1111
+Interleaved : No
+Authenticated : No
+TX timestamping : Kernel
+RX timestamping : Kernel
+Total TX : 46
+Total RX : 46
+Total valid RX : 46
+```
+
+Finally run the `date` command.
+
+```
+# date
+Thu Mar 28 03:08:11 CDT 2019
+```
+
+To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root (this adjusts the system clock manually).
+
+```
+# chronyc makestep
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
+[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
diff --git a/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md b/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
new file mode 100644
index 0000000000..f243fad898
--- /dev/null
+++ b/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
@@ -0,0 +1,262 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Install And Configure NTP Server And NTP Client In Linux?)
+[#]: via: (https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Install And Configure NTP Server And NTP Client In Linux?
+======
+
+You might have heard this term many times, and you might even have worked with it already.
+
+However, in this article I will clearly explain the NTP server setup and the NTP client setup.
+
+We will see about **[Chrony NTP Client setup][1]** later.
+
+### What Is NTP Server?
+
+NTP stands for Network Time Protocol.
+
+It is a networking protocol that synchronizes the clocks of computer systems over the network.
+
+In other words, it keeps the same (accurate) time on all the systems that are connected to the NTP server through an NTP or Chrony client.
+
+NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions.
+
+It sends and receives timestamps using the User Datagram Protocol (UDP) on port number 123, and it is a client/server application.
+
+### What Is NTP Client?
+
+An NTP client synchronizes its clock with the network time server.
+
+### What Is Chrony Client?
+
+Chrony is a replacement for the NTP client. It can synchronize the system clock faster and with better time accuracy, and it is particularly useful for systems that are not online all the time.
+
+### Why We Need NTP Server?
+
+To keep all the servers in your organization in sync with accurate time, so that time-based jobs run reliably.
+
+To clarify this, consider a scenario. Say we have two servers (server1 and server2). Server1 usually completes its batch jobs at 10:55, and server2 then needs to run another job at 11:00 based on server1’s job completion report.
+
+If the two systems keep different time (if one system is ahead of the other), we can’t do this reliably. To achieve it, we should set up NTP. Hopefully this clears up your doubts about NTP.
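+
+A minimal sketch of that scenario as crontab entries (the job paths are hypothetical):
+
+```
+# server1: batch job that starts at 10:30 and normally finishes around 10:55
+30 10 * * * /opt/jobs/run-batch.sh
+
+# server2: dependent job that must start at 11:00 sharp
+0 11 * * * /opt/jobs/build-report.sh
+```
+
+If server2’s clock runs even a few minutes fast, the 11:00 job may fire before server1’s completion report exists, which is exactly what NTP prevents.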
+
+In this article, we are going to use the following setup to test this.
+
+ * **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.8, OS:CentOS 7
+ * **`NTP Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.5, OS:Ubuntu 18.04
+
+
+
+### NTP SERVER SIDE: How To Install NTP Server In Linux?
+
+There are no separate packages for the NTP server and the NTP client, since it’s a client/server model. The NTP package is available in the distributions’ official repositories, so use the distribution package manager to install it.
+
+For **`Fedora`** system, use **[DNF Command][2]** to install ntp.
+
+```
+$ sudo dnf install ntp
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ntp.
+
+```
+$ sudo apt install ntp
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install ntp.
+
+```
+$ sudo pacman -S ntp
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install ntp.
+
+```
+$ sudo yum install ntp
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install ntp.
+
+```
+$ sudo zypper install ntp
+```
+
+### How To Configure The NTP Server In Linux?
+
+Once you have installed the NTP package, make sure you uncomment the following configuration in the `/etc/ntp.conf` file on the server side.
+
+By default, the NTP server configuration relies on `X.distribution_name.pool.ntp.org`. You can use the default configuration, or you can change it to match your location (country-specific servers) by visiting the NTP pool project site.
+
+For example, if you are in India, your NTP server will be `0.in.pool.ntp.org`, and this naming pattern works for most countries.
+
+```
+# vi /etc/ntp.conf
+
+restrict default kod nomodify notrap nopeer noquery
+restrict -6 default kod nomodify notrap nopeer noquery
+restrict 127.0.0.1
+restrict -6 ::1
+server 0.asia.pool.ntp.org
+server 1.asia.pool.ntp.org
+server 2.asia.pool.ntp.org
+server 3.asia.pool.ntp.org
+restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
+driftfile /var/lib/ntp/drift
+keys /etc/ntp/keys
+```
+
+We have allowed only `192.168.1.0/24` subnet clients to access the NTP server.
+
+Since the firewall is enabled by default on RHEL 7 based distributions, allow the NTP service through it.
+
+```
+# firewall-cmd --add-service=ntp --permanent
+# firewall-cmd --reload
+```
+
+Bounce the service once you update the configuration.
+
+For sysvinit systems (on Debian-based systems, the service is named `ntp` instead of `ntpd`):
+
+```
+# service ntpd restart
+
+# chkconfig ntpd on
+```
+
+For systemd systems (on Debian-based systems, the service is named `ntp` instead of `ntpd`):
+
+```
+# systemctl restart ntpd
+
+# systemctl enable ntpd
+```
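+
+Before moving on to the clients, you can confirm that the server answers queries from a machine in the allowed subnet without touching that machine’s clock (this assumes the ntp package, which provides ntpdate, is installed there):
+
+```
+$ ntpdate -q 192.168.1.8    # query only; does not set the clock
+```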
+
+### NTP CLIENT SIDE: How To Install NTP Client On Linux?
+
+As I mentioned earlier in this article, there is no separate package for the NTP server and client, so install the same package on the client as well.
+
+For **`Fedora`** system, use **[DNF Command][2]** to install ntp.
+
+```
+$ sudo dnf install ntp
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ntp.
+
+```
+$ sudo apt install ntp
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install ntp.
+
+```
+$ sudo pacman -S ntp
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install ntp.
+
+```
+$ sudo yum install ntp
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install ntp.
+
+```
+$ sudo zypper install ntp
+```
+
+I have installed and configured the NTP server on `CentOS7.2daygeek.com`, so reference it in the configuration of all the client machines.
+
+```
+# vi /etc/ntp.conf
+
+restrict default kod nomodify notrap nopeer noquery
+restrict -6 default kod nomodify notrap nopeer noquery
+restrict 127.0.0.1
+restrict -6 ::1
+server CentOS7.2daygeek.com prefer iburst
+driftfile /var/lib/ntp/drift
+keys /etc/ntp/keys
+```
+
+Bounce the service once you update the configuration.
+
+For sysvinit systems (on Debian-based systems, the service is named `ntp` instead of `ntpd`):
+
+```
+# service ntpd restart
+
+# chkconfig ntpd on
+```
+
+For systemd systems (on Debian-based systems, the service is named `ntp` instead of `ntpd`):
+
+```
+# systemctl restart ntpd
+
+# systemctl enable ntpd
+```
+
+Wait a few minutes after restarting the NTP service for the time to synchronize from the NTP server.
+
+Run the following commands to verify the NTP server synchronization status on Linux.
+
+```
+# ntpq -p
+Or
+# ntpq -pn
+
+ remote refid st t when poll reach delay offset jitter
+==============================================================================
+*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
+```
+
+Run the following command to get the current status of ntpd.
+
+```
+# ntpstat
+synchronised to NTP server (192.168.1.8) at stratum 3
+ time correct to within 508 ms
+ polling server every 64 s
+```
+
+Finally run the `date` command.
+
+```
+# date
+Tue Mar 26 23:17:05 CDT 2019
+```
+
+If you observe a significant offset in the NTP output, run the following command to sync the clock manually from the NTP server. Make sure your NTP client service is stopped when you run it.
+
+```
+# ntpdate -uv CentOS7.2daygeek.com
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
+[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
diff --git a/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md b/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md
new file mode 100644
index 0000000000..7ed701ec14
--- /dev/null
+++ b/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md
@@ -0,0 +1,80 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Juniper opens SD-WAN service for the cloud)
+[#]: via: (https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Juniper opens SD-WAN service for the cloud
+======
+Juniper rolls out its Contrail SD-WAN cloud offering.
+![Thinkstock][1]
+
+Juniper has taken the wraps off a cloud-based SD-WAN service it says will ease the management and bolster the security of wired and wireless-connected branch office networks.
+
+The Contrail SD-WAN cloud offering expands on the company’s existing on-premise ([SRX][2]-based) and virtual ([NFX][3]-based) SD-WAN offerings to include greater expansion possibilities – up to 10,000 spoke-attached sites and support for more variants of passive redundant hybrid WAN links – and topologies such as hub and spoke, partial, and dynamic full mesh, Juniper stated.
+
+**More about SD-WAN**
+
+ * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4]
+ * [How to pick an off-site data-backup method][5]
+ * [SD-Branch: What it is and why you’ll need it][6]
+ * [What are the options for security SD-WAN?][7]
+
+
+
+The service brings with it Juniper’s Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across [NFX Series][3] Network Services Platforms, [EX Series][8] Ethernet Switches, [SRX Series][2] next-generation firewalls, and [MX Series][9] 5G Universal Routing Platforms. Ultimately it lets customers manage and set up SD-WANs all from a single portal.
+
+The package is also a service orchestrator for the [vSRX][10] Virtual Firewall and [vMX][11] Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider ZScaler.
+
+Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery.
+
+The new service also includes support for Juniper’s [recently acquired][12] Mist Systems wireless technology, which lets the service access and manage Mist’s wireless access points, allowing customers to meld wireless and wired networks.
+
+Juniper recently closed the agreement to buy innovative wireless-gear-maker Mist for $405 million. Mist touts itself as having developed an artificial-intelligence-based wireless platform that makes Wi-Fi more predictable, reliable, and measurable.
+
+With Contrail, administrators can control a growing mix of legacy and modern scale-out architectures while automating their operational workflows using software that provides smarter, easier-to-use automation, orchestration and infrastructure visibility, wrote Juniper CTO [Bikash Koley][13] in a [blog about the SD-WAN announcement][14].
+
+“Management complexity and policy enforcement are traditional network administrator fears, while both data and network security are growing in importance for organizations of all sizes,” Koley stated. “Cloud-delivered SD-WAN removes the complexity of software operations, arguably the most difficult part of Software Defined Networking.”
+
+Analysts said the Juniper announcement could help the company compete in a super-competitive, rapidly evolving SD-WAN world.
+
+“The announcement is more a ‘me too’ than a particular technological breakthrough,” said Lee Doyle, principal analyst with Doyle Research. “The Mist integration is what’s interesting here, and that could help them, but there are 15 to 20 other vendors that have the same technology, bigger partners, and bigger sales channels than Juniper does.”
+
+Indeed the SD-WAN arena is a crowded one with Cisco, VMware, Silver Peak, Riverbed, Aryaka, Nokia, and Versa among the players.
+
+The cloud-based Contrail SD-WAN offering is available as an annual or multi-year subscription.
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/01/cloud_network_blockchain_bitcoin_storage-100745950-large.jpg
+[2]: https://www.juniper.net/us/en/products-services/security/srx-series/
+[3]: https://www.juniper.net/us/en/products-services/sdn/nfx-series/
+[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
+[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
+[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
+[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
+[8]: https://www.juniper.net/us/en/products-services/switching/ex-series/
+[9]: https://www.juniper.net/us/en/products-services/routing/mx-series/
+[10]: https://www.juniper.net/us/en/products-services/security/srx-series/vsrx/
+[11]: https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/
+[12]: https://www.networkworld.com/article/3353042/juniper-grabs-mist-for-wireless-ai-cloud-service-delivery-technology.html
+[13]: https://www.networkworld.com/article/3324374/juniper-cto-talks-cloud-intent-computing-revolution-high-speed-networking-and-open-source-growth.html?nsdr=true
+[14]: https://forums.juniper.net/t5/Engineering-Simplicity/Cloud-Delivered-Branch-Simplicity-Now-Surpasses-SD-WAN/ba-p/461188
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md b/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md
new file mode 100644
index 0000000000..c74f61efe4
--- /dev/null
+++ b/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Microsoft/BMW IoT Open Manufacturing Platform might not be so open)
+[#]: via: (https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+The Microsoft/BMW IoT Open Manufacturing Platform might not be so open
+======
+The new industrial IoT Open Manufacturing Platform from Microsoft and BMW runs only on Microsoft Azure. That could be an issue.
+![Martyn Williams][1]
+
+Last week at [Hannover Messe][2], Microsoft and German carmaker BMW announced a partnership to build a hardware and software technology framework and reference architecture for the industrial internet of things (IoT), and foster a community to spread these smart-factory solutions across the automotive and manufacturing industries.
+
+The stated goal of the [Open Manufacturing Platform (OMP)][3]? According to the press release, it's “to drive open industrial IoT development and help grow a community to build future [Industry 4.0][4] solutions.” To make that a reality, the companies said that by the end of 2019, they plan to attract four to six partners — including manufacturers and suppliers from both inside and outside the automotive industry — and to have rolled out at least 15 use cases operating in actual production environments.
+
+**[ Read also:[An inside look at an IIoT-powered smart factory][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
+
+### Complex and proprietary is bad for IoT
+
+It sounds like a great idea, right? As the companies rightly point out, many of today’s industrial IoT solutions rely on “complex, proprietary systems that create data silos and slow productivity.” Who wouldn’t want to “standardize data models that enable analytics and machine learning scenarios” and “accelerate future industrial IoT developments, shorten time to value, and drive production efficiencies while addressing common industrial challenges”?
+
+But before you get too excited, let’s talk about a key word in the effort: open. As Scott Guthrie, executive vice president of Microsoft Cloud + AI Group, said in a statement, "Our commitment to building an open community will create new opportunities for collaboration across the entire manufacturing value chain."
+
+### The Open Manufacturing Platform is open only to Microsoft Azure
+
+However, that will happen as long as all that collaboration occurs in Microsoft Azure. I’m not saying Azure isn’t up to the task, but it’s hardly the only (or even the leading) cloud platform interested in the industrial IoT. Putting everything in Azure might be an issue to those potential OMP partners. It’s an “open” question as to how many companies already invested in Amazon Web Services (AWS) or the Google Cloud Platform (GCP) will be willing to make the switch or go multi-cloud just to take advantage of the OMP.
+
+My guess is that Microsoft and BMW won’t have too much trouble meeting their initial goals for the OMP. It shouldn’t be that hard to get a handful of existing Azure customers to come up with 15 use cases leveraging advances in analytics, artificial intelligence (AI), and digital feedback loops. (As an example, the companies cited the autonomous transport systems in BMW’s factory in Regensburg, Germany, part of the more than 3,000 machines, robots and transport systems connected with the BMW Group’s IoT platform, which — naturally — is built on Microsoft Azure's cloud.)
+
+### Will non-Azure users jump on board the OMP?
+
+The question is whether tying all this to a single cloud provider will hamper the effort to attract enough new companies — including companies not currently using Azure — to establish a truly viable open platform.
+
+Perhaps [Stacey Higginbotham at Stacy on IoT put it best][7]:
+
+> “What they really launched is a reference design for manufacturers to work from.”
+
+That’s not nothing, of course, but it’s a lot less ambitious than building a new industrial IoT platform. And it may not easily fulfill the vision of a community working together to create shared solutions that benefit everyone.
+
+**[ Now read this:[Why are IoT platforms so darn confusing?][8] ]**
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/01/20170107_105344-100702818-large.jpg
+[2]: https://www.hannovermesse.de/home
+[3]: https://www.prnewswire.co.uk/news-releases/microsoft-and-the-bmw-group-launch-the-open-manufacturing-platform-859672858.html
+[4]: https://en.wikipedia.org/wiki/Industry_4.0
+[5]: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html
+[6]: https://www.networkworld.com/newsletters/signup.html
+[7]: https://mailchi.mp/iotpodcast/stacey-on-iot-industrial-iot-reminds-me-of-apples-ecosystem?e=6bf9beb394
+[8]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md b/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md
new file mode 100644
index 0000000000..2bb20bc8a0
--- /dev/null
+++ b/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md
@@ -0,0 +1,149 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (UP Shell Script – Quickly Navigate To A Specific Parent Directory In Linux)
+[#]: via: (https://www.2daygeek.com/up-shell-script-quickly-go-back-to-a-specific-parent-directory-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+UP Shell Script – Quickly Navigate To A Specific Parent Directory In Linux
+======
+
+We recently wrote an article about the **[bd command][1]**, which helps us **[quickly go back to a specific parent directory][1]**.
+
+The [up shell script][2] lets us do the same thing with a different approach, so we would like to explore it.
+
+It allows us to quickly navigate to a specific parent directory by mentioning the directory name.
+
+Alternatively, we can give a number: the number of levels you'd like to go back.
+
+Stop typing `cd ../../..` endlessly; navigate easily to a specific parent directory by using the up shell script.
+
+It supports tab completion, which makes it even more convenient.
+
+The `up.sh` script registers the up function and some completion functions via your `.bashrc` or `.zshrc` file.
+
+It is written entirely in shell script, and it supports the zsh and fish shells as well.
+
+We previously wrote an article about **[autocd][3]**, a builtin shell option that lets us **[navigate into a directory without the cd command][3]**.
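+
+To make the behavior concrete, here is a minimal bash sketch of an up-style function. This is our own illustrative code, not the project's implementation; see [the up repository][2] for the real thing.
+
+```
+# Sketch of an up-style function: "up 3" climbs three levels; "up name"
+# climbs to the nearest ancestor whose name starts with "name".
+up() {
+    local dest=$PWD
+    if [[ $1 =~ ^[0-9]+$ ]]; then
+        local i
+        for (( i = 0; i < $1; i++ )); do
+            dest=${dest%/*}
+        done
+    else
+        dest=${dest%/*}    # no argument: go up one level
+        while [[ -n $1 && $dest == */* && ${dest##*/} != "$1"* ]]; do
+            dest=${dest%/*}
+        done
+    fi
+    cd "${dest:-/}" || return    # fall back to / if we climb past the root
+}
+```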
+
+### How To Install up In Linux?
+
+Installation doesn't depend on your distribution; instead, it depends on your shell.
+
+Simply run the following commands to enable the up script in the `bash` shell.
+
+```
+$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
+
+$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc
+```
+
+Run the following command for the changes to take effect.
+
+```
+$ source ~/.bashrc
+```
+
+Simply run the following commands to enable the up script in the `zsh` shell.
+
+```
+$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
+
+$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc
+```
+
+Run the following command for the changes to take effect.
+
+```
+$ source ~/.zshrc
+```
+
+Simply run the following commands to enable the up script in the `fish` shell.
+
+```
+$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
+
+$ source ~/.config/up/up.fish
+```
+
+### How To Use This In Linux?
+
+We have successfully installed and configured the up script on the system. It's time to test it.
+
+I'm going to use the directory path below for this testing.
+
+Run the `pwd` command or the `dirs` command to find your current location.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ pwd
+or
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ dirs
+
+/usr/share/icons/Adwaita/256x256/apps
+```
+
+How do you go up one level? I'm currently in `/usr/share/icons/Adwaita/256x256/apps`; to quickly move up one directory, into `256x256`, simply type the following command.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up
+
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256$ pwd
+/usr/share/icons/Adwaita/256x256
+```
+
+How do you go up multiple levels? I'm currently in `/usr/share/icons/Adwaita/256x256/apps`; to quickly go back to the `share` directory, simply type the following command.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up 4
+
+daygeek@Ubuntu18:/usr/share$ pwd
+/usr/share
+```
+
+How do you go up by full name? Pass a directory name instead of a number to go back to that directory.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up icons
+
+daygeek@Ubuntu18:/usr/share/icons$ pwd
+/usr/share/icons
+```
+
+How do you go up by partial name? A unique prefix of the directory name works as well.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up Ad
+
+daygeek@Ubuntu18:/usr/share/icons/Adwaita$ pwd
+/usr/share/icons/Adwaita
+```
+
+As I said at the beginning of the article, it supports tab completion.
+
+```
+daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up
+256x256/ Adwaita/ icons/ share/ usr/
+```
+
+This script lets you quickly go back to a specific parent directory, but there is no option to move forward quickly.
+
+We have another solution for that and will write it up shortly. Please stay tuned.
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/up-shell-script-quickly-go-back-to-a-specific-parent-directory-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/
+[2]: https://github.com/shannonmoeller/up
+[3]: https://www.2daygeek.com/navigate-switch-directory-without-using-cd-command-in-linux/
diff --git a/sources/tech/20190409 What it takes to become a blockchain developer.md b/sources/tech/20190409 What it takes to become a blockchain developer.md
new file mode 100644
index 0000000000..668824b99a
--- /dev/null
+++ b/sources/tech/20190409 What it takes to become a blockchain developer.md
@@ -0,0 +1,204 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What it takes to become a blockchain developer)
+[#]: via: (https://opensource.com/article/19/4/blockchain-career-developer)
+[#]: author: (Joseph Mugo https://opensource.com/users/mugo)
+
+What it takes to become a blockchain developer
+======
+If you’ve been considering a career in blockchain development, the time
+to get your foot in the door is now. Here's how to get started.
+![][1]
+
+The past decade has been an interesting time for the development of decentralized technologies. Progress was slow and without any clear direction until 2009, when Satoshi Nakamoto created and deployed Bitcoin. That brought blockchain, the record-keeping technology behind Bitcoin, into the limelight.
+
+Since then, we've seen blockchain revolutionize various concepts that we used to take for granted, such as monitoring supply chains, [creating digital identities,][2] [tracking jewelry][3], and [managing shipping systems.][4] Companies such as IBM and Samsung are at the forefront of blockchain as the underlying infrastructure for the next wave of tech innovation. There is no doubt that blockchain's role will grow in the years to come.
+
+Thus, it's no surprise that there's a high demand for blockchain developers. LinkedIn put "blockchain developers" at the top of its 2018 [emerging jobs report][5] with an expected 33-fold growth. The freelancing site Upwork also released a report showing that blockchain was one of the [fastest growing skills][6] out of more than 5,000 in its index.
+
+Describing the internet in 2003, [Jeff Bezos said][7], "we are at the 1908 Hurley washing machine stage." The same can be said about blockchain today. The industry is busy building its foundation. If you've been considering a career as a blockchain developer, the time to get your foot in the door is now.
+
+However, you may not know where to start. It can be frustrating to go through countless blog posts and white papers or messy Slack channels when trying to find your footing. This article is a report on what I learned when contemplating whether I should become a blockchain developer. I'll approach it from the basics, with resources for each topic you need to master to be industry-ready.
+
+### Technical fundamentals
+
+Although you won't be expected to build a blockchain from scratch, you need to be skilled enough to handle the duties of blockchain development. A bachelor's degree in computer science or information security is required. You also need to have some fundamentals in data structures, cryptography, and networking and distributed systems.
+
+#### Data structures
+
+The complexity of blockchain requires a solid understanding of data structures. At the core, a distributed ledger is like a network of replicated databases, only it stores information in blocks rather than tables. The blocks are also cryptographically secured to ensure their integrity every time a block is added.
+
+For this reason, you have to know how common data structures, such as binary search trees, hash maps, graphs, and linked lists, work. It's even better if you can build them from scratch.
+
+This [GitHub repository][8] contains all information newbies need to learn data structures and algorithms. Common languages such as Python, Java, Scala, C, C-Sharp, and C++ are featured.
+
+#### Cryptography
+
+Cryptography is the foundation of blockchain; it is what makes cryptocurrencies work. The Bitcoin blockchain employs public-key cryptography to create digital signatures and hash functions. You might be discouraged if you don't have a strong math background, but Stanford offers [a free course][9] that's perfect for newbies. You'll learn about authenticated encryption, message integrity, and block ciphers.
+
+You should also study [RSA][10], which doesn't require a strong background in mathematics, and look at [ECDSA][11] (elliptic curve cryptography).
+
+And don't forget [cryptographic hash functions][12]. They are the equations that enable most forms of encryption on the internet. They keep payments secure on e-commerce sites and are the core mechanism behind the HTTPS protocol. There's extensive use of cryptographic hash functions in blockchain.
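+
+As a toy illustration of why hash functions matter here (our own example, not production code), consider how blocks chain together: each block records the hash of its predecessor, so tampering with any block invalidates every hash that follows it.
+
+```
+import hashlib
+
+def block_hash(prev_hash, data):
+    # SHA-256 over the previous block's hash plus this block's data
+    return hashlib.sha256((prev_hash + data).encode()).hexdigest()
+
+chain = []
+prev = "0" * 64    # a conventional all-zero "genesis" hash
+for data in ["alice->bob:5", "bob->carol:2"]:
+    prev = block_hash(prev, data)
+    chain.append((data, prev))
+
+for data, digest in chain:
+    print(digest[:16], data)
+```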
+
+#### Networking and distributed systems
+
+Build a good foundation in understanding how distributed ledgers work. Also understand how peer-to-peer networks work, which translates to a good foundation in computer networks, from networking topologies to routing.
+
+In blockchain, the processing power is harnessed from connected computers. For seamless recording and interchange of information between these devices, you need to understand [Byzantine fault-tolerant consensus][13], which is a key security feature in blockchain. You don't need to know everything; an understanding of how distributed systems work is good enough.
+
+Stanford has a free, self-paced [course on computer networking][14] if you need to start from scratch. You can also consult this list of [awesome material on distributed systems][15].
+
+### Cryptonomics
+
+We've covered some of the most important technical bits. It's time to talk about the economics of this industry. Although cryptocurrencies don't have central banks to monitor the money supply or keep crypto companies in check, it's essential to understand the economic structures woven around them.
+
+You'll need to understand game theory, the ideal mathematical framework for modeling scenarios in which conflicts of interest exist among involved parties. Take a look at Michael Karnjanaprakorn's [Beginner's Guide to Game Theory][16]. It's lucid and well explained.
+
+You also need to understand what affects currency valuation and the various monetary policies that affect cryptocurrencies. Here are some books you can refer to:
+
+ * _[The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology][17]_ by William Mougayar
+ * _[Blockchain: Blueprint for the New Economy][18]_ by Melanie Swan
+ * _[Blockchain: The Blockchain For Beginners Guide to Blockchain Technology and Leveraging Blockchain Programming][19]_ by Josh Thompsons
+
+
+
+Depending on how skilled you are, you won't need to go through all those materials. But once you're done, you'll understand the fundamentals of blockchain. Then you can dive into the good stuff.
+
+### Smart contracts
+
+A [smart contract][20] is a program that runs on the blockchain once a transaction is complete to enhance blockchain's capabilities.
+
+Unlike traditional judicial systems, smart contracts are enforced automatically and impartially. There are also no middlemen, so you don't need a lawyer to oversee a transaction.
+
+As smart contracts get more complex, they become harder to secure. You need to be aware of every possible way a smart contract can be executed and ensure that it does what is expected. At the moment, not many developers can properly optimize and audit smart contracts.
+
+### Decentralized applications
+
+Decentralized applications (DApps) are software built on blockchains. As a blockchain developer, there are several platforms where you can build a DApp. Here are some of them:
+
+#### Ethereum
+
+Ethereum is Vitalik Buterin's brainchild. It went live in 2015 and is one of the most popular development platforms. Ether is the cryptocurrency that fuels the Ethereum network.
+
+It has its own language called Solidity, which is similar to C++ and JavaScript. If you've got any experience with either, you'll pick it up easily.
+
+One thing that makes Solidity unique is that it is smart-contract oriented.
+
+#### NEO
+
+Originally known as Antshares, NEO was founded by Erik Zhang and Da Hongfei in 2014. It became NEO in 2017. Unlike Ethereum, it's not limited to one language. You can use different programming languages to build your DApps on NEO, including C# and Java. Experienced users can easily start building DApps on NEO. It's focused on providing platforms for future digital businesses.
+
+Consider NEO if you have applications that will need to process lots of transactions per second. However, it works closely with the Chinese government and follows Chinese business regulations.
+
+#### EOS
+
+EOS blockchain aims to be a decentralized operating system that can support industrial-scale applications. It's basically like Ethereum, but with faster transaction speeds and greater scalability.
+
+#### Hyperledger
+
+Hyperledger is an open source collaborative platform that was created to develop cross-industry blockchain technologies. The Linux Foundation hosts Hyperledger as a hub for open industrial blockchain development.
+
+### Learning resources
+
+Here are some courses and other resources that'll help make you an industry-ready blockchain developer.
+
+ * The University of Buffalo and The State University of New York have a [blockchain specialization course][21] that also teaches smart contracts. You can complete it in two months if you put in 10 hours per week. You'll learn about designing and implementing smart contracts and various methods for developing decentralized applications on blockchain.
+ * [DApps for Beginners][22] offers tutorials and other information to get you started on creating decentralized apps on the Ethereum blockchain. You'll need to know JavaScript, and knowledge of C++ is an added advantage.
+ * IBM also offers [Blockchain for Developers][23], where you'll work with IBM's private blockchain and build smart contracts using the [Hyperledger Fabric][24].
+ * For $3,500 you can enroll in MIT's online [Blockchain Technologies: Business Innovation and Application][25] program, which examines blockchain from an economic perspective. You need deep pockets for this one; it's meant for executives who want to know how blockchain can be used in their organizations.
+ * If you're willing to commit 10 hours per week, Udacity's [Blockchain Developer Nanodegree][26] can prepare you to become an industry-ready blockchain developer in six months. Before enrolling, you should have some experience in object-oriented programming. You should also have developed the frontend and backend of a web application with JavaScript. And you're required to have used a remote API to create and consume data. You'll work with Bitcoin and Ethereum protocols to build projects for real-world applications.
+ * If you need to shore up your foundations, you may be interested in the Open Source Society University's wildly popular and [free computer science curriculum][27].
+ * You can read a variety of articles about [blockchain in open source][28] on [Opensource.com][29].
+
+
+
+### Types of blockchain development
+
+What does a blockchain developer really do? It doesn't involve building a blockchain from scratch. Depending on the organization you work for, here are some of the categories that blockchain developers fall under.
+
+#### Backend developers
+
+In this case, the developer is responsible for:
+
+ * Designing and developing APIs for blockchain integration
+ * Doing performance testing and deployment
+ * Gathering requirements and working side-by-side with other developers and designers to design software
+ * Providing technical support
+
+
+
+#### Blockchain-specific
+
+Blockchain developers and project managers fall under this category. Their main roles include:
+
+ * Developing and maintaining decentralized applications
+ * Supervising and planning blockchain projects
+ * Advising companies on how to structure initial coin offerings (ICOs)
+ * Understanding what a company needs and creating apps that address those needs
+ * For project managers, organizing training for employees
+
+
+
+#### Smart-contract engineers
+
+This type of developer is required to know a smart-contract language like Solidity, Python, or Go. Their main roles include:
+
+ * Auditing and developing smart contracts
+ * Meeting with users and buyers
+ * Understanding business flow and security to ensure there are no loopholes in smart contracts
+ * Doing end-to-end business process testing
+
+
+
+### The state of the industry
+
+There's a wide base of knowledge to help you become a blockchain developer. If you're interested in joining the field, it's an opportunity for you to make a difference by pioneering the next wave of tech innovations. It pays very well and is in high demand. There's also a wide community you can join to help you gain entry as an actual developer, including [Ethereum Stack Exchange][30] and meetup events around the world.
+
+The banking sector, the insurance industry, governments, and retail industries are some of the sectors where blockchain developers can work. If you're willing to work for it, being a blockchain developer is an excellent career choice. Currently, the need outpaces available talent by far.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/blockchain-career-developer
+
+作者:[Joseph Mugo][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mugo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA
+[2]: https://www.fool.com/investing/2018/02/16/this-is-really-happening-microsoft-is-developing-b.aspx
+[3]: https://www.engadget.com/2018/04/26/ibm-blockchain-jewelry-provenance/
+[4]: https://www.engadget.com/2018/04/16/samsung-blockchain-based-global-shipping-system/
+[5]: https://economicgraph.linkedin.com/research/linkedin-2018-emerging-jobs-report
+[6]: https://www.upwork.com/blog/2018/05/fastest-growing-skills-upwork-q1-2018/
+[7]: https://www.wsj.com/articles/SB104690855395981400
+[8]: https://github.com/TheAlgorithms
+[9]: https://www.coursera.org/learn/crypto
+[10]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)
+[11]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
+[12]: https://komodoplatform.com/cryptographic-hash-function/
+[13]: https://en.wikipedia.org/wiki/Byzantine_fault
+[14]: https://lagunita.stanford.edu/courses/Engineering/Networking-SP/SelfPaced/about
+[15]: https://github.com/theanalyst/awesome-distributed-systems
+[16]: https://hackernoon.com/beginners-guide-to-game-theory-31e3e6adcec9
+[17]: https://www.amazon.com/dp/B01EIGP8HG/
+[18]: https://www.amazon.com/Blockchain-Blueprint-Economy-Melanie-Swan/dp/1491920491
+[19]: https://www.amazon.com/Blockchain-Beginners-Technology-Leveraging-Programming-ebook/dp/B0711RN8KJ
+[20]: https://lifeinpaces.com/2019/03/04/ethereum-smart-contracts-how-do-they-work/
+[21]: https://www.coursera.org/specializations/blockchain?aid=true
+[22]: https://dappsforbeginners.wordpress.com/
+[23]: https://developer.ibm.com/tutorials/cl-ibm-blockchain-101-quick-start-guide-for-developers-bluemix-trs/#start
+[24]: https://www.hyperledger.org/projects/fabric
+[25]: https://executive.mit.edu/openenrollment/program/blockchain-technologies-business-innovation-and-application-self-paced-online/#.XJSk-CgzbRY
+[26]: https://www.udacity.com/course/blockchain-developer-nanodegree--nd1309
+[27]: https://github.com/ossu/computer-science
+[28]: https://opensource.com/tags/blockchain
+[29]: http://Opensource.com
+[30]: https://ethereum.stackexchange.com/
diff --git a/sources/tech/20190409 Working with variables on Linux.md b/sources/tech/20190409 Working with variables on Linux.md
new file mode 100644
index 0000000000..da4fec5ea9
--- /dev/null
+++ b/sources/tech/20190409 Working with variables on Linux.md
@@ -0,0 +1,267 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Working with variables on Linux)
+[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Working with variables on Linux
+======
+Variables often look like $var, but they also look like $1, $*, $? and $$. Let's take a look at what all these $ values can tell you.
+![Mike Lawrence \(CC BY 2.0\)][1]
+
+A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we're going to look at variables that are used on the command line and within scripts.
+
+### User variables
+
+While it's quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this:
+
+```
+$ myvar=11
+$ myvar2="eleven"
+```
+
+To display the values, you simply do this:
+
+```
+$ echo $myvar
+11
+$ echo $myvar2
+eleven
+```
+
+You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands:
+
+```
+$ myvar=$((myvar+1))
+$ echo $myvar
+12
+$ ((myvar=myvar+1))
+$ echo $myvar
+13
+$ ((myvar+=1))
+$ echo $myvar
+14
+$ ((myvar++))
+$ echo $myvar
+15
+$ let "myvar=myvar+1"
+$ echo $myvar
+16
+$ let "myvar+=1"
+$ echo $myvar
+17
+$ let "myvar++"
+$ echo $myvar
+18
+```
+
+With some of these, you can add more than 1 to a variable's value. For example:
+
+```
+$ myvar0=0
+$ ((myvar0++))
+$ echo $myvar0
+1
+$ ((myvar0+=10))
+$ echo $myvar0
+11
+```
+
+With all these choices, you'll probably find at least one that is easy to remember and convenient to use.
+
+You can also _unset_ a variable — basically undefining it.
+
+```
+$ unset myvar
+$ echo $myvar
+```
+
+Another interesting option is that you can set up a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can't unset it either.
+
+```
+$ readonly myvar3=1
+$ echo $myvar3
+1
+$ ((myvar3++))
+-bash: myvar3: readonly variable
+$ unset myvar3
+-bash: unset: myvar3: cannot unset: readonly variable
+```
+
+You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them.
+
+### Internal variables
+
+There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself.
+
+ * $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script.
+ * $# represents the number of arguments.
+ * $* represents all of the arguments as a single string.
+ * $0 represents the name of the script itself.
+ * $? represents the return code of the previously run command (0=success).
+ * $$ shows the process ID for the script.
+ * $PPID shows the process ID for your shell (the parent process for the script).
+
+
+
+Some of these variables also work on the command line but show related information:
+
+ * $0 shows the name of the shell you're using (e.g., -bash).
+ * $$ shows the process ID for your shell.
+ * $PPID shows the process ID for your shell's parent process (for me, this is sshd).
+
+
+
+If we throw all of these variables into a script just to see the results, we might do this:
+
+```
+#!/bin/bash
+
+echo $0
+echo $1
+echo $2
+echo $#
+echo $*
+echo $?
+echo $$
+echo $PPID
+```
+
+When we call this script, we'll see something like this:
+
+```
+$ tryme one two three
+/home/shs/bin/tryme <== script name
+one <== first argument
+two <== second argument
+3 <== number of arguments
+one two three <== all arguments
+0 <== return code from previous echo command
+10410 <== script's process ID
+10109 <== parent process's ID
+```
+
+If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script:
+
+```
+$ echo $$
+10109 <== shell's process ID
+```
+
+Of course, we're more likely to use these variables in considerably more useful ways than simply displaying their values. Let's check out some ways we might do this.
+
+Checking to see if arguments have been provided:
+
+```
+if [ $# == 0 ]; then
+ echo "$0 filename"
+ exit 1
+fi
+```
+
+Checking to see if a particular process is running:
+
+```
+ps -ef | grep '[a]pache2' > /dev/null    # the [a] keeps grep from matching its own process
+if [ $? != 0 ]; then
+ echo Apache is not running
+ exit
+fi
+```
+
+Verifying that a file exists before trying to access it:
+
+```
+if [ $# -lt 2 ]; then
+ echo "Usage: $0 lines filename"
+ exit 1
+fi
+
+if [ ! -f $2 ]; then
+ echo "Error: File $2 not found"
+ exit 2
+else
+ head -$1 $2
+fi
+```
+
+And in this little script, we check whether the correct number of arguments has been provided, whether the first argument is numeric, and whether the second argument is an existing file.
+
+```
+#!/bin/bash
+
+if [ $# -lt 2 ]; then
+ echo "Usage: $0 lines filename"
+ exit 1
+fi
+
+if [[ $1 != [0-9]* ]]; then
+ echo "Error: $1 is not numeric"
+ exit 2
+fi
+
+if [ ! -f $2 ]; then
+ echo "Error: File $2 not found"
+ exit 3
+else
+ echo top of file
+ head -$1 $2
+fi
+```
+
+### Renaming variables
+
+When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter's value to $filename or $numlines.
+
+```
+#!/bin/bash
+
+if [ $# -lt 2 ]; then
+ echo "Usage: $0 lines filename"
+ exit 1
+else
+ numlines=$1
+ filename=$2
+fi
+
+if [[ $numlines != [0-9]* ]]; then
+ echo "Error: $numlines is not numeric"
+ exit 2
+fi
+
+if [ ! -f $filename ]; then
+ echo "Error: File $filename not found"
+ exit 3
+else
+ echo top of file
+ head -$numlines $filename
+fi
+```
+
+Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity.
+
+**[ Watch Sandra Henry-Stocker's Two-Minute Linux Tips[to learn how to master a host of Linux commands][3] ]**
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg
+[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
+[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md b/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md
new file mode 100644
index 0000000000..48edeaec20
--- /dev/null
+++ b/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md
@@ -0,0 +1,207 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (topgrade – Upgrade/Update Everything In Single Command On Linux?)
+[#]: via: (https://www.2daygeek.com/topgrade-upgrade-update-everything-in-single-command-on-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+topgrade – Upgrade/Update Everything In Single Command On Linux?
+======
+
+As a Linux administrator, you have to keep your systems up-to-date to steer clear of unexpected issues.
+
+Keeping systems current with the latest patches is part of best practices.
+
+To do so, you should perform the patching activity at least once a month.
+
+Most of the time you have to reboot the server after patching to activate the latest kernel.
+
+It's also good to reboot each server at least once every 90-120 days, which clears up lingering issues.
+
+If you have a single system, you can simply log in and perform the patching directly; that's not a big deal.
+
+If you have a few servers of the same flavor, you can perform the patching with the help of a shell script.
+
+If you have a large number of servers, I would advise you to go with one of the parallel utilities, which will help you perform the patching in parallel.
+
+That saves a lot of time compared with a shell script, which works through the servers in sequential order.
+
+But how do you patch everything together when your servers run multiple flavors? What's the solution?
+
+I recently came across a utility called `topgrade` that can fulfill this requirement.
+
+Also, your distribution's package manager doesn't upgrade packages that were installed with other package managers such as pip, npm, snap, etc., but topgrade can handle those as well.
+
+### What Is topgrade?
+
+[topgrade][1] is a new tool that upgrades all the installed packages on your system to the latest available versions by detecting and running the appropriate package managers.
+
+### How To Install topgrade In Linux?
+
+There is no distribution-specific package for it, so on most systems you have to install topgrade with the help of the cargo package manager.
+
+On Arch-based systems, topgrade is available in the AUR, so use one of the **[AUR helpers][2]** to install it. I prefer the **[Yay helper][3]** program.
+
+```
+$ yay -S topgrade
+```
+
+On other distributions, once you have installed the **[cargo package manager][4]**, use the following command to install topgrade.
+
+```
+$ cargo install topgrade
+```
+
+Once topgrade is initiated, it will perform the following tasks one by one.
+
+ * Try to self-upgrade if an update is available for topgrade.
+ * Arch: Run yay or fall back to pacman
+ * CentOS/RHEL: Run yum upgrade
+ * Fedora: Run dnf upgrade
+ * Debian/Ubuntu: Run apt update && apt dist-upgrade
+ * openSUSE: Run zypper refresh && zypper dist-upgrade
+ * Upgrade Vim/Neovim packages.
+ * Run npm update -g if NPM is installed
+ * Upgrade Atom packages
+ * Linux: Update Flatpak packages
+ * Linux: Update snap packages
+ * Linux: Run fwupdmgr to check for firmware upgrades.
+ * Finally, run needrestart to restart any services that need it.
+
+
+
+Now that we have successfully installed `topgrade`, run `topgrade` alone (with no arguments) to upgrade everything on your system. I tested the utility on Ubuntu 18.04 LTS, and the results are shown below.
+
+```
+$ topgrade
+
+―― System update ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
+[sudo] password for daygeek:
+Hit:1 http://in.archive.ubuntu.com/ubuntu bionic InRelease
+Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
+Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
+Get:4 http://in.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
+.
+Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe DEP-11 64x64 Icons [45.2 kB]
+Get:17 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 DEP-11 Metadata [2,460 B]
+Fetched 1,565 kB in 13s (117 kB/s)
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+119 packages can be upgraded. Run 'apt list --upgradable' to see them.
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+Calculating upgrade... Done
+The following packages were automatically installed and are no longer required:
+ libopts25 linux-headers-4.15.0-45 linux-headers-4.15.0-45-generic linux-image-4.15.0-45-generic
+ linux-modules-4.15.0-29-generic linux-modules-4.15.0-45-generic linux-modules-extra-4.15.0-45-generic sntp
+Use 'sudo apt autoremove' to remove them.
+The following packages will be upgraded:
+ apport apport-gtk apt apt-utils cups cups-bsd cups-client cups-common cups-core-drivers cups-daemon cups-ipp-utils
+ cups-ppdc cups-server-common distro-info-data fwupdate fwupdate-signed gir1.2-dbusmenu-glib-0.4 gir1.2-gtk-3.0
+ gir1.2-packagekitglib-1.0 gir1.2-snapd-1 gnome-settings-daemon gnome-settings-daemon-schemas grub-common grub-pc
+ python3-httplib2 python3-problem-report samba-libs systemd systemd-sysv ubuntu-drivers-common udev ufw
+ unattended-upgrades xdg-desktop-portal xdg-desktop-portal-gtk
+119 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+Need to get 38.5 MB of archives.
+After this operation, 475 kB of additional disk space will be used.
+Do you want to continue? [Y/n]
+.
+.
+Setting up grub-pc (2.02-2ubuntu8.13) ...
+Installing for i386-pc platform.
+Installation finished. No error reported.
+Sourcing file `/etc/default/grub'
+Generating grub configuration file ...
+Found memtest86+ image: /boot/memtest86+.elf
+Found memtest86+ image: /boot/memtest86+.bin
+done
+Setting up mesa-vdpau-drivers:amd64 (18.2.8-0ubuntu0~18.04.2) ...
+Updating PPD files for cups ...
+Setting up apport-gtk (2.20.9-0ubuntu7.6) ...
+Setting up pulseaudio-module-bluetooth (1:11.1-1ubuntu7.2) ...
+Processing triggers for libc-bin (2.27-3ubuntu1) ...
+Processing triggers for initramfs-tools (0.130ubuntu3.7) ...
+update-initramfs: Generating /boot/initrd.img-4.15.0-47-generic
+```
+
+It will run the self-updates once the distribution's official package updates are done.
+
+```
+―― rustup ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
+info: checking for self-updates
+info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
+info: checking for self-updates
+
+ stable-x86_64-unknown-linux-gnu unchanged - rustc 1.33.0 (2aa4c46cf 2019-02-28)
+```
+
+Then it will try to update the packages that were installed with other package managers.
+
+```
+―― Flatpak User Packages ――――――――――――――――――――――――――――――――――――――――――――――――――――――――
+Looking for updates...
+Looking for updates...
+Updating in system:
+org.gnome.Platform/x86_64/3.30 flathub 862e6b8ec2b5
+org.gnome.Platform.Locale/x86_64/3.30 flathub 5e66e981ae00
+org.freedesktop.Platform.html5-codecs/x86_64/18.08 flathub 282fd2c4ef33
+com.github.muriloventuroso.easyssh/x86_64/stable flathub c6bc3a3e72fb
+ new permissions: ssh-auth
+com.github.muriloventuroso.easyssh.Locale/x86_64/stable flathub b705864b8d78
+Updating: org.gnome.Platform/x86_64/3.30 from flathub
+[####################] 16 delta parts, 10 loose fetched; 65539 KiB transferred in 63 seconds
+Error: Failed to update org.gnome.Platform/x86_64/3.30: Flatpak system operation Deploy not allowed for user
+
+Skipping org.gnome.Platform.Locale/x86_64/3.30 due to previous error
+
+Skipping org.freedesktop.Platform.html5-codecs/x86_64/18.08 due to previous error
+Updating: com.github.muriloventuroso.easyssh/x86_64/stable from flathub
+[####################] 2 delta parts, 3 loose fetched; 1532 KiB transferred in 5 seconds
+Error: Failed to update com.github.muriloventuroso.easyssh/x86_64/stable: Flatpak system operation Deploy not allowed for user
+
+Skipping com.github.muriloventuroso.easyssh.Locale/x86_64/stable due to previous error
+error: There were one or more errors
+
+Retry? [y/N]
+```
+
+Then it will run the firmware upgrade.
+
+```
+―― Firmware upgrades ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
+Fetching metadata https://cdn.fwupd.org/downloads/firmware.xml.gz
+Downloading… [***************************************]
+Fetching signature https://cdn.fwupd.org/downloads/firmware.xml.gz.asc
+```
+
+Finally, it shows a summary of the patching that was done.
+
+```
+―― Summary ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
+System update: OK
+rustup: OK
+Flatpak User Packages: FAILED
+Firmware upgrade: OK
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/topgrade-upgrade-update-everything-in-single-command-on-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/r-darwish/topgrade
+[2]: https://www.2daygeek.com/category/aur-helper/
+[3]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/
+[4]: https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/
diff --git a/sources/tech/20190410 How to enable serverless computing in Kubernetes.md b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md
new file mode 100644
index 0000000000..52b75df6e2
--- /dev/null
+++ b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md
@@ -0,0 +1,136 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to enable serverless computing in Kubernetes)
+[#]: via: (https://opensource.com/article/19/4/enabling-serverless-kubernetes)
+[#]: author: (Daniel Oh (Red Hat, Community Moderator) https://opensource.com/users/daniel-oh/users/daniel-oh)
+
+How to enable serverless computing in Kubernetes
+======
+Knative is a faster, easier way to develop serverless applications on
+Kubernetes platforms.
+![Kubernetes][1]
+
+In the first two articles in this series about using serverless on an open source platform, I described [how to get started with serverless platforms][2] and [how to write functions][3] in popular languages and build components using containers on Apache OpenWhisk.
+
+Here in the third article, I'll walk you through enabling serverless in your [Kubernetes][4] environment. Kubernetes is the most popular platform to manage serverless workloads and microservice application containers and uses a fine-grained deployment model to process workloads more quickly and easily.
+
+Keep in mind that serverless not only helps you reduce infrastructure management while utilizing a consumption model for actual service use but also provides many of the capabilities the cloud platform offers. There are many serverless or FaaS (Function as a Service) platforms, but Kubernetes is a first-class citizen for building serverless platforms because there are more than [13 serverless or FaaS open source projects][5] based on Kubernetes.
+
+However, Kubernetes won't allow you to build, serve, and manage app containers for your serverless workloads in a native way. For example, if you want to build a [CI/CD pipeline][6] on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to use your own release management tool and integrate it with Kubernetes.
+
+Likewise, it's difficult to use Kubernetes in combination with serverless computing unless you use an independent serverless or FaaS platform built on Kubernetes, such as [Apache OpenWhisk][7], [Riff][8], or [Kubeless][9]. More importantly, the Kubernetes environment still makes it difficult for developers to learn how it handles the serverless workloads of cloud-native apps.
+
+### Knative
+
+[Knative][10] was born for developers to create serverless experiences natively without depending on extra serverless or FaaS frameworks and many custom tools. Knative has three primary components—[Build][11], [Serving][12], and [Eventing][13]—for addressing common patterns and best practices for developing serverless applications on Kubernetes platforms.
+
+To learn more, let's go through the usual development process for using Knative to increase productivity and solve Kubernetes' difficulties from the developer's point of view.
+
+**Step 1:** Generate your cloud-native application from scratch using [Spring Initializr][14] or [Thorntail Project Generator][15]. Begin implementing your business logic using the [12-factor app methodology][16], and you might also run assembly tests in local testing tools to check that the function works correctly.
+
+![Spring Initializr screenshot][17] | ![Thorntail Project Generator screenshot][18]
+---|---
+
+**Step 2:** Build container images from your source code repositories via the Knative Build component. You can define multiple steps, such as installing dependencies, running integration testing, and pushing container images to your secured image registry for using existing Kubernetes primitives. More importantly, Knative Build makes developers' daily work easier and simpler—"boring but difficult." Here's an example of the Build YAML:
+
+
+```
+apiVersion: build.knative.dev/v1alpha1
+kind: Build
+metadata:
+  name: docker-build
+spec:
+  serviceAccountName: build-bot
+  source:
+    git:
+      revision: master
+      url:
+  steps:
+  - name: docker-push
+    image: gcr.io/kaniko-project/executor
+    args:
+    - --context=/workspace/java/springboot
+    - --dockerfile=/workspace/java/springboot/Dockerfile
+    - --destination=docker.io/demo/event-greeter:0.0.1
+    env:
+    - name: DOCKER_CONFIG
+      value: /builder/home/.docker
+```
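+
+Assuming you save the YAML above to a local file (the filename below is our own choice), you can apply it like any other Kubernetes resource; the same pattern works for the Serving and Eventing examples that follow:
+
+```
+$ kubectl apply -f docker-build.yaml
+$ kubectl get builds
+```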
+
+**Step 3:** Deploy and serve your container applications as serverless workloads via the Knative Serving component. This step shows the beauty of Knative in terms of automatically scaling up your serverless containers on Kubernetes then scaling them down to zero if there is no request to the containers for a specific period (e.g., two minutes). More importantly, [Istio][19] will automatically address ingress and egress networking traffic of serverless workloads in multiple, secure ways. Here's an example of the Serving YAML:
+
+
+```
+apiVersion: serving.knative.dev/v1alpha1
+kind: Service
+metadata:
+  name: greeter
+spec:
+  runLatest:
+    configuration:
+      revisionTemplate:
+        spec:
+          container:
+            image: dev.local/rhdevelopers/greeter:0.0.1
+```
+
+**Step 4:** Bind running serverless containers to a variety of eventing platforms, such as SaaS, FaaS, and Kubernetes, via Knative's Eventing component. In this step, you can define event channels and subscriptions, which are delivered to your services via a messaging platform such as [Apache Kafka][20] or [NATS streaming][21]. Here's an example of the Event sourcing YAML:
+
+
+```
+apiVersion: sources.eventing.knative.dev/v1alpha1
+kind: CronJobSource
+metadata:
+  name: test-cronjob-source
+spec:
+  schedule: "* * * * *"
+  data: '{"message": "Event sourcing!!!!"}'
+  sink:
+    apiVersion: eventing.knative.dev/v1alpha1
+    kind: Channel
+    name: ch-event-greeter
+```
+
+### Conclusion
+
+Developing with Knative will save a lot of time in building serverless applications in the Kubernetes environment. It can also make developers' jobs easier by focusing on developing serverless applications, functions, or cloud-native containers.
+
+* * *
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/enabling-serverless-kubernetes
+
+作者:[Daniel Oh (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes)
+[2]: https://opensource.com/article/18/11/open-source-serverless-platforms
+[3]: https://opensource.com/article/18/11/developing-functions-service-apache-openwhisk
+[4]: https://kubernetes.io/
+[5]: https://landscape.cncf.io/format=serverless
+[6]: https://opensource.com/article/18/8/what-cicd
+[7]: https://openwhisk.apache.org/
+[8]: https://projectriff.io/
+[9]: https://kubeless.io/
+[10]: https://cloud.google.com/knative/
+[11]: https://github.com/knative/build
+[12]: https://github.com/knative/serving
+[13]: https://github.com/knative/eventing
+[14]: https://start.spring.io/
+[15]: https://thorntail.io/generator/
+[16]: https://12factor.net/
+[17]: https://opensource.com/sites/default/files/uploads/spring_300.png (Spring Initializr screenshot)
+[18]: https://opensource.com/sites/default/files/uploads/springboot_300.png (Thorntail Project Generator screenshot)
+[19]: https://istio.io/
+[20]: https://kafka.apache.org/
+[21]: https://nats.io/
diff --git a/sources/tech/20190410 How we built a Linux desktop app with Electron.md b/sources/tech/20190410 How we built a Linux desktop app with Electron.md
new file mode 100644
index 0000000000..eb11c65614
--- /dev/null
+++ b/sources/tech/20190410 How we built a Linux desktop app with Electron.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How we built a Linux desktop app with Electron)
+[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
+[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)
+
+How we built a Linux desktop app with Electron
+======
+A story of building an open source email service that runs natively on
+Linux desktops, thanks to the Electron framework.
+![document sending][1]
+
+[Tutanota][2] is a secure, open source email service that's been available as an app for the browser, iOS, and Android. The client code is published under GPLv3 and the Android app is available on [F-Droid][3] to enable everyone to use a completely Google-free version.
+
+Because Tutanota focuses on open source and is developed on Linux, we wanted to release a desktop app for Linux and other platforms. Being a small team, we quickly ruled out building native apps for Linux, Windows, and MacOS and decided to adapt our app using [Electron][4].
+
+Electron is the go-to choice for anyone who wants to ship visually consistent, cross-platform applications, fast—especially if there's already a web app that needs to be freed from the shackles of the browser API. Tutanota is exactly such a case.
+
+Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communications for everybody. As such, it has to provide a lot of the standard features users expect from any email client.
+
+Some of these features, like basic push notifications, search for text and contacts, and support for two-factor authentication are easy to offer in the browser thanks to modern APIs and standards. Other features (such as automatic backups or IMAP support without involving our servers) need less-restricted access to system resources, which is exactly what the Electron framework provides.
+
+While some criticize Electron as "just a basic wrapper," it has obvious benefits:
+
+ * Electron enables you to adapt a web app quickly for Linux, Windows, and MacOS desktops. In fact, many popular Linux desktop apps are built with Electron.
+ * Electron enables you to easily bring the desktop client to feature parity with the web app.
+ * Once you've published the desktop app, you can use free development capacity to add desktop-specific features that enhance usability and security.
+ * And last but certainly not least, it's a great way to make the app feel native and integrated into the user's system while maintaining its identity.
+
+
+
+### Meeting users' needs
+
+At Tutanota, we do not rely on big investor money; rather, we are a community-driven project. We grow our team organically based on the increasing number of users upgrading to our freemium service's paid plans. Listening to what users want is not only important to us, it is essential to our success.
+
+Offering a desktop client was users' [most-wanted feature][7] in Tutanota, and we are proud that we can now offer free beta desktop clients to all of our users. (We also implemented another highly requested feature—[search on encrypted data][8]—but that's a topic for another time.)
+
+We liked the idea of providing users with signed versions of Tutanota and enabling functions that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support without depending on our servers to act as a proxy, automatic backups, and offline availability.
+
+We chose Electron because its combination of Chromium and Node.js promised to be the best fit for our small development team, as it required only minimal changes to our web app. It was particularly helpful to use the browser APIs for everything as we got started, slowly replacing those components with more native versions as we progressed. This approach was especially handy with attachment downloads and notifications.
+
+### Tuning security
+
+We were aware that some people cite security problems with Electron, but we found Electron's options for fine-tuning access in the web app quite satisfactory. You can use resources like the Electron's [security documentation][9] and Luca Carettoni's [Electron Security Checklist][10] to help prevent catastrophic mishaps with untrusted content in your web app.
+
+### Achieving feature parity
+
+The Tutanota web client was built from the start with a solid protocol for interprocess communication. We utilize web workers to keep user interface (UI) rendering responsive while encrypting and requesting data. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view.
+
+That's why when we started building the desktop clients, a lot of bindings for things like native push notifications, opening mailboxes, and working with the filesystem were already there, so only the native (node) side had to be implemented.
+
+Another convenience was our build process using the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix-and-match utility modules between the different environments. This enabled us to speedily adapt the code for the Electron-based desktop apps. However, we encountered some challenges.
+
+### Overcoming challenges
+
+While Electron allows us to integrate with the different platforms' desktop environments pretty easily, you can't underestimate the time investment to get things just right! In the end, it was these little things that took up much more time than we expected but were also crucial to finish the desktop client project.
+
+The places where platform-specific code was necessary caused most of the friction:
+
+ * Window management and the tray, for example, are still handled in subtly different ways on the three platforms.
+ * Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way.
+ * We needed to use Electron's API for shortcuts and menus to offer even standard features like copy, paste, undo, and redo.
+
+
+
+This process was complicated a bit by users' expectations of certain behaviors that are sometimes not directly compatible across platforms. Making the three versions feel native required some iteration and even some modest additions to the web app to offer a text search similar to the one in the browser.
+
+### Wrapping up
+
+Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised about the ease with which we could ship a beta version of the [Tutanota desktop client for Linux][13]. If you're interested, you can dive into the source code on [GitHub][14].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/linux-desktop-electron
+
+作者:[Nils Ganther][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/nils-ganther
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
+[2]: https://tutanota.com/
+[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
+[4]: https://electronjs.org/
+[5]: https://github.com/systemjs/systemjs
+[6]: https://mithril.js.org/
+[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
+[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
+[9]: https://electronjs.org/docs/tutorial/security
+[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
+[11]: https://babeljs.io/
+[12]: https://en.wikipedia.org/wiki/User_Account_Control
+[13]: https://tutanota.com/blog/posts/desktop-clients/
+[14]: https://www.github.com/tutao/tutanota
diff --git a/sources/tech/20190411 Be your own certificate authority.md b/sources/tech/20190411 Be your own certificate authority.md
new file mode 100644
index 0000000000..de35385097
--- /dev/null
+++ b/sources/tech/20190411 Be your own certificate authority.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Be your own certificate authority)
+[#]: via: (https://opensource.com/article/19/4/certificate-authority)
+[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez/users/elenajon123)
+
+Be your own certificate authority
+======
+Create a simple, internal CA for your microservice architecture or
+integration testing.
+![][1]
+
+The Transport Layer Security ([TLS][2]) model, which is sometimes referred to by the older name SSL, is based on the concept of [certificate authorities][3] (CAs). These authorities are trusted by browsers and operating systems and, in turn, _sign_ servers' certificates to validate their ownership.
+
+However, for an intranet, a microservice architecture, or integration testing, it is sometimes useful to have a _local CA_: one that is trusted only internally and, in turn, signs local servers' certificates.
+
+This especially makes sense for integration tests. Getting certificates from a real CA can be a burden because the test servers will only be up for minutes. But having an "ignore certificate" option in the code could allow it to be accidentally activated in production, leading to a security catastrophe.
+
+A CA certificate is not much different from a regular server certificate; what matters is that it is trusted by local code. For example, in the **requests** library, this can be done by setting the **REQUESTS_CA_BUNDLE** environment variable to a file (or directory) containing this certificate.
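+
+For instance, a test client might look like the following (a minimal sketch: the **ca.crt** file and the **service.test.local** host name are the ones created and used later in this article):
+
+```
+# Minimal sketch: trust the locally generated CA for one request.
+# Assumes ca.crt exists (it is created below) and that
+# service.test.local resolves to the test server.
+import requests
+
+response = requests.get("https://service.test.local/", verify="ca.crt")
+print(response.status_code)
+```
+
+Setting **REQUESTS_CA_BUNDLE=ca.crt** in the environment achieves the same thing without touching the calling code.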
+
+In the example of creating a certificate for integration tests, there is no need for a _long-lived_ certificate: if your integration tests take more than a day, you have already failed.
+
+So, calculate **yesterday** and **tomorrow** as the validity interval:
+
+
+```
+>>> import datetime
+>>> one_day = datetime.timedelta(days=1)
+>>> today = datetime.datetime.utcnow()  # the x509 builder expects datetime objects, not dates
+>>> yesterday = today - one_day
+>>> tomorrow = today + one_day
+```
+
+Now you are ready to create a simple CA certificate. You need to generate a private key, create a public key, set up the "parameters" of the CA, and then self-sign the certificate: a CA certificate is _always_ self-signed. Finally, write out both the certificate file and the private key file.
+
+
+```
+from cryptography.hazmat.backends import default_backend
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+
+# Generate the CA's private key and derive its public key.
+private_key = rsa.generate_private_key(
+    public_exponent=65537,
+    key_size=2048,
+    backend=default_backend()
+)
+public_key = private_key.public_key()
+
+# A CA certificate is self-signed: subject and issuer are the same name.
+builder = x509.CertificateBuilder()
+builder = builder.subject_name(x509.Name([
+    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
+]))
+builder = builder.issuer_name(x509.Name([
+    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
+]))
+builder = builder.not_valid_before(yesterday)
+builder = builder.not_valid_after(tomorrow)
+builder = builder.serial_number(x509.random_serial_number())
+builder = builder.public_key(public_key)
+# Mark the certificate as a CA certificate.
+builder = builder.add_extension(
+    x509.BasicConstraints(ca=True, path_length=None),
+    critical=True)
+certificate = builder.sign(
+    private_key=private_key, algorithm=hashes.SHA256(),
+    backend=default_backend()
+)
+
+# Serialize the key (unencrypted) and the certificate as PEM.
+private_bytes = private_key.private_bytes(
+    encoding=serialization.Encoding.PEM,
+    format=serialization.PrivateFormat.TraditionalOpenSSL,
+    encryption_algorithm=serialization.NoEncryption())
+public_bytes = certificate.public_bytes(
+    encoding=serialization.Encoding.PEM)
+
+# ca.pem holds key + certificate (for the CA itself); ca.crt holds the
+# certificate only (what clients need in order to trust the CA).
+with open("ca.pem", "wb") as fout:
+    fout.write(private_bytes + public_bytes)
+with open("ca.crt", "wb") as fout:
+    fout.write(public_bytes)
+```
+
+In general, a real CA will expect a [certificate signing request][4] (CSR) to sign a certificate. However, when you are your own CA, you can make your own rules! Just go ahead and sign what you want.
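+
+For reference, a CSR can be produced with the same library. The following is a minimal sketch only; it reuses the imports from the snippet above, and **some_private_key** is a hypothetical throwaway key generated just for the illustration. The flow in this article skips this step and signs public keys directly.
+
+```
+# Reference sketch only: generating a CSR, as a real CA would expect.
+# some_private_key is a hypothetical throwaway key for this example.
+some_private_key = rsa.generate_private_key(
+    public_exponent=65537, key_size=2048, backend=default_backend())
+csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name([
+    x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local'),
+])).sign(some_private_key, hashes.SHA256(), default_backend())
+print(csr.public_bytes(serialization.Encoding.PEM).decode())
+```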
+
+Continuing with the integration test example, you can create the private keys and sign the corresponding public keys right then. Notice that **COMMON_NAME** needs to be the "server name" in the **https** URL. Assuming you've configured name lookup, the test server will respond on **service.test.local**.
+
+
+```
+# Generate a key pair for the service itself.
+service_private_key = rsa.generate_private_key(
+    public_exponent=65537,
+    key_size=2048,
+    backend=default_backend()
+)
+service_public_key = service_private_key.public_key()
+
+builder = x509.CertificateBuilder()
+builder = builder.subject_name(x509.Name([
+    x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
+]))
+# The issuer is the CA created above; the builder also needs a serial
+# number before it can sign.
+builder = builder.issuer_name(x509.Name([
+    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
+]))
+builder = builder.not_valid_before(yesterday)
+builder = builder.not_valid_after(tomorrow)
+builder = builder.serial_number(x509.random_serial_number())
+# Certify the service's public key, and sign with the CA's private key.
+builder = builder.public_key(service_public_key)
+certificate = builder.sign(
+    private_key=private_key, algorithm=hashes.SHA256(),
+    backend=default_backend()
+)
+
+# Bundle the service's unencrypted key and its certificate into one PEM file.
+private_bytes = service_private_key.private_bytes(
+    encoding=serialization.Encoding.PEM,
+    format=serialization.PrivateFormat.TraditionalOpenSSL,
+    encryption_algorithm=serialization.NoEncryption())
+public_bytes = certificate.public_bytes(
+    encoding=serialization.Encoding.PEM)
+with open("service.pem", "wb") as fout:
+    fout.write(private_bytes + public_bytes)
+```
+
+Now the **service.pem** file has a private key and a certificate that is "valid": it has been signed by your local CA. The file is in a format that can be given to, say, Nginx, HAProxy, or most other HTTPS servers.
+
+By applying this logic to testing scripts, it's easy to create servers that look like authentic HTTPS servers, as long as the client is configured to trust the right CA.
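+
+As a minimal sketch of that idea, using only the Python standard library (port 8443 is an arbitrary choice), an HTTPS test server can serve the **service.pem** bundle directly:
+
+```
+# Minimal sketch: an HTTPS test server using the service.pem bundle
+# created above. Clients that trust ca.crt (and resolve
+# service.test.local to this host) will see a "valid" certificate.
+import http.server
+import ssl
+
+server = http.server.HTTPServer(
+    ("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
+context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+context.load_cert_chain("service.pem")  # the file holds both key and certificate
+server.socket = context.wrap_socket(server.socket, server_side=True)
+server.serve_forever()
+```
+
+A **requests** client pointed at **ca.crt**, as sketched earlier, should then connect without certificate errors.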
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/certificate-authority
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez/users/elenajon123
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
+[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
+[3]: https://en.wikipedia.org/wiki/Certificate_authority
+[4]: https://en.wikipedia.org/wiki/Certificate_signing_request
diff --git a/sources/tech/20190411 How do you contribute to open source without code.md b/sources/tech/20190411 How do you contribute to open source without code.md
new file mode 100644
index 0000000000..0b04f7e87d
--- /dev/null
+++ b/sources/tech/20190411 How do you contribute to open source without code.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How do you contribute to open source without code?)
+[#]: via: (https://opensource.com/article/19/4/contribute-without-code)
+[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)
+
+How do you contribute to open source without code?
+======
+
+![Dandelion held out over water][1]
+
+My earliest open source contributions date back to the mid-1980s, when our organization first connected to [UseNet][2] and discovered the contributed code and the opportunities to share in its development and support.
+
+Today there are endless contribution opportunities, from contributing code to making how-to videos.
+
+I'm going to step right over the whole issue of contributing code, other than pointing out that many of us who write code but don't consider ourselves developers can still [contribute code][3]. Instead, I'd like to remind everyone that there are lots of [non-code ways to contribute to open source][4] and talk about three alternatives.
+
+### Filing bug reports
+
+One important and concrete kind of contribution could best be described as "not being afraid to file a decent bug report" and [all the consequences related to that][5]. Sometimes it's quite challenging to [file a decent bug report][6]. For example:
+
+ * A bug may be difficult to record or describe. A long and complicated message with all sorts of unrecognizable codes may flash by as the computer is booting, or there may just be some "odd behavior" on the screen with no error messages produced.
+ * A bug may be difficult to reproduce. It may occur only on certain hardware/software configurations, or it may be rarely triggered, or the precise problem area may not be apparent.
+ * A bug may be linked to a very specific development environment configuration that is too big, messy, and complicated to share, requiring laborious creation of a stripped-down example.
+ * When reporting a bug to a distro, the maintainers may suggest filing the bug upstream instead, which can sometimes lead to a lot of work when the version supported by the distro is not the primary version of interest to the upstream community. (This can happen when the version provided in the distro lags the officially supported release and development version.)
+
+
+
+Nevertheless, I exhort would-be bug reporters (including me) to press on and try to get bugs fully recorded and acknowledged.
+
+One way to get started is to use your favorite search tool to look for similar bug reports, see how they are described, where they are filed, and so on. Another important thing to know is the formal mechanism defined for bug reporting by your distro (for example, [Fedora's is here][7]; [openSUSE's is here][8]; [Ubuntu's is here][9]) or software package ([LibreOffice's is here][10]; [Mozilla's seems to be here][11]).
+
+### Answering users' questions
+
+I lurk and occasionally participate in various mailing lists and forums, such as the [Ubuntu quality control team][12] and [forums][13], [LinuxQuestions.org][14], and the [ALSA users' mailing list][15]. Here, the contributions may relate less to bugs and more to documenting complex use cases. It's a great feeling for everyone to see someone jumping in to help a person sort out their trouble with a particular issue.
+
+### Writing about open source
+
+Finally, another area where I really enjoy contributing is [_writing_][16] about using open source software, whether it's a how-to guide, a comparative evaluation of different solutions to a particular problem, or just generally exploring an area of interest (in my case, using open source music-playing software to enjoy music). A similar option is making an instructional video; it's easy to [record the desktop][17] while demonstrating some fiendishly difficult desktop maneuver, such as creating a splashy logo with GIMP. And those of you who are bi- or multilingual can also consider translating existing how-to articles or videos into another language.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/contribute-without-code
+
+作者:[Chris Hermansen (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
+[2]: https://en.wikipedia.org/wiki/Usenet
+[3]: https://opensource.com/article/19/2/open-science-git
+[4]: https://opensource.com/life/16/1/8-ways-contribute-open-source-without-writing-code
+[5]: https://producingoss.com/en/bug-tracker.html
+[6]: https://opensource.com/article/19/3/bug-reporting
+[7]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/
+[8]: https://en.opensuse.org/openSUSE:Submitting_bug_reports
+[9]: https://help.ubuntu.com/stable/ubuntu-help/report-ubuntu-bug.html.en
+[10]: https://wiki.documentfoundation.org/QA/BugReport
+[11]: https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines
+[12]: https://wiki.ubuntu.com/QATeam
+[13]: https://ubuntuforums.org/
+[14]: https://www.linuxquestions.org/
+[15]: https://www.alsa-project.org/wiki/Mailing-lists
+[16]: https://opensource.com/users/clhermansen
+[17]: https://opensource.com/education/16/10/simplescreenrecorder-and-kazam
diff --git a/sources/tech/20190411 Installing Ubuntu MATE on a Raspberry Pi.md b/sources/tech/20190411 Installing Ubuntu MATE on a Raspberry Pi.md
new file mode 100644
index 0000000000..494f85923f
--- /dev/null
+++ b/sources/tech/20190411 Installing Ubuntu MATE on a Raspberry Pi.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installing Ubuntu MATE on a Raspberry Pi)
+[#]: via: (https://itsfoss.com/ubuntu-mate-raspberry-pi/)
+[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
+
+Installing Ubuntu MATE on a Raspberry Pi
+======
+
+_**Brief: This quick tutorial shows you how to install Ubuntu MATE on Raspberry Pi devices.**_
+
+[Raspberry Pi][1] is by far the most popular SBC (Single Board Computer) and the go-to board for makers. [Raspbian][2], which is based on Debian, is the official operating system for the Pi. It is lightweight, comes bundled with educational tools, and gets the job done for most scenarios.
+
+[Installing Raspbian][3] is easy as well, but the problem with [Debian][4] is its slow upgrade cycle and older packages.
+
+Running Ubuntu on the Raspberry Pi gives you a richer experience and up-to-date software. We have a few options when it comes to running Ubuntu on your Pi.
+
+ 1. [Ubuntu MATE][5]: Ubuntu MATE is the only distribution that natively supports the Raspberry Pi with a complete desktop environment.
+ 2. [Ubuntu Server 18.04][6], plus installing a desktop environment manually.
+ 3. Images built by the [Ubuntu Pi Flavor Maker][7] community; _these images only support the Raspberry Pi 2B and 3B variants_ and are **not** updated to the latest LTS release.
+
+
+
+The first option is the easiest and the quickest to set up while the second option gives you the freedom to install the desktop environment of your choice. I recommend going with either of the first two options.
+
+The download links are given below. In this article, I’ll be covering the Ubuntu MATE installation only.
+
+### Installing Ubuntu MATE on Raspberry Pi
+
+Go to the download page of Ubuntu MATE and get the recommended images.
+
+![][8]
+
+The experimental ARM64 version should only be used if you need to run 64-bit-only applications like MongoDB on a Raspberry Pi server.
+
+[Download Ubuntu MATE for Raspberry Pi][9]
+
+#### Step 1: Setting Up the SD Card
+
+The image file needs to be decompressed once downloaded. You can simply right-click on it to extract it.
+
+Alternatively, the following command will do the job.
+
+```
+xz -d ubuntu-mate*.img.xz
+```
+
+On Windows, you can use [7-zip][10] to extract it instead.
+
+Install **[Balena Etcher][11]**; we’ll use this tool to write the image to the SD card. Make sure that your SD card has at least 8 GB of capacity.
+
+Launch Etcher and select the image file and your SD card.
+
+![][12]
+
+Once the flashing process is complete, the SD card is ready.
+
+#### Step 2: Setting Up the Raspberry Pi
+
+You probably already know that you need a few things to get started with the Raspberry Pi, such as a mouse, keyboard, and HDMI cable. You can also [install Raspberry Pi headlessly without a keyboard and mouse][13], but this tutorial is not about that.
+
+ * Plug in a mouse and a keyboard.
+ * Connect the HDMI cable.
+ * Insert the SD card into the SD card slot.
+
+
+
+Power it on by plugging in the power cable. Make sure you have a good power supply (5V, 3A minimum). A bad power supply can reduce performance.
+
+#### Ubuntu MATE installation
+
+Once you power on the Raspberry Pi, you’ll be greeted with a very familiar Ubuntu installation process. The process is pretty much straightforward from here.
+
+![Select your keyboard layout][14]
+
+![Select Your Timezone][15]
+
+Select your WiFi network and enter the password in the network connection screen.
+
+![Add Username and Password][16]
+
+After setting the keyboard layout, timezone, and user credentials, you’ll be taken to the login screen in a few minutes. And voila! You are almost done.
+
+![][17]
+
+Once logged in, the first thing you should do is to [update Ubuntu][18]. You can use the command line for that.
+
+```
+sudo apt update
+sudo apt upgrade
+```
+
+You can also use the Software Updater.
+
+![][19]
+
+Once the updates are finished installing, you are good to go. You can also go ahead and install Raspberry Pi-specific packages for GPIO and other I/O, depending on your needs.
+
+What made you think about installing Ubuntu on the Raspberry Pi, and how has your experience been with Raspbian? Let me know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-mate-raspberry-pi/
+
+作者:[Chinmay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/chinmay/
+[b]: https://github.com/lujun9972
+[1]: https://www.raspberrypi.org/
+[2]: https://www.raspberrypi.org/downloads/
+[3]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/
+[4]: https://www.debian.org/
+[5]: https://ubuntu-mate.org/
+[6]: https://wiki.ubuntu.com/ARM/RaspberryPi#Recovering_a_system_using_the_generic_kernel
+[7]: https://ubuntu-pi-flavour-maker.org/download/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ubuntu-mate-raspberry-pi-download.jpg?ssl=1
+[9]: https://ubuntu-mate.org/download/
+[10]: https://www.7-zip.org/download.html
+[11]: https://www.balena.io/etcher/
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-08-01-36-16.png?ssl=1
+[13]: https://linuxhandbook.com/raspberry-pi-headless-setup/
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Keyboard-layout-ubuntu.jpg?fit=800%2C467&ssl=1
+[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/select-time-zone-ubuntu.jpg?fit=800%2C468&ssl=1
+[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Credentials-ubuntu.jpg?fit=800%2C469&ssl=1
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Desktop-ubuntu.jpg?fit=800%2C600&ssl=1
+[18]: https://itsfoss.com/update-ubuntu/
+[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/update-software.png?ssl=1
diff --git a/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md
new file mode 100644
index 0000000000..4efcdd17a7
--- /dev/null
+++ b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Managed, enabled, empowered: 3 dimensions of leadership in an open organization)
+[#]: via: (https://opensource.com/open-organization/19/4/managed-enabled-empowered)
+[#]: author: (Heidi Hess von Ludewig (Red Hat) https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)
+
+Managed, enabled, empowered: 3 dimensions of leadership in an open organization
+======
+Different types of work call for different types of engagement. Should
+open leaders always aim for empowerment?
+![][1]
+
+"Empowerment" seems to be the latest people management [buzzword][2]. And it's an important consideration for open organizations, too. After all, we like to think these open organizations thrive when the people inside them are equipped to take initiative to do their best work as they see fit. Shouldn't an open leader's goal be complete and total empowerment of everyone, in all parts of the organization, doing all types of work?
+
+Not necessarily.
+
+Before we jump on the employee [empowerment bandwagon][3], we should explore the important connections between empowerment and innovation. That requires placing empowerment in context.
+
+As Allison Matlack has already demonstrated, employee investment in an organization's mission and activities—and employee _autonomy_ relative to those things—[can take several forms][4], from "managed" to "enabled" to "empowered." Sometimes, complete and total empowerment _isn't_ the most desirable type of investment an open leader would like to activate in a contributor. Projects are always changing. New challenges are always arising. As a result, the _type_ or _degree_ of involvement leaders can expect in different situations is always shifting. "Managed," "enabled," and "empowered" contributors exist simultaneously and dynamically, depending on the work they're performing (and that work's desired outcomes).
+
+So before we head down to the community center to win a game of buzzword bingo, let's examine the different types of work, how they function, and how they contribute to the overall innovation of a company. Let's refine what we mean by "managed," "enabled," and "empowered" work, and discuss why we need all three.
+
+### Managed, enabled, empowered
+
+First, let's consider and define each type of work activity.
+
+"Managed" work involves tasks that are coordinated using guidance, supervision, and direction in order to achieve specific outcomes. When someone works to coordinate _every_ part of _every_ task, we colloquially call that behavior "micro-managing." "Enabled" associates have the ability to direct themselves while working within boundaries (guidance), and they have access to the materials and resources (information, people, technologies, etc.) they require to problem-solve as they see fit. Lastly, "empowered" individuals _direct themselves_ within organizational limits, have access materials and resources, and also have the authority to represent their team or organization and make decisions about work on behalf using their best judgement, based on the former elements.
+
+Most important here is the idea that these concepts are _nested_ (see Figure 1). Because each level builds on the one before it, one cannot have the full benefit of "empowered" associates without also having clear guidance and direction ("managed"), and transparency of information and resources ("enabled"). What changes from level to level is the amount of managed or enabled activity that comes before it.
+
+Let's dive more deeply into the nature of those activities and discuss the roles leaders should play in each.
+
+#### Managed work
+
+"Managed" work is just that: work activity supervised and directed to some degree. The amount of management occurring in a situation is dynamic and depends on the activity itself. For instance, in the manufacturing economy, managed work is prominent. I'll call this "widget" work, the point of which is producing a widget the same way, every time. People need to perform this work according to consistent processes with consistent, standardized outcomes.
+
+Because this work requires consistency, it typically proceeds via explicit guidelines and policies (rules about cost, schedule, quality, quantity, process, and so on—characteristics applicable to all work to a greater or lesser degree). We can find examples of it in a variety of roles across many industries. Quite often, _any_ role in _any_ industry requires _some_ amount of this type of work. Examples include manufacturing precision machine parts, answering a customer support case within a specified timeframe for contractual reasons and with a friendly greeting, etc. In the software industry, a role that's _entirely_ like this would be a rarity, yet even these roles require some work of the "managed" type. For instance, consider the way a support engineer must respond to a case using a set of quality standards (friendliness, perhaps with a professional written tone, a branded signature line, adherence to a particular contractual agreement, usually responding within a particular time frame, etc.).
+
+"Management" is the best strategy when _work requirements include adhering to a consistent schedule, process, and quality._
+
+#### Enabled work
+
+As the amount of creativity a role requires _increases_, the amount of directed and "managed" work we find in that role _decreases_. Guidelines get broader, processes looser, schedules lengthened (I wish!). This is because what's required to "be creative" involves other types of work (and new degrees of transparency and authority along with them). Ron McFarland explains this in [his article on adaptive leadership][5]: Many challenges are ambiguous, as opposed to technical, and therefore require specific kinds of leadership.
+
+To take this idea one step further, we might say open leaders need to be _adaptive_ to how they view and implement the different kinds of work on their teams or in their organizations. "Enabling" associates means growing their skills and knowledge so they can manage themselves. The foundation for this type of activity is information—access to it, sharing it, and opportunities to independently use it to complete work activity. This is the kind of work Peter Drucker was referring to when he coined the term "knowledge work."
+
+Enabled work liberates associates from the constraints of managed work, though it still involves leaders providing considerable direction and guidance. Outcomes of this work might be familiar and normalized, but the _paths to achieving them_ are more open-ended than in managed work. Methods are more flexible and inclusive of individual preference and capability.
+
+"Enablement" is the best strategy when _objectives are well-defined and the outcomes are aligned with past outcomes and results_.
+
+#### Empowered work
+
+In "[Beyond Engagement][4]," Allison describes empowerment as a state in which employees have "access to all the information, training, tools, and connections to people and others teams that they need to do their best work, as well as a safe environment in which to do that work so they feel comfortable making their own decisions." In other words, empowerment is enablement with the opportunity for associates to _act using their own best judgment as it relates to shared understanding of team and organizational guidelines and objectives._
+
+"Empowerment" is the best strategy when _objectives and methods for achieving them are unclear and creative flexibility is necessary for defining them._ Often this work is focused on activities where problem definition and possible solutions (i.e. investigation, planning, and execution) are not well-defined.
+
+### Supporting innovation through managed, enabled, and empowered work
+
+The labels "managed," enabled," and "empowered" apply to different work at different times, and _all three_ are embedded in work activity at different times and in different tasks. That means leaders should be paying more attention to the work contributors are doing: the kind of work, its purpose, and its desired outcomes. We're now in a position to consider how _innovation_ factors into this equation.
+
+Frequently, people discuss the different modes of work by way of _contrast_. Most language about them connotes negativity: managed work is "the worst," while empowered work is "the best." On this view, the goal of any leadership practice should be to "move people along the scale" and create empowered contributors.
+
+However, just as the types of work sit on a continuum, so should our understanding of them, without this element of negation. Rather than seeing work as, for example, "_always empowered_" or "_always managed_," we should recognize that any role is a function of _all three types of work at the same time_, each to a varying degree. Think of the equation this way:
+
+> _Work = managed (x) + enabled (x) + empowered (x)_
+
+Note here that the more enabled and empowered the work is, the more potential there is for creativity when doing that work. This is because creativity (and the creative individual) requires information—consistently updated and "fresh" sources of information—used in conjunction with individual judgment and capacity for interpreting how to _use_ and _combine_ that information to define problems, ideate, and solve problems. Enabled and empowered work can increase inclusivity—that is, draw more closely on an individual's unique skills, perspectives, and talents because, by definition, those kinds of work are less managed and more guided. Open leadership clearly supports hiring for diversity exactly for the reason that it makes inclusivity so much richer. The ambiguity that's characteristic of the challenges we face in modern workplaces means that the work we do is ripe with potential for innovation—if we embrace risk and adapt our leadership styles to liberate it.
+
+In other words:
+
+> _Innovation = enabled (x) + empowered (x) / managed (x)_
+>
+> _The more enabled and empowered the work is, the more potential for innovation._
+
+Focusing on the importance of enabled work and empowered work is not to devalue managed work in any way. I would say that managed work creates a stable foundation on which creative (enabled and empowered) work can blossom. Imagine if all the work we did was empowered; our organizations would be completely chaotic, undefined, and ambiguous. Organizations need a degree of managed work in order to ensure some direction, some understanding of priorities, and some definition of "quality."
+
+Any role in any organization involves these three types of work occurring at various moments and in various situations. No job requires just one. As open leaders, we must recognize that work isn't an all-or-nothing, one-type-of-work-alone equation. We have to get better at understanding work in _these three different ways_ and using each one to the organization's advantage, depending on the situation.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/4/managed-enabled-empowered
+
+作者:[Heidi Hess von Ludewig (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_ControlNotDesirable.png?itok=nrXwSkv7
+[2]: https://www.entrepreneur.com/article/288340
+[3]: https://www.forbes.com/sites/lisaquast/2011/02/28/6-ways-to-empower-others-to-succeed/#5c860b365c62
+[4]: https://opensource.com/open-organization/18/10/understanding-engagement-and-empowerment
+[5]: https://opensource.com/open-organization/19/3/adaptive-leadership-review
diff --git a/sources/tech/20190411 Testing Small Scale Scrum in the real world.md b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md
new file mode 100644
index 0000000000..c39a787482
--- /dev/null
+++ b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md
@@ -0,0 +1,57 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Testing Small Scale Scrum in the real world)
+[#]: via: (https://opensource.com/article/19/4/next-steps-small-scale-scrum)
+[#]: author: (Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat) https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)
+
+Testing Small Scale Scrum in the real world
+======
+We plan to test the Small Scale Scrum framework in real-world projects
+involving small teams.
+![Green graph of measurements][1]
+
+Scrum is built on the three pillars of inspection, adaptation, and transparency. Our empirical research is really the starting point in bringing scrum, one of the most popular agile implementations, to smaller teams. As presented in the diagram below, we are now taking time to inspect this framework and principles by testing them in real-world projects.
+
+![small-scale-scrum-inspection.png][2]
+
+Progress in empirical process control
+
+We plan to implement Small Scale Scrum in several upcoming projects. Our test candidates are customers with real projects where teams of one to three people will undertake short-lived projects (ranging from a few weeks to three months) with an emphasis on quality and outputs. Individual projects, such as final-year projects (over 24 weeks) that are a capstone project after four years in a degree program, are almost exclusively completed by a single person. In projects of this nature, there is an emphasis on the project plan and structure and on maximizing the outputs that a single person can achieve.
+
+We plan to metricize and publish the results of these projects and hold several retrospectives with the teams involved. We are particularly interested in metrics centered on quality, with an emphasis both on quality in a software engineering context and on management: project management through the lifecycle with a customer, as well as management of the day-to-day team activities and the delivery, release, handover, and signoff process.
+
+Ultimately, we will retrospectively analyze the overall framework and principles and see if the Manifesto we envisioned holds up to the reality of executing a project with small numbers. From this data, we will produce the second version of Small Scale Scrum and begin a cyclic pattern of inspecting the model in new projects and adapting it again.
+
+We want to do all of this transparently. This series of articles is one window into the data, the insights, the experiences, and the reality of running scrum for small teams whose everyday challenges include context switching, communication, and the need for a quality delivery. A follow-up series of articles is planned to examine the outputs and help develop the second edition of Small Scale Scrum entirely in the community.
+
+We also plan to attend conferences and share our knowledge with the Agile community. Our first conference will be Agile 2019 where the evolution of Small Scale Scrum will be further explored as an Experience Report. We are advising colleges and sharing our structure and approach to managing and executing final-year projects. All our outputs will be freely available in the open source way.
+
+Given the changes to recommended team sizes in the Scrum Guide, our long-term goal and vision is to have the Scrum Guide reflect that teams of one or more people occupying one or more roles within a project are capable of following scrum.
+
+* * *
+
+_Leigh Griffin will present Small Scale Scrum at Agile 2019 in Washington, August 5-9, 2019 as an Experience Report. An expanded paper will be published on [Agile Alliance][3] to accompany this._
+
+* * *
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/next-steps-small-scale-scrum
+
+作者:[Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements)
+[2]: https://opensource.com/sites/default/files/small-scale-scrum-inspection.png (small-scale-scrum-inspection.png)
+[3]: https://www.agilealliance.org/
diff --git a/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md b/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md
new file mode 100644
index 0000000000..3136ed60a0
--- /dev/null
+++ b/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md
@@ -0,0 +1,131 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Designing posters with Krita, Scribus, and Inkscape)
+[#]: via: (https://opensource.com/article/19/4/design-posters)
+[#]: author: (Raghavendra Kamath https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath)
+
+Designing posters with Krita, Scribus, and Inkscape
+======
+Graphic designers can do professional work with free and open source
+tools.
+![Hand drawing out the word "code"][1]
+
+A few months ago, I was asked to design some posters for a local [Free Software Foundation][2] (FSF) event. Richard M. Stallman was [visiting][3] our country, and my friend [Abhas Abhinav][4] wanted to put up some posters and banners to promote his visit. I designed two posters for RMS's talk in Bangalore.
+
+I create my artwork with F/LOSS (free/libre open source software) tools. Although many artists successfully use free software to create artwork, I repeatedly encounter comments in discussion forums claiming that free software is not made for creative work. This article is my effort to detail the process I typically use to create my artwork and to spread awareness that one can do professional work with the help of F/LOSS tools.
+
+### Sketching some concepts
+
+After understanding Abhas' initial requirements, I sat down to visualize some concepts. I am not that great of a copywriter, so I started reading the FSF website to get some copy material. I needed to finish the project in two days' time, while simultaneously working on other projects. I started sketching some rough layouts. Of five layouts, I liked three. I scanned them using [Skanlite][5]; although these sketches were very rough and would need proper layout and design, they were a good base for me to work from.
+
+![Skanlite][6]
+
+![Poster sketches][7]
+
+![Poster sketch][8]
+
+I had three concepts:
+
+ * On the [FSF's website][2], I read about taking free software to new frontiers, which made me think about the idea of "conquering a summit." Free software work is also filled with adventures, in my opinion, and sometimes a task may seem like scaling a summit. So, I thought showing some mountaineers would resonate well.
+ * I also wanted to ask people to donate to FSF, so I sketched a hand giving a heart. I didn't feel any excitement in executing this idea, nevertheless, I kept it for backup in case I fell short of time.
+ * The FSF website has a hashtag for a donation program called #thankGNU, so I thought about using this as the basis of my design. Repurposing my hand visual, I replaced the heart with a bouquet of flowers that has a heart-shaped card saying #thankGNU!
+
+
+
+I know these are somewhat quick and safe concepts, but given the little time I had for the project, I went ahead with them.
+
+My design process mostly depends on the kind of look I need in the final image. I choose my software and process according to my needs. I may use one software from start to finish or combine various software packages to accomplish what I need. For this project, I used [Krita][9] and [Scribus][10], with some minimal use of [Inkscape][11].
+
+### Krita: Making the illustrations
+
+I imported my sketches into [Krita][12] and started adding more defined lines and shapes.
+
+For the first image, which has some mountaineers climbing, I used [vector layers][13] in Krita to add basic shapes and then used [Alpha Inheritance][14], which is similar to what is called Clipping Masks in Photoshop, to add texture and gradients inside the shapes. This helped me change the underlying base shape (in this case, the shape of the mountain in the first poster) anytime during the process. Krita also has a nice feature called the Reference Image tool, which lets you pin some references around your canvas (this helps a lot and saves many Alt+Tabs). Once I got the mountain how I wanted, according to the layout, I started painting the mountaineers and added more details for the ice and other features. I like grungy brushes and brushes that have a texture akin to chalks and sponges. Krita has a wide range of brushes as well as a brush engine, which makes replicating a traditional medium easier. After about 3.5 hours of painting, this image was ready for further processing.
+
+I wanted the second poster to have the feel of an old-style book illustration. So, I created the illustration with inked lines, somewhat similar to what we see in textbooks or novels. Inking in Krita is really a time saver; since it has stabilizer options, your wavy, hand-drawn lines will be smooth and crisp. I added a textured background and some minimal colors beneath the lines. It took me about three hours to do this illustration as well.
+
+![Poster][15]
+
+![Poster][16]
+
+### Scribus: Adding layout and typography
+
+Once my illustrations were ready, it was time to move on to the next part: adding text and other things to the layout. For this, I used Scribus. Both Scribus and Krita have CMYK support. In both applications, you can soft-proof your artwork and make changes according to the color profile you get from the printer. I mostly do my work in RGB and then, if required, I convert it to CMYK. Since most printers nowadays will do the color conversion, I don't think CMYK support is required; however, it's good to be able to work in CMYK with free software tools.
+
+I use open source fonts for my design work unless a client has licensed a closed font for use. A good way to browse for suitable fonts is the [Google Fonts repository][17]. (I have the entire repository cloned.) Occasionally, I also browse fonts on [Font Library][18], as it also has a nice collection. I decided to use Montserrat by Julieta Ulanovsky for the posters. Placing text was very quick in Scribus; once you create a style, you can apply it to any number of paragraphs or titles. This helped me place text in both designs quickly since I didn't have to re-create the text properties.
+
+![Poster in Scribus][19]
+
+I keep two layers in Scribus. One is for the illustrations, which are linked to the original files so if I change an illustration, it will update in Scribus. The other is for text and it's layered on top of the illustration layer.
+
+### Inkscape: QR codes
+
+I used Inkscape to generate a QR code that points to the Membership page on FSF's website. To generate a QR code, go to **Extensions > Render > Barcode > QR Code** in Inkscape's menu. The logos are also vector images; because Scribus supports vector images, you can directly paste things from Inkscape into Scribus. In a way, this helps in designing CMYK-based vector graphics.
+
+![Final poster design][20]
+
+![Final poster design][21]
+
+With the designs ready, I exported them to layered PDFs and sent them to Abhas for feedback. He asked me to add FSF India's logo, which I did and sent a new PDF to him.
+
+### Printing the posters
+
+From here, Abhas took over the printing part of the process. His local printer in Bangalore printed the posters in A2 size. He was kind enough to send me some pictures of them. The prints came out well, considering I didn't even convert them to CMYK nor do any color corrections or soft proofing, as I usually do when I get the color profile from my printer. My opinion is that 100% accurate CMYK printing is just a myth; there are too many factors to consider. If I really want perfect color reproduction, I leave this job to the printer, as they know their printer well and can do the conversion.
+
+![Final poster design][22]
+
+![Final poster design][23]
+
+### Accessing the source files
+
+When we discussed the requirements for these posters, Abhas told me to release the artwork under a Creative Commons license so others can re-use, modify, and share it. I am really glad he mentioned it. Anyone who wants to poke at the files can [download them from my Nextcloud drive][24]. If you have any improvements to make, please go ahead—and do remember to share your work with everybody.
+
+Let me know what you think about this article by [emailing me][25].
+
+* * *
+
+_[This article][26] originally appeared on [Raghukamath.com][27] and is republished with the author's permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/design-posters
+
+作者:[Raghavendra Kamath][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
+[2]: https://www.fsf.org/
+[3]: https://rms-tour.gnu.org.in/
+[4]: https://abhas.io/
+[5]: https://kde.org/applications/graphics/skanlite/
+[6]: https://opensource.com/sites/default/files/uploads/skanlite.png (Skanlite)
+[7]: https://opensource.com/sites/default/files/uploads/sketch-01.png (Poster sketches)
+[8]: https://opensource.com/sites/default/files/uploads/sketch-02.png (Poster sketch)
+[9]: https://krita.org/
+[10]: https://www.scribus.net/
+[11]: https://inkscape.org/
+[12]: /life/16/4/nick-hamilton-linuxfest-northwest-2016-krita
+[13]: https://docs.krita.org/en/user_manual/vector_graphics.html#vector-graphics
+[14]: https://docs.krita.org/en/tutorials/clipping_masks_and_alpha_inheritance.html
+[15]: https://opensource.com/sites/default/files/uploads/poster-illo-01.jpg (Poster)
+[16]: https://opensource.com/sites/default/files/uploads/poster-illo-02.jpg (Poster)
+[17]: https://fonts.google.com/
+[18]: https://fontlibrary.org/
+[19]: https://opensource.com/sites/default/files/uploads/poster-in-scribus.png (Poster in Scribus)
+[20]: https://opensource.com/sites/default/files/uploads/final-01.png (Final poster design)
+[21]: https://opensource.com/sites/default/files/uploads/final-02.png (Final poster design)
+[22]: https://opensource.com/sites/default/files/uploads/posters-in-action-01.jpg (Final poster design)
+[23]: https://opensource.com/sites/default/files/uploads/posters-in-action-02.jpg (Final poster design)
+[24]: https://box.raghukamath.com/cloud/index.php/s/97KPnTBP4QL4iCx
+[25]: mailto:raghu@raghukamath.com?Subject=designing-posters-with-free-software
+[26]: https://raghukamath.com/journal/designing-posters-with-free-software/
+[27]: https://raghukamath.com/
diff --git a/sources/tech/20190412 How libraries are adopting open source.md b/sources/tech/20190412 How libraries are adopting open source.md
new file mode 100644
index 0000000000..481c317ead
--- /dev/null
+++ b/sources/tech/20190412 How libraries are adopting open source.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How libraries are adopting open source)
+[#]: via: (https://opensource.com/article/19/4/software-libraries)
+[#]: author: (Don Watkins (Community Moderator) https://opensource.com/users/don-watkins)
+
+How libraries are adopting open source
+======
+Over the past decade, ByWater Solutions has expanded its business by
+advocating for open source software.
+![][1]
+
+Four years ago, I [interviewed Nathan Currulla][2], co-founder of ByWater Solutions, a major services and solutions provider for [Koha][3], a popular open source integrated library system (ILS). Since then, I've benefitted directly from his company's work, as my local [Chautauqua–Cattaraugus Library System][4] in western New York migrated from a proprietary software system to a [ByWater Systems][5]' Koha implementation.
+
+When I learned that ByWater is celebrating its 10th anniversary in 2019, I decided to reach out to Nathan to learn how the company has grown over the last decade. (Our remarks have been edited slightly for grammar and clarity.)
+
+**Don Watkins** : How has ByWater grown in the last 10 years?
+
+**Nathan Currulla** : Over the last 10 years, ByWater has grown by leaps and bounds. By the end of 2009, we supported five libraries with five contracts. That number shot up to 117 libraries made up of 46 contracts by the end of 2010. We now support over 1,500 libraries and 450+ contracts. We also went from having two team members to 25 in the past 10 years. The service-focused processes we have developed for migrating new libraries have been adopted by other library companies, and we have become a real market disruptor, putting pressure on other companies to provide better support and lower software subscription fees for libraries using their products. This was our goal from the outset, to change the way libraries work with the technology companies who support them, whomever they may be.
+
+Since the beginning, we have been rooted in the future, while legacy systems are still rooted in the past. Ten years ago, it was a real struggle for us to overcome the barriers presented by the fear of change in libraries and the outdated perceptions of open source in general. Now, although we still have to deal with change aversion, there are enough users to disprove any misinformation that exists regarding Koha and open source. The conversation is easier now than it ever was. That said, despite the fact that the ideals and morals held by open source are directly aligned with those of libraries, we still have a long way to go until open source technologies are the norm in this marketplace.
+
+**DW** : What kinds of libraries do you support?
+
+**NC** : Our partners are made up of a diverse set of library types. About 35% of our partners are public libraries, 35% are academic, and the remaining 30% are made up of museum, corporate, law, school, and other special library types. Because of Koha's flexibility and diverse feature set, we can successfully provide services to a variety of library types despite the current trend of consolidation in the library technology marketplace.
+
+**DW** : How does ByWater work with and help the Koha community?
+
+**NC** : We are working with the rest of the Koha community to streamline workflows and further improve the process of submitting and accepting new features into Koha. The vast majority of the community is made up of volunteers; by providing paid positions within the community, we can dedicate more time to the quality assurance and sign-off processes needed to stay competitive with other systems, both open source and proprietary. The number of new features submitted to the Koha community for each release is staggering. The more resources we have to get those features out to our users, the faster Koha can evolve and further shape the library-technology marketplace.
+
+**DW** : When we talked in 2015, ByWater had recently partnered with library solutions provider [EBSCO][6]. What initiatives are you working on now with EBSCO?
+
+**NC** : Originally, Catalyst IT of New Zealand worked with EBSCO to create the EBSCO Discovery Service (EDS) plugin that is used by many of our customers. Unlike most discovery systems that sit on top of a library's online public access catalog (OPAC), Koha's integration with EDS uses the Koha OPAC as the frontend, with EDS feeding data into the Koha interface. This allows libraries to choose which interface they prefer (EDS or Koha as the frontend) and provides a unified library service platform (LSP). EBSCO has always been a great partner and has always shown a strong willingness to contribute to the open source initiative. They understand the importance of having fewer barriers between the ILS and the libraries' other content to provide a seamless interface to the end user.
+
+Outside of Koha, ByWater is working closely with EBSCO to provide implementation, training, and support services for its [Folio LSP][7]. Folio is an open source LSP for academic libraries with the intent to provide even more seamless integration with other content providers using an extensible, open app marketplace. ByWater is developing a separate department for the implementation and ongoing support of Folio, with EBSCO providing hosting services to our mutual customers. The fact that EBSCO is investing millions in the creation of an open source platform lends further credence to the importance and validity of open source technologies in the library market.
+
+**DW** : What other projects are you supporting? How do they complement Koha?
+
+**NC** : ByWater also supports Libki, an open source, web-based kiosk and print management solution; Coral, an open source electronic resource management (ERM) solution; and Folio. Libki and Coral seamlessly integrate with Koha to provide a unified LSP. Folio may work in cooperation with Koha on some functionality, but it is too early to tell what that will specifically look like.
+
+ByWater also offers Koha Klassmates, a program that provides free installations of Koha to over 40 library schools in the US to familiarize the next generation of librarians with open source and the tools they will use daily in the workforce. We are also rolling out a program called Koha University, which will mentor computer science students in writing and submitting code to Koha, one of the largest open source projects in the world. This will give them experience in working in such an environment and provide the opportunity for their names to be listed as official Koha contributors.
+
+**DW** : What is ByWater's strategic focus over the next five years?
+
+**NC** : ByWater will continue offering top-rated support to our ever-growing customer base while leveraging new open source opportunities to disprove misinformation surrounding the use of open source solutions in libraries. We will focus on making open source the norm and educating libraries that could be taking advantage of these technologies but do not because of outdated information and perceptions.
+
+Additionally, our research and development efforts will be focused on analyzing machine learning for advanced education and support services. We also want to work closely with our partners on advancing the marketing efforts (through software) for small and large libraries to help cement their roles as community centers by marketing inventory, programs, and library events. We want to be community builders on different levels, both for our partner libraries and with the open source communities that we are involved in.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/software-libraries
+
+作者:[Don Watkins (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_opencardcatalog.png?itok=f9PyJEe-
+[2]: https://opensource.com/business/15/5/bywater-solutions-empowering-library-tech
+[3]: http://www.koha.org/
+[4]: https://catalog.cclsny.org/
+[5]: https://bywatersolutions.com/
+[6]: https://www.ebsco.com/
+[7]: https://www.ebsco.com/products/ebsco-folio-library-services
diff --git a/sources/tech/20190412 Joe Doss- How Do You Fedora.md b/sources/tech/20190412 Joe Doss- How Do You Fedora.md
new file mode 100644
index 0000000000..bc642fb1d6
--- /dev/null
+++ b/sources/tech/20190412 Joe Doss- How Do You Fedora.md
@@ -0,0 +1,122 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Joe Doss: How Do You Fedora?)
+[#]: via: (https://fedoramagazine.org/joe-doss-how-do-you-fedora/)
+[#]: author: (Charles Profitt https://fedoramagazine.org/author/cprofitt/)
+
+Joe Doss: How Do You Fedora?
+======
+
+![Joe Doss][1]
+
+We recently interviewed Joe Doss on how he uses Fedora. This is part of a [series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming an interviewee.
+
+### Who is Joe Doss?
+
+Joe Doss lives in Chicago, Illinois, USA, and his favorite food is pizza. He is the Director of Engineering Operations at Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.”
+
+His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.”
+
+At Kenna, Doss’ group makes use of Fedora and [Ansible][4]: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.”
+
+Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street in the Hyde Park neighborhood of Chicago when she was 7 months old. She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”
+
+![Tibby the cute cat!][5]
+
+### The Fedora community
+
+Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company they were using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.
+
+When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to he found the community very welcoming and super supportive.
+
+One of the ideas he had to improve the community was to unite the various projects and teams under one bug tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”
+
+Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”
+
+Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what replacing 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release involves.
+
+Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a Fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”
+
+Doss is the [COPR][6] Fedora, RHEL, and CentOS package maintainer for [WireGuard VPN][7]. “The CentOS repo just went over 60 thousand downloads last month which is pretty awesome.”
+
+### What Hardware?
+
+Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U Server that runs Fedora 29 Server with Openshift OKD installed on it. His laptops are all Lenovo. “For laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”
+
+### What Software?
+
+Doss uses GNOME 3 as his preferred desktop on Fedora Workstation. “I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an [Ansible playbook][8] for setting up my workstation.”
+
+### Ansible
+
+I asked Doss if he had advice for people trying to learn Ansible.
+
+“Start small. Automate the stuff that makes your life easier, but don’t over complicate it. [Ansible Galaxy][9] is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.
+
+“I have helped a lot of my coworkers that have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of [Ansible for Devops][10] by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it everyday. #ansible on Freenode is a great resource as well along with the [official Ansible docs][11].”
+
+Doss also said, “Knowing what to automate is most likely the most difficult thing to master without over complicating things. Debugging complex playbooks and roles is a close second.”
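+
+As a minimal sketch of that “start small” advice, a first playbook might do nothing more than install a couple of everyday tools. This is an illustrative assumption of mine, not Doss’ actual playbook; the package names are arbitrary examples:
+
+```
+---
+- hosts: localhost
+  become: true
+  tasks:
+    # Install a couple of everyday tools with the dnf module
+    - name: Install a couple of everyday tools
+      dnf:
+        name: [htop, tmux]
+        state: present
+```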
+
+### Home lab
+
+He recommended setting up a home lab. “At Kenna and at home I use [Vagrant][12] with the [Vagrant-libvirt plugin][13] for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run _vagrant provision_ to run your playbook. Quick feedback loop and the ability to burn down your Vagrant VM and start over quickly is an amazing workflow. Below is a sample Vagrant file that I keep handy to spin up a Fedora VM to test my playbooks.”
+
+```
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+Vagrant.configure(2) do |config|
+  config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
+  config.ssh.forward_agent = true
+
+  config.vm.define "f29", autostart: false do |f29|
+    f29.vm.box = "fedora/29-cloud-base"
+    f29.vm.hostname = "f29.example.com"
+    f29.vm.provider "libvirt" do |vm|
+      vm.memory = 2048
+      vm.cpus = 2
+      vm.driver = "kvm"
+      vm.nic_model_type = "e1000"
+    end
+  end
+
+  config.vm.synced_folder '.', '/vagrant', disabled: true
+
+  config.vm.provision "ansible" do |ansible|
+    ansible.groups = {}
+    ansible.playbook = "playbooks/main.yml"
+    ansible.inventory_path = "inventory/development"
+    ansible.extra_vars = {
+      ansible_python_interpreter: "/usr/bin/python3"
+    }
+    # ansible.verbose = 'vvv'
+  end
+end
+```
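+
+With a Vagrantfile like this in place (and the vagrant-libvirt plugin installed), the feedback loop Doss describes would typically be `vagrant up f29 --provider=libvirt` to boot the VM, `vagrant provision f29` after each playbook change, and `vagrant destroy f29` to burn it down and start over.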
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/joe-doss-how-do-you-fedora/
+
+作者:[Charles Profitt][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/cprofitt/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/IMG_20181029_121944-816x345.jpg
+[2]: https://fedoramagazine.org/tag/how-do-you-fedora/
+[3]: https://fedoramagazine.org/submit-an-idea-or-tip/
+[4]: https://ansible.com
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/IMG_20181231_110920_fixed.jpg
+[6]: https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/
+[7]: https://www.wireguard.com/install/
+[8]: https://github.com/jdoss/fedora-workstation
+[9]: https://galaxy.ansible.com/
+[10]: https://www.ansiblefordevops.com/
+[11]: https://docs.ansible.com/ansible/latest/index.html
+[12]: http://www.vagrantup.com/
+[13]: https://github.com/vagrant-libvirt/vagrant-libvirt%20plugin
diff --git a/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md b/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md
new file mode 100644
index 0000000000..1e1b451500
--- /dev/null
+++ b/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 2)
+[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2)
+[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
+
+Linux Server Hardening Using Idempotency with Ansible: Part 2
+======
+
+![][1]
+
+[Creative Commons Zero][2]
+
+In the first part of this series, we introduced something called idempotency, which can provide ongoing improvements to your server estate’s security posture. In this article, we’ll get a little more hands-on with a look at some specific Ansible examples.
+
+### Shopping List
+
+You will need some Ansible experience before being able to make use of the information that follows. Rather than run through the installation and operation of Ansible, let’s instead look at some of the idempotency playbook’s content.
+
+As mentioned earlier, there might be hundreds of individual system tweaks to make on just one type of host, so we’ll only explore a few suggested Ansible tasks and how I like to structure the Ansible role responsible for compliance and hardening. You have hopefully picked up on the fact that the devil is in the detail, and that you should absolutely, unequivocally understand, in as much detail as possible, the permutations of making changes to your server OS.
+
+Be aware that I will mix and match between OSs in the Ansible examples that follow. Many examples are OS agnostic but as ever you should pay close attention to the detail. Obvious changes, like “apt” to “yum” for the package manager, are a given.
+
+Inside a “tasks” file under our Ansible “hardening” role, or whatever you decide to name it, these named tasks represent the areas of a system with some example code to offer food for thought. In other words, each section that follows will probably be a single YAML file, such as “accounts.yml”, and each will vary in length and complexity.
+
+Let’s look at some examples with ideas about what should go into each file to get you started. The contents of each file that follow are just the very beginning of a checklist and the following suggestions are far from exhaustive.
+
+#### SSH Server
+
+This is the application that almost all engineers immediately look to harden when asked to secure a server. It makes sense as SSH (the OpenSSH package in many cases) is usually only one of a few ports intentionally prised open and of course allows direct access to the command line. The level of hardening that you should adopt is debatable. I believe in tightening the daemon as much as possible without disruption and would usually make around fifteen changes to the standard OpenSSH server config file, “sshd_config”. These changes would include pulling in a MOTD banner (Message Of The Day) for legal compliance (warning of unauthorised access and prosecution), enforcing the permissions on the main SSHD files (so they can’t be tampered with by lesser-privileged users), ensuring the “root” user can’t log in directly, setting an idle session timeout and so on.
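+
+To give a flavour of those changes, here is a hedged sketch of two such tweaks as Ansible tasks. The directives and values shown are illustrative assumptions of mine, not a definitive policy, so check them against your own compliance requirements:
+
+```
+# Refuse direct logins as the "root" user over SSH
+- name: Stop the root user logging in directly over SSH
+  lineinfile: dest=/etc/ssh/sshd_config regexp="^#?PermitRootLogin" line="PermitRootLogin no" state=present
+
+# Display a legal warning banner before authentication
+- name: Pull in a MOTD banner for legal compliance
+  lineinfile: dest=/etc/ssh/sshd_config regexp="^#?Banner" line="Banner /etc/issue.net" state=present
+```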
+
+Here’s a very simple Ansible example that you can repeat within other YAML files later on, focusing on enforcing file permissions on our main, critical OpenSSH server config file. Note that you should carefully check every single file that you hard-reset permissions on before doing so. This is because there are horrifyingly subtle differences between Linux distributions. Believe me when I say that it’s worth checking first.
+
+```
+- name: Hard reset permissions on sshd server file
+  file: owner=root group=root mode=0600 path=/etc/ssh/sshd_config
+```
+
+To check existing file permissions I prefer this natty little command for the job:
+
+```
+$ stat -c "%a %n" /etc/ssh/sshd_config
+
+644 /etc/ssh/sshd_config
+```
+
+As our “stat” command shows, our Ansible snippet would be an improvement on the current permissions, because 0600 means only the “root” user can read and write to that file. Other users or groups can’t even read that file, which is of benefit because if we’ve made any mistakes in securing SSH’s config, they can’t be discovered as easily by less-privileged users.
+
+#### System Accounts
+
+At a simple level this file might define how many users should be on a standard server. Usually a number of users who are admins have home directories with public keys copied into them. However, this file might also include simple checks that the root user is the only system user with the all-powerful superuser UID 0, in case an attacker has altered user accounts on the system, for example.
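+
+As a minimal sketch of that last check, assuming a shell-based approach (the task names and failure message here are my own):
+
+```
+# List every account in /etc/passwd whose UID is 0
+- name: Collect any accounts holding the superuser UID 0
+  shell: "awk -F: '$3 == 0 {print $1}' /etc/passwd"
+  register: uid_zero_accounts
+  changed_when: false
+
+# Abort the play if anything other than "root" turned up
+- name: Fail the play if anyone other than root holds UID 0
+  fail: msg="Unexpected UID 0 account(s) found"
+  when: uid_zero_accounts.stdout != "root"
+```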
+
+#### Kernel
+
+Here’s a file that can grow arms and legs. Typically I might make between fifteen and twenty sysctl changes on an OS which I’m satisfied won’t be disruptive to current and, all going well, any future uses of a system. These changes are again at your discretion and, as at my last count there are between five hundred and a thousand configurable kernel options using sysctl on a Debian/Ubuntu box, you might opt to split these many changes up into different categories.
+
+Such categories might include network stack tuning, stopping core dumps from filling up disk space, disabling IPv6 entirely and so on. Here’s an Ansible example of logging network packets that shouldn’t be routed out onto the Internet, namely those packets using spoofed private IP addresses, called “martians”.
+
+```
+- name: Keep track of traffic that shouldn’t be routed onto the Internet
+  lineinfile: dest="/etc/sysctl.conf" line="{{item.network}}" state=present
+  with_items:
+    - { network: 'net.ipv4.conf.all.log_martians = 1' }
+    - { network: 'net.ipv4.conf.default.log_martians = 1' }
+```
+
+Pay close attention to the fact that you probably don’t want to use the file “/etc/sysctl.conf” but instead create a custom file under the directory “/etc/sysctl.d/” or similar. Again, check your OS’s preference, usually in the comments of the pertinent files. If you’ve not had martian packet logging enabled before, then type “dmesg” (sometimes only as the “root” user) to view kernel messages; after a week or two of logging being in place you’ll probably see some traffic polluting your logs. It’s much better to know how attackers are probing your servers than not. A few log entries for reference can only be of value. When it comes to looking after servers, ignorance is certainly not bliss.
+
+#### Network
+
+As mentioned, you might want to include hardening of the network stack within your kernel.yml file, depending on how many entries there are, or simply for greater clarity. For your network.yml file, have a think about stopping old-school broadcast attacks from flooding your LAN and, in addition, ICMP oddities from changing your routing.
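+
+As a starting point, a hedged sketch of two such tasks using Ansible’s sysctl module might look like this; the two parameters shown are common hardening suggestions of mine, not a definitive list:
+
+```
+# Don't answer pings sent to a broadcast address (smurf attacks)
+- name: Ignore old-school broadcast (smurf) pings
+  sysctl: name=net.ipv4.icmp_echo_ignore_broadcasts value=1 state=present
+
+# Don't accept ICMP redirects that could quietly alter our routing
+- name: Refuse ICMP redirects on all interfaces
+  sysctl: name=net.ipv4.conf.all.accept_redirects value=0 state=present
+```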
+
+#### Services
+
+Usually I would stop or start miscellaneous system services (and potentially applications) within this Ansible file. If there weren’t many services, then rather than also using a “cron.yml” file specifically for “cron” hardening, I’d include those here too.
+
+There’s a bundle of changes you can make around cron’s file permissions etc. If you haven’t come across it, on some OSs, there’s a “cron.deny” file for example which blacklists certain users from accessing the “crontab” command. Additionally you also have a multitude of cron directories under the “/etc” directory which need permissions enforced and improved, indeed along with the file “/etc/crontab” itself. Once again check with your OS’s current settings before altering these or “bad things” ™ might happen to your uptime.
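+
+For example, here is a hedged sketch of enforcing permissions on “/etc/crontab” and a couple of the cron directories. The modes shown are common benchmark suggestions rather than gospel; as warned above, verify them against your OS’s current settings first:
+
+```
+# Only root should be able to read or edit the system crontab
+- name: Enforce ownership and permissions on /etc/crontab
+  file: path=/etc/crontab owner=root group=root mode=0600
+
+# Stop lesser-privileged users listing the cron directories
+- name: Lock down a couple of the cron directories
+  file: path={{ item }} owner=root group=root mode=0700 state=directory
+  with_items:
+    - /etc/cron.d
+    - /etc/cron.daily
+```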
+
+In terms of miscellaneous services being purposefully stopped, and certain services, such as system logging (which is imperative to a healthy and secure system), being kept running, have a quick look at the Ansible below which I might put in place for syslog as an example.
+
+```
+- name: Insist syslog is definitely installed (so we can receive upstream logs)
+  apt: name=rsyslog state=present
+
+- name: Make sure that syslog starts after a reboot
+  service: name=rsyslog state=started enabled=yes
+```
+
+#### IPtables
+
+The venerable Netfilter, which from within the Linux kernel offers the IPtables software firewall the ability to filter network packets in an exceptionally sophisticated manner, is a must if you can enable it sensibly. If you’re confident that each of your varying flavours of servers (whether it’s a webserver, database server and so on) can use the same IPtables config, then copy a file onto the filesystem via Ansible and make sure it’s always loaded up using this YAML file.
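+
+A hedged sketch of that copy-and-load approach might look like the following; the file paths are my own assumptions, and note that persisting rules across reboots varies by distribution:
+
+```
+# Ship our pre-built ruleset onto the host, readable by root only
+- name: Copy our pre-built IPtables ruleset onto the filesystem
+  copy: src=iptables.rules dest=/etc/iptables.rules owner=root group=root mode=0600
+
+# Load the ruleset into the kernel with iptables-restore
+- name: Make sure the ruleset is loaded up
+  shell: iptables-restore < /etc/iptables.rules
+```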
+
+Next time, we’ll wrap up our look at specific system suggestions and talk a little more about how the playbook might be used.
+
+Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][3]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2
+
+作者:[Chris Binnie][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/chrisbinnie
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1280.jpg?itok=PHazitpd
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.devsecops.cc/
diff --git a/sources/tech/20190412 What-s your primary backup strategy for the -home directory in Linux.md b/sources/tech/20190412 What-s your primary backup strategy for the -home directory in Linux.md
new file mode 100644
index 0000000000..e51ae79681
--- /dev/null
+++ b/sources/tech/20190412 What-s your primary backup strategy for the -home directory in Linux.md
@@ -0,0 +1,36 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What's your primary backup strategy for the /home directory in Linux?)
+[#]: via: (https://opensource.com/poll/19/4/backup-strategy-home-directory-linux)
+[#]: author: ( https://opensource.com/users/dboth/users/don-watkins/users/greg-p)
+
+What's your primary backup strategy for the /home directory in Linux?
+======
+
+![Linux keys on the keyboard for a desktop computer][1]
+
+I frequently upgrade to newer releases of Fedora, which is my primary distribution. I also upgrade other distros but much less frequently. I have also had many crashes of various types over the years, including a large portion of self-inflicted ones. Past experience with data loss has made me very aware of the need for good backups.
+
+I back up many parts of my Linux hosts but my **/home** directory is especially important. Losing any of the data in **/home** on my primary workstation due to a crash or an upgrade could be disastrous.
+
+My backup strategy for **/home** is to back up everything every day. There are other things on every Linux system to back up but **/home** is the center of everything I do on my workstation. I keep my documents and financial records there as well as off-line emails, address books for different apps, calendar and task data, and most importantly for me these days, the working copies of my next two Linux books.
+
+I can think of a number of approaches to doing backups and restores of **/home** which would allow an easy and complete recovery after a data loss ranging from a single file to the entire directory. Which approach do you take? Which tools do you use?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/poll/19/4/backup-strategy-home-directory-linux
+
+作者:[][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth/users/don-watkins/users/greg-p
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
diff --git a/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md b/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md
new file mode 100644
index 0000000000..657464affb
--- /dev/null
+++ b/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Working with Microsoft Exchange from your Linux Desktop)
+[#]: via: (https://itsfoss.com/microsoft-exchange-linux-desktop/)
+[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
+
+Working with Microsoft Exchange from your Linux Desktop
+======
+
+Recently I had to do some research (and even magic) to be able to work on my Ubuntu Desktop with Exchange Mail Server from my current employer. I am going to share my experience with you.
+
+### Microsoft Exchange on Linux desktop
+
+I guess many readers might feel confused, I mean, it shouldn’t be that hard if you simply use [Thunderbird][1] or any other [Linux email client][2] with your Office365 Exchange Account, right? Well, for better or for worse it was not this case for me.
+
+Here’s my ordeal and what I did to make Microsoft Exchange work on my Linux desktop.
+
+![][3]
+
+#### The initial problem, no Office365
+
+The first problem encountered in my situation was that we don’t currently use Office365, as probably the majority of people do for hosting their Exchange accounts; we currently use an on-premises Exchange server, and a very old version of it at that.
+
+So, this means I didn’t have the luxury of using the automatic configuration that comes in the majority of email clients to simply connect to Office365.
+
+#### Webmail is always an option… right?
+
+The short answer is yes. However, as I mentioned, we are using Exchange 2010, so the webmail interface is not only outdated, it won’t even allow you to have a decent email signature, as it has a character limit in the webmail configuration. So I needed to use an email client if I really wanted to be able to use email the way I needed.
+
+#### Another problem, I am picky for my email client
+
+I am a regular Google user; I have been using GMail for the past 14 years as my personal email, so I really like how it looks and works. I actually use the webmail as I don’t like to be tied to my email client or even my computer device; if something happens and I need to switch to a newer device, I don’t want to have to copy things over. I just want things to be there waiting for me to use them.
+
+This led to me not liking the Thunderbird, K-9 or Evolution mail clients. All of these are capable of being connected to Exchange servers (one way or the other) but again, they don’t meet the standard of a clean, easy and modern GUI I wanted, plus they couldn’t even manage my Exchange calendar well (which was a real deal breaker for me).
+
+#### Found some options as email clients!
+
+After some other research I found there were a couple of options for email clients that I could use and that actually would work the way I expected.
+
+These were: [Hiri][4], which had a very modern and innovative user interface and had Exchange Server capabilities and there also was [Mailspring][5] which is a fork of an old foe ([Nylas Mail][6]) and which was my real favorite.
+
+However, Mailspring couldn’t connect directly to an Exchange server (using Exchange’s protocol) unless you use Office365; it required [IMAP][7] (another luxury!), and the IT department at my office was reluctant to activate IMAP for “security reasons”.
+
+Hiri is a good option but it’s not free.
+
+#### No IMAP, no Office365, game over? Not yet!
+
+I have to confess, I was really ready to give up and simply use the old webmail and learn to live with it. However, I gave my research capabilities one last shot and found a possible solution: what if I had a way to put a “man in the middle”? What if I was able to make IMAP run locally on my computer while my computer simply pulled the emails via the Exchange protocol? It was a long shot, but it could work…
+
+So I started looking here and there and found [DavMail][8], which works as a gateway to “talk” with an Exchange server and then locally provide you whatever you need in order to use it. Basically, it was like a “translator” between my computer and the Exchange server, which then provided me with whatever service I needed.
+
+![DavMail Settings][9]
+
+So basically I only had to give DavMail my Exchange Server’s URL (even OWA URL) and set whatever ports I wanted on my local computer to be the new ports where my email client could connect.
+
+This way I was free to use basically ANY client I wanted; at least, any client which was capable of using the IMAP protocol would work, as long as I configured it to use the same ports I had set up as my local ports.
+
+![Mailspring working my office’s on premises Exchange. Information has been blurred due to non-disclosure agreement at my office.][10]
+
+And that was it! I was able to use Mailspring (which is my preferred choice of email client) under my unfavorable conditions.
+
+#### Bonus point: this is a multi-platform solution!
+
+What’s best is that this solution will work for any platform! So if you have the same problem while using Windows or macOS, DavMail has a version for all tastes!
+
+![avatar][11]
+
+### Helder Martins
+
+Systems Engineer, technology evangelist, Ubuntu user, Linux enthusiast, father and husband.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/microsoft-exchange-linux-desktop/
+
+作者:[It's FOSS Community][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/itsfoss/
+[b]: https://github.com/lujun9972
+[1]: https://www.thunderbird.net/en-US/
+[2]: https://itsfoss.com/best-email-clients-linux/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-exchange-linux-desktop.png?resize=800%2C450&ssl=1
+[4]: https://www.hiri.com/
+[5]: https://getmailspring.com/
+[6]: https://itsfoss.com/n1-open-source-email-client/
+[7]: https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol
+[8]: http://davmail.sourceforge.net/
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings.png?resize=800%2C597&ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings-1.jpg?ssl=1
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/helder-martins-1.jpeg?ssl=1
diff --git a/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md b/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md
new file mode 100644
index 0000000000..c30c286142
--- /dev/null
+++ b/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md
@@ -0,0 +1,354 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (12 Single Board Computers: Alternative to Raspberry Pi)
+[#]: via: (https://itsfoss.com/raspberry-pi-alternatives/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+12 Single Board Computers: Alternative to Raspberry Pi
+======
+
+_**Brief: Looking for a Raspberry Pi alternative? Here are some other single board computers to satisfy your DIY cravings.**_
+
+Raspberry Pi is the most popular single board computer right now. You can use it for your DIY projects, as a cost-effective system for learning to code, or you can run [media server software][1] on it to stream media at your convenience.
+
+You can do a lot of things with Raspberry Pi but it is not the ultimate solution for all kinds of tinkerers. Some might be looking for a cheaper board and some might be on the lookout for a powerful one.
+
+Whatever the case may be, we do need Raspberry Pi alternatives for a variety of reasons. So, in this article, we will talk about twelve single board computers that we think are the best Raspberry Pi alternatives.
+
+![][2]
+
+### Raspberry Pi alternatives to satisfy your DIY craving
+
+The list is in no particular order of ranking. Some of the links here are affiliate links. Please read our [affiliate policy][3].
+
+#### 1\. Onion Omega2+
+
+![][4]
+
+For just **$13** , the Omega2+ is one of the cheapest IoT single board computers you can find out there. It runs on LEDE (Linux Embedded Development Environment) Linux OS – a distribution based on [OpenWRT][5].
+
+Its form factor, cost, and the flexibility that comes from running a customized version of Linux OS make it a perfect fit for almost any type of IoT application.
+
+You can find the [Onion Omega kit on Amazon][6] or order it from their own website, though that would cost you extra in shipping charges.
+
+**Key Specifications**
+
+ * MT7688 SoC
+ * 2.4 GHz IEEE 802.11 b/g/n WiFi
+ * 128 MB DDR2 RAM
+ * 32 MB on-board flash storage
+ * MicroSD Slot
+ * USB 2.0
+ * 12 GPIO Pins
+
+
+
+[Visit WEBSITE
+][7]
+
+#### 2\. NVIDIA Jetson Nano Developer Kit
+
+![][8]
+
+This is a very unique and interesting Raspberry Pi alternative from NVIDIA for just **$99**. It’s not something that everyone can make use of – it is meant for a specific group of tinkerers or developers.
+
+NVIDIA explains it for the following use-case:
+
+> NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts.
+>
+> nvidia
+
+So, basically, if you are into AI and deep learning, you can make use of the developer kit. If you are curious, the production compute module of this will be arriving in June 2019.
+
+**Key Specifications:**
+
+ * CPU: Quad-core ARM A57 @ 1.43 GHz
+ * GPU: 128-core Maxwell
+ * RAM: 4 GB 64-bit LPDDR4 25.6 GB/s
+ * Display: HDMI 2.0
+ * 4 x USB 3.0 and eDP 1.4
+
+
+
+[VISIT WEBSITE
+][9]
+
+#### 3\. ASUS Tinker Board S
+
+![][10]
+
+ASUS Tinker Board S isn’t the most affordable Raspberry Pi alternative at **$82** (on [Amazon][11]) but it is a powerful alternative. It features the same 40-pin connector that you’d normally find in the standard Raspberry Pi 3 Model but offers a powerful processor and a GPU. Also, the size of the Tinker Board S is exactly the same as a standard Raspberry Pi 3.
+
+The main highlight of this board is the presence of 16 GB of [eMMC][12] (in layman’s terms, it has got SSD-like storage on board that makes it faster to work with).
+
+**Key Specifications:**
+
+ * Rockchip Quad-Core RK3288 processor
+ * 2 GB DDR3 RAM
+ * Integrated Graphics Processor
+ * ARM® Mali™-T764 GPU
+ * 16 GB eMMC
+ * MicroSD Card Slot
+ * 802.11 b/g/n, Bluetooth V4.0 + EDR
+ * USB 2.0
+ * 28 GPIO pins
+ * HDMI Interface
+
+
+
+[Visit website
+][13]
+
+#### 4\. ClockworkPi
+
+![][14]
+
+Clockwork Pi is usually a part of the [GameShell Kit][15] if you are looking to assemble a modular retro gaming console. However, you can purchase the board separately for $49.
+
+Its compact size, WiFi connectivity, and the presence of micro HDMI port make it a great choice for a lot of things.
+
+**Key Specifications:**
+
+ * Allwinner R16-J Quad-core Cortex-A7 CPU @1.2GHz
+ * Mali-400 MP2 GPU
+ * RAM: 1GB DDR3
+ * WiFi & Bluetooth v4.0
+ * Micro HDMI output
+ * MicroSD Card Slot
+
+
+
+[visit website
+][16]
+
+#### 5\. Arduino Mega 2560
+
+![][17]
+
+If you are into robotics projects or you want something for a 3D printer – the Arduino Mega 2560 will be a handy replacement for Raspberry Pi. Unlike Raspberry Pi, it is based on a microcontroller and not a microprocessor.
+
+It would cost you $38.50 on their [official site][18] and around [$33 on Amazon][19].
+
+**Key Specifications:**
+
+ * Microcontroller: ATmega2560
+ * Clock Speed: 16 MHz
+ * Digital I/O Pins: 54
+ * Analog Input Pins: 16
+ * Flash Memory: 256 KB of which 8 KB used by bootloader
+
+
+
+[visit website
+][18]
+
+#### 6\. Rock64 Media Board
+
+![][20]
+
+For the same investment as you would make on a Raspberry Pi 3 B+, you will be getting a faster processor and double the memory on the Rock64 Media Board. It also offers a cheaper alternative to Raspberry Pi if you want the 1 GB RAM model – which would cost $10 less.
+
+Unlike Raspberry Pi, you do not have wireless connectivity support here but the presence of USB 3.0 and HDMI 2.0 does make a good difference if that matters to you.
+
+**Key Specifications:**
+
+ * Rockchip RK3328 Quad-Core ARM Cortex A53 64-Bit Processor
+ * Supports up to 4GB 1600MHz LPDDR3 RAM
+ * eMMC module socket
+ * MicroSD Card slot
+ * USB 3.0
+ * HDMI 2.0
+
+
+
+[visit website
+][21]
+
+#### 7\. Odroid-XU4
+
+![][22]
+
+Odroid-XU4 is the perfect alternative to Raspberry Pi if you have room to spend a little more ($80-$100 or even lower, depending on the store/availability).
+
+It is indeed a powerful replacement and technically a bit smaller in size. The support for eMMC and USB 3.0 makes it faster to work with.
+
+**Key Specifications:**
+
+ * Samsung Exynos 5422 Octa ARM Cortex™-A15 Quad 2Ghz and Cortex™-A7 Quad 1.3GHz CPUs
+ * 2Gbyte LPDDR3 RAM
+ * GPU: Mali-T628 MP6
+ * USB 3.0
+ * HDMI 1.4a
+ * eMMC 5.0 module socket
+ * MicroSD Card Slot
+
+
+
+[visit website
+][23]
+
+#### 8\. **PocketBeagle**
+
+![][24]
+
+It is an incredibly small SBC – almost similar to the Raspberry Pi Zero. However, it would cost you the same as that of a full-sized Raspberry Pi 3 model. The main highlight here is that you can use it as a USB key-fob and then access the Linux terminal to work on it.
+
+**Key Specifications:**
+
+ * Processor: Octavo Systems OSD3358 1GHz ARM® Cortex-A8
+ * RAM: 512 MB DDR3
+ * 72 expansion pin headers
+ * microUSB
+ * USB 2.0
+
+
+
+[visit website
+][25]
+
+#### 9\. Le Potato
+
+![][26]
+
+Le Potato, by [Libre Computer][27], is also identified by its model number AML-S905X-CC. It would [cost you $45][28].
+
+If you want double the memory along with an HDMI 2.0 interface by spending a bit more than on a Raspberry Pi – this would be the perfect choice. However, you won’t find wireless connectivity baked in.
+
+**Key Specifications:**
+
+ * Amlogic S905X SoC
+ * 2GB DDR3 SDRAM
+ * USB 2.0
+ * HDMI 2.0
+ * microUSB
+ * MicroSD Card Slot
+ * eMMC Interface
+
+
+
+[visit website
+][29]
+
+#### 10\. Banana Pi M64
+
+![][30]
+
+It comes loaded with 8 Gigs of eMMC – which is the key highlight of this Raspberry Pi alternative. For the very same reason, it would cost you $60.
+
+The presence of an HDMI interface makes it 4K-ready. In addition, Banana Pi offers a lot more variety of open source SBCs as alternatives to Raspberry Pi.
+
+**Key Specifications:**
+
+ * 1.2 Ghz Quad-Core ARM Cortex A53 64-Bit Processor-R18
+ * 2GB DDR3 SDRAM
+ * 8 GB eMMC
+ * WiFi & Bluetooth
+ * USB 2.0
+ * HDMI
+
+
+
+[visit website
+][31]
+
+#### 11\. Orange Pi Zero
+
+![][32]
+
+The Orange Pi Zero is an incredibly cheap alternative to Raspberry Pi. You will be able to get it for almost $10 on Aliexpress or Amazon. For a [little more investment, you can get 512 MB RAM][33].
+
+If that isn’t sufficient, you can also go for Orange Pi 3 with better specifications which will cost you around $25.
+
+**Key Specifications:**
+
+ * H2 Quad-core Cortex-A7
+ * Mali400MP2 GPU
+ * RAM: Up to 512 MB
+ * TF Card support
+ * WiFi
+ * USB 2.0
+
+
+
+[Visit website
+][34]
+
+#### 12\. VIM 2 SBC by Khadas
+
+![][35]
+
+VIM 2 by Khadas is one of the latest SBCs that you can grab with Bluetooth 5.0 on board. It [starts from $99 (the basic model) and goes up to $140][36].
+
+The basic model includes 2 GB RAM, 16 GB eMMC and Bluetooth 4.1. However, the Pro/Max versions would include Bluetooth 5.0, more memory, and more eMMC storage.
+
+**Key Specifications:**
+
+ * Amlogic S912 1.5GHz 64-bit Octa-Core CPU
+ * T820MP3 GPU
+ * Up to 3 GB DDR4 RAM
+ * Up to 64 GB eMMC
+ * Bluetooth 5.0 (Pro/Max)
+ * Bluetooth 4.1 (Basic)
+ * HDMI 2.0a
+ * WiFi
+
+
+
+**Wrapping Up**
+
+We do know that there are different types of single board computers. Some are better than Raspberry Pi – and some are scaled-down versions of it with a cheaper price tag. Also, SBCs like the Jetson Nano have been tailored for a specific use. So, depending on what you require – you should verify the specifications of the single board computer.
+
+If you think that you know about something that is better than the ones mentioned above, feel free to let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/raspberry-pi-alternatives/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-media-server/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-alternatives.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/affiliate-policy/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/omega-2-plus-e1555306748755-800x444.jpg?resize=800%2C444&ssl=1
+[5]: https://openwrt.org/
+[6]: https://amzn.to/2Xj8pkn
+[7]: https://onion.io/store/omega2p/
+[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Jetson-Nano-e1555306350976-800x590.jpg?resize=800%2C590&ssl=1
+[9]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/asus-tinker-board-s-e1555304945760-800x450.jpg?resize=800%2C450&ssl=1
+[11]: https://amzn.to/2XfkOFT
+[12]: https://en.wikipedia.org/wiki/MultiMediaCard
+[13]: https://www.asus.com/in/Single-Board-Computer/Tinker-Board-S/
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/clockwork-pi-e1555305016242-800x506.jpg?resize=800%2C506&ssl=1
+[15]: https://itsfoss.com/gameshell-console/
+[16]: https://www.clockworkpi.com/product-page/cpi-v3-1
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/arduino-mega-2560-e1555305257633.jpg?ssl=1
+[18]: https://store.arduino.cc/usa/mega-2560-r3
+[19]: https://amzn.to/2KCi041
+[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ROCK64_board-e1555306092845-800x440.jpg?resize=800%2C440&ssl=1
+[21]: https://www.pine64.org/?product=rock64-media-board-computer
+[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/odroid-xu4.jpg?fit=800%2C354&ssl=1
+[23]: https://www.hardkernel.com/shop/odroid-xu4-special-price/
+[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/PocketBeagle.jpg?fit=800%2C450&ssl=1
+[25]: https://beagleboard.org/p/products/pocketbeagle
+[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/aml-libre.-e1555306237972-800x514.jpg?resize=800%2C514&ssl=1
+[27]: https://libre.computer/
+[28]: https://amzn.to/2DpG3xl
+[29]: https://libre.computer/products/boards/aml-s905x-cc/
+[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/banana-pi-m6.jpg?fit=800%2C389&ssl=1
+[31]: http://www.banana-pi.org/m64.html
+[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/orange-pi-zero.jpg?fit=800%2C693&ssl=1
+[33]: https://amzn.to/2IlI81g
+[34]: http://www.orangepi.org/orangepizero/index.html
+[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/khadas-vim-2-e1555306505640-800x563.jpg?resize=800%2C563&ssl=1
+[36]: https://amzn.to/2UDvrFE
diff --git a/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
new file mode 100644
index 0000000000..5bc9aaf92f
--- /dev/null
+++ b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
+[#]: via: (https://opensource.com/article/15/4/news-april-15)
+[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
+
+Blender short film, new license for Chef, ethics in open source, and more news
+======
+Here are some of the biggest headlines in open source in the last two
+weeks
+![][1]
+
+In this edition of our open source news roundup, we take a look at the 12th Blender short film, Chef shifts away from open core toward a 100% open source license, SuperTuxKart's latest release candidate with online multiplayer support, and more.
+
+### Blender Animation Studio releases Spring
+
+[Spring][2], the latest short film from [Blender Animation Studio][3], premiered on April 4th. The [press release on Blender.org][4] describes _Spring_ as "the story of a shepherd girl and her dog, who face ancient spirits in order to continue the cycle of life." The development version of Blender 2.80, as well as other open source tools, were used to create this animated short film. The character and asset files for the film are available from [Blender Cloud][5], and tutorials, walkthroughs, and other instructional material are coming soon.
+
+### The importance of ethics in open source
+
+Reuven M. Lerner, writing for [Linux Journal][6], shares his thoughts about the need for teaching programmers about ethics in an article titled [Open Source Is Winning, and Now It's Time for People to Win Too][7]. Part retrospective looking back at the history of open source and part call to action for moving forward, Lerner's article discusses many issues relevant to open source beyond just coding. He argues that when we teach kids about open source "[w]e also need to inform them of the societal parts of their work, and the huge influence and power that today's programmers have." He continues by stating "It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal." Overall, it is a very thought-provoking piece; Lerner makes a solid case for remembering that the open source movement is about more than free code.
+
+### Chef transitions from open core to open source
+
+Chef, the company behind the well-known DevOps automation tool, [announced][8] that they will release 100% of their software as open source under an Apache 2.0 license. This move marks a departure from their current [open core model][9]. Given a tendency for companies to try to move in the opposite direction, Chef's move is a big one. By operating under a fully open source model, Chef builds a better, stronger relationship with the community, and the community benefits from full access to all the source code. Even developers of competing projects (and the commercial projects based on those products) benefit from being able to learn from Chef's code, as Chef can do from its open source competitors, which is one of the greatest advantages of open source; the best ideas get to win and business relationships are built around trust and quality of service, not proprietary secrets. For a more detailed look at this development, read Steven J. Vaughan-Nichols's [article for ZDNet][10].
+
+### SuperTuxKart releases version 0.10 RC1 for testing
+
+SuperTuxKart, the open source Mario Kart clone featuring open source mascots, is getting very close to releasing a version that supports online multi-player. On April 5th, the SuperTuxKart blog announced the release of [SuperTuxKart 0.10 Release Candidate 1][11], which needs testing before the final release. Users who want to help test the online and LAN multiplayer options can [download the game from SourceForge][12]. In addition to the new online and LAN features, SuperTuxKart 0.10 features a couple new tracks to race on; Ravenbridge Mansion replaces the old Mansion track, and Black Forest, which was an add-on track in earlier versions, is now part of the official track set.
+
+#### In other news
+
+ * [My code is your code: Embracing the power of open sourcing][13]
+ * [FOSS means kids can have a big impact][14]
+ * [Open-source textbooks lighten students’ financial load][15]
+ * [Developing the ultimate open source radio control transmitter][16]
+ * [How does open source tech transform Government?][17]
+
+
+
+_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/15/4/news-april-15
+
+作者:[Joshua Allen Holm (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/holmja
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
+[2]: https://www.youtube.com/watch?v=WhWc3b3KhnY (Spring)
+[3]: https://blender.studio/ (Blender Animation Studio)
+[4]: https://www.blender.org/press/spring-open-movie/ (Spring Open Movie)
+[5]: https://cloud.blender.org/p/spring/ (Spring on Blender Cloud)
+[6]: https://www.linuxjournal.com/ (Linux Journal)
+[7]: https://www.linuxjournal.com/content/open-source-winning-and-now-its-time-people-win-too (Open Source Is Winning, and Now It's Time for People to Win Too)
+[8]: https://blog.chef.io/2019/04/02/chef-software-announces-the-enterprise-automation-stack/ (Introducing the New Chef: 100% Open, Always)
+[9]: https://en.wikipedia.org/wiki/Open-core_model (Wikipedia: Open-core model)
+[10]: https://www.zdnet.com/article/leading-devops-program-chef-goes-all-in-with-open-source/ (Leading DevOps program Chef goes all in with open source)
+[11]: http://blog.supertuxkart.net/2019/04/supertuxkart-010-release-candidate-1.html (SuperTuxKart 0.10 Release Candidate 1 Released)
+[12]: https://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.10-rc1/ (SourceForge: SuperTuxKart)
+[13]: https://www.forbes.com/sites/forbestechcouncil/2019/04/10/my-code-is-your-code-embracing-the-power-of-open-sourcing/ (My code is your code: Embracing the power of open sourcing)
+[14]: https://www.linuxjournal.com/content/foss-means-kids-can-have-big-impact (FOSS means kids can have a big impact)
+[15]: https://www.schoolnewsnetwork.org/2019/04/09/open-source-textbooks-lighten-students-financial-load/ (Open-source textbooks lighten students’ financial load)
+[16]: https://hackaday.com/2019/04/03/developing-the-ultimate-open-source-radio-control-transmitter/ (Developing the ultimate open source radio control transmitter)
+[17]: https://www.openaccessgovernment.org/open-source-tech-transform/62059/ (How does open source tech transform Government?)
diff --git a/sources/tech/20190415 Getting started with Mercurial for version control.md b/sources/tech/20190415 Getting started with Mercurial for version control.md
new file mode 100644
index 0000000000..10812affed
--- /dev/null
+++ b/sources/tech/20190415 Getting started with Mercurial for version control.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Mercurial for version control)
+[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
+[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+
+Getting started with Mercurial for version control
+======
+Learn the basics of Mercurial, a distributed version control system
+written in Python.
+![][1]
+
+[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
+
+There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**. This is the most amenable way to develop local extensions!
+
+For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
+
+
+```
+python2 -m virtualenv mercurial-env
+./mercurial-env/bin/pip install mercurial
+```
+
+To have a short command, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**.
+
+
+```
+$ source mercurial-env/bin/activate
+(mercurial-env)$ mkdir test-dir
+(mercurial-env)$ cd test-dir
+(mercurial-env)$ hg init
+(mercurial-env)$ hg status
+(mercurial-env)$
+```
+
+The status is empty since you do not have any files. Add a couple of files:
+
+
+```
+(mercurial-env)$ echo 1 > one
+(mercurial-env)$ echo 2 > two
+(mercurial-env)$ hg status
+? one
+? two
+(mercurial-env)$ hg addremove
+adding one
+adding two
+(mercurial-env)$ hg commit -m 'Adding stuff'
+(mercurial-env)$ hg log
+changeset: 0:1f1befb5d1e9
+tag: tip
+user: Moshe Zadka <moshez@zadka.club>
+date: Fri Mar 29 12:42:43 2019 -0700
+summary: Adding stuff
+```
+
+The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and removes any files that have been removed.
+
+As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
+
+This is an example of a short Mercurial extension:
+
+
+```
+from mercurial import registrar
+from mercurial.i18n import _
+
+cmdtable = {}
+command = registrar.command(cmdtable)
+
+@command('say-hello',
+         [('w', 'whom', '', _('Whom to greet'))])
+def say_hello(ui, repo, **opts):
+    ui.write("hello ", opts['whom'], "\n")
+```
+
+A simple way to test it is to put it in a file in the virtual environment manually:
+
+
+```
+$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
+```
+
+Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
+
+
+```
+$ cat >> .hg/hgrc
+[extensions]
+hello_ext =
+```
+
+Now, a greeting is possible:
+
+
+```
+(mercurial-env)$ hg say-hello --whom world
+hello world
+```
+
+Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
+
+Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/getting-started-mercurial
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
+[2]: https://www.mercurial-scm.org/
+[3]: https://www.mercurial-scm.org/wiki/UnixInstall
+[4]: mailto:moshez@zadka.club
+[5]: https://www.mercurial-scm.org/wiki/MercurialApi#Repositories
+[6]: https://www.mercurial-scm.org/repo/hg/file/tip/hgext
diff --git a/sources/tech/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md b/sources/tech/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md
new file mode 100644
index 0000000000..e9e764b440
--- /dev/null
+++ b/sources/tech/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md
@@ -0,0 +1,292 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?)
+[#]: via: (https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?
+======
+
+You may need to run these commands based on your requirements.
+
+I can tell you a few examples of where you would need this.
+
+When you add a new network interface, or when you create a new virtual network interface from the original physical interface, you may need to use one of these commands to bring up the new interface.
+
+Also, if you have made any changes, or if an interface is down, then you need to run one of the below commands to bring it up.
+
+It can be done in many ways; we cover what we think are the best five methods below.
+
+ * **`ifconfig Command:`** The ifconfig command is used to configure a network interface. It provides a lot of information about a NIC.
+ * **`ifdown/up Command:`** The ifdown command takes a network interface down and the ifup command brings a network interface up.
+ * **`ip Command:`** The ip command is used to manage a NIC. It’s a replacement for the old and deprecated ifconfig command. It’s similar to the ifconfig command but has many powerful features that aren’t available in the ifconfig command.
+ * **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager and reporting network status.
+ * **`nmtui Command:`** nmtui is a curses‐based TUI application for interacting with NetworkManager.
+
+
+
+The below output shows the available network interface card (NIC) information in my Linux system.
+
+```
+# ip a
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
+ valid_lft 86049sec preferred_lft 86049sec
+ inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:30:5d:52 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.3/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s8
+ valid_lft 86049sec preferred_lft 86049sec
+ inet6 fe80::32b7:8727:bdf2:2f3/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+```
+
+### 1) How To Bring UP And Bring Down A Network Interface In Linux Using ifconfig Command?
+
+The ifconfig command is used to configure a network interface.
+
+It is used at boot time to set up interfaces as necessary. It provides a lot of information about a NIC. We can use the ifconfig command when we need to make any changes to a NIC.
+
+Common Syntax for ifconfig:
+
+```
+# ifconfig [NIC_NAME] Down/Up
+```
+
+Run the following command to bring down the `enp0s3` interface in Linux. Make a note: you have to input your own interface name instead of ours.
+
+```
+# ifconfig enp0s3 down
+```
+
+Yes, the given interface is down now as per the following output.
+
+```
+# ip a | grep -A 1 "enp0s3:"
+2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
+ link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
+```
+
+Run the following command to bring up the `enp0s3` interface in Linux.
+
+```
+# ifconfig enp0s3 up
+```
+
+Yes, the given interface is up now as per the following output.
+
+```
+# ip a | grep -A 5 "enp0s3:"
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
+ valid_lft 86294sec preferred_lft 86294sec
+ inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+```
+
+### 2) How To Enable And Disable A Network Interface In Linux Using ifdown/up Command?
+
+The ifdown command takes a network interface down, and the ifup command brings a network interface up.
+
+**Note:** These commands do not work with new interface device names like `enpXXX`.
+
+Common Syntax for ifdown/ifup:
+
+```
+# ifdown [NIC_NAME]
+
+# ifup [NIC_NAME]
+```
+
+Run the following command to bring down the `eth1` interface in Linux.
+
+```
+# ifdown eth1
+```
+
+Yes, the given interface is down now as per the following output.
+
+```
+# ip a | grep -A 3 "eth1:"
+3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
+ link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
+```
+
+Run the following command to bring up the `eth1` interface in Linux.
+
+```
+# ifup eth1
+```
+
+Yes, the given interface is up now as per the following output.
+
+```
+# ip a | grep -A 5 "eth1:"
+3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
+ link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.7/24 brd 192.168.1.255 scope global eth1
+ inet6 fe80::a00:27ff:fed5:a018/64 scope link tentative dadfailed
+ valid_lft forever preferred_lft forever
+```
+
+ifup and ifdown do not support the latest interface device names like `enpXXX`. I got the below message when I ran the command.
+
+```
+# ifdown enp0s8
+Unknown interface enp0s8
+```
+
+### 3) How To Bring UP/Bring Down A Network Interface In Linux Using ip Command?
+
+ip command is used to manage Network Interface Card (NIC). It’s replacement of old and deprecated ifconfig command on modern Linux systems.
+
+It’s similar to ifconfig command but has many powerful features which isn’t available in ifconfig command.
+
+Common Syntax for ip:
+
+```
+# ip link set [NIC_NAME] up/down
+```
+
+Run the following command to bring down the `enp0s3` interface in Linux.
+
+```
+# ip link set enp0s3 down
+```
+
+Yes, the given interface is down now as per the following output.
+
+```
+# ip a | grep -A 1 "enp0s3:"
+2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
+ link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
+```
+
+Run the following command to bring up the `enp0s3` interface in Linux.
+
+```
+# ip link set enp0s3 up
+```
+
+Yes, the given interface is up now as per the following output.
+
+```
+# ip a | grep -A 5 "enp0s3:"
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
+ valid_lft 86294sec preferred_lft 86294sec
+ inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+```
+
+### 4) How To Enable And Disable A Network Interface In Linux Using nmcli Command?
+
+nmcli is a command-line tool for controlling NetworkManager and reporting network status.
+
+It can be utilized as a replacement for nm-applet or other graphical clients. nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as control and display network device status.
+
+Run the following command to identify the interface name, because the nmcli command performs most of its tasks using the `profile name` instead of the `device name`.
+
+```
+# nmcli con show
+NAME UUID TYPE DEVICE
+Wired connection 1 3d5afa0a-419a-3d1a-93e6-889ce9c6a18c ethernet enp0s3
+Wired connection 2 a22154b7-4cc4-3756-9d8d-da5a4318e146 ethernet enp0s8
+```
+
+Common Syntax for nmcli:
+
+```
+# nmcli con down/up [PROFILE_NAME]
+```
+
+Run the following command to bring down the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it down.
+
+```
+# nmcli con down 'Wired connection 1'
+Connection 'Wired connection 1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
+```
+
+Yes, the given interface is down now as per the following output.
+
+```
+# nmcli dev status
+DEVICE TYPE STATE CONNECTION
+enp0s8 ethernet connected Wired connection 2
+enp0s3 ethernet disconnected --
+lo loopback unmanaged --
+```
+
+Run the following command to bring up the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it up.
+
+```
+# nmcli con up 'Wired connection 1'
+Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
+```
+
+Yes, the given interface is up now as per the following output.
+
+```
+# nmcli dev status
+DEVICE TYPE STATE CONNECTION
+enp0s8 ethernet connected Wired connection 2
+enp0s3 ethernet connected Wired connection 1
+lo loopback unmanaged --
+```
+
+### 5) How To Bring UP/Bring Down A Network Interface In Linux Using nmtui Command?
+
+nmtui is a curses-based TUI application for interacting with NetworkManager.
+
+When starting nmtui, the user is prompted to choose the activity to perform, unless it was specified as the first argument.
+
+Run the following command to launch the nmtui interface. Select “Activate a connection” and hit “OK”.
+
+```
+# nmtui
+```
+
+![][2]
+
+Select the interface you want to bring down, then hit the “Deactivate” button.
+![][3]
+
+To activate an interface, follow the same procedure as above.
+![][4]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[2]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-1.png
+[3]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-2.png
+[4]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-3.png
diff --git a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
new file mode 100644
index 0000000000..bf6c2c07cc
--- /dev/null
+++ b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
@@ -0,0 +1,419 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Shared storage)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Shared storage
+======
+Learn how processes synchronize with each other in Linux.
+![Filing papers and documents][1]
+
+This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
+
+ * Shared files
+ * Shared memory (with semaphores)
+ * Pipes (named and unnamed)
+ * Message queues
+ * Sockets
+ * Signals
+
+
+
+This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
+
+### Core concepts
+
+A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one thread. Threads within a process share various resources, in particular, address space. Accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
+
+There are various ways to launch processes that then communicate, and two ways dominate in the examples that follow:
+
+ * A terminal is used to start one process, and perhaps a different terminal is used to start another.
+ * The system function **fork** is called within one process (the parent) to spawn another process (the child); a minimal sketch appears just below.
+
+
+
+The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
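+
+As a quick illustration of the second approach, here is a minimal **fork** sketch of my own (not one of the article's examples): the parent spawns a child, waits for it, and each process reports its own process ID.
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/wait.h>
+
+int main() {
+  pid_t pid = fork();  /* returns 0 in the child, the child's pid in the parent */
+  if (pid < 0) { perror("fork"); exit(-1); }
+
+  if (pid == 0) {            /* child branch */
+    printf("child:  pid %d\n", getpid());
+  } else {                   /* parent branch */
+    waitpid(pid, NULL, 0);   /* wait for the child to terminate */
+    printf("parent: pid %d\n", getpid());
+  }
+  return 0;
+}
+```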
+
+### Shared files
+
+Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process ( _producer_ ) creates and writes to a file, and another process ( _consumer_ ) reads from this same file:
+
+
+```
+          writes   +-----------+   reads
+producer --------->| disk file |<--------- consumer
+                   +-----------+
+```
+
+The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
+
+ * A producer should gain an exclusive lock on the file before writing to the file. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
+ * A consumer should gain at least a shared lock on the file before reading from the file. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access a file when even a single _reader_ holds a shared lock.
+
+
+
+A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to a file.
+
+The standard I/O library includes a utility function named **fcntl** that can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_ , a non-negative integer value that, within a process, identifies a file. (Different file descriptors in different processes may identify the same physical file.) For file locking, Linux provides the library function **flock** , which is a thin wrapper around **fcntl**. The first example uses the **fcntl** function to expose API details.
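+
+Before looking at the **fcntl** version, here is a minimal sketch (my illustration, not part of the article's code package) of the same lock-then-write flow using **flock** :
+
+```
+/* A minimal flock sketch: take an exclusive lock on the whole file,
+   write a line, then release the lock. */
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/file.h>
+
+int main() {
+  int fd = open("data.dat", O_RDWR | O_CREAT, 0666);
+  if (fd < 0) { perror("open"); exit(-1); }
+
+  if (flock(fd, LOCK_EX) < 0) {  /* blocks until the exclusive lock is granted */
+    perror("flock"); exit(-1);
+  }
+
+  const char* line = "one locked write\n";
+  write(fd, line, strlen(line)); /* write while holding the lock */
+
+  flock(fd, LOCK_UN);            /* release the lock explicitly */
+  close(fd);                     /* closing the descriptor would release it too */
+  return 0;
+}
+```
+
+Note that **flock** locks the whole file; byte-range locking of the kind used below requires **fcntl**.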
+
+#### Example 1. The _producer_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <string.h>
+
+#define FileName "data.dat"
+#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  struct flock lock;
+  lock.l_type = F_WRLCK;    /* read/write (exclusive versus shared) lock */
+  lock.l_whence = SEEK_SET; /* base for seek offsets */
+  lock.l_start = 0;         /* 1st byte in file */
+  lock.l_len = 0;           /* 0 here means 'until EOF' */
+  lock.l_pid = getpid();    /* process id */
+
+  int fd; /* file descriptor to identify a file within a process */
+  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
+    report_and_exit("open failed...");
+
+  if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
+    report_and_exit("fcntl failed to get lock...");
+  else {
+    write(fd, DataString, strlen(DataString)); /* populate data file */
+    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
+  }
+
+  /* Now release the lock explicitly. */
+  lock.l_type = F_UNLCK;
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("explicit unlocking failed...");
+
+  close(fd); /* close the file: would unlock if needed */
+  return 0;  /* terminating the process would unlock as well */
+}
+```
+
+The main steps in the _producer_ program above can be summarized as follows:
+
+ * The program declares a variable of type **struct flock** , which represents a lock, and initializes the structure's five fields. The first initialization, `lock.l_type = F_WRLCK; /* exclusive lock */`, makes the lock an exclusive ( _read-write_ ) rather than a shared ( _read-only_ ) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files would be closed automatically, thereby releasing the lock.)
+ * The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API allows only designated bytes to be locked. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
+ * The first call to **fcntl** , `if (fcntl(fd, F_SETLK, &lock) < 0)`, tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_ ) were used instead, the call to **fcntl** would block until gaining the lock was possible. In the calls to **fcntl** , the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **&lock**).
+ * If the _producer_ gains the lock, the program writes two text records to the file.
+ * After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value, `lock.l_type = F_UNLCK;`, and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
+
+
+
+#### Example 2. The _consumer_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#define FileName "data.dat"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  struct flock lock;
+  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
+  lock.l_whence = SEEK_SET; /* base for seek offsets */
+  lock.l_start = 0;         /* 1st byte in file */
+  lock.l_len = 0;           /* 0 here means 'until EOF' */
+  lock.l_pid = getpid();    /* process id */
+
+  int fd; /* file descriptor to identify a file within a process */
+  if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
+    report_and_exit("open to read failed...");
+
+  /* If the file is write-locked, we can't continue. */
+  fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
+  if (lock.l_type != F_UNLCK)
+    report_and_exit("file is still write locked...");
+
+  lock.l_type = F_RDLCK; /* prevents any writing during the reading */
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("can't get a read-only lock...");
+
+  /* Read the bytes (they happen to be ASCII codes) one at a time. */
+  int c; /* buffer for read bytes */
+  while (read(fd, &c, 1) > 0)    /* 0 signals EOF */
+    write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
+
+  /* Release the lock explicitly. */
+  lock.l_type = F_UNLCK;
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("explicit unlocking failed...");
+
+  close(fd);
+  return 0;
+}
+```
+
+The _consumer_ program is more complicated than necessary to highlight features of the locking API. In particular, the _consumer_ program first checks whether the file is exclusively locked and only then tries to gain a shared lock. The relevant code is:
+
+
+```
+lock.l_type = F_WRLCK;
+...
+fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
+if (lock.l_type != F_UNLCK)
+  report_and_exit("file is still write locked...");
+```
+
+The **F_GETLK** operation specified in the **fcntl** call checks for a lock, in this case, an exclusive lock given as **F_WRLCK** in the first statement above. If the specified lock does not exist, then the **fcntl** call automatically changes the lock type field to **F_UNLCK** to indicate this fact. If the file is exclusively locked, the _consumer_ terminates. (A more robust version of the program might have the _consumer_ **sleep** a bit and try again several times.)
+
+If the file is not currently locked, then the _consumer_ tries to gain a shared ( _read-only_ ) lock ( **F_RDLCK** ). To shorten the program, the **F_GETLK** call to **fcntl** could be dropped because the **F_RDLCK** call would fail if a _read-write_ lock were already held by some other process. Recall that a _read-only_ lock does prevent any other process from writing to the file, but allows other processes to read from the file. In short, a _shared_ lock can be held by multiple processes. After gaining a shared lock, the _consumer_ program reads the bytes one at a time from the file, prints the bytes to the standard output, releases the lock, closes the file, and terminates.
+
+Here is the output from the two programs launched from the same terminal with **%** as the command line prompt:
+
+
+```
+% ./producer
+Process 29255 has written to data file...
+
+% ./consumer
+Now is the winter of our discontent
+Made glorious summer by this sun of York
+```
+
+In this first code example, the data shared through IPC is text: two lines from Shakespeare's play _Richard III_. Yet, the shared file's contents could be voluminous, arbitrary bytes (e.g., a digitized movie), which makes file sharing an impressively flexible IPC mechanism. The downside is that file access is relatively slow, whether the access involves reading or writing. As always, programming comes with tradeoffs. The next example has the upside of IPC through shared memory, rather than shared files, with a corresponding boost in performance.
+
+### Shared memory
+
+Linux systems provide two separate APIs for shared memory: the legacy System V API and the more recent POSIX one. These APIs should never be mixed in a single application, however. A downside of the POSIX approach is that features are still in development and dependent upon the installed kernel version, which impacts code portability. For example, the POSIX API, by default, implements shared memory as a _memory-mapped file_ : for a shared memory segment, the system maintains a _backing file_ with corresponding contents. Shared memory under POSIX can be configured without a backing file, but this may impact portability. My example uses the POSIX API with a backing file, which combines the benefits of memory access (speed) and file storage (persistence).
+
+The shared-memory example has two programs, named _memwriter_ and _memreader_ , and uses a _semaphore_ to coordinate their access to the shared memory. Whenever shared memory comes into the picture with a _writer_ , whether in multi-processing or multi-threading, so does the risk of a memory-based race condition; hence, the semaphore is used to coordinate (synchronize) access to the shared memory.
+
+The _memwriter_ program should be started first in its own terminal. The _memreader_ program then can be started (within a dozen seconds) in its own terminal. The output from the _memreader_ is:
+
+
+```
+This is the way the world ends...
+```
+
+Each source file has documentation at the top explaining the link flags to be included during compilation.
+
+Let's start with a review of how semaphores work as a synchronization mechanism. A general semaphore also is called a _counting semaphore_ , as it has a value (typically initialized to zero) that can be incremented. Consider a shop that rents bicycles, with a hundred of them in stock, with a program that clerks use to do the rentals. Every time a bike is rented, the semaphore is incremented by one; when a bike is returned, the semaphore is decremented by one. Rentals can continue until the value hits 100 but then must halt until at least one bike is returned, thereby decrementing the semaphore to 99.
+
+A _binary semaphore_ is a special case requiring only two values: 0 and 1. In this situation, a semaphore acts as a _mutex_ : a mutual exclusion construct. The shared-memory example uses a semaphore as a mutex. When the semaphore's value is 0, the _memwriter_ alone can access the shared memory. After writing, this process increments the semaphore's value, thereby allowing the _memreader_ to read the shared memory.
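+
+Both programs include a header file _shmem.h_ , which the article itself does not list. Here is a plausible sketch, reconstructed from details mentioned in the text (the backing-file name, the 512-byte size, the 0644 access permissions, and the string the _memreader_ prints); the semaphore name in particular is my own placeholder, not necessarily the author's:
+
+```
+/* shmem.h -- a plausible reconstruction, not the author's original file.
+   Values come from details mentioned in the article; SemaphoreName is
+   a placeholder and could be any unique non-empty name. */
+#define BackingFile "/shMemEx"    /* shows up as /dev/shm/shMemEx */
+#define SemaphoreName "semEx"     /* placeholder name */
+#define AccessPerms 0644          /* rw-r--r-- */
+#define ByteSize 512              /* size of the shared segment */
+#define MemContents "This is the way the world ends...\n"
+```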
+
+#### Example 3. Source code for the _memwriter_ process
+
+
+```
+/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <semaphore.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include "shmem.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1);
+}
+
+int main() {
+  int fd = shm_open(BackingFile,      /* name from shmem.h */
+                    O_RDWR | O_CREAT, /* read/write, create if needed */
+                    AccessPerms);     /* access permissions (0644) */
+  if (fd < 0) report_and_exit("Can't open shared mem segment...");
+
+  ftruncate(fd, ByteSize); /* get the bytes */
+
+  caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
+                        ByteSize,               /* how many bytes */
+                        PROT_READ | PROT_WRITE, /* access protections */
+                        MAP_SHARED,             /* mapping visible to other processes */
+                        fd,                     /* file descriptor */
+                        0);                     /* offset: start at 1st byte */
+  if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
+
+  fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
+  fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile);
+
+  /* semaphore code to lock the shared mem */
+  sem_t* semptr = sem_open(SemaphoreName, /* name */
+                           O_CREAT,       /* create the semaphore */
+                           AccessPerms,   /* protection perms */
+                           0);            /* initial value */
+  if (semptr == (void*) -1) report_and_exit("sem_open");
+
+  strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
+
+  /* increment the semaphore so that memreader can read */
+  if (sem_post(semptr) < 0) report_and_exit("sem_post");
+
+  sleep(12); /* give reader a chance */
+
+  /* clean up */
+  munmap(memptr, ByteSize); /* unmap the storage */
+  close(fd);
+  sem_close(semptr);
+  shm_unlink(BackingFile); /* unlink from the backing file */
+  return 0;
+}
+```
+
+Here's an overview of how the _memwriter_ and _memreader_ programs communicate through shared memory:
+
+ * The _memwriter_ program, shown above, calls the **shm_open** function to get a file descriptor for the backing file that the system coordinates with the shared memory. At this point, no memory has been allocated. The subsequent call to the misleadingly named function **ftruncate** , `ftruncate(fd, ByteSize); /* get the bytes */`, allocates **ByteSize** bytes, in this case, a modest 512 bytes. The _memwriter_ and _memreader_ programs access the shared memory only, not the backing file. The system is responsible for synchronizing the shared memory and the backing file.
+ * The _memwriter_ then calls the **mmap** function:
+
+   ```
+   caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
+                         ByteSize,               /* how many bytes */
+                         PROT_READ | PROT_WRITE, /* access protections */
+                         MAP_SHARED,             /* mapping visible to other processes */
+                         fd,                     /* file descriptor */
+                         0);                     /* offset: start at 1st byte */
+   ```
+
+   to get a pointer to the shared memory. (The _memreader_ makes a similar call.) The pointer type **caddr_t** starts with a **c** for **calloc** , a system function that initializes dynamically allocated storage to zeroes. The _memwriter_ uses the **memptr** for the later _write_ operation, using the library **strcpy** (string copy) function.
+ * At this point, the _memwriter_ is ready for writing, but it first creates a semaphore to ensure exclusive access to the shared memory. A race condition would occur if the _memwriter_ were writing while the _memreader_ was reading. If the call to **sem_open** succeeds:
+
+   ```
+   sem_t* semptr = sem_open(SemaphoreName, /* name */
+                            O_CREAT,       /* create the semaphore */
+                            AccessPerms,   /* protection perms */
+                            0);            /* initial value */
+   ```
+
+   then the writing can proceed. The **SemaphoreName** (any unique non-empty name will do) identifies the semaphore in both the _memwriter_ and the _memreader_. The initial value of zero gives the semaphore's creator, in this case, the _memwriter_ , the right to proceed, in this case, to the _write_ operation.
+ * After writing, the _memwriter_ increments the semaphore value to 1, `if (sem_post(semptr) < 0) ..`, with a call to the **sem_post** function. Incrementing the semaphore releases the mutex lock and enables the _memreader_ to perform its _read_ operation. For good measure, the _memwriter_ also unmaps the shared memory from the _memwriter_ address space, `munmap(memptr, ByteSize); /* unmap the storage */`, which bars the _memwriter_ from further access to the shared memory.
+
+
+
+#### Example 4. Source code for the _memreader_ process
+
+
+```
+/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <semaphore.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include "shmem.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1);
+}
+
+int main() {
+  int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
+  if (fd < 0) report_and_exit("Can't get file descriptor...");
+
+  /* get a pointer to memory */
+  caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
+                        ByteSize,               /* how many bytes */
+                        PROT_READ | PROT_WRITE, /* access protections */
+                        MAP_SHARED,             /* mapping visible to other processes */
+                        fd,                     /* file descriptor */
+                        0);                     /* offset: start at 1st byte */
+  if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
+
+  /* create a semaphore for mutual exclusion */
+  sem_t* semptr = sem_open(SemaphoreName, /* name */
+                           O_CREAT,       /* create the semaphore */
+                           AccessPerms,   /* protection perms */
+                           0);            /* initial value */
+  if (semptr == (void*) -1) report_and_exit("sem_open");
+
+  /* use semaphore as a mutex (lock) by waiting for writer to increment it */
+  if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
+    int i;
+    for (i = 0; i < strlen(MemContents); i++)
+      write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
+    sem_post(semptr);
+  }
+
+  /* cleanup */
+  munmap(memptr, ByteSize);
+  close(fd);
+  sem_close(semptr);
+  unlink(BackingFile);
+  return 0;
+}
+```
+
+In both the _memwriter_ and _memreader_ programs, the shared-memory functions of main interest are **shm_open** and **mmap** : on success, the first call returns a file descriptor for the backing file, which the second call then uses to get a pointer to the shared memory segment. The calls to **shm_open** are similar in the two programs except that the _memwriter_ program creates the shared memory, whereas the _memreader_ only accesses this already created memory:
+
+
+```
+int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
+int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
+```
+
+With a file descriptor in hand, the calls to **mmap** are the same:
+
+
+```
+caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+```
+
+The first argument to **mmap** is **NULL** , which means that the system determines where to allocate the memory in virtual address space. It's possible (but tricky) to specify an address instead. The **MAP_SHARED** flag indicates that the allocated memory is shareable among processes, and the last argument (in this case, zero) means that the offset for the shared memory should be the first byte. The **size** argument specifies the number of bytes to be allocated (in this case, 512), and the protection argument indicates that the shared memory can be written and read.
+
+When the _memwriter_ program executes successfully, the system creates and maintains the backing file; on my system, the file is _/dev/shm/shMemEx_ , with _shMemEx_ as my name (given in the header file _shmem.h_ ) for the shared storage. In the current version of the _memwriter_ and _memreader_ programs, the statement:
+
+
+```
+shm_unlink(BackingFile); /* removes backing file */
+```
+
+removes the backing file. If the **unlink** statement is omitted, then the backing file persists after the program terminates.
+
+The _memreader_ , like the _memwriter_ , accesses the semaphore through its name in a call to **sem_open**. But the _memreader_ then goes into a wait state until the _memwriter_ increments the semaphore, whose initial value is 0:
+
+
+```
+if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
+```
+
+Once the wait is over, the _memreader_ reads the ASCII bytes from the shared memory, cleans up, and terminates.
+
+The shared-memory API includes operations explicitly to synchronize the shared memory segment and the backing file. These operations have been omitted from the example to reduce clutter and keep the focus on the memory-sharing and semaphore code.
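+
+For reference, the central such operation is **msync** , declared in **sys/mman.h**. A minimal sketch of how it could be slotted into the _memwriter_ just before the cleanup (my addition, reusing names from the example above, not part of the article's code):
+
+```
+/* Hedged sketch: flush the mapped segment to its backing file and
+   block until the write-back completes. memptr and ByteSize are the
+   names used in the memwriter example. */
+if (msync(memptr, ByteSize, MS_SYNC) < 0)
+  report_and_exit("msync");
+```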
+
+The _memwriter_ and _memreader_ programs are likely to execute without inducing a race condition even if the semaphore code is removed: the _memwriter_ creates the shared memory segment and writes immediately to it; the _memreader_ cannot even access the shared memory until this has been created. However, best practice requires that shared-memory access is synchronized whenever a _write_ operation is in the mix, and the semaphore API is important enough to be highlighted in a code example.
+
+### Wrapping up
+
+The shared-file and shared-memory examples show how processes can communicate through _shared storage_ , files in one case and memory segments in the other. The APIs for both approaches are relatively straightforward. Do these approaches have a common downside? Modern applications often deal with streaming data, indeed, with massively large streams of data. Neither the shared-file nor the shared-memory approaches are well suited for massive data streams. Channels of one type or another are better suited. Part 2 thus introduces channels and message queues, again with code examples in C.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
+[2]: https://en.wikipedia.org/wiki/Inter-process_communication
+[3]: http://condor.depaul.edu/mkalin
+[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
diff --git a/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
new file mode 100644
index 0000000000..5650e80aee
--- /dev/null
+++ b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
@@ -0,0 +1,211 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Kubernetes on Fedora IoT with k3s)
+[#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/)
+[#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/)
+
+Kubernetes on Fedora IoT with k3s
+======
+
+![][1]
+
+Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article [How to turn on an LED with Fedora IoT][2]. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.
+
+Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.
+
+### Why Kubernetes?
+
+While Kubernetes is all the rage in the cloud, it may not be immediately obvious to run it on a small single board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are [tons of applications][3] that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can provide help if you ever get stuck.
+
+Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent when tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn’t matter if it’s a single-node Raspberry Pi cluster or a large-scale machine learning farm.
+
+#### K3s – a lightweight Kubernetes
+
+A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is [k3s][4] – a lightweight Kubernetes distribution.
+
+K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!
+
+### What you will need
+
+ 1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide [here][5]. One machine is enough but two will allow you to test adding more nodes to the cluster.
+ 2. [Configure the firewall][6] to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
+
+
+
+### Install k3s
+
+Installing k3s is very easy. Simply run the installation script:
+
+```
+curl -sfL https://get.k3s.io | sh -
+```
+
+This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:
+
+```
+kubectl get nodes
+```
+
+Note that there are several options that can be passed to the installation script through environment variables. These can be found in the [documentation][7]. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.
+
+While great for experimenting and learning, a single-node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s.
+
+```
+curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
+ K3S_TOKEN=XXX sh -
+```
+
+The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
+
+### Deploy some containers
+
+Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.
+
+```
+kubectl create deployment my-server --image nginx
+```
+
+This will create a [Deployment][8] named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.
+
+```
+kubectl get pods
+```
+
+In order to access the nginx server running in the pod, first expose the Deployment through a [Service][9]. The following command will create a Service with the same name as the deployment.
+
+```
+kubectl expose deployment my-server --port 80
+```
+
+The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to _curl_ the nginx server just by specifying _my-server_ (the name of the Service). See the example below for how to do this.
+
+```
+# Start a pod and run bash interactively in it
+kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
+# Wait for the bash prompt to appear
+curl my-server
+# You should get the "Welcome to nginx!" page as output
+```
+
+### Ingress controller and external IP
+
+By default, a Service only get a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to [LoadBalancer][10]. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an [Ingress][11], and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.
+
+Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes [Traefik][12] for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The [documentation][13] describes the service like this:
+
+> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
+>
+> k3s README
+
+The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.
+
+```
+$ kubectl get svc --all-namespaces
+NAMESPACE     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
+default       kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                      33d
+default       my-server    ClusterIP      10.43.174.38    <none>        80/TCP                       30m
+kube-system   kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       33d
+kube-system   traefik      LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d
+```
+
+Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
+
+### Route incoming requests
+
+Let’s create an Ingress that routes requests to our web server based on the host header. This example uses [xip.io][14] to avoid having to set up DNS records. It works by including the IP address as a subdomain: any subdomain of 10.0.0.8.xip.io can be used to reach the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place, you should reach the “default backend”, which is just a page showing “404 page not found”.
+
+We can tell the ingress controller to route requests to our web server Service with the following Ingress.
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: my-server
+spec:
+ rules:
+ - host: my-server.10.0.0.8.xip.io
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: my-server
+ servicePort: 80
+```
+
+Save the above snippet in a file named _my-ingress.yaml_ and add it to the cluster by running this command:
+
+```
+kubectl apply -f my-ingress.yaml
+```
+
+You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).
+
+### What about IoT then?
+
+Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.
+
+In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?
+
+The simple answer is labels. You can label the nodes according to capabilities, like this:
+
+```
+kubectl label nodes <node-name> <label-key>=<label-value>
+# Example
+kubectl label nodes node2 camera=available
+```
+
+Once they are labeled, it is easy to select suitable nodes for your workload with [nodeSelectors][15]. The final piece to the puzzle, if you want to run your Pods on _all_ suitable nodes is to use [DaemonSets][16] instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper hardware.
+
+The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.
+
+#### Utilize spare resources
+
+With the cluster up and running, collecting data and controlling your lights and climate control you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.
+
+You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.
+
+Why not run your own [NextCloud][17] instance? Or maybe [gitea][18]? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?
+
+The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add [resource requests][19] to your workloads.
+
+### Summary
+
+While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.
+
+Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/
+
+作者:[Lennart Jern][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/lennartj/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png
+[2]: https://fedoramagazine.org/turnon-led-fedora-iot/
+[3]: https://hub.helm.sh/
+[4]: https://k3s.io
+[5]: https://docs.fedoraproject.org/en-US/iot/getting-started/
+[6]: https://github.com/rancher/k3s#open-ports--network-security
+[7]: https://github.com/rancher/k3s#systemd
+[8]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+[9]: https://kubernetes.io/docs/concepts/services-networking/service/
+[10]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
+[11]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[12]: https://traefik.io/
+[13]: https://github.com/rancher/k3s/blob/master/README.md#service-load-balancer
+[14]: http://xip.io/
+[15]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+[16]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
+[17]: https://nextcloud.com/
+[18]: https://gitea.io/en-us/
+[19]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
diff --git a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
new file mode 100644
index 0000000000..6c3db30f25
--- /dev/null
+++ b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
@@ -0,0 +1,39 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Troubleshooting slow WiFi on Linux)
+[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
+[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
+
+Troubleshooting slow WiFi on Linux
+======
+
+I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
+
+Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
+
+Read more at: [OpenSource.com][2]
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
+
+作者:[OpenSource.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/USERS/OPENSOURCECOM
+[b]: https://github.com/lujun9972
+[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
+[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
diff --git a/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md b/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md
new file mode 100644
index 0000000000..2dc628a49c
--- /dev/null
+++ b/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md
@@ -0,0 +1,263 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
+[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
+[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
+
+Building a DNS-as-a-service with OpenStack Designate
+======
+Learn how to install and configure Designate, a multi-tenant
+DNS-as-a-service (DNSaaS) for OpenStack.
+![Command line prompt][1]
+
+[Designate][2] is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with [Neutron][3], and integration support for Bind9.
+
+You would want to consider a DNSaaS for the following:
+
+ * A clean REST API for managing zones and records
+ * Automatic records generated (with OpenStack integration)
+ * Support for multiple authoritative name servers
+ * Hosting multiple projects/organizations
+
+
+
+![Designate's architecture][4]
+
+This article explains how to manually install and configure the latest release of the Designate service on CentOS or Red Hat Enterprise Linux 7 (RHEL 7), but you can use the same configuration on other distributions.
+
+### Install Designate on OpenStack
+
+I have Ansible roles for bind and Designate that demonstrate the setup in my [GitHub repository][5].
+
+This setup presumes bind service is external (even though you can install bind locally) on the OpenStack controller node.
+
+ 1. Install Designate's packages and bind (on the OpenStack controller):
+
+    ```
+    # yum install openstack-designate-* bind bind-utils -y
+    ```
+
+ 2. Create the Designate database and user:
+
+    ```
+    MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
+
+    MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO \
+    'designate'@'localhost' IDENTIFIED BY 'rhlab123';
+
+    MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' \
+    IDENTIFIED BY 'rhlab123';
+    ```
+
+
+
+
+Note: Bind packages must be installed on the controller side for Remote Name Daemon Control (RNDC) to function properly.
+
+### Configure bind (DNS server)
+
+ 1. Generate RNDC files:
+
+    ```
+    rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
+
+    cat <<EOF > /etc/rndc.conf
+    include "/etc/rndc.key";
+    options {
+      default-key "designate";
+      default-server {{ DNS_SERVER_IP }};
+      default-port 953;
+    };
+    EOF
+    ```
+ 2. Add the following into **named.conf** :
+
+    ```
+    include "/etc/rndc.key";
+    controls {
+      inet {{ DNS_SERVER_IP }} allow { localhost; {{ CONTROLLER_SERVER_IP }}; } keys { "designate"; };
+    };
+    ```
+
+    In the **options** section, add:
+
+    ```
+    options {
+      ...
+      allow-new-zones yes;
+      request-ixfr no;
+      listen-on port 53 { any; };
+      recursion no;
+      allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
+    };
+    ```
+
+    Add the right permissions:
+
+    ```
+    chown named:named /etc/rndc.key
+    chown named:named /etc/rndc.conf
+    chmod 600 /etc/rndc.key
+    chown -v root:named /etc/named.conf
+    chmod g+w /var/named
+
+    # systemctl restart named
+    # setsebool named_write_master_zones 1
+    ```
+
+ 3. Push **rndc.key** and **rndc.conf** to the OpenStack controller:
+
+    ```
+    # scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/
+    ```
+### Create OpenStack Designate service and endpoints
+
+Enter:
+
+```
+# openstack user create --domain default --password-prompt designate
+# openstack role add --project services --user designate admin
+# openstack service create --name designate --description "DNS" dns
+
+# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
+# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
+# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
+```
+### Configure Designate service
+
+ 1. Edit **/etc/designate/designate.conf** :
+
+    * In the **[service:api]** section, configure **auth_strategy** :
+
+      ```
+      [service:api]
+      listen = 0.0.0.0:9001
+      auth_strategy = keystone
+      api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
+      enable_api_v2 = True
+      enabled_extensions_v2 = quotas, reports
+      ```
+
+    * In the **[keystone_authtoken]** section, configure the following options:
+
+      ```
+      [keystone_authtoken]
+      auth_type = password
+      username = designate
+      password = rhlab123
+      project_name = service
+      project_domain_name = Default
+      user_domain_name = Default
+      www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
+      auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/
+      ```
+
+    * In the **[service:worker]** section, enable the worker model:
+
+      ```
+      enabled = True
+      notify = True
+      ```
+
+    * In the **[storage:sqlalchemy]** section, configure database access:
+
+      ```
+      [storage:sqlalchemy]
+      connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate
+      ```
+
+    * Populate the Designate database:
+
+      ```
+      # su -s /bin/sh -c "designate-manage database sync" designate
+      ```
+
+
+ 2. Create Designate's **pools.yaml** file (has target and bind details):
+
+    * Edit **/etc/designate/pools.yaml** :
+
+      ```
+      - name: default
+        # The name is immutable. There will be no option to change the name after
+        # creation; the only way to change it will be to delete it
+        # (and all zones associated with it) and recreate it.
+        description: Default Pool
+
+        attributes: {}
+
+        # List out the NS records for zones hosted within this pool
+        # This should be a record that is created outside of designate, that
+        # points to the public IP of the controller node.
+        ns_records:
+          - hostname: {{Controller_FQDN}}. # This is mDNS
+            priority: 1
+
+        # List out the nameservers for this pool. These are the actual BIND servers.
+        # We use these to verify changes have propagated to all nameservers.
+        nameservers:
+          - host: {{ DNS_SERVER_IP }}
+            port: 53
+
+        # List out the targets for this pool. For BIND there will be one
+        # entry for each BIND server, as we have to run rndc command on each server
+        targets:
+          - type: bind9
+            description: BIND9 Server 1
+
+            # List out the designate-mdns servers from which BIND servers should
+            # request zone transfers (AXFRs) from.
+            # This should be the IP of the controller node.
+            # If you have multiple controllers you can add multiple masters
+            # by running designate-mdns on them, and adding them here.
+            masters:
+              - host: {{ CONTROLLER_SERVER_IP }}
+                port: 5354
+
+            # BIND Configuration options
+            options:
+              host: {{ DNS_SERVER_IP }}
+              port: 53
+              rndc_host: {{ DNS_SERVER_IP }}
+              rndc_port: 953
+              rndc_key_file: /etc/rndc.key
+              rndc_config_file: /etc/rndc.conf
+      ```
+
+    * Populate Designate's pools:
+
+      ```
+      su -s /bin/sh -c "designate-manage pool update" designate
+      ```
+
+
+
+ 3. Start Designate central and API services:
+
+    ```
+    systemctl enable --now designate-central designate-api
+    ```
+
+ 4. Verify Designate's services are up:
+
+    ```
+    # openstack dns service list
+
+    +--------------+--------+-------+--------------+
+    | service_name | status | stats | capabilities |
+    +--------------+--------+-------+--------------+
+    | central      | UP     | -     | -            |
+    | api          | UP     | -     | -            |
+    | mdns         | UP     | -     | -            |
+    | worker       | UP     | -     | -            |
+    | producer     | UP     | -     | -            |
+    +--------------+--------+-------+--------------+
+    ```
+
+
+
+
+### Configure OpenStack Neutron with external DNS
+
+ 1. Configure iptables for Designate services:
+
+    ```
+    # iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
+
+    # iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
+
+    # iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
+
+    # iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
+
+    # iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
+
+    # service iptables save; service iptables restart
+    # setsebool named_write_master_zones 1
+    ```
+2. Edit the **[DEFAULT]** section of **/etc/neutron/neutron.conf** : [code]`external_dns_driver = designate`
+```
+
+ 3. Add the **[designate]** section in **/etc/neutron/neutron.conf** : [code] [designate]
+url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2 ## This is the Designate endpoint
+auth_type = password
+auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
+username = designate
+password = rhlab123
+project_name = services
+project_domain_name = Default
+user_domain_name = Default
+allow_reverse_dns_lookup = True
+ipv4_ptr_zone_prefix_size = 24
+ipv6_ptr_zone_prefix_size = 116
+```
+ 4. Edit **dns_domain** in **neutron.conf** : [code] dns_domain = rhlab.dev.
+
+# systemctl restart neutron-*
+```
+
+ 5. Add **dns** to the list of Modular Layer 2 (ML2) drivers in **/etc/neutron/plugins/ml2/ml2_conf.ini** : [code]`extension_drivers=port_security,qos,dns`
+```
+6. Add **zone** in Designate: [code]`# openstack zone create --email=admin@rhlab.dev rhlab.dev.`[/code] Add a new record in **zone rhlab.dev** : [code]`# openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test`
+```
+
+
+
+
+Designate should now be installed and configured.
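+
+As an optional sanity check, you can confirm that the new record resolves by querying the BIND server directly. The following is a minimal sketch, assuming the third-party **dnspython** (2.x) library is installed; the record name **test.rhlab.dev** follows from the recordset created above (DNS names are case-insensitive), and `{{ DNS_SERVER_IP }}` is the same placeholder used throughout:
+
+```
+# Hedged verification sketch -- requires `pip install dnspython` (2.x),
+# which is not part of the steps above.
+import dns.resolver
+
+resolver = dns.resolver.Resolver(configure=False)
+resolver.nameservers = ['{{ DNS_SERVER_IP }}']  # the BIND server configured earlier
+
+answer = resolver.resolve('test.rhlab.dev', 'A')
+for record in answer:
+    print(record.address)  # expect 192.168.1.230
+```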
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/getting-started-openstack-designate
+
+作者:[Amjad Yaseen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ayaseen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://docs.openstack.org/designate/latest/
+[3]: /article/19/3/openstack-neutron
+[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
+[5]: https://github.com/ayaseen/designate
diff --git a/sources/tech/20190416 Can schools be agile.md b/sources/tech/20190416 Can schools be agile.md
new file mode 100644
index 0000000000..065b313c05
--- /dev/null
+++ b/sources/tech/20190416 Can schools be agile.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Can schools be agile?)
+[#]: via: (https://opensource.com/open-organization/19/4/education-culture-agile)
+[#]: author: (Ben Owens https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins)
+
+Can schools be agile?
+======
+We certainly don't need to run our schools like businesses—but we could
+benefit from educational organizations more focused on continuous
+improvement.
+![][1]
+
+We've all had those _deja vu_ moments that make us think "I've seen this before!" I experienced them often in the late 1980s, when I first began my career in industry. I was caught up in a wave of organizational change, where the U.S. manufacturing sector was experimenting with various models that asked leaders, managers, and engineers like me to rethink how we approached things like quality, cost, innovation, and shareholder value. It seems as if every year (sometimes, more frequently) we'd study yet another book to identify the "best practices" necessary for making us leaner, flatter, more nimble, and more responsive to the needs of the customer.
+
+Many of the approaches were so transformational that their core principles still resonate with me today. Specific ideas and methods from thought leaders such as John Kotter, Peter Drucker, W. Edwards Deming, and Peter Senge were truly pivotal for our ability to rethink our work, as was the adoption of process improvement methods such as Six Sigma and those embodied in the "Toyota Way."
+
+But others seemed to simply repackage these same ideas with a sexy new twist—hence my _deja vu_.
+
+And yet when I began my career as a teacher, I encountered a context that _didn't_ give me that feeling: education. In fact, I was surprised to find that "getting better all the time" was _not_ the same high priority in my new profession that it was in my old one (particularly at the level of my role as a classroom teacher).
+
+Why aren't more educational organizations working to create cultures of continuous improvement? I can think of several reasons, but let me address two.
+
+### Widgets no more
+
+The first barrier to a culture of continuous improvement is education's general reticence to look at other professions for ideas it can adapt and adopt—especially ideas from the business community. The second is education's leadership model, which remains predominantly top-down and rooted in hierarchy. Conversations about systemic, continuous improvement tend to be the purview of a relatively small group of school or district leaders: principals, assistant principals, superintendents, and the like. But widespread organizational culture change can't occur if only one small group is involved in it.
+
+Before unpacking these points a bit further, I'd like to emphasize that there are certainly exceptions to the above generalization (many I have seen first hand) and that there are two basic assumptions that I think any education stakeholder should be able to agree with:
+
+ 1. Continuous improvement must be an essential mindset for _anyone_ involved in the work of providing high-quality and equitable teaching and learning systems for students, and
+ 2. Decisions by leaders of our schools will more greatly benefit students and the communities in which they live when those decisions are informed and influenced by those who work closest with students.
+
+
+
+So why a tendency to ignore (or be outright hostile toward) ideas that come from outside the education space?
+
+I, for example, have certainly faced criticism in the past for suggesting that we look to other professions for ideas and inspiration that can help us better meet the needs of students. A common refrain is something like: "You're trying to treat our students like widgets!" But how could our students be treated any more like widgets than they already are? They matriculate through school in age-based cohorts, going from siloed class to class each day by the sound of a shrill bell, and receive grades based on arbitrary tests that emphasize sameness over individuality.
+
+It may be news to many inside of education, but widgets—abstract units of production that evoke the idea of assembly line standardization—are not a significant part of the modern manufacturing sector. Thanks to the culture of continuous improvement described above, modern, advanced manufacturing delivers just what the individual customer wants, at a competitive price, exactly when she wants it. If we adapted this model to our schools, teachers would be more likely to collaborate and constantly refine their unique paths of growth for all students based on just-in-time needs and desires—regardless of the time, subject, or any other traditional norm.
+
+What I'm advocating is a clear-eyed and objective look at any idea from any sector with potential to help us better meet the needs of individual students, not that we somehow run our schools like businesses. In order for this to happen effectively, however, we need to scrutinize a leadership structure that has frankly remained stagnant for over 100 years.
+
+### Toward continuous improvement
+
+While I certainly appreciate the argument that education is an animal significantly different from other professions, I also believe that rethinking an organizational and leadership structure is an applicable exercise for any entity wanting to remain responsible (and responsive) to the needs of its stakeholders. Most other professions have taken a hard look at their traditional, closed, hierarchical structures and moved to ones that encourage collective autonomy per shared goals of excellence—organizational elements essential for continuous improvement. It's time our schools and districts do the same by expanding their horizon beyond sources that, while well intended, are developed from a lens of the current paradigm.
+
+Not surprisingly, a go-to resource I recommend to any school wanting to begin or accelerate this process is _The Open Organization_ by Jim Whitehurst. Not only does the book provide a window into how educators can create more open, inclusive leadership structures—where mutual respect enables nimble decisions to be made per real-time data—but it does so in language easily adaptable to the rather strange lexicon that's second nature to educators. Open organization thinking provides pragmatic ways any organization can empower members to be more open: sharing ideas and resources, embracing a culture of collaborative participation as a top priority, developing an innovation mindset through rapid prototyping, valuing ideas based on merit rather than the rank of the person proposing them, and building a strong sense of community that's baked into the organization's DNA. Such an open organization crowd-sources ideas from both inside and outside its formal structure and creates the type of environment that enables localized, student-centered innovations to thrive.
+
+Here's the bottom line: Essential to a culture of continuous improvement is recognizing that what we've done in the past may not be suitable in a rapidly changing future. For educators, that means we simply can't rely on solutions and practices we developed in a factory-model paradigm. We must acknowledge countless examples of best practices from other sectors—such as non-profits, the military, the medical profession, and yes, even business—that can at least _inform_ how we rethink what we do in the best interest of students. By moving beyond the traditionally sanctioned "eduspeak" world, we create opportunities for considering perspectives. We can better see the forest for the trees, taking a more objective look at the problems we face, as well as acknowledging what we do very well.
+
+Intentionally considering ideas from all sources—from first year classroom teachers to the latest NYT Business & Management Leadership bestseller—offers us a powerful way to engage existing talent within our schools to help overcome the institutionalized inertia that has prevented more positive change from taking hold in our schools and districts.
+
+Relentlessly pursuing methods of continuous improvement should not be a behavior confined to organizations fighting to remain competitive in a global, innovation economy, nor should it be left to a select few charged with the operation of our schools. When everyone in an organization is always thinking about what they can do differently _today_ to improve what they did _yesterday_ , then you have an organization living a culture of excellence. That's the kind of radically collaborative and innovative culture we should especially expect for organizations focused on changing the lives of young people.
+
+I'm eagerly awaiting the day when I enter a school, recognize that spirit, and smile to myself as I say, "I've seen this before."
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/4/education-culture-agile
+
+作者:[Ben Owens][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_network.png?itok=ySEHuAQ8
diff --git a/sources/tech/20190416 Detecting malaria with deep learning.md b/sources/tech/20190416 Detecting malaria with deep learning.md
new file mode 100644
index 0000000000..77df4a561b
--- /dev/null
+++ b/sources/tech/20190416 Detecting malaria with deep learning.md
@@ -0,0 +1,792 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Detecting malaria with deep learning)
+[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
+[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
+
+Detecting malaria with deep learning
+======
+Artificial intelligence combined with open source tools can improve
+diagnosis of the fatal disease malaria.
+![][1]
+
+Artificial intelligence (AI) and open source tools, technologies, and frameworks are a powerful combination for improving society. _"Health is wealth"_ is perhaps a cliche, yet it's very accurate! In this article, we will examine how AI can be leveraged for detecting the deadly disease malaria with a low-cost, effective, and accurate open source deep learning solution.
+
+While I am neither a doctor nor a healthcare researcher and I'm nowhere near as qualified as they are, I am interested in applying AI to healthcare research. My intent in this article is to showcase how AI and open source solutions can help malaria detection and reduce manual labor.
+
+![Python and TensorFlow][2]
+
+Python and TensorFlow: A great combo to build open source deep learning solutions
+
+Thanks to the power of Python and deep learning frameworks like TensorFlow, we can build robust, scalable, and effective deep learning solutions. Because these tools are free and open source, we can build solutions that are very cost-effective and easily adopted and used by anyone. Let's get started!
+
+### Motivation for the project
+
+Malaria is a deadly, infectious, mosquito-borne disease caused by _Plasmodium_ parasites that are transmitted by the bites of infected female _Anopheles_ mosquitoes. There are five _Plasmodium_ species that cause malaria in humans, but two— _P. falciparum_ and _P. vivax_ —cause the majority of cases.
+
+![Malaria heat map][3]
+
+This map shows that malaria is prevalent around the globe, especially in tropical regions, but the nature and fatality of the disease is the primary motivation for this project.
+
+If an infected mosquito bites you, parasites carried by the mosquito enter your blood and start destroying oxygen-carrying red blood cells (RBC). Typically, the first symptoms of malaria are similar to those of a virus like the flu, and they usually begin within a few days or weeks after the mosquito bite. However, these deadly parasites can live in your body for over a year without causing symptoms, and a delay in treatment can lead to complications and even death. Therefore, early detection can save lives.
+
+The World Health Organization's (WHO) [malaria facts][4] indicate that nearly half the world's population is at risk from malaria, and there are over 200 million malaria cases and approximately 400,000 deaths due to malaria every year. This is strong motivation to make malaria detection and diagnosis fast, easy, and effective.
+
+### Methods of malaria detection
+
+There are several methods that can be used for malaria detection and diagnosis. The paper on which our project is based, "[Pre-trained convolutional neural networks as feature extractors toward improved Malaria parasite detection in thin blood smear images][5]," by Rajaraman, et al., introduces some of the methods, including polymerase chain reaction (PCR) and rapid diagnostic tests (RDT). These two tests are typically used where high-quality microscopy services are not readily available.
+
+The standard malaria diagnosis is typically based on a blood-smear workflow, according to Carlos Ariza's article "[Malaria Hero: A web app for faster malaria diagnosis][6]," which I learned about in Adrian Rosebrock's "[Deep learning and medical image analysis with Keras][7]." I appreciate the authors of these excellent resources for giving me more perspective on malaria prevalence, diagnosis, and treatment.
+
+![Blood smear workflow for Malaria detection][8]
+
+A blood smear workflow for Malaria detection
+
+According to WHO protocol, diagnosis typically involves intensive examination of the blood smear at 100X magnification. Trained people manually count how many red blood cells contain parasites out of 5,000 cells. As the Rajaraman, et al., paper cited above explains:
+
+> Thick blood smears assist in detecting the presence of parasites while thin blood smears assist in identifying the species of the parasite causing the infection (Centers for Disease Control and Prevention, 2012). The diagnostic accuracy heavily depends on human expertise and can be adversely impacted by the inter-observer variability and the liability imposed by large-scale diagnoses in disease-endemic/resource-constrained regions (Mitiku, Mengistu, and Gelaw, 2003). Alternative techniques such as polymerase chain reaction (PCR) and rapid diagnostic tests (RDT) are used; however, PCR analysis is limited in its performance (Hommelsheim, et al., 2014) and RDTs are less cost-effective in disease-endemic regions (Hawkes, Katsuva, and Masumbuko, 2009).
+
+Thus, malaria detection could benefit from automation using deep learning.
+
+### Deep learning for malaria detection
+
+Diagnosing blood smears manually is an intensive process that requires expertise in classifying and counting parasitized and uninfected cells. This process may not scale well, especially in regions where the right expertise is hard to find. Some advancements have been made in leveraging state-of-the-art image processing and analysis techniques to extract hand-engineered features and build machine learning-based classification models. However, these models do not scale well as more training data becomes available, and hand-engineering features is a time-consuming process.
+
+Deep learning models, or more specifically convolutional neural networks (CNNs), have proven very effective in a wide variety of computer vision tasks. (If you would like additional background knowledge on CNNs, I recommend reading [CS231n Convolutional Neural Networks for Visual Recognition][9].) Briefly, the key layers in a CNN model include convolution and pooling layers, as shown in the following figure.
+
+![A typical CNN architecture][10]
+
+A typical CNN architecture
+
+Convolution layers learn spatial hierarchical patterns from data, which are also translation-invariant, so they are able to learn different aspects of images. For example, the first convolution layer will learn small and local patterns, such as edges and corners, a second convolution layer will learn larger patterns based on the features from the first layers, and so on. This allows CNNs to automate feature engineering and learn effective features that generalize well on new data points. Pooling layers help with downsampling and dimensionality reduction.
+
+Thus, CNNs help with automated and scalable feature engineering. Also, plugging in dense layers at the end of the model enables us to perform tasks like image classification. Automated malaria detection using deep learning models like CNNs could be very effective, cheap, and scalable, especially with the advent of transfer learning and pre-trained models that work quite well, even with constraints like less data.
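+
+To make the shape arithmetic concrete, here is a small illustrative sketch (the layer sizes are examples, not the article's final architecture) showing how `'same'` padding preserves spatial dimensions while each pooling stage roughly halves them:
+
+```
+import tensorflow as tf
+
+# Illustrative only: one convolution + pooling stage on a 125x125 RGB input
+inp = tf.keras.layers.Input(shape=(125, 125, 3))
+conv = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
+                              activation='relu', padding='same')(inp)
+pool = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv)
+
+print(conv.shape)  # (None, 125, 125, 32) -- 'same' padding keeps height/width
+print(pool.shape)  # (None, 62, 62, 32)   -- 2x2 pooling roughly halves them
+```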
+
+The Rajaraman, et al., paper leverages six pre-trained models on a dataset to obtain an impressive accuracy of 95.9% in detecting malaria vs. non-infected samples. Our focus is to try some simple CNN models from scratch and a couple of pre-trained models using transfer learning to see the results we can get on the same dataset. We will use open source tools and frameworks, including Python and TensorFlow, to build our models.
+
+### The dataset
+
+The data for our analysis comes from researchers at the Lister Hill National Center for Biomedical Communications (LHNCBC), part of the National Library of Medicine (NLM), who have carefully collected and annotated the [publicly available dataset][11] of healthy and infected blood smear images. These researchers have developed a mobile [application for malaria detection][12] that runs on a standard Android smartphone attached to a conventional light microscope. They used Giemsa-stained thin blood smear slides from 150 _P. falciparum_ -infected and 50 healthy patients, collected and photographed at Chittagong Medical College Hospital, Bangladesh. The smartphone's built-in camera acquired images of slides for each microscopic field of view. The images were manually annotated by an expert slide reader at the Mahidol-Oxford Tropical Medicine Research Unit in Bangkok, Thailand.
+
+Let's briefly check out the dataset's structure. First, I will install some basic dependencies (based on the operating system being used).
+
+![Installing dependencies][13]
+
+I am using a Debian-based system on the cloud with a GPU so I can run my models faster. To view the directory structure, we must install the tree dependency (if we don't have it) using **sudo apt install tree**.
+
+![Installing the tree dependency][14]
+
+We have two folders that contain images of cells, infected and healthy. We can get further details about the total number of images by entering:
+
+
+```
+import os
+import glob
+
+base_dir = os.path.join('./cell_images')
+infected_dir = os.path.join(base_dir,'Parasitized')
+healthy_dir = os.path.join(base_dir,'Uninfected')
+
+infected_files = glob.glob(infected_dir+'/*.png')
+healthy_files = glob.glob(healthy_dir+'/*.png')
+len(infected_files), len(healthy_files)
+
+# Output
+(13779, 13779)
+```
+
+It looks like we have a balanced dataset with 13,779 malaria and 13,779 non-malaria (uninfected) cell images. Let's build a data frame from this, which we will use when we start building our datasets.
+
+
+```
+import numpy as np
+import pandas as pd
+
+np.random.seed(42)
+
+files_df = pd.DataFrame({
+    'filename': infected_files + healthy_files,
+    'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files)
+}).sample(frac=1, random_state=42).reset_index(drop=True)
+
+files_df.head()
+```
+
+![Datasets][15]
+
+### Build and explore image datasets
+
+To build deep learning models, we need training data, but we also need to test the model's performance on unseen data. We will use a 60:10:30 split for train, validation, and test datasets, respectively. We will leverage the train and validation datasets during training and check the performance of the model on the test dataset.
+
+
+```
+from sklearn.model_selection import train_test_split
+from collections import Counter
+
+train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values,
+files_df['label'].values,
+test_size=0.3, random_state=42)
+train_files, val_files, train_labels, val_labels = train_test_split(train_files,
+train_labels,
+test_size=0.1, random_state=42)
+
+print(train_files.shape, val_files.shape, test_files.shape)
+print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels))
+
+# Output
+(17361,) (1929,) (8268,)
+Train: Counter({'healthy': 8734, 'malaria': 8627})
+Val: Counter({'healthy': 970, 'malaria': 959})
+Test: Counter({'malaria': 4193, 'healthy': 4075})
+```
+
+The images will not be of equal dimensions because blood smears and cell images vary based on the human, the test method, and the orientation of the photo. Let's get some summary statistics of our training dataset to determine the optimal image dimensions (remember, we don't touch the test dataset at all!).
+
+
+```
+import cv2
+from concurrent import futures
+import threading
+
+def get_img_shape_parallel(idx, img, total_imgs):
+    if idx % 5000 == 0 or idx == (total_imgs - 1):
+        print('{}: working on img num: {}'.format(threading.current_thread().name,
+                                                  idx))
+    return cv2.imread(img).shape
+
+ex = futures.ThreadPoolExecutor(max_workers=None)
+data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
+print('Starting Img shape computation:')
+train_img_dims_map = ex.map(get_img_shape_parallel,
+[record[0] for record in data_inp],
+[record[1] for record in data_inp],
+[record[2] for record in data_inp])
+train_img_dims = list(train_img_dims_map)
+print('Min Dimensions:', np.min(train_img_dims, axis=0))
+print('Avg Dimensions:', np.mean(train_img_dims, axis=0))
+print('Median Dimensions:', np.median(train_img_dims, axis=0))
+print('Max Dimensions:', np.max(train_img_dims, axis=0))
+
+# Output
+Starting Img shape computation:
+ThreadPoolExecutor-0_0: working on img num: 0
+ThreadPoolExecutor-0_17: working on img num: 5000
+ThreadPoolExecutor-0_15: working on img num: 10000
+ThreadPoolExecutor-0_1: working on img num: 15000
+ThreadPoolExecutor-0_7: working on img num: 17360
+Min Dimensions: [46 46 3]
+Avg Dimensions: [132.77311215 132.45757733 3.]
+Median Dimensions: [130. 130. 3.]
+Max Dimensions: [385 394 3]
+```
+
+We apply parallel processing to speed up the image-read operations and, based on the summary statistics, we will resize each image to 125x125 pixels. Let's load up all of our images and resize them to these fixed dimensions.
+
+
+```
+IMG_DIMS = (125, 125)
+
+def get_img_data_parallel(idx, img, total_imgs):
+    if idx % 5000 == 0 or idx == (total_imgs - 1):
+        print('{}: working on img num: {}'.format(threading.current_thread().name,
+                                                  idx))
+    img = cv2.imread(img)
+    img = cv2.resize(img, dsize=IMG_DIMS,
+                     interpolation=cv2.INTER_CUBIC)
+    img = np.array(img, dtype=np.float32)
+    return img
+
+ex = futures.ThreadPoolExecutor(max_workers=None)
+train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
+val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)]
+test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)]
+
+print('Loading Train Images:')
+train_data_map = ex.map(get_img_data_parallel,
+[record[0] for record in train_data_inp],
+[record[1] for record in train_data_inp],
+[record[2] for record in train_data_inp])
+train_data = np.array(list(train_data_map))
+
+print('\nLoading Validation Images:')
+val_data_map = ex.map(get_img_data_parallel,
+[record[0] for record in val_data_inp],
+[record[1] for record in val_data_inp],
+[record[2] for record in val_data_inp])
+val_data = np.array(list(val_data_map))
+
+print('\nLoading Test Images:')
+test_data_map = ex.map(get_img_data_parallel,
+[record[0] for record in test_data_inp],
+[record[1] for record in test_data_inp],
+[record[2] for record in test_data_inp])
+test_data = np.array(list(test_data_map))
+
+train_data.shape, val_data.shape, test_data.shape
+
+# Output
+Loading Train Images:
+ThreadPoolExecutor-1_0: working on img num: 0
+ThreadPoolExecutor-1_12: working on img num: 5000
+ThreadPoolExecutor-1_6: working on img num: 10000
+ThreadPoolExecutor-1_10: working on img num: 15000
+ThreadPoolExecutor-1_3: working on img num: 17360
+
+Loading Validation Images:
+ThreadPoolExecutor-1_13: working on img num: 0
+ThreadPoolExecutor-1_18: working on img num: 1928
+
+Loading Test Images:
+ThreadPoolExecutor-1_5: working on img num: 0
+ThreadPoolExecutor-1_19: working on img num: 5000
+ThreadPoolExecutor-1_8: working on img num: 8267
+((17361, 125, 125, 3), (1929, 125, 125, 3), (8268, 125, 125, 3))
+```
+
+We leverage parallel processing again to speed up computations pertaining to image load and resizing. Finally, we get our image tensors of the desired dimensions, as depicted in the preceding output. We can now view some sample cell images to get an idea of how our data looks.
+
+
+```
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+plt.figure(1 , figsize = (8 , 8))
+n = 0
+for i in range(16):
+    n += 1
+    r = np.random.randint(0, train_data.shape[0], 1)
+    plt.subplot(4, 4, n)
+    plt.subplots_adjust(hspace=0.5, wspace=0.5)
+    plt.imshow(train_data[r[0]] / 255.)
+    plt.title('{}'.format(train_labels[r[0]]))
+    plt.xticks([]), plt.yticks([])
+```
+
+![Malaria cell samples][16]
+
+Based on these sample images, we can see some subtle differences between malaria and healthy cell images. We will make our deep learning models try to learn these patterns during model training.
+
+Before we can start training our models, we must set up some basic configuration settings.
+
+
+```
+BATCH_SIZE = 64
+NUM_CLASSES = 2
+EPOCHS = 25
+INPUT_SHAPE = (125, 125, 3)
+
+train_imgs_scaled = train_data / 255.
+val_imgs_scaled = val_data / 255.
+
+# encode text category labels
+from sklearn.preprocessing import LabelEncoder
+
+le = LabelEncoder()
+le.fit(train_labels)
+train_labels_enc = le.transform(train_labels)
+val_labels_enc = le.transform(val_labels)
+
+print(train_labels[:6], train_labels_enc[:6])
+
+# Output
+['malaria' 'malaria' 'malaria' 'healthy' 'healthy' 'malaria'] [1 1 1 0 0 1]
+```
+
+We fix our image dimensions, batch size, and epochs and encode our categorical class labels. The alpha version of TensorFlow 2.0 was released in March 2019, and this exercise is the perfect excuse to try it out.
+
+
+```
+import tensorflow as tf
+
+# Load the TensorBoard notebook extension (optional)
+%load_ext tensorboard.notebook
+
+tf.random.set_seed(42)
+tf.__version__
+
+# Output
+'2.0.0-alpha0'
+```
+
+### Deep learning model training
+
+In the model training phase, we will build three deep learning models, train them with our training data, and compare their performance using the validation data. We will then save these models and use them later in the model evaluation phase.
+
+#### Model 1: CNN from scratch
+
+Our first malaria detection model will build and train a basic CNN from scratch. First, let's define our model architecture.
+
+
+```
+inp = tf.keras.layers.Input(shape=INPUT_SHAPE)
+
+conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
+activation='relu', padding='same')(inp)
+pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
+conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
+activation='relu', padding='same')(pool1)
+pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
+conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3),
+activation='relu', padding='same')(pool2)
+pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)
+
+flat = tf.keras.layers.Flatten()(pool3)
+
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(flat)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=inp, outputs=out)
+model.compile(optimizer='adam',
+loss='binary_crossentropy',
+metrics=['accuracy'])
+model.summary()
+
+# Output
+Model: "model"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+input_1 (InputLayer) [(None, 125, 125, 3)] 0
+_________________________________________________________________
+conv2d (Conv2D) (None, 125, 125, 32) 896
+_________________________________________________________________
+max_pooling2d (MaxPooling2D) (None, 62, 62, 32) 0
+_________________________________________________________________
+conv2d_1 (Conv2D) (None, 62, 62, 64) 18496
+_________________________________________________________________
+...
+...
+_________________________________________________________________
+dense_1 (Dense) (None, 512) 262656
+_________________________________________________________________
+dropout_1 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_2 (Dense) (None, 1) 513
+=================================================================
+Total params: 15,102,529
+Trainable params: 15,102,529
+Non-trainable params: 0
+_________________________________________________________________
+```
+
+Based on the architecture in this code, our CNN model has three convolution and pooling layers, followed by two dense layers, and dropouts for regularization. Let's train our model.
+
+
+```
+import datetime
+
+logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs',
+datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
+tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
+reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
+patience=2, min_lr=0.000001)
+callbacks = [reduce_lr, tensorboard_callback]
+
+history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
+batch_size=BATCH_SIZE,
+epochs=EPOCHS,
+validation_data=(val_imgs_scaled, val_labels_enc),
+callbacks=callbacks,
+verbose=1)
+
+
+# Output
+Train on 17361 samples, validate on 1929 samples
+Epoch 1/25
+17361/17361 [====] - 32s 2ms/sample - loss: 0.4373 - accuracy: 0.7814 - val_loss: 0.1834 - val_accuracy: 0.9393
+Epoch 2/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.1725 - accuracy: 0.9434 - val_loss: 0.1567 - val_accuracy: 0.9513
+...
+...
+Epoch 24/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.0036 - accuracy: 0.9993 - val_loss: 0.3693 - val_accuracy: 0.9565
+Epoch 25/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.0034 - accuracy: 0.9994 - val_loss: 0.3699 - val_accuracy: 0.9559
+```
+
+We get a validation accuracy of 95.6%, which is pretty good, although our model appears to be overfitting slightly, given that our training accuracy reaches 99.9%. We can get a clear perspective on this by plotting the training and validation accuracy and loss curves.
+
+
+```
+f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
+t = f.suptitle('Basic CNN Performance', fontsize=12)
+f.subplots_adjust(top=0.85, wspace=0.3)
+
+max_epoch = len(history.history['accuracy'])+1
+epoch_list = list(range(1,max_epoch))
+ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
+ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
+ax1.set_xticks(np.arange(1, max_epoch, 5))
+ax1.set_ylabel('Accuracy Value')
+ax1.set_xlabel('Epoch')
+ax1.set_title('Accuracy')
+l1 = ax1.legend(loc="best")
+
+ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
+ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
+ax2.set_xticks(np.arange(1, max_epoch, 5))
+ax2.set_ylabel('Loss Value')
+ax2.set_xlabel('Epoch')
+ax2.set_title('Loss')
+l2 = ax2.legend(loc="best")
+```
+
+![Learning curves for basic CNN][17]
+
+Learning curves for basic CNN
+
+We can see after the fifth epoch that things don't seem to improve a whole lot overall. Let's save this model for future evaluation.
+
+
+```
+model.save('basic_cnn.h5')
+```
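+
+One hedged refinement (not used in the article, but a natural fit given how early the validation metrics plateau) would be to stop training automatically and keep the best weights, using TensorFlow's built-in **EarlyStopping** callback alongside the callbacks defined above:
+
+```
+# Hypothetical addition: halt training once val_loss stops improving and
+# restore the weights from the best epoch. Plugs into model.fit(...) above.
+early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
+                                              patience=5,
+                                              restore_best_weights=True)
+callbacks = [reduce_lr, tensorboard_callback, early_stop]
+```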
+
+#### Deep transfer learning
+
+Just like humans have an inherent capability to transfer knowledge across tasks, transfer learning enables us to utilize knowledge from previously learned tasks and apply it to newer, related ones, even in the context of machine learning or deep learning. If you are interested in doing a deep-dive on transfer learning, you can read my article "[A comprehensive hands-on guide to transfer learning with real-world applications in deep learning][18]" and my book [_Hands-On Transfer Learning with Python_][19].
+
+![Ideas for deep transfer learning][20]
+
+The idea we want to explore in this exercise is:
+
+> Can we leverage a pre-trained deep learning model (which was trained on a large dataset, like ImageNet) to solve the problem of malaria detection by applying and transferring its knowledge in the context of our problem?
+
+We will apply the two most popular strategies for deep transfer learning.
+
+ * Pre-trained model as a feature extractor
+ * Pre-trained model with fine-tuning
+
+
+
+We will be using the pre-trained VGG-19 deep learning model, developed by the Visual Geometry Group (VGG) at the University of Oxford, for our experiments. A pre-trained model like VGG-19 is trained on a huge dataset ([ImageNet][21]) with a lot of diverse image categories. Therefore, the model should have learned a robust hierarchy of features, which are spatial-, rotational-, and translation-invariant with regard to features learned by CNN models. Hence, the model, having learned a good representation of features for over a million images, can act as a good feature extractor for new images suitable for computer vision problems like malaria detection. Let's discuss the VGG-19 model architecture before unleashing the power of transfer learning on our problem.
+
+##### Understanding the VGG-19 model
+
+The VGG-19 model is a 19-layer (convolution and fully connected) deep learning network built on the ImageNet database, which was developed for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is described in their paper "[Very deep convolutional networks for large-scale image recognition][22]." The architecture of the VGG-19 model is:
+
+![VGG-19 Model Architecture][23]
+
+You can see that the network has a total of 16 convolution layers using 3x3 convolution filters, along with max pooling layers for downsampling, two fully connected hidden layers of 4,096 units each, and a final dense layer of 1,000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers, since we will be using our own fully connected dense layers to predict malaria. We are more concerned with the first five blocks, so we can leverage the VGG model as an effective feature extractor.
+
+For one model, we will use VGG-19 as a simple feature extractor by freezing all five convolution blocks to make sure their weights aren't updated after each epoch. For the last model, we will fine-tune the VGG model, unfreezing the last two blocks (Block 4 and Block 5) so that their weights are updated in each epoch (per batch of data) as we train our own model. The quick inspection sketch below shows the layer names that delimit these blocks.
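+
+The following is a small inspection sketch (my addition, not from the original workflow) that prints VGG-19's layer names, making the block boundaries referenced later (**block4_conv1** and **block5_conv1**) visible:
+
+```
+# Inspection sketch: list VGG-19's layer names and output shapes so the
+# block boundaries used below (block4_conv1, block5_conv1) are visible.
+vgg_tmp = tf.keras.applications.vgg19.VGG19(include_top=False,
+                                            weights='imagenet',
+                                            input_shape=INPUT_SHAPE)
+for layer in vgg_tmp.layers:
+    print(layer.name, layer.output_shape)
+```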
+
+#### Model 2: Pre-trained model as a feature extractor
+
+For building this model, we will leverage TensorFlow to load up the VGG-19 model and freeze the convolution blocks so we can use them as an image feature extractor. We will plug in our own dense layers at the end to perform the classification task.
+
+
+```
+vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
+input_shape=INPUT_SHAPE)
+vgg.trainable = False
+# Freeze the layers
+for layer in vgg.layers:
+    layer.trainable = False
+
+base_vgg = vgg
+base_out = base_vgg.output
+pool_out = tf.keras.layers.Flatten()(base_out)
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
+model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4),
+loss='binary_crossentropy',
+metrics=['accuracy'])
+model.summary()
+
+# Output
+Model: "model_1"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+input_2 (InputLayer) [(None, 125, 125, 3)] 0
+_________________________________________________________________
+block1_conv1 (Conv2D) (None, 125, 125, 64) 1792
+_________________________________________________________________
+block1_conv2 (Conv2D) (None, 125, 125, 64) 36928
+_________________________________________________________________
+...
+...
+_________________________________________________________________
+block5_pool (MaxPooling2D) (None, 3, 3, 512) 0
+_________________________________________________________________
+flatten_1 (Flatten) (None, 4608) 0
+_________________________________________________________________
+dense_3 (Dense) (None, 512) 2359808
+_________________________________________________________________
+dropout_2 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_4 (Dense) (None, 512) 262656
+_________________________________________________________________
+dropout_3 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_5 (Dense) (None, 1) 513
+=================================================================
+Total params: 22,647,361
+Trainable params: 2,622,977
+Non-trainable params: 20,024,384
+_________________________________________________________________
+```
+
+It is evident from this output that we have a lot of layers in our model and we will be using the frozen layers of the VGG-19 model as feature extractors only. You can use the following code to verify how many layers in our model are indeed trainable and how many total layers are present in our network.
+
+
+```
+print("Total Layers:", len(model.layers))
+print("Total trainable layers:",
+sum([1 for l in model.layers if l.trainable]))
+
+# Output
+Total Layers: 28
+Total trainable layers: 6
+```
+
+We will now train our model using similar configurations and callbacks to the ones we used in our previous model. Refer to [my GitHub repository][24] for the complete code to train the model. We observe the following plots showing the model's accuracy and loss.
+
+![Learning curves for frozen pre-trained CNN][25]
+
+Learning curves for frozen pre-trained CNN
+
+This shows that our model is not overfitting as much as our basic CNN model, but its performance is slightly lower. Let's save this model for future evaluation.
+
+
+```
+model.save('vgg_frozen.h5')
+```
+
+#### Model 3: Fine-tuned pre-trained model with image augmentation
+
+In our final model, we will fine-tune the weights of the layers in the last two blocks of our pre-trained VGG-19 model. We will also introduce the concept of image augmentation. The idea behind image augmentation is exactly what the name suggests. We load in existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of existing images. Due to these random transformations, we don't get the same images each time. We will leverage an excellent utility called **ImageDataGenerator** in **tf.keras** that can help build image augmentors.
+
+
+```
+train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
+zoom_range=0.05,
+rotation_range=25,
+width_shift_range=0.05,
+height_shift_range=0.05,
+shear_range=0.05, horizontal_flip=True,
+fill_mode='nearest')
+
+val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
+
+# build image augmentation generators
+train_generator = train_datagen.flow(train_data, train_labels_enc, batch_size=BATCH_SIZE, shuffle=True)
+val_generator = val_datagen.flow(val_data, val_labels_enc, batch_size=BATCH_SIZE, shuffle=False)
+```
+
+We will not apply any transformations on our validation dataset (except for scaling the images, which is mandatory) since we will be using it to evaluate our model performance per epoch. For a detailed explanation of image augmentation in the context of transfer learning, feel free to check out my [article][18] cited above. Let's look at some sample results from a batch of image augmentation transforms.
+
+
+```
+img_id = 0
+sample_generator = train_datagen.flow(train_data[img_id:img_id+1], train_labels[img_id:img_id+1],
+batch_size=1)
+sample = [next(sample_generator) for i in range(0,5)]
+fig, ax = plt.subplots(1,5, figsize=(16, 6))
+print('Labels:', [item[1][0] for item in sample])
+l = [ax[i].imshow(sample[i][0][0]) for i in range(0,5)]
+```
+
+![Sample augmented images][26]
+
+You can clearly see the slight variations of our images in the preceding output. We will now build our deep learning model, making sure the last two blocks of the VGG-19 model are trainable.
+
+
+```
+vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
+input_shape=INPUT_SHAPE)
+# Freeze the layers
+vgg.trainable = True
+
+set_trainable = False
+for layer in vgg.layers:
+    if layer.name in ['block5_conv1', 'block4_conv1']:
+        set_trainable = True
+    if set_trainable:
+        layer.trainable = True
+    else:
+        layer.trainable = False
+
+base_vgg = vgg
+base_out = base_vgg.output
+pool_out = tf.keras.layers.Flatten()(base_out)
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
+model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5),
+loss='binary_crossentropy',
+metrics=['accuracy'])
+
+print("Total Layers:", len(model.layers))
+print("Total trainable layers:", sum([1 for l in model.layers if l.trainable]))
+
+# Output
+Total Layers: 28
+Total trainable layers: 16
+```
+
+We reduce the learning rate in our model since we don't want to make too-large weight updates to the pre-trained layers when fine-tuning. The model's training process will be slightly different since we are using data generators, so we will be leveraging the **fit_generator(…)** function.
+
+
+```
+tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
+reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
+patience=2, min_lr=0.000001)
+
+callbacks = [reduce_lr, tensorboard_callback]
+train_steps_per_epoch = train_generator.n // train_generator.batch_size
+val_steps_per_epoch = val_generator.n // val_generator.batch_size
+history = model.fit_generator(train_generator, steps_per_epoch=train_steps_per_epoch, epochs=EPOCHS,
+validation_data=val_generator, validation_steps=val_steps_per_epoch,
+verbose=1)
+
+# Output
+Epoch 1/25
+271/271 [====] - 133s 489ms/step - loss: 0.2267 - accuracy: 0.9117 - val_loss: 0.1414 - val_accuracy: 0.9531
+Epoch 2/25
+271/271 [====] - 129s 475ms/step - loss: 0.1399 - accuracy: 0.9552 - val_loss: 0.1292 - val_accuracy: 0.9589
+...
+...
+Epoch 24/25
+271/271 [====] - 128s 473ms/step - loss: 0.0815 - accuracy: 0.9727 - val_loss: 0.1466 - val_accuracy: 0.9682
+Epoch 25/25
+271/271 [====] - 128s 473ms/step - loss: 0.0792 - accuracy: 0.9729 - val_loss: 0.1127 - val_accuracy: 0.9641
+```
+
+This looks to be our best model yet. It gives us a validation accuracy of almost 96.5% and, based on the training accuracy, it doesn't look like our model is overfitting as much as our first model. This can be verified with the following learning curves.
+
+![Learning curves for fine-tuned pre-trained CNN][27]
+
+Learning curves for fine-tuned pre-trained CNN
+
+Let's save this model so we can use it for model evaluation on our test dataset.
+
+
+```
+model.save('vgg_finetuned.h5')
+```
+
+This completes our model training phase. We are now ready to test the performance of our models on the actual test dataset!
+
+### Deep learning model performance evaluation
+
+We will evaluate the three models we built in the training phase by making predictions with them on the data from our test dataset—because just validation is not enough! We have also built a nifty utility module called **model_evaluation_utils** , which we can use to evaluate the performance of our deep learning models with relevant classification metrics. The first step is to scale our test data.
+
+
+```
+test_imgs_scaled = test_data / 255.
+test_imgs_scaled.shape, test_labels.shape
+
+# Output
+((8268, 125, 125, 3), (8268,))
+```
+
+The next step involves loading our saved deep learning models and making predictions on the test data.
+
+
+```
+# Load Saved Deep Learning Models
+basic_cnn = tf.keras.models.load_model('./basic_cnn.h5')
+vgg_frz = tf.keras.models.load_model('./vgg_frozen.h5')
+vgg_ft = tf.keras.models.load_model('./vgg_finetuned.h5')
+
+# Make Predictions on Test Data
+basic_cnn_preds = basic_cnn.predict(test_imgs_scaled, batch_size=512)
+vgg_frz_preds = vgg_frz.predict(test_imgs_scaled, batch_size=512)
+vgg_ft_preds = vgg_ft.predict(test_imgs_scaled, batch_size=512)
+
+basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+for pred in basic_cnn_preds.ravel()])
+vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+for pred in vgg_frz_preds.ravel()])
+vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+for pred in vgg_ft_preds.ravel()])
+```
+
+The final step is to leverage our **model_evaluation_utils** module and check the performance of each model with relevant classification metrics.
+
+
+```
+import model_evaluation_utils as meu
+import pandas as pd
+
+basic_cnn_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=basic_cnn_pred_labels)
+vgg_frz_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_frz_pred_labels)
+vgg_ft_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_ft_pred_labels)
+
+pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics],
+index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned'])
+```
+
+![Model accuracy][28]
+
+It looks like our third model performs best on the test dataset, giving a model accuracy and an F1-score of 96%, which is pretty good and quite comparable to the more complex models from the research paper and articles cited earlier.
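+
+The **model_evaluation_utils** module itself lives in the GitHub repository linked above. For readers who want a self-contained substitute, here is a minimal sketch of what a `get_metrics`-style helper could look like using scikit-learn; the actual implementation in the repository may differ, and treating `'malaria'` as the positive class is an assumption that matches the label encoding used earlier:
+
+```
+# Hypothetical stand-in for model_evaluation_utils.get_metrics -- the real
+# module is in the author's repository and may differ.
+import numpy as np
+from sklearn.metrics import (accuracy_score, precision_score,
+                             recall_score, f1_score)
+
+def get_metrics(true_labels, predicted_labels):
+    # 'malaria' is treated as the positive class (assumption)
+    return {
+        'Accuracy': np.round(accuracy_score(true_labels, predicted_labels), 4),
+        'Precision': np.round(precision_score(true_labels, predicted_labels,
+                                              pos_label='malaria'), 4),
+        'Recall': np.round(recall_score(true_labels, predicted_labels,
+                                        pos_label='malaria'), 4),
+        'F1 Score': np.round(f1_score(true_labels, predicted_labels,
+                                      pos_label='malaria'), 4)
+    }
+```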
+
+### Conclusion
+
+Malaria detection is not an easy procedure, and the availability of qualified personnel around the globe is a serious concern in the diagnosis and treatment of cases. We looked at an interesting real-world medical imaging case study of malaria detection. Easy-to-build, open source techniques leveraging AI can give us state-of-the-art accuracy in detecting malaria, thus enabling AI for social good.
+
+I encourage you to check out the articles and research papers mentioned in this article, without which it would have been impossible for me to conceptualize and write it. If you are interested in running or adopting these techniques, all the code used in this article is available on [my GitHub repository][24]. Remember to download the data from the [official website][11].
+
+Let's hope for more adoption of open source AI capabilities in healthcare to make it less expensive and more accessible for everyone around the world!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/detecting-malaria-deep-learning
+
+作者:[Dipanjan (DJ) Sarkar (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/djsarkar
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC
+[2]: https://opensource.com/sites/default/files/uploads/malaria1_python-tensorflow.png (Python and TensorFlow)
+[3]: https://opensource.com/sites/default/files/uploads/malaria2_malaria-heat-map.png (Malaria heat map)
+[4]: https://www.who.int/features/factfiles/malaria/en/
+[5]: https://peerj.com/articles/4568/
+[6]: https://blog.insightdatascience.com/https-blog-insightdatascience-com-malaria-hero-a47d3d5fc4bb
+[7]: https://www.pyimagesearch.com/2018/12/03/deep-learning-and-medical-image-analysis-with-keras/
+[8]: https://opensource.com/sites/default/files/uploads/malaria3_blood-smear.png (Blood smear workflow for Malaria detection)
+[9]: http://cs231n.github.io/convolutional-networks/
+[10]: https://opensource.com/sites/default/files/uploads/malaria4_cnn-architecture.png (A typical CNN architecture)
+[11]: https://ceb.nlm.nih.gov/repositories/malaria-datasets/
+[12]: https://www.ncbi.nlm.nih.gov/pubmed/29360430
+[13]: https://opensource.com/sites/default/files/uploads/malaria5_dependencies.png (Installing dependencies)
+[14]: https://opensource.com/sites/default/files/uploads/malaria6_tree-dependency.png (Installing the tree dependency)
+[15]: https://opensource.com/sites/default/files/uploads/malaria7_dataset.png (Datasets)
+[16]: https://opensource.com/sites/default/files/uploads/malaria8_cell-samples.png (Malaria cell samples)
+[17]: https://opensource.com/sites/default/files/uploads/malaria9_learningcurves.png (Learning curves for basic CNN)
+[18]: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
+[19]: https://github.com/dipanjanS/hands-on-transfer-learning-with-python
+[20]: https://opensource.com/sites/default/files/uploads/malaria10_transferideas.png (Ideas for deep transfer learning)
+[21]: http://image-net.org/index
+[22]: https://arxiv.org/pdf/1409.1556.pdf
+[23]: https://opensource.com/sites/default/files/uploads/malaria11_vgg-19-model-architecture.png (VGG-19 Model Architecture)
+[24]: https://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/tree/master/os_malaria_detection/
+[25]: https://opensource.com/sites/default/files/uploads/malaria12_learningcurves.png (Learning curves for frozen pre-trained CNN)
+[26]: https://opensource.com/sites/default/files/uploads/malaria13_sampleimages.png (Sample augmented images)
+[27]: https://opensource.com/sites/default/files/uploads/malaria14_learningcurves.png (Learning curves for fine-tuned pre-trained CNN)
+[28]: https://opensource.com/sites/default/files/uploads/malaria15_modelaccuracy.png (Model accuracy)
diff --git a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
new file mode 100644
index 0000000000..ee3a82ca03
--- /dev/null
+++ b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
@@ -0,0 +1,238 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install MySQL in Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How to Install MySQL in Ubuntu Linux
+======
+
+_**Brief: This tutorial teaches you to install MySQL in Ubuntu based Linux distributions. You’ll also learn how to verify your install and how to connect to MySQL for the first time.**_
+
+**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open-source**.
+
+**MySQL** uses **relational databases** (basically **tabular data** ). It is really easy to store, organize and access data this way. For managing data, **SQL** ( **Structured Query Language** ) is used.
+
+In this article I’ll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let’s get to it!
+
+### Installing MySQL in Ubuntu
+
+![][3]
+
+I’ll be covering two ways you can install **MySQL** in Ubuntu 18.04:
+
+ 1. Install MySQL from the Ubuntu repositories. Very basic, not the latest version (5.7)
+ 2. Install MySQL using the official repository. There is a bigger step that you’ll have to add to the process, but nothing to worry about. Also, you’ll have the latest version (8.0)
+
+
+
+When needed, I’ll provide screenshots to guide you. For most of this guide, I’ll be entering commands in the **terminal** ( **default hotkey** : CTRL+ALT+T). Don’t be scared of it!
+
+#### Method 1. Installing MySQL from the Ubuntu repositories
+
+First of all, make sure your repositories are updated by entering:
+
+```
+sudo apt update
+```
+
+Now, to install **MySQL 5.7** , simply type:
+
+```
+sudo apt install mysql-server -y
+```
+
+That’s it! Simple and efficient.
+
+#### Method 2. Installing MySQL using the official repository
+
+Although this method has a few more steps, I’ll go through them one by one and I’ll try writing down clear notes.
+
+The first step is browsing to the [download page][4] of the official MySQL website.
+
+![][5]
+
+Here, go down to the **download link** for the **DEB Package**.
+
+![][6]
+
+Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.
+
+Now go back to the terminal. We'll use the [curl][7] command to download the package:
+
+```
+curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
+```
+
+The URL in the **curl** command above is the link I copied from the website. It might be different based on the current version of MySQL. Let's use **dpkg** to start installing MySQL:
+
+```
+sudo dpkg -i mysql-apt-config*
+```
+
+Update your repositories:
+
+```
+sudo apt update
+```
+
+To actually install MySQL, we’ll use the same command as in the first method:
+
+```
+sudo apt install mysql-server -y
+```
+
+Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.
+
+![][8]
+
+Press **Enter**. This should prompt you to enter a **password**. You are basically setting the root password for MySQL. Don't confuse it with the [root password of your Ubuntu][9] system.
+
+![][10]
+
+Type in a password and press **Tab** to select **Ok**. Press **Enter**. You'll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **Ok**. Press **Enter**.
+
+![][11]
+
+Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **Ok** and **Enter** again:
+
+![][12]
+
+Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.
+
+That’s it! You have successfully installed MySQL.
+
+#### Verify your MySQL installation
+
+To **verify** that MySQL installed correctly, use:
+
+```
+sudo systemctl status mysql.service
+```
+
+This will show some information about the service:
+
+![][13]
+
+You should see **Active: active (running)** in there somewhere. If you don't, use the following command to start the **service**:
+
+```
+sudo systemctl start mysql.service
+```
+
+#### Configuring/Securing MySQL
+
+For a new install, you should run the provided command for security-related updates. That’s:
+
+```
+sudo mysql_secure_installation
+```
+
+Doing so will first of all ask you if you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you'll have to select a minimum password strength (**0 – Low, 1 – Medium, 2 – High**). You won't be able to input any password that doesn't respect the selected rules. If you don't have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter**, then choose a **strength level** for your password and input the one you want to use. If successful, you'll continue the **securing** process; otherwise, you'll have to re-enter a password.
+
+If, however, you do not want this feature (I don't), just press **Enter** or **any other key** to skip using it.
+
+For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). They are (in this order): **remove anonymous user, disallow root login remotely, remove test database and access to it, reload privilege tables now**.
+
+#### Connecting to & Disconnecting from the MySQL Server
+
+To be able to run SQL queries, you’ll first have to connect to the server using MySQL and use the MySQL prompt. The command for doing this is:
+
+```
+mysql -h host_name -u user -p
+```
+
+ * **-h** is used to specify a **host name** (if the server is located on another machine; if it isn’t, just omit it)
+ * **-u** mentions the **user**
+ * **-p** specifies that you want to input a **password**.
+
+
+
+Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:
+
+```
+mysql -u test_user -p1234
+```
+
+If you successfully inputted the required parameters, you'll be greeted by the **MySQL shell prompt** (**mysql>**):
+
+![][14]
+
+To **disconnect** from the server and **leave** the mysql prompt, type:
+
+```
+QUIT
+```
+
+Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.
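+
+If you only need to run a single statement, you don't have to stay in the interactive prompt at all: the client's **-e** option runs a query and returns you to the shell. A quick sketch, reusing the hypothetical **test_user** from above:
+
+```
+mysql -u test_user -p -e "SHOW DATABASES;"
+```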
+
+You can also output info about the **version** with a simple command:
+
+```
+sudo mysqladmin -u root version -p
+```
+
+If you want to see a **list of options** , use:
+
+```
+mysql --help
+```
+
+#### Uninstalling MySQL
+
+You may decide that you want to use a newer release, or you may simply want to stop using MySQL. Either way, first make sure you have backed up any databases you want to keep. One way is with **mysqldump**; a sketch for a hypothetical database named **mydb**:
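+
+```
+mysqldump -u root -p mydb > mydb_backup.sql
+```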
+
+First, disable the service:
+
+```
+sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
+```
+
+With your backups safe, you can uninstall MySQL by running:
+
+```
+sudo apt purge mysql*
+```
+
+To clean up dependencies:
+
+```
+sudo apt autoremove
+```
+
+**Wrapping Up**
+
+In this article, I’ve covered **installing MySQL** in Ubuntu Linux. I’d be glad if this guide helps struggling users and beginners.
+
+Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We're eager to receive any feedback, impressions or suggestions. Thanks for reading, and don't hesitate to experiment with this incredible tool!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-mysql-ubuntu/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://www.mysql.com/
+[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
+[4]: https://dev.mysql.com/downloads/repo/apt/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
+[7]: https://linuxhandbook.com/curl-command-examples/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
+[9]: https://itsfoss.com/change-password-ubuntu/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
diff --git a/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md b/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md
new file mode 100644
index 0000000000..a2472dbc92
--- /dev/null
+++ b/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md
@@ -0,0 +1,531 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Using pipes and message queues
+======
+Learn how processes synchronize with each other in Linux.
+![Chat bubbles][1]
+
+This is the second article in a series about [interprocess communication][2] (IPC) in Linux. The [first article][3] focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a _write end_ for writing bytes, and a _read end_ for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.
+
+Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at memory queues, which have fallen out of fashion—but undeservedly so.
+
+The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and memory queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.
+
+Consider the [man pages for the **mq_open**][4] function, which belongs to the memory queue API. These pages include a section on [Attributes][5] with this small table:
+
+Interface | Attribute | Value
+---|---|---
+mq_open() | Thread safety | MT-Safe
+
+The value **MT-Safe** (with **MT** for multi-threaded) means that the **mq_open** function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the _same_ process, such a condition cannot arise among threads in different processes. The **MT-Safe** attribute assures that a race condition does not arise in invocations of **mq_open**. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
+
+### Unnamed pipes
+
+Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar **|** represents an unnamed pipe at the command line. Assume **%** is the command line prompt, and consider this command:
+
+
+```
+% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
+```
+
+The _sleep_ and _echo_ utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting _Hello, world!_ appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the _sleep_ and _echo_ processes have exited. What's going on?
+
+In the vertical-bar syntax from the command line, the process to the left ( _sleep_ ) is the writer, and the process to the right ( _echo_ ) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.
+
+In the contrived example, the _sleep_ process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the _echo_ process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the _sleep_ and _echo_ processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.
+
+Here is a more useful example using two unnamed pipes. Suppose that the file _test.dat_ looks like this:
+
+
+```
+this
+is
+the
+way
+the
+world
+ends
+```
+
+The command:
+
+
+```
+% cat test.dat | sort | uniq
+```
+
+pipes the output from the _cat_ (concatenate) process into the _sort_ process to produce sorted output, and then pipes the sorted output into the _uniq_ process to eliminate duplicate records (in this case, the two occurrences of **the** reduce to one):
+
+
+```
+ends
+is
+the
+this
+way
+world
+```
+
+The scene now is set for a program with two processes that communicate through an unnamed pipe.
+
+#### Example 1. Two processes communicating through an unnamed pipe.
+
+
+```
+#include <sys/wait.h> /* wait */
+#include <stdio.h>
+#include <stdlib.h> /* exit functions */
+#include <unistd.h> /* read, write, pipe, _exit */
+#include <string.h>
+
+#define ReadEnd 0
+#define WriteEnd 1
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /** failure **/
+}
+
+int main() {
+  int pipeFDs[2]; /* two file descriptors */
+  char buf; /* 1-byte buffer */
+  const char* msg = "Nature's first green is gold\n"; /* bytes to write */
+
+  if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
+  pid_t cpid = fork(); /* fork a child process */
+  if (cpid < 0) report_and_exit("fork"); /* check for failure */
+
+  if (0 == cpid) { /*** child ***/
+    close(pipeFDs[WriteEnd]); /* child reads, doesn't write */
+
+    while (read(pipeFDs[ReadEnd], &buf, 1) > 0) /* read until end of byte stream */
+      write(STDOUT_FILENO, &buf, sizeof(buf)); /* echo to the standard output */
+
+    close(pipeFDs[ReadEnd]); /* close the ReadEnd: all done */
+    _exit(0); /* exit and notify parent at once */
+  }
+  else { /*** parent ***/
+    close(pipeFDs[ReadEnd]); /* parent writes, doesn't read */
+
+    write(pipeFDs[WriteEnd], msg, strlen(msg)); /* write the bytes to the pipe */
+    close(pipeFDs[WriteEnd]); /* done writing: generate eof */
+
+    wait(NULL); /* wait for child to exit */
+    exit(0); /* exit normally */
+  }
+  return 0;
+}
+```
+
+The _pipeUN_ program above uses the system function **fork** to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function **fork** works:
+
+ * The **fork** function, called in the _parent_ process, returns **-1** to the parent in case of failure. In the _pipeUN_ example, the call is:
+
+```
+pid_t cpid = fork(); /* called in parent */
+```
+
+The returned value is stored, in this example, in the variable **cpid** of integer type **pid_t**. (Every process has its own _process ID_, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full _process table_, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if these are not harvested.
+ * If the **fork** call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the _same_ code that follows the call to **fork**. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to **fork** returns:
+   * Zero to the child process
+   * The child's process ID to the parent
+ * An _if/else_ or equivalent construct typically is used after a successful **fork** call to segregate code meant for the parent from code meant for the child. In this example, the construct is:
+
+```
+if (0 == cpid) { /*** child ***/
+  ...
+}
+else { /*** parent ***/
+  ...
+}
+```
+
+If forking a child succeeds, the _pipeUN_ program proceeds as follows. There is an integer array:
+```
+int pipeFDs[2]; /* two file descriptors */
+```
+to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element **pipeFDs[0]** is the file descriptor for the read end, and the array element **pipeFDs[1]** is the file descriptor for the write end.) A successful call to the system **pipe** function, made immediately before the call to **fork** , populates the array with the two file descriptors:
+```
+if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
+```
+The parent and the child now have copies of both file descriptors, but the _separation of concerns_ pattern means that each process requires exactly one of the descriptors. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child _if_ -clause code, therefore, closes the pipe's write end:
+```
+close(pipeFDs[WriteEnd]); /* called in child code */
+```
+and the first statement in the parent _else_ -clause code closes the pipe's read end:
+```
+close(pipeFDs[ReadEnd]); /* called in parent code */
+```
+The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.
+
+One more aspect of the program needs clarification: the call to the **wait** function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short _pipeUN_ program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.
+
+What if the parent terminates before the child? In this case, unless precautions are taken, the child becomes and remains a _zombie_ process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination:
+```
+signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
+```
+A second approach is to have the parent execute a **wait** on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the _pipeUN_ program, where the parent code has this call:
+```
+wait(NULL); /* called in parent */
+```
+This call to **wait** means _wait until the termination of any child occurs_ , and in the _pipeUN_ program, there is only one child process. (The **NULL** argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible **waitpid** function for fine-grained control, e.g., for specifying a particular child process among several.
+
+The _pipeUN_ program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular **exit** function. By contrast, the child terminates with a call to the **_exit** variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.
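+
+To see the program in action, compiling and running it is straightforward; a sketch, assuming the source is saved as _pipeUN.c_:
+
+```
+% gcc -o pipeUN pipeUN.c
+% ./pipeUN ## prints: Nature's first green is gold
+```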
+
+If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes:
+```
+foo bar
+```
+to a pipe and process P2 concurrently writes:
+```
+baz baz
+```
+to the same pipe, it seems that the pipe contents might be something arbitrary, such as:
+```
+baz foo baz bar
+```
+The POSIX standard ensures that writes are not interleaved so long as no write exceeds **PIPE_BUF** bytes. On Linux systems, **PIPE_BUF** is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
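+
+Because **PIPE_BUF** can vary across systems, you need not take the 4,096 figure on faith; the **getconf** utility reports it, taking a pathname argument because the limit is tied to the filesystem:
+
+```
+% getconf PIPE_BUF /tmp ## 4096 on a typical Linux system
+```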
+
+### Named pipes
+
+An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.
+
+Let's look at another command line example to get the gist of named pipes. Here are the steps:
+
+ * Open two terminals. The working directory should be the same for both.
+ * In one of the terminals, enter these two commands (the prompt again is **%**, and my comments start with **##**):
+
+```
+% mkfifo tester ## creates a backing file named tester
+% cat tester    ## type the pipe's contents to stdout
+```
+
+At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.
+ * In the second terminal, enter the command:
+
+```
+% cat > tester ## redirect keyboard input to the pipe
+hello, world!  ## then hit Return key
+bye, bye       ## ditto
+               ## terminate session with a Control-C
+```
+
+Whatever is typed into this terminal is echoed in the other. Once **Ctrl+C** is entered, the regular command line prompt returns in both terminals: the pipe has been closed.
+ * Clean up by removing the file that implements the named pipe:
+
+```
+% unlink tester
+```
+
+
+
+As the utility's name _mkfifo_ implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named **mkfifo** that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.
+
+#### Example 2. The _fifoWriter_ program
+
+
+```
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#define MaxLoops 12000 /* outer loop */
+#define ChunkSize 16 /* how many written at a time */
+#define IntsPerChunk 4 /* four 4-byte ints per chunk */
+#define MaxZs 250 /* max microseconds to sleep */
+
+int main() {
+  const char* pipeName = "./fifoChannel";
+  mkfifo(pipeName, 0666); /* read/write for user/group/others */
+  int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
+  if (fd < 0) return -1; /* can't go on */
+
+  int i;
+  for (i = 0; i < MaxLoops; i++) { /* write MaxLoops times */
+    int j;
+    for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize chunks */
+      int k;
+      int chunk[IntsPerChunk];
+      for (k = 0; k < IntsPerChunk; k++)
+        chunk[k] = rand();
+      write(fd, chunk, sizeof(chunk));
+    }
+    usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
+  }
+
+  close(fd); /* close pipe: generates an end-of-stream marker */
+  unlink(pipeName); /* unlink from the implementing file */
+  printf("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
+
+  return 0;
+}
+```
+
+The _fifoWriter_ program above can be summarized as follows:
+
+ * The program creates a named pipe for writing:
+
+```
+mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
+int fd = open(pipeName, O_CREAT | O_WRONLY);
+```
+
+where **pipeName** is the name of the backing file passed to **mkfifo** as the first argument. The named pipe then is opened with the by-now familiar call to the **open** function, which returns a file descriptor.
+ * For a touch of realism, the _fifoWriter_ does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.
+ * After closing the named pipe, the _fifoWriter_ also unlinks the file:
+
+```
+close(fd); /* close pipe: generates end-of-stream marker */
+unlink(pipeName); /* unlink from the implementing file */
+```
+
+The system reclaims the backing file once every process connected to the pipe has performed the unlink operation. In this example, there are only two such processes: the _fifoWriter_ and the _fifoReader_, both of which do an _unlink_ operation.
+
+
+
+The two programs should be executed in different terminals with the same working directory. However, the _fifoWriter_ should be started before the _fifoReader_ , as the former creates the pipe. The _fifoReader_ then accesses the already created named pipe.
+
+#### Example 3. The _fifoReader_ program
+
+
+```
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdio.h>
+
+unsigned is_prime(unsigned n) { /* not pretty, but efficient */
+  if (n <= 3) return n > 1;
+  if (0 == (n % 2) || 0 == (n % 3)) return 0;
+
+  unsigned i;
+  for (i = 5; (i * i) <= n; i += 6)
+    if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
+
+  return 1; /* found a prime! */
+}
+
+int main() {
+  const char* file = "./fifoChannel";
+  int fd = open(file, O_RDONLY);
+  if (fd < 0) return -1; /* no point in continuing */
+  unsigned total = 0, primes_count = 0;
+
+  while (1) {
+    int next;
+
+    ssize_t count = read(fd, &next, sizeof(int));
+    if (0 == count) break; /* end of stream */
+    else if (count == sizeof(int)) { /* read a 4-byte int value */
+      total++;
+      if (is_prime(next)) primes_count++;
+    }
+  }
+
+  close(fd); /* close pipe from read end */
+  unlink(file); /* unlink from the underlying file */
+  printf("Received ints: %u, primes: %u\n", total, primes_count);
+
+  return 0;
+}
+```
+
+The _fifoReader_ program above can be summarized as follows:
+
+ * Because the _fifoWriter_ creates the named pipe, the _fifoReader_ needs only the standard call **open** to access the pipe through the backing file:
+
+```
+const char* file = "./fifoChannel";
+int fd = open(file, O_RDONLY);
+```
+
+The file opens as read-only.
+ * The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The **read** call:
+
+```
+ssize_t count = read(fd, &next, sizeof(int));
+```
+
+returns 0 to indicate end-of-stream, in which case the _fifoReader_ breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.
+ * After reading a 4-byte integer, the _fifoReader_ checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
+
+
+
+On repeated sample runs, the _fifoReader_ successfully read all of the bytes that the _fifoWriter_ wrote. This is not surprising. The two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and, therefore, in wide use.
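+
+To reproduce the run shown below, both programs first must be compiled; here's a sketch, assuming the sources are saved as _fifoWriter.c_ and _fifoReader.c_:
+
+```
+% gcc -o fifoWriter fifoWriter.c
+% gcc -o fifoReader fifoReader.c
+```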
+
+Here is the output from the two programs, each launched from a separate terminal but with the same working directory:
+
+
+```
+% ./fifoWriter
+768000 ints sent to the pipe.
+###
+% ./fifoReader
+Received ints: 768000, primes: 37682
+```
+
+### Message queues
+
+Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.
+
+As the name suggests, a message queue is a sequence of messages, each of which has two parts:
+
+ * The payload, which is an array of bytes ( **char** in C)
+ * A type, given as a positive integer value; types categorize messages for flexible retrieval
+
+
+
+Consider the following depiction of a message queue, with each message labeled with an integer type:
+
+
+```
+          +-+    +-+    +-+    +-+
+sender--->|3|--->|2|--->|2|--->|1|--->receiver
+          +-+    +-+    +-+    +-+
+```
+
+Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
+
+The _mqueue_ example consists of two programs, the _sender_ that writes to the message queue and the _receiver_ that reads from this queue. Both programs include the header file _queue.h_ shown below:
+
+#### Example 4. The header file _queue.h_
+
+
+```
+#define ProjectId 123
+#define PathName "queue.h" /* any existing, accessible file would do */
+#define MsgLen 4
+#define MsgCount 6
+
+typedef struct {
+  long type;                /* must be of type long */
+  char payload[MsgLen + 1]; /* bytes in the message */
+} queuedMessage;
+```
+
+The header file defines a structure type named **queuedMessage** , with **payload** (byte array) and **type** (integer) fields. This file also defines symbolic constants (the **#define** statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The **ProjectId** can be any positive integer value, and the **PathName** must be an existing, accessible file—in this case, the file _queue.h_. The setup statements in both the _sender_ and the _receiver_ programs are:
+
+
+```
+key_t key = ftok(PathName, ProjectId); /* generate key */
+int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
+```
+
+The ID **qid** is, in effect, the counterpart of a file descriptor for message queues.
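+
+Because System V queues live in the kernel rather than in any one process, they can be inspected from the shell while they exist. The standard **ipcs** and **ipcrm** utilities (not part of the example programs) are handy here:
+
+```
+% ipcs -q           ## list current message queues, with key and msqid
+% ipcrm -q <msqid>  ## remove a queue by ID, should one be left behind
+```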
+
+#### Example 5. The message _sender_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include "queue.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  key_t key = ftok(PathName, ProjectId);
+  if (key < 0) report_and_exit("couldn't get key...");
+
+  int qid = msgget(key, 0666 | IPC_CREAT);
+  if (qid < 0) report_and_exit("couldn't get queue id...");
+
+  char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
+  int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
+  int i;
+  for (i = 0; i < MsgCount; i++) {
+    /* build the message */
+    queuedMessage msg;
+    msg.type = types[i];
+    strcpy(msg.payload, payloads[i]);
+
+    /* send the message */
+    msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT); /* don't block */
+    printf("%s sent as type %i\n", msg.payload, (int) msg.type);
+  }
+  return 0;
+}
+```
+
+The _sender_ program above sends out six messages, two each of a specified type: the first two messages are of type 1, the next two of type 2, and the last two of type 3. The sending statement:
+
+
+```
+msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
+```
+
+is configured to be non-blocking (the flag **IPC_NOWAIT** ) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure. The _receiver_ program below also receives messages using the **IPC_NOWAIT** flag.
+
+#### Example 6. The message _receiver_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include "queue.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
+  if (key < 0) report_and_exit("key not gotten...");
+
+  int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
+  if (qid < 0) report_and_exit("no access to queue...");
+
+  int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
+  int i;
+  for (i = 0; i < MsgCount; i++) {
+    queuedMessage msg; /* defined in queue.h */
+    if (msgrcv(qid, &msg, sizeof(msg), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
+      puts("msgrcv trouble...");
+    printf("%s received as type %i\n", msg.payload, (int) msg.type);
+  }
+
+  /** remove the queue **/
+  if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
+    report_and_exit("trouble removing queue...");
+
+  return 0;
+}
+```
+
+The _receiver_ program does not create the message queue, although the API suggests as much. In the _receiver_ , the call:
+
+
+```
+int qid = msgget(key, 0666 | IPC_CREAT);
+```
+
+is misleading because of the **IPC_CREAT** flag, but this flag really means _create if needed, otherwise access_. The _sender_ program calls **msgsnd** to send messages, whereas the _receiver_ calls **msgrcv** to retrieve them. In this example, the _sender_ sends the messages in the order 1-1-2-2-3-3, but the _receiver_ then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior:
+
+
+```
+% ./sender
+msg1 sent as type 1
+msg2 sent as type 1
+msg3 sent as type 2
+msg4 sent as type 2
+msg5 sent as type 3
+msg6 sent as type 3
+
+% ./receiver
+msg5 received as type 3
+msg1 received as type 1
+msg3 received as type 2
+msg2 received as type 1
+msg6 received as type 3
+msg4 received as type 2
+```
+
+The output above shows that the _sender_ and the _receiver_ can be launched from the same terminal. The output also shows that the message queue persists even after the _sender_ process creates the queue, writes to it, and exits. The queue goes away only after the _receiver_ process explicitly removes it with the call to **msgctl** :
+
+
+```
+if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
+```
+
+### Wrapping up
+
+The pipes and message queue APIs are fundamentally _unidirectional_ : one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen in popularity—but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/interprocess-communication-linux-channels
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
+[2]: https://en.wikipedia.org/wiki/Inter-process_communication
+[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
+[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
+[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
+[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
+[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
+[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
+[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
diff --git a/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
new file mode 100644
index 0000000000..04c1feb5ba
--- /dev/null
+++ b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Foundation Training Courses Sale & Discount Coupon)
+[#]: via: (https://itsfoss.com/linux-foundation-discount-coupon/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Linux Foundation Training Courses Sale & Discount Coupon
+======
+
+Linux Foundation is the non-profit organization that employs Linux creator Linus Torvalds and manages the development of the Linux kernel. Linux Foundation aims to promote the adoption of Linux and Open Source in the industry and it is doing a great job in this regard.
+
+Open Source jobs are in demand and no one knows it better than the Linux Foundation, the official Linux organization. This is why the Linux Foundation provides a number of training and certification courses on Linux related technology. You can browse the [entire course offering on the Linux Foundation's training webpage][1].
+
+### Linux Foundation Latest Offer: 40% off on all courses [Limited Time]
+
+At present, the Linux Foundation is running some great offers for sysadmin, DevOps and cloud professionals.
+
+It is offering a massive discount of 40% on the entire range of its e-learning courses and certification bundles, including the growing catalog of cloud and DevOps e-learning courses like Kubernetes!
+
+Just use coupon code **APRIL40** at checkout to get your discount.
+
+[Linux Foundation 40% Off (Coupon Code APRIL40)][2]
+
+_Do note that this offer is valid till 22nd April 2019 only._
+
+### Linux Foundation Discount Coupon [Valid all the time]
+
+You can get a 16% off on any training or certification course provided by The Linux Foundation at any given time. All you have to do is to use the coupon code **FOSS16** at the checkout page.
+
+Note that it might not be combinable with the sysadmin day offer.
+
+[Get 16% off on Linux Foundation Courses with FOSS16 Code][1]
+
+This article contains affiliate links. Please read our [affiliate policy][3].
+
+#### Should you get certified?
+
+![][4]
+
+This is the question I have been asked regularly. Are Linux certifications worth it? The short answer is yes.
+
+As per the [open source jobs report in 2018][5], over 80% of open source professionals said that certifications helped with their careers. Certifications enable you to demonstrate technical knowledge to potential employers and thus certifications make you more employable in general.
+
+Almost half of the hiring managers said that employing certified open source professionals is a priority for them.
+
+Certifications from a reputed authority like the Linux Foundation, Red Hat or LPI are particularly helpful when you are a fresh graduate or if you want to switch to a new domain in your career.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-foundation-discount-coupon/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://shareasale.com/u.cfm?d=507759&m=59485&u=747593&afftrack=
+[2]: http://shrsl.com/1k5ug
+[3]: https://itsfoss.com/affiliate-policy/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/07/linux-foundation-training-certification-discount.png?ssl=1
+[5]: https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
diff --git a/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md
new file mode 100644
index 0000000000..50f4981c08
--- /dev/null
+++ b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 3)
+[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3)
+[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
+
+Linux Server Hardening Using Idempotency with Ansible: Part 3
+======
+
+![][1]
+
+[Creative Commons Zero][2]
+
+In the previous articles, we introduced idempotency as a way to approach your server’s security posture and looked at some specific Ansible examples, including the kernel, system accounts, and IPtables. In this final article of the series, we’ll look at a few more server-hardening examples and talk a little more about how the idempotency playbook might be used.
+
+#### **Time**
+
+Due to its reduced functionality, and therefore attack surface, the preference amongst a number of OSs has been to introduce “chronyd” over “ntpd”. If you're new to “chrony”, fret not: it still uses the NTP (Network Time Protocol) that we all know and love, but in a more secure fashion.
+
+The first thing I do with Ansible within the “chrony.conf” file is alter the “bind address”, and if my memory serves, there's also a “command port” option. These config options allow chrony to listen only on the localhost. In other words, you are still syncing as usual with other upstream time servers (just as NTP does), but no remote servers can query your time services; only your local machine has access.
+
+There's more information on “bindcmdaddress 127.0.0.1” and “cmdport 0” on this chrony page () under “2.5. How can I make chronyd more secure?”, which you should read for clarity. The premise behind the comment on that page is a good idea: “you can disable the internet command sockets completely by adding cmdport 0 to the configuration file”.
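+
+As a sketch, the two hardening lines discussed above sit in the chrony config file (not a full config, just the relevant options):
+
+```
+# /etc/chrony.conf – restrict the command socket to the local machine
+bindcmdaddress 127.0.0.1
+cmdport 0
+```
+
+After restarting the service, something like `sudo ss -ulpn | grep chronyd` should confirm that nothing is listening beyond the localhost addresses.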
+
+Additionally, I would also focus on securing the file permissions for chrony and insist that the service starts as expected, just like the syslog config above. Otherwise, make sure that your time sources are sane, have a degree of redundancy with multiple sources set up, and then copy the whole config file over using Ansible.
+
+#### **Logging**
+
+You can clearly affect the level of detail included in the logs from a number of pieces of software on a server. Thinking back to what we've looked at in relation to syslog already, you can also tweak that application's config to your needs using Ansible, in addition to the example Ansible above.
+
+#### **PAM**
+
+Apparently PAM (Pluggable Authentication Modules) has been a part of Linux since 1997. It is undeniably useful (a common use is that you can force SSH to use it for password logins, as per the SSH YAML file above). It is extensible and sophisticated, and can perform useful functions such as preventing brute force attacks on password logins using a clever rate limiting system. The syntax varies a little between OSes, but if you have the time, then getting PAM working well (even if you're only using SSH keys and not passwords for your logins) is a worthwhile effort. Attackers like having their own users on a system; among lots of usernames, something innocuous such as “webadmin” might be easy to miss on a server, and PAM can help you out in this respect.
+
+#### **Auditd**
+
+We've looked at logging a little already, but what about capturing every “system call” that a kernel makes? The Linux kernel is a super-busy component of any system, and logging almost every single thing that a system does is an excellent way of providing post-event forensics. This article will hopefully shed some light on where to begin: . Note the comments in that article about performance: there's little point in paying extra for compute and disk IO resource because you've misconfigured your logging, so my advice would be to spend some time getting it correct.
+
+For concerns over disk space, I will usually change a few lines in the file “/etc/audit/auditd.conf” in order to prevent, firstly, too many log files being created and, secondly, logs that grow very large without being rotated. This is also on the proviso that logs are being ingested upstream via another mechanism too. Clearly, the file permissions and the service starting are also the basics you need to cover here. Generally, file permissions for auditd are tight as it's a “root”-oriented service, so there are fewer changes needed here.
+
+#### **Filesystems**
+
+With a little reading, you can discover which filesystems are made available to your OS by default. You should disable these (at the “modprobe.d” file level) with Ansible to prevent weird and wonderful things being attached unwittingly to your servers. You are reducing the attack surface with this approach. The Ansible might look something like the example below.
+
+```
+- name: Make sure filesystems which are not needed are forced as off
+  lineinfile: dest="/etc/modprobe.d/harden.conf" line='install squashfs /bin/true' state=present
+```
+
+#### **SELinux**
+
+The old, but sometimes avoided due to complexity, security favourite, SELinux, should be set to “enforcing” mode. Or, at the very least, set to log sensibly using “permissive” mode. Permissive mode will at least fill your auditd logs up with any correct rule matches nicely. In terms of what the Ansible looks like, it's simple and along these lines:
+
+```
+- name: Configure SELinux to be running in permissive mode
+  replace: path="/etc/selinux/config" regexp='SELINUX=disabled' replace='SELINUX=permissive'
+```
+
+#### **Packages**
+
+Needless to say, the compliance hardening playbook is also a good place to upgrade all the packages (with some selective exclusions) on the system. Pay attention to the section relating to reboots and idempotency in a moment, however. With other mechanisms in place, you might not want to update packages here, but instead do so as per the Automation Documents article mentioned in a moment.
+
+### **Idempotency**
+
+Now we’ve run through some of the aspects you would want to look at when hardening on a server, let’s think a little more about how the playbook might be used.
+
+When it comes to cloud platforms, most of my professional work has been on AWS and therefore, more often than not, a fresh AMI is launched and then a playbook is run over the top of it. There's a mountain of detail on one way of doing that in this article (), which you may be pleased to discover accommodates a mechanism to spawn a script or playbook.
+
+It is important to note, when it comes to idempotency, that it may take a little more effort initially to get your head around the logic involved in being able to re-run Ansible repeatedly without disturbing the required status quo of your server estate.
+
+One thing to be absolutely certain of however (barring rare edge cases) is that after you apply your hardening for the very first time, on a new AMI or server build, you will require a reboot. This is an important element due to a number of system facets not being altered correctly without a reboot. These include applying kernel changes so alterations become live, writing auditd rules as immutable config and also starting or stopping services to improve the security posture.
+
+Note though that you’re probably not going to want to execute all plays in a playbook every twenty or thirty minutes, such as updating all packages and stopping and restarting key customer-facing services. As a result you should factor the logic into your Ansible so that some tasks only run once initially and then maybe write a “completed” placeholder file to the filesystem afterwards for referencing. There’s a million different ways of achieving a status checker.
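+
+As one sketch of such a status checker (the marker path and included task file here are hypothetical), the Ansible could look like this:
+
+```
+- name: Check for the run-once marker file (hypothetical path)
+  stat:
+    path: /var/local/.hardening_first_run
+  register: first_run_marker
+
+- name: Run the first-boot-only hardening tasks when no marker exists
+  include_tasks: first_run_hardening.yml
+  when: not first_run_marker.stat.exists
+```
+
+A final task in the included file would then create the marker file, so subsequent runs skip that block.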
+
+The nice thing about Ansible is that the logic for rerunning playbooks is implicit, unlike shell scripts, where coding in the logic for this type of task can be arduous. Sometimes, such as when updating the GRUB bootloader, trying to guess the many permutations of a system change can be painful.
+
+### **Bedtime Reading**
+
+I still think that you can’t beat trial and error when it comes to computing. Experience is valued for good reason.
+
+Be warned that you’ll find contradictory advice sometimes from the vast array of online resources in this area. Advice differs probably because of the different use cases. The only way to harden the varying flavours of OS to my mind is via a bespoke approach. This is thanks to the environments that servers are used within and the requirements of the security framework or standard that an organisation needs to meet.
+
+For OS hardening details you can check with resources such as the NSA ([https://www.nsa.gov][3]), the Cloud Security Alliance (), proprietary training organisations such as GIAC ([https://www.giac.org][4]) who offer resources (), the diverse CIS Benchmarks ([https://www.cisecurity.org][5]) for industry consensus-based benchmarking, the SANS Institute (), NIST’s Computer Security Research ([https://csrc.nist.gov][6]) and of course print media too.
+
+### **Conclusion**
+
+Hopefully, you can see how powerful an idempotent server infrastructure is and are tempted to try it for yourself.
+
+The ever-present threat of APT (Advanced Persistent Threat) attacks on infrastructure, where a successful attacker will sit silently monitoring events and then when it’s opportune infiltrate deeper into an estate, makes this type of configuration highly valuable.
+
+The amount of detail that goes into the tests and configuration changes is key to the value that such an approach will bring to an organisation. Like the tests in a CI/CD pipeline, they're only ever as good as their coverage.
+
+Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][7]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3
+
+作者:[Chris Binnie][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/chrisbinnie
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tech-1495181_1280.jpg?itok=5WcwApNN
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.nsa.gov/
+[4]: https://www.giac.org/
+[5]: https://www.cisecurity.org/
+[6]: https://csrc.nist.gov/
+[7]: https://www.devsecops.cc/
diff --git a/translated/talk/20190208 Which programming languages should you learn.md b/translated/talk/20190208 Which programming languages should you learn.md
new file mode 100644
index 0000000000..8806b8cfc0
--- /dev/null
+++ b/translated/talk/20190208 Which programming languages should you learn.md
@@ -0,0 +1,47 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Which programming languages should you learn?)
+[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+应该学习哪种编程语言?
+======
+学习一门新的编程语言是在你的职业生涯中继续前进的好方法,但是应该学习哪一门呢?
+
+
+如果你想要在编程生涯中起步或继续前进,那么学习一门新语言是一个聪明的主意。但是,大量活跃使用的语言引发了一个问题:哪种编程语言是最好学习的?要回答这个问题,让我们从一个简单的问题开始:你想做什么样的程序?
+
+如果你想在客户端进行网络编程,那么 HTML、CSS 和 JavaScript(以及它那看似无穷无尽的方言)这些特定语言是必须要学习的。
+
+
+如果你想在服务器端进行 Web 编程,那么选项包括常见的通用语言:C++, Golang, Java, C#, Node.js, Perl, Python, Ruby 等等。当然,服务器程序与数据存储(例如关系数据库和其他数据库)打交道,这意味着 SQL 等查询语言可能会发挥作用。
+
+如果你正在为移动设备编写本地应用程序,那么了解目标平台非常重要。对于 Apple 设备,Swift 已经取代 Objective C 成为首选语言。对于 Android 设备,Java(带有专用库和工具集)仍然是主要语言。有一些特殊语言,如与 C# 一起使用的 Xamarin,可以为 Apple、Android 和 Windows 设备生成特定于平台的代码。
+
+那么通用语言呢?通常有各种各样的选择。在*动态*或*脚本*语言(如 Perl、Python 和 Ruby)中,有一些新东西,如 Node.js。Java 和 C# 的相似之处比它们的粉丝愿意承认的还要多,它们仍然是面向各自虚拟机(分别是 JVM 和 CLR)的主要*静态编译*语言。在编译为*原生可执行文件*的语言中,C++ 仍然占有一席之地,还有后来的 Golang 和 Rust 等。通用*函数式*语言比比皆是(如 Clojure、Haskell、Erlang、F#、Lisp 和 Scala),它们通常都有热情投入的社区。值得注意的是,面向对象语言(如 Java 和 C#)已经添加了函数式构造(特别是 lambda),而动态语言从一开始就有函数式构造。
+
+让我以 C 语言结尾,它是一种小巧、优雅、可扩展的语言,不要与 C++ 混淆。现代操作系统主要用 C 语言编写,其余的用汇编语言编写。任何平台上的标准库大多数都是用 C 语言编写的。例如,任何打印 `Hello, world!` 这种问候的程序,都是通过调用名为 **write** 的 C 库函数来实现的。
+
+C 作为一种可移植的汇编语言,公开了其他高级语言有意隐藏的底层系统的详细信息。因此,理解 C 可以更好地掌握程序如何竞争执行所需的共享系统资源(如处理器,内存和 I/O 设备)。C 语言既高级又接近硬件,因此在性能方面无与伦比,当然,汇编语言除外。最后,C 是编程语言中的通用语言,几乎所有通用语言都支持一种或另一种形式的 C 调用。
+
+有关现代 C 语言的介绍,可以参考我的书籍 [C Programming: Introducing Portable Assembler][1]。无论你还学什么,都要学习 C 语言,你从中学到的将远不止是又一门编程语言。
+
+你认为学习哪些编程语言很重要?你是否同意这些建议?在评论告知我们!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
diff --git a/translated/talk/20190327 Why DevOps is the most important tech strategy today.md b/translated/talk/20190327 Why DevOps is the most important tech strategy today.md
new file mode 100644
index 0000000000..fe014a243a
--- /dev/null
+++ b/translated/talk/20190327 Why DevOps is the most important tech strategy today.md
@@ -0,0 +1,129 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zgj1024 "
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+[#]: subject: "Why DevOps is the most important tech strategy today"
+[#]: via: "https://opensource.com/article/19/3/devops-most-important-tech-strategy"
+[#]: author: "Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht"
+
+为何 DevOps 是如今最重要的技术策略
+======
+消除一些关于 DevOps 的疑惑
+![CICD with gears][1]
+
+很多人初学 [DevOps][2] 时,看到它的某个成果就会问:这是怎么做到的?其实,理解 DevOps 的成果是怎样实现的并不重要,重要的是理解为什么要采用 DevOps 策略——这是做一个行业的领导者还是追随者的差别。
+
+你可能听说过一些 DevOps 的惊人成果:例如生产环境非常有弹性,即使“混世猴子”([Chaos Monkey][3])程序在运行、随机切断周围的连接,每天仍可以处理数千个发布。这令人印象深刻,但就其本身而言,这只是一个关于 DevOps 的苍白论据,本质上会被一个[反例][4]困扰:DevOps 环境有弹性,是因为还没有观察到严重的故障……目前还没有。
+
+有很多关于 DevOps 的疑惑,并且许多人还在尝试弄清楚它的意义。下面是来自我 LinkedIn Feed 中的某个人的一个案例:
+
+> 最近我参加了一些 #DevOps 的交流会,那里一些演讲人好像在倡导 #敏捷开发 是 DevOps 的子集。不知为何,我的理解恰恰相反。
+>
+> 能听一下你们的想法吗?你认为敏捷开发和 DevOps 之间是什么关系呢?
+>
+> 1. DevOps 是敏捷开发的子集
+> 2. 敏捷开发 是 DevOps 的子集
+> 3. DevOps 是敏捷开发的扩展,从敏捷开发结束的地方开始
+> 4. DevOps 是敏捷开发的新版本
+>
+
+科技行业的专业人士在那篇 LinkedIn 的帖子上给出了各种各样的答案,你会怎样回复呢?
+
+### DevOps 源于精益和敏捷
+
+如果我们从亨利·福特的战略以及丰田生产系统对福特模式的改进讲起,DevOps 就更有意义了。精益制造就诞生在那段历史中,人们也对精益制造进行了充分的研究。James P. Womack 和 Daniel T. Jones 将精益思维([Lean Thinking][5])提炼为五个原则:
+ 1. 指明客户所需的价值
+ 2. 确定提供该价值的每个产品的价值流,并对当前提供该价值所需的所有浪费步骤提起挑战
+ 3. 使产品通过剩余的增值步骤持续流动
+ 4. 在可以连续流动的所有步骤之间引入拉力
+ 5. 管理要尽善尽美,以便为客户服务所需的步骤数量和时间以及信息量持续下降
+
+
+
+精益致力于持续消除浪费并增加流向客户的价值。通过精益的一个核心原则——单件流,就很容易认识和理解这一点。我们可以做一些游戏来了解为何一次移动单个东西比批量移动要快得多,其中的两个游戏是[硬币游戏][6]和[飞机游戏][7]。在硬币游戏中,如果一批 20 个硬币到顾客手中要用 2 分钟,顾客等 2 分钟后才能拿到整批硬币。如果一次只移动一个硬币,顾客会在 5 秒内得到第一枚硬币,并会持续获得硬币,直到第 20 个硬币在大约 25 秒后到达。(译者注:有相关的视频)
+
+这是巨大的差异,但生活中并不是所有事都像硬币游戏那样简单且可预测,这就是敏捷出现的原因。我们当然能在高效绩敏捷团队身上看到精益原则,但这些团队需要的不仅仅是精益,才能做好他们要做的事。
+
+为了能够处理典型的软件开发任务的不可预见性和变化,敏捷开发的方法论会将重点放在意识、审议、决策和行动上,以便在不断变化的现实中调整。例如,敏捷框架(如 scrum)通过每日站立会议和冲刺评审会议等仪式提高意识。如果 scrum 团队意识到新的事实,框架允许并鼓励他们在必要时及时调整路线。
+
+要使团队做出这些类型的决策,他们需要高度信任的环境中的自我组织能力。以这种方式工作的高效绩敏捷团队在不断调整的同时实现快速的价值流,消除错误方向上的浪费。
+
+### 最佳批量大小
+
+要了解 DevOps 在软件开发中的强大功能,理解批量大小的经济学会有所帮助。请考虑以下来自 Donald Reinertsen 的《[产品开发流程原则][8]》的 U 型曲线优化示例:
+
+![U-curve optimization illustration of optimal batch size][9]
+
+这可以用杂货店购物来类比解释。假设你需要买一些鸡蛋,而你住的地方离商店只有 30 分钟的路程。每次买一个鸡蛋(图中最左边)意味着每次要花 30 分钟的路程,这就是你的_交易成本_。_持有成本_可能是鸡蛋变质和在你的冰箱中持续地占用空间。_总成本_是_交易成本_加上你的_持有成本_。这个 U 型曲线解释了为什么对大部分人来说,一次买一打鸡蛋是他们的_最佳批量大小_。如果你就住在商店的旁边,步行到那里不会花费你任何时间,你可能每次只会买一小盒鸡蛋,以此来节省冰箱的空间并享受新鲜的鸡蛋。
+
+这个 U 型优化曲线可以说明为什么在成功的敏捷转型中生产力会显著提高。考虑敏捷转型对组织决策的影响。在传统的层级组织中,决策权是集中的,这会导致较少的人做更大的决策。敏捷方法论会有效地降低组织决策中的交易成本,方法是将决策分散到拥有最多认知和信息的位置:跨越高度信任、自组织的敏捷团队。
+
+下面的动画演示了降低交易成本后,最佳批量大小是如何向左移动的。组织能够更频繁地做出更快的决策,这其中的价值不容低估。
+
+![U-curve optimization illustration][10]
+
+### DevOps 适合哪些地方
+
+自动化是 DevOps 最知名的事情之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是免费进行测试和部署。这使我们可以利用越来越小的批量工作。较小批量的工作更容易理解、提交、测试、审查和知道何时能完成。这些较小的批量大小也包含较少的差异和风险,使其更易于部署,如果出现问题,可以进行故障排除和恢复。通过自动化与扎实的敏捷实践相结合,我们可以使我们的功能开发非常接近单件流程,从而快速,持续地为客户提供价值。
+
+更传统地说,DevOps 被理解为一种打破开发团队和运营团队之间混乱局面的方法。在这个模型中,开发团队开发新的功能,而运营团队则保持系统的稳定和平稳运行。摩擦的发生是因为开发过程中的新功能将更改引入到系统中,从而增加了停机的风险,运营团队并不认为要对此负责,但无论如何都必须处理这一问题。DevOps 不仅仅尝试让人们一起工作,更重要的是尝试在复杂的环境中安全地进行更频繁的更改。
+
+我们可以参考 [Ron Westrum][11] 有关在复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化:病态的、官僚式的和生产式的。他发现病态文化预示着较低的安全性,而生产式文化则预示着更高的安全性(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
+
+![Three types of culture identified by Ron Westrum][12]
+
+高效的 DevOps 团队通过精益和敏捷的实践实现了一种生产式文化,这表明速度和安全性是互补的,或者说是同一个问题的两个方面。通过将决策和功能的最佳批量大小减少到非常小,DevOps 实现了更快的信息流和价值流,同时消除了浪费并降低了风险。
+
+与 Westrum 的研究一致,变化可以在提高安全性和可靠性的同时轻松发生。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们将获得 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都进一步降低,并且实现了近乎单件流的精益流程,正如我们在高效绩的 DevOps 组织中看到的那样,具备了每天产生数千个决策和发布的潜力。
+
+### 流动、反馈、学习
+
+DevOps 并不止于此。我们主要讨论了 DevOps 实现的革命性流程,但通过类似的努力可以进一步放大精益和敏捷实践,从而实现更快的反馈循环和更快的学习。在《[DevOps 手册][13]》中,作者除了详细解释快速流程外,还解释了 DevOps 如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用精益的[改善][14]突破和 scrum 的[回顾][15],高效的 DevOps 团队将不断推动学习和持续改进深入到他们组织的基础,实现软件产品开发行业的精益制造革命。
+
+
+### 从 DevOps 评估开始
+
+利用 DevOps 的第一步是,经过大量研究或在 DevOps 顾问和教练的帮助下,对高效绩 DevOps 团队中始终存在的一系列维度进行评估。评估应确定需要改进的、薄弱或缺失的团队规范。对评估结果进行分析,以找到成功机会大、能快速见效的重点领域,从而产生高影响力的改进。快速见效非常重要,能让团队获得解决更具挑战性领域所需的动力。团队应该产生可以快速尝试的想法,并开始关注 DevOps 转型。
+
+一段时间后,团队应就相同的维度重新评估,以衡量改进情况并确立新的高影响力重点领域,并再次采纳团队的新想法。一位好的教练会根据需要进行咨询、培训、指导和支持,直到团队拥有自己的持续改进机制,并通过不断地重新评估、试验和学习,在所有维度上接近理想状态。
+
+在本文的[第二部分][16]中,我们将查看 Drupal 社区中 DevOps 调查的结果,并了解最有可能快速获胜的领域在哪里。
+
+* * *
+
+_Rob Bayliss 和 Kelly Albrecht 将在 4 月 8 日至 12 日于西雅图举行的 [DrupalCon 2019][19] 上演讲《[DevOps: Why, How, and What][17]》,并主持一场后续的[同好会讨论][18]。_
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
+
+作者:[Kelly Albrecht][a]
+选题:[lujun9972][b]
+译者:[zgj1024](https://github.com/zgj1024)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc "CICD with gears"
+[2]: https://opensource.com/resources/devops
+[3]: https://github.com/Netflix/chaosmonkey
+[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
+[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
+[6]: https://youtu.be/5t6GhcvKB8o?t=54
+[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
+[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
+[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif "U-curve optimization illustration of optimal batch size"
+[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif "U-curve optimization illustration"
+[11]: https://en.wikipedia.org/wiki/Ron_Westrum
+[12]: https://opensource.com/sites/default/files/uploads/information_flow.png "Three types of culture identified by Ron Westrum"
+[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
+[14]: https://en.wikipedia.org/wiki/Kaizen
+[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
+[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
+[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
+[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
+[19]: https://events.drupal.org/seattle2019
diff --git a/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
deleted file mode 100644
index 6b5db8b104..0000000000
--- a/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
+++ /dev/null
@@ -1,911 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (guevaraya )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 11 Input02)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-计算机实验室 – 树莓派开发: 课程11 输入02
-======
-
-课程输入02是以课程输入01基础讲解的,通过一个简单的命令行实现用户的命令输入和计算机的处理和显示。本文假设你已经具备 [课程11:输入01][1] 的操作系统代码基础。
-
-### 1 终端
-
-```
-早期的计算一般是在一栋楼里的一个巨型计算机系统,他有很多可以输命令的'终端'。计算机依次执行不同来源的命令。
-```
-
-几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易,健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
-
-让我们分析下真正想要哪些信息:
-
-1. 计算机打开后,显示欢迎信息
-2. 计算机启动后可以接受输入标志
-3. 用户从键盘输入带参数的命令
-4. 用户输入回车键或提交按钮
-5. 计算机解析命令后执行可用的命令
-6. 计算机显示命令的执行结果,过程信息
-7. 循环跳转到步骤2
-
-
-这样的终端被定义为标准的输入输出设备。用于输入的屏幕和输出打印的屏幕是同一个。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用方法 DrawCharacter 把其推送到屏幕上。一旦我们需要字符显示,就只需要在屏幕上画出一行字符串。
-
-新建文件名为 terminal.s 如下:
-```
-.section .data
-.align 4
-terminalStart:
-.int terminalBuffer
-terminalStop:
-.int terminalBuffer
-terminalView:
-.int terminalBuffer
-terminalColour:
-.byte 0xf
-.align 8
-terminalBuffer:
-.rept 128*128
-.byte 0x7f
-.byte 0x0
-.endr
-terminalScreen:
-.rept 1024/8 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 768/16
-.byte 0x7f
-.byte 0x0
-.endr
-```
-这是文件终端的配置数据文件。我们有两个主要的存储变量:terminalBuffer 和 terminalScreen。terminalBuffer保存所有显示过的字符。它保存128行字符文本(1行包含128个字符)。每个字符有一个 ASCII 字符和颜色单元组成,初始值为0x7f(ASCII的删除键)和 0(前景色和背景色为黑)。terminalScreen 保存当前屏幕显示的字符。它保存128x48的字符,与 terminalBuffer 初始化值一样。你可能会想我仅需要terminalScreen就够了,为什么还要terminalBuffer,其实有两个好处:
-
- 1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
- 2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
-
-
-你总是需要尝试去设计一个高效的系统,如果很少变化的条件这个系统会运行的更快。
-
-独特的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行这个操作。在这个系统里,我们可以任意改变terminalBuffer,然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符,这样可以节省一大段跨行文本的操作时间。
-
-其他在 .data 段的值得含义如下:
-
- * terminalStart
- 写入到 terminalBuffer 的第一个字符
- * terminalStop
- 写入到 terminalBuffer 的最后一个字符
- * terminalView
- 表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
- * temrinalColour
- 即将被描画的字符颜色
-
-
-```
-循环缓冲区是**数据结构**一个例子。这是一个组织数据的思路,有时我们通过软件实现这种思路。
-```
-
-![显示 Hellow world 插入到大小为5的循环缓冲区的示意图。][2]
-terminalStart 需要保存起来的原因是 termainlBuffer 是一个循环缓冲区。意思是当缓冲区变满时,末尾地方会回滚覆盖开始位置,这样最后一个字符变成了第一个字符。因此我们需要将 terminalStart 往前推进,这样我们知道我们已经占满它了。如何实现缓冲区检测:如果索引越界到缓冲区的末尾,就将索引指向缓冲区的开始位置。循环缓冲区是一个比较常见的高明的存储大量数据的方法,往往这些数据的最近部分比较重要。它允许无限制的写入,只保证最近一些特定数据有效。这个常常用于信号处理和数据压缩算法。这样的情况,可以允许我们存储128行终端记录,超过128行也不会有问题。如果不是这样,当超过第128行时,我们需要把127行分别向前拷贝一次,这样很浪费时间。
-
-之前已经提到过 terminalColour 几次了。你可以根据你的想法实现终端颜色,但这个文本终端有16个前景色和16个背景色(这里相当于有16²=256种组合)。[CGA][3]终端的颜色定义如下:
-
-表格 1.1 - CGA 颜色编码
-
-| 序号 | 颜色 (R, G, B) |
-| ------ | ------------------------|
-| 0 | 黑 (0, 0, 0) |
-| 1 | 蓝 (0, 0, ⅔) |
-| 2 | 绿 (0, ⅔, 0) |
-| 3 | 青色 (0, ⅔, ⅔) |
-| 4 | 红色 (⅔, 0, 0) |
-| 5 | 品红 (⅔, 0, ⅔) |
-| 6 | 棕色 (⅔, ⅓, 0) |
-| 7 | 浅灰色 (⅔, ⅔, ⅔) |
-| 8 | 灰色 (⅓, ⅓, ⅓) |
-| 9 | 淡蓝色 (⅓, ⅓, 1) |
-| 10 | 淡绿色 (⅓, 1, ⅓) |
-| 11 | 淡青色 (⅓, 1, 1) |
-| 12 | 淡红色 (1, ⅓, ⅓) |
-| 13 | 浅品红 (1, ⅓, 1) |
-| 14 | 黄色 (1, 1, ⅓) |
-| 15 | 白色 (1, 1, 1) |
-
-```
-棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
-```
-我们将前景色保存到颜色的低字节,背景色保存到颜色高字节。除过棕色,其他这些颜色遵循一种模式如二进制的高位比特代表增加 ⅓ 到每个组件,其他比特代表增加⅔到各自组件。这样很容易进行RGB颜色转换。
-
-我们需要一个方法从TerminalColour读取颜色编码的四个比特,然后用16比特等效参数调用 SetForeColour。尝试实现你自己实现。如果你感觉麻烦或者还没有完成屏幕系列课程,我们的实现如下:
-
-```
-.section .text
-TerminalColour:
-teq r0,#6
-ldreq r0,=0x02B5
-beq SetForeColour
-
-tst r0,#0b1000
-ldrne r1,=0x52AA
-moveq r1,#0
-tst r0,#0b0100
-addne r1,#0x15
-tst r0,#0b0010
-addne r1,#0x540
-tst r0,#0b0001
-addne r1,#0xA800
-mov r0,r1
-b SetForeColour
-```
-### 2 文本显示
-
-我们的终端第一个真正需要的方法是 TerminalDisplay,它用来把当前的数据从 terminalBuffe r拷贝到 terminalScreen 和实际的屏幕。如上所述,这个方法必须是最小开销的操作,因为我们需要频繁调用它。它主要比较 terminalBuffer 与 terminalDisplay的文本,然后只拷贝有差异的字节。请记住 terminalBuffer 是循环缓冲区运行的,这种情况,从 terminalView 到 terminalStop 或者 128*48 字符集,哪个来的最快。如果我们遇到 terminalStop,我们将会假定在这之后的所有字符是7f16 (ASCII delete),背景色为0(黑色前景色和背景色)。
-
-让我们看看必须要做的事情:
-
- 1. 加载 terminalView ,terminalStop 和 terminalDisplay 的地址。
- 2. 执行每一行:
- 1. 执行每一列:
- 1. 如果 terminalView 不等于 terminalStop,根据 terminalView 加载当前字符和颜色
- 2. 否则加载 0x7f 和颜色 0
- 3. 从 terminalDisplay 加载当前的字符
- 4. 如果字符和颜色相同,直接跳转到10
- 5. 存储字符和颜色到 terminalDisplay
- 6. 用 r0 作为背景色参数调用 TerminalColour
- 7. 用 r0 = 0x7f (ASCII 删除键, 一大块), r1 = x, r2 = y 调用 DrawCharacter
- 8. 用 r0 作为前景色参数调用 TerminalColour
- 9. 用 r0 = 字符, r1 = x, r2 = y 调用 DrawCharacter
- 10. 对位置参数 terminalDisplay 累加2
- 11. 如果 terminalView 不等于 terminalStop不能相等 terminalView 位置参数累加2
- 12. 如果 terminalView 位置已经是文件缓冲器的末尾,将他设置为缓冲区的开始位置
- 13. x 坐标增加8
- 2. y 坐标增加16
-
-
-Try to implement this yourself. If you get stuck, my solution is given below:
-尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
-
-1.
-```
-.globl TerminalDisplay
-TerminalDisplay:
-push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
-x .req r4
-y .req r5
-char .req r6
-col .req r7
-screen .req r8
-taddr .req r9
-view .req r10
-stop .req r11
-
-ldr taddr,=terminalStart
-ldr view,[taddr,#terminalView - terminalStart]
-ldr stop,[taddr,#terminalStop - terminalStart]
-add taddr,#terminalBuffer - terminalStart
-add taddr,#128*128*2
-mov screen,taddr
-```
-
-我这里的变量有点乱。为了方便起见,我用 taddr 存储 textBuffer 的末尾位置。
-
-2.
-```
-mov y,#0
-yLoop$:
-```
-从yLoop开始运行。
-
- 1.
- ```
- mov x,#0
- xLoop$:
- ```
- 从yLoop开始运行。
-
- 1.
- ```
- teq view,stop
- ldrneh char,[view]
- ```
- 为了方便起见,我把字符和颜色同时加载到 char 变量了
-
- 2.
- ```
- moveq char,#0x7f
- ```
- 这行是对上面一行的补充说明:读取黑色的Delete键
-
- 3.
- ```
- ldrh col,[screen]
- ```
- 为了简便我把字符和颜色同时加载到 col 里。
-
- 4.
- ```
- teq col,char
- beq xLoopContinue$
- ```
- 现在我用teq指令检查是否有数据变化
-
- 5.
- ```
- strh char,[screen]
- ```
- 我可以容易的保存当前值
-
- 6.
- ```
- lsr col,char,#8
- and char,#0x7f
- lsr r0,col,#4
- bl TerminalColour
- ```
- 我用 bitshift(比特偏移) 指令和 and 指令从 char 变量中分离出颜色到 col 变量和字符到 char变量,然后再用bitshift(比特偏移)指令后调用TerminalColour 获取背景色。
-
- 7.
- ```
- mov r0,#0x7f
- mov r1,x
- mov r2,y
- bl DrawCharacter
- ```
- 写入一个彩色的删除字符块
-
- 8.
- ```
- and r0,col,#0xf
- bl TerminalColour
- ```
- 用 and 指令获取 col 变量的最低字节,然后调用TerminalColour
-
- 9.
- ```
- mov r0,char
- mov r1,x
- mov r2,y
- bl DrawCharacter
- ```
- 写入我们需要的字符
-
- 10.
- ```
- xLoopContinue$:
- add screen,#2
- ```
- 自增屏幕指针
-
- 11.
- ```
- teq view,stop
- addne view,#2
- ```
- 如果可能自增view指针
-
- 12.
- ```
- teq view,taddr
- subeq view,#128*128*2
- ```
- 很容易检测 view指针是否越界到缓冲区的末尾,因为缓冲区的地址保存在 taddr 变量里
-
- 13.
- ```
- add x,#8
- teq x,#1024
- bne xLoop$
- ```
- 如果还有字符需要显示,我们就需要自增 x 变量然后循环到 xLoop 执行
-
- 2.
- ```
- add y,#16
- teq y,#768
- bne yLoop$
- ```
- 如果还有更多的字符显示我们就需要自增 y 变量,然后循环到 yLoop 执行
-
-```
-pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
-.unreq x
-.unreq y
-.unreq char
-.unreq col
-.unreq screen
-.unreq taddr
-.unreq view
-.unreq stop
-```
-不要忘记最后清除变量
-
-
-### 3 行打印
-
-现在我有了自己 TerminalDisplay方法,它可以自动显示 terminalBuffer 到 terminalScreen,因此理论上我们可以画出文本。但是实际上我们没有任何基于字符显示的实例。 首先快速容易上手的方法便是 TerminalClear, 它可以彻底清除终端。这个方法没有循环很容易实现。可以尝试分析下面的方法应该不难:
-
-```
-.globl TerminalClear
-TerminalClear:
-ldr r0,=terminalStart
-add r1,r0,#terminalBuffer-terminalStart
-str r1,[r0]
-str r1,[r0,#terminalStop-terminalStart]
-str r1,[r0,#terminalView-terminalStart]
-mov pc,lr
-```
-
-现在我们需要构造一个字符显示的基础方法:打印函数。它将保存在 r0 的字符串和 保存在 r1 字符串长度简易的写到屏幕上。有一些特定字符需要特别的注意,这些特定的操作是确保 terminalView 是最新的。我们来分析一下需要做啥:
-
- 1. 检查字符串的长度是否为0,如果是就直接返回
- 2. 加载 terminalStop 和 terminalView
- 3. 计算出 terminalStop 的 x 坐标
- 4. 对每一个字符的操作:
- 1. 检查字符是否为新起一行
- 2. 如果是的话,自增 bufferStop 到行末,同时写入黑色删除键
- 3. 否则拷贝当前 terminalColour 的字符
- 4. 加成是在行末
- 5. 如果是,检查从 terminalView 到 terminalStop 之间的字符数是否大于一屏
- 6. 如果是,terminalView 自增一行
- 7. 检查 terminalView 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
- 8. 检查 terminalStop 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
- 9. 检查 terminalStop 是否等于 terminalStart, 如果是的话 terminalStart 自增一行。
- 10. 检查 terminalStart 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
- 5. 存取 terminalStop 和 terminalView
-
-
-试一下自己去实现。我们的方案提供如下:
-
-1.
-```
-.globl Print
-Print:
-teq r1,#0
-moveq pc,lr
-```
-这个是打印函数开始快速检查字符串为0的代码
-
-2.
-```
-push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
-bufferStart .req r4
-taddr .req r5
-x .req r6
-string .req r7
-length .req r8
-char .req r9
-bufferStop .req r10
-view .req r11
-
-mov string,r0
-mov length,r1
-
-ldr taddr,=terminalStart
-ldr bufferStop,[taddr,#terminalStop-terminalStart]
-ldr view,[taddr,#terminalView-terminalStart]
-ldr bufferStart,[taddr]
-add taddr,#terminalBuffer-terminalStart
-add taddr,#128*128*2
-```
-
-这里我做了很多配置。 bufferStart 代表 terminalStart, bufferStop代表terminalStop, view 代表 terminalView,taddr 代表 terminalBuffer 的末尾地址。
-
-3.
-```
-and x,bufferStop,#0xfe
-lsr x,#1
-```
-和通常一样,巧妙的对齐技巧让许多事情更容易。由于需要对齐 terminalBuffer,每个字符的 x 坐标需要8位要除以2。
-
- 4.
- 1.
- ```
- charLoop$:
- ldrb char,[string]
- and char,#0x7f
- teq char,#'\n'
- bne charNormal$
- ```
- 我们需要检查新行
-
- 2.
- ```
- mov r0,#0x7f
- clearLine$:
- strh r0,[bufferStop]
- add bufferStop,#2
- add x,#1
- teq x,#128 blt clearLine$
-
- b charLoopContinue$
- ```
- 循环执行值到行末写入 0x7f;黑色删除键
-
- 3.
- ```
- charNormal$:
- strb char,[bufferStop]
- ldr r0,=terminalColour
- ldrb r0,[r0]
- strb r0,[bufferStop,#1]
- add bufferStop,#2
- add x,#1
- ```
- 存储字符串的当前字符和 terminalBuffer 末尾的 terminalColour然后将它和 x 变量自增
-
- 4.
- ```
- charLoopContinue$:
- cmp x,#128
- blt noScroll$
- ```
- 检查 x 是否为行末;128
-
- 5.
- ```
- mov x,#0
- subs r0,bufferStop,view
- addlt r0,#128*128*2
- cmp r0,#128*(768/16)*2
- ```
- 这是 x 为 0 然后检查我们是否已经显示超过1屏。请记住,我们是用的循环缓冲区,因此如果 bufferStop 和 view 之前差是负值,我们实际使用是环绕缓冲区。
-
- 6.
- ```
- addge view,#128*2
- ```
- 增加一行字节到 view 的地址
-
- 7.
- ```
- teq view,taddr
- subeq view,taddr,#128*128*2
- ```
- 如果 view 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
-
- 8.
- ```
- noScroll$:
- teq bufferStop,taddr
- subeq bufferStop,taddr,#128*128*2
- ```
- 如果 stop 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
-
- 9.
- ```
- teq bufferStop,bufferStart
- addeq bufferStart,#128*2
- ```
- 检查 bufferStop 是否等于 bufferStart。 如果等于增加一行到 bufferStart。
-
- 10.
- ```
- teq bufferStart,taddr
- subeq bufferStart,taddr,#128*128*2
- ```
- 如果 start 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
-
-```
-subs length,#1
-add string,#1
-bgt charLoop$
-```
-循环执行知道字符串结束
-
-5.
-```
-charLoopBreak$:
-sub taddr,#128*128*2
-sub taddr,#terminalBuffer-terminalStart
-str bufferStop,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-str bufferStart,[taddr]
-
-pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
-.unreq bufferStart
-.unreq taddr
-.unreq x
-.unreq string
-.unreq length
-.unreq char
-.unreq bufferStop
-.unreq view
-```
-保存变量然后返回
-
-
-这个方法允许我们打印任意字符到屏幕。然而我们用了颜色变量,但实际上没有设置它。一般终端用特性的组合字符去行修改颜色。如ASCII转移(1b16)后面跟着一个0-f的16进制的书,就可以设置前景色为 CGA颜色。如果你自己想尝试实现;在下载页面有一个我的详细的例子。
-
-
-### 4 标志输入
-
-```
-按照惯例,许多编程语言中,任意程序可以访问 stdin 和 stdin,他们可以连接到终端的输入和输出流。在图形程序其实也可以进行同样操作,但实际几乎不用。
-```
-
-现在我们有一个可以打印和显示文本的输出终端。这仅仅是说了一半,我们需要输入。我们想实现一个方法:Readline,可以保存文件的一行文本,文本位置有 r0 给出,最大的长度由 r1 给出,返回 r0 里面的字符串长度。棘手的是用户输出字符的时候要回显功能,同时想要退格键的删除功能和命令回车执行功能。他们还想需要一个闪烁的下划线代表计算机需要输入。这些完全合理的要求让构造这个方法更具有挑战性。有一个方法完成这些需求就是存储用户输入的文本和文件大小到内存的某个地方。然后当调用 ReadLine 的时候,移动 terminalStop 的地址到它开始的地方然后调用 Print。也就是说我们只需要确保在内存维护一个字符串,然后构造一个我们自己的打印函数。
-
-让我们看看 ReadLine做了哪些事情:
-
- 1. 如果字符串可保存的最大长度为0,直接返回
- 2. 检索 terminalStop 和 terminalStop 的当前值
- 3. 如果字符串的最大长度大约缓冲区的一半,就设置大小为缓冲区的一半
- 4. 从最大长度里面减去1来确保输入的闪烁字符或结束符
- 5. 向字符串写入一个下划线
- 6. 写入一个 terminalView 和 terminalStop 的地址到内存
- 7. 调用 Print 大约当前字符串
- 8. 调用 TerminalDisplay
- 9. 调用 KeyboardUpdate
- 10. 调用 KeyboardGetChar
- 11. 如果为一个新行直接跳转到16
- 12. 如果是一个退格键,将字符串长度减一(如果其大约0)
- 13. 如果是一个普通字符,将他写入字符串(字符串大小确保小于最大值)
- 14. 如果字符串是以下划线结束,写入一个空格,否则写入下划线
- 15. 跳转到6
- 16. 字符串的末尾写入一个新行
- 17. 调用 Print 和 TerminalDisplay
- 18. 用结束符替换新行
- 19. 返回字符串的长度
-
-
-
-为了方便读者理解,然后然后自己去实现,我们的实现提供如下:
-
-1.
-```
-.globl ReadLine
-ReadLine:
-teq r1,#0
-moveq r0,#0
-moveq pc,lr
-```
-快速处理长度为0的情况
-
-2.
-```
-string .req r4
-maxLength .req r5
-input .req r6
-taddr .req r7
-length .req r8
-view .req r9
-
-push {r4,r5,r6,r7,r8,r9,lr}
-
-mov string,r0
-mov maxLength,r1
-ldr taddr,=terminalStart
-ldr input,[taddr,#terminalStop-terminalStart]
-ldr view,[taddr,#terminalView-terminalStart]
-mov length,#0
-```
-考虑到常见的场景,我们初期做了很多初始化动作。input 代表 terminalStop 的值,view 代表 terminalView。Length 默认为 0.
-
-3.
-```
-cmp maxLength,#128*64
-movhi maxLength,#128*64
-```
-我们必须检查异常大的读操作,我们不能处理超过 terminalBuffer 大小的输入(理论上可行,但是terminalStart 移动越过存储的terminalStop,会有很多问题)。
-
-4.
-```
-sub maxLength,#1
-```
-由于用户需要一个闪烁的光标,我们需要一个备用字符在理想状况在这个字符串后面放一个结束符。
-
-5.
-```
-mov r0,#'_'
-strb r0,[string,length]
-```
-写入一个下划线让用户知道我们可以输入了。
-
-6.
-```
-readLoop$:
-str input,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-```
-保存 terminalStop 和 terminalView。这个对重置一个终端很重要,它会修改这些变量。严格讲也可以修改 terminalStart,但是不可逆。
-
-7.
-```
-mov r0,string
-mov r1,length
-add r1,#1
-bl Print
-```
-写入当前的输入。由于下划线因此字符串长度加1
-8.
-```
-bl TerminalDisplay
-```
-拷贝下一个文本到屏幕
-
-9.
-```
-bl KeyboardUpdate
-```
-获取最近一次键盘输入
-
-10.
-```
-bl KeyboardGetChar
-```
-检索键盘输入键值
-
-11.
-```
-teq r0,#'\n'
-beq readLoopBreak$
-teq r0,#0
-beq cursor$
-teq r0,#'\b'
-bne standard$
-```
-
-如果我们有一个回车键,循环中断。如果有结束符和一个退格键也会同样跳出选好。
-
-12.
-```
-delete$:
-cmp length,#0
-subgt length,#1
-b cursor$
-```
-从 length 里面删除一个字符
-
-13.
-```
-standard$:
-cmp length,maxLength
-bge cursor$
-strb r0,[string,length]
-add length,#1
-```
-写回一个普通字符
-
-14.
-```
-cursor$:
-ldrb r0,[string,length]
-teq r0,#'_'
-moveq r0,#' '
-movne r0,#'_'
-strb r0,[string,length]
-```
-加载最近的一个字符,如果不是下换线则修改为下换线,如果是空格则修改为下划线
-
-15.
-```
-b readLoop$
-readLoopBreak$:
-```
-循环执行值到用户输入按下
-
-16.
-```
-mov r0,#'\n'
-strb r0,[string,length]
-```
-在字符串的结尾处存入一新行
-
-17.
-```
-str input,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-mov r0,string
-mov r1,length
-add r1,#1
-bl Print
-bl TerminalDisplay
-```
-重置 terminalView 和 terminalStop 然后调用 Print 和 TerminalDisplay 输入回显
-
-
-18.
-```
-mov r0,#0
-strb r0,[string,length]
-```
-写入一个结束符
-
-19.
-```
-mov r0,length
-pop {r4,r5,r6,r7,r8,r9,pc}
-.unreq string
-.unreq maxLength
-.unreq input
-.unreq taddr
-.unreq length
-.unreq view
-```
-返回长度
-
-
-
-
-### 5 终端: 机器进化
-
-现在我们理论用终端和用户可以交互了。最显而易见的事情就是拿去测试了!在 'main.s' 里UsbInitialise后面的删除代码如下
-
-```
-reset$:
- mov sp,#0x8000
- bl TerminalClear
-
- ldr r0,=welcome
- mov r1,#welcomeEnd-welcome
- bl Print
-
-loop$:
- ldr r0,=prompt
- mov r1,#promptEnd-prompt
- bl Print
-
- ldr r0,=command
- mov r1,#commandEnd-command
- bl ReadLine
-
- teq r0,#0
- beq loopContinue$
-
- mov r4,r0
-
- ldr r5,=command
- ldr r6,=commandTable
-
- ldr r7,[r6,#0]
- ldr r9,[r6,#4]
- commandLoop$:
- ldr r8,[r6,#8]
- sub r1,r8,r7
-
- cmp r1,r4
- bgt commandLoopContinue$
-
- mov r0,#0
- commandName$:
- ldrb r2,[r5,r0]
- ldrb r3,[r7,r0]
- teq r2,r3
- bne commandLoopContinue$
- add r0,#1
- teq r0,r1
- bne commandName$
-
- ldrb r2,[r5,r0]
- teq r2,#0
- teqne r2,#' '
- bne commandLoopContinue$
-
- mov r0,r5
- mov r1,r4
- mov lr,pc
- mov pc,r9
- b loopContinue$
-
- commandLoopContinue$:
- add r6,#8
- mov r7,r8
- ldr r9,[r6,#4]
- teq r9,#0
- bne commandLoop$
-
- ldr r0,=commandUnknown
- mov r1,#commandUnknownEnd-commandUnknown
- ldr r2,=formatBuffer
- ldr r3,=command
- bl FormatString
-
- mov r1,r0
- ldr r0,=formatBuffer
- bl Print
-
-loopContinue$:
- bl TerminalDisplay
- b loop$
-
-echo:
- cmp r1,#5
- movle pc,lr
-
- add r0,#5
- sub r1,#5
- b Print
-
-ok:
- teq r1,#5
- beq okOn$
- teq r1,#6
- beq okOff$
- mov pc,lr
-
- okOn$:
- ldrb r2,[r0,#3]
- teq r2,#'o'
- ldreqb r2,[r0,#4]
- teqeq r2,#'n'
- movne pc,lr
- mov r1,#0
- b okAct$
-
- okOff$:
- ldrb r2,[r0,#3]
- teq r2,#'o'
- ldreqb r2,[r0,#4]
- teqeq r2,#'f'
- ldreqb r2,[r0,#5]
- teqeq r2,#'f'
- movne pc,lr
- mov r1,#1
-
- okAct$:
-
- mov r0,#16
- b SetGpio
-
-.section .data
-.align 2
-welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
-welcomeEnd:
-.align 2
-prompt: .ascii "\n> "
-promptEnd:
-.align 2
-command:
- .rept 128
- .byte 0
- .endr
-commandEnd:
-.byte 0
-.align 2
-commandUnknown: .ascii "Command `%s' was not recognised.\n"
-commandUnknownEnd:
-.align 2
-formatBuffer:
- .rept 256
- .byte 0
- .endr
-formatEnd:
-
-.align 2
-commandStringEcho: .ascii "echo"
-commandStringReset: .ascii "reset"
-commandStringOk: .ascii "ok"
-commandStringCls: .ascii "cls"
-commandStringEnd:
-
-.align 2
-commandTable:
-.int commandStringEcho, echo
-.int commandStringReset, reset$
-.int commandStringOk, ok
-.int commandStringCls, TerminalClear
-.int commandStringEnd, 0
-```
-这块代码集成了一个简易的命令行操作系统。支持命令:echo,reset,ok 和 cls。echo 拷贝任意文本到终端,reset命令会在系统出现问题的是复位操作系统,ok 有两个功能:设置 OK 灯亮灭,最后 cls 调用 TerminalClear 清空终端。
-
-试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。
-
-如果运行正常,祝贺你完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至awc32@cam.ac.uk。
-
-你已经在建立了一个简易的终端操作系统。我们的代码在 commandTable 构造了一个可用的命令表格。每个表格的入口是一个整型数字,用来表示字符串的地址,和一个整型数字表格代码的执行入口。 最后一个入口是 为 0 的commandStringEnd。尝试实现你自己的命令,可以参照已有的函数,建立一个新的。函数的参数 r0 是用户输入的命令地址,r1是其长度。你可以用这个传递你输入值到你的命令。也许你有一个计算器程序,或许是一个绘图程序或国际象棋。不管你的什么电子,让它跑起来!
-
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/guevaraya)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
-[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter
diff --git a/translated/tech/20161106 Myths about -dev-urandom.md b/translated/tech/20161106 Myths about -dev-urandom.md
new file mode 100644
index 0000000000..118c6426f2
--- /dev/null
+++ b/translated/tech/20161106 Myths about -dev-urandom.md
@@ -0,0 +1,296 @@
+关于 /dev/urandom 的流言终结
+======
+
+有很多关于 `/dev/urandom` 和 `/dev/random` 的流言在坊间不断流传。然而流言终究是流言。
+
+> 本篇文章里针对的都是近来的 Linux 操作系统,其它类 Unix 操作系统不在讨论范围内。
+
+**`/dev/urandom` 不安全。加密用途必须使用 `/dev/random`**
+
+事实:`/dev/urandom` 才是类 Unix 操作系统下推荐的加密种子。
+
+**`/dev/urandom` 是伪随机数生成器pseudo random number generator(PRNG),而 `/dev/random` 是“真”随机数生成器。**
+
+事实:它们两者本质上用的是同一种 CSPRNG(一种密码学伪随机数生成器)。它们之间细微的差别和“真”不“真”随机完全无关。
+
+**`/dev/random` 在任何情况下都是密码学应用更好的选择。即便 `/dev/urandom` 也同样安全,我们还是不应该用它。**
+
+事实:`/dev/random` 有个很恶心人的问题:它是阻塞的。(LCTT 译注:意味着请求都得逐个执行,等待前一个请求完成)
+
+**但阻塞不是好事吗!`/dev/random` 只会给出电脑收集的信息熵足以支持的随机量。`/dev/urandom` 在用完了所有熵的情况下还会不断吐不安全的随机数给你。**
+
+事实:这是误解。就算我们不去考虑应用层面后续对随机种子的用法,“用完信息熵池”这个概念本身就不存在。仅仅 256 位的熵就足以生成计算上安全的随机数很长、很长的一段时间了。
+
+问题的关键还在后头:`/dev/random` 怎么知道系统有*多少*可用的信息熵?接着看!
+
+**但密码学家老是讨论重新选种子(re-seeding)。这难道不和上一条冲突吗?**
+
+事实:你说的也没错!某种程度上吧。确实,随机数生成器一直在使用系统信息熵的状态重新选种。但这么做(一部分)是因为别的原因。
+
+这样说吧,我没有说引入新的信息熵是坏的。更多的熵肯定更好。我只是说在熵池低的时候阻塞是没必要的。
+
+**好,就算你说的都对,但是 `/dev/(u)random` 的 man 页面和你说的也不一样啊!到底有没有专家同意你说的这堆啊?**
+
+事实:其实 man 页面和我说的不冲突。它看似好像在说 `/dev/urandom` 对密码学用途来说不安全,但如果你真的理解这堆密码学术语你就知道它说的并不是这个意思。
+
+man 页面确实说在一些情况下推荐使用 `/dev/random` (我觉得也没问题,但绝对不是说必要的),但它也推荐在大多数“一般”的密码学应用下使用 `/dev/urandom` 。
+
+虽然诉诸权威一般来说不是好事,但在密码学这么严肃的事情上,和专家统一意见是很有必要的。
+
+所以说呢,还确实有一些*专家*的意见和我是一致的:`/dev/urandom` 就应该是类 UNIX 操作系统下密码学应用的首选。显然,是他们的观点说服了我,而不是反过来。
+
+------
+
+难以相信吗?觉得我肯定错了?读下去看我能不能说服你。
+
+我尝试不讲太高深的东西,但是有两点内容必须先提一下才能让我们接着论证观点。
+
+首当其冲的,*什么是随机性*,或者更准确地:我们在探讨什么样的随机性?
+
+另外一点很重要的是,我*没有尝试以说教的态度*对你们写这段话。我写这篇文章是为了日后可以在讨论起来的时候指给别人看,因为它比 140 字长(LCTT 译注:推特长度)。这样我就不用一遍遍重复我的观点了。能把论点磨炼成一篇文章,本身就很有助于将来的讨论。
+
+并且我非常乐意听到不一样的观点。但我只是认为单单地说 `/dev/urandom` 坏是不够的。你得能指出到底有什么问题,并且剖析它们。
+
+### 你是在说我笨?!
+
+绝对没有!
+
+事实上我自己也相信了“`/dev/urandom` 是不安全的”好些年。这几乎不是我们的错,因为那么多德高望重的人在 Usenet、论坛、推特上跟我们重复这个观点,甚至*连 man 手册*都似是而非地这么说。我们当年又怎么会去质疑“信息熵太低了”这种看上去就很让人信服的观点呢?
+
+整个流言之所以如此广为流传不是因为人们太蠢,而是因为但凡有点关于信息熵和密码学概念的人都会觉得这个说法很有道理。直觉似乎都在告诉我们这流言讲的很有道理。很不幸直觉在密码学里通常不管用,这次也一样。
+
+### 真随机
+
+什么叫一个随机变量是“真随机的”?
+
+我不想搞的太复杂以至于变成哲学范畴的东西。这种讨论很容易走偏因为随机模型大家见仁见智,讨论很快变得毫无意义。
+
+在我看来“真随机”的“试金石”是量子效应。一个光子穿过或不穿过一个半透镜。或者观察一个放射性粒子衰变。这类东西是现实世界最接近真随机的东西。当然,有些人也不相信这类过程是真随机的,或者这个世界根本不存在任何随机性。这个就百家争鸣了,我也不好多说什么了。
+
+密码学家一般都会通过不去讨论什么是“真随机”来避免这种哲学争论。他们更关心的是不可预测性unpredictability。只要没有*任何*方法能猜出下一个随机数就可以了。所以当你以密码学应用为前提讨论一个随机数好不好的时候,在我看来这才是最重要的。
+
+无论如何,我不怎么关心“哲学上安全”的随机数,这也包括别人嘴里的“真”随机数。
+
+### 两种安全,一种有用
+
+但就让我们退一步说,你有了一个“真”随机变量。你下一步做什么呢?
+
+你把它们打印出来然后挂在墙上来展示量子宇宙的美与和谐?牛逼!我很理解你。
+
+但是等等,你说你要*用*它们?做密码学用途?额,那这就废了,因为这事情就有点复杂了。
+
+事情是这样的,你的真随机,量子力学加护的随机数即将被用进不理想的现实世界程序里。
+
+因为我们使用的大多数算法并不是理论信息学information-theoretic上安全的,它们“只能”提供**计算意义上的安全**。我能想到的为数不多的例外就只有 Shamir 密钥分享和一次性密码本one-time pad算法。并且,前者算是一个有效的例外(如果你真的打算用它的话),而后者则毫无可行性可言。
+
+但所有那些大名鼎鼎的密码学算法,AES、RSA、Diffie-Hellman、椭圆曲线,还有所有那些加密软件包,OpenSSL、GnuTLS、Keyczar、你的操作系统的加密 API,都仅仅是计算意义上的安全的。
+
+那区别是什么呢?理论信息学上的安全肯定是安全的,绝对是,其它那些的算法都可能在理论上被拥有无限计算力的穷举破解。我们依然愉快地使用它们因为全世界的计算机加起来都不可能在宇宙年龄的时间里破解,至少现在是这样。而这就是我们文章里说的“不安全”。
+
+除非哪个聪明的家伙破解了算法本身——在只需要极少量计算力的情况下。这也是每个密码学家梦寐以求的圣杯:破解 AES 本身、破解 RSA 本身等等。
+
+所以现在我们来到了更底层的东西:随机数生成器,你坚持要“真随机”而不是“伪随机”。但是没过一会儿你的真随机数就被喂进了你极为鄙视的伪随机算法里了!
+
+真相是,如果我们最先进的哈希算法被破解了,或者最先进的块加密被破解了,你得到的那些“哲学上安全”的随机数也无关紧要了,因为反正你也没有安全的应用方法了。
+
+所以把计算性上安全的随机数喂给你的仅仅是计算性上安全的算法就可以了,换而言之,用 `/dev/urandom`。
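+
+比如,在 shell 里直接从 `/dev/urandom` 读取就能得到这样的随机数。下面是一个最小的示意,适用于常见的 Linux 系统:
+
+```
+# 从 /dev/urandom 读取 32 个随机字节,并以十六进制打印
+head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo
+```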
+
+### Linux 随机数生成器的构架
+
+#### 一种错误的看法
+
+你对内核的随机数生成器的理解很可能是像这样的:
+
+![image: mythical structure of the kernel's random number generator][1]
+
+“真随机数”,尽管可能有点瑕疵,进入操作系统然后它的熵立刻被加入内部熵计数器。然后经过“矫偏”和“漂白”之后它进入内核的熵池,然后 `/dev/random` 和 `/dev/urandom` 从里面生成随机数。
+
+“真”随机数生成器,`/dev/random`,直接从池里选出随机数,如果熵计数器表示能满足需要的数字大小,那就吐出数字并且减少熵计数。如果不够的话,它会阻塞程序直至有足够的熵进入系统。
+
+这里很重要一环是 `/dev/random` 几乎只是仅经过必要的“漂白”后就直接把那些进入系统的随机性吐了出来,不经扭曲。
+
+而对 `/dev/urandom` 来说,事情是一样的。除了当没有足够的熵的时候,它不会阻塞,而会从一直在运行的伪随机数生成器(当然,是密码学安全的,CSPRNG)里吐出“低质量”的随机数。这个 CSPRNG 只会用“真随机数”生成种子一次(或者好几次,这不重要),但你不能特别相信它。
+
+在这种对随机数生成的理解下,很多人会觉得在 Linux 下尽量避免 `/dev/urandom` 看上去有那么点道理。
+
+因为要么你有足够多的熵,你会相当于用了 `/dev/random`。要么没有,那你就会从几乎没有高熵输入的 CSPRNG 那里得到一个低质量的随机数。
+
+看上去很邪恶是吧?很不幸的是这种看法是完全错误的。实际上,随机数生成器的构架更像是下面这样的。
+
+#### 更好地简化
+
+##### Linux 4.8 之前
+
+![image: actual structure of the kernel's random number generator before Linux 4.8][2]
+
+> 这是个很粗糙的简化。实际上不仅有一个,而是三个熵池:一个主池,另一个给 `/dev/random`,还有一个给 `/dev/urandom`,后两者依靠从主池里获取熵。这三个池都有各自的熵计数器,但二级池(后两个)的计数器基本都在 0 附近,而“新鲜”的熵总在需要的时候从主池流过来。同时还有很多混合和回流进系统的动作在同时进行。整个过程对于这篇文档来说都过于复杂了,我们跳过。
+
+你看到最大的区别了吗?CSPRNG 并不是和随机数生成器并列运行、在 `/dev/urandom` 需要输出但熵不够时进行填充的。CSPRNG 是整个随机数生成过程的内部组件之一。从来就没有什么“`/dev/random` 直接从池里输出纯纯的随机性”这回事。每个随机源的输入都会在 CSPRNG 里充分混合和散列,这一切都发生在实际变成一个随机数、被 `/dev/urandom` 或者 `/dev/random` 吐出去之前。
+
+另外一个重要的区别是这里没有熵计数器的任何事情,只有预估。一个源给你的熵的量并不是什么很明确能直接得到的数字。你得预估它。注意,如果你太乐观地预估了它,那 `/dev/random` 最重要的特性——只给出熵允许的随机量——就荡然无存了。很不幸的,预估熵的量是很困难的。
+
+Linux 内核只使用事件的到达时间来预估熵的量。它通过多项式插值(一种模型)来预估实际的到达时间有多“出乎意料”。这种多项式插值的方法到底是不是好的预估熵量的方法,本身就是个问题;同时,硬件情况会不会以某种特定的方式影响到达时间也是个问题;而所有硬件的取样率同样是个问题,因为它基本上直接决定了随机数到达时间的颗粒度。
+
+说到最后,至少现在看来,内核的熵预估还是不错的。这也意味着它比较保守。有些人会具体地讨论它有多好,这都超出我的脑容量了。就算这样,如果你坚持不想在没有足够多的熵的情况下吐出随机数,那你看到这里可能还会有一丝紧张。我睡的就很香了,因为我不关心熵预估什么的。
+
+最后强调一下重点:`/dev/random` 和 `/dev/urandom` 是被同一个 CSPRNG 喂给输入的。只有在它们各自的熵池用尽(根据某种预估标准)时,它们的行为才会不同:`/dev/random` 阻塞,`/dev/urandom` 不阻塞。
+
+##### Linux 4.8 以后
+
+在 Linux 4.8 里,`/dev/random` 和 `/dev/urandom` 的等价性被放弃了。现在 `/dev/urandom` 的输出不来自于熵池,而是直接从 CSPRNG 来。
+
+![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
+
+*我们很快会理解*为什么这不是一个安全问题。
+
+### 阻塞有什么问题?
+
+你有没有过需要等待 `/dev/random` 吐出随机数的经历?比如在虚拟机里生成一个 PGP 密钥?或者访问一个正在生成会话密钥的网站?
+
+这些都是问题。阻塞本质上会降低可用性。换而言之你的系统不干你让它干的事情。不用我说,这是不好的。要是它不干活你干嘛搭建它呢?
+
+> 我在工厂自动化里做过和安全相关的系统。猜猜看安全系统失效的主要原因是什么?被错误操作。就这么简单。很多安全措施的流程让工人恼火了。比如时间太长,或者太不方便。你要知道人很会找捷径来“解决”问题。
+
+但其实有个更深刻的问题:人们不喜欢被打断。他们会找一些绕过的方法,把一些诡异的东西拼接在一起,仅仅因为这样能用。一般人根本不懂什么密码学什么乱七八糟的,至少正常的人是这样吧。
+
+为什么不干脆禁止调用 `random()` 呢?为什么不按论坛上随便哪个人的说法,写一些诡异的 ioctl 来增加熵计数器呢?为什么不干脆把 SSL 加密给关了算了呢?
+
+到头来,如果东西太难用的话,你的用户就会被迫开始做一些降低系统安全性的事情——你甚至不知道他们会做些什么。
+
+我们很容易会忽视可用性之类的重要性。毕竟安全第一对吧?所以比起牺牲安全,不可用,难用,不方便都是次要的?
+
+这种二元对立的想法是错的。阻塞不一定就安全了。正如我们看到的,`/dev/urandom` 直接从 CSPRNG 里给你一样好的随机数。用它不好吗!
+
+### CSPRNG 没问题
+
+现在情况听上去很让人绝望。如果连高质量的 `/dev/random` 都是从一个 CSPRNG 里来的,我们怎么敢在高安全性的需求上使用它呢?
+
+实际上,“看上去随机”是现存大多数密码学基础组件的基本要求。如果你观察一个密码学哈希的输出,它一定得和随机的字符串不可区分,密码学家才会认可这个算法。如果你生成一个块加密,它的输出(在你不知道密钥的情况下)也必须和随机数据不可区分才行。
+
+如果任何人能比暴力穷举要更有效地破解一个加密,比如它利用了某些 CSPRNG 伪随机的弱点,那这就又是老一套了:一切都废了,也别谈后面的了。块加密、哈希,一切都是基于某个数学算法,比如 CSPRNG。所以别害怕,到头来都一样。
+
+### 那熵池快空了的情况呢?
+
+毫无影响。
+
+加密算法的根基建立在攻击者不能预测输出上,只要最一开始有足够的随机性(熵)就行了。一般的下限是 256 位,不需要更多了。
+
+鉴于我们一直在很随意地使用“熵”这个概念,我用“位”来量化随机性,希望读者不要太在意细节。像我们之前讨论的那样,内核的随机数生成器甚至没法精确地知道进入系统的熵的量,它只有一个预估,而且这个预估的准确性到底怎么样也没人知道。
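+
+如果你好奇内核当前的熵预估值,可以直接查看 procfs(下面的路径在常见的 Linux 发行版上有效;再次强调,这只是个预估):
+
+```
+# 内核对当前可用熵的预估(单位:位)
+cat /proc/sys/kernel/random/entropy_avail
+# 熵池的总大小
+cat /proc/sys/kernel/random/poolsize
+```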
+
+### 重新选种
+
+但如果熵这么不重要,为什么还要有新的熵一直被收进随机数生成器里呢?
+
+> djb [提到][4] 太多的熵甚至可能会起到反效果。
+
+首先,一般不会这样。如果你有很多随机性可以拿来用,用就对了!
+
+但随机数生成器时不时要重新选种还有别的原因:
+
+想象一下如果有个攻击者获取了你随机数生成器的所有内部状态。这是最坏的情况了,本质上你的一切都暴露给攻击者了。
+
+你已经凉了,因为攻击者可以计算出所有未来会被输出的随机数了。
+
+但是,如果不断有新的熵被混进系统,那内部状态会再一次变得随机起来。所以随机数生成器被设计成这样有些“自愈”能力。
+
+但这是在给内部状态引入新的熵,这和阻塞输出没有任何关系。
+
+### random 和 urandom 的 man 页面
+
+这两个 man 页面在吓唬程序员方面很有建树:
+
+> 从 `/dev/urandom` 读取数据不会因为需要更多熵而阻塞。这样的结果是,如果熵池里没有足够多的熵,取决于驱动使用的算法,返回的数值在理论上有被密码学攻击的可能性。发动这样攻击的步骤并没有出现在任何公开文献当中,但这样的攻击从理论上讲是可能存在的。如果你的应用担心这类情况,你应该使用 `/dev/random`。
+
+>> 实际上已经有了 `/dev/random` 和 `/dev/urandom` 的 Linux 内核 man 页面的更新版本。不幸的是,随便一搜,出现在我结果顶部的仍然是旧的、有缺陷的版本。此外,许多 Linux 发行版仍在发布旧的 man 页面。所以不幸的是,这一节需要在这篇文章中保留更长的时间。我很期待能删除这一节!
+
+没有“公开的文献”描述,但是 NSA 的小卖部里肯定卖这种攻击手段是吧?如果你真的真的很担心(你应该很担心),那就用 `/dev/random` 然后所有问题都没了?
+
+然而事实是,可能某个什么情报局有这种攻击,或者某个什么邪恶黑客组织找到了方法。但如果我们就直接假设这种攻击一定存在也是不合理的。
+
+而且就算你想给自己一个安心,我要给你泼个冷水:AES、SHA-3 或者其它什么常见的加密算法也没有“公开文献记述”的攻击手段。难道你也不用这几个加密算法了?这显然是可笑的。
+
+我们再回到 man 页面说的“使用 `/dev/random`”。我们已经知道了,虽然 `/dev/urandom` 不阻塞,但是它的随机数和 `/dev/random` 都是从同一个 CSPRNG 里来的。
+
+如果你真的需要信息论理论上安全的随机数(你不需要的,相信我),那才有可能成为唯一一个你需要等足够熵进入 CSPRNG 的理由。而且你也不能用 `/dev/random`。
+
+man 页面有毒,就这样。但至少它还稍稍挽回了一下自己:
+
+> 如果你不确定该用 `/dev/random` 还是 `/dev/urandom` ,那你可能应该用后者。通常来说,除了需要长期使用的 GPG/SSL/SSH 密钥以外,你总该使用`/dev/urandom` 。
+
+>> 该手册页的[当前更新版本](http://man7.org/linux/man-pages/man4/random.4.html)毫不含糊地说:
+
+>> `/dev/random` 接口被认为是遗留接口,并且 `/dev/urandom` 在所有用例中都是首选和足够的,除了在启动早期需要随机性的应用程序;对于这些应用程序,必须替代使用 `getrandom(2)`,因为它将阻塞,直到熵池初始化完成。
+
+行。我觉得没必要,但如果你真的要用 `/dev/random` 来生成 “长期使用的密钥”,用就是了也没人拦着!你可能需要等几秒钟或者敲几下键盘来增加熵,但这没什么问题。
+
+但求求你们,不要就因为“你想更安全点”,就连个邮件服务器都挂起半天。
+
+### 正道
+
+本篇文章里的观点显然在互联网上是“小众”的。但如果问问一个真正的密码学家,你很难找到一个认同阻塞 `/dev/random` 的人。
+
+比如我们看看 [Daniel Bernstein][5](即著名的 djb)的看法:
+
+> 我们密码学家对这种胡乱迷信行为表示不负责。你想想,写 `/dev/random` man 页面的人好像同时相信:
+>
+> * (1) 我们不知道如何用一个 256 位长的 `/dev/random` 的输出来生成一个无限长的随机密钥串流(这是我们需要 `/dev/urandom` 吐出来的),但与此同时
+> * (2) 我们却知道怎么用单个密钥来加密一条消息(这是 SSL,PGP 之类干的事情)
+>
+> 对密码学家来说这甚至都不好笑了
+
+
+
+或者 [Thomas Pornin][6] 的看法,他也是我在 stackexchange 上见过最乐于助人的一位:
+
+> 简单来说,是的。展开说,答案还是一样。`/dev/urandom` 生成的数据可以说和真随机完全无法区分,至少在现有科技水平下是这样。使用比 `/dev/urandom` “更好”的随机性毫无意义,除非你在使用极为罕见的“信息论安全”的加密算法。这肯定不是你的情况,不然你早就说了。
+>
+> urandom 的 man 页面多多少少有些误导人,或者干脆可以说是错的——特别是当它说 `/dev/urandom` 会“用完熵”以及“`/dev/random` 是更好的”那几句话。
+
+或者 [Thomas Ptacek][7] 的看法,他不设计密码算法或者密码学系统,但他是一家名声在外的安全咨询公司的创始人,这家公司负责很多渗透和破解烂密码学算法的测试:
+
+> 用 urandom。用 urandom。用 urandom。用 urandom。用 urandom。
+
+### 没有完美
+
+`/dev/urandom` 不是完美的,问题分两层:
+
+在 Linux 上,不像 FreeBSD,`/dev/urandom` 永远不阻塞。记得安全性取决于某个最一开始决定的随机性?种子?
+
+Linux 的 `/dev/urandom` 会很乐意给你吐点不怎么随机的随机数,甚至在内核有机会收集一丁点熵之前。什么时候有这种情况?当你系统刚刚启动的时候。
+
+FreeBSD 的行为更正确点:`/dev/random` 和 `/dev/urandom` 是一样的,在系统启动的时候 `/dev/random` 会阻塞到有足够的熵为止,然后它们都再也不阻塞了。
+
+> 与此同时,Linux 引入了一个新的系统调用syscall,它最早由 OpenBSD 引入,在那里叫 `getentropy(2)`,在 Linux 下叫 `getrandom(2)`。这个系统调用有着上述的正确行为:阻塞到有足够的熵为止,然后再也不阻塞。当然,这是个系统调用,而不是一个字符设备(LCTT 译注:指不在 `/dev/` 下),所以它在 shell 或者别的脚本语言里没那么容易获取。这个系统调用自 Linux 3.17 起就存在了。
+
+在 Linux 上这个问题其实不太大,因为 Linux 发行版会在启动的过程中把一点随机数(这发生在已经有一些熵之后,因为启动程序不会在按下电源的一瞬间就开始运行)保存到一个种子文件中,以便系统下次启动的时候读取。所以每次启动的时候,系统都会从上一次会话里带一点随机性过来。
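+
+在使用 systemd 的发行版上,你可以亲眼看看这个种子文件(路径因发行版和初始化系统而异,这里以 systemd 的默认路径为例):
+
+```
+# 查看启动时用于给随机数生成器重新播种的种子文件
+ls -l /var/lib/systemd/random-seed
+```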
+
+显然这比不上在关机脚本里写入随机种子,因为那样能留存的熵显然更多。但这样做有个显而易见的好处:它不用关心系统是否正确关机了,比如你的系统可能崩溃过。
+
+而且这种做法在你真正第一次启动系统的时候也帮不上忙,不过好在系统安装器一般会写入一个种子文件,所以基本上问题不大。
+
+虚拟机是另外一层问题。因为用户喜欢克隆它们,或者恢复到某个之前的状态。这种情况下那个种子文件就帮不到你了。
+
+但解决方案依然和用 `/dev/random` 没关系,而是你应该正确的给每个克隆或者恢复的镜像重新生成种子文件。
+
+### 太长不看
+
+别问,问就是用 `/dev/urandom`!
+
+--------------------------------------------------------------------------------
+
+via: https://www.2uo.de/myths-about-urandom/
+
+作者:[Thomas Hühn][a]
+译者:[Moelf](https://github.com/Moelf)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2uo.de/
+[1]:https://www.2uo.de/_media/wiki:structure-no.png
+[2]:https://www.2uo.de/_media/wiki:structure-yes.png
+[3]:https://www.2uo.de/_media/wiki:structure-new.png
+[4]:http://blog.cr.yp.to/20140205-entropy.html
+[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
+[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
+[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
diff --git a/translated/tech/20180823 Getting started with Sensu monitoring.md b/translated/tech/20180823 Getting started with Sensu monitoring.md
new file mode 100644
index 0000000000..16d6a473e5
--- /dev/null
+++ b/translated/tech/20180823 Getting started with Sensu monitoring.md
@@ -0,0 +1,287 @@
+Sensu 监控入门
+======
+
+
+Sensu 是一个开源基础设施和应用程序监控解决方案,它监控服务器、相关服务和应用程序健康状况,并通过第三方集成发送警报和通知。Sensu 用 Ruby 编写,可以使用 [RabbitMQ][1] 或 [Redis][2] 来处理消息,它使用 Redis 来存储数据。
+
+如果你想以一种简单而有效的方式监控云基础设施,Sensu 是一个不错的选择。它可以与你的组织已经使用的许多现代 DevOps 技术栈集成,比如 [Slack][3]、[HipChat][4] 或 [IRC][5],它甚至可以用 [PagerDuty][6] 发送移动设备或寻呼机警报。
+
+Sensu 的[模块化架构][7]意味着每个组件都可以安装在同一台服务器上或者在完全独立的机器上。
+
+### 结构
+
+Sensu 的主要通信机制是 `Transport`。每个 Sensu 组件必须连接到 `Transport` 才能相互发送消息。`Transport` 可以使用 RabbitMQ(在生产中推荐使用)或 Redis。
+
+Sensu 服务器处理事件数据并采取行动。它注册客户端并使用过滤器、增变器和处理程序检查结果和监视事件。服务器向客户端发布检查说明,Sensu API 提供 RESTful API,提供对监控数据和核心功能的访问。
+
+[Sensu 客户端][8]执行 Sensu 服务器安排的检查或本地检查定义。Sensu 使用数据存储(Redis)来保存所有的持久数据。最后,[Uchiwa][9] 是与 Sensu API 进行通信的 Web 界面。
+
+![sensu_system.png][11]
+
+### 安装 Sensu
+
+#### 条件
+
+ * 一个 Linux 系统作为服务器节点(本文使用了 CentOS 7)
+ * 要监控的一台或多台 Linux 机器(客户机)
+
+#### 服务器侧
+
+Sensu 需要安装 Redis。要安装 Redis,启用 EPEL 仓库:
+```
+$ sudo yum install epel-release -y
+
+```
+
+然后安装 Redis:
+```
+$ sudo yum install redis -y
+
+```
+
+修改 `/etc/redis.conf` 来禁用保护模式,监听每个地址并设置密码:
+```
+$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
+
+$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
+
+$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
+
+```
+
+启用并启动 Redis 服务:
+```
+$ sudo systemctl enable redis
+$ sudo systemctl start redis
+```
+
+Redis 现在已经安装并准备好被 Sensu 使用。
+
+现在让我们来安装 Sensu。
+
+首先,配置 Sensu 仓库并安装软件包:
+```
+$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
+[sensu]
+name=sensu
+baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
+gpgcheck=0
+enabled=1
+EOF
+
+$ sudo yum install sensu uchiwa -y
+```
+
+让我们为 Sensu 创建最简单的配置文件:
+```
+$ sudo tee /etc/sensu/conf.d/api.json << EOF
+{
+ "api": {
+ "host": "127.0.0.1",
+ "port": 4567
+ }
+}
+EOF
+```
+
+然后,将 Sensu 配置为使用 Redis 作为传输机制:
+```
+$ sudo tee /etc/sensu/conf.d/redis.json << EOF
+{
+ "redis": {
+ "host": "",
+ "port": 6379,
+ "password": "password123"
+ }
+}
+EOF
+
+
+$ sudo tee /etc/sensu/conf.d/transport.json << EOF
+{
+ "transport": {
+ "name": "redis"
+ }
+}
+EOF
+```
+
+在这两个文件中,我们将 Sensu 配置为使用 Redis 作为传输机制,并指定了 Redis 监听的地址。客户端需要直接连接到传输机制,因此每台客户机都需要这两个文件。接下来配置 Uchiwa:
+```
+$ sudo tee /etc/sensu/uchiwa.json << EOF
+{
+ "sensu": [
+ {
+ "name": "sensu",
+ "host": "127.0.0.1",
+ "port": 4567
+ }
+ ],
+ "uchiwa": {
+ "host": "0.0.0.0",
+ "port": 3000
+ }
+}
+EOF
+```
+
+在这个文件中,我们配置 `Uchiwa` 监听端口 3000 上的每个地址(0.0.0.0)。我们还配置 `Uchiwa` 使用 `sensu-api`(已配置好)。
+
+出于安全原因,更改刚刚创建的配置文件的所有者:
+```
+$ sudo chown -R sensu:sensu /etc/sensu
+```
+
+启用并启动 Sensu 服务:
+```
+$ sudo systemctl enable sensu-server sensu-api sensu-client
+$ sudo systemctl start sensu-server sensu-api sensu-client
+$ sudo systemctl enable uchiwa
+$ sudo systemctl start uchiwa
+```
+
+尝试访问 `Uchiwa` 网站:http://<服务器的 IP 地址>:3000
+
+对于生产环境,建议运行 RabbitMQ 集群作为 Transport 而不是 Redis(虽然 Redis 集群也可以用于生产),运行多个 Sensu 服务器实例和 API 实例,以实现负载均衡和高可用性。
+
+Sensu 现在安装完成,让我们来配置客户端。
+
+#### 客户端侧
+
+要添加一个新客户端,你需要通过创建 `/etc/yum.repos.d/sensu.repo` 文件在客户机上启用 Sensu 仓库。
+```
+$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
+[sensu]
+name=sensu
+baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
+gpgcheck=0
+enabled=1
+EOF
+```
+
+启用仓库后,安装 Sensu:
+```
+$ sudo yum install sensu -y
+```
+
+要配置 `sensu-client`,创建在服务器中相同的 `redis.json` 和 `transport.json`,还有 `client.json` 配置文件:
+```
+$ sudo tee /etc/sensu/conf.d/client.json << EOF
+{
+ "client": {
+ "name": "rhel-client",
+ "environment": "development",
+ "subscriptions": [
+ "frontend"
+ ]
+ }
+}
+EOF
+```
+
+在 `name` 字段中,指定一个用于标识此客户机的名称(通常是主机名)。`environment` 字段可以帮助你进行过滤,而 `subscriptions` 则定义了客户机要执行哪些监控检查。
+
+最后,启用并启动服务并检查 `Uchiwa`,因为客户机会自动注册:
+```
+$ sudo systemctl enable sensu-client
+$ sudo systemctl start sensu-client
+```
+
+### Sensu 检查
+
+Sensu 检查有两个组件:一个插件和一个定义。
+
+Sensu 与 [Nagios 检查插件规范][12]兼容,因此无需修改即可使用针对 Nagios 的任何检查。检查是可执行文件,由 Sensu 客户机运行。
+
+检查定义让 Sensu 知道如何、在哪以及何时运行插件。
+
+#### 客户端侧
+
+让我们在客户机上安装一个检查插件。请记住,此插件将在客户机上执行。
+
+启用 EPEL 并安装 `nagios-plugins-http` :
+```
+$ sudo yum install -y epel-release
+$ sudo yum install -y nagios-plugins-http
+```
+
+现在让我们通过手动执行它来研究这个插件。尝试检查客户机上运行的 Web 服务器的状态。它应该会失败,因为我们并没有运行 Web 服务器:
+```
+$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
+connect to address 127.0.0.1 and port 80: Connection refused
+HTTP CRITICAL - Unable to open TCP socket
+```
+
+不出所料,它失败了。检查执行的返回值:
+```
+$ echo $?
+2
+
+```
+
+Nagios 检查插件规范定义了插件执行的四个返回值:
+
+| **插件返回码** | **状态** |
+|------------------------|-----------|
+| 0 | OK |
+| 1 | WARNING |
+| 2 | CRITICAL |
+| 3 | UNKNOWN |
+
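+顺便一提,根据这个规范编写自定义检查插件并不难。下面是一个假想的最小示例(检查对象和阈值都是随意选的,仅作示意),它遵循上面的返回值约定:
+
+```
+#!/bin/sh
+# 假想的最小检查插件:检查根分区使用率,按规范返回 0/1/2
+usage=$(df / | awk 'NR==2 {gsub("%",""); print $5}')
+if [ "$usage" -ge 90 ]; then
+    echo "CRITICAL - root filesystem at ${usage}%"
+    exit 2
+elif [ "$usage" -ge 80 ]; then
+    echo "WARNING - root filesystem at ${usage}%"
+    exit 1
+else
+    echo "OK - root filesystem at ${usage}%"
+    exit 0
+fi
+```
+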
+有了这些信息,我们现在可以在服务器上创建检查定义。
+
+#### 服务器侧
+
+在服务器机器上,创建 `/etc/sensu/conf.d/check_http.json` 文件:
+```
+{
+ "checks": {
+ "check_http": {
+ "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
+ "interval": 10,
+ "subscribers": [
+ "frontend"
+ ]
+ }
+ }
+}
+```
+
+在 `command` 字段中,使用我们之前测试过的命令。`interval` 告诉 Sensu 这个检查的执行频率,以秒为单位。最后,`subscribers` 定义了执行该检查的客户机。
+
+重新启动 `sensu-api` 和 `sensu-server`,并确认新的检查在 Uchiwa 中可用:
+
+```
+$ sudo systemctl restart sensu-api sensu-server
+```
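+
+另外,你也可以直接用 Sensu API 来验证。下面是一个小示意,假设 API 按前文的配置监听在 127.0.0.1:4567(接口路径以 Sensu Core 1.x 文档为准):
+
+```
+# 列出已注册的客户端
+curl -s http://127.0.0.1:4567/clients
+# 列出已定义的检查
+curl -s http://127.0.0.1:4567/checks
+# 查看最近的检查结果
+curl -s http://127.0.0.1:4567/results
+```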
+
+### 接下来
+
+Sensu 是一个功能强大的工具,本文只简要介绍它可以干什么。参阅[文档][13]了解更多信息,访问 Sensu 网站了解有关 [Sensu 社区][14]的更多信息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
+
+作者:[Michael Zamot][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/mzamot
+[1]:https://www.rabbitmq.com/
+[2]:https://redis.io/topics/config
+[3]:https://slack.com/
+[4]:https://en.wikipedia.org/wiki/HipChat
+[5]:http://www.irc.org/
+[6]:https://www.pagerduty.com/
+[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
+[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
+[9]:https://uchiwa.io/#/
+[10]:/file/406576
+[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
+[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
+[13]:https://docs.sensu.io/
+[14]:https://sensu.io/community
diff --git a/translated/tech/20190204 Enjoy Netflix- You Should Thank FreeBSD.md b/translated/tech/20190204 Enjoy Netflix- You Should Thank FreeBSD.md
new file mode 100644
index 0000000000..3413c4f65f
--- /dev/null
+++ b/translated/tech/20190204 Enjoy Netflix- You Should Thank FreeBSD.md
@@ -0,0 +1,91 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
+[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+享受 Netflix 么?你应该感谢 FreeBSD
+======
+
+Netflix 是世界上最受欢迎的流媒体服务之一。
+
+但你已经知道了。不是吗?
+
+你可能不知道的是 Netflix 使用 [FreeBSD][1] 向你提供内容。
+
+是的。Netflix 依靠 FreeBSD 来构建其内部内容交付网络 (CDN)。
+
+[CDN][2] 是一组位于世界各地的服务器。它主要用于向终端用户分发像图像和视频这样的“大文件”。
+
+Netflix 没有选择商业 CDN 服务,而是建立了自己的内部 CDN,名为 [Open Connect][3]。
+
+Open Connect 使用[自定义硬件][4],Open Connect Appliance。你可以在下面的图片中看到它。它可以处理 40Gb/s 的数据,存储容量为 248 TB。
+
+![Netflix’s Open Connect Appliance runs FreeBSD][5]
+
+Netflix 免费为合格的互联网服务提供商 (ISP) 提供 Open Connect Appliance。通过这种方式,大量的 Netflix 流量得到了本地化,ISP 可以更高效地提供 Netflix 内容。
+
+Open Connect Appliance 运行在 FreeBSD 操作系统上,并且[几乎完全运行开源软件][6]。
+
+### Open Connect 使用 FreeBSD “头”
+
+![][7]
+
+你或许会期望 Netflix 在这样一个关键基础设施上使用 FreeBSD 的稳定版本,但 Netflix 会跟踪 [FreeBSD 头/当前版本][8]。Netflix 表示,跟踪“头”让他们“保持前瞻性,专注于创新”。
+
+以下是 Netflix 跟踪 FreeBSD 的好处:
+
+ * 更快的功能迭代
+ * 更快地使用 FreeBSD 的新功能
+ * 更快的 bug 修复
+ * 实现协作
+ * 尽量减少合并冲突
+ * 摊销合并“成本”
+
+
+
+> 运行 FreeBSD “head” 可以让我们非常高效地向用户分发大量数据,同时保持高速的功能开发。
+>
+> Netflix
+
+请记得,甚至[谷歌也使用 Debian][9] 测试版而不是 Debian 稳定版。也许这些企业更喜欢最先进的功能。
+
+与谷歌一样,Netflix 也计划向上游提供代码。这应该有助于 FreeBSD 和其他基于 FreeBSD 的 BSD 发行版。
+
+那么 Netflix 用 FreeBSD 实现了什么?以下是一些统计数据:
+
+> 使用 FreeBSD 和商用硬件,我们在 16 核 2.6 GHz 的 CPU 上只用了约 55% 的 CPU,就实现了 90 Gb/s 的 TLS 加密连接。
+>
+> Netflix
+
+如果你想了解更多关于 Netflix 和 FreeBSD 的信息,可以参考 [FOSDEM 的这个演示文稿][10]。你还可以在[这里][11]观看演示文稿的视频。
+
+目前,大型企业主要依靠 Linux 来实现其服务器基础架构,但 Netflix 已经信任了 BSD。这对 BSD 社区来说是一件好事,因为如果像 Netflix 这样的行业领导者重视 BSD,那么其他人也可以跟上。你怎么看?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/netflix-freebsd-cdn/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.freebsd.org/
+[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
+[3]: https://openconnect.netflix.com/en/
+[4]: https://openconnect.netflix.com/en/hardware/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
+[6]: https://openconnect.netflix.com/en/software/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
+[8]: https://www.bsdnow.tv/tutorials/stable-current
+[9]: https://itsfoss.com/goobuntu-glinux-google/
+[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
+[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm
diff --git a/translated/tech/20190314 14 days of celebrating the Raspberry Pi.md b/translated/tech/20190314 14 days of celebrating the Raspberry Pi.md
deleted file mode 100644
index 64836429b9..0000000000
--- a/translated/tech/20190314 14 days of celebrating the Raspberry Pi.md
+++ /dev/null
@@ -1,77 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (14 days of celebrating the Raspberry Pi)
-[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
-[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
-
-庆祝 Raspberry Pi 的 14 天
-======
-
-在我们关于树莓派入门系列的第 14 篇也是最后一篇文章中,回顾一下我们学到的所有东西。
-
-![][1]
-
-**派节快乐!**
-
-每年的 3 月 14 日,我们这些极客都会庆祝派节。我们用这种方式缩写日期 MMDD,March 14 于是写成 03/14,它的数字上提醒我们 3.14,或者说 [π][2] 的前三位数字。许多美国人没有意识到的是,世界上几乎没有其他国家使用这种[日期格式][3],因此派节几乎只适用于美国,尽管它在全球范围内得到了庆祝。
-
-无论你身在何处,让我们一起庆祝树莓派,并通过回顾过去两周我们所涉及的主题来结束本系列:
-
- * 第 1 天:[你应该选择哪种树莓派?][4]
- * 第 2 天:[如何购买树莓派][5]
- * 第 3 天:[如何启动新的树莓派][6]
- * 第 4 天:[用树莓派学习 Linux][7]
- * 第 5 天:[5 种教孩子用树莓派编程的方法][8]
- * 第 6 天:[你可以用树莓派学习的 3 种流行编程语言][9]
- * 第 7 天:[如何更新树莓派][10]
- * 第 8 天:[如何使用树莓派娱乐][11]
- * 第 9 天:[在树莓派上玩游戏][12]
- * 第 10 天:[让我们实物化:如何在树莓派上使用 GPIO 引脚][13]
- * 第 11 天:[通过树莓派了解计算机安全][14]
- * 第 12 天:[在树莓派上使用 Mathematica 进行高级数学运算][15]
- * 第 13 天:[为树莓派社区做出贡献][16]
-
-
-
-![Pi Day illustration][18]
-
-我将结束本系列,感谢所有关注的人,尤其是那些在过去 14 天里从中学到了东西的人!我还想鼓励大家不断扩展他们对树莓派以及围绕它构建的所有开源(和闭源)技术的了解。
-
-我还鼓励你了解其他文化、哲学、宗教和世界观。让我们成为人类的是这种惊人的 (有时是有趣的) 能力,我们不仅要适应外部环境,而且要适应智力环境。
-
-不管你做什么,保持学习!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/happy-pi-day
-
-作者:[Anderson Silva (Red Hat)][a]
-选题:[lujun9972][b]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
-[2]: https://www.piday.org/million/
-[3]: https://en.wikipedia.org/wiki/Date_format_by_country
-[4]: https://opensource.com/article/19/3/which-raspberry-pi-choose
-[5]: https://opensource.com/article/19/3/how-buy-raspberry-pi
-[6]: https://opensource.com/article/19/3/how-boot-new-raspberry-pi
-[7]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
-[8]: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
-[9]: https://opensource.com/article/19/3/programming-languages-raspberry-pi
-[10]: https://opensource.com/article/19/3/how-raspberry-pi-update
-[11]: https://opensource.com/article/19/3/raspberry-pi-entertainment
-[12]: https://opensource.com/article/19/3/play-games-raspberry-pi
-[13]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
-[14]: https://opensource.com/article/19/3/learn-about-computer-security-raspberry-pi
-[15]: https://opensource.com/article/19/3/do-math-raspberry-pi
-[16]: https://opensource.com/article/19/3/contribute-raspberry-pi-community
-[17]: /file/426561
-[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)
diff --git a/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md b/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
new file mode 100644
index 0000000000..a02cd504d5
--- /dev/null
+++ b/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
@@ -0,0 +1,189 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zero-MK"
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+[#]: subject: "How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?"
+[#]: via: "https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/"
+[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
+
+如何使用带有 nc 命令的 Shell 脚本来检查多个远程 Linux 系统是否打开了指定端口?
+======
+
+我们最近写了一篇关于如何检查远程 Linux 服务器上指定端口是否打开的文章,它能帮助你检查单个服务器。
+
+如果只要检查五个服务器,那么可以使用以下任何一个命令,如 nc(netcat)、nmap 和 telnet。
+
+但是如果想检查 50 多台服务器,那么你的解决方案是什么?
+
+一个一个地检查所有服务器并不容易,也完全没有必要,因为这样会浪费大量的时间。
+
+为了解决这种情况,我使用 nc 命令编写了一个 shell 小脚本,它将允许我们扫描任意数量服务器给定的端口。
+
+如果你只想扫描单个服务器,有多种选择,请参阅 **[检查远程 Linux 系统上的端口是否打开?][1]** 了解更多信息。
+
+本教程中提供了两个脚本,这两个脚本都很有用。
+
+这两个脚本都用于不同的目的,您可以通过阅读标题轻松理解其用途。
+
+在你阅读这篇文章之前,我先问你几个问题。也许你已经知道答案,也许你可以通过阅读这篇文章来获得答案:
+
+如何检查一个远程 Linux 服务器上指定的端口是否打开?
+
+如何检查多个远程 Linux 服务器上指定的端口是否打开?
+
+如何检查多个远程 Linux 服务器上是否打开了多个指定的端口?
+
+### 什么是 nc(netcat)命令?
+
+nc 即 netcat。netcat 是一个简单实用的 Unix 程序,它使用 TCP 或 UDP 协议跨网络连接进行数据读写。
+
+它被设计成一个可靠的 “后端” (back-end) 工具,我们可以直接使用或由其他程序和脚本轻松驱动它。
+
+同时,它也是一个功能丰富的网络调试和探索工具,因为它可以创建您需要的几乎任何类型的连接,并具有几个有趣的内置功能。
+
+Netcat 有三个主要的模式:连接模式、监听模式和隧道模式。
+
+**nc(netcat)的通用语法:**
+
+```
+$ nc [-options] [HostName or IP] [PortNumber]
+```
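+
+下面的脚本会用到 nc 的几个选项,这里先单独演示一下(IP 地址仅为示意):
+
+```
+# -z:只扫描不发送数据;-v:输出详细信息;-w3:连接超时设为 3 秒
+nc -zvw3 192.168.1.2 22
+```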
+
+### 如何检查多个远程 Linux 服务器上的端口是否打开?
+
+如果要检查多个远程 Linux 服务器上给定端口是否打开,请使用以下 shell 脚本。
+
+在我的例子中,我们将检查以下远程服务器的 22 端口是否打开。请确保把文件中的服务器列表更新为你自己的,而不是继续使用我的。
+
+你必须确保服务器列表已经写入 `server-list.txt` 文件,每个服务器(IP)单独一行。
+
+```
+# cat server-list.txt
+192.168.1.2
+192.168.1.3
+192.168.1.4
+192.168.1.5
+192.168.1.6
+192.168.1.7
+```
+
+使用以下脚本可以达到此目的。
+
+```
+# vi port_scan.sh
+
+#!/bin/sh
+# 逐个读取 server-list.txt 中的服务器,检查其 22 端口是否打开
+for server in $(cat server-list.txt)
+do
+  nc -zvw3 $server 22
+done
+```
+
+设置 `port_scan.sh` 文件的可执行权限。
+
+```
+$ chmod +x port_scan.sh
+```
+
+最后运行脚本来达到此目的。
+
+```
+# sh port_scan.sh
+
+Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
+```
+
+### 如何检查多个远程 Linux 服务器上是否打开多个端口?
+
+如果要检查多个服务器中的多个端口,请使用下面的脚本。
+
+在我的例子中,我们将检查给定服务器的 22 和 80 端口是否打开。请确保把端口和服务器名称替换为你自己的,而不是使用我的。
+
+你必须确保要检查的端口已经写入 `port-list.txt` 文件,每个端口单独一行。
+
+```
+# cat port-list.txt
+22
+80
+```
+
+你必须确保要检查的服务器(IP 地址)已经写入 `server-list.txt` 文件,每个服务器(IP)单独一行。
+
+```
+# cat server-list.txt
+192.168.1.2
+192.168.1.3
+192.168.1.4
+192.168.1.5
+192.168.1.6
+192.168.1.7
+```
+
+使用以下脚本来达成此目的。
+
+```
+# vi multiple_port_scan.sh
+
+#!/bin/sh
+# 对 server-list.txt 中的每台服务器,依次检查 port-list.txt 中的每个端口
+for server in $(cat server-list.txt)
+do
+  for port in $(cat port-list.txt)
+  do
+    nc -zvw3 $server $port
+  done
+  echo ""
+done
+```
+
+设置 `multiple_port_scan.sh` 文件的可执行权限。
+
+```
+$ chmod +x multiple_port_scan.sh
+```
+
+最后运行脚本来实现这一目的。
+
+```
+# sh multiple_port_scan.sh
+Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.2 80 port [tcp/http] succeeded!
+
+Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.3 80 port [tcp/http] succeeded!
+
+Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.4 80 port [tcp/http] succeeded!
+
+Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.5 80 port [tcp/http] succeeded!
+
+Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.6 80 port [tcp/http] succeeded!
+
+Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
+Connection to 192.168.1.7 80 port [tcp/http] succeeded!
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[zero-MK](https://github.com/zero-mk)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/
diff --git a/translated/tech/20190325 Getting started with Vim- The basics.md b/translated/tech/20190325 Getting started with Vim- The basics.md
new file mode 100644
index 0000000000..87b2ed01f5
--- /dev/null
+++ b/translated/tech/20190325 Getting started with Vim- The basics.md
@@ -0,0 +1,221 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Modrisco)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Vim: The basics)
+[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
+[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
+
+Vim 入门:基础
+======
+
+为工作或者新项目学习足够的 Vim 知识。
+
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+我还清晰地记得我第一次接触 Vim 的时候。那时我还是一名大学生,计算机学院的机房里都装着 Ubuntu 系统。尽管我在上大学前也曾接触过不同的 Linux 发行版(比如 RHEL,Red Hat 在百思买出售它的 CD),但这却是我第一次要在日常中频繁使用 Linux 系统,因为我的课程要求我这样做。当我开始使用 Linux 时,正如我的前辈和将来的后继者们一样,我感觉自己像是一名“真正的程序员”了。
+
+![Real Programmers comic][2]
+
+真正的程序员,来自 [xkcd][3]
+
+学生们可以使用像 [Kate][4] 一样的图形文本编辑器,这也安装在学校的电脑上了。对于那些可以使用 shell 但不习惯使用控制台编辑器的学生,最流行的选择是 [Nano][5],它提供了很好的交互式菜单和类似于 Windows 图形文本编辑器的体验。
+
+我有时会用 Nano,但当我听说 [Vi/Vim][6] 和 [Emacs][7] 能做一些很棒的事情时我决定试一试它们(主要是因为它们看起来很酷,而且我也很好奇它们有什么特别之处)。第一次使用 Vim 时吓到我了 —— 我不想搞砸任何事情!但是,一旦我掌握了它的诀窍,事情就变得容易得多,我可以欣赏编辑器的强大功能。至于 Emacs,呃,我有点放弃了,但我很高兴我坚持和 Vim 在一起。
+
+在本文中,我将介绍一下 Vim(基于我的个人经验),这样你就可以在 Linux 系统上用它来作为编辑器使用了。这篇文章不会让你变成 Vim 的专家,甚至不会触及 Vim 许多强大功能的皮毛。但是起点总是很重要的,我想让开始的经历尽可能简单,剩下的则由你自己去探索。
+
+### 第 0 步:打开一个控制台窗口
+
+在使用 Vim 前,你需要做一些准备工作。首先,在 Linux 操作系统中打开控制台终端。(因为 Vim 也可以在 MacOS 上使用,Mac 用户也可以使用这些说明。)
+
+打开终端窗口后,输入 `ls` 命令列出当前目录下的内容。然后,输入 `mkdir Tutorial` 命令创建一个名为 `Tutorial` 的新目录。通过输入 `cd Tutorial` 来进入该目录。
+
+![Create a folder][8]
+
+这就是全部的准备工作。现在是时候转到有趣的部分了——开始使用 Vim。
+
+### 第 1 步:创建一个 Vim 文件和不保存退出
+
+还记得我一开始说过我不敢使用 Vim 吗?我当时在害怕“如果我改变了一个现有的文件,把事情搞砸了怎么办?”毕竟,一些计算机科学作业要求我修改现有的文件。我想知道:_如何在不保存更改的情况下打开和关闭文件?_
+
+好消息是,你可以使用相同的命令在 Vim 中创建或打开文件:`vim <FILE_NAME>`,其中 **<FILE_NAME>** 表示要创建或修改的目标文件名。让我们输入 `vim HelloWorld.java` 来创建一个名为 `HelloWorld.java` 的文件。
+
+你好,Vim!现在,讲一下 Vim 中一个非常重要的概念,可能也是最需要记住的:Vim 有多种模式。下面是 Vim 基础中需要知道的三种:
+
+模式 | 描述
+---|---
+正常模式 | 默认模式,用于导航和简单编辑
+插入模式 | 用于插入和修改文本
+命令模式 | 用于执行如保存,退出等命令
+
+Vim 还有其他模式,例如可视模式、选择模式和 Ex 模式。不过上面的三种模式对我们来说已经足够了。
+
+你现在正处于正常模式,如果有文本,你可以用箭头键移动或使用其他导航键(将在稍后看到)。要确定你正处于正常模式,只需按下 `esc` (Escape)键即可。
+
+> **提示:** **Esc** 切换到正常模式。即使你已经在正常模式下,点击 **Esc** 只是为了练习。
+
+现在,有趣的事情发生了。输入 `:` (冒号键)并接着 `q!` (完整命令:`:q!`)。你的屏幕将显示如下:
+
+![Editing Vim][9]
+
+在正常模式下输入冒号会将 Vim 切换到命令行模式,执行 `:q!` 命令将退出 Vim 编辑器而不进行保存。换句话说,你放弃了所有的更改。你也可以使用 `ZQ` 命令;选择你认为更方便的选项。
+
+一旦你按下 `Enter` (回车),你就不再在 Vim 中。重复练习几次来掌握这条命令。熟悉了这部分内容之后,请转到下一节,了解如何对文件进行更改。
+
+### 第 2 步:在 Vim 中修改并保存
+
+通过输入 `vim HelloWorld.java` 和回车键来再次打开这个文件。你可以在插入模式中修改文件。首先,通过 `Esc` 键来确定你正处于正常模式。接着输入 `i` 来进入插入模式(没错,就是字母 **i**)。
+
+在左下角,你将看到 `-- INSERT --`,这标志着你正处于插入模式。
+
+![Vim insert mode][10]
+
+写一些 Java 代码。你可以写任何你想写的,不过这也有一份你可以参照的例子。你的屏幕将显示如下:
+
+```
+public class HelloWorld {
+    public static void main(String[] args) {
+        System.out.println("Hello, Opensource");
+    }
+}
+```
+非常漂亮!注意文本是如何在 Java 语法中高亮显示的。因为这是个 Java 文件,所以 Vim 将自动检测语法并高亮颜色。
+
+保存文件:按下 `Esc` 来退出插入模式并进入命令模式。输入 `:` 并接着 `x!`(完整命令:`:x!`),按回车键来保存文件。你也可以输入 `:wq` 来执行相同的操作。
+
+现在,你知道了如何使用插入模式输入文本并使用以下命令保存文件:`:x!` 或者 `:wq`。
+
+### 第 3 步:Vim 中的基本导航
+
+虽然你总是可以使用上箭头、下箭头、左箭头和右箭头在文件中移动,但在一个几乎有数不清行数的大文件中,这将是非常困难的。能够在一行中跳跃光标将会是很有用的。虽然 Vim 提供了不少很棒的导航功能,不过在一开始,我想向你展示如何在 Vim 中到达某一特定的行。
+
+单击 `Esc` 来确定你处于正常模式,接着输入 `:set number` 并键入回车。
+
+瞧!你现在可以在每一行的左侧看到行号。
+
+![Showing Line Numbers][12]
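+
+如果你希望 Vim 每次启动都自动显示行号,可以把这条设置写进 `~/.vimrc` 配置文件。下面用一条 shell 命令来追加它(假设你使用默认的 `~/.vimrc` 路径):
+
+```
+# 让 Vim 启动时自动执行 set number
+echo 'set number' >> ~/.vimrc
+```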
+
+好,你也许会说,“这确实很酷,不过我该怎么跳到某一行呢?”再一次的,确认你正处于正常模式。接着输入 `:<LINE_NUMBER>`,在这里 **<LINE_NUMBER>** 是你想去的那一行的行号。按下回车键来试着移动到第二行。
+
+```
+:2
+```
+
+现在,跳到第三行。
+
+![Jump to line 3][13]
+
+但是,假如你正在处理一个一千多行的文件,而你正想到文件底部。这该怎么办呢?确认你正处于正常模式,接着输入 `:$` 并按下回车。
+
+你将来到最后一行!
+
+现在,你知道如何在行间跳跃了,作为补充,我们来学一下如何移动到一行的行尾。确认你正处于有文本内容的一行,如第三行,接着输入 `$`。
+
+![Go to the last character][14]
+
+你现在来到这行的最后一个字符了。在此示例中,高亮左大括号以显示光标移动到的位置,右大括号被高亮是因为它是左大括号的匹配字符。
+
+这就是 Vim 中的基本导航功能。等等,别急着退出文件。让我们转到 Vim 中的基本编辑。不过,你可以暂时随便喝杯咖啡或茶休息一下。
+
+### 第 4 步:Vim 中的基本编辑
+
+现在,你已经知道如何通过跳到想要的一行来在文件中导航,你可以使用这个技能在 Vim 中进行一些基本编辑。切换到插入模式。(还记得怎么做吗?是不是输入 `i` ?)当然,你可以使用键盘逐一删除或插入字符来进行编辑,但是 Vim 提供了更快捷的方法来编辑文件。
+
+来到第三行,这里的代码是 **public static void main(String[] args) {**。双击 `d` 键,没错,就是 `dd`。如果你成功做到了,你将会看到,第三行消失了,剩下的所有行都向上移动了一行。(例如,第四行变成了第三行)。
+
+![Deleting A Line][15]
+
+这就是 _删除_(delete) 命令。不要担心,键入 `u`,你会发现这一行又回来了。喔,这就是 _撤销_(undo) 命令。
+
+![Undoing a change in Vim][16]
+
+下一课是学习如何复制和粘贴文本,但首先,你需要学习如何在 Vim 中突出显示文本。按下 `v` 并向左右移动光标来选择或反选文本。当你向其他人展示代码并希望标识你想让他们注意到的代码时,这个功能也非常有用。
+
+![Highlighting text in Vim][17]
+
+来到第四行,这里的代码是 **System.out.println("Hello, Opensource");**。高亮这一行的所有内容。好了吗?当第四行的内容处于高亮时,按下 `y`。这就叫做 _复制_(yank)模式,文本将会被复制到剪贴板上。接下来,输入 `o` 来创建新的一行。注意,这将让你进入插入模式。通过按 `Esc` 退出插入模式,然后按下 `p`,代表 _粘贴_。这将把复制的文本从第三行粘贴到第四行。
+
+![Pasting in Vim][18]
+
+作为练习,请重复这些步骤,但也要修改新创建的行中的文字。此外,请确保这些行对齐工整。
+
+> **提示:** 您需要在插入模式和命令行模式之间来回切换才能完成此任务。
+
+完成之后,通过 `:x!` 命令保存文件。以上就是 Vim 基本编辑的全部内容。
+
+### 第 5 步:Vim 中的基本搜索
+
+假设你的团队领导希望你更改项目中的文本字符串。你该如何快速完成任务?你可能希望使用某个关键字来搜索该行。
+
+Vim 的搜索功能非常有用。通过 `Esc` 键进入命令模式,然后输入冒号 `:`,我们可以通过输入 `/<SEARCH_KEYWORD>` 来搜索关键词,其中 **<SEARCH_KEYWORD>** 指你希望搜索的字符串。在这里,我们搜索关键字符串 “Hello”。在下面的图示中缺少冒号,但这是必需的。
+
+![Searching in Vim][19]
+
+但是,一个关键字可以出现不止一次,而这可能不是你想要的那一个。那么,如何找到下一个匹配项呢?只需按 `n` 键即可,这代表 _下一个_(next)。执行此操作时,请确保你没有处于插入模式!
+
+### 附加步骤:Vim中的分割模式
+
+以上几乎涵盖了所有的 Vim 基础知识。但是,作为一个额外奖励,我想给你展示 Vim 一个很酷的特性,叫做 _分割_(split)模式。
+
+退出 _HelloWorld.java_ 并创建一个新文件。在控制台窗口中,输入 `vim GoodBye.java` 并按回车键来创建一个名为 _GoodBye.java_ 的新文件。
+
+输入任何你想输入的内容,我选择输入“Goodbye”。保存文件(记住你可以在命令模式中使用 `:x!` 或者 `:wq`)。
+
+在命令模式中,输入 `:split HelloWorld.java`,来看看发生了什么。
+
+![Split mode in Vim][20]
+
+Wow!快看!**split** 命令将控制台窗口水平分割成了两个部分,上面是 _HelloWorld.java_,下面是 _GoodBye.java_。该怎么在窗口之间切换呢?按住 `Control` 键(在 Mac 上)或 `Ctrl` 键(在 PC 上),然后按下 `ww`(即双击 `w` 键)。
+
+作为最后一个练习,尝试通过复制和粘贴 _HelloWorld.java_ 来编辑 _GoodBye.java_ 以匹配下面屏幕上的内容。
+
+![Modify GoodBye.java file in Split Mode][21]
+
+保存两份文件,成功!
+
+> **提示 1:** 如果你想将两个文件窗口垂直分割,使用 `:vsplit <FILE_NAME>` 命令(代替 `:split <FILE_NAME>` 命令,**<FILE_NAME>** 指你想要以分割模式打开的文件名)。
+>
+> **提示 2:** 你可以通过调用任意数量的 **split** 或者 **vsplit** 命令来打开两个以上的文件。试一试,看看它效果如何。
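+
+另外一个小技巧:你也可以在启动 Vim 时就直接以分割模式打开多个文件(`-o` 和 `-O` 是 Vim 自带的命令行选项):
+
+```
+# 以水平分割打开两个文件
+vim -o HelloWorld.java GoodBye.java
+# 以垂直分割打开两个文件
+vim -O HelloWorld.java GoodBye.java
+```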
+
+### Vim 速查表
+
+在本文中,您学会了如何使用 Vim 来完成工作或项目。但这只是你开启 Vim 强大功能之旅的开始。请务必在 Opensource.com 上查看其他很棒的教程和技巧。
+
+为了让一切变得简单些,我已经将你学到的一切总结到了[一份方便的速查表][22]中。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/getting-started-vim
+
+作者:[Bryant Son (Red Hat, Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[Modrisco](https://github.com/Modrisco)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/brson
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
+[3]: https://xkcd.com/378/
+[4]: https://kate-editor.org
+[5]: https://www.nano-editor.org
+[6]: https://www.vim.org
+[7]: https://www.gnu.org/software/emacs
+[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
+[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
+[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
+[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
+[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
+[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
+[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
+[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
+[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
+[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
+[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
+[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
+[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
+[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
+[22]: https://opensource.com/downloads/cheat-sheet-vim
diff --git a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
new file mode 100644
index 0000000000..70652f894c
--- /dev/null
+++ b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
@@ -0,0 +1,158 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using Square Brackets in Bash: Part 2)
+[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+在 Bash 中使用方括号(二)
+======
+
+![square brackets][1]
+
+> 我们继续来看方括号的用法,它们甚至还可以在 Bash 当中作为一个命令使用。
+
+[Creative Commons Zero][2]
+
+欢迎回到我们的方括号专题。在[前一篇文章][3]当中,我们介绍了方括号在命令行中可以用于通配操作,如果你已经读过前一篇文章,就可以从这里继续了。
+
+方括号还可以以一个命令的形式使用,就像这样:
+
+```
+[ "a" = "a" ]
+```
+
+上面这种 `[ ... ]` 的形式就可以看成是一个可执行的命令。要注意,方括号内部的内容 `"a" = "a"` 和方括号 `[`、`]` 之间是有空格隔开的。因为这里的方括号被视作一个命令,因此要用空格将命令和它的参数隔开。
+
+上面这个命令的含义是"判断字符串 `"a"` 和字符串 `"a"` 是否相同",如果判断结果为真,那么 `[ ... ]` 就会以状态码(status code)0 退出,否则以状态码 1 退出。在之前的文章中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
+
+分别执行
+
+```
+[ "a" = "a" ]
+echo $?
+```
+
+以及
+
+```
+[ "a" = "b" ]
+echo $?
+```
+
+这两段命令中,前者会输出 0(判断结果为真),后者则会输出 1(判断结果为假)。在 Bash 当中,如果一个命令的状态码是 0,表示这个命令正常执行完成并退出,而且其中没有出现错误,对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]` 也同样遵循这样的规则。
+
+因此,`[ ... ]` 很适合在 `if ... then`、`while` 或 `until` 这类需要先判断某个条件是否达成的结构中使用,如下面的示例所示。
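+
+下面是一个简单的示意片段,展示 `[ ... ]` 在 `if` 和 `while` 中的典型用法(其中的输出文字仅为演示):
+
+```
+# 在 if 中使用 [ ... ]:判断当前用户是否是 root
+if [ "$USER" = "root" ]; then
+    echo "以 root 身份运行"
+else
+    echo "以普通用户身份运行"
+fi
+
+# 在 while 中使用 [ ... ]:计数小于 3 时持续循环
+i=0
+while [ $i -lt 3 ]; do
+    echo "第 $i 次循环"
+    i=$((i+1))
+done
+```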
+
+对应使用的逻辑判断运算符也相当直观:
+
+```
+[ STRING1 = STRING2 ] => 判断两个字符串是否相同
+[ STRING1 != STRING2 ] => 判断两个字符串是否不同
+[ INTEGER1 -eq INTEGER2 ] => 判断 INTEGER1 是否等于 INTEGER2
+[ INTEGER1 -ge INTEGER2 ] => 判断 INTEGER1 是否大于等于 INTEGER2
+[ INTEGER1 -gt INTEGER2 ] => 判断 INTEGER1 是否大于 INTEGER2
+[ INTEGER1 -le INTEGER2 ] => 判断 INTEGER1 是否小于等于 INTEGER2
+[ INTEGER1 -lt INTEGER2 ] => 判断 INTEGER1 是否小于 INTEGER2
+[ INTEGER1 -ne INTEGER2 ] => 判断 INTEGER1 是否不等于 INTEGER2
+等等……
+```
+
+方括号的这种用法也可以很有 shell 风格,例如通过带上 `-f` 参数可以判断某个文件是否存在:
+
+```
+for i in {000..099}; \
+ do \
+ if [ -f file$i ]; \
+ then \
+ echo file$i exists; \
+ else \
+ touch file$i; \
+ echo I made file$i; \
+ fi; \
+done
+```
+
+如果你在上一篇文章使用到的测试目录中运行以上这串命令,其中的第 3 行会判断那几十个文件当中的某个文件是否存在。如果文件存在,会输出一条提示信息;如果文件不存在,就会把对应的文件创建出来。最终,这个目录中会完整存在从 `file000` 到 `file099` 这一百个文件。
+
+上面这段命令还可以写得更加简洁:
+
+```
+for i in {000..099};\
+do\
+ if [ ! -f file$i ];\
+ then\
+ touch file$i;\
+ echo I made file$i;\
+ fi;\
+done
+```
+
+其中 `!` 运算符表示将判断结果取反,因此第 3 行的含义就是“如果文件 `file$i` 不存在”。
+
+可以尝试一下将测试目录中那几十个文件随意删除几个,然后运行上面的命令,你就可以看到它是如何把被删除的文件重新创建出来的。
+
+除了 `-f` 之外,还有很多有用的参数。`-d` 参数可以判断某个目录是否存在,`-h` 参数可以判断某个文件是不是一个符号链接。可以用 `-G` 参数判断某个文件是否属于某个用户组,用 `-ot` 参数判断某个文件的最后更新时间是否早于另一个文件,甚至还可以判断某个文件是否为空文件。
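+
+下面是其中几个参数的简单示意(文件和目录名只是假设的演示用例):
+
+```
+[ -d /tmp ] && echo "/tmp 是一个存在的目录"
+[ -h /usr/bin/vi ] && echo "/usr/bin/vi 是一个符号链接"
+[ file001 -ot file002 ] && echo "file001 比 file002 更早被修改"
+```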
+
+运行下面的几条命令,可以向几个文件中写入一些内容:
+
+```
+echo "Hello World" >> file023
+echo "This is a message" >> file065
+echo "To humanity" >> file010
+```
+
+然后运行:
+
+```
+for i in {000..099};\
+do\
+ if [ ! -s file$i ];\
+ then\
+ rm file$i;\
+ echo I removed file$i;\
+ fi;\
+done
+```
+
+你就会发现所有空文件都被删除了,只剩下少数几个非空的文件。
+
+如果你还想了解更多别的参数,可以执行 `man test` 来查看 `test` 命令的 man 手册(`test` 是 `[ ... ]` 的命令别名)。
+
+有时候你还会看到 `[[ ... ]]` 这种双方括号的形式,使用起来和单方括号差别不大。但双方括号支持的比较运算符更加丰富:例如可以使用 `==` 来判断某个字符串是否符合某个模式(pattern),也可以使用 `<`、`>` 来判断两个字符串的字典序先后。
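+
+下面是两个简单的示意:
+
+```
+[[ "abcdef" == a* ]] && echo "字符串以 a 开头"
+[[ "apple" < "banana" ]] && echo "apple 排在 banana 之前"
+```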
+
+可以在 [Bash 表达式文档][5]中了解到双方括号支持的更多运算符。
+
+### 下一集
+
+在下一篇文章中,我们会开始介绍圆括号 `()` 在 Linux 命令行中的用法,敬请关注!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
+[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
+[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
+[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
+[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
+[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
+[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
+[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
+[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
+
diff --git a/translated/tech/20190403 5 useful open source log analysis tools.md b/translated/tech/20190403 5 useful open source log analysis tools.md
new file mode 100644
index 0000000000..cc80f12590
--- /dev/null
+++ b/translated/tech/20190403 5 useful open source log analysis tools.md
@@ -0,0 +1,120 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 useful open source log analysis tools)
+[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
+[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
+
+5 个有用的开源日志分析工具
+======
+监控网络活动既重要又繁琐,以下这些工具可以使它更容易。
+![People work on a computer server][1]
+
+监控网络活动是一项繁琐的工作,但有充分的理由这样做。例如,它允许你查找和调查工作站、连接到网络的设备和服务器上的可疑登录,同时确定是否存在管理员滥用的行为。你还可以跟踪软件安装和数据传输,以便实时识别潜在问题,而不是等到损坏发生后才进行追查。
+
+这些日志还有助于使你的公司遵守适用于在欧盟范围内运营的任何实体的[通用数据保护条例][2](GDPR)。如果你的网站可以在欧盟范围内浏览,那么你也在其适用范围之内。
+
+日志记录,包括跟踪和分析,应该是任何监控基础设施中的一个基本过程。要从灾难中恢复 SQL Server 数据库,需要事务日志文件。此外,通过跟踪日志文件,DevOps 团队和数据库管理员(DBA)可以保持最佳的数据库性能,或者在网络攻击的情况下找到未经授权活动的证据。因此,定期监视和分析系统日志非常重要,这是重建导致问题发生的事件链的可靠方式。
+
+现在有很多开源日志跟踪器和分析工具可供使用,这使得为活动日志选择合适的资源比你想象的更容易。免费和开源软件社区提供的日志工具适用于各种站点和操作系统。以下是我用过的最好的五个工具,排名不分先后。
+
+### Graylog
+
+[Graylog][3] 于 2011 年在德国启动,现在作为开源工具或商业解决方案提供。它被设计成一个集中式日志管理系统,接受来自不同服务器或端点的数据流,并允许你快速浏览或分析该信息。
+
+![Graylog screenshot][4]
+
+Graylog 在系统管理员中建立了良好的声誉,因为它易于扩展。大多数 Web 项目都是从小规模开始的,但它们可能呈指数级增长。Graylog 可以平衡后端服务网络中的负载,每天可以处理几 TB 的日志数据。
+
+IT 管理员会发现 Graylog 的前端界面易于使用,而且功能强大。Graylog 是围绕仪表板的概念构建的,它允许你选择你认为最有价值的指标或数据源,并快速查看一段时间内的趋势。
+
+当发生安全或性能事件时,IT 管理员希望能够尽可能地将症状追根溯源。Graylog 的搜索功能使这变得容易。它有内置的容错功能,可运行多线程搜索,因此你可以同时分析多个潜在的威胁。
+
+### Nagios
+
+[Nagios][5] 由一位开发人员于 1999 年启动,现在已经发展成为管理日志数据最可靠的开源工具之一。当前版本的 Nagios 可以与运行 Microsoft Windows、Linux 或 Unix 的服务器集成。
+
+![Nagios Core][6]
+
+它的主要产品是日志服务器,旨在简化数据收集并使系统管理员更容易访问信息。Nagios 日志服务器引擎将实时捕获数据并将其提供给一个强大的搜索工具。通过内置的设置向导,可以轻松地与新端点或应用程序集成。
+
+Nagios 最常用于需要监控其本地网络安全性的组织。它可以审核一系列与网络相关的事件,并帮助自动分发警报。如果满足特定条件,甚至可以将 Nagios 配置为运行预定义的脚本,从而允许你在人员介入之前解决问题。
+
+作为网络审核的一部分,Nagios 将根据日志数据来源的地理位置过滤日志数据。这意味着你可以使用映射技术构建全面的仪表板,以了解 Web 流量是如何流动的。
+
+### Elastic Stack ("ELK Stack")
+
+[Elastic Stack][7],通常称为 ELK Stack,是需要筛选大量数据并理解其日志系统的组织中最受欢迎的开源工具之一(这也是我个人的最爱)。
+
+![ELK Stack][8]
+
+它由三个独立的产品组成:Elasticsearch、Kibana 和 Logstash:
+
+ * 顾名思义, _**Elasticsearch**_ 旨在帮助用户使用多种查询语言和类型在数据集中找到匹配项。速度是它最大的优势。它可以扩展成由数百个服务器节点组成的集群,轻松处理 PB 级的数据。
+
+ * _**Kibana**_ 是一个可视化工具,与 Elasticsearch 一起工作,允许用户分析他们的数据并构建强大的报告。当你第一次在服务器集群上安装 Kibana 引擎时,你将访问一个显示统计数据、图表甚至是动画的界面。
+
+ * ELK Stack 的最后一部分是 _**Logstash**_ ,它作为一个纯粹的服务端管道进入 Elasticsearch 数据库。你可以将 Logstash 与各种编程语言和 API 集成,这样你的网站和移动应用程序中的信息就可以直接提供给强大的 Elastic Stack 搜索引擎。
+
+ELK Stack 的一个独特功能是,它允许你监视构建在 WordPress 开源安装上的应用程序。与[跟踪管理员和 PHP 日志][9]的大多数开箱即用的安全审计日志工具相比,ELK Stack 可以筛选 Web 服务器和数据库日志。
+
+糟糕的日志跟踪和数据库管理是导致网站性能不佳的最常见原因之一。没有定期检查、优化和清空数据库日志,不仅会降低站点的运行速度,还可能导致其完全崩溃。因此,ELK Stack 对于每个 WordPress 开发人员的工具包来说都是一个优秀的工具。
+
+### LOGalyze
+
+[LOGalyze][11] 是一个位于匈牙利的组织,它为系统管理员和安全专家构建开源工具,以帮助他们管理服务器日志,并将其转换为有用的数据点。其主要产品可供个人或商业用户免费下载。
+
+![LOGalyze][12]
+
+LOGalyze 被设计成一个巨大的管道,其中多个服务器、应用程序和网络设备可以使用简单对象访问协议(SOAP)方法提供信息。它提供了一个前端界面,管理员可以登录界面来监控数据集并开始分析数据。
+
+在 LOGalyze 的 Web 界面中,你可以运行动态报告,并将其导出到 Excel 文件、PDF 文件或其他格式。这些报告可以基于 LOGalyze 后端管理的多维统计信息。它甚至可以跨服务器或应用程序组合数据字段,借此来帮助你发现性能趋势。
+
+LOGalyze 旨在不到一个小时内完成安装和配置。它具有预先构建的功能,允许它以法律所要求的格式收集审计数据。例如,LOGalyze 可以很容易地运行不同的 HIPAA 报告,以确保你的组织遵守健康法律并保持合规性。
+
+### Fluentd
+
+如果你所在组织的数据源位于许多不同的位置和环境中,那么你的目标应该是尽可能地将它们集中在一起。否则,你将难以监控性能并防范安全威胁。
+
+[Fluentd][13] 是一个强大的数据收集解决方案,它是完全开源的。它没有提供完整的前端界面,而是作为一个收集层来帮助组织不同的管道。Fluentd 被世界上一些最大的公司使用,但也可以在较小的组织中部署。
+
+![Fluentd architecture][14]
+
+Fluentd 最大的好处是它与当今最常用的技术工具兼容。例如,你可以使用 Fluentd 从 Web 服务器(如 Apache)、智能设备传感器和 MongoDB 的动态记录中收集数据。如何处理这些数据完全取决于你。
+
+Fluentd 基于 JSON 数据格式,它可以与由卓越的开发人员创建的 [500 多个插件][15]一起使用。这使你可以将日志数据扩展到其他应用程序中,并通过最少的手工操作从中获得更好的分析。
+
+### 写在最后
+
+如果出于安全原因、政府合规性和衡量生产力的原因,你还没有使用活动日志,那么现在开始改变吧。市场上有很多插件,它们可以与多种环境或平台一起工作,甚至可以在内部网络上使用。不要等发生了严重的事件,才采取一个积极主动的方法去维护和监督日志。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/log-analysis-tools
+
+作者:[Sam Bocetta][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sambocetta
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server)
+[2]: https://opensource.com/article/18/4/gdpr-impact
+[3]: https://www.graylog.org/products/open-source
+[4]: https://opensource.com/sites/default/files/uploads/graylog-data.png (Graylog screenshot)
+[5]: https://www.nagios.org/downloads/
+[6]: https://opensource.com/sites/default/files/uploads/nagios_core_4.0.8.png (Nagios Core)
+[7]: https://www.elastic.co/products
+[8]: https://opensource.com/sites/default/files/uploads/elk-stack.png (ELK Stack)
+[9]: https://www.wpsecurityauditlog.com/benefits-wordpress-activity-log/
+[10]: https://websitesetup.org/how-to-speed-up-wordpress/
+[11]: http://www.logalyze.com/
+[12]: https://opensource.com/sites/default/files/uploads/logalyze.jpg (LOGalyze)
+[13]: https://www.fluentd.org/
+[14]: https://opensource.com/sites/default/files/uploads/fluentd-architecture.png (Fluentd architecture)
+[15]: https://opensource.com/article/18/9/open-source-log-aggregation-tools
diff --git a/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md b/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
new file mode 100644
index 0000000000..eed2f478ff
--- /dev/null
+++ b/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
@@ -0,0 +1,186 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Raverstern)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
+[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+解决 Ubuntu 在启动时冻结的问题
+======
+
+_**本文将向您一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其他情况下也理应可用。**_
+
+不久前我买了台[宏碁掠夺者][1](此为[广告联盟][2]链接)笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。
+
+我即便不打游戏也选择这台电竞笔记本电脑的原因,就是为了 [NVIDIA 的显卡][4]。宏碁掠夺者 Helios 300 上搭载了一块 [NVIDIA Geforce][5] GTX 1050Ti 显卡。
+
+NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It’s FOSS 的读者都向我求助过关于 NVIDIA 笔记本电脑的问题,而我当时无能为力,因为我手头上没有使用 NVIDIA 显卡的系统。
+
+所以当我决定搞一台专门的设备来测试 Linux 发行版时,我选择了带有 NVIDIA 显卡的笔记本电脑。
+
+这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个安装过程舒适、方便、快捷。
+
+随后我启动了 [Ubuntu][7]。那熟悉的紫色界面展现了出来,然后我就发现它卡在那儿了。鼠标一动不动,我也输入不了任何东西,除了长按电源键强制关机以外,我啥事儿都做不了。
+
+然后再次尝试启动,结果一模一样。整个系统就一直卡在那个紫色界面,随后的登录界面也出不来。
+
+这听起来很耳熟吧?下面就让我来告诉您如何解决这个 Ubuntu 在启动过程中冻结的问题。
+
+请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mint、elementary OS 等等。关于这点我已经在 Zorin OS 上确认过。
+
+### 解决 Ubuntu 启动中由 NVIDIA 驱动引起的冻结问题
+
+![][8]
+
+我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为您所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。
+
+事不宜迟,让我们马上来看看如何解决这个问题。
+
+#### 步骤 1:编辑 Grub
+
+在启动系统的过程中,请您在如下图所示的 Grub 界面上停下。如果您没看到这个界面,在启动电脑时请按住 Shift 键。
+
+在这个界面上,按“E”键进入编辑模式。
+
+![按“E”按键][10]
+
+您应该看到一些如下图所示的代码。此刻您应关注于以 Linux 开头的那一行。
+
+![前往 Linux 开头的那一行][11]
+
+#### 步骤 2:在 Grub 中临时修改 Linux 内核参数
+
+回忆一下,我们的问题出在 NVIDIA 显卡驱动上,是开源版 NVIDIA 驱动的不适配导致了我们的问题。所以此处我们能做的就是禁用这些驱动。
+
+此刻,您有多种方式可以禁用这些驱动。我最喜欢的方式是通过 `nomodeset` 来禁用所有显卡的驱动。
+
+请把下列文本添加到以 Linux 开头的那一行的末尾。此处您应该可以正常输入。请确保您把这段文本加到了行末。
+
+```
+nomodeset
+```
+
+现在您屏幕上的显示应如下图所示:
+
+![通过向内核添加 nomodeset 来禁用显卡驱动][12]
+
+按 Ctrl+X 或 F10 保存并退出。下次您就将以修改后的内核参数来启动。
+
+**对以上操作的解释**
+
+所以我们究竟做了些啥?那个 nomodeset 又是个什么玩意儿?让我来向您简单地解释一下。
+
+通常来说,显卡是在 X 或者是其他显示服务开始执行后才被启用的,也就是在您登录系统并看到图形界面以后。
+
+但最近,视频模式的设置被移植进了内核。这么做的众多优点之一,就是能让您看到一个漂亮且高清的启动画面。
+
+若您往内核中加入 nomodeset 参数,它就会指示内核在显示服务启动后才加载显卡驱动。
+
+换句话说,您在此时禁止视频驱动的加载,由此产生的冲突也会随之消失。您在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。
+
+#### 步骤 3:更新您的系统并安装 NVIDIA 专有驱动
+
+别因为现在可以登录系统了就过早地高兴起来。您之前所做的只是临时措施,在下次启动的时候,您的系统依旧会尝试加载 Nouveau 驱动并因此冻结。
+
+这是否意味着您将不得不一直在 Grub 界面上编辑内核参数?可喜可贺,答案是否定的。
+
+您可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后,Ubuntu 将不会在启动过程中冻结。
+
+我假设这是您第一次登录到一个新安装的系统。这意味着在做其他事情之前您必须先[更新 Ubuntu][14]。通过 Ubuntu 的 Ctrl+Alt+T [系统快捷键][15]打开一个终端,并输入以下命令:
+
+```
+sudo apt update && sudo apt upgrade -y
+```
+
+在上述命令执行完以后,您可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前您需要先重启一下您的系统。在您重启时,您还是需要按我们之前做的那样修改内核参数。
+
+当您的系统已经更新和重启完毕,按下 Windows 键打开一个菜单栏,并搜索“软件与更新”(Software & Updates)。
+
+![点击“软件与更新”(Software & Updates)][16]
+
+然后切换到“额外驱动”(Additional Drivers)标签页,并等待数秒。然后您就能看到可供系统使用的专有驱动了。在这个列表上您应该可以找到 NVIDIA。
+
+选择专有驱动并点击“应用更改”(Apply Changes)。
+
+![NVIDIA 驱动安装中][17]
+
+新驱动的安装会费点时间。若您的系统启用了 UEFI 安全启动,您将被要求设置一个密码。_您可以将其设置为任何容易记住的密码_。它的用处我将在步骤 4 中说明。
+
+![您可能需要设置一个安全启动密码][18]
+
+安装完成后,您会被要求重启系统以令之前的更改生效。
+
+![在新驱动安装好后重启您的系统][19]
+
+#### 步骤 4:处理 MOK(仅针对启用了 UEFI 安全启动的设备)
+
+如果您之前被要求设置安全启动密码,此刻您会看到一个蓝色界面,上面写着“MOK management”。这是个复杂的概念,我试着长话短说。
+
+对 MOK([机器所有者密钥][20],Machine Owner Key)的要求是因为安全启动功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于您安装了一个新模块(也就是那个额外的驱动),或者您对内核模块做了修改,您的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
+
+因此,您可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是您做的),或者您也可以简单粗暴地[禁用安全启动][21]。
+
+现在您对[安全启动和 MOK][22]有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
+
+如果您选择“继续启动”,您的系统将有很大概率如往常一样启动,并且您啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
+
+这就是为什么您应该**选择注册 MOK**。
+
+![][23]
+
+它会在下一个页面让您点击“继续”,然后要您输入一串密码。请输入在上一步中,在安装额外驱动时设置的密码。
+
+别担心!
+
+如果您错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”,不必惊慌。您的主要目的是能够成功启动系统,而通过禁用 Nouveau 显卡驱动,您已经成功地实现了这一点。
+
+最坏的情况也不过就是您的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。您可以在之后的任何时间安装 NVIDIA 显卡驱动。您的首要任务是启动系统。
+
+#### 步骤 5:享受安装了专有 NVIDIA 驱动的 Linux 系统
+
+当新驱动被安装好后,您需要再次重启系统。别担心!目前的情况应该已经好起来了,并且您不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。
+
+我希望本教程帮助您解决了 Ubuntu 系统在启动中冻结的问题,并让您能够成功启动 Ubuntu 系统。
+
+如果您有任何问题或建议,请在下方评论区给我留言。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/fix-ubuntu-freezing/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[Raverstern](https://github.com/Raverstern)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://amzn.to/2YVV6rt
+[2]: https://itsfoss.com/affiliate-policy/
+[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
+[4]: https://www.nvidia.com/en-us/
+[5]: https://www.nvidia.com/en-us/geforce/
+[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
+[7]: https://www.ubuntu.com/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
+[9]: https://nouveau.freedesktop.org/wiki/
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
+[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
+[14]: https://itsfoss.com/update-ubuntu/
+[15]: https://itsfoss.com/ubuntu-shortcuts/
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
+[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
+[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
+[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
+[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
+[21]: https://itsfoss.com/disable-secure-boot-in-acer/
+[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
+[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1
diff --git a/translated/tech/20190409 Enhanced security at the edge.md b/translated/tech/20190409 Enhanced security at the edge.md
new file mode 100644
index 0000000000..5062c4b229
--- /dev/null
+++ b/translated/tech/20190409 Enhanced security at the edge.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: (hopefully2333)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Enhanced security at the edge)
+[#]: via: (https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all)
+[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
+
+增强边缘计算的安全性
+======
+边缘计算环境带来的安全风险迫使公司必须特别关注它的安全措施。
+
+说数据安全是高管们和董事会最关注的问题已经是陈词滥调了。但问题是:数据安全问题不会自己消失。
+
+骇客和攻击者一直在寻找利用漏洞的新方法。就像公司开始使用人工智能和机器学习等新兴技术来自动化地保护他们的组织一样,攻击者们也在使用这些技术来达成他们的目的。
+
+简而言之,安全问题是一定不能忽视的。现在,随着越来越多的公司开始使用边缘计算,如何保护这些边缘计算环境,需要有新的安全考量。
+
+**边缘计算的风险更高**
+
+正如 Network World 中一篇文章所建议的,边缘计算的安全架构应该将重点放在物理安全上。这并不是说要忽视保护传输过程中的数据,而是说,实际情况里的物理环境和物理设备更加值得关注。
+
+例如,边缘计算的硬件设备通常位于大公司或者广阔空间中,有时候是在很容易进入的共享办公室和公共区域里。从表面上看,这节省了成本,能更快地访问到相关的数据,而不必在后端的数据中心和前端的设备之间往返。
+
+但是,如果没有任何级别的访问控制,这台设备就会暴露在恶意操作和简单人为错误的双重风险之下。想象一下办公室的清洁工意外地关掉了设备,以及随之而来的停机所造成的后果。
+
+另一个风险是"影子边缘 IT"(shadow edge IT)。有时候非 IT 的工作人员会部署一个边缘站点来快速启动项目,却没有及时通知 IT 部门这个站点正在连接到网络。例如,零售商店可能会主动安装他们自己的数字标牌,或者,销售团队会将物联网传感器应用到电视中,并在销售演示中实时地部署它们。
+
+在这种情况下,IT 部门很少甚至完全看不到这些设备和边缘站点,这就使得网络可能暴露在外。
+
+**保护边缘计算环境**
+
+部署微型数据中心(MDC)是规避上述风险的一个简单方法。
+
+"在历史上,大多数这些[边缘]环境都是不受控制的,"施耐德电气安全能源部门的首席技术官和创新高级副总裁 Kevin Brown 说,"它们可能是第一级,但很可能是第 0 级类型的设计——它们就像开放的配线柜。它们现在需要被当作微型数据中心来对待,你管理它需要像管理关键任务数据中心一样。"
+
+顾名思义,这个解决方案是一个安全、独立的机箱,它包含在室内和室外环境中运行应用程序所需的所有存储、处理和网络资源,同样也包含必要的电源、冷却、安全和管理工具。
+
+最重要的部分是高级别的安全性。这个装置是封闭的,有上锁的门,以防止非法入侵。通过合适的供应商,MDC 可以进行定制,包括用于远程数字化管理的监控摄像头、传感器和监控技术。
+
+随着越来越多的公司开始利用边缘计算的优势,他们必须利用安全解决方案的优势来保护他们的数据和边缘环境。
+
+在 APC.com 上了解保护你的边缘计算环境的最佳方案。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all
+
+作者:[Anne Taylor][a]
+选题:[lujun9972][b]
+译者:[hopefully2333](https://github.com/hopefully2333)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Anne-Taylor/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-1091707448-100793312-large.jpg
+[2]: https://www.csoonline.com/article/3250144/6-ways-hackers-will-use-machine-learning-to-launch-attacks.html
+[3]: https://www.marketwatch.com/press-release/edge-computing-market-2018-global-analysis-opportunities-and-forecast-to-2023-2018-08-20
+[4]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[5]: https://www.youtube.com/watch?v=1NLk1cXEukQ
+[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md b/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
new file mode 100644
index 0000000000..23c3a51c9b
--- /dev/null
+++ b/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
@@ -0,0 +1,350 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( NeverKnowsTomorrow )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Four Methods To Add A User To Group In Linux)
+[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+在 Linux 中添加用户到组的四个方法
+======
+
+Linux 组是用于管理 Linux 中用户帐户的组织单位。
+
+Linux 系统中的每一个用户和组都有一个唯一的数字标识号,
+
+分别称为用户 ID(UID)和组 ID(GID)。组的主要目的是为组的成员定义一组特权。
+
+组的成员可以执行特定的操作,但不能执行其他操作。
+
+Linux 中有两种类型的默认组。每个用户应该只有一个主要组(primary group),并可以有任意数量的次要组(secondary group)。
+
+ * **主要组:** 创建用户帐户时,主要组会被自动分配给用户,它通常与用户同名。在执行诸如创建新文件(或目录)、修改文件或执行命令等任何操作时,主要组将应用于用户。用户的主要组信息存储在 `/etc/passwd` 文件中。
+ * **次要组:** 也称为附加组。它允许一组用户对属于该组的文件执行特定的操作。
+
+例如,如果你希望允许少数用户运行 apache(httpd)服务命令,那么它将非常适合。
+
+你可能对以下与用户管理相关的文章感兴趣。
+
+ * [在 Linux 中创建用户帐户的三种方法?][1]
+ * [如何在 Linux 中创建批量用户?][2]
+ * [如何在 Linux 中使用不同的方法更新/更改用户密码?][3]
+
+可以使用以下四种方法实现。
+
+ * **`usermod:`** usermod 命令修改系统帐户文件,以反映在命令行中指定的更改。
+ * **`gpasswd:`** gpasswd 命令用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
+ * **`shell 脚本:`** shell 脚本允许管理员自动执行所需的任务。
+ * **`手动方法:`** 我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
+
+我假设你已经拥有此操作所需的组和用户。在本例中,我们将使用以下用户和组:用户为 `user1`、`user2`、`user3`,组为 `mygroup` 和 `mygroup1`。
+
+在进行更改之前,我想检查用户和组信息。详见下文。
+
+我可以看到下面的用户与他们自己的组关联,而不是与其他组关联。
+
+```
+# id user1
+uid=1008(user1) gid=1008(user1) groups=1008(user1)
+
+# id user2
+uid=1009(user2) gid=1009(user2) groups=1009(user2)
+
+# id user3
+uid=1010(user3) gid=1010(user3) groups=1010(user3)
+```
+
+我可以看到这个组中没有关联的用户。
+
+```
+# getent group mygroup
+mygroup:x:1012:
+
+# getent group mygroup1
+mygroup1:x:1013:
+```
+
+### 方法 1:什么是 usermod 命令?
+
+usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
+
+### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
+
+要将现有用户添加到次要组(附加组),请使用带有 `-a -G` 选项和组名称的 usermod 命令。
+
+语法
+
+```
+# usermod -a -G [GroupName] [UserName]
+```
+
+如果系统中不存在给定的用户或组,你将收到一条错误消息。如果没有得到任何错误,那么用户已经被添加到相应的组中。
+
+```
+# usermod -a -G mygroup user1
+```
+
+让我使用 id 命令查看输出。是的,添加成功。
+
+```
+# id user1
+uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
+```
+
+### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
+
+要将现有用户添加到多个次要组中,请使用带有 `-a -G` 选项的 usermod 命令,组名之间用逗号分隔。
+
+语法
+
+```
+# usermod -a -G [GroupName1,GroupName2] [UserName]
+```
+
+在本例中,我们将把 `user2` 添加到 `mygroup` 和 `mygroup1` 中。
+
+```
+# usermod -a -G mygroup,mygroup1 user2
+```
+
+让我使用 `id` 命令查看输出。是的,`user2` 已成功添加到 `mygroup` 和 `mygroup1` 中。
+
+```
+# id user2
+uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
+```
+
+### 如何改变用户的主要组?
+
+要更改用户的主要组,请使用带有 `-g` 选项和组名称的 usermod 命令。
+
+语法
+
+```
+# usermod [-g] [GroupName] [UserName]
+```
+
+我们必须使用 `-g` 改变用户的主要组。
+
+```
+# usermod -g mygroup user3
+```
+
+让我们看看输出。是的,已成功更改。现在,user3 的主要组显示为 mygroup,而不是原来的 user3。
+
+```
+# id user3
+uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
+```
+
+### 方法 2:什么是 gpasswd 命令?
+
+`gpasswd` 命令用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
+
+### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
+
+要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
+
+语法
+
+```
+# gpasswd [-M] [UserName] [GroupName]
+```
+
+在本例中,我们将把 `user1` 添加到 `mygroup` 中。
+
+```
+# gpasswd -M user1 mygroup
+```
+
+让我使用 id 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
+
+```
+# id user1
+uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
+```
+
+### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
+
+要将多个用户添加到次要组中,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
+
+语法
+
+```
+# gpasswd [-M] [UserName1,UserName2] [GroupName]
+```
+
+在本例中,我们将把 `user2` 和 `user3` 添加到 `mygroup1` 中。
+
+```
+# gpasswd -M user2,user3 mygroup1
+```
+
+让我使用 getent 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
+
+```
+# getent group mygroup1
+mygroup1:x:1013:user2,user3
+```
+
+### 如何使用 gpasswd 命令从组中删除一个用户?
+
+要从组中删除用户,请使用带有 `-d` 选项的 gpasswd 命令以及用户和组的名称。
+
+语法
+
+```
+# gpasswd [-d] [UserName] [GroupName]
+```
+
+在本例中,我们将从 `mygroup` 中删除 `user1` 。
+
+```
+# gpasswd -d user1 mygroup
+Removing user user1 from group mygroup
+```
+
+### 方法 3:使用 Shell 脚本
+
+从上面的例子可以看出,`usermod` 命令无法一次将多个用户添加到组中,但这可以通过 `gpasswd` 命令完成。
+
+但是,`gpasswd` 的 `-M` 选项会覆盖当前与组关联的现有用户列表。
+
+例如,`user1` 已经与 `mygroup` 关联。如果使用 `gpasswd -M` 命令将 `user2` 和 `user3` 添加到 `mygroup` 中,它不会像预期那样追加成员,而是用新列表替换原有成员,如下面的演示所示。
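+
+下面用一个简单的演示说明这种覆盖行为(输出仅为示意):
+
+```
+# getent group mygroup
+mygroup:x:1012:user1
+
+# gpasswd -M user2,user3 mygroup
+
+# getent group mygroup
+mygroup:x:1012:user2,user3
+```
+
+可以看到,原有的成员 `user1` 被从组中移除了。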
+
+如果要将多个用户添加到多个组中,解决方案是什么?
+
+两个命令中都没有默认选项来实现这一点。
+
+因此,我们需要编写一个小的 shell 脚本来实现这一点。
+
+### 如何使用 gpasswd 命令将多个用户添加到次要组或附加组?
+
+如果要使用 gpasswd 命令将多个用户添加到次要组或附加组,请创建以下小的 shell 脚本。
+
+创建用户列表。每个用户应该在单独的行中。
+
+```bash
+$ cat user-lists.txt
+user1
+user2
+user3
+```
+
+使用以下 shell 脚本将多个用户添加到单个次要组。
+
+```bash
+vi group-update.sh
+
+#!/bin/bash
+# 逐行读取用户列表,将每个用户追加到 mygroup 组
+for user in `cat user-lists.txt`
+do
+    usermod -a -G mygroup $user
+done
+```
+
+设置 `group-update.sh` 文件的可执行权限。
+
+```
+# chmod +x group-update.sh
+```
+
+最后运行脚本来实现它。
+
+```
+# sh group-update.sh
+```
+
+让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
+
+```
+# getent group mygroup
+mygroup:x:1012:user1,user2,user3
+```
+
+### 如何使用 gpasswd 命令将多个用户添加到多个次要组或附加组?
+
+如果要使用 gpasswd 命令将多个用户添加到多个次要组或附加组中,请创建以下小的 shell 脚本。
+
+创建用户列表。每个用户应该在单独的行中。
+
+```bash
+$ cat user-lists.txt
+user1
+user2
+user3
+```
+
+创建组列表。每组应在单独的行中。
+
+```bash
+$ cat group-lists.txt
+mygroup
+mygroup1
+```
+
+使用以下 shell 脚本将多个用户添加到多个次要组。
+
+```bash
+vi group-update-1.sh
+
+#!/bin/sh
+# 对用户列表和组列表做双重循环,把每个用户追加到每个组
+for user in `cat user-lists.txt`
+do
+    for group in `cat group-lists.txt`
+    do
+        usermod -a -G $group $user
+    done
+done
+```
+
+设置 `group-update-1.sh` 文件的可执行权限。
+
+```
+# chmod +x group-update-1.sh
+```
+
+最后运行脚本来实现它。
+
+```
+# sh group-update-1.sh
+```
+
+让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
+
+```
+# getent group mygroup
+mygroup:x:1012:user1,user2,user3
+```
+
+此外,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
+
+```
+# getent group mygroup1
+mygroup1:x:1013:user1,user2,user3
+```
+
+### 方法 4:在 Linux 中将用户添加到组中的手动方法?
+
+我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
+
+打开 `/etc/group` 文件,搜索你要添加用户的组名,然后将用户添加到相应的行中。
+
+```
+# vi /etc/group
+```
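+
+`/etc/group` 中每一行的格式为 `组名:密码占位符:GID:成员列表`,成员之间用逗号分隔。例如,要手动把 `user1`、`user2` 和 `user3` 加入 `mygroup`,可以把对应的行改成如下形式(GID 仅为示意):
+
+```
+mygroup:x:1012:user1,user2,user3
+```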
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[NeverKnowsTomorrow](https://github.com/NeverKnowsTomorrow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-user-account-creation-useradd-adduser-newusers/
+[2]: https://www.2daygeek.com/how-to-create-the-bulk-users-in-linux/
+[3]: https://www.2daygeek.com/linux-passwd-chpasswd-command-set-update-change-users-password-in-linux-using-shell-script/
diff --git a/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md b/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
new file mode 100644
index 0000000000..fe10d72bd7
--- /dev/null
+++ b/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
@@ -0,0 +1,302 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Install And Enable Flatpak Support On Linux?)
+[#]: via: (https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+如何在 Linux 上安装并启用 Flatpak 支持?
+======
+
+
+
+目前,我们都在使用 Linux 发行版的官方软件包管理器来安装所需的软件包。
+
+在 Linux 中,它很好地完成了它应该做的工作,没有任何问题。
+
+但它在某些方面也有一些限制,这让我们不得不考虑其他的替代解决方案。
+
+是的,默认情况下,我们不会从发行版官方软件包管理器获取最新版本的软件包,因为这些软件包是在构建当前 OS 版本时构建的。它们只会提供安全更新,直到下一个主要版本发布。
+
+那么,这种情况有什么解决办法吗?
+
+是的,我们有多种解决方案,而且我们大多数人已经开始使用其中的一些了。
+
+有哪些解决方案呢?它们有什么好处?
+
+ * **对于基于 Ubuntu 的系统:** PPAs
+ * **对于基于 RHEL 的系统:** [EPEL Repository][1]、[ELRepo Repository][2]、[nux-dextop Repository][3]、[IUS Community Repo][4]、[RPMfusion Repository][5] 和 [Remi Repository][6]
+
+
+使用上面的仓库,我们将获得最新的软件包。这些软件包通常维护得很好,也得到了大多数社区的推荐。但使用时应当谨慎,因为它们对操作系统来说可能并不安全。
+
+近年来,出现了一些通用软件包封装格式,并且得到了广泛的应用。
+
+ * **`Flatpak:`** 它是独立于发行版的包格式,主要贡献者是 Fedora 项目团队。大多数主要的 Linux 发行版都采用了 Flatpak 框架。
+ * **`Snaps:`** Snappy 是一种通用的软件包封装格式,最初由 Canonical 为 Ubuntu 手机及其操作系统设计和构建。后来,大多数发行版都采用了它。
+ * **`AppImage:`** AppImage 是一种可移植的包格式,可以在不安装或不需要 root 权限的情况下运行。
+
+我们之前已经介绍过 **[Snap 包管理器和包封装格式][7]**。今天我们将讨论 Flatpak 包封装格式。
+
+### 什么是 Flatpak?
+
+Flatpak(以前称为 X Desktop Group 或 xdg-app)是一个软件实用程序。它提供了一种通用的包封装格式,可以在任何 Linux 发行版中使用。
+
+它提供了一个沙箱(隔离的)环境来运行应用程序,不会影响其他应用程序和发行版核心软件包。我们还可以安装并运行不同版本的软件包。
+
+Flatpak 的一个缺点是不像 Snap 和 AppImage 那样支持服务器操作系统,它只在少数桌面环境下工作。
+
+比如说,如果你想在系统上运行两个版本的 PHP,那么你可以轻松安装并按照你的意愿运行。
+
+这正是如今通用软件包封装格式声名鹊起的原因。
+
+### 如何在 Linux 中安装 Flatpak?
+
+大多数 Linux 发行版官方仓库都提供 Flatpak 软件包。因此,可以使用它们来进行安装。
+
+对于 **`Fedora`** 系统,使用 **[DNF 命令][8]** 来安装 flatpak。
+
+```
+$ sudo dnf install flatpak
+```
+
+对于 **`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][9]** 或 **[APT 命令][10]** 来安装 flatpak。
+
+```
+$ sudo apt install flatpak
+```
+
+对于较旧的 Ubuntu 版本:
+
+```
+$ sudo add-apt-repository ppa:alexlarsson/flatpak
+$ sudo apt update
+$ sudo apt install flatpak
+```
+
+对于基于 **`Arch Linux`** 的系统,使用 **[Pacman 命令][11]** 来安装 flatpak。
+
+```
+$ sudo pacman -S flatpak
+```
+
+对于 **`RHEL/CentOS`** 系统,使用 **[YUM 命令][12]** 来安装 flatpak。
+
+```
+$ sudo yum install flatpak
+```
+
+对于 **`openSUSE Leap`** 系统,使用 **[Zypper 命令][13]** 来安装 flatpak。
+
+```
+$ sudo zypper install flatpak
+```
+
+### 如何在 Linux 上启用 Flathub 支持?
+
+Flathub 网站是一个应用程序商店,你可以在其中找到各种 flatpak 应用程序。
+
+它是一个中央仓库,所有的 flatpak 应用程序都可供用户使用。
+
+运行以下命令在 Linux 上启用 Flathub 支持:
+
+```
+$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+```
+
+为 GNOME 桌面环境的"软件"(GNOME Software)应用安装 Flatpak 插件:
+
+```
+$ sudo apt install gnome-software-plugin-flatpak
+```
+
+此外,如果你使用的是 GNOME 桌面环境,则可以启用 GNOME 仓库。它包含所有 GNOME 核心应用程序。
+
+```
+$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
+$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/
+```
+
+### 如何列出已配置的 flatpak 仓库?
+
+如果要查看系统上已配置的 flatpak 仓库列表,运行以下命令:
+
+```
+$ flatpak remotes
+Name Options
+flathub system
+gnome-apps system
+```
+
+### 如何列出已配置仓库中的可用软件包?
+
+如果要查看已配置仓库中的可用软件包的列表(它将显示所有软件包,如应用程序和运行环境),运行以下命令:
+
+```
+$ flatpak remote-ls | head -10
+
+org.freedesktop.GlxInfo gnome-apps
+org.gnome.Books gnome-apps
+org.gnome.Builder gnome-apps
+org.gnome.Calculator gnome-apps
+org.gnome.Calendar gnome-apps
+org.gnome.Characters gnome-apps
+org.gnome.Devhelp gnome-apps
+org.gnome.Dictionary gnome-apps
+org.gnome.Documents gnome-apps
+org.gnome.Epiphany gnome-apps
+```
+
+仅列出应用程序:
+
+```
+$ flatpak remote-ls --app
+```
+
+列出特定的仓库应用程序:
+
+```
+$ flatpak remote-ls gnome-apps
+```
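+
+如果你的 flatpak 版本较新,还可以尝试用 `search` 子命令直接按关键字在已配置的仓库中搜索应用(这是基于较新版本的一个假设示例;如果该子命令不可用,可以用 `remote-ls` 配合 `grep` 达到类似效果):
+
+```
+$ flatpak search easyssh
+
+$ flatpak remote-ls --app | grep -i easyssh
+```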
+
+### 如何从 flatpak 安装包?
+
+运行以下命令从 flatpak 仓库安装软件包:
+
+```
+$ sudo flatpak install flathub com.github.muriloventuroso.easyssh
+
+Required runtime for com.github.muriloventuroso.easyssh/x86_64/stable (runtime/org.gnome.Platform/x86_64/3.30) found in remote flathub
+Do you want to install it? [y/n]: y
+Installing in system:
+org.gnome.Platform/x86_64/3.30 flathub 4e93789f42ac
+org.gnome.Platform.Locale/x86_64/3.30 flathub 6abf9c0e2b72
+org.freedesktop.Platform.html5-codecs/x86_64/18.08 flathub d6abde36c0be
+com.github.muriloventuroso.easyssh/x86_64/stable flathub 337db43043d2
+ permissions: ipc, network, wayland, x11, dri
+ file access: home, xdg-run/dconf, ~/.config/dconf:ro
+ dbus access: ca.desrt.dconf
+com.github.muriloventuroso.easyssh.Locale/x86_64/stable flathub af837356b222
+Is this ok [y/n]: y
+Installing: org.gnome.Platform/x86_64/3.30 from flathub
+[####################] 1 metadata, 14908 content objects fetched; 228018 KiB transferred in 364 seconds
+Now at 4e93789f42ac.
+Installing: org.gnome.Platform.Locale/x86_64/3.30 from flathub
+[####################] 4 metadata, 1 content objects fetched; 16 KiB transferred in 2 seconds
+Now at 6abf9c0e2b72.
+Installing: org.freedesktop.Platform.html5-codecs/x86_64/18.08 from flathub
+[####################] 26 metadata, 131 content objects fetched; 2737 KiB transferred in 8 seconds
+Now at d6abde36c0be.
+Installing: com.github.muriloventuroso.easyssh/x86_64/stable from flathub
+[####################] 191 metadata, 3633 content objects fetched; 24857 KiB transferred in 117 seconds
+Now at 337db43043d2.
+Installing: com.github.muriloventuroso.easyssh.Locale/x86_64/stable from flathub
+[####################] 3 metadata, 1 content objects fetched; 14 KiB transferred in 2 seconds
+Now at af837356b222.
+```
+
+所有已安装的应用程序都将放在以下位置:
+
+```
+$ ls /var/lib/flatpak/app/
+com.github.muriloventuroso.easyssh
+```
+
+### 如何运行已安装的应用程序?
+
+运行以下命令以启动所需的应用程序,确保替换为你的应用程序名称:
+
+```
+$ flatpak run com.github.muriloventuroso.easyssh
+```
+
+### 如何查看已安装的应用程序?
+
+运行以下命令来查看已安装的应用程序:
+
+```
+$ flatpak list
+Ref Options
+com.github.muriloventuroso.easyssh/x86_64/stable system,current
+org.freedesktop.Platform.html5-codecs/x86_64/18.08 system,runtime
+org.gnome.Platform/x86_64/3.30 system,runtime
+```
+
+### 如何查看有关已安装应用程序的详细信息?
+
+运行以下命令以查看有关已安装应用程序的详细信息。
+
+```
+$ flatpak info com.github.muriloventuroso.easyssh
+
+Ref: app/com.github.muriloventuroso.easyssh/x86_64/stable
+ID: com.github.muriloventuroso.easyssh
+Arch: x86_64
+Branch: stable
+Origin: flathub
+Collection ID: org.flathub.Stable
+Date: 2019-01-08 13:36:32 +0000
+Subject: Update com.github.muriloventuroso.easyssh.json (cd35819c)
+Commit: 337db43043d282c74d14a9caecdc780464b5e526b4626215d534d38b0935049f
+Parent: 6e49096146f675db6ecc0ce7c5347b4b4f049b21d83a6cc4d01ff3f27c707cb6
+Location: /var/lib/flatpak/app/com.github.muriloventuroso.easyssh/x86_64/stable/337db43043d282c74d14a9caecdc780464b5e526b4626215d534d38b0935049f
+Installed size: 100.0 MB
+Runtime: org.gnome.Platform/x86_64/3.30
+Sdk: org.gnome.Sdk/x86_64/3.30
+```
+
+### 如何更新已安装的应用程序?
+
+运行以下命令将已安装的应用程序更新到最新版本:
+
+```
+$ flatpak update
+```
+
+对于特定应用程序,使用以下格式:
+
+```
+$ flatpak update com.github.muriloventuroso.easyssh
+```
+
+### 如何移除已安装的应用程序?
+
+运行以下命令来移除已安装的应用程序:
+
+```
+$ sudo flatpak uninstall com.github.muriloventuroso.easyssh
+```
+
+查看 man 手册页以获取更多细节和选项:
+
+```
+$ flatpak --help
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
+[2]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/
+[3]: https://www.2daygeek.com/install-enable-nux-dextop-repository-on-centos-rhel-scientific-linux/
+[4]: https://www.2daygeek.com/install-enable-ius-community-repository-on-rhel-centos/
+[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/
+[6]: https://www.2daygeek.com/install-enable-remi-repository-on-centos-rhel-fedora/
+[7]: https://www.2daygeek.com/linux-snap-package-manager-ubuntu/
+[8]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[9]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[10]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[11]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[12]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
diff --git a/translated/tech/20190410 Managing Partitions with sgdisk.md b/translated/tech/20190410 Managing Partitions with sgdisk.md
new file mode 100644
index 0000000000..19f2752245
--- /dev/null
+++ b/translated/tech/20190410 Managing Partitions with sgdisk.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Managing Partitions with sgdisk)
+[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+使用 sgdisk 管理分区
+======
+
+![][1]
+
+[Roderick W. Smith][2] 的 _sgdisk_ 命令可在命令行中管理硬盘的分区。下面将介绍使用它所需的基础知识。
+
+以下六个参数是你使用 sgdisk 大多数基本功能所需了解的:
+
+ 1. **-p**:_打印_分区表:`sgdisk -p /dev/sda`
+ 2. **-d x**:_删除_分区 x:`sgdisk -d 1 /dev/sda`
+ 3. **-n x:y:z**:创建一个编号为 x 的_新_分区,从 y 开始,到 z 结束:`sgdisk -n 1:1MiB:2MiB /dev/sda`
+ 4. **-c x:y**:将分区 x 的名称_更改_为 y:`sgdisk -c 1:grub /dev/sda`
+ 5. **-t x:y**:将分区 x 的_类型_更改为 y:`sgdisk -t 1:ef02 /dev/sda`
+ 6. **--list-types**:列出分区类型代码:`sgdisk --list-types`
+
+
+
+![The SGDisk Command][3]
+
+如你在上面的例子中所见,大多数命令都要求将要操作的硬盘的[设备文件名][4]指定为最后一个参数。
+
+可以组合上面的参数,这样你可以一次定义所有分区:
+
+```
+# sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda
+```
+
+在值的前面加上 **+** 或 **-** 符号,可以为某些字段指定相对值。如果你使用相对值,sgdisk 会为你做数学运算。例如,上面的例子可以写成:
+
+```
+# sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda
+```
+
+**0** 值对于以下几个字段是特殊情况:
+
+ * 对于_分区号_字段,0 表示应使用下一个可用编号(编号从 1 开始)。
+ * 对于_起始地址_字段,0 表示使用最大可用空闲块的开头。硬盘开头的一些空间始终保留给分区表本身。
+ * 对于_结束地址_字段,0 表示使用最大可用空闲块的末尾。
+
+
+
+通过在适当的字段中使用 **0** 和相对值,你可以创建一系列分区,而无需预先计算任何绝对值。例如,如果在一块空白硬盘中,以下 sgdisk 命令序列将创建典型 Linux 安装所需的所有基本分区:
+
+```
+# sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
+# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
+# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
+# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
+```
+
+上面的例子展示了如何为基于 BIOS 的计算机分区硬盘。基于 UEFI 的计算机上不需要 [grub 分区][5]。由于上面的示例使用的都是相对值,sgdisk 会为你完成所有计算,因此在基于 UEFI 的计算机上你可以直接跳过第一个命令,并且无需修改即可运行其余命令。同样,你也可以跳过创建交换分区,其余命令同样不需要修改。
+
+还有使用一个命令删除硬盘上所有分区的快捷方式:
+
+```
+# sgdisk --zap-all /dev/sda
+```
+
+关于最新的详细信息,请查看手册页:
+
+```
+$ man sgdisk
+```
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/managing-partitions-with-sgdisk/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
+[2]: https://www.rodsbooks.com/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
+[4]: https://en.wikipedia.org/wiki/Device_file
+[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition
diff --git a/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md b/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
new file mode 100644
index 0000000000..49857445e1
--- /dev/null
+++ b/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Zip Files and Folders in Linux [Beginner Tip])
+[#]: via: (https://itsfoss.com/linux-zip-folder/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Linux 中 zip 压缩文件和文件夹(初学者提示)
+======
+
+_ **简介:本文向你展示了如何在 Ubuntu 和其他 Linux 发行版中创建一个 zip 文件夹。终端和 GUI 方法都有。** _
+
+zip 是最流行的归档文件格式之一。使用 zip,你可以将多个文件压缩到一个文件中。这不仅节省了磁盘空间,还节省了网络带宽。这就是为什么你几乎一直会看到 zip 文件的原因。
+
+作为普通用户,大多数情况下你会在 Linux 中解压缩文件夹。但是如何在 Linux 中压缩文件夹?本文可以帮助你回答这个问题。
+
+**先决条件:验证是否安装了 zip**
+
+通常 [zip][1] 已经安装,但验证下也没坏处。你可以运行以下命令来安装 zip 和 unzip。如果它尚未安装,它将立即安装。
+
+```
+sudo apt install zip unzip
+```
+
+现在你知道你的系统有 zip 支持,你可以继续了解如何在 Linux 中压缩一个目录。
+
+![][2]
+
+### 在 Linux 命令行中压缩文件夹
+
+zip 命令的语法非常简单。
+
+```
+zip [option] output_file_name input1 input2
+```
+
+虽然有几个选项,但我不希望你将它们混淆。如果你只想要将一堆文件变成一个 zip 文件夹,请使用如下命令:
+
+```
+zip -r output_file.zip file1 folder1
+```
+
+`-r` 选项会递归进入目录并压缩其内容。输出文件名中的 `.zip` 扩展名是可选的,因为默认情况下会自动添加 `.zip`。
+
+你应该会在 zip 操作期间看到要添加到压缩文件夹中的文件。
+
+```
+zip -r myzip abhi-1.txt abhi-2.txt sample_directory
+ adding: abhi-1.txt (stored 0%)
+ adding: abhi-2.txt (stored 0%)
+ adding: sample_directory/ (stored 0%)
+ adding: sample_directory/newfile.txt (stored 0%)
+ adding: sample_directory/agatha.txt (deflated 41%)
+```
+
+你可以使用 `-e` 选项[在 Linux 中创建密码保护的 zip 文件夹][3],如下面的示例所示。
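+
+下面是一个简单的示意(文件名仅为演示用的假设),`-e` 选项会在压缩时提示你输入并确认密码:
+
+```
+zip -re secured.zip file1 folder1
+```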
+
+你并不是只能通过终端创建 zip 归档文件。你也可以用图形方式做到这一点。下面是如何做的!
+
+### 在 Ubuntu Linux 中使用 GUI 压缩文件夹
+
+_虽然我在这里使用 Ubuntu,但在使用 GNOME 或其他桌面环境的其他发行版中,方法应该基本相同。_
+
+如果要在 Linux 桌面中压缩文件或文件夹,只需点击几下即可。
+
+进入到你想将文件(和文件夹)压缩到一个 zip 文件夹的所在文件夹。
+
+在这里,选择文件和文件夹。现在,右键单击并选择“压缩”。你也可以对单个文件执行相同操作。
+
+![Select the files, right click and click compress][4]
+
+现在,你可以使用 zip、tar xz 或 7z 格式创建压缩归档文件。如果你好奇,这三个都是各种压缩算法,你可以使用它们来压缩文件。
+
+输入一个你想要的名字,并点击"创建"。
+
+![Create archive file][5]
+
+这不会花很长时间,你会在同一目录中看到一个归档文件。
+
+![][6]
+
+好了,就是这些。你已经成功地在 Linux 中创建了一个 zip 文件夹。
+
+我希望这篇文章能帮助你了解 zip 文件。请随时分享你的建议。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-zip-folder/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-folder-linux.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/password-protect-zip-file/
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-file-ubuntu.jpg?resize=800%2C428&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-folder-ubuntu-1.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-file-created-in-ubuntu.png?resize=800%2C277&ssl=1
diff --git a/translated/tech/20190415 How to identify duplicate files on Linux.md b/translated/tech/20190415 How to identify duplicate files on Linux.md
new file mode 100644
index 0000000000..033c3d85a1
--- /dev/null
+++ b/translated/tech/20190415 How to identify duplicate files on Linux.md
@@ -0,0 +1,124 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to identify duplicate files on Linux)
+[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何识别 Linux 上的重复文件
+======
+Linux 系统上的一些文件可能出现在多个位置。按照本文指示查找并识别这些“同卵双胞胎”,还可以了解为什么硬链接会如此有利。
+![Archana Jarajapu \(CC BY 2.0\)][1]
+
+识别共享磁盘空间的文件,依赖于利用这些文件共享同一个 `inode` 这一事实。`inode` 这种数据结构存储着文件除了文件名和内容之外的所有信息。
+
+这些文件通常被称为"硬链接",它们不像符号链接(即软链接)那样仅仅通过名称指向其他文件。符号链接很容易在文件列表中识别出来:权限字段第一个位置的 "l" 以及指向被引用文件的 **->** 符号。
+
+```
+$ ls -l my*
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
+lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
+```
+
+单个目录中的硬链接乍看并不明显,但识别它们仍然非常容易。如果使用 **ls -i** 命令列出文件并按 `inode` 编号排序,则可以非常容易地挑选出硬链接。在这种类型的 `ls` 输出中,第一列显示的就是 `inode` 编号。
+
+```
+$ ls -i | sort -n | more
+ ...
+ 788000 myfile <==
+ 788000 mytwin <==
+ 801865 Name_Labels.pdf
+ 786692 never leave home angry
+ 920242 NFCU_Docs
+ 800247 nmap-notes
+```
+
+扫描输出,查找相同的 `inode` 编号,任何匹配都会告诉你想知道的内容。
+
+**[另请参考:[Linux 疑难解答的宝贵提示和技巧][2]]**
+
+另一方面,如果你只是想知道某个特定文件是否是另一个文件的硬链接,那么 `find` 命令的 **-samefile** 选项可以帮你完成这项工作,这比浏览数百个文件的列表简单得多。
+
+```
+$ find . -samefile myfile
+./myfile
+./save/mycopy
+./mytwin
+```
+
+注意,提供给 `find` 命令的起始位置决定文件系统会扫描多少来进行匹配。在上面的示例中,我们正在查看当前目录和子目录。
+
+使用 find 的 **-ls** 选项可以让输出包含更多细节,可能更有说服力:
+
+```
+$ find . -samefile myfile -ls
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
+```
+
+第一列显示 `inode` 编号,然后我们会看到文件权限、链接、所有者、文件大小、日期信息以及引用相同磁盘内容的文件的名称。注意,在这种情况下,`link` 字段是 “4” 而不是我们可能期望的 “3”。这告诉我们还有另一个指向同一个 `inode` 的链接(但不在我们的搜索范围内)。
+
+如果你想在一个目录中查找所有硬链接的实例,可以尝试以下的脚本来创建列表并为你查找副本:
+
+```
+#!/bin/bash
+
+# searches for files sharing inodes
+
+prev=""
+
+# list files by inode
+ls -i | sort -n > /tmp/$0
+
+# search through file for duplicate inode #s
+while read line
+do
+ inode=`echo $line | awk '{print $1}'`
+ if [ "$inode" == "$prev" ]; then
+ grep $inode /tmp/$0
+ fi
+ prev=$inode
+done < /tmp/$0
+
+# clean up
+rm /tmp/$0
+
+$ ./findHardLinks
+ 788000 myfile
+ 788000 mytwin
+```
+
+你还可以使用 `find` 命令按 `inode` 编号查找文件,如下面的命令所示。但是,此搜索可能涉及多个文件系统,因此可能会得到错误的结果:相同的 `inode` 编号可能在另一个文件系统中被另一个文件使用。如果是这种情况,文件的其他详细信息将不会相同。
+
+```
+$ find / -inum 788000 -ls 2> /dev/null
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
+```
+
+注意,错误输出被重定向到 `/dev/null`,这样我们就不必查看那些由我们无权查看的目录产生的 "Permission denied" 错误。
+
+此外,扫描包含相同内容但不共享 `inode` 的文件(即简单的文件拷贝)将花费更多的时间和精力。
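+
+如果确实需要按内容查找这类拷贝,一个简单的思路是按校验和分组。下面是一个示意性的命令(假设当前目录下都是普通文件):`md5sum` 会为每个文件计算出 32 个字符的校验和,`sort` 把相同内容的文件排到一起,`uniq -w32 --all-repeated=separate` 只比较每行的前 32 个字符,并把每组重复项用空行分隔打印出来:
+
+```
+md5sum * | sort | uniq -w32 --all-repeated=separate
+```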
+
+加入 [Facebook][3] 和 [LinkedIn][4] 上的网络世界社区,对重要的话题发表评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
+[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
new file mode 100644
index 0000000000..a05948c9af
--- /dev/null
+++ b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -0,0 +1,309 @@
+[#]: collector: (lujun9972)
+[#]: translator: (zgj1024)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
+[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+HTTPie – 替代 Curl 和 Wget 的现代 HTTP 命令行客户端
+======
+
+大多数时候,我们会使用 curl 命令或 wget 命令来下载文件或做其他事情。
+
+我们以前曾写过 **[最佳命令行下载管理器][1]** 的文章。你可以点击相应的 URL 链接来浏览这些文章。
+
+ * **[aria2 – Linux 下的多协议命令行下载工具][2]**
+ * **[Axel – Linux 下的轻量级命令行下载加速器][3]**
+ * **[Wget – Linux 下的标准命令行下载工具][4]**
+ * **[curl – Linux 下的实用的命令行下载工具][5]**
+
+
+今天我们将讨论同样的话题,这个实用程序名为 HTTPie。
+
+它是一个现代命令行 HTTP 客户端,也是 curl 和 wget 命令的最佳替代品。
+
+### 什么是 HTTPie?
+
+HTTPie(发音是 aitch-tee-tee-pie)是一个命令行 HTTP 客户端。
+
+httpie 工具是一个现代命令行 HTTP 客户端,它能让你通过命令行界面与 Web 服务进行交互。
+
+它提供了一个简单的 `http` 命令,允许使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
+
+HTTPie 可用于测试、调试(debugging)以及与 HTTP 服务器交互。
+
+### 主要特点
+
+ * 富有表达力的、直观的语法
+ * 格式化的及彩色化的终端输出
+ * 内置 JSON 支持
+ * 表单和文件上传
+ * HTTPS, 代理, 和认证
+ * 任意请求数据
+ * 自定义头部
+ * 持久化会话(sessions)
+ * 类似 wget 的下载
+ * 支持 Python 2.7 和 3.x
+
+### 在 Linux 下如何安装 HTTPie
+
+大部分 Linux 发行版都提供了系统包管理器,可以用它来安装。
+
+**`Fedora`** 系统,使用 **[DNF 命令][6]** 来安装 httpie
+
+```
+$ sudo dnf install httpie
+```
+
+**`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][7]** 或 **[APT 命令][8]** 来安装 httpie。
+
+```
+$ sudo apt install httpie
+```
+
+基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][9]** 来安装 httpie。
+
+```
+$ sudo pacman -S httpie
+```
+
+**`RHEL/CentOS`** 的系统, 使用 **[YUM 命令][10]** 来安装 httpie。
+
+```
+$ sudo yum install httpie
+```
+
+**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][11]** 来安装 httpie。
+
+```
+$ sudo zypper install httpie
+```
+
+### 1) 如何使用 HTTPie 请求 URL?
+
+httpie 的基本用法是将网站的 URL 作为参数。
+
+```
+# http 2daygeek.com
+HTTP/1.1 301 Moved Permanently
+CF-RAY: 4c4a618d0c02ce6d-LHR
+Cache-Control: max-age=3600
+Connection: keep-alive
+Date: Tue, 09 Apr 2019 06:21:28 GMT
+Expires: Tue, 09 Apr 2019 07:21:28 GMT
+Location: https://2daygeek.com/
+Server: cloudflare
+Transfer-Encoding: chunked
+Vary: Accept-Encoding
+```
+
+### 2) 如何使用 HTTPie 下载文件
+
+你可以使用带 `--download` 参数的 HTTPie 命令下载文件,类似于 wget 命令。
+
+```
+# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+CF-Cache-Status: HIT
+CF-RAY: 4c4a65d5ca360a66-LHR
+Cache-Control: public, max-age=7200
+Connection: keep-alive
+Content-Length: 32066
+Content-Type: image/png
+Date: Tue, 09 Apr 2019 06:24:23 GMT
+Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
+Expires: Tue, 09 Apr 2019 08:24:23 GMT
+Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
+Server: cloudflare
+Set-Cookie: __cfduid=dd2034b2f95ae42047e082f59f2b964f71554791063; expires=Wed, 08-Apr-20 06:24:23 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
+Vary: Accept-Encoding
+
+Downloading 31.31 kB to "Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png"
+Done. 31.31 kB in 0.01187s (2.58 MB/s)
+```
+
+你还可以使用 `-o` 参数用不同的名称保存输出文件。
+
+```
+# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png -o Anbox-1.png
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+CF-Cache-Status: HIT
+CF-RAY: 4c4a68194daa0a66-LHR
+Cache-Control: public, max-age=7200
+Connection: keep-alive
+Content-Length: 32066
+Content-Type: image/png
+Date: Tue, 09 Apr 2019 06:25:56 GMT
+Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
+Expires: Tue, 09 Apr 2019 08:25:56 GMT
+Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
+Server: cloudflare
+Set-Cookie: __cfduid=d3eea753081690f9a2d36495a74407dd71554791156; expires=Wed, 08-Apr-20 06:25:56 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
+Vary: Accept-Encoding
+
+Downloading 31.31 kB to "Anbox-1.png"
+Done. 31.31 kB in 0.01551s (1.97 MB/s)
+```
+
+### 3) 如何使用 HTTPie 恢复部分下载?
+
+你可以使用带 `-c`(即 `--continue`)参数的 HTTPie 恢复中断的下载。
+
+```
+# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
+HTTP/1.1 206 Partial Content
+Connection: keep-alive
+Content-Length: 100442112
+Content-Range: bytes 4415488-104857599/104857600
+Content-Type: application/octet-stream
+Date: Tue, 09 Apr 2019 06:32:52 GMT
+ETag: "5253f0fd-6400000"
+Last-Modified: Tue, 08 Oct 2013 11:48:13 GMT
+Server: nginx
+Strict-Transport-Security: max-age=15768000; includeSubDomains
+
+Downloading 100.00 MB to "100MB.bin"
+ | 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
+```
+
+你可以根据下面的输出验证它是否是同一个文件:
+
+```
+# ls -lhtr 100MB.bin
+-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
+```
+
+### 4) 如何使用 HTTPie 上传文件?
+
+你可以在 HTTPie 命令中使用小于号 `<` 来上传文件。
+
+```
+$ http https://transfer.sh < Anbox-1.png
+```
+
+### 5) 如何使用带有重定向符号 `>` 的 HTTPie 下载文件?
+
+你可以使用带有重定向符号 `>` 的 HTTPie 命令下载文件。
+
+```
+# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
+
+# ls -ltrh Flatpak.png
+-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
+```
+
+### 6) 如何发送 HTTP GET 请求?
+
+你可以发送 HTTP GET 方法的请求。GET 方法用于根据给定的 URI 从服务器检索信息。
+
+
+```
+# http GET httpie.org
+HTTP/1.1 301 Moved Permanently
+CF-RAY: 4c4a83a3f90dcbe6-SIN
+Cache-Control: max-age=3600
+Connection: keep-alive
+Date: Tue, 09 Apr 2019 06:44:44 GMT
+Expires: Tue, 09 Apr 2019 07:44:44 GMT
+Location: https://httpie.org/
+Server: cloudflare
+Transfer-Encoding: chunked
+Vary: Accept-Encoding
+```
+
+### 7) 如何提交表单?
+
+使用以下格式提交表单。POST 请求用于向服务器发送数据,例如通过 HTML 表单提交的客户信息、文件上传等。
+
+```
+# http -f POST Ubuntu18.2daygeek.com hello='World'
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+Connection: Keep-Alive
+Content-Encoding: gzip
+Content-Length: 3138
+Content-Type: text/html
+Date: Tue, 09 Apr 2019 06:48:12 GMT
+ETag: "2aa6-5844bf1b047fc-gzip"
+Keep-Alive: timeout=5, max=100
+Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
+Server: Apache/2.4.29 (Ubuntu)
+Vary: Accept-Encoding
+```
+
+运行下面的指令以查看正在发送的请求。
+
+```
+# http -v Ubuntu18.2daygeek.com
+GET / HTTP/1.1
+Accept: */*
+Accept-Encoding: gzip, deflate
+Connection: keep-alive
+Host: ubuntu18.2daygeek.com
+User-Agent: HTTPie/0.9.8
+
+hello=World
+
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+Connection: Keep-Alive
+Content-Encoding: gzip
+Content-Length: 3138
+Content-Type: text/html
+Date: Tue, 09 Apr 2019 06:48:30 GMT
+ETag: "2aa6-5844bf1b047fc-gzip"
+Keep-Alive: timeout=5, max=100
+Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
+Server: Apache/2.4.29 (Ubuntu)
+Vary: Accept-Encoding
+```
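+
+前面特性列表中提到的"内置 JSON 支持"和"自定义头部"也值得一试。下面是一个示意(URL 和字段都是假设的演示):`=` 表示 JSON 字符串字段,`:=` 表示原始 JSON 字段(如数字),而 `键:值` 的形式用于设置自定义请求头:
+
+```
+$ http PUT example.org name=John age:=29 X-API-Token:123
+```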
+
+### 8) 如何进行 HTTP 认证?
+
+当前支持的身份验证方案是基本认证(Basic)和摘要认证(Digest)。
+
+基本认证
+
+```
+$ http -a username:password example.org
+```
+
+摘要认证
+
+```
+$ http -A digest -a username:password example.org
+```
+
+提示输入密码:
+
+```
+$ http -a username example.org
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[zgj1024](https://github.com/zgj1024)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/best-4-command-line-download-managers-accelerators-for-linux/
+[2]: https://www.2daygeek.com/aria2-linux-command-line-download-utility-tool/
+[3]: https://www.2daygeek.com/axel-linux-command-line-download-accelerator/
+[4]: https://www.2daygeek.com/wget-linux-command-line-download-utility-tool/
+[5]: https://www.2daygeek.com/curl-linux-command-line-download-manager/
+[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
\ No newline at end of file