`。 例如,要将文件 `example.txt` 移到你的 `Documents` 目录中:
+
+```
+$ touch example.txt
+$ mv example.txt ~/Documents
+$ ls ~/Documents
+example.txt
+```
+
+就像你通过将文件拖放到文件夹图标上来移动文件一样,此命令不会将 `Documents` 替换为 `example.txt`。相反,`mv` 会检测到 `Documents` 是一个文件夹,并将 `example.txt` 文件放入其中。
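
作为对照,下面是一个可以安全运行的小示意(在临时目录中演示):如果目标目录并不存在,`mv` 会把第二个参数当作新文件名,从而执行重命名而非移入:

```shell
# 在一个空的临时目录中演示:目标 Documents 并不存在
cd "$(mktemp -d)"
touch example.txt
mv example.txt Documents   # Documents 不存在,因此这是重命名
ls                         # 输出:Documents(现在它是一个文件,而不是目录)
```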
+
+你还可以方便地在移动文件时重命名该文件:
+
+```
+$ touch example.txt
+$ mv example.txt ~/Documents/foo.txt
+$ ls ~/Documents
+foo.txt
+```
+
+重要的是,这意味着你不用把文件移动到其他位置,也可以重命名文件,例如:
+
+```
+$ touch example.txt
+$ mv example.txt foo2.txt
+$ ls foo2.txt
+```
+
+#### 移动目录
+
+不像 [cp][8] 命令,`mv` 命令处理文件和目录没有什么不同,你可以用同样的格式移动目录或文件:
+
+```
+$ touch file.txt
+$ mkdir foo_directory
+$ mv file.txt foo_directory
+$ mv foo_directory ~/Documents
+```
+
+#### 安全地移动文件
+
+如果你把一个文件移动到一个已有同名文件的地方,默认情况下,`mv` 会用你移动的文件替换目标文件。这种行为被称为“覆盖”(clobbering),有时候这正是你想要的结果,而有时则不是。
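
下面用一个可以安全运行的小例子演示这种默认替换行为(在临时目录中进行,文件内容仅为演示假设):

```shell
cd "$(mktemp -d)"
mkdir Documents
echo "old" > Documents/example.txt
echo "new" > example.txt
mv example.txt Documents        # 默认直接覆盖目标目录中的同名文件
cat Documents/example.txt       # 输出:new
```

可以看到,目标文件原来的内容 `old` 被悄无声息地替换掉了。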
+
+一些发行版将 `mv` 别名定义为 `mv --interactive`(你也可以[自己写一个][9]),这会提醒你确认是否覆盖。而另外一些发行版没有这样做,那么你可以使用 `--interactive` 或 `-i` 选项来确保当两个文件有一样的名字而发生冲突时让 `mv` 请你来确认。
+
+```
+$ mv --interactive example.txt ~/Documents
+mv: overwrite '~/Documents/example.txt'?
+```
+
+如果你不想手动干预,那么可以使用 `--no-clobber` 或 `-n` 选项。该选项会在发生冲突时静默拒绝移动操作。在这个例子当中,一个名为 `example.txt` 的文件已经存在于 `~/Documents` 中,所以它不会如命令要求的那样从当前目录移走:
+
+```
+$ mv --no-clobber example.txt ~/Documents
+$ ls
+example.txt
+```
+
+#### 带备份的移动
+
+如果你使用 GNU `mv`,有一个备份选项提供了另外一种安全移动的方式。要为任何冲突的目标文件创建备份文件,可以使用 `-b` 选项。
+
+```
+$ mv -b example.txt ~/Documents
+$ ls ~/Documents
+example.txt example.txt~
+```
+
+这个选项可以确保 `mv` 完成移动操作,但是也会保护目录位置的已有文件。
+
+另外的 GNU 备份选项是 `--backup`,它带有一个定义了备份文件如何命名的参数。
+
+* `existing`:如果在目标位置已经存在了编号备份文件,那么会创建编号备份。否则,会使用 `simple` 方式。
+* `none`:即使设置了 `--backup`,也不会创建备份。当 `mv` 被别名定义为带有备份选项时,这个选项可以覆盖这种行为。
+* `numbered`:给目标文件名附加一个编号。
+* `simple`:给目标文件附加一个 `~`,当你日常使用带有 `--ignore-backups` 选项的 [ls][2] 时,这些文件可以很方便地隐藏起来。
+
+简单来说:
+
+```
+$ mv --backup=numbered example.txt ~/Documents
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
+-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
+```
+
+可以使用环境变量 `VERSION_CONTROL` 设置默认的备份方案。你可以在 `~/.bashrc` 文件中设置该环境变量,也可以在命令前动态设置:
+
+```
+$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
+-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
+```
+
+`--backup` 选项仍然遵循 `--interactive` 或 `-i` 选项,因此即使它会在覆盖之前创建备份,它仍会提示你确认是否覆盖目标文件:
+
+```
+$ mv --backup=numbered example.txt ~/Documents
+mv: overwrite '~/Documents/example.txt'? y
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt
+-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt.~3~
+```
+
+你可以使用 `--force` 或 `-f` 选项覆盖 `-i`。
+
+```
+$ mv --backup=numbered --force example.txt ~/Documents
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:26 example.txt
+-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt.~3~
+-rw-rw-r--. 1 seth users 128 Aug 1 17:25 example.txt.~4~
+```
+
+`--backup` 选项在 BSD `mv` 中不可用。
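
顺便一提,如果不确定自己用的是 GNU 版还是 BSD 版的 `mv`,可以用下面的方法快速判断(BSD `mv` 没有 `--version` 选项,执行会报错):

```shell
# GNU mv 会打印类似 “mv (GNU coreutils) x.y” 的版本信息
mv --version | head -n 1
```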
+
+#### 一次性移动多个文件
+
+移动多个文件时,`mv` 会将最终目录视为目标:
+
+```
+$ mv foo bar baz ~/Documents
+$ ls ~/Documents
+bar baz foo
+```
+
+如果最后一个项目不是目录,则 `mv` 返回错误:
+
+```
+$ mv foo bar baz
+mv: target 'baz' is not a directory
+```
+
+GNU `mv` 的语法相当灵活。如果无法把目标目录作为提供给 `mv` 命令的最终参数,请使用 `--target-directory` 或 `-t` 选项:
+
+```
+$ mv --target-directory ~/Documents foo bar baz
+$ ls ~/Documents
+bar baz foo
+```
+
+当从某些其他命令的输出构造 `mv` 命令时(例如 `find` 命令、`xargs` 或 [GNU Parallel][10]),这特别有用。
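
下面是一个这种用法的简单示意(在临时目录中演示,文件名均为演示假设):用 `find` 找出文件,再通过 `xargs` 把它们批量交给 `mv -t` 移动:

```shell
cd "$(mktemp -d)"
mkdir logs
touch a.log b.log c.txt
# find 找出当前目录下的 .log 文件,xargs 将它们批量交给 mv -t
find . -maxdepth 1 -name '*.log' -print0 | xargs -0 mv -t logs
ls logs
```

这里的 `-print0` 和 `-0` 搭配使用,可以安全地处理包含空格等特殊字符的文件名。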
+
+#### 基于修改时间移动
+
+使用 GNU `mv`,你可以根据要移动的文件是否比要替换的目标文件更新(修改时间更晚)来决定是否执行移动。该方式可以通过 `--update` 或 `-u` 选项使用,在 BSD `mv` 中不可用:
+
+```
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:32 example.txt
+$ ls -l
+-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
+$ mv --update example.txt ~/Documents
+$ ls -l ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
+$ ls -l
+```
+
+此结果仅基于文件的修改时间,而不是两个文件的差异,因此请谨慎使用。只需使用 `touch` 命令即可愚弄 `mv`:
+
+```
+$ cat example.txt
+one
+$ cat ~/Documents/example.txt
+one
+two
+$ touch example.txt
+$ mv --update example.txt ~/Documents
+$ cat ~/Documents/example.txt
+one
+```
+
+显然,这不是最智能的更新功能,但是它提供了防止覆盖最新数据的基本保护。
+
+### 移动
+
+除了 `mv` 命令以外,还有更多的移动数据的方法,但是作为这项任务的默认程序,`mv` 是一个很好的通用选择。现在你知道了有哪些可以使用的选项,可以比以前更智能地使用 `mv` 了。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/moving-files-linux-depth
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
+[2]: https://opensource.com/article/19/7/master-ls-command
+[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.)
+[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.)
+[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
+[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths
+[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[8]: https://opensource.com/article/19/7/copying-files-linux
+[9]: https://opensource.com/article/19/7/bash-aliases
+[10]: https://opensource.com/article/18/5/gnu-parallel
diff --git a/published/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md b/published/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md
new file mode 100644
index 0000000000..33084aaa52
--- /dev/null
+++ b/published/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md
@@ -0,0 +1,112 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11308-1.html)
+[#]: subject: (How To Check Your IP Address in Ubuntu [Beginner’s Tip])
+[#]: via: (https://itsfoss.com/check-ip-address-ubuntu/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+如何在 Ubuntu 中检查你的 IP 地址
+======
+
+不知道你的 IP 地址是什么?以下是在 Ubuntu 和其他 Linux 发行版中检查 IP 地址的几种方法。
+
+![][1]
+
+### 什么是 IP 地址?
+
+**互联网协议地址**(通常称为 **IP 地址**)是分配给连接到计算机网络的每个设备(使用互联网协议)的数字标签。IP 地址用于识别和定位机器。
+
+**IP 地址**在网络中是*唯一的*,使得所有连接设备能够通信。
+
+你还应该知道有两种**类型的 IP 地址**:**公有**和**私有**。**公有 IP 地址**是用于互联网通信的地址,就像你收取邮件的实际住址一样。但是,在本地网络(例如使用路由器的家庭网络)的环境中,会为每个设备分配在该子网内唯一的**私有 IP 地址**。私有地址只在本地网络中使用,不会直接暴露公有 IP(公有 IP 由路由器用来与互联网通信)。
+
+另外,还要区分 **IPv4** 和 **IPv6** 协议。**IPv4** 是经典的 IP 格式,它由用点分隔的四个字节组成(例如 127.0.0.1)。但是,随着设备数量的增加,IPv4 很快就无法提供足够的地址。这就是 **IPv6** 被发明的原因,它使用 **128 位地址**的格式(而 **IPv4** 使用的是 **32 位地址**)。
+
+### 在 Ubuntu 中检查你的 IP 地址(终端方式)
+
+检查 IP 地址的最快和最简单的方法是使用 `ip` 命令。你可以按以下方式使用此命令:
+
+```
+ip addr show
+```
+
+它将同时显示 IPv4 和 IPv6 地址:
+
+![Display IP Address in Ubuntu Linux][2]
+
+实际上,你可以把这个命令进一步缩短为 `ip a`,它会给出完全相同的结果:
+
+```
+ip a
+```
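
如果只关心某一协议族的地址,还可以给 `ip` 加上 `-4` 或 `-6` 选项来过滤输出(下面以回环接口 `lo` 为例):

```shell
ip -4 addr show lo   # 仅显示回环接口的 IPv4 地址(127.0.0.1)
ip -6 addr show      # 仅显示 IPv6 地址
```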
+
+如果你希望获得最少的细节,也可以使用 `hostname`:
+
+```
+hostname -I
+```
+
+还有一些[在 Linux 中检查 IP 地址的方法][3],但是这两个命令足以满足这个目的。
+
+那么 `ifconfig` 命令呢?
+
+老用户可能习惯使用 `ifconfig`(net-tools 软件包的一部分),但该程序已被弃用。一些较新的 Linux 发行版不再包含此软件包,如果你尝试运行它,会看到 “ifconfig: command not found” 的错误。
+
+### 在 Ubuntu 中检查你的 IP 地址(GUI 方式)
+
+如果你对命令行不熟悉,你还可以使用图形方式检查 IP 地址。
+
+打开 Ubuntu 应用菜单(点击屏幕左下角的**显示应用**)并搜索 **Settings**(设置),然后单击其图标:
+
+![Applications Menu Settings][5]
+
+这应该会打开**设置菜单**。进入**网络**:
+
+![Network Settings Ubuntu][6]
+
+按下连接旁边的**齿轮图标**,会打开一个窗口,其中包含更多设置和有关你的网络连接的信息,这其中就包括你的 IP 地址:
+
+![IP Address GUI Ubuntu][7]
+
+### 额外提示:检查你的公共 IP 地址(适用于台式计算机)
+
+首先,要检查你的**公有 IP 地址**(用于与服务器通信),你可以[使用 curl 命令][8]。打开终端并输入以下命令:
+
+```
+curl ifconfig.me
+```
+
+这应该只会返回你的 IP 地址而没有其他多余信息。我建议在分享这个地址时要小心,因为这相当于公布你的个人地址。
+
+**注意:** 如果 `curl` 没有安装,只需使用 `sudo apt install curl -y` 来解决问题,然后再试一次。
+
+另一种可以查看公共 IP 地址的简单方法是在 Google 中搜索 “ip address”。
+
+### 总结
+
+在本文中,我介绍了在 Ubuntu Linux 中找到 IP 地址的几种方法,并向你概述了 IP 地址的用途以及它们对我们如此重要的原因。
+
+我希望你喜欢这篇文章。如果你觉得文章有用,请在评论栏告诉我们!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/check-ip-address-ubuntu/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/checking-ip-address-ubuntu.png?resize=800%2C450&ssl=1
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/ip_addr_show.png?fit=800%2C493&ssl=1
+[3]: https://linuxhandbook.com/find-ip-address/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/applications_menu_settings.jpg?fit=800%2C309&ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/network_settings_ubuntu.jpg?fit=800%2C591&ssl=1
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/ip_address_gui_ubuntu.png?fit=800%2C510&ssl=1
+[8]: https://linuxhandbook.com/curl-command-examples/
diff --git a/published/20190823 The Linux kernel- Top 5 innovations.md b/published/20190823 The Linux kernel- Top 5 innovations.md
new file mode 100644
index 0000000000..486270ccfd
--- /dev/null
+++ b/published/20190823 The Linux kernel- Top 5 innovations.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11368-1.html)
+[#]: subject: (The Linux kernel: Top 5 innovations)
+[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Linux 内核的五大创新
+======
+
+> 想知道什么是 Linux 内核上真正的(不是那种时髦的)创新吗?
+
+
+
+在科技行业,*创新*这个词几乎和*革命*一样到处泛滥,所以很难将那些夸张的东西与真正令人振奋的东西区分开来。Linux 内核被称为创新,但它又被称为现代计算中最大的奇迹,一个微观世界中的庞然大物。
+
+撇开营销和模式不谈,Linux 可以说是开源世界中最受欢迎的内核,它在近 30 年的生命时光当中引入了一些真正的规则改变者。
+
+### Cgroups(2.6.24)
+
+早在 2007 年,Paul Menage 和 Rohit Seth 就在内核中添加了深奥的[控制组(cgroups)][2]功能(cgroups 的当前实现是由 Tejun Heo 重写的)。这种新技术最初被用作一种方法,从本质上来说,是为了确保一组特定任务的服务质量。
+
+例如,你可以为与你的 WEB 服务相关联的所有任务创建一个控制组定义(cgroup),为例行备份创建另一个 cgroup ,再为一般操作系统需求创建另一个 cgroup。然后,你可以控制每个组的资源百分比,这样你的操作系统和 WEB 服务就可以获得大部分系统资源,而你的备份进程可以访问剩余的资源。
+
+然而,cgroups 如今变得这么著名是因其作为驱动云技术的角色:容器。事实上,cgroups 最初被命名为[进程容器][3]。当它们被 [LXC][4]、[CoreOS][5] 和 Docker 等项目采用时,这并不奇怪。
+
+闸门一旦打开,“容器”一词就几乎成为了 Linux 的同义词,微服务风格的基于云的“应用”概念很快成为了规范。如今,已经很难摆脱 cgroups 了,它们是如此普遍。每一个大规模的基础设施(如果你运行 Linux 的话,可能还有你的笔记本电脑)都以一种合理的方式使用了 cgroups,这使得你的计算体验比以往任何时候都更加易于管理和灵活。
+
+例如,你可能已经在电脑上安装了 [Flathub][6] 或 [Flatpak][7],或者你已经在工作中使用 [Kubernetes][8] 和/或 [OpenShift][9]。不管怎样,如果“容器”这个术语对你来说仍然模糊不清,可以通过[《Linux 容器背后的场景》][10]一文获得对容器的实际理解。
+
+### LKMM(4.17)
+
+2018 年,Jade Alglave、Alan Stern、Andrea Parri、Luc Maranget、Paul McKenney 以及其他几个人的辛勤工作成果被合并到主线 Linux 内核中,以提供正式的内存模型。Linux 内核内存一致性模型(LKMM)子系统是一套描述 Linux 内存一致性模型的工具,同时也产生用于测试的用例(特别命名为 klitmus)。
+
+随着系统在物理设计上变得越来越复杂(增加了更多的中央处理器内核,高速缓存和内存增长,等等),它们就越难知道哪个中央处理器需要哪个地址空间,以及何时需要。例如,如果 CPU0 需要将数据写入内存中的共享变量,并且 CPU1 需要读取该值,那么 CPU0 必须在 CPU1 尝试读取之前写入。类似地,如果值是以一种顺序方式写入内存的,那么期望它们也以同样的顺序被读取,而不管哪个或哪些 CPU 正在读取。
+
+即使在单个处理器上,内存管理也需要特定的任务顺序。像 `x = y` 这样的简单操作需要处理器从内存中加载 `y` 的值,然后将该值存储在 `x` 中。在处理器从内存中读取值之前,是不能将存储在 `y` 中的值放入 `x` 变量的。此外还有地址依赖:`x[n] = 6` 要求在处理器能够存储值 `6` 之前加载 `n`。
+
+LKMM 可以帮助识别和跟踪代码中的这些内存模式。它的部分功能是通过一个名为 `herd` 的工具实现的,该工具(以逻辑公式的形式)定义了内存模型施加的约束,然后列举出与这些约束相一致的所有可能结果。
+
+### 低延迟补丁(2.6.38)
+
+很久以前,在 2011 年之前,如果你想[在 Linux 上进行多媒体工作][11],你必须得有一个低延迟内核。这主要适用于[录音][12]时添加了许多实时效果(如对着麦克风唱歌和添加混音,以及在耳机中无延迟地听到你的声音)。有些发行版,如 [Ubuntu Studio][13],可靠地提供了这样一个内核,所以实际上这没有什么障碍,这只不过是当艺术家选择发行版时的一个重要提醒。
+
+然而,如果你没有使用 Ubuntu Studio,或者你需要在你的发行版提供之前更新你的内核,你必须跳转到 rt-patches 网页,下载内核补丁,将它们应用到你的内核源代码,编译,然后手动安装。
+
+后来,随着内核版本 2.6.38 的发布,这个过程结束了。Linux 内核突然像变魔术一样默认内置了低延迟代码(根据基准测试,延迟至少降低了 10 倍)。不再需要下载补丁,不用编译。一切都很顺利,这都是因为 Mike Galbraith 编写了一个 200 行的小补丁。
+
+对于全世界的开源多媒体艺术家来说,这是一个规则改变者。从 2011 年开始事情变得如此美好,到 2016 年我自己做了一个挑战,[在树莓派 v1(型号 B)上建造一个数字音频工作站(DAW)][14],结果发现它运行得出奇地好。
+
+### RCU(2.5)
+
+RCU,即读-拷贝-更新,是计算机科学中定义的一个系统,它允许多个处理器线程从共享内存中读取数据。它通过延迟更新但也将它们标记为已更新来做到这一点,以确保数据读取为最新内容。实际上,这意味着读取与更新同时发生。
+
+典型的 RCU 循环有点像这样:
+
+1. 删除指向数据的指针,以防止其他读操作引用它。
+2. 等待读操作完成它们的关键处理。
+3. 回收内存空间。
+
+将更新阶段划分为删除和回收阶段意味着更新程序会立即执行删除,同时推迟回收直到所有活动读取完成(通过阻止它们或注册一个回调以便在完成时调用)。
+
+虽然 RCU 的概念不是为 Linux 内核发明的,但它在 Linux 中的实现是该技术的一个定义性的例子。
+
+### 合作(0.01)
+
+对于 Linux 内核创新的问题的最终答案永远是协作。你可以说这是一个好时机,也可以称之为技术优势,称之为黑客能力,或者仅仅称之为开源,但 Linux 内核及其支持的许多项目是协作与合作的光辉范例。
+
+它远远超出了内核范畴。各行各业的人都对开源做出了贡献,可以说都是因为 Linux 内核。Linux 曾经是,现在仍然是[自由软件][15]的主要力量,激励人们把他们的代码、艺术、想法或者仅仅是他们自己带到一个全球化的、有生产力的、多样化的人类社区中。
+
+### 你最喜欢的创新是什么?
+
+这个列表偏向于我自己的兴趣:容器、非统一内存访问(NUMA)和多媒体。无疑,列表中肯定缺少你最喜欢的内核创新。在评论中告诉我。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
+[2]: https://en.wikipedia.org/wiki/Cgroups
+[3]: https://lkml.org/lkml/2006/10/20/251
+[4]: https://linuxcontainers.org
+[5]: https://coreos.com/
+[6]: http://flathub.org
+[7]: http://flatpak.org
+[8]: http://kubernetes.io
+[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
+[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
+[11]: http://slackermedia.info
+[12]: https://opensource.com/article/17/6/qtractor-audio
+[13]: http://ubuntustudio.org
+[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
+[15]: http://fsf.org
diff --git a/published/20190825 Top 5 IoT networking security mistakes.md b/published/20190825 Top 5 IoT networking security mistakes.md
new file mode 100644
index 0000000000..237d81b266
--- /dev/null
+++ b/published/20190825 Top 5 IoT networking security mistakes.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11299-1.html)
+[#]: subject: (Top 5 IoT networking security mistakes)
+[#]: via: (https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+五大物联网网络安全错误
+======
+
+> IT 供应商兄弟国际公司分享了五种最常见的物联网安全错误,这是从它们的打印机和多功能设备买家中看到的。
+
+![Getty Images][1]
+
+尽管[兄弟国际公司][2]是许多 IT 产品的供应商,从[机床][3]到[头戴式显示器][4]再到[工业缝纫机][5],但它最知名的产品是打印机。在当今世界,这些打印机不再是独立的设备,而是物联网的组成部分。
+
+这也是我为什么对罗伯特•伯内特提供的这份列表感兴趣的原因。伯内特是兄弟公司的总监,负责 B2B 产品和提供解决方案。基本上是该公司负责大客户实施的关键人物。所以他对打印机相关的物联网安全错误非常关注,并且分享了兄弟国际公司对于处理这五大错误的建议。
+
+### #5:不控制访问和授权
+
+伯内特说:“过去,成本控制是管理谁可以使用机器、打印作业何时被放行背后的推动力。”当然,这在今天也仍然很重要,但他指出,安全性正迅速成为打印和扫描设备管理控制的关键因素。这不仅适用于大型企业,也适用于各种规模的企业。
+
+### #4:无法定期更新固件
+
+让我们来面对这一现实,大多数 IT 专业人员都忙于保持服务器和其他网络基础设施设备的更新,确保其基础设施尽可能的安全高效。“在这日常的流程中,像打印机这样的设备经常被忽视。”但过时的固件可能会使基础设施面临新的威胁。
+
+### #3:设备意识不足
+
+伯内特说:“正确理解谁在使用什么设备,以及整套设备中所有连接设备的功能是什么,这是至关重要的。使用端口扫描技术、协议分析和其他检测技术检查这些设备,应作为你的网络基础设施整体安全审查中的一部分。”他常常提醒人们,对待打印设备的态度往往是“如果没有坏,就不要去修!”,但即使是可靠运行多年的设备也应该纳入安全审查。这是因为旧设备可能无法提供更强大的安全设置,或者可能需要更新其配置才能满足当今更高的安全要求,这其中包括设备的监控/报告功能。
+
+### #2:用户培训不足
+
+“应该把培训团队在工作过程中管理文档的最佳实践作为强有力的安全计划中的一部分。”伯内特说道,“然而,事实却是,无论你如何努力地去保护物联网设备,人为因素通常是一家企业在保护重要和敏感信息方面最薄弱的环节。像这些简单的事情,如无意中将重要文件留在打印机上供任何人查看,或者将文件扫描到错误的目的地,不仅会给企业带来经济损失和巨大的负面影响,还会影响企业的知识产权、声誉,引起合规性/监管问题。”
+
+### #1:使用默认密码
+
+“只是因为它很方便并不意味着它不重要!”伯内特说,“保护打印机和多功能设备免受未经授权的管理员访问不仅有助于保护敏感的机器配置设置和报告信息,还可以防止访问个人信息,例如,像可能用于网络钓鱼攻击的用户名。”
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_security_tablet_conference_digital-100787102-large.jpg
+[2]: https://www.brother-usa.com/business
+[3]: https://www.brother-usa.com/machinetool/default?src=default
+[4]: https://www.brother-usa.com/business/hmd#sort=%40productcatalogsku%20ascending
+[5]: https://www.brother-usa.com/business/industrial-sewing
+[6]: https://www.networkworld.com/article/2855207/internet-of-things/5-ways-to-prepare-for-internet-of-things-security-threats.html#tk.nww-infsb
+[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[8]: https://www.facebook.com/NetworkWorld/
+[9]: https://www.linkedin.com/company/network-world
diff --git a/published/20190826 5 ops tasks to do with Ansible.md b/published/20190826 5 ops tasks to do with Ansible.md
new file mode 100644
index 0000000000..de7916b81d
--- /dev/null
+++ b/published/20190826 5 ops tasks to do with Ansible.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11312-1.html)
+[#]: subject: (5 ops tasks to do with Ansible)
+[#]: via: (https://opensource.com/article/19/8/ops-tasks-ansible)
+[#]: author: (Mark Phillips https://opensource.com/users/markphttps://opensource.com/users/adminhttps://opensource.com/users/alsweigarthttps://opensource.com/users/belljennifer43)
+
+5 个 Ansible 运维任务
+======
+
+> 让 DevOps 少一点,OpsDev 多一点。
+
+![gears and lightbulb to represent innovation][1]
+
+在这个 DevOps 世界中,看起来开发(Dev)这一半成为了关注的焦点,而运维(Ops)则是这段关系中被遗忘的另一半。这就好像是领头的开发告诉尾随的运维该做什么,所有的“运维”都按开发说的来。因此,运维被抛在了后面,降级到了替补席上。
+
+我想看到更多的 OpsDev。因此,让我们来看看 Ansible 在日常的运维中可以帮助你什么。
+
+![Job templates][2]
+
+我选择在 [Ansible Tower][3] 中展示这些方案,因为我认为用户界面 (UI) 可以增色大多数的任务。如果你想模拟测试,你可以在 Tower 的上游开源版本 [AWX][4] 中测试它。
+
+### 管理用户
+
+在大规模环境中,你的用户将集中在活动目录或 LDAP 等系统中。但我敢打赌,仍然存在许多充满了静态管理用户的环境。Ansible 可以帮助你将这些分散的环境集中到一起。*社区*已经为我们解决了这个问题。看看 [Ansible Galaxy][5] 中的 [users][6] 角色。
+
+这个角色的聪明之处在于它允许我们通过*数据*管理用户,而无需更改运行逻辑。
+
+![User data][7]
+
+通过简单的数据结构,我们可以在系统上添加、删除和修改静态用户。这很有用。
+
+### 管理 sudo
+
+提权有[多种形式][8],但最流行的是 [sudo][9]。通过每个 `user`、`group` 等离散文件来管理 sudo 相对容易。但一些人对给予特权感到紧张,并倾向于有时限地给予提权。因此[下面是一种方案][10],它使用简单的 `at` 命令对授权访问设置时间限制。
+
+![Managing sudo][11]
+
+### 管理服务
+
+给入门级运维团队提供[菜单][12]以便他们可以重启某些服务不是很好吗?看下面!
+
+![Managing services][13]
+
+### 管理磁盘空间
+
+这有[一个简单的角色][14],可以在特定目录中查找超过指定大小的文件。在 Tower 中这么做时,启用[回调][15]还有额外的好处。想象一下,你的监控方案发现某个文件系统的使用率已超过 X%,于是触发 Tower 中的任务,找出是哪些文件导致的。
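
这个角色背后的思路,大致相当于下面这条 `find` 命令(目录与 1MB 的阈值均为演示假设):

```shell
# 在临时目录中制造一大一小两个文件,然后查找大于 1MB 的文件
dir="$(mktemp -d)"
dd if=/dev/zero of="$dir/big.bin" bs=1M count=2 status=none
dd if=/dev/zero of="$dir/small.bin" bs=1K count=1 status=none
find "$dir" -type f -size +1M -exec ls -lh {} +
```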
+
+![Managing disk space][16]
+
+### 调试系统性能问题
+
+[这个角色][17]相当简单:它运行一些命令并打印输出。细节在最后汇总输出,让你这位系统管理员可以快速浏览。还可以使用[正则表达式][18]在输出中查找某些状况(比如说 CPU 占用率超过 80%)。
+
+![Debugging system performance][19]
+
+### 总结
+
+我已经录制了这五个任务的简短视频。你也可以在 Github 上找到[所有代码][20]!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/ops-tasks-ansible
+
+作者:[Mark Phillips][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/markphttps://opensource.com/users/adminhttps://opensource.com/users/alsweigarthttps://opensource.com/users/belljennifer43
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
+[2]: https://opensource.com/sites/default/files/uploads/00_templates.png (Job templates)
+[3]: https://www.ansible.com/products/tower
+[4]: https://github.com/ansible/awx
+[5]: https://galaxy.ansible.com
+[6]: https://galaxy.ansible.com/singleplatform-eng/users
+[7]: https://opensource.com/sites/default/files/uploads/01_users_data.png (User data)
+[8]: https://docs.ansible.com/ansible/latest/plugins/become.html
+[9]: https://www.sudo.ws/intro.html
+[10]: https://github.com/phips/ansible-demos/tree/master/roles/sudo
+[11]: https://opensource.com/sites/default/files/uploads/02_sudo.png (Managing sudo)
+[12]: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#surveys
+[13]: https://opensource.com/sites/default/files/uploads/03_services.png (Managing services)
+[14]: https://github.com/phips/ansible-demos/tree/master/roles/disk
+[15]: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks
+[16]: https://opensource.com/sites/default/files/uploads/04_diskspace.png (Managing disk space)
+[17]: https://github.com/phips/ansible-demos/tree/master/roles/gather_debug
+[18]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#regular-expression-filters
+[19]: https://opensource.com/sites/default/files/uploads/05_debug.png (Debugging system performance)
+[20]: https://github.com/phips/ansible-demos
diff --git a/published/20190826 How to rename a group of files on Linux.md b/published/20190826 How to rename a group of files on Linux.md
new file mode 100644
index 0000000000..e80d1bc31d
--- /dev/null
+++ b/published/20190826 How to rename a group of files on Linux.md
@@ -0,0 +1,128 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11300-1.html)
+[#]: subject: (How to rename a group of files on Linux)
+[#]: via: (https://www.networkworld.com/article/3433865/how-to-rename-a-group-of-files-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何在 Linux 上重命名一组文件
+======
+
+> 要用单个命令重命名一组文件,请使用 rename 命令。它需要使用正则表达式,并且可以在开始前告诉你会有什么更改。
+
+
+
+几十年来,Linux 用户一直使用 `mv` 命令重命名文件。它很简单,并且能做到你要做的。但有时你需要重命名一大组文件。在这种情况下,`rename` 命令可以使这个任务更容易。它只需要一些正则表达式的技巧。
+
+与 `mv` 命令不同,`rename` 不允许你简单地指定旧名称和新名称。相反,它使用类似于 Perl 的正则表达式。在下面的例子中,`s` 指定我们将第一个字符串(`new`)替换为第二个字符串(`old`),从而将 `this.new` 变为 `this.old`。
+
+```
+$ rename 's/new/old/' this.new
+$ ls this*
+this.old
+```
+
+只更改一个文件时,使用 `mv this.new this.old` 更容易,但是将字符串 `this` 换成通配符 `*`,你就可以用一条命令将所有的 `*.new` 文件重命名为 `*.old`:
+
+```
+$ ls *.new
+report.new schedule.new stats.new this.new
+$ rename 's/new/old/' *.new
+$ ls *.old
+report.old schedule.old stats.old this.old
+```
+
+正如你所料,`rename` 命令不限于更改文件扩展名。如果你需要将名为 `report.*` 的文件更改为 `review.*`,那么可以使用以下命令做到:
+
+```
+$ rename 's/report/review/' *
+```
+
+正则表达式中的字符串可以更改文件名的任何部分,无论是文件名还是扩展名。
+
+```
+$ rename 's/123/124/' *
+$ ls *124*
+status.124 report124.txt
+```
+
+如果你在 `rename` 命令中添加 `-v` 选项,那么该命令会提供反馈,让你看到所做的更改,其中或许会有你没预料到的改动。这让你可以及时注意到,并按需还原更改。
+
+```
+$ rename -v 's/123/124/' *
+status.123 renamed as status.124
+report123.txt renamed as report124.txt
+```
+
+另一方面,使用 `-n`(或 `--nono`)选项会使 `rename` 命令只告诉你将要做的更改,而不实际执行。这可以让你免于执行不想要的操作,省去事后还原的麻烦。
+
+```
+$ rename -n 's/old/save/' *
+rename(logger.man-old, logger.man-save)
+rename(lyrics.txt-old, lyrics.txt-save)
+rename(olderfile-, saveerfile-)
+rename(oldfile, savefile)
+rename(review.old, review.save)
+rename(schedule.old, schedule.save)
+rename(stats.old, stats.save)
+rename(this.old, this.save)
+```
+
+如果你对这些更改满意,那么就可以运行不带 `-n` 选项的命令来更改文件名。
+
+但请注意,正则表达式中的 `.` **不会**被视为句点,而是作为匹配任何字符的通配符。上面和下面的示例中的一些更改可能不是输入命令的人希望的。
+
+```
+$ rename -n 's/.old/.save/' *
+rename(logger.man-old, logger.man.save)
+rename(lyrics.txt-old, lyrics.txt.save)
+rename(review.old, review.save)
+rename(schedule.old, schedule.save)
+rename(stats.old, stats.save)
+rename(this.old, this.save)
+```
+
+为确保句点按照字面意思执行,请在它的前面加一个反斜杠。这将使其不被解释为通配符并匹配任何字符。请注意,进行此更改时,仅选择了 `.old` 文件。
+
+```
+$ rename -n 's/\.old/.save/' *
+rename(review.old, review.save)
+rename(schedule.old, schedule.save)
+rename(stats.old, stats.save)
+rename(this.old, this.save)
+```
+
+下面的命令会将文件名中的所有大写字母更改为小写,除了使用 `-n` 选项来确保我们在命令执行之前检查将做的修改。注意在正则表达式中使用了 `y`,这是改变大小写所必需的。
+
+```
+$ rename -n 'y/A-Z/a-z/' W*
+rename(WARNING_SIGN.pdf, warning_sign.pdf)
+rename(Will_Gardner_buttons.pdf, will_gardner_buttons.pdf)
+rename(Wingding_Invites.pdf, wingding_invites.pdf)
+rename(WOW-buttons.pdf, wow-buttons.pdf)
+```
+
+在上面的例子中,我们将所有大写字母更改为了小写,但仅限于以大写字母 `W` 开头的文件名。
+
+### 总结
+
+当你需要重命名大量文件时,`rename` 命令非常有用。请注意不要做比预期更多的更改。请记住,`-n`(或者 `--nono`)选项可以帮助你避免耗时的错误。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3433865/how-to-rename-a-group-of-files-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/08/card-catalog-machester_city_library-100809242-large.jpg
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/published/20190828 Managing Ansible environments on MacOS with Conda.md b/published/20190828 Managing Ansible environments on MacOS with Conda.md
new file mode 100644
index 0000000000..24e8d65fa0
--- /dev/null
+++ b/published/20190828 Managing Ansible environments on MacOS with Conda.md
@@ -0,0 +1,168 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11356-1.html)
+[#]: subject: (Managing Ansible environments on MacOS with Conda)
+[#]: via: (https://opensource.com/article/19/8/using-conda-ansible-administration-macos)
+[#]: author: (James Farrell https://opensource.com/users/jamesf)
+
+
+使用 Conda 管理 MacOS 上的 Ansible 环境
+=====
+
+> Conda 将 Ansible 所需的一切都收集到虚拟环境中并将其与其他项目分开。
+
+
+
+如果你是一名使用 MacOS 并涉及到 Ansible 管理的 Python 开发人员,你可能希望使用 Conda 包管理器将 Ansible 的工作内容与核心操作系统和其他本地项目分开。
+
+Ansible 基于 Python。要让 Ansible 在 MacOS 上工作,Conda 并不是必需的,但是它确实让你管理 Python 版本和包依赖变得更加容易。这允许你在 MacOS 上使用升级后的 Python 版本,并在你的系统、Ansible 和其他编程项目之间保持 Python 包依赖相互独立。
+
+在 MacOS 上安装 Ansible 还有其他方法。你可以使用 [Homebrew][2],但是如果你对 Python 开发(或 Ansible 开发)感兴趣,你可能会发现在一个独立 Python 虚拟环境中管理 Ansible 可以减少一些混乱。我觉得这更简单;与其试图将 Python 版本和依赖项加载到系统或 `/usr/local` 目录中 ,还不如使用 Conda 帮助我将 Ansible 所需的一切都收集到一个虚拟环境中,并将其与其他项目完全分开。
+
+本文着重于使用 Conda 作为 Python 项目来管理 Ansible,以保持它的干净并与其他项目分开。请继续阅读,并了解如何安装 Conda、创建新的虚拟环境、安装 Ansible 并对其进行测试。
+
+### 序幕
+
+最近,我想学习 [Ansible][3],所以我需要找到安装它的最佳方法。
+
+我通常对在我的日常工作站上安装东西很谨慎。我尤其不喜欢对供应商的默认操作系统安装应用手动更新(这是我多年作为 Unix 系统管理的习惯)。我真的很想使用 Python 3.7,但是 MacOS 的 Python 包是旧的 2.7,我不会安装任何可能干扰核心 MacOS 系统的全局 Python 包。
+
+所以,我在本地的 Ubuntu 18.04 虚拟机上开始了我的 Ansible 工作。这提供了真正意义上的安全隔离,但我很快发现管理它非常乏味。所以我着手研究如何在本机 MacOS 上获得一个灵活但独立的 Ansible 系统。
+
+由于 Ansible 基于 Python,Conda 似乎是理想的解决方案。
+
+### 安装 Conda
+
+Conda 是一个开源软件,它提供方便的包和环境管理功能。它可以帮助你管理多个版本的 Python、安装软件包依赖关系、执行升级和维护项目隔离。如果你手动管理 Python 虚拟环境,Conda 将有助于简化和管理你的工作。浏览 [Conda 文档][4]可以了解更多细节。
+
+我选择了 [Miniconda][5] Python 3.7 安装在我的工作站中,因为我想要最新的 Python 版本。无论选择哪个版本,你都可以使用其他版本的 Python 安装新的虚拟环境。
+
+要安装 Conda,请下载 PKG 格式的文件,进行通常的双击,并选择 “Install for me only” 选项。安装在我的系统上占用了大约 158 兆的空间。
+
+安装完成后,调出一个终端来查看你有什么了。你应该看到:
+
+ * 在你的家目录中的 `miniconda3` 目录
+ * shell 提示符被修改为 `(base)`
+ * `.bash_profile` 文件更新了一些 Conda 特有的设置内容
+
+现在基础已经安装好了,你有了第一个 Python 虚拟环境。运行 Python 版本检查可以证明这一点,你的 `PATH` 将指向新的位置:
+
+```
+(base) $ which python
+/Users/jfarrell/miniconda3/bin/python
+(base) $ python --version
+Python 3.7.1
+```
+
+现在安装了 Conda,下一步是建立一个虚拟环境,然后安装 Ansible 并运行。
+
+### 为 Ansible 创建虚拟环境
+
+我想将 Ansible 与我的其他 Python 项目分开,所以我创建了一个新的虚拟环境并切换到它:
+
+```
+(base) $ conda create --name ansible-env --clone base
+(base) $ conda activate ansible-env
+(ansible-env) $ conda env list
+```
+
+第一个命令将 Conda 的基础环境克隆到一个名为 `ansible-env` 的新虚拟环境中。克隆会带入 Python 3.7 版本和一系列默认的 Python 模块,你可以根据需要添加、删除或升级这些模块。
+
+第二个命令将 shell 上下文更改为这个新的环境。它为 Python 及其包含的模块设置了正确的路径。请注意,在 `conda activate ansible-env` 命令后,你的 shell 提示符会发生变化。
+
+第三个命令不是必须的;它列出了安装了哪些 Python 模块及其版本和其他数据。
+
+你可以随时使用 Conda 的 `activate` 命令切换到另一个虚拟环境。运行 `conda activate base` 即可回到基础环境。
+
+### 安装 Ansible
+
+安装 Ansible 有多种方法,但是使用 Conda 可以将 Ansible 版本和所有需要的依赖项打包在一个地方。Conda 提供了灵活性,既可以将所有内容分开,又可以根据需要添加其他新环境(我将在后面演示)。
+
+要安装 Ansible 的相对较新版本,请使用:
+
+```
+(base) $ conda activate ansible-env
+(ansible-env) $ conda install -c conda-forge ansible
+```
+
+由于 Ansible 不是 Conda 默认通道的一部分,因此 `-c` 用于从备用通道搜索和安装。Ansible 现已安装到 `ansible-env` 虚拟环境中,可以使用了。
+
+### 使用 Ansible
+
+既然你已经安装了 Conda 虚拟环境,就可以使用它了。首先,确保要控制的节点已将工作站的 SSH 密钥安装到正确的用户帐户。
+
+调出一个新的 shell 并运行一些基本的 Ansible 命令:
+
+```
+(base) $ conda activate ansible-env
+(ansible-env) $ ansible --version
+ansible 2.8.1
+ config file = None
+ configured module search path = ['/Users/jfarrell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
+ ansible python module location = /Users/jfarrell/miniconda3/envs/ansibleTest/lib/python3.7/site-packages/ansible
+ executable location = /Users/jfarrell/miniconda3/envs/ansibleTest/bin/ansible
+ python version = 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
+(ansible-env) $ ansible all -m ping -u ansible
+192.168.99.200 | SUCCESS => {
+ "ansible_facts": {
+ "discovered_interpreter_python": "/usr/bin/python"
+ },
+ "changed": false,
+ "ping": "pong"
+}
+```
+
+现在 Ansible 可以正常工作了,你可以随时调出控制台,在你的 MacOS 工作站上使用这些 Ansible 命令了。
+
+### 克隆新的 Ansible 进行 Ansible 开发
+
+这部分完全是可选的;只有当你想要额外的虚拟环境来修改 Ansible 或者安全地使用有问题的 Python 模块时,才需要它。你可以通过以下方式将主 Ansible 环境克隆到开发副本中:
+
+```
+(ansible-env) $ conda create --name ansible-dev --clone ansible-env
+(ansible-env) $ conda activate ansible-dev
+(ansible-dev) $
+```
+
+### 需要注意的问题
+
+偶尔你可能遇到使用 Conda 的麻烦。你通常可以通过以下方式删除不良环境:
+
+```
+$ conda activate base
+$ conda remove --name ansible-dev --all
+```
+
+如果出现无法解决的错误,通常可以通过在 `~/miniconda3/envs` 中找到该环境并删除整个目录来直接删除环境。如果基础环境损坏了,你可以删除整个 `~/miniconda3`,然后从 PKG 文件中重新安装。只要确保保留 `~/miniconda3/envs` ,或使用 Conda 工具导出环境配置并在以后重新创建即可。
+
+MacOS 上不包括 `sshpass` 程序。只有当你的 Ansible 工作要求你向 Ansible 提供 SSH 登录密码时,才需要它。你可以在 SourceForge 上找到当前的 [sshpass 源代码][6]。
+
+最后,基础的 Conda Python 模块列表可能缺少你工作所需的一些 Python 模块。如果你需要安装一个模块,首选命令是 `conda install package`,但是需要的话也可以使用 `pip`,Conda 会识别安装的模块。
+
+### 结论
+
+Ansible 是一个强大的自动化工具,值得我们去学习。Conda 是一个简单有效的 Python 虚拟环境管理工具。
+
+在你的 MacOS 环境中保持软件安装分离是保持日常工作环境的稳定性和健全性的谨慎方法。Conda 尤其有助于升级你的 Python 版本,将 Ansible 从其他项目中分离出来,并安全地使用 Ansible。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/using-conda-ansible-administration-macos
+
+作者:[James Farrell][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jamesf
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
+[2]: https://brew.sh/
+[3]: https://docs.ansible.com/?extIdCarryOver=true&sc_cid=701f2000001OH6uAAG
+[4]: https://conda.io/projects/conda/en/latest/index.html
+[5]: https://docs.conda.io/en/latest/miniconda.html
+[6]: https://sourceforge.net/projects/sshpass/
diff --git a/published/20190829 Getting started with HTTPie for API testing.md b/published/20190829 Getting started with HTTPie for API testing.md
new file mode 100644
index 0000000000..c85c165df5
--- /dev/null
+++ b/published/20190829 Getting started with HTTPie for API testing.md
@@ -0,0 +1,334 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11333-1.html)
+[#]: subject: (Getting started with HTTPie for API testing)
+[#]: via: (https://opensource.com/article/19/8/getting-started-httpie)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshezhttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/jamesf)
+
+使用 HTTPie 进行 API 测试
+======
+
+> 使用 HTTPie 调试 API,这是一个用 Python 写的易用的命令行工具。
+
+
+
+[HTTPie][2] 是一个非常易用、易于升级的 HTTP 客户端。它的发音为 “aitch-tee-tee-pie” 并以 `http` 命令运行,它是一个用 Python 编写的来用于访问 Web 的命令行工具。
+
+由于这是一篇关于 HTTP 客户端的指导文章,因此你需要一个 HTTP 服务器来试用它。在这里,我们将访问 [httpbin.org][3],它是一个简单的开源 HTTP 请求和响应服务。httpbin.org 网站是测试 Web API 的强大方式,可以细致地控制并显示请求和响应的内容,不过现在让我们先专注于 HTTPie 的强大功能。
+
+### Wget 和 cURL 的替代品
+
+你可能听说过古老的 [Wget][4] 或稍微新一些的 [cURL][5] 工具,它们允许你从命令行访问 Web。它们是为访问网站而编写的,而 HTTPie 则用于访问 Web API。
+
+网站请求发生在计算机和最终用户之间,用户阅读并响应他所看到的内容,因此并不太依赖结构化的响应。但是,API 请求是在两台计算机之间进行的*结构化*调用,人并不在该流程之中,而像 HTTPie 这样的命令行工具的参数可以有效地处理这个问题。
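为了直观说明“结构化”的含义,下面用一小段 Python(示例数据为虚构,仿照 httpbin 的返回格式)演示程序如何直接解析 JSON 响应并取出字段:

```python
import json

# 一段类似 httpbin.org/get 返回内容的 JSON 响应(虚构的示例数据)
raw = '{"args": {}, "headers": {"Host": "httpbin.org"}, "url": "https://httpbin.org/get"}'

# 结构化响应可以被程序直接解析,人不需要参与其中
resp = json.loads(raw)
print(resp["url"])              # https://httpbin.org/get
print(resp["headers"]["Host"])  # httpbin.org
```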
+
+### 安装 HTTPie
+
+有几种方法可以安装 HTTPie。你可以通过包管理器安装,无论你使用的是 `brew`、`apt`、`yum` 还是 `dnf`。但是,如果你已配置 [virtualenvwrapper][6],那么你可以用自己的方式安装:
+
+
+```
+$ mkvirtualenv httpie
+...
+(httpie) $ pip install httpie
+...
+(httpie) $ deactivate
+$ alias http=~/.virtualenvs/httpie/bin/http
+$ http -b GET https://httpbin.org/get
+{
+ "args": {},
+ "headers": {
+ "Accept": "*/*",
+ "Accept-Encoding": "gzip, deflate",
+ "Host": "httpbin.org",
+ "User-Agent": "HTTPie/1.0.2"
+ },
+ "origin": "104.220.242.210, 104.220.242.210",
+ "url": "https://httpbin.org/get"
+}
+```
+
+通过将 `http` 别名指向虚拟环境中的命令,即使虚拟环境处于非活动状态,你也可以运行它。你可以将这条 `alias` 命令放在 `.bash_profile` 或 `.bashrc` 中,然后使用以下命令升级 HTTPie:
+
+
+```
+$ ~/.virtualenvs/httpie/bin/pip install -U pip
+```
+
+### 使用 HTTPie 查询网站
+
+HTTPie 可以简化查询和测试 API。上面使用了一个选项,`-b`(即 `--body`)。没有它,HTTPie 将默认打印整个响应,包括响应头:
+
+```
+$ http GET https://httpbin.org/get
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Connection: keep-alive
+Content-Encoding: gzip
+Content-Length: 177
+Content-Type: application/json
+Date: Fri, 09 Aug 2019 20:19:47 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+
+{
+ "args": {},
+ "headers": {
+ "Accept": "*/*",
+ "Accept-Encoding": "gzip, deflate",
+ "Host": "httpbin.org",
+ "User-Agent": "HTTPie/1.0.2"
+ },
+ "origin": "104.220.242.210, 104.220.242.210",
+ "url": "https://httpbin.org/get"
+}
+```
+
+这在调试 API 服务时非常重要,因为大量信息是通过响应头发送的。例如,查看发送的 cookie 通常很重要。httpbin.org 提供了通过 URL 路径设置 cookie(用于测试目的)的方式。以下设置一个名为 `opensource`、值为 `awesome` 的 cookie:
+
+```
+$ http GET https://httpbin.org/cookies/set/opensource/awesome
+HTTP/1.1 302 FOUND
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Connection: keep-alive
+Content-Length: 223
+Content-Type: text/html; charset=utf-8
+Date: Fri, 09 Aug 2019 20:22:39 GMT
+Location: /cookies
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+Set-Cookie: opensource=awesome; Path=/
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+
+
+Redirecting...
+Redirecting...
+You should be redirected automatically to target URL:
+/cookies. If not click the link.
+```
+
+注意 `Set-Cookie: opensource=awesome; Path=/` 的响应头。这表明你预期设置的 cookie 已正确设置,路径为 `/`。另请注意,即使你得到了 `302` 重定向,`http` 也不会遵循它。如果你想要遵循重定向,则需要明确使用 `--follow` 标志请求:
+
+```
+$ http --follow GET https://httpbin.org/cookies/set/opensource/awesome
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Connection: keep-alive
+Content-Encoding: gzip
+Content-Length: 66
+Content-Type: application/json
+Date: Sat, 10 Aug 2019 01:33:34 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+
+{
+ "cookies": {
+ "opensource": "awesome"
+ }
+}
+```
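顺带一提,前面看到的 `Set-Cookie: opensource=awesome; Path=/` 响应头本身也是结构化文本。如果想在脚本中检查它,可以用 Python 标准库解析(仅为示意):

```python
from http.cookies import SimpleCookie

# 解析一个 Set-Cookie 头的值
cookie = SimpleCookie()
cookie.load("opensource=awesome; Path=/")

print(cookie["opensource"].value)    # awesome
print(cookie["opensource"]["path"])  # /
```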
+
+但此时你无法看到原来的 `Set-Cookie` 头。为了看到中间响应,你需要使用 `--all`:
+
+
+```
+$ http --headers --all --follow GET https://httpbin.org/cookies/set/opensource/awesome
+HTTP/1.1 302 FOUND
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Content-Type: text/html; charset=utf-8
+Date: Sat, 10 Aug 2019 01:38:40 GMT
+Location: /cookies
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+Set-Cookie: opensource=awesome; Path=/
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+Content-Length: 223
+Connection: keep-alive
+
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Content-Encoding: gzip
+Content-Type: application/json
+Date: Sat, 10 Aug 2019 01:38:41 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+Content-Length: 66
+Connection: keep-alive
+```
+
+打印响应体并不总是有趣,因为你大多数时候只关心 cookie。如果你想让中间请求只显示响应头,而最终请求同时显示响应头和响应体,你可以使用:
+
+```
+$ http --print hb --history-print h --all --follow GET https://httpbin.org/cookies/set/opensource/awesome
+HTTP/1.1 302 FOUND
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Content-Type: text/html; charset=utf-8
+Date: Sat, 10 Aug 2019 01:40:56 GMT
+Location: /cookies
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+Set-Cookie: opensource=awesome; Path=/
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+Content-Length: 223
+Connection: keep-alive
+
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Content-Encoding: gzip
+Content-Type: application/json
+Date: Sat, 10 Aug 2019 01:40:56 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+Content-Length: 66
+Connection: keep-alive
+
+{
+ "cookies": {
+ "opensource": "awesome"
+ }
+}
+```
+
+你可以使用 `--print` 精确控制打印的内容(`h`:响应头;`b`:响应体),并使用 `--history-print` 覆盖中间请求的打印内容设置。
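作为示意,HTTPie 的打印规格可以理解为一组字母开关:大写的 `H`、`B` 针对请求,小写的 `h`、`b` 针对响应。下面的小函数(假设性示例,并非 HTTPie 的内部实现)演示如何把规格字符串展开为可读描述:

```python
# --print 规格中各字母的含义(H/B 针对请求,h/b 针对响应)
PRINT_PARTS = {
    "H": "request headers",
    "B": "request body",
    "h": "response headers",
    "b": "response body",
}

def expand_print_spec(spec):
    """把类似 'hb' 的规格字符串展开为描述列表(示意实现)。"""
    return [PRINT_PARTS[ch] for ch in spec]

print(expand_print_spec("hb"))  # ['response headers', 'response body']
```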
+
+### 使用 HTTPie 下载二进制文件
+
+有时响应体并不是文本形式,需要将它保存到文件中,再用其他应用打开:
+
+```
+$ http GET https://httpbin.org/image/jpeg
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Connection: keep-alive
+Content-Length: 35588
+Content-Type: image/jpeg
+Date: Fri, 09 Aug 2019 20:25:49 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+
+
++-----------------------------------------+
+| NOTE: binary data not shown in terminal |
++-----------------------------------------+
+```
+
+要得到正确的图片,你需要保存到文件:
+
+```
+$ http --download GET https://httpbin.org/image/jpeg
+HTTP/1.1 200 OK
+Access-Control-Allow-Credentials: true
+Access-Control-Allow-Origin: *
+Connection: keep-alive
+Content-Length: 35588
+Content-Type: image/jpeg
+Date: Fri, 09 Aug 2019 20:28:13 GMT
+Referrer-Policy: no-referrer-when-downgrade
+Server: nginx
+X-Content-Type-Options: nosniff
+X-Frame-Options: DENY
+X-XSS-Protection: 1; mode=block
+
+Downloading 34.75 kB to "jpeg.jpe"
+Done. 34.75 kB in 0.00068s (50.05 MB/s)
+```
+
+试一下!图片很可爱。
+
+### 使用 HTTPie 发送自定义请求
+
+你可以发送指定的请求头。这对于需要非标准头的自定义 Web API 很有用:
+
+```
+$ http GET https://httpbin.org/headers X-Open-Source-Com:Awesome
+{
+ "headers": {
+ "Accept": "*/*",
+ "Accept-Encoding": "gzip, deflate",
+ "Host": "httpbin.org",
+ "User-Agent": "HTTPie/1.0.2",
+ "X-Open-Source-Com": "Awesome"
+ }
+}
+```
+
+最后,如果要发送 JSON 字段(尽管可以指定确切的内容),对于许多嵌套较少的输入,你可以使用快捷方式:
+
+
+```
+$ http --body PUT https://httpbin.org/anything open-source=awesome author=moshez
+{
+ "args": {},
+ "data": "{\"open-source\": \"awesome\", \"author\": \"moshez\"}",
+ "files": {},
+ "form": {},
+ "headers": {
+ "Accept": "application/json, */*",
+ "Accept-Encoding": "gzip, deflate",
+ "Content-Length": "46",
+ "Content-Type": "application/json",
+ "Host": "httpbin.org",
+ "User-Agent": "HTTPie/1.0.2"
+ },
+ "json": {
+ "author": "moshez",
+ "open-source": "awesome"
+ },
+ "method": "PUT",
+ "origin": "73.162.254.113, 73.162.254.113",
+ "url": "https://httpbin.org/anything"
+}
+```
+
+下次在调试 Web API 时,无论是你自己的还是别人的,记得放下 cURL,试试 HTTPie 这个命令行工具。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/getting-started-httpie
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshezhttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/jamesf
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pie-raspberry-bake-make-food.png?itok=QRV_R8Fa (Raspberry pie with slice missing)
+[2]: https://httpie.org/
+[3]: https://github.com/postmanlabs/httpbin
+[4]: https://en.wikipedia.org/wiki/Wget
+[5]: https://en.wikipedia.org/wiki/CURL
+[6]: https://opensource.com/article/19/6/virtual-environments-python-macos
diff --git a/published/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md b/published/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md
new file mode 100644
index 0000000000..f0d1ba9087
--- /dev/null
+++ b/published/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md
@@ -0,0 +1,138 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11315-1.html)
+[#]: subject: (Three Ways to Exclude Specific/Certain Packages from Yum Update)
+[#]: via: (https://www.2daygeek.com/redhat-centos-yum-update-exclude-specific-packages/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+从 Yum 更新中排除特定/某些包的三种方法
+======
+
+
+
+作为系统更新的一部分,你也许需要在基于 Red Hat 的系统中,由于应用的依赖关系而排除一些软件包。
+
+如果是,如何排除?可以采取多少种方式?有三种方式可以做到,我们会在本篇中教你这三种方法。
+
+包管理器是一组工具,它允许用户在 Linux 系统中轻松管理包。它能让用户在 Linux 系统中安装、更新/升级、删除、查询、重新安装和搜索软件包。
+
+对于基于 Red Hat 的系统,我们使用 [yum 包管理器][1] 和 [rpm 包管理器][2] 进行包管理。
+
+### 什么是 yum?
+
+yum 代表 “Yellowdog Updater, Modified”。Yum 是用于 rpm 系统的自动更新程序和包安装/卸载器。
+
+它在安装包时自动解决依赖关系。
+
+### 什么是 rpm?
+
+rpm 代表 “Red Hat Package Manager”,它是一款用于 Red Hat 系统的功能强大的包管理工具。
+
+RPM 指的是 `.rpm` 文件格式,它包含已编译的软件和必要的库。
+
+你可能有兴趣阅读以下与本主题相关的文章。如果是的话,请进入相应的链接。
+
+ * [如何检查 Red Hat(RHEL)和 CentOS 系统上的可用安全更新][3]
+ * [在 Red Hat(RHEL)和 CentOS 系统上安装安全更新的四种方法][4]
+ * [在 Redhat(RHEL)和 CentOS 系统上检查或列出已安装的安全更新的两种方法][5]
+
+### 方法 1:手动或临时用 yum 命令排除包
+
+我们可以在 yum 中使用 `--exclude` 或 `-x` 开关来阻止 yum 命令获取特定包的更新。
+
+我可以说,这是一种临时方法或按需方法。如果你只想将特定包排除一次,那么我们可以使用此方法。
+
+以下命令将更新除 kernel 之外的所有软件包。
+
+要排除单个包:
+
+```
+# yum update --exclude=kernel
+或者
+# yum update -x 'kernel'
+```
+
+要排除多个包,可以多次指定该选项。以下命令将更新除 kernel 和 php 之外的所有软件包:
+
+```
+# yum update --exclude=kernel* --exclude=php*
+或者
+# yum update --exclude kernel*,php*
+```
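`--exclude` 的参数是按 shell 风格的通配符来匹配包名的。下面用 Python 的 `fnmatch` 演示 `kernel*`、`php*` 这样的模式会匹配到哪些包(包名列表为虚构示例):

```python
from fnmatch import fnmatch

# 虚构的已安装包名列表,仅用于演示通配符匹配
packages = ["kernel", "kernel-headers", "php", "php-cli", "httpd"]
patterns = ["kernel*", "php*"]

# 被排除的包:匹配任一模式即被排除
excluded = [p for p in packages if any(fnmatch(p, pat) for pat in patterns)]
print(excluded)  # ['kernel', 'kernel-headers', 'php', 'php-cli']
```

像 `httpd` 这样不匹配任何模式的包仍会照常更新。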
+
+### 方法 2:在 yum 命令中永久排除软件包
+
+这是永久性方法,如果你经常执行修补程序更新,那么可以使用此方法。
+
+为此,请在 `/etc/yum.conf` 中添加相应的软件包以永久禁用软件包更新。
+
+添加后,每次运行 `yum update` 命令时都不需要指定这些包。此外,这可以防止任何意外更新这些包。
+
+```
+# vi /etc/yum.conf
+
+[main]
+cachedir=/var/cache/yum/$basearch/$releasever
+keepcache=0
+debuglevel=2
+logfile=/var/log/yum.log
+exactarch=1
+obsoletes=1
+gpgcheck=1
+plugins=1
+installonly_limit=3
+exclude=kernel* php*
+```
+
+### 方法 3:使用 Yum versionlock 插件排除包
+
+这也是与上面类似的永久方法。Yum versionlock 插件允许用户通过 `yum` 命令锁定指定包的更新。
+
+为此,请运行以下命令。以下命令将从 `yum update` 中排除 freetype 包。
+
+或者,你可以直接在 `/etc/yum/pluginconf.d/versionlock.list` 中添加条目。
+
+```
+# yum versionlock add freetype
+
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
+Adding versionlock on: 0:freetype-2.8-12.el7
+versionlock added: 1
+```
+
+运行以下命令来检查被 versionlock 插件锁定的软件包列表。
+
+```
+# yum versionlock list
+
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
+0:freetype-2.8-12.el7.*
+versionlock list done
+```
+
+运行以下命令清空该列表。
+
+```
+# yum versionlock clear
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/redhat-centos-yum-update-exclude-specific-packages/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[2]: https://www.2daygeek.com/rpm-command-examples/
+[3]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
+[4]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
+[5]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/
diff --git a/published/20190830 Change your Linux terminal color theme.md b/published/20190830 Change your Linux terminal color theme.md
new file mode 100644
index 0000000000..321dc40997
--- /dev/null
+++ b/published/20190830 Change your Linux terminal color theme.md
@@ -0,0 +1,86 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11310-1.html)
+[#]: subject: (Change your Linux terminal color theme)
+[#]: via: (https://opensource.com/article/19/8/add-color-linux-terminal)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+如何更改 Linux 终端颜色主题
+======
+
+> 你可以用丰富的选项来定义你的终端主题。
+
+
+
+如果你大部分时间都盯着终端,那么你很自然地希望它看起来能赏心悦目。美与不美,全在观者,自 CRT 串口控制台以来,终端已经经历了很多变迁。因此,你的软件终端窗口有丰富的选项,可以用来定义你看到的主题,不管你如何定义美,这总是件好事。
+
+### 设置
+
+包括 GNOME、KDE 和 Xfce 在内的流行的软件终端应用,它们都提供了更改其颜色主题的选项。调整主题就像调整应用首选项一样简单。Fedora、RHEL 和 Ubuntu 默认使用 GNOME,因此本文使用该终端作为示例,但对 Konsole、Xfce 终端和许多其他终端的设置流程类似。
+
+首先,进入到应用的“首选项”或“设置”面板。在 GNOME 终端中,你可以通过屏幕顶部或窗口右上角的“应用”菜单访问它。
+
+在“首选项”中,单击“配置文件”旁边的加号(“+”)来创建新的主题配置文件。在新配置文件中,单击“颜色”选项卡。
+
+![GNOME Terminal preferences][2]
+
+在“颜色”选项卡中,取消选择“使用系统主题中的颜色”选项,以使窗口的其余部分变为可选状态。最开始,你可以选择内置的颜色方案。这些包括浅色主题,它有明亮的背景和深色的前景文字;还有深色主题,它有深色背景和浅色前景文字。
+
+当没有其他设置(例如 `dircolors` 命令的设置)覆盖它们时,“默认颜色”色板将同时定义前景色和背景色。“调色板”设置的是 `dircolors` 命令定义的颜色,终端通过 `LS_COLORS` 环境变量使用这些颜色,在 [ls][3] 命令的输出中添加颜色。如果这些颜色不吸引你,请在此更改它们。
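`LS_COLORS` 的值是一串用冒号分隔的 `类型=颜色代码` 对。下面用 Python 演示如何把它解析成字典(示例字符串为常见的默认值片段):

```python
def parse_ls_colors(value):
    """把 LS_COLORS 风格的字符串解析为 {类型: 颜色代码} 字典。"""
    result = {}
    for entry in value.split(":"):
        if "=" in entry:
            key, _, code = entry.partition("=")
            result[key] = code
    return result

# di=目录,ln=符号链接;01;34 是粗体蓝色,01;36 是粗体青色
colors = parse_ls_colors("di=01;34:ln=01;36")
print(colors)  # {'di': '01;34', 'ln': '01;36'}
```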
+
+如果对主题感到满意,请关闭“首选项”窗口。
+
+要将终端更改为新的配置文件,请单击“应用”菜单,然后选择“配置文件”。选择新的配置文件,接着享受自定义主题。
+
+![GNOME Terminal profile selection][4]
+
+### 命令选项
+
+如果你的终端没有合适的设置窗口,它仍然可以在启动命令中提供颜色选项。xterm 和 rxvt 终端(旧的和启用 Unicode 的变体,有时称为 urxvt 或 rxvt-unicode)都提供了这样的选项,因此即使没有桌面环境和大型 GUI 框架,你仍然可以设置终端模拟器的主题。
+
+两个明显的选项是前景色和背景色,分别用 `-fg` 和 `-bg` 定义。每个选项的参数是*颜色名*而不是它的 ANSI 编号。例如:
+
+```
+$ urxvt -bg black -fg green
+```
+
+这些会设置默认的前景和背景。如果有任何其他规则会控制特定文件或设备类型的颜色,那么就使用这些颜色。有关如何设置它们的信息,请参阅 [dircolors][5] 命令。
+
+你还可以使用 `-cr` 设置文本光标(而不是鼠标光标)的颜色:
+
+```
+$ urxvt -bg black -fg green -cr teal
+```
+
+![Setting color in urxvt][6]
+
+你的终端模拟器可能还有更多选项,如边框颜色(rxvt 中的 `-bd`)、光标闪烁(urxvt 中的 `-bc` 和 `+bc`),甚至背景透明度。请参阅终端的手册页,了解更多的功能。
+
+要使用你选择的颜色启动终端,你可以将选项添加到用于启动终端的命令或菜单中(例如,在你的 Fluxbox 菜单文件、`$HOME/.local/share/applications` 目录中的 `.desktop` 或者类似的)。或者,你可以使用 [xrdb][7] 工具来管理与 X 相关的资源(但这超出了本文的范围)。
+
+### 家是可定制的地方
+
+自定义 Linux 机器并不意味着你需要学习如何编程。你可以而且应该进行小而有意义的更改,来使你的数字家庭感觉更舒适。而且没有比终端更好的起点了!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/add-color-linux-terminal
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
+[2]: https://opensource.com/sites/default/files/uploads/gnome-terminal-preferences.jpg (GNOME Terminal preferences)
+[3]: https://opensource.com/article/19/7/master-ls-command
+[4]: https://opensource.com/sites/default/files/uploads/gnome-terminal-profile-select.jpg (GNOME Terminal profile selection)
+[5]: http://man7.org/linux/man-pages/man1/dircolors.1.html
+[6]: https://opensource.com/sites/default/files/uploads/urxvt-color.jpg (Setting color in urxvt)
+[7]: https://www.x.org/releases/X11R7.7/doc/man/man1/xrdb.1.xhtml
diff --git a/published/20190830 How to Create and Use Swap File on Linux.md b/published/20190830 How to Create and Use Swap File on Linux.md
new file mode 100644
index 0000000000..d8db4b5623
--- /dev/null
+++ b/published/20190830 How to Create and Use Swap File on Linux.md
@@ -0,0 +1,254 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11341-1.html)
+[#]: subject: (How to Create and Use Swap File on Linux)
+[#]: via: (https://itsfoss.com/create-swap-file-linux/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Linux 上创建和使用交换文件
+======
+
+本教程讨论了 Linux 中交换文件的概念,为什么使用它以及它相对于传统交换分区的优势。你将学习如何创建交换文件和调整其大小。
+
+### 什么是 Linux 的交换文件?
+
+交换文件允许 Linux 将磁盘空间模拟为内存。当你的系统开始耗尽内存时,它会使用交换空间将内存的一些内容交换到磁盘空间上。这样释放了内存,为更重要的进程服务。当内存再次空闲时,它会从磁盘交换回数据。我建议[阅读这篇文章,了解 Linux 上的交换空间的更多内容][1]。
+
+传统上,交换空间是磁盘上的一个独立分区。安装 Linux 时,只需创建一个单独的分区进行交换。但是这种趋势在最近几年发生了变化。
+
+使用交换文件,你不再需要单独的分区。你只需在根目录下创建一个文件,并告诉系统将其用作交换空间即可。
+
+使用专用的交换分区,在许多情况下,调整交换空间的大小是一件痛苦甚至不可能完成的任务。但是有了交换文件,你可以随意调整它们的大小。
+
+最新版本的 Ubuntu 和其他一些 Linux 发行版已经开始[默认使用交换文件][2]。即使你没有创建交换分区,Ubuntu 也会自己创建一个 1GB 左右的交换文件。
+
+让我们看看交换文件的更多信息。
+
+
+
+### 检查 Linux 的交换空间
+
+在你开始添加交换空间之前,最好检查一下你的系统中是否已经有了交换空间。
+
+你可以用 [Linux 上的 free 命令][4]检查它。就我而言,我的[戴尔 XPS][5]有 14GB 的交换容量。
+
+```
+free -h
+ total used free shared buff/cache available
+Mem: 7.5G 4.1G 267M 971M 3.1G 2.2G
+Swap: 14G 0B 14G
+```
+
+`free` 命令给出了交换空间的大小,但它并没有告诉你它是真实的交换分区还是交换文件。`swapon` 命令在这方面会更好。
+
+```
+swapon --show
+NAME TYPE SIZE USED PRIO
+/dev/nvme0n1p4 partition 14.9G 0B -2
+```
+
+如你所见,我有 14.9GB 的交换空间,它在一个单独的分区上。如果是交换文件,类型应该是 `file` 而不是 `partition`。
+
+```
+swapon --show
+NAME TYPE SIZE USED PRIO
+/swapfile file 2G 0B -2
+```
+
+如果你的系统上没有交换空间,它应该显示如下内容:
+
+```
+free -h
+ total used free shared buff/cache available
+Mem: 7.5G 4.1G 267M 971M 3.1G 2.2G
+Swap: 0B 0B 0B
+```
+
+而 `swapon` 命令不会显示任何输出。
+
+
+### 在 Linux 上创建交换文件
+
+如果你的系统没有交换空间,或者你认为交换空间不足,你可以在 Linux 上创建交换文件。你也可以创建多个交换文件。
+
+让我们看看如何在 Linux 上创建交换文件。我在本教程中使用 Ubuntu 18.04,但它也应该适用于其他 Linux 发行版本。
+
+#### 步骤 1:创建一个新的交换文件
+
+首先,创建一个具有所需交换空间大小的文件。假设我想给我的系统增加 1GB 的交换空间。使用 `fallocate` 命令创建大小为 1GB 的文件。
+
+```
+sudo fallocate -l 1G /swapfile
+```
+
+建议只允许 `root` 用户读写该交换文件。当你尝试将此文件用于交换区域时,你甚至会看到类似“不安全权限 0644,建议 0600”的警告。
+
+```
+sudo chmod 600 /swapfile
+```
+
+请注意,交换文件的名称可以是任意的。如果你需要多个交换空间,你可以给它任何合适的名称,如 `swap_file_1`、`swap_file_2` 等。它们只是一个预定义大小的文件。
+
+#### 步骤 2:将新文件标记为交换空间
+
+你需要告诉 Linux 系统该文件将被用作交换空间。你可以用 [mkswap][7] 工具做到这一点。
+
+```
+sudo mkswap /swapfile
+```
+
+你应该会看到这样的输出:
+
+```
+Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
+no label, UUID=7e1faacb-ea93-4c49-a53d-fb40f3ce016a
+```
+
+#### 步骤 3:启用交换文件
+
+现在,你的系统知道文件 `swapfile` 可以用作交换空间。但是还没有完成。你需要启用该交换文件,以便系统可以开始使用该文件作为交换。
+
+```
+sudo swapon /swapfile
+```
+
+现在,如果你检查交换空间,你应该会看到你的 Linux 系统会识别并使用它作为交换空间:
+
+```
+swapon --show
+NAME TYPE SIZE USED PRIO
+/swapfile file 1024M 0B -2
+```
+
+#### 步骤 4:让改变持久化
+
+迄今为止你所做的一切都是暂时的。重新启动系统,所有更改都将消失。
+
+你可以通过将新创建的交换文件添加到 `/etc/fstab` 文件来使更改持久化。
+
+对 `/etc/fstab` 文件进行任何更改之前,最好先进行备份。
+
+```
+sudo cp /etc/fstab /etc/fstab.back
+```
+
+现在将以下行添加到 `/etc/fstab` 文件的末尾:
+
+```
+/swapfile none swap sw 0 0
+```
+
+你可以使用[命令行文本编辑器][8]手动操作,或者使用以下命令:
+
+```
+echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
+```
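fstab 条目由 6 个用空白分隔的字段组成:设备、挂载点、文件系统类型、挂载选项、dump 标志和 fsck 顺序。下面的小脚本演示如何校验刚才添加的那一行(仅为示意):

```python
def parse_fstab_line(line):
    """把一行 fstab 条目拆成 6 个标准字段。"""
    fields = line.split()
    if len(fields) != 6:
        raise ValueError("fstab 条目应当恰好有 6 个字段")
    keys = ["device", "mountpoint", "fstype", "options", "dump", "pass"]
    return dict(zip(keys, fields))

entry = parse_fstab_line("/swapfile none swap sw 0 0")
print(entry["fstype"])  # swap
```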
+
+现在一切都准备好了。即使在重新启动你的 Linux 系统后,你的交换文件也会被使用。
+
+### 调整 swappiness 参数
+
+`swappiness` 参数决定了交换空间的使用频率。`swappiness` 值的范围从 0 到 100。较高的值意味着交换空间将被更频繁地使用。
+
+Ubuntu 桌面的默认的 `swappiness` 是 60,而服务器的默认 `swappiness` 是 1。你可以使用以下命令检查 `swappiness`:
+
+```
+cat /proc/sys/vm/swappiness
+```
+
+为什么服务器应该使用低的 `swappiness` 值?因为交换空间比内存慢,为了获得更好的性能,应该尽可能多地使用内存。在服务器上,性能因素至关重要,因此 `swappiness` 应该尽可能低。
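下面是一个读取当前 `swappiness` 值的小示例(在没有该文件的系统上,比如非 Linux,回退到默认值 60,仅为示意):

```python
import os

def read_swappiness(path="/proc/sys/vm/swappiness", default=60):
    """读取当前 swappiness;文件不存在时返回默认值。"""
    if not os.path.exists(path):
        return default
    with open(path) as f:
        return int(f.read().strip())

print(read_swappiness())
```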
+
+你可以使用以下系统命令动态更改 `swappiness`:
+
+```
+sudo sysctl vm.swappiness=25
+```
+
+这种改变只是暂时的。如果要使其永久化,可以编辑 `/etc/sysctl.conf` 文件,并在文件末尾添加 `swappiness` 值:
+
+```
+vm.swappiness=25
+```
+
+### 在 Linux 上调整交换空间的大小
+
+在 Linux 上有几种方法可以调整交换空间的大小。但是在你看到这一点之前,你应该了解一些关于它的事情。
+
+当你要求系统停止将交换文件用于交换空间时,它会将所有数据(确切地说是内存页)传输回内存。所以你应该有足够的空闲内存,然后再停止交换。
+
+这就是为什么创建和启用另一个临时交换文件是一个好的做法的原因。这样,当你关闭原来的交换空间时,你的系统将使用临时交换文件。现在你可以调整原来的交换空间的大小。你可以手动删除临时交换文件或留在那里,下次启动时会自动删除(LCTT 译注:存疑?)。
+
+如果你有足够的可用内存或者创建了临时交换空间,那就关闭你原来的交换文件。
+
+```
+sudo swapoff /swapfile
+```
+
+现在你可以使用 `fallocate` 命令来更改文件的大小。比方说,你将其大小更改为 2GB:
+
+```
+sudo fallocate -l 2G /swapfile
+```
+
+现在再次将文件标记为交换空间:
+
+```
+sudo mkswap /swapfile
+```
+
+并再次启用交换文件:
+
+```
+sudo swapon /swapfile
+```
+
+你也可以选择同时拥有多个交换文件。
+
+### 删除 Linux 中的交换文件
+
+你可能出于某些原因不想在 Linux 上使用交换文件。如果想删除它,其过程与你刚才看到的调整交换文件大小的过程类似。
+
+首先,确保你有足够的空闲内存。现在关闭交换文件:
+
+```
+sudo swapoff /swapfile
+```
+
+下一步是从 `/etc/fstab` 文件中删除相应的条目。
+
+最后,你可以删除该文件来释放空间:
+
+```
+sudo rm /swapfile
+```
+
+### 你用了交换空间了吗?
+
+我想你现在已经很好地理解了 Linux 中的交换文件概念。现在,你可以根据需要轻松创建交换文件或调整它们的大小。
+
+如果你对这个话题有什么要补充的或者有任何疑问,请在下面留下评论。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/create-swap-file-linux/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/swap-size/
+[2]: https://help.ubuntu.com/community/SwapFaq
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/swap-file-linux.png?resize=800%2C450&ssl=1
+[4]: https://linuxhandbook.com/free-command/
+[5]: https://itsfoss.com/dell-xps-13-ubuntu-review/
+[6]: https://itsfoss.com/fix-missing-system-settings-ubuntu-1404-quick-tip/
+[7]: http://man7.org/linux/man-pages/man8/mkswap.8.html
+[8]: https://itsfoss.com/command-line-text-editors-linux/
+[9]: https://itsfoss.com/replace-linux-from-dual-boot/
diff --git a/published/20190830 git exercises- navigate a repository.md b/published/20190830 git exercises- navigate a repository.md
new file mode 100644
index 0000000000..2c7899d172
--- /dev/null
+++ b/published/20190830 git exercises- navigate a repository.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11379-1.html)
+[#]: subject: (git exercises: navigate a repository)
+[#]: via: (https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+Git 练习:存储库导航
+======
+
+我觉得前几天的 [curl 练习][1]进展顺利,所以今天我醒来后,想尝试编写一些 Git 练习。Git 要学的东西很多,恐怕不是几个小时就能学完的,所以我分解练习的第一个思路是从“导航”一个存储库开始。
+
+我本来打算使用一个玩具测试库,但后来我想,为什么不使用真正的存储库呢?这样更有趣!因此,我们将浏览 Ruby 编程语言的存储库。你无需了解任何 C 即可完成此练习,只需熟悉一下存储库中的文件随时间变化的方式即可。
+
+### 克隆存储库
+
+开始之前,需要克隆存储库:
+
+```
+git clone https://github.com/ruby/ruby
+```
+
+与实际使用的大多数存储库相比,该存储库的最大不同之处在于它没有分支,但是它有很多标签,它们与分支相似,因为它们都只是指向一个提交的指针而已。因此,我们将使用标签而不是分支进行练习。*改变*标签的方式和分支非常不同,但*查看*标签和分支的方式完全相同。
+
+### Git SHA 总是引用同一个代码
+
+执行这些练习时要记住的最重要的一点是,如本页面所述,像 `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 这样的 Git SHA 始终引用同一份代码。下图摘自我与凯蒂·西勒·米勒撰写的一本杂志,名为《[Oh shit, git!][2]》。(她还有一个名为 wizardzines.com 的很棒的网站,启发了该杂志。)
+
+
+
+我们将在练习中大量使用 Git SHA,以使你习惯于使用它们,并帮助你了解它们与标签和分支的对应关系。
+
+### 我们将要使用的 Git 子命令
+
+所有这些练习仅使用这 5 个 Git 子命令:
+
+```
+git checkout
+git log (--oneline, --author, and -S will be useful)
+git diff (--stat will be useful)
+git show
+git status
+```
+
+### 练习
+
+1. 查看 matz 从 1998 年开始的 Ruby 提交。提交 ID 为 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`。找出当时 Ruby 的代码行数。
+2. 检出当前的 master 分支。
+3. 查看文件 `hash.c` 的历史记录。更改该文件的最后一个提交 ID 是什么?
+4. 了解最近 20 年来 `hash.c` 的变化:将 master 分支上的文件与提交 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4` 的文件进行比较。
+5. 查找最近更改了 `hash.c` 的提交,并查看该提交的差异。
+6. 对于每个 Ruby 版本,该存储库都有一堆**标签**。获取所有标签的列表。
+7. 找出在标签 `v1_8_6_187` 和标签 `v1_8_6_188` 之间更改了多少文件。
+8. 查找 2015 年的提交(任何一个提交)并将其检出,简单地查看一下文件,然后返回 master 分支。
+9. 找出标签 `v1_8_6_187` 对应的提交。
+10. 列出目录 `.git/refs/tags`。运行 `cat .git/refs/tags/v1_8_6_187` 来查看其中一个文件的内容。
+11. 找出当前 `HEAD` 对应的提交 ID。
+12. 找出已经对 `test/` 目录进行了多少次提交。
+13. 提交 `65a5162550f58047974793cdc8067a970b2435c0` 和 `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 之间的 `lib/telnet.rb` 的差异。该文件更改了几行?
+14. 在 Ruby 2.5.1 和 2.5.2 之间进行了多少次提交(标记为 `v2_5_1` 和 `v2_5_2`)?(这一步有点棘手,不止一步。)
+15. “matz”(Ruby 的创建者)作了多少提交?
+16. 最近包含 “tkutil” 一词的提交是什么?
+17. 检出提交 `e51dca2596db9567bd4d698b18b4d300575d3881` 并创建一个指向该提交的新分支。
+18. 运行 `git reflog` 以查看你到目前为止完成的所有存储库导航操作。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca/blog/2019/08/27/curl-exercises/
+[2]: https://wizardzines.com/zines/oh-shit-git/
diff --git a/published/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md b/published/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md
new file mode 100644
index 0000000000..c6370eb975
--- /dev/null
+++ b/published/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md
@@ -0,0 +1,88 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11292-1.html)
+[#]: subject: (Google opens Android speech transcription and gesture tracking, Twitter's telemetry tooling, Blender's growing adoption, and more news)
+[#]: via: (https://opensource.com/19/8/news-august-31)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+开源新闻综述:谷歌开源 Android 语音转录和手势追踪、Twitter 的遥测工具
+======
+
+> 不要错过两周以来最大的开源头条新闻。
+
+![Weekly news roundup with TV][1]
+
+在本期的开源新闻综述中,我们来看看谷歌发布的两个开源软件、Twitter 的最新可观测性工具、动漫工作室对 Blender 的采用在增多等等新闻!
+
+### 谷歌的开源双响炮
+
+搜索引擎巨头谷歌的开发人员最近一直忙于开源。在过去的两周里,他们以开源的方式发布了两个截然不同的软件。
+
+第一个是 Android 的语音识别和转录工具 Live Transcribe 的[语音引擎][2],它可以“在移动设备上使用机器学习算法将音频变成实时字幕”。谷歌的声明称,它正在开源 Live Transcribe 以“让所有开发人员可以为长篇对话提供字幕”。你可以[在 GitHub 上][3]浏览或下载 Live Transcribe 的源代码。
+
+谷歌还为 Android 和 iOS 开源了[手势跟踪软件][4],它建立在其 [MediaPipe][5] 机器学习框架之上。该软件结合了三种人工智能组件:手掌探测器、“返回 3D 手点”的模型和手势识别器。据谷歌研究人员称,其目标是改善“跨各种技术领域和平台的用户体验”。该软件的源代码和文档[可在 GitHub 上获得][6]。
+
+### Twitter 开源 Rezolus 遥测工具
+
+当想到网络中断时,我们想到的是影响站点或服务性能的大崩溃或减速。但让人意外的是,那些一点点蚕食性能的小尖峰同样重要。为了更容易地检测这些尖峰,Twitter 开发了一个名为 Rezolus 的工具,该公司[已将其开源][7]。
+
+> 我们现有的按分钟采样的遥测技术未能反映出这些异常现象。这是因为相对于该异常发生的长度,较低的采样率掩盖了这些持续时间大约为 10 秒的异常。这使得很难理解正在发生的事情并调整系统以获得更高的性能。
+
+Rezolus 旨在检测“非常短暂但有时显著的性能异常”,这些异常仅持续几秒钟。Twitter 已经运行了 Rezolus 大约一年,并且一直将它收集的内容“与后端服务日志结合,来确定峰值的来源”。
+
+如果你对将 Rezolus 添加到可观测性堆栈中的结果感到好奇,请查看 Twitter 的 [GitHub 存储库][8]中的源代码。
+
+### 日本的 Khara 动画工作室采用 Blender
+
+Blender 被认为是开源的动画和视觉效果软件的黄金标准。它被几家制作公司采用,其中最新的是[日本动漫工作室 Khara][9]。
+
+Khara 正在使用 Blender 开发 Evangelion: 3.0+1.0,这是基于流行动漫系列《Neon Genesis Evangelion》的电影系列的最新版本。虽然该电影的工作不能在 Blender 中全部完成,但 Khara 的员工“将从 2020 年 6 月开始使用 Blender 进行大部分工作”。为了强调其对 Blender 和开源的承诺,“Khara 宣布它将作为企业会员加入 Blender 基金会的发展基金。”
+
+### NSA 分享其固件安全工具
+
+继澳大利亚同行[共享他们的一个工具][10]之后,美国国家安全局(NSA)正在[公开][11]一个项目的成果,它“可以更好地保护机器免受固件攻击”。这个最新的软件以及其他保护固件的开源工作可以在 [Coreboot Gerrit 存储库][12]下找到。
+
+这个名为“具有受保护执行的 SMI 传输监视器”(STM-PE)的软件“将与运行 Coreboot 的 x86 处理器配合使用”以防止固件攻击。根据 NSA 高级网络安全实验室的 Eugene Meyers 的说法,STM-PE 采用低级操作系统代码“并将其置于一个盒子中,以便它只能访问它需要访问的设备系统”。这有助于防止篡改,Meyers 说,“这将提高系统的安全性。”
+
+### 其它新闻
+
+* [Linux 内核中的 exFAT?是的!][13]
+* [瓦伦西亚继续支持 Linux 学校发行版][14]
+* [西班牙首个开源卫星][15]
+* [Western Digital 从开放标准到开源芯片的长途旅行][16]
+* [用于自动驾驶汽车多模传感器的 Waymo 开源数据集][17]
+
+一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/19/8/news-august-31
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
+[2]: https://venturebeat.com/2019/08/16/google-open-sources-live-transcribes-speech-engine/
+[3]: https://github.com/google/live-transcribe-speech-engine
+[4]: https://venturebeat.com/2019/08/19/google-open-sources-gesture-tracking-ai-for-mobile-devices/
+[5]: https://github.com/google/mediapipe
+[6]: https://github.com/google/mediapipe/blob/master/mediapipe/docs/hand_tracking_mobile_gpu.md
+[7]: https://blog.twitter.com/engineering/en_us/topics/open-source/2019/introducing-rezolus.html
+[8]: https://github.com/twitter/rezolus
+[9]: https://www.neowin.net/news/anime-studio-khara-is-planning-to-use-open-source-blender-software/
+[10]: https://linux.cn/article-11241-1.html
+[11]: https://www.cyberscoop.com/nsa-firmware-open-source-coreboot-stm-pe-eugene-myers/
+[12]: https://review.coreboot.org/admin/repos
+[13]: https://cloudblogs.microsoft.com/opensource/2019/08/28/exfat-linux-kernel/
+[14]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/120000-lliurex-desktops
+[15]: https://hackaday.com/2019/08/15/spains-first-open-source-satellite/
+[16]: https://www.datacenterknowledge.com/open-source/western-digitals-long-trip-open-standards-open-source-chips
+[17]: https://venturebeat.com/2019/08/21/waymo-open-sources-data-set-for-autonomous-vehicle-multimodal-sensors/
diff --git a/published/20190901 Different Ways to Configure Static IP Address in RHEL 8.md b/published/20190901 Different Ways to Configure Static IP Address in RHEL 8.md
new file mode 100644
index 0000000000..d67e035961
--- /dev/null
+++ b/published/20190901 Different Ways to Configure Static IP Address in RHEL 8.md
@@ -0,0 +1,237 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11390-1.html)
+[#]: subject: (Different Ways to Configure Static IP Address in RHEL 8)
+[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+在 RHEL8 配置静态 IP 地址的不同方法
+======
+
+在 Linux 服务器上工作时,在网卡/以太网卡上分配静态 IP 地址是每个 Linux 工程师的常见任务之一。如果一个人在 Linux 服务器上正确配置了静态地址,那么他/她就可以通过网络远程访问它。在本文中,我们将演示在 RHEL 8 服务器网卡上配置静态 IP 地址的不同方法。
+
+
+
+以下是在网卡上配置静态 IP 的方法:
+
+ * `nmcli`(命令行工具)
+ * 网络脚本文件(`ifcfg-*`)
+ * `nmtui`(基于文本的用户界面)
+
+### 使用 nmcli 命令行工具配置静态 IP 地址
+
+每当我们安装 RHEL 8 服务器时,就会自动安装命令行工具 `nmcli`,它是由网络管理器使用的,可以让我们在以太网卡上配置静态 IP 地址。
+
+运行下面的 `ip addr` 命令,列出 RHEL 8 服务器上的以太网卡:
+
+```
+[root@linuxtechi ~]# ip addr
+```
+
+正如我们在上面的命令输出中看到的,我们有两个网卡 `enp0s3` 和 `enp0s8`。当前分配给网卡的 IP 地址是通过 DHCP 服务器获得的。
+
+假设我们希望在第一个网卡 (`enp0s3`) 上分配静态 IP 地址,具体内容如下:
+
+ * IP 地址 = 192.168.1.4
+ * 网络掩码 = 255.255.255.0
+ * 网关 = 192.168.1.1
+ * DNS = 8.8.8.8
+
+依次运行以下 `nmcli` 命令来配置静态 IP。
+
+使用 `nmcli connection` 命令列出当前活动的以太网卡:
+
+```
+[root@linuxtechi ~]# nmcli connection
+NAME UUID TYPE DEVICE
+enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3
+virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0
+[root@linuxtechi ~]#
+```
+
+使用下面的 `nmcli` 给 `enp0s3` 分配静态 IP。
+
+**命令语法:**
+
+```
+# nmcli connection modify <interface-name> ipv4.addresses <ip/prefix>
+```
+
+**注意:** 为了简化语句,在 `nmcli` 命令中,我们通常用 `con` 关键字替换 `connection`,并用 `mod` 关键字替换 `modify`。
+
+将 IPv4 地址(192.168.1.4)分配到 `enp0s3` 网卡上:
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
+```
+
+使用下面的 `nmcli` 命令设置网关:
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
+```
+
+设置手动配置(从 dhcp 到 static),
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
+```
+
+设置 DNS 值为 “8.8.8.8”,
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
+```
+
+要保存上述更改并重新加载,请执行如下 `nmcli` 命令,
+
+```
+[root@linuxtechi ~]# nmcli con up enp0s3
+Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
+```
+
+以上命令显示网卡 `enp0s3` 已成功配置。我们使用 `nmcli` 命令所做的那些更改都将永久保存在文件 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 里。
+
+```
+[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
+```
+
+![ifcfg-enp0s3-file-rhel8][2]
+
+要确认 IP 地址是否分配给了 `enp0s3` 网卡,请使用以下 `ip` 命令查看,
+
+```
+[root@linuxtechi ~]# ip addr show enp0s3
+```
+
+### 使用网络脚本文件(ifcfg-*)手动配置静态 IP 地址
+
+我们可以使用配置以太网卡的网络脚本或 `ifcfg-*` 文件来配置以太网卡的静态 IP 地址。假设我们想在第二个以太网卡 `enp0s8` 上分配静态 IP 地址:
+
+* IP 地址 = 192.168.1.91
+* 前缀 = 24
+* 网关 = 192.168.1.1
+* DNS1 = 4.2.2.2
+
+
+转到目录 `/etc/sysconfig/network-scripts`,查找文件 `ifcfg-enp0s8`,如果它不存在,则使用以下内容创建它,
+
+```
+[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
+[root@linuxtechi network-scripts]# vi ifcfg-enp0s8
+TYPE="Ethernet"
+DEVICE="enp0s8"
+BOOTPROTO="static"
+ONBOOT="yes"
+NAME="enp0s8"
+IPADDR="192.168.1.91"
+PREFIX="24"
+GATEWAY="192.168.1.1"
+DNS1="4.2.2.2"
+```
+
+保存并退出文件,然后重新启动网络管理器服务以使上述更改生效,
+
+```
+[root@linuxtechi network-scripts]# systemctl restart NetworkManager
+```
+
+现在使用下面的 `ip` 命令来验证 IP 地址是否分配给网卡,
+
+```
+[root@linuxtechi ~]# ip add show enp0s8
+3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8
+ valid_lft forever preferred_lft forever
+ inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link
+ valid_lft forever preferred_lft forever
+[root@linuxtechi ~]#
+```
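+
+在脚本中校验配置结果时,往往只需要上面输出中的 IPv4 地址。下面的示意用一行示例文本代替真实的 `ip` 输出,演示如何用 `awk` 把它提取出来:
+
+```
+sample='    inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8'
+# 匹配 inet 行并打印第二个字段(地址/前缀)
+echo "$sample" | awk '/inet /{print $2}'
+# 输出:192.168.1.91/24
+```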
+
+以上输出内容确认静态 IP 地址已在网卡 `enp0s8` 上成功配置了。
+
+### 使用 nmtui 实用程序配置静态 IP 地址
+
+`nmtui` 是一个基于文本用户界面的网络管理器控制工具。当我们执行 `nmtui` 时,它将打开一个基于文本的用户界面,通过它我们可以添加、修改和删除连接。除此之外,`nmtui` 还可以用来设置系统的主机名。
+
+假设我们希望通过以下细节将静态 IP 地址分配给网卡 `enp0s3` ,
+
+* IP 地址 = 10.20.0.72
+* 前缀 = 24
+* 网关 = 10.20.0.1
+* DNS1 = 4.2.2.2
+
+运行 `nmtui` 并按照屏幕说明操作,示例如下所示,
+
+```
+[root@linuxtechi ~]# nmtui
+```
+
+![nmtui-rhel8][3]
+
+选择第一个选项 “Edit a connection”,然后选择接口为 “enp0s3”,
+
+![Choose-interface-nmtui-rhel8][4]
+
+选择 “Edit”,然后指定 IP 地址、前缀、网关和域名系统服务器 IP,
+
+![set-ip-nmtui-rhel8][5]
+
+选择 “OK”,然后按回车。在下一个窗口中,选择 “Activate a connection”,
+
+![Activate-option-nmtui-rhel8][6]
+
+选择 “enp0s3”,选择 “Deactivate” 并点击回车,
+
+![Deactivate-interface-nmtui-rhel8][7]
+
+现在选择 “Activate” 并点击回车,
+
+![Activate-interface-nmtui-rhel8][8]
+
+选择 “Back”,然后选择 “Quit”,
+
+![Quit-Option-nmtui-rhel8][9]
+
+使用下面的 `ip` 命令验证 IP 地址是否已分配给接口 `enp0s3`,
+
+```
+[root@linuxtechi ~]# ip add show enp0s3
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff
+ inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3
+ valid_lft forever preferred_lft forever
+ inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute
+ valid_lft forever preferred_lft forever
+[root@linuxtechi ~]#
+```
+
+以上输出内容显示我们已经使用 `nmtui` 实用程序成功地将静态 IP 地址分配给接口 `enp0s3`。
+
+以上就是本教程的全部内容,我们已经介绍了在 RHEL 8 系统上为以太网卡配置 IPv4 地址的三种不同方法。请在下面的评论部分分享反馈和评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[1]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-Static-IP-RHEL8.jpg
+[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/ifcfg-enp0s3-file-rhel8.jpg
+[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/nmtui-rhel8.jpg
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-interface-nmtui-rhel8.jpg
+[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/set-ip-nmtui-rhel8.jpg
+[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-option-nmtui-rhel8.jpg
+[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Deactivate-interface-nmtui-rhel8.jpg
+[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-interface-nmtui-rhel8.jpg
+[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Quit-Option-nmtui-rhel8.jpg
diff --git a/published/20190902 Why I use Java.md b/published/20190902 Why I use Java.md
new file mode 100644
index 0000000000..d4ec8d6570
--- /dev/null
+++ b/published/20190902 Why I use Java.md
@@ -0,0 +1,105 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11337-1.html)
+[#]: subject: (Why I use Java)
+[#]: via: (https://opensource.com/article/19/9/why-i-use-java)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
+
+我为什么使用 Java
+======
+
+> 根据你的工作需要,可能有比 Java 更好的语言,但是我还没有看到任何能把我拉走的语言。
+
+
+
+我记得我是从 1997 年开始使用 Java 的,就在 [Java 1.1 刚刚发布][2]不久之后。从那时起,总的来说,我非常喜欢用 Java 编程;虽然我得承认,这些日子我经常像在 Java 中编写“严肃的代码”一样编写 [Groovy][3] 脚本。
+
+来自 [FORTRAN][4]、[PL/1][5]、[Pascal][6] 以及最后的 [C 语言][7] 背景,我发现了许多让我喜欢 Java 的东西。Java 是我[面向对象编程][8]的第一次重要实践经验。到那时,我已经编程了大约 20 年,而且可以说我对什么重要、什么不重要有了一些看法。
+
+### 调试是一个关键的语言特性
+
+我真的很讨厌浪费时间追踪由我的代码不小心迭代到数组末尾而导致的模糊错误,特别是在 IBM 大型机上用 FORTRAN 编程的时代。另一个不时出现的隐晦问题是调用一个子程序时传入了四字节的整数参数,而子程序预期的是两字节参数;在小端架构上,这通常是一个良性的错误,但在大端机器上,前两个字节通常不为零。
+
+在那种批处理环境中进行调试也非常不便,通过核心转储或插入打印语句进行调试,这些语句本身会移动错误的位置甚至使它们消失。
+
+所以我使用 Pascal 的早期体验,先是在 [MTS][9] 上,然后是在 [IBM OS/VS1][10] 上使用相同的 MTS 编译器,让我的生活变得更加轻松。Pascal 的[强类型和静态类型][11]是取得这种胜利的重要组成部分,我使用的每个 Pascal 编译器都会在数组的边界和范围上插入运行时检查,因此错误可以在发生时检测到。当我们在 20 世纪 80 年代早期将大部分工作转移到 Unix 系统时,移植 Pascal 代码是一项简单的任务。
+
+### 适量的语法
+
+但是对于我所喜欢的 Pascal 来说,我的代码很冗长,而且语法似乎要比代码还要多;例如,使用:
+
+```
+if ... then begin ... end else begin ... end
+```
+
+而不是 C 或类似语言中的:
+
+```
+if (...) { ... } else { ... }
+```
+
+另外,有些事情在 Pascal 中很难完成,在 C 中更容易。但是,当我开始越来越多地使用 C 时,我发现自己遇到了我曾经在 FORTRAN 中遇到的同样类型的错误,例如,超出数组边界。在原始的错误点未检测到数组结束,而仅在程序执行后期才会检测到它们的不利影响。幸运的是,我不再生活在那种批处理环境中,并且手头有很好的调试工具。不过,C 对于我来说有点太灵活了。
+
+当我遇到 [awk][12] 时,我发现它与 C 相比又是另外一种样子。那时,我的很多工作都涉及转换字段数据并创建报告。我发现用 `awk` 加上其他 Unix 命令行工具,如 `sort`、`sed`、`cut`、`join`、`paste`、`comm` 等等,可以做到事情令人吃惊。从本质上讲,这些工具给了我一个像是基于文本文件的关系数据库管理器,这种文本文件具有列式结构,是我们很多字段数据的保存方式。或者,即便不是这种格式,大部分时候也可以从关系数据库或某种二进制格式导出到列式结构中。
+
+`awk` 支持的字符串处理、[正则表达式][13]和[关联数组][14],以及 `awk` 的基本特性(它实际上是一个数据转换管道),非常符合我的需求。当面对二进制数据文件、复杂的数据结构和关键性能需求时,我仍然会转回到 C;但随着我越来越多地使用 `awk`,我发现 C 的非常基础的字符串支持越来越令人沮丧。随着时间的推移,更多的时候我只会在必须时才使用 C,并且在其余的时候里大量使用 `awk`。
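+
+文中描述的这种 `awk` 加命令行工具的数据转换管道,大致是下面这样工作的(示例数据为虚构的列式字段数据):
+
+```
+# 从逗号分隔的字段数据中筛选第二列 >= 30 的记录,再按名字排序
+printf 'carol,35\nalice,30\nbob,25\n' |
+awk -F, '$2 >= 30 {print $1}' | sort
+# 依次输出 alice 和 carol
+```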
+
+### Java 的抽象层级合适
+
+然后是 Java。它看起来相当不错 —— 相对简洁的语法,让人联想到 C,或者这种相似性至少要比 Pascal 或其他任何早期的语言更为明显。它是强类型的,因此很多编程错误会在编译时被捕获。它似乎并不需要过多的面向对象的知识就能起步,这是一件好事,因为我当时对 [OOP 设计模式][15]毫不熟悉。但即使在刚刚开始,我也喜欢它的简化[继承模型][16]背后的思想。(Java 允许使用提供的接口进行单继承,以在某种程度上丰富范例。)
+
+它似乎带有丰富的功能库(即“自备电池”的概念),在适当的水平上直接满足了我的需求。最后,我发现自己很快就会想到将数据和行为在对象中组合在一起的想法。这似乎是明确控制数据之间交互的好方法 —— 比大量的参数列表或对全局变量的不受控制的访问要好得多。
+
+从那以后,Java 在我的编程工具箱中成为了瑞士军刀。我仍然偶尔会在 `awk` 中编写程序,或者使用 Linux 命令行实用程序(如 `cut`、`sort` 或 `sed`),因为它们显然是解决手头问题的直接方法。我怀疑过去 20 年我可能没写过 50 行的 C 语言代码;Java 完全满足了我的需求。
+
+此外,Java 一直在不断改进。首先,它变得更加高效。并且它添加了一些非常有用的功能,例如[可以用 try 来测试资源][17],它可以很好地清理在文件 I/O 期间冗长而有点混乱的错误处理代码;或 [lambda][18],它提供了声明函数并将其作为参数传递的能力,而旧方法需要创建类或接口来“托管”这些函数;或[流][19],它在函数中封装了迭代行为,可以创建以链式函数调用形式实现的高效数据转换管道。
+
+### Java 越来越好
+
+许多语言设计者研究了从根本上改善 Java 体验的方法。对我来说,其中大部分没有引起我的太多兴趣;再次,这更多地反映了我的典型工作流程,并且(更多地)减少了这些语言带来的功能。但其中一个演化步骤已经成为我的编程工具中不可或缺的一部分:[Groovy][20]。当我遇到一个小问题,需要一个简单的解决方案时,Groovy 已经成为了我的首选。而且,它与 Java 高度兼容。对我来说,Groovy 填补了 Python 为许多其他人所提供的相同用处 —— 它紧凑、DRY(不要重复自己)和具有表达性(列表和词典有完整的语言支持)。我还使用了 [Grails][21],它使用 Groovy 为非常高性能和有用的 Java Web 应用程序提供简化的 Web 框架。
+
+### Java 仍然开源吗?
+
+最近,对 [OpenJDK][22] 越来越多的支持进一步提高了我对 Java 的舒适度。许多公司以各种方式支持 OpenJDK,包括 [AdoptOpenJDK、Amazon 和 Red Hat][23]。在我的一个更大、更长期的项目中,我们使用 AdoptOpenJDK [来在几个桌面平台上生成自定义的运行时环境][24]。
+
+有没有比 Java 更好的语言?我确信有,这取决于你的工作需要。但我一直对 Java 非常满意,还没有遇到任何能把我从它身边拉走的东西。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/why-i-use-java
+
+作者:[Chris Hermansen][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-beans.jpg?itok=3hkjX5We (Coffee beans)
+[2]: https://en.wikipedia.org/wiki/Java_version_history
+[3]: https://en.wikipedia.org/wiki/Apache_Groovy
+[4]: https://en.wikipedia.org/wiki/Fortran
+[5]: https://en.wikipedia.org/wiki/PL/I
+[6]: https://en.wikipedia.org/wiki/Pascal_(programming_language)
+[7]: https://en.wikipedia.org/wiki/C_(programming_language)
+[8]: https://en.wikipedia.org/wiki/Object-oriented_programming
+[9]: https://en.wikipedia.org/wiki/Michigan_Terminal_System
+[10]: https://en.wikipedia.org/wiki/OS/VS1
+[11]: https://stackoverflow.com/questions/11889602/difference-between-strong-vs-static-typing-and-weak-vs-dynamic-typing
+[12]: https://en.wikipedia.org/wiki/AWK
+[13]: https://en.wikipedia.org/wiki/Regular_expression
+[14]: https://en.wikipedia.org/wiki/Associative_array
+[15]: https://opensource.com/article/19/7/understanding-software-design-patterns
+[16]: https://www.w3schools.com/java/java_inheritance.asp
+[17]: https://www.baeldung.com/java-try-with-resources
+[18]: https://www.baeldung.com/java-8-lambda-expressions-tips
+[19]: https://www.tutorialspoint.com/java8/java8_streams
+[20]: https://groovy-lang.org/
+[21]: https://grails.org/
+[22]: https://openjdk.java.net/
+[23]: https://en.wikipedia.org/wiki/OpenJDK
+[24]: https://opensource.com/article/19/4/java-se-11-removing-jnlp
diff --git a/published/20190903 5 open source speed-reading applications.md b/published/20190903 5 open source speed-reading applications.md
new file mode 100644
index 0000000000..ef922bf1e8
--- /dev/null
+++ b/published/20190903 5 open source speed-reading applications.md
@@ -0,0 +1,93 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11316-1.html)
+[#]: subject: (5 open source speed-reading applications)
+[#]: via: (https://opensource.com/article/19/8/speed-reading-open-source)
+[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
+
+5 个开源的速读应用
+======
+
+> 使用这五个应用训练自己更快地阅读文本。
+
+
+
+英国散文家和政治家 [Joseph Addison][2] 曾经说过,“读书益智,运动益体。”如今,我们大多数人(如果不是全部)每天都通过计算机显示器、电视屏幕、移动设备、街道标志、报纸和杂志进行阅读,并在工作场所和学校里阅读文献,以此训练我们的大脑。
+
+鉴于我们每天都会收到大量的书面信息,通过做一些挑战我们经典的阅读习惯并教会我们吸收更多内容和数据的特定练习,来训练我们的大脑以便更快地阅读似乎是有利的。学习这些技能的目的不仅仅是浏览文本,因为没有理解的阅读就是浪费精力。目标是提高你的阅读速度,同时仍然达到高水平的理解。
+
+### 阅读和处理输入
+
+在深入探讨速读之前,让我们来看看阅读过程。根据法国眼科医生 Louis Emile Javal 的说法,阅读分为三个步骤:
+
+ 1. 固定
+ 2. 处理
+ 3. [扫视][3]
+
+在第一步,我们确定文本中的固定点,称为最佳识别点。在第二步中,我们在眼睛固定的同时引入(处理)新信息。最后,我们改变注视点的位置,这是一种称为扫视的操作,此时未获取任何新信息。
+
+在实践中,快速阅读者与普通读者之间的主要差异在于:注视时间短于平均水平、扫视跨度更大,以及重读更少。
+
+### 阅读练习
+
+阅读并不是人类的天然本能,因为它在人类的历史长河中出现得相当晚。第一个书写系统大约创建于 5000 年前,这段时间还不足以让人类进化成“阅读机器”。因此,我们必须锻炼我们的阅读技巧,在这项基本的沟通任务中变得更加娴熟和高效。
+
+第一项练习是减少默读,也被称为无声发音,即阅读时在内心发音的习惯。这是一个拖慢阅读速度的自然过程,因为阅读速度会被限制在语速之内。减少默读的关键是只读出其中一部分单词。一种方法是让内部声音忙于其他任务,例如嚼口香糖。
+
+第二个练习包括减少回归,或称为重读。回归是一种懒惰的机制,因为我们的大脑可以随时重读任何材料,从而降低注意力。
+
+### 5 个开源应用来训练你的大脑
+
+有几个有趣的开源应用可用于锻炼你的阅读速度。
+
+一个是 [Gritz][4],它是一个开源文件阅读器,它一次一个地弹出单词,以减少回归。它适用于 Linux、Windows 和 MacOS,并在 GPL 许可证下发布,因此你可以随意使用它。
+
+其他选择包括 [Spray Speed-Reader][5],一个用 JavaScript 编写的开源速读应用,以及 [Sprits-it!][6],一个开源 Web 应用,可以快速阅读网页。
+
+对于 Android 用户,[Comfort Reader][7] 是一个开源的速读应用。它可以在 [F-droid][8] 和 [Google Play][9] 应用商店中找到。
+
+我最喜欢的应用是 [Speedread][10],它是一个简单的终端程序,可以在最佳识别点逐字显示文本。要安装它,请在你的设备上克隆 GitHub 仓库,然后输入相应的命令,以喜好的每分钟字数(WPM)来阅读文档。默认速率为 250 WPM。例如,要以 400 WPM 阅读 `your_text_file.txt`,你应该输入:
+
+```
+cat your_text_file.txt | ./speedread -w 400
+```
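+
+顺便可以用一点 shell 算术粗略估算阅读耗时,比如以 400 WPM 读完一篇 6000 词的文章(数字仅为演示):
+
+```
+words=6000; wpm=400
+echo "$(( words / wpm )) 分钟"
+# 输出:15 分钟
+```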
+
+下面是该程序的运行界面:
+
+![Speedread demo][11]
+
+由于你可能不会只阅读[纯文本][12],因此可以使用 [Pandoc][13] 将文件从标记格式转换为文本格式。你还可以使用 Android 终端模拟器 [Termux][14] 在 Android 设备上运行 Speedread。
+
+### 其他方案
+
+对于开源社区来说,构建一个解决方案是一个有趣的项目,它仅为了通过使用特定练习来提高阅读速度,以此改进如默读和重读。我相信这个项目会非常有益,因为在当今信息丰富的环境中,提高阅读速度非常有价值。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/speed-reading-open-source
+
+作者:[Jaouhari Youssef][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jaouhari
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_stack_library_reading.jpg?itok=uulcS8Sw (stack of books)
+[2]: https://en.wikipedia.org/wiki/Joseph_Addison
+[3]: https://en.wikipedia.org/wiki/Saccade
+[4]: https://github.com/jeffkowalski/gritz
+[5]: https://github.com/chaimpeck/spray
+[6]: https://github.com/the-happy-hippo/sprits-it
+[7]: https://github.com/mschlauch/comfortreader
+[8]: https://f-droid.org/packages/com.mschlauch.comfortreader/
+[9]: https://play.google.com/store/apps/details?id=com.mschlauch.comfortreader
+[10]: https://github.com/pasky/speedread
+[11]: https://opensource.com/sites/default/files/uploads/speedread_demo.gif (Speedread demo)
+[12]: https://plaintextproject.online/
+[13]: https://opensource.com/article/18/9/intro-pandoc
+[14]: https://termux.com/
diff --git a/published/20190903 An introduction to Hyperledger Fabric.md b/published/20190903 An introduction to Hyperledger Fabric.md
new file mode 100644
index 0000000000..3bd8f72221
--- /dev/null
+++ b/published/20190903 An introduction to Hyperledger Fabric.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11328-1.html)
+[#]: subject: (An introduction to Hyperledger Fabric)
+[#]: via: (https://opensource.com/article/19/9/introduction-hyperledger-fabric)
+[#]: author: (Matt Zand https://opensource.com/users/mattzandhttps://opensource.com/users/ron-mcfarlandhttps://opensource.com/users/wonderchook)
+
+Hyperledger Fabric 介绍
+======
+
+> Hyperledger (超级账本)是一组开源工具,旨在构建一个强大的、业务驱动的区块链框架。
+
+
+
+[Hyperledger][2] (超级账本)是区块链行业中最大的项目之一,它由一组开源工具和多个子项目组成。该项目是由 Linux 基金会主办的一个全球协作项目,其中包括一些不同领域的领导者们,这些领导者们的目标是建立一个强大的、业务驱动的区块链框架。
+
+区块链网络主要有三种类型:公共区块链、联盟或联合区块链,以及私有区块链。Hyperledger 是一个区块链框架,旨在帮助公司建立私人或联盟许可的区块链网络,在该网络中,多个组织可以共享控制和操作网络内节点的权限。
+
+因为区块链是一个透明的,基于不可变模式的安全的去中心化系统,所以它被认为是传统的供应链行业改变游戏规则的一种解决方案。它可以通过以下方式支持有效的供应链系统:
+
+* 跟踪整个区块链中的产品
+* 校验和验证区块链中的产品
+* 在供应链参与者之间共享整个区块链的信息
+* 提供可审核性
+
+本文通过食品供应链的例子来解释 Hyperledger 区块链是如何改变传统供应链系统的。
+
+### 食品行业供应链
+
+传统供应链效率低下的主要原因是由于缺乏透明度而导致报告不可靠和竞争上的劣势。
+
+在传统的供应链模式中,有关实体的信息对该区块链中的其他人来说并不完全透明,这就导致了不准确的报告和缺乏互操作性问题。电子邮件和印刷文档提供了一些信息,但它们不可能包含完整详细的可见性数据,因为很难在整个供应链中去追踪产品。这也使消费者几乎不可能知道产品的真正价值和来源。
+
+食品行业的供应链环境复杂,多个参与者需要协作将货物运送到最终目的地 —— 客户手中。下图显示了食品供应链(多级)网络中的主要参与者。
+
+![典型的食品供应链][3]
+
+该区块链的每个阶段都会引入潜在的安全问题、整合问题和其他低效问题。目前食品供应链中的主要威胁仍然是假冒食品和食品欺诈。
+
+基于 Hyperledger 区块链的食品跟踪系统可实现对食品信息全面的可视性和可追溯性。更重要的是,它以一种不可篡改的方式来记录产品细节,确保食品信息的真实性。最终用户通过在不可变框架上共享产品的详细信息,可以自行验证产品的真实性。
+
+### Hyperledger Fabric
+
+Hyperledger Fabric 是 Hyperledger 项目的基石。它是基于许可的区块链,或者更准确地说是一种分布式分类帐技术(DLT),该技术最初由 IBM 公司和 Digital Asset 创建。分布式分类帐技术被设计为具有不同组件的模块化框架(概述如下)。它也是提供可插入的共识模型的一种灵活的解决方案,尽管它目前仅提供基于投票的许可共识(假设今天的 Hyperledger 网络在部分可信赖的环境中运行)。
+
+鉴于此,无需匿名矿工来验证交易,也无需用作激励措施的相关货币。所有的参与者必须经过身份验证才能参与到该区块链进行交易。与以太坊一样,Hyperledger Fabric 支持智能合约,在 Hyperledger 中称为 Chaincodes,这些合约描述并执行系统的应用程序逻辑。
+
+然而,与以太坊不同,Hyperledger Fabric 不需要昂贵的挖矿计算来提交交易,因此它有助于构建可以在更短的延迟内进行扩展的区块链。
+
+Hyperledger Fabric 不同于以太坊或比特币这样的区块链,不仅在于它们类型不同,或者说是它与货币无关,而且它们在内部机制方面也不同。以下是典型的 Hyperledger 网络的关键要素:
+
+* 账本:存储了一系列块,这些块保留了所有状态交易的所有不可变历史记录。
+* 节点:区块链的逻辑实体。它有三种类型:
+ * 客户端:是代表用户向网络提交事务的应用程序。
+ * 对等体:是提交交易并维护分类帐状态的实体。
+  * 排序者:在客户端和对等体之间创建共享通信渠道,还将区块链交易打包成块,发送给负责提交的对等体节点。
+
+除了这些要素,Hyperledger Fabric 还有以下关键设计功能:
+
+* 链码:类似于其它诸如以太坊的网络中的智能合约。它是用一种更高级的语言编写的程序,在针对分类帐当前状态的数据库执行。
+* 通道:用于在多个网络成员之间共享机密信息的专用通信子网。每笔交易都在一个只有经过身份验证和授权的各方可见的通道上执行。
+* 背书人:验证交易,调用链码,并将背书的交易结果返回给调用应用程序。
+* 成员服务提供商(MSP):通过颁发和验证证书来提供身份认证和验证过程。MSP 确定信任哪些证书颁发机构(CA)去定义信任域的成员,并确定成员可能扮演的特定角色(成员、管理员等)。
+
+### 如何验证交易
+
+探究一笔交易是如何通过验证的是理解 Hyperledger Fabric 在底层如何工作的好方法。此图显示了在典型的 Hyperledger 网络中处理交易的端到端系统流程:
+
+![Hyperledger 交易验证流程][4]
+
+首先,客户端通过向基于 Hyperledger Fabric 的应用程序客户端发送请求来启动交易,该客户端将交易提议提交给背书对等体。这些对等体通过执行由交易指定的链码(使用该状态的本地副本)来模拟该交易,并将结果发送回应用程序。此时,应用程序将交易与背书相结合,并将其广播给排序服务。排序服务检查背书并为每个通道创建一个交易块,然后将其广播给通道中的其它节点,对等体验证该交易并进行提交。
+
+Hyperledger Fabric 区块链可以通过透明的、不变的和共享的食品来源数据记录、处理数据,及运输细节等信息将食品供应链中的参与者们连接起来。链码由食品供应链中的授权参与者来调用。所有执行的交易记录都永久保存在分类帐中,所有参与者都可以查看此信息。
+
+### Hyperledger Composer
+
+除了 Fabric 或 Iroha 等区块链框架外,Hyperledger 项目还提供了 Composer、Explorer 和 Cello 等工具。Hyperledger Composer 提供了一个工具集,可帮助你更轻松地构建区块链应用程序。它包括:
+
+* CTO,一种建模语言
+* Playground,一种基于浏览器的开发工具,用于快速测试和部署
+* 命令行界面(CLI)工具
+
+Composer 支持 Hyperledger Fabric 的运行时和基础架构,在内部,Composer 的 API 使用底层 Fabric 的 API。Composer 在 Fabric 上运行,这意味着 Composer 生成的业务网络可以部署到 Hyperledger Fabric 执行。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/introduction-hyperledger-fabric
+
+作者:[Matt Zand][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mattzandhttps://opensource.com/users/ron-mcfarlandhttps://opensource.com/users/wonderchook
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chain.png?itok=sgAjswFf (Chain image)
+[2]: https://www.hyperledger.org/
+[3]: https://opensource.com/sites/default/files/uploads/foodindustrysupplychain.png (Typical food supply chain)
+[4]: https://opensource.com/sites/default/files/uploads/hyperledger-fabric-transaction-flow.png (Hyperledger transaction validation flow)
+[5]: https://coding-bootcamps.com/ultimate-guide-for-building-a-blockchain-supply-chain-using-hyperledger-fabric-and-composer.html
diff --git a/published/20190903 The birth of the Bash shell.md b/published/20190903 The birth of the Bash shell.md
new file mode 100644
index 0000000000..8102b01097
--- /dev/null
+++ b/published/20190903 The birth of the Bash shell.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11314-1.html)
+[#]: subject: (The birth of the Bash shell)
+[#]: via: (https://opensource.com/19/9/command-line-heroes-bash)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg)
+
+Bash shell 的诞生
+======
+
+> 本周的《代码英雄》播客深入研究了最广泛使用的、已经成为事实标准的脚本语言,它来自于自由软件基金会及其作者的早期灵感。
+
+![Listen to the Command Line Heroes Podcast][1]
+
+对于任何从事于系统管理员方面的人来说,Shell 脚本编程是一门必不可少的技能,而如今人们编写脚本的主要 shell 是 Bash。Bash 是几乎所有的 Linux 发行版和现代 MacOS 版本的默认配置,也很快就会成为 [Windows 终端][2]的原生部分。你可以说 Bash 无处不在。
+
+那么它是如何做到这一点的呢?本周的《[代码英雄][3]》播客将通过询问编写那些代码的人来深入研究这个问题。
+
+### 肇始于 Unix
+
+像所有编程方面的东西一样,我们必须追溯到 Unix。shell 的简短历史是这样的:1971 年,Ken Thompson 发布了第一个 Unix shell:Thompson shell。但是,脚本用户所能做的存在严重限制,这意味着严重制约了自动化以及整个 IT 运营领域。
+
+这个[奇妙的研究][4]概述了早期尝试脚本的挑战:
+
+> 类似于它在 Multics 中的前身,这个 shell(`/bin/sh`)是一个在内核外执行的独立用户程序。诸如通配(参数扩展的模式匹配,例如 `*.txt`)之类的概念是在一个名为 `glob` 的单独的实用程序中实现的,就像用于计算条件表达式的 `if` 命令一样。这种分离使 shell 变得更小,才不到 900 行的 C 源代码。
+>
+> shell 引入了紧凑的重定向(`<`、`>` 和 `>>`)和管道(`|` 或 `^`)语法,它们已经存在于现代 shell 中。你还可以找到对调用顺序命令(`;`)和异步命令(`&`)的支持。
+>
+> Thompson shell 缺少的是编写脚本的能力。它的唯一目的是作为一个交互式 shell(命令解释器)来调用命令和查看结果。
+
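+引文中提到的这些语法在今天的 shell 中仍然原样可用,下面是一个简单的演示:
+
+```
+f=$(mktemp)
+printf 'b\na\n' > "$f"      # “>” 重定向
+sort < "$f" | head -n 1     # “|” 管道
+echo one ; echo two         # “;” 顺序执行
+rm -f "$f"
+# 依次输出 a、one、two
+```
+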
+随着对终端使用的增长,对自动化的兴趣随之增长。
+
+### Bourne shell 前进一步
+
+在 Thompson 发布 shell 六年后的 1977 年,Stephen Bourne 发布了 Bourne shell,旨在解决 Thompson shell 中的脚本限制。(Chet Ramey 是自 1990 年以来 Bash 语言的主要维护者,他在这一集的《代码英雄》中讨论了它)。作为 Unix 系统的一部分,这是这个来自贝尔实验室的技术的自然演变。
+
+Bourne 打算做什么不同的事情?[研究员 M. Jones][4] 很好地概述了它:
+
+> Bourne shell 有两个主要目标:作为命令解释器以交互方式执行操作系统的命令,和用于脚本编程(编写可通过 shell 调用的可重用脚本)。除了替换 Thompson shell,Bourne shell 还提供了几个优于其前辈的优势。Bourne 将控制流、循环和变量引入脚本,提供了更具功能性的语言来(以交互式和非交互式)与操作系统交互。该 shell 还允许你使用 shell 脚本作为过滤器,为处理信号提供集成支持,但它缺乏定义函数的能力。最后,它结合了我们今天使用的许多功能,包括命令替换(使用后引号)和 HERE 文档(以在脚本中嵌入保留的字符串文字)。
+
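+其中的命令替换(反引号)和 HERE 文档至今仍是日常用法,比如:
+
+```
+word=`echo hello`          # 反引号命令替换(现代写法为 $(echo hello))
+cat <<EOF
+greeting: $word
+EOF
+# 输出:greeting: hello
+```
+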
+Bourne 在[之前的一篇采访中][5]这样描述它:
+
+> 最初的 shell (编程语言)不是一种真正的语言;它是一种记录 —— 一种从文件中线性执行命令序列的方法,唯一的控制流的原语是 `GOTO` 到一个标签。Ken Thompson 所编写的这个最初的 shell 的这些限制非常重要。例如,你无法简单地将命令脚本用作过滤器,因为命令文件本身是标准输入。而在过滤器中,标准输入是你从父进程继承的,不是命令文件。
+>
+> 最初的 shell 很简单,但随着人们开始使用 Unix 进行应用程序开发和脚本编写,它就太有限了。它没有变量、它没有控制流,而且它的引用能力非常不足。
+
+对于脚本编写者来说,这个新 shell 是一个巨大的进步,但前提是你可以使用它。
+
+### 以自由软件来重新构思 Bourne Shell
+
+在此之前,这个占主导地位的 shell 是由贝尔实验室拥有和管理的专有软件。幸运的话,你的大学可能有权访问 Unix shell。但这种限制性访问远非自由软件基金会(FSF)想要实现的世界。
+
+Richard Stallman 和一群志同道合的开发人员那时正在编写所有的 Unix 功能,其带有可以在 GNU 许可证下免费获得的许可。其中一个开发人员的任务是制作一个 shell,那位开发人员是 Brian Fox。他对他的任务的讲述十分吸引我。正如他在播客上所说:
+
+> 它之所以如此具有挑战性,是因为我们必须忠实地模仿 Bourne shell 的所有行为,同时允许扩展它以使其成为一个供人们使用的更好工具。
+
+而那时也恰逢人们在讨论 shell 标准是什么的时候。在这一历史背景和将来的竞争前景下,流行的 Bourne shell 被重新构想,并再次重生。
+
+### 重新打造 Bourne Shell
+
+自由软件的使命和竞争这两个催化剂使重制的 Bourne shell(Bash)具有了生命。和之前的 shell 不同,Fox 并没有用自己的名字来命名这个 shell,他专注的是从 Unix 到自由软件的演变。(虽然 Fox shell 这个名字作为 fsh 命令看起来要比 Fish shell 更合适 #missedopportunity)。这个命名选择似乎符合他的个性。正如 Fox 在剧集中所说,他甚至对个人的荣耀也不感兴趣;他只是试图帮助编程文化发展。不过,他倒不会放过一个好的双关语。
+
+而 Bourne 也并没有因为这个拿他的名字玩的文字游戏而感到被轻视。Bourne 讲述了一个故事:有人在一次会议上走到他面前,送给了他一件 Bash T 恤,而那个人正是 Brian Fox。
+
+Shell | 发布于 | 创造者
+---|---|---
+Thompson Shell | 1971 | Ken Thompson
+Bourne Shell | 1977 | Stephen Bourne
+Bourne-Again Shell | 1989 | Brian Fox
+
+随着时间的推移,Bash 逐渐成长。其他工程师开始使用它并对其设计进行改进。事实上,多年后,Fox 坚定地认为学会放弃控制 Bash 是他一生中最重要的事情之一。随着 Unix 让位于 Linux 和开源软件运动,Bash 成为开源世界的至关重要的脚本语言。这个伟大的项目似乎超出了单一一个人的愿景范围。
+
+### 我们能从 shell 中学到什么?
+
+shell 是一项技术,它是笔记本电脑日常使用中的一个组成部分,你很容易忘记它也需要发明出来。从 Thompson 到 Bourne 再到 Bash,shell 的故事为我们描绘了一些熟悉的结论:
+
+* 有动力的人可以在正确的使命中取得重大进展。
+* 我们今天所依赖的大部分内容都建立在我们行业中仍然活着的那些传奇人物打下的基础之上。
+* 能够生存下来的软件超越了其原始创作者的愿景。
+
+《代码英雄》整个第三季都在讲述编程语言,现在已接近尾声。[请务必订阅,来了解你想知道的有关编程语言起源的各种内容][3],我很乐意在下面的评论中听到你的 shell 故事。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/19/9/command-line-heroes-bash
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/commnad_line_hereoes_ep6_blog-header-292x521.png?itok=Bs1RlwoW (Listen to the Command Line Heroes Podcast)
+[2]: https://devblogs.microsoft.com/commandline/introducing-windows-terminal/
+[3]: https://www.redhat.com/en/command-line-heroes
+[4]: https://developer.ibm.com/tutorials/l-linux-shells/
+[5]: https://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh
diff --git a/published/20190904 How to build Fedora container images.md b/published/20190904 How to build Fedora container images.md
new file mode 100644
index 0000000000..0ab7ed8684
--- /dev/null
+++ b/published/20190904 How to build Fedora container images.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11327-1.html)
+[#]: subject: (How to build Fedora container images)
+[#]: via: (https://fedoramagazine.org/how-to-build-fedora-container-images/)
+[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
+
+如何构建 Fedora 容器镜像
+======
+
+![][1]
+
+随着容器和容器技术的兴起,现在所有主流的 Linux 发行版都提供了容器基础镜像。本文介绍了 Fedora 项目如何构建其基础镜像,同时还展示了如何使用它来创建分层镜像。
+
+### 基础和分层镜像
+
+在看如何构建 Fedora 容器基础镜像之前,让我们先定义基础镜像和分层镜像。定义基础镜像的简单方法是:没有父镜像层的镜像。但这具体意味着什么呢?这意味着基础镜像通常只包含操作系统的根文件系统(rootfs)。基础镜像通常还提供了安装软件以创建分层镜像所需的工具。
+
+分层镜像在基础镜像上添加了一组层,以便安装、配置和运行应用。分层镜像在 Dockerfile 中使用 `FROM` 指令引用基础镜像:
+
+```
+FROM fedora:latest
+```
+
+### 如何构建基础镜像
+
+Fedora 有一整套用于构建容器镜像的工具。[其中包括 podman][2],它不需要以 root 身份运行。
+
+#### 构建 rootfs
+
+基础镜像主要由一个 [tarball][3] 构成。这个 tarball 包含一个 rootfs。有不同的方法来构建此 rootfs。Fedora 项目使用 [kickstart][4] 安装方式以及 [imagefactory][5] 来创建这些 tarball。
+
+在创建 Fedora 基础镜像期间使用的 kickstart 文件可以在 Fedora 的构建系统 [Koji][6] 中找到。[Fedora-Container-Base][7] 包重新组合了所有基础镜像的构建版本。如果选择了一个构建版本,那么可以访问所有相关文件,包括 kickstart 文件。查看 [示例][8],文件末尾的 `%packages` 部分定义了要安装的所有软件包。这就是让软件放在基础镜像中的方法。
+
+#### 使用 rootfs 构建基础镜像
+
+rootfs 完成后,构建基础镜像就很容易了。它只需要一个包含以下指令的 Dockerfile:
+
+```
+FROM scratch
+ADD layer.tar /
+CMD ["/bin/bash"]
+```
+
+这里的重要部分是 `FROM scratch` 指令,它会创建一个空镜像。然后,接下来的指令将 rootfs 添加到镜像,并设置在运行镜像时要执行的默认命令。
+
+让我们使用 Koji 内置的 Fedora rootfs 构建一个基础镜像:
+
+```
+$ curl -o fedora-rootfs.tar.xz https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
+$ tar -xJvf fedora-rootfs.tar.xz 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar
+$ mv 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar layer.tar
+$ printf "FROM scratch\nADD layer.tar /\nCMD [\"/bin/bash\"]" > Dockerfile
+$ podman build -t my-fedora .
+$ podman run -it --rm my-fedora cat /etc/os-release
+```
+
+需要从下载的存档中提取包含 rootfs 的 `layer.tar` 文件。不过,这一步其实并不必要,因为 Fedora 生成的镜像本身就已经可以被容器运行时直接使用。
+
+因此,使用 Fedora 生成的镜像,获得基础镜像会更容易。让我们看看它是如何工作的:
+
+```
+$ curl -O https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
+$ podman load --input Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz
+$ podman run -it --rm localhost/fedora-container-base-rawhide-20190902.n.0.x86_64:latest cat /etc/os-release
+```
+
+### 构建分层镜像
+
+要构建使用 Fedora 基础镜像的分层镜像,只需在 `FROM` 指令中指定 `fedora`:
+
+```
+FROM fedora:latest
+```
+
+`latest` 标记引用了最新的 Fedora 版本(编写本文时是 Fedora 30)。但是可以使用镜像的标签来使用其他版本。例如,`FROM fedora:31` 将使用 Fedora 31 基础镜像。
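+
+举个例子,可以像下面这样生成一个基于 Fedora 基础镜像的最小分层镜像的 Dockerfile(软件包选择仅作演示,文件会写入当前目录):
+
+```
+cat > Dockerfile <<'EOF'
+FROM fedora:latest
+# 安装 httpd 并清理 dnf 缓存以减小镜像体积
+RUN dnf -y install httpd && dnf clean all
+CMD ["httpd", "-DFOREGROUND"]
+EOF
+head -n 1 Dockerfile
+# 输出:FROM fedora:latest
+```
+
+之后便可以用 `podman build` 基于这个 Dockerfile 构建镜像。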
+
+Fedora 支持将软件作为容器来构建并发布。这意味着你可以维护 Dockerfile 来使其他人可以使用你的软件。关于在 Fedora 中成为容器镜像维护者的更多信息,请查看 [Fedora 容器指南][9]。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/how-to-build-fedora-container-images/
+
+作者:[Clément Verna][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/cverna/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/fedoracontainers-816x345.jpg
+[2]: https://linux.cn/article-10156-1.html
+[3]: https://en.wikipedia.org/wiki/Tar_(computing)
+[4]: https://en.wikipedia.org/wiki/Kickstart_(Linux)
+[5]: http://imgfac.org/
+[6]: https://koji.fedoraproject.org/koji/
+[7]: https://koji.fedoraproject.org/koji/packageinfo?packageID=26387
+[8]: https://kojipkgs.fedoraproject.org//packages/Fedora-Container-Base/30/20190902.0/images/koji-f30-build-37420478-base.ks
+[9]: https://docs.fedoraproject.org/en-US/containers/guidelines/guidelines/
diff --git a/published/20190905 How to Change Themes in Linux Mint.md b/published/20190905 How to Change Themes in Linux Mint.md
new file mode 100644
index 0000000000..dd2f69b044
--- /dev/null
+++ b/published/20190905 How to Change Themes in Linux Mint.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: (qfzy1233)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11359-1.html)
+[#]: subject: (How to Change Themes in Linux Mint)
+[#]: via: (https://itsfoss.com/install-themes-linux-mint/)
+[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
+
+如何在 Linux Mint 中更换主题
+======
+
+
+
+一直以来,使用 Cinnamon 桌面环境的 Linux Mint 都是一种卓越的体验。这也是[为何我喜爱 Linux Mint][1]的主要原因之一。
+
+自从 Mint 的开发团队[开始更为严肃地对待设计][2]以来,“桌面主题”应用便成为了更换新主题、图标、按钮样式、窗口边框以及鼠标指针的重要方式,当然你也可以直接通过它安装新的主题。感兴趣么?让我们开始吧。
+
+### 如何在 Linux Mint 中更换主题
+
+在菜单中搜索主题并打开主题应用。
+
+![Theme Applet provides an easy way of installing and changing themes][3]
+
+在应用中有一个“添加/删除”按钮,非常简单吧。点击它,我们可以看到按流行程度排序的 Cinnamon Spices(Cinnamon 的官方插件库)的主题。
+
+![Installing new themes in Linux Mint Cinnamon][4]
+
+要安装主题,你所要做的就是点击你喜欢的主题,然后等待它下载完成。之后,该主题就会出现在应用第一页的 “Desktop” 选项中。只需双击已安装的主题之一就可以开始使用它。
+
+![Changing themes in Linux Mint Cinnamon][5]
+
+下面是默认的 Linux Mint 外观:
+
+![Linux Mint Default Theme][6]
+
+这是在我更换主题之后:
+
+![Linux Mint with Carta Theme][7]
+
+所有的主题都可以在 Cinnamon Spices 网站上获得更多的信息和更大的截图,这样你就可以更好地了解你的系统的外观。
+
+- [浏览 Cinnamon 主题][8]
+
+### 在 Linux Mint 中安装第三方主题
+
+> “我在另一个网站上看到了这个优异的主题,但 Cinnamon Spices 网站上没有……”
+
+Cinnamon Spices 集成了许多优秀的主题,但你仍然会发现,你看到的主题并没有被 Cinnamon Spices 官方网站收录。
+
+这时你可能会想:如果有别的办法就好了,对么?你可能会认为有(我的意思是……当然啦)。首先,我们可以在其他网站上找到一些很酷的主题。
+
+我推荐你去 Cinnamon Look 浏览一下那儿的主题。如果你喜欢什么,就下载吧。
+
+- [在 Cinnamon Look 中获取更多主题][9]
+
+下载了首选主题之后,你现在将得到一个压缩文件,其中包含安装所需的所有内容。提取它并保存到 `~/.themes`。迷糊么? `~` 代表了你的家目录的对应路径:`/home/{YOURUSER}/.themes`。
+
+然后跳转到你的家目录。按 `Ctrl+H` 来[显示 Linux 中的隐藏文件][11]。如果没有看到 `.themes` 文件夹,创建一个新文件夹并命名为 `.themes`。记住,文件夹名称开头的点很重要。
+
+将提取的主题文件夹从下载目录复制到你的家目录中的 `.themes` 文件夹中。
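+
+上述步骤也可以在终端里完成,示例如下(压缩包文件名为假设,请替换为你实际下载的文件):
+
+```
+mkdir -p "$HOME/.themes"
+# 把下载的主题解压到 .themes(文件名仅为示例)
+# tar -xf "$HOME/Downloads/some-theme.tar.xz" -C "$HOME/.themes"
+test -d "$HOME/.themes" && echo "目录已就绪"
+# 输出:目录已就绪
+```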
+
+最后,在上面提到的应用中查找已安装的主题。
+
+> 注记
+>
+> 请记住,主题必须是适用于 Cinnamon 的。尽管 Cinnamon 是从 GNOME 复刻而来的,但并不是所有的 GNOME 主题都适用于 Cinnamon。
+
+改变主题是 Cinnamon 定制工作的一部分。你还可以[通过更改图标来更改 Linux Mint 的外观][12]。
+
+我希望你现在已经知道如何在 Linux Mint 中更改主题了。快去选取你喜欢的主题吧。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-themes-linux-mint/
+
+作者:[It's FOSS][a]
+选题:[lujun9972][b]
+译者:[qfzy1233](https://github.com/qfzy1233)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/itsfoss/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/tiny-features-linux-mint-cinnamon/
+[2]: https://itsfoss.com/linux-mint-new-design/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-1.jpg?resize=800%2C625&ssl=1
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-2.jpg?resize=800%2C625&ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/install-theme-linux-mint-3.jpg?resize=800%2C450&ssl=1
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-default-theme.jpg?resize=800%2C450&ssl=1
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-mint-carta-theme.jpg?resize=800%2C450&ssl=1
+[8]: https://cinnamon-spices.linuxmint.com/themes
+[9]: https://www.cinnamon-look.org/
+[10]: https://itsfoss.com/failed-to-start-session-ubuntu-14-04/
+[11]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
+[12]: https://itsfoss.com/install-icon-linux-mint/
diff --git a/published/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md b/published/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md
new file mode 100644
index 0000000000..87c0308360
--- /dev/null
+++ b/published/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md
@@ -0,0 +1,207 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11352-1.html)
+[#]: subject: (How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script)
+[#]: via: (https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-data-report/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+如何使用 Bash 脚本从 SAR 报告中获取 CPU 和内存使用情况
+======
+
+大多数 Linux 管理员使用 [SAR 报告][1]监控系统性能,因为它会收集一周的性能数据。但是,你可以通过更改 `/etc/sysconfig/sysstat` 文件轻松地将其延长到四周。同样,这段时间也可以延长到一个月以上。如果超过 28 天,日志文件将被放在多个目录中,每月一个。
+
+要将覆盖期延长至 28 天,请对 `/etc/sysconfig/sysstat` 文件做以下更改。
+
+编辑 `sysstat` 文件并将 `HISTORY=7` 更改为 `HISTORY=28`。
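如果想在命令行中完成这个修改,可以用 `sed` 一步完成。下面是一个示意(为了便于演示,这里在 `/tmp` 下的一个副本上操作;实际要修改的是 `/etc/sysconfig/sysstat`,动手前请先备份该文件):

```shell
# 构造一个与 /etc/sysconfig/sysstat 中 HISTORY 设置相仿的演示副本
printf 'HISTORY=7\nCOMPRESSAFTER=10\n' > /tmp/sysstat.demo

# 把历史保留期从 7 天改为 28 天
sed -i 's/^HISTORY=7$/HISTORY=28/' /tmp/sysstat.demo

# 确认修改结果
grep '^HISTORY=' /tmp/sysstat.demo
```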
+
+在本文中,我们添加了三个 bash 脚本,它们可以帮助你在一个地方轻松查看每个数据文件的平均值。
+
+我们过去分享过许多有用的 shell 脚本。如果你想查看它们,请进入下面的链接。
+
+* [如何使用 shell 脚本自动化日常操作][2]
+
+这些脚本简单明了。出于测试目的,我们仅包括两个性能指标,即 CPU 和内存。你可以修改脚本中的其他性能指标以满足你的需求。
+
+### 脚本 1:从 SAR 报告中获取平均 CPU 利用率的 Bash 脚本
+
+该 bash 脚本从每个数据文件中收集 CPU 平均值并将其显示在一个页面上。
+
+由于是月末,它显示了 2019 年 8 月的 28 天数据。
+
+```
+# vi /opt/scripts/sar-cpu-avg.sh
+
+#!/bin/sh
+
+echo "+----------------------------------------------------------------------------------+"
+echo "|Average: CPU %user %nice %system %iowait %steal %idle |"
+echo "+----------------------------------------------------------------------------------+"
+
+for file in `ls -tr /var/log/sa/sa* | grep -v sar`
+do
+ dat=`sar -f $file | head -n 1 | awk '{print $4}'`
+ echo -n $dat
+ sar -f $file | grep -i Average | sed "s/Average://"
+done
+
+echo "+----------------------------------------------------------------------------------+"
+```
+
+运行脚本后,你将看到如下输出。
+
+```
+# sh /opt/scripts/sar-cpu-avg.sh
+
++----------------------------------------------------------------------------------+
+|Average: CPU %user %nice %system %iowait %steal %idle |
++----------------------------------------------------------------------------------+
+08/01/2019 all 0.70 0.00 1.19 0.00 0.00 98.10
+08/02/2019 all 1.73 0.00 3.16 0.01 0.00 95.10
+08/03/2019 all 1.73 0.00 3.16 0.01 0.00 95.11
+08/04/2019 all 1.02 0.00 1.80 0.00 0.00 97.18
+08/05/2019 all 0.68 0.00 1.08 0.01 0.00 98.24
+08/06/2019 all 0.71 0.00 1.17 0.00 0.00 98.12
+08/07/2019 all 1.79 0.00 3.17 0.01 0.00 95.03
+08/08/2019 all 1.78 0.00 3.14 0.01 0.00 95.08
+08/09/2019 all 1.07 0.00 1.82 0.00 0.00 97.10
+08/10/2019 all 0.38 0.00 0.50 0.00 0.00 99.12
+.
+.
+.
+08/29/2019 all 1.50 0.00 2.33 0.00 0.00 96.17
+08/30/2019 all 2.32 0.00 3.47 0.01 0.00 94.20
++----------------------------------------------------------------------------------+
+```
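脚本里 `sar -f $file | head -n 1 | awk '{print $4}'` 这条管道负责从每个数据文件的首行提取日期,其工作方式可以用一行模拟的 sar 输出单独演示(下面的主机名与内核版本均为虚构):

```shell
# sar 输出的首行依次是内核信息、主机名、日期等字段,
# awk 取出以空白分隔的第 4 个字段,即日期
echo 'Linux 3.10.0-957.el7.x86_64 (CentOS.2daygeek.com) 08/01/2019 _x86_64_ (2 CPU)' |
  head -n 1 | awk '{print $4}'
```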
+
+### 脚本 2:从 SAR 报告中获取平均内存利用率的 Bash 脚本
+
+该 bash 脚本从每个数据文件中收集内存平均值并将其显示在一个页面上。
+
+由于是月末,它显示了 2019 年 8 月的 28 天数据。
+
+```
+# vi /opt/scripts/sar-memory-avg.sh
+
+#!/bin/sh
+
+echo "+-------------------------------------------------------------------------------------------------------------------+"
+echo "|Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty |"
+echo "+-------------------------------------------------------------------------------------------------------------------+"
+
+for file in `ls -tr /var/log/sa/sa* | grep -v sar`
+do
+ dat=`sar -f $file | head -n 1 | awk '{print $4}'`
+ echo -n $dat
+ sar -r -f $file | grep -i Average | sed "s/Average://"
+done
+
+echo "+-------------------------------------------------------------------------------------------------------------------+"
+```
+
+运行脚本后,你将看到如下输出。
+
+```
+# sh /opt/scripts/sar-memory-avg.sh
+
++--------------------------------------------------------------------------------------------------------------------+
+|Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty |
++--------------------------------------------------------------------------------------------------------------------+
+08/01/2019 1492331 2388461 61.55 29888 1152142 1560615 12.72 1693031 380472 6
+08/02/2019 1493126 2387666 61.53 29888 1147811 1569624 12.79 1696387 373346 3
+08/03/2019 1489582 2391210 61.62 29888 1147076 1581711 12.89 1701480 370325 3
+08/04/2019 1490403 2390389 61.60 29888 1148206 1569671 12.79 1697654 373484 4
+08/05/2019 1484506 2396286 61.75 29888 1152409 1563804 12.75 1702424 374628 4
+08/06/2019 1473593 2407199 62.03 29888 1151137 1577491 12.86 1715426 371000 8
+08/07/2019 1467150 2413642 62.19 29888 1155639 1596653 13.01 1716900 372574 13
+08/08/2019 1451366 2429426 62.60 29888 1162253 1604672 13.08 1725931 376998 5
+08/09/2019 1451191 2429601 62.61 29888 1158696 1582192 12.90 1728819 371025 4
+08/10/2019 1450050 2430742 62.64 29888 1160916 1579888 12.88 1729975 370844 5
+.
+.
+.
+08/29/2019 1365699 2515093 64.81 29888 1198832 1593567 12.99 1781733 376157 15
+08/30/2019 1361920 2518872 64.91 29888 1200785 1595105 13.00 1784556 375641 8
++-------------------------------------------------------------------------------------------------------------------+
+```
+
+### 脚本 3:从 SAR 报告中获取 CPU 和内存平均利用率的 Bash 脚本
+
+该 bash 脚本从每个数据文件中收集 CPU 和内存平均值并将其显示在一个页面上。
+
+该脚本与上面的两个相比稍有不同。它在同一处同时显示 CPU 和内存的平均值,而不显示其他数据。
+
+```
+# vi /opt/scripts/sar-cpu-mem-avg.sh
+
+#!/bin/bash
+
+for file in `ls -tr /var/log/sa/sa* | grep -v sar`
+do
+ sar -f $file | head -n 1 | awk '{print $4}'
+ echo "-----------"
+ sar -u -f $file | awk '/Average:/{printf("CPU Average: %.2f%%\n", 100 - $8)}'
+ sar -r -f $file | awk '/Average:/{printf("Memory Average: %.2f%%\n", (($3-$5-$6)/($2+$3)) * 100)}'
+ printf "\n"
+done
+```
+
+运行脚本后,你将看到如下输出。
+
+```
+# sh /opt/scripts/sar-cpu-mem-avg.sh
+
+08/01/2019
+-----------
+CPU Average: 1.90%
+Memory Average: 31.09%
+
+08/02/2019
+-----------
+CPU Average: 4.90%
+Memory Average: 31.18%
+
+08/03/2019
+-----------
+CPU Average: 4.89%
+Memory Average: 31.29%
+
+08/04/2019
+-----------
+CPU Average: 2.82%
+Memory Average: 31.24%
+
+08/05/2019
+-----------
+CPU Average: 1.76%
+Memory Average: 31.28%
+.
+.
+.
+08/29/2019
+-----------
+CPU Average: 3.83%
+Memory Average: 33.15%
+
+08/30/2019
+-----------
+CPU Average: 5.80%
+Memory Average: 33.19%
+```
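脚本里两处 awk 的平均值计算可以脱离 sar 单独验证。下面用模拟的 `Average:` 行(数据取自上文 08/01 的示例输出)演示 CPU(100 减去第 8 个字段 %idle)和内存((kbmemused - kbbuffers - kbcached) / 总内存)的算法;注意格式串中的百分号写作 `%%`,这是 awk printf 中输出字面 `%` 的规范写法:

```shell
# CPU:sar -u 的 Average 行中第 8 个字段是 %idle
echo 'Average: all 0.70 0.00 1.19 0.00 0.00 98.10' |
  awk '/Average:/{printf("CPU Average: %.2f%%\n", 100 - $8)}'

# 内存:sar -r 的 Average 行中 $2=kbmemfree $3=kbmemused $5=kbbuffers $6=kbcached
echo 'Average: 1492331 2388461 61.55 29888 1152142 1560615 12.72 1693031 380472 6' |
  awk '/Average:/{printf("Memory Average: %.2f%%\n", (($3-$5-$6)/($2+$3)) * 100)}'
```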
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-get-average-cpu-memory-utilization-from-sar-data-report/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/sar-system-performance-monitoring-command-tool-linux/
+[2]: https://www.2daygeek.com/category/shell-script/
diff --git a/published/20190905 USB4 gets final approval, offers Ethernet-like speed.md b/published/20190905 USB4 gets final approval, offers Ethernet-like speed.md
new file mode 100644
index 0000000000..ade65f4a5e
--- /dev/null
+++ b/published/20190905 USB4 gets final approval, offers Ethernet-like speed.md
@@ -0,0 +1,53 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11330-1.html)
+[#]: subject: (USB4 gets final approval, offers Ethernet-like speed)
+[#]: via: (https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+USB4 规范获得最终批准,像以太网一样快!
+======
+
+> USB4 会是一个统一的接口,可以淘汰笨重的电缆和超大的插头,并提供满足从笔记本电脑用户到服务器管理员的每个人的吞吐量。
+
+
+
+USB 开发者论坛(USB-IF)是通用串行总线(USB)规范开发背后的行业协会,上周宣布了它已经完成了下一代 USB4 的技术规范。
+
+USB4 最重要的一个方面(它们在此版本中省略了首字母缩略词和版本号之间的空格)是它将 USB 与 Intel 设计的接口 Thunderbolt 3 融合在了一起。Thunderbolt 尽管有潜力,但在除了笔记本之外并未真正流行起来。出于这个原因,Intel 向 USB 联盟提供了 Thunderbolt 规范。
+
+不幸的是,Thunderbolt 3 被列为 USB4 设备的一个可选项,因此有些设备会有,有些则不会。这无疑会让人头疼,希望所有设备制造商都会包括 Thunderbolt 3。
+
+USB4 将使用与 USB type-C 相同的外形尺寸,这个小型插头用在所有现代 Android 手机和 Thunderbolt 3 中。它将向后兼容 USB 3.2、USB 2.0 以及 Thunderbolt。因此,几乎任何现有的 USB type-C 设备都可以连接到具有 USB4 总线的机器,但将以连接电缆的额定速度运行。
+
+### USB4:体积更小,速度更快
+
+因为它支持 Thunderbolt 3,所以这种新连接将同时支持数据和显示协议,因此这可能意味着小型 USB-C 端口将取代显示器上庞大的 DVI 端口,而显示器则带有多个 USB4 端口来作为集线器。
+
+新标准的主要内容:它提供双通道 40Gbps 传输速度,是当前 USB 3.2 规格的两倍,是 USB 3 的八倍。这是以太网的速度,应该足够给你的高清显示器以及其他数据传输提供充足带宽。
+
+USB4 也为视频提供了更好的资源分配,因此如果你使用 USB4 端口同时传输视频和数据,端口将相应地分配带宽。这将允许计算机同时使用外部独立的 GPU(得益于 Thunderbolt 3,这类产品已经上市)和外部 SSD。
+
+这可能会启发各种新的服务器设计,因为大型、笨重的设备,如 GPU 或其他不易放入 1U 或 2U 机箱的卡板,现在可以外部连接并以与内部设备相当的速度运行。
+
+当然,我们看到配备了 USB4 端口的电脑还需要一段时间,更别说服务器了。USB 3 在 PC 上普及花费了数年时间,而 USB-C 的采用速度也非常缓慢。USB 2 的 U 盘仍然是这类设备的主流,并且主板仍然带有 USB 2 端口。
+
+尽管如此,USB4 还是有可能成为一个统一的接口,可以摆脱拥有超大插头的笨重电缆,并提供满足从笔记本电脑用户到服务器管理员的每个人的吞吐量。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md b/published/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md
new file mode 100644
index 0000000000..9118ebe1ef
--- /dev/null
+++ b/published/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11346-1.html)
+[#]: subject: (Great News! Firefox 69 Blocks Third-Party Cookies, Autoplay Videos & Cryptominers by Default)
+[#]: via: (https://itsfoss.com/firefox-69/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Firefox 69 默认阻拦第三方 Cookie、自动播放的视频和加密矿工
+======
+
+如果你使用的是 [Mozilla Firefox][1] 并且尚未更新到最新版本,那么你将错过许多新的重要功能。
+
+### Firefox 69 版本中的一些新功能
+
+首先,Mozilla Firefox 69 会默认强制执行更强大的安全和隐私选项。以下是新版本的一些主要亮点。
+
+#### Firefox 69 阻拦视频自动播放
+
+![][2]
+
+现在很多网站都提供了自动播放视频。无论是弹出视频还是嵌入在文章中设置为自动播放的视频,默认情况下,Firefox 69 都会阻止它(或者可能会提示你)。
+
+这个[阻拦自动播放][3]功能可让用户自动阻止任何视频播放。
+
+#### 禁止第三方跟踪 cookie
+
+默认情况下,作为增强型跟踪保护功能的一部分,它现在将阻止第三方跟踪 Cookie 和加密矿工。这是 Mozilla Firefox 的增强隐私保护功能的非常有用的改变。
+
+Cookie 有两种:第一方的和第三方的。第一方 cookie 由网站本身拥有。这些是“好的 cookie”,可以让你保持登录、记住你的密码或输入字段等来改善浏览体验。第三方 cookie 由你访问的网站以外的域所有。广告服务器使用这些 Cookie 来跟踪你,并在你访问的所有网站上跟踪广告。Firefox 69 旨在阻止这些。
+
+当它发挥作用时,你将在地址栏中看到盾牌图标。你可以选择为特定网站禁用它。
+
+![Firefox Blocking Tracking][4]
+
+#### 禁止加密矿工消耗你的 CPU
+
+![][5]
+
+对加密货币的欲望一直困扰着这个世界。GPU 的价格已经高企,因为专业的加密矿工们使用它们来挖掘加密货币。
+
+人们使用工作场所的计算机秘密挖掘加密货币。当我说工作场所时,我不一定是指 IT 公司。就在今年,[乌克兰一家核电站就被发现有人在偷挖加密货币][6]。
+
+不仅如此。如果你访问某些网站,他们会运行脚本并使用你的计算机的 CPU 来挖掘加密货币。这在 IT 术语中被称为 [挖矿攻击][7]。
+
+好消息是 Firefox 69 会自动阻止这些加密矿工脚本。因此,网站不再能利用你的系统资源进行挖矿攻击了。
+
+#### Firefox 69 带来的更强隐私保护
+
+![][8]
+
+如果你把隐私保护设置得更严格,那么它也会阻止指纹。因此,当你在 Firefox 69 中选择严格的隐私设置时,你不必担心通过[指纹][9]共享计算机的配置信息。
+
+在[关于这次发布的官方博客文章][10]中,Mozilla 提到,在此版本中,他们希望默认情况下为 100% 的用户提供保护。
+
+#### 性能改进
+
+尽管更新日志中没有提及 Linux,但它提到了在 Windows 10/macOS 上的运行性能、UI 和电池续航有所改进。如果你发现任何性能改进,请在评论中提及。
+
+### 总结
+
+除了所有这些之外,还有很多底层的改进。你可以查看[发行说明][11]中的详细信息。
+
+Firefox 69 对于关注其隐私的用户来说是一个令人印象深刻的更新。与我们最近对某些[安全电子邮件服务][12]的建议类似,我们建议你更新浏览器以充分受益。新版本已在大多数 Linux 发行版中提供,你只需要更新你的系统即可。
+
+如果你对阻止广告和跟踪 Cookie 的浏览器感兴趣,请尝试[开源的 Brave 浏览器][13],他们甚至给你提供了加密货币以让你使用他们的浏览器,你可以使用这些加密货币来奖励你最喜爱的发布商。
+
+你觉得这个版本怎么样?请在下面的评论中告诉我们你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/firefox-69/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/why-firefox/
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/auto-block-firefox.png?ssl=1
+[3]: https://support.mozilla.org/en-US/kb/block-autoplay
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-blocking-tracking.png?ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-shield.png?ssl=1
+[6]: https://thenextweb.com/hardfork/2019/08/22/ukrainian-nuclear-powerplant-mine-cryptocurrency-state-secrets/
+[7]: https://hackernoon.com/cryptojacking-in-2019-is-not-dead-its-evolving-984b97346d16
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-secure.jpg?ssl=1
+[9]: https://clearcode.cc/blog/device-fingerprinting/
+[10]: https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/
+[11]: https://www.mozilla.org/en-US/firefox/69.0/releasenotes/
+[12]: https://itsfoss.com/secure-private-email-services/
+[13]: https://itsfoss.com/brave-web-browser/
diff --git a/published/20190906 How to change the color of your Linux terminal.md b/published/20190906 How to change the color of your Linux terminal.md
new file mode 100644
index 0000000000..df5bd052bd
--- /dev/null
+++ b/published/20190906 How to change the color of your Linux terminal.md
@@ -0,0 +1,200 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11324-1.html)
+[#]: subject: (How to change the color of your Linux terminal)
+[#]: via: (https://opensource.com/article/19/9/linux-terminal-colors)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+如何改变你的终端颜色
+======
+
+> 使 Linux 变得丰富多彩(或单色)。
+
+
+
+你可以使用特殊的 ANSI 编码设置为 Linux 终端添加颜色,可以在终端命令或配置文件中动态添加,也可以在终端仿真器中使用现成的主题。无论哪种方式,你都可以在黑色屏幕上找回怀旧的绿色或琥珀色文本。本文演示了如何使 Linux 变得丰富多彩(或单色)的方法。
+
+### 终端的功能特性
+
+现代系统的终端的颜色配置通常默认至少是 xterm-256color,但如果你尝试为终端添加颜色但未成功,则应检查你的 `TERM` 设置。
+
+从历史上看,Unix 终端的字面含义就是:共享计算机系统上供用户输入命令的实际物理端点。它们专指通常用于远程发出命令的电传打字机(这也是我们今天在 Linux 中仍然使用 `/dev/tty` 设备的原因)。后来终端内置了 CRT 显示器,用户可以坐在办公室的终端前直接与大型机交互。CRT 显示器无论是制造还是驱动控制都很昂贵;与其操心抗锯齿以及如今的计算机用户习以为常的各种漂亮效果,不如让计算机直接吐出原始的 ASCII 文本来得容易。然而,即使在那时技术也发展得很快,人们很快发现,随着新式视频显示终端的设计出现,需要新的功能特性来提供可选功能。
+
+例如,1978 年发布的花哨的新 VT100 支持 ANSI 颜色,因此如果用户将终端类型识别为 vt100,则计算机可以提供彩色输出,而基本串行设备可能没有这样的选项。同样的原则适用于今天,它是由 `TERM` [环境变量][2]设定的。你可以使用 `echo` 检查你的 `TERM` 定义:
+
+```
+$ echo $TERM
+xterm-256color
+```
+
+过时的(但在一些系统上仍然为了向后兼容而维护)`/etc/termcap` 文件定义了终端和打印机的功能特性。现代的版本是 `terminfo`,位于 `/etc` 或 `/usr/share` 中,具体取决于你的发行版。这些文件列出了不同类型终端中可用的功能特性,其中许多都是由历史上的硬件定义的,如 vt100 到 vt220 的定义,以及 xterm 和 Xfce 等现代软件仿真器。大多数软件并不关心你使用的终端类型;在极少数情况下,登录到检查兼容功能的服务器时,你可能会收到有关错误的终端类型的警告或错误。如果你的终端设置为功能特性很少的配置文件,但你知道你所使用的仿真器能够支持更多功能特性,那么你可以通过定义 `TERM` 环境变量来更改你的设置。你可以通过在 `~/.bashrc` 配置文件中导出 `TERM` 变量来完成此操作:
+
+```
+export TERM=xterm-256color
+```
+
+保存文件并重新载入设置:
+
+```
+$ source ~/.bashrc
+```
+
+### ANSI 颜色代码
+
+现代终端继承了用于“元”特征的 ANSI 转义序列。这些是特殊的字符序列,终端将其解释为操作而不是字符。例如,此序列将清除屏幕,直到下一个提示符:
+
+```
+$ printf '\033[2J'
+```
+
+它不会清除你的历史信息;它只是清除终端仿真器中的屏幕,因此它是一个安全且具有示范性的 ANSI 转义序列。
+
+ANSI 还具有设置终端颜色的序列。例如,键入此代码会将后续文本更改为绿色:
+
+```
+$ printf '\033[32m'
+```
+
+只要你对同一台计算机始终使用同一种颜色,就可以借助颜色来帮助你记住自己登录的是哪个系统。例如,如果你经常通过 SSH 连接到服务器,则可以将服务器的提示符设置为绿色,以帮助你一目了然地将其与本地的提示符区分开来。要设置绿色提示符,请在提示符前使用设置为绿色的 ANSI 代码,并以代表正常默认颜色的代码结束:
+
+```
+export PS1=`printf "\033[32m$ \033[39m"`
+```
+
+### 前景色和背景色
+
+你不仅可以设置文本的颜色。使用 ANSI 代码,你还可以控制文本的背景颜色以及做一些基本的样式。
+
+例如,使用 `\033[4m`,你可以为文本加上下划线,或者使用 `\033[5m` 你可以将其设置为闪烁的文本。起初这可能看起来很愚蠢,因为你可能不会将你的终端设置为所有文本带有下划线并整天闪烁, 但它对某些功能很有用。例如,你可以将 shell 脚本生成的紧急错误设置为闪烁(作为对用户的警报),或者你可以为 URL 添加下划线。
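下面是一个小示例:给一段文字加上下划线(`\033[4m`),结束时用 `\033[0m` 清除所有样式:

```shell
# \033[4m 开启下划线,\033[0m 重置为默认样式
printf '\033[4m%s\033[0m\n' 'https://example.com'
```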
+
+作为参考,以下是前景色和背景色的代码。前景色在 30 范围内,而背景色在 40 范围内:
+
+颜色 | 前景色 | 背景色
+---|---|---
+黑色 | \033[30m | \033[40m
+红色 | \033[31m | \033[41m
+绿色 | \033[32m | \033[42m
+橙色 | \033[33m | \033[43m
+蓝色 | \033[34m | \033[44m
+品红 | \033[35m | \033[45m
+青色 | \033[36m | \033[46m
+浅灰 | \033[37m | \033[47m
+回退到发行版默认值 | \033[39m | \033[49m
+
+还有一些可用于背景的其他颜色:
+
+颜色 | 背景色
+---|---
+深灰 | \033[100m
+浅红 | \033[101m
+浅绿 | \033[102m
+黄色 | \033[103m
+浅蓝 | \033[104m
+浅紫 | \033[105m
+蓝绿 | \033[106m
+白色 | \033[107m
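可以用一个简单的循环把 30 至 37 号前景色依次打印出来,直观地对照上表:

```shell
# 依次以每种前景色打印其代码,随后用 \033[39m 恢复默认前景色
for code in 30 31 32 33 34 35 36 37; do
  printf '\033[%sm前景色代码 %s\033[39m\n' "$code" "$code"
done
```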
+
+### 持久设置
+
+在终端会话中设置的颜色只是暂时的,而且几乎不受条件约束。有时效果只会持续几行;这是因为这种设置颜色的方法依赖于 `printf` 语句来设置一种模式,该模式仅持续到被其他设置覆盖为止。
+
+终端仿真器通常从 `LS_COLORS` 环境变量的设置中获取该使用哪种颜色的指令,而该变量又由 `dircolors` 的设置填充。你可以使用 `echo` 语句查看当前设置:
+
+```
+$ echo $LS_COLORS
+rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40;
+38;5;11:so=38;5;13:do=38;5;5:bd=48;5;
+232;38;5;11:cd=48;5;232;38;5;3:or=48;
+5;232;38;5;9:mi=01;05;37;41:su=48;5;
+196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;
+196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;
+[...]
+```
+
+或者你可以直接使用 `dircolors`:
+
+```
+$ dircolors --print-database
+[...]
+# image formats
+.jpg 01;35
+.jpeg 01;35
+.mjpg 01;35
+.mjpeg 01;35
+.gif 01;35
+.bmp 01;35
+.pbm 01;35
+.tif 01;35
+.tiff 01;35
+[...]
+```
+
+这看起来很神秘。文件类型后面的第一个数字是属性代码,它有六种选择:
+
+* 00 无
+* 01 粗体
+* 04 下划线
+* 05 闪烁
+* 07 反白
+* 08 暗色
+
+下一个数字是简化形式的颜色代码。你可以通过获取 ANSI 代码的最后一个数字来获取颜色代码(绿色前景为 32,绿色背景为 42;红色为 31 或 41,依此类推)。
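例如,上面数据库中图片格式使用的 `01;35`(粗体 + 品红)组合,可以直接用 printf 预览效果:

```shell
# 01 为粗体属性,35 为品红前景色,两者以分号组合;\033[0m 重置样式
printf '\033[01;35m%s\033[0m\n' 'example.jpg'
```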
+
+你的发行版可能全局设置了 `LS_COLORS`,因此系统上的所有用户都会继承相同的颜色。如果你想要一组自定义的颜色,可以使用 `dircolors`。首先,生成颜色设置的本地副本:
+
+```
+$ dircolors --print-database > ~/.dircolors
+```
+
+根据需要编辑本地列表。如果你对自己的选择感到满意,请保存文件。你的颜色设置只是一个数据库,不能由 [ls][3] 直接使用,但你可以使用 `dircolors` 获取可用于设置 `LS_COLORS` 的 shellcode:
+
+```
+$ dircolors --bourne-shell ~/.dircolors
+LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:
+pi=40;33:so=01;35:do=01;35:bd=40;33;01:
+cd=40;33;01:or=40;31;01:mi=00:su=37;41:
+sg=30;43:ca=30;41:tw=30;42:ow=34;
+[...]
+export LS_COLORS
+```
+
+将输出复制并粘贴到 `~/.bashrc` 文件中并重新加载。或者,你可以将该输出直接转储到 `.bashrc` 文件中并重新加载。
+
+```
+$ dircolors --bourne-shell ~/.dircolors >> ~/.bashrc
+$ source ~/.bashrc
+```
+
+你也可以在启动时使 Bash 解析 `.dircolors` 而不是手动进行转换。实际上,你可能不会经常改变颜色,所以这可能过于激进,但如果你打算改变你的配色方案,这是一个选择。在 `.bashrc` 文件中,添加以下规则:
+
+```
+[[ -e $HOME/.dircolors ]] && eval "`dircolors --sh $HOME/.dircolors`"
+```
+
+如果你的主目录中有 `.dircolors` 文件,Bash 会在启动时对其进行评估并相应地设置 `LS_COLORS`。
+
+### 颜色
+
+在终端中使用颜色是一种可以为你自己提供特定信息的快速视觉参考的简单方法。但是,你可能不希望过于依赖它们。毕竟,颜色不是通用的,所以如果其他人使用你的系统,他们可能不会像你那样看懂颜色代表的含义。此外,如果你使用各种工具与计算机进行交互,你可能还会发现某些终端或远程连接无法提供你期望的颜色(或根本不提供颜色)。
+
+除了上述警示之外,颜色在某些工作流程中可能很有用且很有趣,因此创建一个 `.dircolor` 数据库并根据你的想法对其进行自定义吧。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/linux-terminal-colors
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code)
+[2]: https://opensource.com/article/19/8/what-are-environment-variables
+[3]: https://opensource.com/article/19/7/master-ls-command
diff --git a/published/20190906 How to put an HTML page on the internet.md b/published/20190906 How to put an HTML page on the internet.md
new file mode 100644
index 0000000000..8e5e283364
--- /dev/null
+++ b/published/20190906 How to put an HTML page on the internet.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11374-1.html)
+[#]: subject: (How to put an HTML page on the internet)
+[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
+[#]: author: (Julia Evans https://jvns.ca/)
+
+如何在互联网放置 HTML 页面
+======
+
+
+
+我喜欢互联网的一点是在互联网放置静态页面是如此简单。今天有人问我该怎么做,所以我想我会快速地写下来!
+
+### 只是一个 HTML 页面
+
+我的所有网站都只是静态 HTML 和 CSS。我的网页设计技巧相对不高( 是我自己开发的最复杂的网站),因此保持我所有的网站相对简单意味着我可以做一些改变/修复,而不会花费大量时间。
+
+因此,我们将在此文章中采用尽可能简单的方式 —— 只需一个 HTML 页面。
+
+### HTML 页面
+
+我们要放在互联网上的网站只是一个名为 `index.html` 的文件。你可以在 找到它,它是一个 Github 仓库,其中只包含一个文件。
+
+HTML 文件中包含一些 CSS,使其看起来不那么无聊,部分复制自 。
+
+### 如何将 HTML 页面放在互联网上
+
+有以下几步:
+
+ 1. 注册 [Neocities][1] 帐户
+ 2. 将 `index.html` 复制到你自己的 neocities 站点的 `index.html` 中
+ 3. 完成
+
+上面的 `index.html` 页面位于 [julia-example-website.neocities.org][2] 中,如果你查看源代码,你将看到它与 github 仓库中的 HTML 相同。
+
+我认为这可能是将 HTML 页面放在互联网上的最简单的方法(这是一次向 Geocities 的回归,我 2003 年就是用它制作了我的第一个网站):)。我也喜欢 Neocities(就像我同样喜欢的 [glitch][3] 一样),它适合实验、学习,并从中获得乐趣。
+
+### 其他选择
+
+这绝不是唯一简单的方式。在你推送 Git 仓库时,Github pages、Gitlab pages 以及 Netlify 都会自动发布站点,并且它们都非常易于使用(只需将它们连接到你的 GitHub 仓库即可)。我个人使用基于 Git 仓库的方式,因为把内容放在 Git 里让我不那么紧张,我想知道我实际推送的页面发生了什么更改。但如果你只是第一次想把用 HTML/CSS 制作的站点放到互联网上,那么 Neocities 就是一个非常好的方法。
+
+如果你不只是玩,而是要将网站用于真实用途,那么你或许会需要买一个域名,以便你将来可以更改托管服务提供商,但这有点不那么简单。
+
+### 这是学习 HTML 的一个很好的起点
+
+如果你熟悉在 Git 中编辑文件,同时想练习 HTML/CSS 的话,我认为将它放在网站中是一个有趣的方式!我真的很喜欢它的简单性 —— 实际上这只有一个文件,所以没有其他花哨的东西需要去理解。
+
+还有很多方法可以复杂化/扩展它,比如这个博客实际上是用 [Hugo][4] 生成的,它生成了一堆 HTML 文件并放在网络中,但从基础开始总是不错的。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://neocities.org/
+[2]: https://julia-example-website.neocities.org/
+[3]: https://glitch.com
+[4]: https://gohugo.io/
diff --git a/published/20190909 Firefox 69 available in Fedora.md b/published/20190909 Firefox 69 available in Fedora.md
new file mode 100644
index 0000000000..79249a373f
--- /dev/null
+++ b/published/20190909 Firefox 69 available in Fedora.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11354-1.html)
+[#]: subject: (Firefox 69 available in Fedora)
+[#]: via: (https://fedoramagazine.org/firefox-69-available-in-fedora/)
+[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
+
+Firefox 69 已可在 Fedora 中获取
+======
+
+![][1]
+
+当你安装 Fedora Workstation 时,你会发现它包括了世界知名的 Firefox 浏览器。Mozilla 基金会以开发 Firefox 以及其他促进开放、安全和隐私的互联网项目为己任。Firefox 有快速的浏览引擎和大量的隐私功能。
+
+开发者社区不断改进和增强 Firefox。最新版本 Firefox 69 于最近发布,你可在稳定版 Fedora 系统(30 及更高版本)中获取它。继续阅读以获得更多详情。
+
+### Firefox 69 中的新功能
+
+最新版本的 Firefox 包括[增强跟踪保护][2](ETP)。当你使用带有新(或重置)配置文件的 Firefox 69 时,浏览器会使网站更难以跟踪你的信息或滥用你的计算机资源。
+
+例如,不太正直的网站使用脚本让你的系统进行大量计算来产生加密货币,这称为[加密挖矿][3]。加密挖矿在你不知情或未经许可的情况下发生,因此是对你的系统的滥用。Firefox 69 中的新标准设置可防止网站遭受此类滥用。
+
+Firefox 69 还有其他设置,可防止识别或记录你的浏览器指纹,以供日后使用。这些改进为你提供了额外的保护,免于你的活动被在线追踪。
+
+另一个常见的烦恼是在没有提示的情况下播放视频。视频播放也会占用更多的 CPU,你可能不希望未经许可就在你的笔记本上发生这种情况。Firefox 使用[阻止自动播放][4]这个功能阻止了这种情况的发生。而 Firefox 69 还允许你停止静默开始播放的视频。此功能可防止不必要的突然的噪音。它还解决了更多真正的问题 —— 未经许可使用计算机资源。
+
+新版本中还有许多其他新功能。在 [Firefox 发行说明][5]中阅读有关它们的更多信息。
+
+### 如何获得更新
+
+Firefox 69 存在于稳定版 Fedora 30、预发布版 Fedora 31 和 Rawhide 仓库中。该更新由 Fedora 的 Firefox 包维护者提供。维护人员还确保更新了 Mozilla 的网络安全服务(nss 包)。我们感谢 Mozilla 项目和 Firefox 社区在提供此新版本方面的辛勤工作。
+
+如果你使用的是 Fedora 30 或更高版本,请在 Fedora Workstation 上使用*软件中心*,或在任何 Fedora 系统上运行以下命令:
+
+```
+$ sudo dnf --refresh upgrade firefox
+```
+
+如果你使用的是 Fedora 29,请[帮助测试更新][6],这样它可以变得稳定,让所有用户可以轻松使用。
+
+Firefox 可能会提示你升级个人设置以使用新设置。要使用新功能,你应该这样做。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/firefox-69-available-in-fedora/
+
+作者:[Paul W. Frields][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/firefox-v69-816x345.jpg
+[2]: https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/
+[3]: https://www.webopedia.com/TERM/C/cryptocurrency-mining.html
+[4]: https://support.mozilla.org/kb/block-autoplay
+[5]: https://www.mozilla.org/en-US/firefox/69.0/releasenotes/
+[6]: https://bodhi.fedoraproject.org/updates/FEDORA-2019-89ae5bb576
diff --git a/published/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md b/published/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md
new file mode 100644
index 0000000000..fd526ef267
--- /dev/null
+++ b/published/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md
@@ -0,0 +1,88 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11335-1.html)
+[#]: subject: (How to Install Shutter Screenshot Tool in Ubuntu 19.04)
+[#]: via: (https://itsfoss.com/install-shutter-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Ubuntu 19.04 中安装 Shutter 截图工具
+======
+
+Shutter 是我在 [Linux 中最喜欢的截图工具][1]。你可以使用它截图,还可以用它编辑截图或其他图像。它是一个在图像上添加箭头和文本的不错的工具。你也可以使用它在 Ubuntu 或其它你使用的发行版中[调整图像大小][2]。It's FOSS 上的大多数截图教程都是用 Shutter 编辑的。
+
+![Install Shutter Ubuntu][8]
+
+
+虽然 [Shutter][4] 一直是一款很棒的工具,但它的开发却停滞了。这几年来一直没有新版本的 Shutter。甚至像 [Shutter 中编辑模式被禁用][5]这样的简单 bug 也没有修复。根本没有开发者的消息。
+
+也许这就是为什么新版本的 Ubuntu 放弃它的原因。在 Ubuntu 18.04 LTS 之前,你可以在软件中心,或者[启用 universe 仓库][7]来[使用 apt-get 命令][6]安装它。但是从 Ubuntu 18.10 及更高版本开始,你就不能再这样做了。
+
+抛开这些缺点,Shutter 是一个很好的工具,我想继续使用它。也许你也是像我这样的 Shutter 粉丝,并且想要使用它。好的方面是你仍然可以在 Ubuntu 19.04 中安装 Shutter,这要归功于非官方 PPA。
+
+### 在 Ubuntu 19.04 上安装 Shutter
+
+![][3]
+
+我希望你了解 PPA 的概念。如果不了解,我强烈建议阅读我的指南,以了解更多关于[什么是 PPA 以及如何使用它][9]。
+
+现在,打开终端并使用以下命令添加新仓库:
+
+```
+sudo add-apt-repository -y ppa:linuxuprising/shutter
+```
+
+不需要再使用 `apt update`,因为从 Ubuntu 18.04 开始,仓库会在添加新条目后自动更新。
+
+现在使用 `apt` 命令安装 Shutter:
+
+```
+sudo apt install shutter
+```
+
+完成。你应该已经安装 Shutter 截图工具。你可从菜单搜索并启动它。
+
+### 删除通过非官方 PPA 安装的 Shutter
+
+最后,我将以如何卸载 Shutter 并删除所添加的仓库来结束本教程。
+
+首先,从系统中删除 Shutter:
+
+```
+sudo apt remove shutter
+```
+
+接下来,从你的仓库列表中删除 PPA:
+
+```
+sudo add-apt-repository --remove ppa:linuxuprising/shutter
+```
+
+你或许还想了解 [Y PPA Manager][11],这是一款 PPA 图形管理工具。
+
+Shutter 是一个很好的工具,我希望它能被积极开发。我希望它的开发者一切安好,能抽出时间继续维护它。或者,是时候让其他人分叉这个项目,继续让它变得更棒了。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-shutter-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/take-screenshot-linux/
+[2]: https://itsfoss.com/resize-images-with-right-click/
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/08/shutter-screenshot.jpg?ssl=1
+[4]: http://shutter-project.org/
+[5]: https://itsfoss.com/shutter-edit-button-disabled/
+[6]: https://itsfoss.com/apt-get-linux-guide/
+[7]: https://itsfoss.com/ubuntu-repositories/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/Install-Shutter-ubuntu.jpg?resize=800%2C450&ssl=1
+[9]: https://itsfoss.com/ppa-guide/
+[11]: https://itsfoss.com/y-ppa-manager/
diff --git a/published/20190911 How to set up a TFTP server on Fedora.md b/published/20190911 How to set up a TFTP server on Fedora.md
new file mode 100644
index 0000000000..bdd7ab6f0d
--- /dev/null
+++ b/published/20190911 How to set up a TFTP server on Fedora.md
@@ -0,0 +1,175 @@
+[#]: collector: (lujun9972)
+[#]: translator: (amwps290)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11371-1.html)
+[#]: subject: (How to set up a TFTP server on Fedora)
+[#]: via: (https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/)
+[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
+
+如何在 Fedora 上建立一个 TFTP 服务器
+======
+
+![][1]
+
+TFTP 即简单文件传输协议,允许用户通过 [UDP][2] 协议在系统之间传输文件。默认情况下,该协议使用 UDP 的 69 号端口。TFTP 协议广泛用于无盘设备的远程启动。因此,在你的本地网络建立一个 TFTP 服务器,这样你就可以[通过网络安装 Fedora][3],以及对其他无盘设备做一些操作,这将非常有趣。
+
+TFTP 仅仅能够从远端系统读取数据或者向远端系统写入数据,而没有列出远端服务器上文件的能力。它也没提供用户身份验证。由于安全隐患和缺乏高级功能,TFTP 通常仅用于局域网内部(LAN)。
+
+### 安装 TFTP 服务器
+
+首先你要做的事就是安装 TFTP 客户端和 TFTP 服务器:
+
+```
+dnf install tftp-server tftp -y
+```
+
+上述的这条命令会在 `/usr/lib/systemd/system` 目录下为 [systemd][4] 创建 `tftp.service` 和 `tftp.socket` 文件。
+
+```
+/usr/lib/systemd/system/tftp.service
+/usr/lib/systemd/system/tftp.socket
+```
+
+接下来,将这两个文件复制到 `/etc/systemd/system` 目录下,并重新命名。
+
+```
+cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
+cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket
+```
+
+### 修改文件
+
+当你把这些文件复制和重命名后,你就可以去添加一些额外的参数,下面是 `tftp-server.service` 刚开始的样子:
+
+```
+[Unit]
+Description=Tftp Server
+Requires=tftp.socket
+Documentation=man:in.tftpd
+
+[Service]
+ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
+StandardInput=socket
+
+[Install]
+Also=tftp.socket
+```
+
+在 `[Unit]` 部分添加如下内容:
+
+```
+Requires=tftp-server.socket
+```
+
+修改 `[ExecStart]` 行:
+
+```
+ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
+```
+
+下面是这些选项的意思:
+
+ * `-c` 选项允许创建新的文件
+ * `-p` 选项用于指明在正常系统提供的权限检查之上没有其他额外的权限检查
+ * `-s` 选项会在启动时更改根目录。建议使用该选项以确保安全性,并保持与某些引导 ROM 的兼容性,因为这些引导 ROM 不容易在其请求中包含目录名
+
+默认的上传和下载位置位于 `/var/lib/tftpboot`。
+
+下一步,修改 `[Install]` 部分的内容
+
+```
+[Install]
+WantedBy=multi-user.target
+Also=tftp-server.socket
+```
+
+不要忘记保存你的修改。
+
+下面是 `/etc/systemd/system/tftp-server.service` 文件的完整内容:
+
+```
+[Unit]
+Description=Tftp Server
+Requires=tftp-server.socket
+Documentation=man:in.tftpd
+
+[Service]
+ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
+StandardInput=socket
+
+[Install]
+WantedBy=multi-user.target
+Also=tftp-server.socket
+```
+
+### 启动 TFTP 服务器
+
+重新启动 systemd 守护进程:
+
+```
+systemctl daemon-reload
+```
+
+启动服务器:
+
+```
+systemctl enable --now tftp-server
+```
+
+要更改 TFTP 服务器允许上传和下载的权限,请使用此命令。注意 TFTP 是一种固有的不安全协议,因此不建议你在与其他人共享的网络上这样做。
+
+```
+chmod 777 /var/lib/tftpboot
+```
+
+配置防火墙让 TFTP 能够使用:
+
+```
+firewall-cmd --add-service=tftp --perm
+firewall-cmd --reload
+```
+
+### 客户端配置
+
+安装 TFTP 客户端
+
+```
+yum install tftp -y
+```
+
+运行 `tftp` 命令连接服务器。下面是一个启用详细信息选项的例子:
+
+```
+[client@thinclient:~ ]$ tftp 192.168.1.164
+tftp> verbose
+Verbose mode on.
+tftp> get server.logs
+getting from 192.168.1.164:server.logs to server.logs [netascii]
+Received 7 bytes in 0.0 seconds [inf bits/sec]
+tftp> quit
+[client@thinclient:~ ]$
+```
+
+记住,由于 TFTP 没有列出服务器上文件的能力,在使用 `get` 命令之前,你需要知道文件的确切名称。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/
+
+作者:[Curt Warfield][a]
+选题:[lujun9972][b]
+译者:[amwps290](https://github.com/amwps290)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/rcurtiswarfield/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/tftp-server-816x345.jpg
+[2]: https://en.wikipedia.org/wiki/User_Datagram_Protocol
+[3]: https://docs.fedoraproject.org/en-US/fedora/f30/install-guide/advanced/Network_based_Installations/
+[4]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
+[5]: https://unsplash.com/@laikanotebooks?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[6]: https://unsplash.com/search/photos/file-folders?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/published/20190912 Bash Script to Send a Mail About New User Account Creation.md b/published/20190912 Bash Script to Send a Mail About New User Account Creation.md
new file mode 100644
index 0000000000..849d7c5597
--- /dev/null
+++ b/published/20190912 Bash Script to Send a Mail About New User Account Creation.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11362-1.html)
+[#]: subject: (Bash Script to Send a Mail About New User Account Creation)
+[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+用 Bash 脚本发送新用户帐户创建的邮件
+======
+
+
+
+出于某些原因,你可能需要跟踪 Linux 上的新用户创建信息。同时,你可能需要通过邮件发送详细信息。这或许是审计目标的一部分,或者安全团队出于跟踪目的可能希望对此进行监控。
+
+我们可以通过其他方式进行此操作,正如我们在上一篇文章中已经描述的那样。
+
+* [在系统中创建新用户帐户时发送邮件的 Bash 脚本][1]
+
+Linux 有许多开源监控工具可以使用。但我不认为它们有办法跟踪新用户创建过程,并在发生时提醒管理员。
+
+那么我们怎样才能做到这一点?
+
+我们可以编写自己的 Bash 脚本来实现这一目标。我们过去写过许多有用的 shell 脚本。如果你想了解,请进入下面的链接。
+
+* [如何使用 shell 脚本自动化日常活动?][2]
+
+### 这个脚本做了什么?
+
+这将每天两次(一天的开始和结束)备份 `/etc/passwd` 文件,这将使你能够获取指定日期的新用户创建详细信息。
+
+我们需要添加以下两个 cron 任务来复制 `/etc/passwd` 文件。
+
+```
+# crontab -e
+
+1 0 * * * cp /etc/passwd /opt/scripts/passwd-start-$(date +"%Y-%m-%d")
+59 23 * * * cp /etc/passwd /opt/scripts/passwd-end-$(date +"%Y-%m-%d")
+```
+
+它使用 `diff` 命令来检测文件之间的差异,如果发现与昨日有任何差异,脚本将向指定 email 发送新用户详细信息。
+
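+为了直观地理解脚本提取新用户的思路,下面是一个最小的演示(快照内容与 `/tmp` 路径仅为演示假设):
+
+```
+# 构造两份 passwd 快照,第二份多出一个新用户
+printf 'root:x:0:0:root:/root:/bin/bash\n' > /tmp/passwd-start
+printf 'root:x:0:0:root:/root:/bin/bash\ntuser3:x:1001:1001::/home/tuser3:/bin/bash\n' > /tmp/passwd-end
+
+# 与脚本相同的管道:取出 diff 中新增的行,再从主目录字段截取用户名
+diff /tmp/passwd-start /tmp/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3
+# 输出:tuser3
+```
+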
+我们不用经常运行此脚本,因为用户创建不经常发生。但是,我们计划每天运行一次此脚本。
+
+这样,你可以获得有关新用户创建的综合报告。
+
+**注意:**我们在脚本中使用了我们的电子邮件地址进行演示。因此,请你务必换成自己的电子邮件地址。
+
+```
+# vi /opt/scripts/new-user-detail.sh
+
+#!/bin/bash
+mv /opt/scripts/passwd-start-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-start
+mv /opt/scripts/passwd-end-$(date --date='yesterday' '+%Y-%m-%d') /opt/scripts/passwd-end
+ucount=$(diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 | wc -l)
+if [ "$ucount" -gt 0 ]
+then
+ SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
+ MESSAGE="/tmp/new-user-logs.txt"
+ TO="2daygeek@gmail.com"
+ echo "Hostname: `hostname`" >> $MESSAGE
+ echo -e "\n" >> $MESSAGE
+ echo "The New User Details are below." >> $MESSAGE
+ echo "+------------------------------+" >> $MESSAGE
+ diff /opt/scripts/passwd-start /opt/scripts/passwd-end | grep ">" | cut -d":" -f6 | cut -d"/" -f3 >> $MESSAGE
+ echo "+------------------------------+" >> $MESSAGE
+ mail -s "$SUBJECT" "$TO" < $MESSAGE
+ rm $MESSAGE
+fi
+```
+
+给 `new-user-detail.sh` 文件添加可执行权限。
+
+```
+$ chmod +x /opt/scripts/new-user-detail.sh
+```
+
+最后添加一个 cron 任务来自动执行此操作。它在每天早上 7 点运行。
+
+```
+# crontab -e
+
+0 7 * * * /bin/bash /opt/scripts/new-user-detail.sh
+```
+
+**注意:**你会在每天早上 7 点收到一封关于昨日详情的邮件提醒。
+
+**输出:**输出类似于下面这样。
+
+```
+# cat /tmp/new-user-logs.txt
+
+Hostname: CentOS.2daygeek.com
+
+The New User Details are below.
++------------------------------+
+tuser3
++------------------------------+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-shell-script-to-monitor-user-creation-send-email/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/
+[2]: https://www.2daygeek.com/category/shell-script/
diff --git a/published/20190913 An introduction to Virtual Machine Manager.md b/published/20190913 An introduction to Virtual Machine Manager.md
new file mode 100644
index 0000000000..aa452f418a
--- /dev/null
+++ b/published/20190913 An introduction to Virtual Machine Manager.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11364-1.html)
+[#]: subject: (An introduction to Virtual Machine Manager)
+[#]: via: (https://opensource.com/article/19/9/introduction-virtual-machine-manager)
+[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
+
+虚拟机管理器(Virtual Machine Manager)简介
+======
+
+> virt-manager 为 Linux 虚拟化提供了全方位的选择。
+
+
+
+在我关于 [GNOME Boxes][3] 的[系列文章][2]中,我已经解释了 Linux 用户如何能够在他们的桌面上快速启动虚拟机。当你只需要简单的配置时,Box 可以轻而易举地创建虚拟机。
+
+但是,如果你需要在虚拟机中配置更多详细信息,那么你就需要一个工具,为磁盘、网卡(NIC)和其他硬件提供全面的选项。这时就需要 [虚拟机管理器(Virtual Machine Manager)][4](virt-manager)了。如果在应用菜单中没有看到它,你可以从包管理器或命令行安装它:
+
+* 在 Fedora 上:`sudo dnf install virt-manager`
+* 在 Ubuntu 上:`sudo apt install virt-manager`
+
+安装完成后,你可以从应用菜单或在命令行中输入 `virt-manager` 启动。
+
+![Virtual Machine Manager's main screen][5]
+
+为了演示如何使用 virt-manager 创建虚拟机,我将设置一个 Red Hat Enterprise Linux 8 虚拟机。
+
+首先,单击 “文件” 然后点击 “新建虚拟机”。Virt-manager 的开发者已经标记好了每一步(例如,“第 1 步,共 5 步”)来使其变得简单。单击 “本地安装介质” 和 “下一步”。
+
+![Step 1 virtual machine creation][6]
+
+在下个页面中,选择要安装的操作系统的 ISO 文件。(RHEL 8 镜像位于我的下载目录中。)Virt-manager 自动检测操作系统。
+
+![Step 2 Choose the ISO File][7]
+
+在步骤 3 中,你可以指定虚拟机的内存和 CPU。默认值为内存 1,024MB 和一个 CPU。
+
+![Step 3 Set CPU and Memory][8]
+
+我想给 RHEL 充足的配置来运行,我使用的硬件配置也充足,所以我将它们(分别)增加到 4,096MB 和两个 CPU。
+
+下一步为虚拟机配置存储。默认设置是 10GB 硬盘。(我保留此设置,但你可以根据需要进行调整。)你还可以选择现有磁盘镜像或在自定义位置创建一个磁盘镜像。
+
+![Step 4 Configure VM Storage][9]
+
+步骤 5 是命名虚拟机并单击“完成”。这相当于创建了一台虚拟机,也就是 GNOME Boxes 中的一个 Box。虽然技术上讲是最后一步,但你有几个选择(如下面的截图所示)。由于 virt-manager 的优点是能够自定义虚拟机,因此在单击“完成”之前,我将选中“在安装前定制配置”的复选框。
+
+因为我选择了自定义配置,virt-manager 打开了一个有一组设备和设置的页面。这里是重点!
+
+这里你也可以命名该虚拟机。在左侧列表中,你可以查看各个方面的详细信息,例如 CPU、内存、磁盘、控制器和许多其他项目。例如,我可以单击 “CPU” 来验证我在步骤 3 中所做的更改。
+
+![Changing the CPU count][10]
+
+我也可以确认我设置的内存量。
+
+当虚拟机作为服务器运行时,我通常会禁用或删除声卡。为此,请选择 “声卡” 并单击 “移除” 或右键单击 “声卡” 并选择 “移除硬件”。
+
+你还可以使用底部的 “添加硬件” 按钮添加硬件。这会打开 “添加新的虚拟硬件” 页面,你可以在其中添加其他存储设备、内存、声卡等。这就像可以访问一个库存充足的(虚拟)计算机硬件仓库。
+
+![The Add New Hardware screen][11]
+
+对 VM 配置感到满意后,单击 “开始安装”,系统将启动并开始从 ISO 安装指定的操作系统。
+
+![Begin installing the OS][12]
+
+完成后,它会重新启动,你的新虚拟机就可以使用了。
+
+![Red Hat Enterprise Linux 8 running in VMM][13]
+
+Virtual Machine Manager 是桌面 Linux 用户的强大工具。它是开源的,是专有和封闭虚拟化产品的绝佳替代品。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/introduction-virtual-machine-manager
+
+作者:[Alan Formy-Duval][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alanfdoss
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
+[2]: https://opensource.com/sitewide-search?search_api_views_fulltext=GNOME%20Box
+[3]: https://wiki.gnome.org/Apps/Boxes
+[4]: https://virt-manager.org/
+[5]: https://opensource.com/sites/default/files/1-vmm_main_0.png (Virtual Machine Manager's main screen)
+[6]: https://opensource.com/sites/default/files/2-vmm_step1_0.png (Step 1 virtual machine creation)
+[7]: https://opensource.com/sites/default/files/3-vmm_step2.png (Step 2 Choose the ISO File)
+[8]: https://opensource.com/sites/default/files/4-vmm_step3default.png (Step 3 Set CPU and Memory)
+[9]: https://opensource.com/sites/default/files/6-vmm_step4.png (Step 4 Configure VM Storage)
+[10]: https://opensource.com/sites/default/files/9-vmm_customizecpu.png (Changing the CPU count)
+[11]: https://opensource.com/sites/default/files/11-vmm_addnewhardware.png (The Add New Hardware screen)
+[12]: https://opensource.com/sites/default/files/12-vmm_rhelbegininstall.png
+[13]: https://opensource.com/sites/default/files/13-vmm_rhelinstalled_0.png (Red Hat Enterprise Linux 8 running in VMM)
diff --git a/published/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md b/published/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md
new file mode 100644
index 0000000000..d49a5146b9
--- /dev/null
+++ b/published/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md
@@ -0,0 +1,346 @@
+[#]: collector: (lujun9972)
+[#]: translator: (asche910)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11367-1.html)
+[#]: subject: (How to Find and Replace a String in File Using the sed Command in Linux)
+[#]: via: (https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+使用 sed 命令查找和替换文件中的字符串的 16 个示例
+======
+
+
+
+当你在使用文本文件时,很可能需要查找和替换文件中的字符串。`sed` 命令主要用于替换一个文件中的文本。在 Linux 中这可以通过使用 `sed` 命令和 `awk` 命令来完成。
+
+在本教程中,我们将告诉你如何使用 `sed` 命令做到这一点,然后再讨论与 `awk` 命令相关的内容。
+
+### sed 命令是什么
+
+`sed` 命令表示 Stream Editor(流编辑器),用来在 Linux 上执行基本的文本操作。它可以执行各种功能,如搜索、查找和替换、插入或删除文件中的内容。
+
+此外,它也可以执行复杂的正则表达式匹配。
+
+它可用于以下目的:
+
+* 查找和替换匹配给定的格式的内容。
+* 在指定行查找和替换匹配给定的格式的内容。
+* 在所有行查找和替换匹配给定的格式的内容。
+* 搜索并同时替换两种不同的模式。
+
+本文列出的十六个例子可以帮助你掌握 `sed` 命令。
+
+如果要使用 `sed` 命令删除文件中的行,请参阅下面的文章。
+
+注意:由于这是一篇演示文章,我们使用不带 `-i` 选项的 `sed` 命令,它只会把修改后的内容打印到 Linux 终端上,而不会改动文件本身。
+
+但是,在实际环境中,如果你想真正修改源文件,请使用带 `-i` 选项的 `sed` 命令。
+
+常见的 `sed` 替换字符串的语法如下:
+
+```
+sed -i 's/Search_String/Replacement_String/g' Input_File
+```
+
+首先我们需要了解 `sed` 的语法,各个部分的细节如下。
+
+* `sed`:这是一个 Linux 命令。
+* `-i`:这是 `sed` 命令的一个选项。默认情况下,`sed` 将结果打印到标准输出;加上这个选项后,`sed` 会就地修改文件。如果再附加一个后缀(比如 `-i.bak`),则会先创建原始文件的备份。
+* `s`:字母 `s` 是一个替换命令。
+* `Search_String`:搜索一个给定的字符串或正则表达式。
+* `Replacement_String`:替换的字符串。
+* `g`:全局替换标志。默认情况下,`sed` 命令替换每一行第一次出现的模式,它不会替换行中的其他的匹配结果。但是,提供了该替换标志时,所有匹配都将被替换。
+* `/`:分界符。
+* `Input_File`:要执行操作的文件名。
+
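+顺带用一个小例子体会 `-i` 的备份后缀(文件名仅为演示假设):
+
+```
+echo "unix" > demo.txt
+sed -i.bak 's/unix/linux/' demo.txt
+cat demo.txt       # 输出:linux
+cat demo.txt.bak   # 输出:unix
+```
+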
+让我们来看看用 `sed` 命令在文件中搜索和转换文本的一些常用例子。
+
+我们已经创建了用于演示的以下文件。
+
+```
+# cat sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 1) 如何查找和替换一行中“第一次”模式匹配
+
+下面的 `sed` 命令用 `linux` 替换文件中的 `unix`。这仅仅改变了每一行模式的第一个实例。
+
+```
+# sed 's/unix/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 2) 如何查找和替换每一行中“第 N 次”出现的模式
+
+在行中使用 `/1`、`/2`……`/n` 等标志来替换相应位置的匹配。
+
+下面的 `sed` 命令在一行中用 `linux` 来替换 `unix` 模式的第二个实例。
+
+```
+# sed 's/unix/linux/2' sed-test.txt
+
+1 Unix unix linux 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 3) 如何搜索和替换一行中所有的模式实例
+
+下面的 `sed` 命令用 `linux` 替换 `unix` 模式的所有实例,因为 `g` 是一个全局替换标志。
+
+```
+# sed 's/unix/linux/g' sed-test.txt
+
+1 Unix linux linux 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 4) 如何查找和替换一行中从“第 N 个”开始的所有匹配的模式实例
+
+下面的 `sed` 命令在一行中替换从模式的“第 N 个”开始的匹配实例。
+
+```
+# sed 's/unix/linux/2g' sed-test.txt
+
+1 Unix unix linux 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 5) 在特定的行号搜索和替换模式
+
+你可以替换特定行号中的字符串。下面的 `sed` 命令用 `linux` 仅替换第三行的 `unix` 模式。
+
+```
+# sed '3 s/unix/linux/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 6) 在特定范围行号间搜索和替换模式
+
+你可以指定行号的范围,以替换字符串。
+
+下面的 `sed` 命令在第 1 到 3 行间用 `linux` 替换 `unix` 模式。
+
+```
+# sed '1,3 s/unix/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxlinux UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 7) 如何查找和修改最后一行的模式
+
+下面的 `sed` 命令允许你只在最后一行替换匹配的字符串:它只在最后一行用 `Unix` 替换 `Linux` 模式。
+
+```
+# sed '$ s/Linux/Unix/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Unix is free and opensource operating system
+```
+
+### 8) 在一行中如何只查找和替换正确的模式匹配
+
+你可能已经注意到,在第 6 个示例中,子串 `linuxunix` 被替换成了 `linuxlinux`。如果你只想替换完整匹配的单词,可以在搜索串的两端加上边界符 `\b`。
+
+```
+# sed '1,3 s/\bunix\b/linux/' sed-test.txt
+
+1 Unix linux unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 9) 如何以不区分大小写来搜索与替换模式
+
+大家都知道,Linux 是区分大小写的。为了与不区分大小写的模式匹配,使用 `I` 标志。
+
+```
+# sed 's/unix/linux/gI' sed-test.txt
+
+1 linux linux linux 23
+2 linux Linux 34
+3 linuxlinux linuxLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 10) 如何查找和替换包含分隔符的字符串
+
+当你搜索和替换含分隔符的字符串时,需要用反斜杠 `\` 来转义分隔符。
+
+在这个例子中,我们将用 `/usr/bin/fish` 来替换 `/bin/bash`。
+
+```
+# sed 's/\/bin\/bash/\/usr\/bin\/fish/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /usr/bin/fish CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+上述 `sed` 命令按预期工作,但它看起来很糟糕。为了简化,大部分的人会用竖线 `|` 作为正则表达式的分隔符,所以,我建议你也这样用。
+
+```
+# sed 's|/bin/bash|/usr/bin/fish/|g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /usr/bin/fish/ CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 11) 如何以给定的模式来查找和替换数字
+
+类似地,数字也可以用模式来替换。下面的 `sed` 命令用 `number` 替换所有与 `[0-9]` 匹配的数字。
+
+```
+# sed 's/[0-9]/number/g' sed-test.txt
+
+number Unix unix unix numbernumber
+number linux Linux numbernumber
+number linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 12) 如何用模式仅查找和替换两位数字
+
+如果你想用模式来代替两位数字,使用下面的 `sed` 命令。
+
+```
+# sed 's/\b[0-9]\{2\}\b/number/g' sed-test.txt
+
+1 Unix unix unix number
+2 linux Linux number
+3 linuxunix UnixLinux
+linux /bin/bash CentOS Linux OS
+Linux is free and opensource operating system
+```
+
+### 13) 如何用 sed 命令仅打印被替换的行
+
+如果你想显示仅更改的行,使用下面的 `sed` 命令。
+
+* `p`:打印被替换的行。由于 `sed` 默认会输出处理过的每一行,单独使用 `p` 时被替换的行会在终端上输出两次。
+* `-n`:抑制默认输出,与 `p` 配合使用即可只打印被替换的行。
+
+```
+# sed -n 's/Unix/Linux/p' sed-test.txt
+
+1 Linux unix unix 23
+3 linuxunix LinuxLinux
+```
+
+### 14) 如何同时运行多个 sed 命令
+
+以下 `sed` 命令同时搜索和替换两个不同的模式。
+
+下面的 `sed` 命令搜索 `linuxunix` 和 `CentOS` 模式,用 `LINUXUNIX` 和 `RHEL8` 一次性更换它们。
+
+```
+# sed -e 's/linuxunix/LINUXUNIX/g' -e 's/CentOS/RHEL8/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 LINUXUNIX UnixLinux
+linux /bin/bash RHEL8 Linux OS
+Linux is free and opensource operating system
+```
+
+下面的 `sed` 命令搜索替换两个不同的模式,并一次性替换为一个字符串。
+
+以下 `sed` 的命令搜索 `linuxunix` 和 `CentOS` 模式,用 `Fedora30` 替换它们。
+
+```
+# sed -e 's/\(linuxunix\|CentOS\)/Fedora30/g' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 Fedora30 UnixLinux
+linux /bin/bash Fedora30 Linux OS
+Linux is free and opensource operating system
+```
+
+### 15) 如果给定的模式匹配,如何查找和替换整个行
+
+如果某行匹配给定模式,可以用 `sed` 的 `c` 命令将整行替换为新的内容。
+
+```
+# sed '/OS/ c\
+New Line
+' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+New Line
+Linux is free and opensource operating system
+```
+
+### 16) 如何搜索和替换相匹配的模式行
+
+在 `sed` 命令中你可以为行指定适合的模式。在匹配该模式的情况下,`sed` 命令搜索要被替换的字符串。
+
+下面的 `sed` 命令首先查找具有 `OS` 模式的行,然后用 `ArchLinux` 替换单词 `Linux`。
+
+```
+# sed '/OS/ s/Linux/ArchLinux/' sed-test.txt
+
+1 Unix unix unix 23
+2 linux Linux 34
+3 linuxunix UnixLinux
+linux /bin/bash CentOS ArchLinux OS
+Linux is free and opensource operating system
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-sed-to-find-and-replace-string-in-files/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[Asche910](https://github.com/asche910)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
diff --git a/published/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md b/published/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md
new file mode 100644
index 0000000000..69911195d2
--- /dev/null
+++ b/published/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11345-1.html)
+[#]: subject: (GNOME 3.34 Released With New Features & Performance Improvements)
+[#]: via: (https://itsfoss.com/gnome-3-34-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+GNOME 3.34 发布
+======
+
+
+
+最新版本的 GNOME 代号为“塞萨洛尼基”。考虑到这个版本经过了 6 个月的开发,这应该是对 [GNOME 3.32][1] 的一次令人印象深刻的升级。
+
+在此版本中,有许多新功能和显著的性能改进。除了新功能外,可定制的程度也得到了提升。
+
+以下是新的变化:
+
+### GNOME 3.34 的关键改进
+
+你可以观看此视频,了解 GNOME 3.34 中的新功能:
+
+- [视频](https://img.linux.net.cn/static/video/_-qAjPRr5SGoY.mp4)
+
+#### 拖放图标到文件夹
+
+新的 shell 主题允许你拖放应用程序抽屉中的图标以重新排列它们,或将它们组合到一个文件夹中。你可能已经在 Android 或 iOS 智能手机中使用过此类功能。
+
+![You can now drag and drop icons into a folder][2]
+
+#### 改进的日历管理器
+
+改进的日历管理器可以轻松地与第三方服务集成,使你能够直接从 Linux 系统管理日程安排,而无需单独使用其他应用程序。
+
+![GNOME Calendar Improvements][3]
+
+#### 背景选择的设置
+
+现在,更容易为主屏幕和锁定屏幕选择自定义背景,因为它在同一屏幕中显示所有可用背景,这至少为你节省了一次鼠标点击。
+
+![It’s easier to select backgrounds now][4]
+
+#### 重新排列搜索选项
+
+搜索选项和结果可以手动重新排列。因此,当你要搜索某些内容时,可以决定哪些内容先出现。
+
+#### 响应式设计的“设置”应用
+
+设置菜单 UI 现在具有响应性,因此无论你使用何种类型(或尺寸)的设备,都可以轻松访问所有选项。这肯定对 [Linux 智能手机(如 Librem 5)][5] 上的 GNOME 有所帮助。
+
+除了所有这些之外,[官方公告][6]还提到了对开发人员有用的补充(增加了系统分析器和虚拟化改进):
+
+> 对于开发人员,GNOME 3.34 在 Sysprof 中包含更多数据源,使应用程序的性能分析更加容易。对 Builder 的多项改进中包括集成的 D-Bus 检查器。
+
+![Improved Sysprof tool in GNOME 3.34][7]
+
+### 如何获得 GNOME 3.34?
+
+虽然新版本已经发布,但它还没有进入 Linux 发行版的官方存储库。所以,我们建议等待它,并在它作为更新包提供时进行升级。不管怎么说,如果你想构建它,你都可以在这里找到[源代码][8]。
+
+嗯,就是这样。如果你感兴趣,可以查看[完整版本说明][10]以了解技术细节。
+
+你如何看待新的 GNOME 3.34?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gnome-3-34-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.gnome.org/news/2019/03/gnome-3-32-released/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/icon-grid-drag-gnome.png?ssl=1
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/gnome-calendar-improvements.jpg?ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/background-panel-GNOME.png?resize=800%2C555&ssl=1
+[5]: https://itsfoss.com/librem-linux-phone/
+[6]: https://www.gnome.org/press/2019/09/gnome-3-34-released/
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/sysprof-gnome.jpg?resize=800%2C493&ssl=1
+[8]: https://download.gnome.org/
+[9]: https://itsfoss.com/fedora-26-release/
+[10]: https://help.gnome.org/misc/release-notes/3.34/
diff --git a/published/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md b/published/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md
new file mode 100644
index 0000000000..0f5ce59599
--- /dev/null
+++ b/published/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11349-1.html)
+[#]: subject: (Manjaro Linux Graduates From A Hobby Project To A Professional Project)
+[#]: via: (https://itsfoss.com/manjaro-linux-business-formation/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Manjaro Linux 从业余爱好项目成长为专业项目
+======
+
+> Manjaro 正在走专业化路线。虽然 Manjaro 社区将负责项目的开发和其他相关活动,但该团队已成立了一家公司作为其法人实体处理商业协议和专业服务。
+
+Manjaro 是一个相当流行的 Linux 发行版,而它只是由三个人(Bernhard、Jonathan 和 Philip)于 2011 年激情之下创建的项目。现在,它是目前[最好的 Linux 发行版][1]之一,所以它真的不能一直只是个业余爱好项目了,对吧?
+
+嗯,现在有个好消息:Manjaro 已经建立了一家新公司 “Manjaro GmbH & Co. KG”,以 [Blue Systems][2] 为顾问,以便能够全职雇佣维护人员,并探索未来的商业机会。
+
+![][3]
+
+### 具体有什么变化?
+
+根据[官方公告][4],Manjaro 项目将保持不变。但是,成立了一家新公司来保护该项目,以允许他们制定法律合同、官方协议和进行其他潜在的商业活动。因此,这使得这个“业余爱好项目”成为了一项专业工作。
+
+除此之外,捐赠资金将转给非营利性的[财政托管][5]([CommunityBridge][6] 和 [OpenCollective][7]),让他们来代表项目接受和管理资金。请注意,这些捐赠没有被用于创建这个公司。因此,将资金转移给非营利的财政托管,既能保障捐赠资金的用途,也能确保透明度。
+
+### 这会有何改善?
+
+随着这个公司的成立,(如开发者所述)新结构将以下列方式帮助 Manjaro:
+
+* 使开发人员能够全职投入 Manjaro 及其相关项目;
+* 在 Linux 相关的比赛和活动中与其他开发人员进行互动;
+* 保护 Manjaro 作为一个社区驱动项目的独立性,并保护其品牌;
+* 提供更快的安全更新,更有效地响应用户需求;
+* 提供在专业层面上作为公司行事的手段。
+
+Manjaro 团队还阐明了它将如何继续致力于社区:
+
+> Manjaro 的使命和目标将与以前一样 —— 支持 Manjaro 的协作开发及其广泛使用。这项工作将继续通过捐赠和赞助来支持,这些捐赠和赞助在任何情况下都不会被这个成立的公司使用。
+
+### 关于 Manjaro 公司的更多信息
+
+尽管他们提到该项目将独立于公司,但并非所有人都清楚当有了一家具有商业利益的公司时 Manjaro 与“社区”的关系。因此,该团队还在公告中澄清了他们作为一家公司的计划。
+
+> Manjaro GmbH & Co.KG 的成立旨在有效地参与商业协议、建立合作伙伴关系并提供专业服务。有了这个,Manjaro 开发者 Bernhard 和 Philip 现在可以全职工作投入到 Manjaro,而 Blue Systems 将担任顾问。
+
+> 公司将能够正式签署合同,并承担社区本身无法承担的职责与保障。
+
+### 总结
+
+因此,通过这一举措以及商业机会,他们计划全职工作并聘请贡献者。
+
+当然,他们现在要做“生意”了(但愿不是往坏的方向)。对此公告的大多数反应都是积极的,我们都祝他们好运。虽然有些人可能对具有“商业”利益的“社区”项目持怀疑态度(还记得 [FreeOffice 和 Manjaro 的挫败][9]吗?),但我认为这是一个有趣的举措。
+
+你怎么看?请在下面的评论中告诉我们你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/manjaro-linux-business-formation/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-distributions/
+[2]: https://www.blue-systems.com/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/manjaro-gmbh.jpg?ssl=1
+[4]: https://forum.manjaro.org/t/manjaro-is-taking-the-next-step/102105
+[5]: https://en.wikipedia.org/wiki/Fiscal_sponsorship
+[6]: https://communitybridge.org/
+[7]: https://opencollective.com/
+[8]: https://itsfoss.com/linux-mint-hacked/
+[9]: https://itsfoss.com/libreoffice-freeoffice-manjaro-linux/
diff --git a/published/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md b/published/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md
new file mode 100644
index 0000000000..e70c9f8832
--- /dev/null
+++ b/published/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11351-1.html)
+[#]: subject: (Sandboxie's path to open source, update on the Pentagon's open source initiative, open source in Hollywood, and more)
+[#]: via: (https://opensource.com/article/19/9/news-september-15)
+[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
+
+开源新闻综述:五角大楼、好莱坞和 Sandboxie 的开源
+======
+
+> 不要错过两周以来最大的开源头条新闻。
+
+![Weekly news roundup with TV][1]
+
+在本期我们的开源新闻综述中有 Sandboxie 的开源之路、五角大楼开源计划的进一步变化、好莱坞开源等等!
+
+### 五角大楼不符合白宫对开源软件的要求
+
+2016 年,美国白宫要求每个美国政府机构必须在三年内开放至少 20% 的定制软件。2017 年有一篇关于这一倡议的[有趣文章][5],其中列出了一些令人激动的事情和面临的挑战。
+
+根据美国政府问责局(GAO)的说法,[美国五角大楼做的还远远不足][6]。
+
+在一篇关于 Nextgov 的文章中,Jack Corrigan 写道,截至 2019 年 7 月,美国五角大楼仅发布了 10% 的代码为开源代码。他们还没有实施的其它白宫任务包括要求制定开源软件政策和定制代码的清单。
+
+根据该报告,一些美国政府官员告诉 GAO,他们担心美国政府部门间共享代码的安全风险。他们还承认没有创建衡量开源工作成功的指标。美国五角大楼的首席技术官将五角大楼的规模列为不执行白宫的开源任务的原因。在周二发布的一份报告中,GAO 表示,“在(美国国防部)完全实施其试点计划并确定完成行政管理和预算局(OMB)要求的里程碑之前,该部门将无法达成显著的成本节约和效率的目的。”
+
+### Sandboxie 在开源的过程中变成了免费软件
+
+一家英国安全公司 Sophos Group plc 发布了[其流行的 Sandboxie 工具的免费版本][2],该工具用作 Windows 的隔离操作环境([可在此下载][2])。
+
+Sophos 表示,由于 Sandboxie 不是其业务的核心,更省事的决定本是直接关闭它。但 Sandboxie 可以在安全的环境中运行未知软件而无需让用户的操作系统冒风险,因而[广受赞誉][3],因此该团队正在投入额外的工作将其作为开源软件发布。这个免费但非开源的中间阶段似乎与当前的系统设计有关,因为它需要激活密钥:
+
+> Sandboxie 目前使用许可证密钥来激活和授予仅针对付费客户开放的高级功能的访问权限(与使用免费版本的用户相比)。我们修改了代码,并发布了一个不限制任何功能的免费版本的更新版。换句话说,新的免费许可证将可以访问之前仅供付费客户使用的所有功能。
+
+受此工具的社区影响,Sophos 的高级领导人宣布发布 Sandboxie 版本 5.31.4,这个不受限制的程序版本将保持免费,直到该工具完全开源。
+
+> “Sandboxie 用户群代表了一些最热情、前瞻性和知识渊博的安全社区成员,我们不想让你失望,”[Sophos 的博文说到][4]。“经过深思熟虑后,我们认为让 Sandboxie 走下去的最佳方式是将其交还给用户,将其转换为开源工具。”
+
+### 志愿者团队致力于查找和数字化无版权书籍
+
+1924 年以前在美国出版的所有书籍都是[公有的、可以自由使用/复制的][7]。1964 年及之后出版的图书在出版日期后将保留 95 年的版权。但由于版权漏洞,1923 年至 1964 年间出版的书籍中有高达 75% 可以免费阅读和复制。剩下的工作就是逐一确认到底是哪些书,而这非常耗时。
+
+因此,一些图书馆、志愿者和档案管理员们联合起来了解哪些图书没有版权,然后将其数字化并上传到互联网。由于版权续约记录已经数字化,因此很容易判断 1923 年至 1964 年间出版的书籍是否更新了其版权。但是,由于试图提供的是反证,因此寻找缺乏版权更新的难度要大得多。
+
+参与者包括纽约公共图书馆(NYPL),它[最近解释了][8]为什么这个耗时的项目是值得的。为了帮助更快地找到更多书籍,NYPL 将许多记录转换为 XML 格式。这样可以更轻松地自动执行查找可以将哪些书籍添加到公共域的过程。
+
+### 好莱坞的学院软件基金会获得新成员
+
+微软和苹果公司宣布计划以学院软件基金会(ASWF)的高级会员做出贡献。他们将加入[创始董事会成员][9],其它成员还包括 Netflix、Google Cloud、Disney Studios 和 Sony Pictures。
+
+学院软件基金会于 2018 年作为[电影艺术与科学学院][10]和[Linux 基金会][11]的联合项目而启动。
+
+> 学院软件基金会(ASWF)的使命是提高贡献到内容创作行业的开源软件库的质量和数量;提供一个中立的论坛来协调跨项目的工作;提供通用的构建和测试基础架构;并为个人和组织提供参与推进我们的开源生态系统的明确途径。
+
+在第一年内,该基金会构建了 [OpenTimelineIO][12],这是一种开源 API 和交换格式,可帮助工作室团队跨部门协作。OpenTimelineIO 在去年 7 月被该[基金会技术咨询委员会][13]正式接受为第五个托管项目。他们现在将它与 [OpenColorIO][14]、[OpenCue][15]、[OpenEXR][16] 和 [OpenVDB][17] 并列维护。
+
+### 其它新闻
+
+* [Comcast 将开源网络软件投入生产环境][18]
+* [SD Times 本周开源项目:Ballerina][19]
+* [美国国防部努力实施开源计划][20]
+* [Kong 开源通用服务网格 Kuma][21]
+* [Eclipse 推出 Jakarta EE 8][22]
+
+一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/news-september-15
+
+作者:[Lauren Maffeo][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lmaffeo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
+[2]: https://www.sandboxie.com/DownloadSandboxie
+[3]: https://betanews.com/2019/09/13/sandboxie-free-open-source/
+[4]: https://community.sophos.com/products/sandboxie/f/forum/115109/major-sandboxie-news-sandboxie-is-now-a-free-tool-with-plans-to-transition-it-to-an-open-source-tool/414522
+[5]: https://medium.com/@DefenseDigitalService/code-mil-an-open-source-initiative-at-the-pentagon-5ae4986b79bc
+[6]: https://www.nextgov.com/analytics-data/2019/09/pentagon-needs-make-more-software-open-source-watchdog-says/159832/
+[7]: https://www.vice.com/en_us/article/a3534j/libraries-and-archivists-are-scanning-and-uploading-books-that-are-secretly-in-the-public-domain
+[8]: https://www.nypl.org/blog/2019/09/01/historical-copyright-records-transparency
+[9]: https://variety.com/2019/digital/news/microsoft-apple-academy-software-foundation-1203334675/
+[10]: https://www.oscars.org/
+[11]: http://www.linuxfoundation.org/
+[12]: https://github.com/PixarAnimationStudios/OpenTimelineIO
+[13]: https://www.linuxfoundation.org/press-release/2019/07/opentimelineio-joins-aswf/
+[14]: https://opencolorio.org/
+[15]: https://www.opencue.io/
+[16]: https://www.openexr.com/
+[17]: https://www.openvdb.org/
+[18]: https://www.fiercetelecom.com/operators/comcast-puts-open-source-networking-software-into-production
+[19]: https://sdtimes.com/os/sd-times-open-source-project-of-the-week-ballerina/
+[20]: https://www.fedscoop.com/open-source-software-dod-struggles/
+[21]: https://sdtimes.com/micro/kong-open-sources-universal-service-mesh-kuma/
+[22]: https://devclass.com/2019/09/11/hey-were-open-source-again-eclipse-unveils-jakarta-ee-8/
diff --git a/published/20190916 How to freeze and lock your Linux system (and why you would want to).md b/published/20190916 How to freeze and lock your Linux system (and why you would want to).md
new file mode 100644
index 0000000000..37b0a31311
--- /dev/null
+++ b/published/20190916 How to freeze and lock your Linux system (and why you would want to).md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11384-1.html)
+[#]: subject: (How to freeze and lock your Linux system (and why you would want to))
+[#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何冻结和锁定你的 Linux 系统
+======
+
+> 冻结终端窗口并锁定屏幕意味着什么 - 以及如何在 Linux 系统上管理这些活动。
+
+
+
+如何在 Linux 系统上冻结和“解冻”屏幕,很大程度上取决于这些术语的含义。有时“冻结屏幕”可能意味着冻结终端窗口,以便该窗口内的活动停止。有时它意味着锁定屏幕,这样就没人可以在你去拿一杯咖啡时,走到你的系统旁边代替你输入命令了。
+
+在这篇文章中,我们将研究如何使用和控制这些操作。
+
+### 如何在 Linux 上冻结终端窗口
+
+你可以输入 `Ctrl+S`(按住 `Ctrl` 键和 `s` 键)冻结 Linux 系统上的终端窗口。把 `s` 想象成“开始冻结”。如果在此操作后继续输入命令,那么你不会看到输入的命令或你希望看到的输出。实际上,命令将堆积在一个队列中,并且只有在通过输入 `Ctrl+Q` 解冻时才会运行。把它想象成“退出冻结”。
+
+查看其工作的一种简单方式是使用 `date` 命令,然后输入 `Ctrl+S`。接着再次输入 `date` 命令并等待几分钟后再次输入 `Ctrl+Q`。你会看到这样的情景:
+
+```
+$ date
+Mon 16 Sep 2019 06:47:34 PM EDT
+$ date
+Mon 16 Sep 2019 06:49:49 PM EDT
+```
+
+这两次时间显示的差距表示第二次的 `date` 命令直到你解冻窗口时才运行。
+
+无论你是坐在计算机屏幕前还是使用 PuTTY 等工具远程运行,终端窗口都可以冻结和解冻。
+
+这有一个可以派上用场的小技巧。如果你发现终端窗口似乎处于非活动状态,那么可能是你或其他人无意中输入了 `Ctrl+S`。这时,不妨输入 `Ctrl+Q`,看看能否解决问题。
+
+### 如何锁定屏幕
+
+要在离开办公桌前锁定屏幕,请按住 `Ctrl+Alt+L` 或 `Super+L`(即按住 `Windows` 键和 `L` 键)。屏幕锁定后,你必须输入密码才能重新登录。
+
+### Linux 系统上的自动屏幕锁定
+
+虽然最佳实践建议你在离开办公桌前锁定屏幕,但 Linux 系统通常也会在一段时间没有活动后自动锁定。“消隐”屏幕(使其变暗)和真正锁定屏幕(需要重新登录才能使用)的时间取决于你在个人首选项中的设置。
+
+要更改使用 GNOME 屏幕保护程序时屏幕变暗所需的时间,请打开设置窗口,选择 “Power”,然后选择 “Blank screen”,你可以选择 1 到 15 分钟或从不变暗。要设置屏幕变暗后多久锁定,请进入设置,选择 “Privacy”,然后选择 “Screen Lock”,可选的时间包括 1、2、3、5 和 30 分钟或一小时。
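+
+如果你更喜欢命令行,也可以尝试用 `gsettings` 修改对应的设置键(以下键名基于 GNOME 桌面的常见模式,仅作演示,具体可用性可能因版本而异):
+
+```
+# 5 分钟(300 秒)无活动后消隐屏幕
+gsettings set org.gnome.desktop.session idle-delay 300
+# 屏幕消隐后立即锁定
+gsettings set org.gnome.desktop.screensaver lock-enabled true
+gsettings set org.gnome.desktop.screensaver lock-delay 0
+```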
+
+### 如何在命令行锁定屏幕
+
+如果你使用的是 GNOME 屏幕保护程序,你还可以使用以下命令从命令行锁定屏幕:
+
+```
+gnome-screensaver-command -l
+```
+
+这里是小写的 L,代表“锁定”。
+
+### 如何检查锁屏状态
+
+你还可以使用 `gnome-screensaver` 命令检查屏幕是否已锁定。使用 `--query` 选项,该命令会告诉你屏幕当前是否已锁定(即处于活动状态)。使用 `--time` 选项,它会告诉你锁定生效的时间。这是一个示例脚本:
+
+```
+#!/bin/bash
+
+gnome-screensaver-command --query
+gnome-screensaver-command --time
+```
+
+运行脚本将会输出:
+
+```
+$ ./check_lockscreen
+The screensaver is active
+The screensaver has been active for 1013 seconds.
+```
+
+### 总结
+
+如果你记住了正确的控制按键,那么冻结终端窗口是很简单的。对于屏幕锁定,其效果则取决于你自己的设置,或者你是否习惯使用默认设置。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/20190916 Linux Plumbers, Appwrite, and more industry trends.md b/published/20190916 Linux Plumbers, Appwrite, and more industry trends.md
new file mode 100644
index 0000000000..8ca1e16da6
--- /dev/null
+++ b/published/20190916 Linux Plumbers, Appwrite, and more industry trends.md
@@ -0,0 +1,92 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11355-1.html)
+[#]: subject: (Linux Plumbers, Appwrite, and more industry trends)
+[#]: via: (https://opensource.com/article/19/9/conferences-industry-trends)
+[#]: author: (Tim Hildred https://opensource.com/users/thildred)
+
+每周开源点评:Linux Plumbers、Appwrite
+======
+
+> 了解每周的开源社区和行业趋势。
+
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+作为采用开源开发模式的企业软件公司的高级产品营销经理,这是我为产品营销人员、经理和其他相关人员发布的有关开源社区、市场和行业趋势的定期更新。以下是本次更新中我最喜欢的五篇文章。
+
+### 《在 Linux Plumbers 会议上解决 Linux 具体细节》
+
+- [文章地址][2]
+
+> Linux 的创建者 Linus Torvalds 告诉我,内核维护者峰会是顶级 Linux 内核开发人员的邀请制聚会。但是,虽然你可能认为这是关于规划 Linux 内核的未来的会议,但事实并非如此。“这个维护者峰会真的与众不同,因为它甚至不谈论技术问题。”相反,“全都谈的是关于创建和维护 Linux 内核的过程。”
+
+**影响**:这就像技术版的 Bilderberg 会议:你们举办的都是各种华丽的流行语会议,而在这里我们做出的才是真正的决定。不过我觉得,可能不太会涉及到私人飞机吧。(LCTT 译注:有关 Bilderberg 请自行搜索)
+
+### 《微软主办第一个 WSL 会议》
+
+- [文章地址][3]
+
+> [Whitewater Foundry][4] 是一家专注于 [Windows 的 Linux 子系统(WSL)][5]的创业公司,它的创始人 Hayden Barnes [宣布举办 WSLconf 1][6],这是 WSL 的第一次社区会议。该活动将于 2020 年 3 月 10 日至 11 日在华盛顿州雷德蒙市的微软总部 20 号楼举行。会议是合办的。我们已经知道将有来自[Pengwin(Whitewater 的 Linux for Windows)][7]、微软 WSL 和 Canonical 的 Ubuntu on WSL 开发人员的演讲和研讨会。
+
+**影响**:微软正在培育社区成长的种子,围绕它越来越多地采用开源软件并作出贡献。这足以让我眼前一亮。
+
+### 《Appwrite 简介:面向移动和 Web 开发人员的开源后端服务器》
+
+- [文章链接][10]
+
+> [Appwrite][11] 是一个新的[开源软件][12],用于前端和移动开发人员的端到端的后端服务器,可以让你更快地构建应用程序。[Appwrite][13] 的目标是抽象和简化 REST API 和工具背后的常见开发任务,以帮助开发人员更快地构建高级应用程序。
+>
+> 在这篇文章中,我将简要介绍一些主要的 [Appwrite][14] 服务,并解释它们的主要功能以及它们的设计方式,相比从头开始编写所有后端 API,这可以帮助你更快地构建下一个项目。
+
+**影响**:随着更多开源中间件变得更易于使用,软件开发越来越容易。Appwrite 声称可将开发时间和成本降低 70%。想象一下这对小型移动开发机构或个人开发者意味着什么。我很好奇他们将如何通过这种方式赚钱。
+
+### 《“不只是 IT”:开源技术专家说协作文化是政府转型的关键》
+
+- [文章链接][15]
+
+> AGL(敏捷的政府领导)正在为那些帮助政府更好地为公众工作的人们提供价值支持网络。该组织专注于我非常热衷的事情:DevOps、数字化转型、开源以及许多政府 IT 领导者首选的类似主题。AGL 为我提供了一个社区,可以了解当今最优秀和最聪明的人所做的事情,并与整个行业的同行分享这些知识。
+
+**影响**:不管你的政治信仰如何,对政府都很容易愤世嫉俗。我发现令人耳目一新的是,政府也是由一个个实际的人组成的,他们大多在尽力将相关技术应用于公益事业。特别是当该技术是开源的!
+
+### 《彭博社如何通过 Kubernetes 实现接近 90-95% 的硬件利用率》
+
+- [文章链接][16]
+
+> 2016 年,彭博社采用了 Kubernetes(当时仍处于 alpha 阶段中),自使用该项目的上游代码以来,取得了显著成果。Rybka 说:“借助 Kubernetes,我们能够非常高效地使用我们的硬件,使利用率接近 90% 到 95%。”Kubernetes 中的自动缩放使系统能够更快地满足需求。此外,Kubernetes “为我们提供了标准化我们构建和管理服务的方法的能力,这意味着我们可以花费更多时间专注于实际使用我们支持的开源工具,”数据和分析基础架构主管 Steven Bower 说,“如果我们想要在世界的另一个位置建立一个新的集群,那么这样做真的非常简单。一切都只是代码。配置就是代码。”
+
+**影响**:没有什么能像利用率统计那样穿过营销的迷雾。我听说过关于 Kube 的一件事是,当人们运行它时,他们不知道用它做什么。像这样的用例可以给他们(和你)一些想要的东西。
+
+*我希望你喜欢这个上周重要内容的清单,请下周回来了解有关开源社区、市场和行业趋势的更多信息。*
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/conferences-industry-trends
+
+作者:[Tim Hildred][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/thildred
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://www.zdnet.com/article/working-on-linuxs-nuts-and-bolts-at-linux-plumbers/
+[3]: https://www.zdnet.com/article/microsoft-hosts-first-windows-subsystem-for-linux-conference/
+[4]: https://github.com/WhitewaterFoundry
+[5]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
+[6]: https://www.linkedin.com/feed/update/urn:li:activity:6574754435518599168/
+[7]: https://www.zdnet.com/article/pengwin-a-linux-specifically-for-windows-subsystem-for-linux/
+[8]: https://canonical.com/
+[9]: https://ubuntu.com/
+[10]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d
+[11]: https://appwrite.io
+[12]: https://github.com/appwrite/appwrite
+[13]: https://medium.com/@eldadfux/introducing-appwrite-an-open-source-backend-server-for-mobile-web-developers-4be70731575d?source=friends_link&sk=b6a2be384aafd1fa5b1b6ff12906082c
+[14]: https://appwrite.io/
+[15]: https://medium.com/agile-government-leadership/more-than-just-it-open-source-technologist-says-collaborative-culture-is-key-to-government-c46d1489f822
+[16]: https://www.cncf.io/blog/2019/09/12/how-bloomberg-achieves-close-to-90-95-hardware-utilization-with-kubernetes/
diff --git a/published/20190917 Getting started with Zsh.md b/published/20190917 Getting started with Zsh.md
new file mode 100644
index 0000000000..460ab91c92
--- /dev/null
+++ b/published/20190917 Getting started with Zsh.md
@@ -0,0 +1,215 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11378-1.html)
+[#]: subject: (Getting started with Zsh)
+[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+Zsh 入门
+======
+
+> 从 Bash 进阶到 Z-shell,改进你的 shell 体验。
+
+
+
+Z-shell(Zsh)是一种 Bourne 式的交互式 POSIX shell,以其丰富的创新功能而著称。Z-Shell 用户经常会提及它的许多便利之处,赞誉它对效率的提高和丰富的自定义支持。
+
+如果你刚接触 Linux 或 Unix,但经验足以让你打开终端并运行一些命令,那么你使用的可能就是 Bash shell。Bash 可能是最具代表性的自由软件 shell,部分是因为它先进的功能,部分是因为它是大多数流行的 Linux 和 Unix 操作系统上的默认 shell。但是,用得越多,你可能越会发现一些细节本可以做得更好。开源的一个众所周知的优点就是有得选。所以,许多人选择从 Bash“毕业”到 Z。
+
+### Zsh 介绍
+
+Shell 只是操作系统的接口。交互式 shell 程序允许你通过称为*标准输入*(stdin)的某个东西键入命令,并通过*标准输出*(stdout)和*标准错误*(stderr)获取输出。有很多种 shell,如 Bash、Csh、Ksh、Tcsh、Dash 和 Zsh。每个都有其开发者所认为最适合于 Shell 的功能。而这些功能的好坏,则取决于最终用户。
+
+Zsh 具有交互式制表符补全、自动文件搜索、支持正则表达式、用于定义命令范围的高级速记符,以及丰富的主题引擎等功能。这些功能也包含在你所熟悉的其它 Bourne 式 shell 环境中,这意味着,如果你已经了解并喜欢 Bash,那么你也会熟悉 Zsh,除此以外,它还有更多的功能。你可能会认为它是一种 Bash++。
+
+### 安装 Zsh
+
+用你的包管理器安装 Zsh。
+
+在 Fedora、RHEL 和 CentOS 上:
+
+```
+$ sudo dnf install zsh
+```
+
+在 Ubuntu 和 Debian 上:
+
+```
+$ sudo apt install zsh
+```
+
+在 MacOS 上你可以使用 MacPorts 安装它:
+
+```
+$ sudo port install zsh
+```
+
+或使用 Homebrew:
+
+```
+$ brew install zsh
+```
+
+在 Windows 上也可以运行 Zsh,但是只能在 Linux 层或类似 Linux 的层之上运行,例如 [Windows 的 Linux 子系统][2](WSL)或 [Cygwin][3]。这类安装超出了本文的范围,因此请参考微软的文档。
+
+### 设置 Zsh
+
+Zsh 不是终端模拟器。它是在终端仿真器中运行的 shell。因此,要启动 Zsh,必须首先启动一个终端窗口,例如 GNOME Terminal、Konsole、Terminal、iTerm2、rxvt 或你喜欢的其它终端。然后,你可以通过键入以下命令启动 Zsh:
+
+```
+$ zsh
+```
+
+首次启动 Zsh 时,会要求你选择一些配置选项。这些都可以在以后更改,因此请按 `1` 继续。
+
+```
+This is the Z Shell configuration function for new users, zsh-newuser-install.
+
+(q) Quit and do nothing.
+
+(0) Exit, creating the file ~/.zshrc
+
+(1) Continue to the main menu.
+```
+
+偏好设置分为四类,因此请从顶部开始。
+
+1. 第一个类使你可以选择在 shell 历史记录文件中保留多少个命令。默认情况下,它设置为 1,000 行。
+2. Zsh 补全是其最令人兴奋的功能之一。为了简单起见,请考虑使用其默认选项激活它,直到你习惯了它的工作方式。按 `1` 使用默认选项,按 `2` 手动设置选项。
+3. 选择 Emacs 式键绑定或 Vi 式键绑定。Bash 使用 Emacs 式绑定,因此你可能已经习惯了。
+4. 最后,你可以了解(以及设置或取消设置)Zsh 的一些精妙的功能。例如,当你提供不带命令的非可执行路径时,可以通过让 Zsh 来改变目录而无需你使用 `cd` 命令。要激活这些额外选项之一,请输入选项号并输入 `s` 进行设置。请尝试打开所有选项以获得完整的 Zsh 体验。你可以稍后通过编辑 `~/.zshrc` 取消设置它们。
+
+要完成配置,请按 `0`。
+
+### 使用 Zsh
+
+刚开始使用时,Zsh 的感受就像使用 Bash 一样,这无疑是它众多的优点之一。比如,Bash 和 Tcsh 之间就存在严重的差异;而如果你在工作中或服务器上必须使用 Bash,在家里则可以轻松地尝试和使用 Zsh,因为两者之间几乎可以无缝切换,这是一种切实的便利。
+
+#### 在 Zsh 中改变目录
+
+正是这些微小的差异使 Zsh 变得好用。首先,尝试在没有 `cd` 命令的情况下,将目录更改为 `Documents` 文件夹。简直太棒了,难以置信。如果你输入的是目录路径而没有进一步的指令,Zsh 会更改为该目录:
+
+```
+% Documents
+% pwd
+/home/seth/Documents
+```
+
+而这会在 Bash 或任何其他普通 shell 中导致错误。但是 Zsh 却根本不是普通的 shell,而这仅仅才是开始。
+
+#### 在 Zsh 中搜索
+
+当你想使用普通 shell 程序查找文件时,可以使用 `find` 或 `locate` 命令。最起码,你可以使用 `ls -R` 来递归地列出一组目录。Zsh 内置有允许它在当前目录或任何其他子目录中查找文件的功能。
+
+例如,假设你有两个名为 `foo.txt` 的文件。一个位于你的当前目录中,另一个位于名为 `foo` 的子目录中。在 Bash Shell 中,你可以使用以下命令列出当前目录中的文件:
+
+```
+$ ls
+foo.txt
+```
+
+你可以通过明确指明子目录的路径来列出另一个目录:
+
+```
+$ ls foo
+foo.txt
+```
+
+要同时列出这两者,你必须使用 `-R` 开关,并结合使用 `grep`:
+
+```
+$ ls -R | grep foo.txt
+foo.txt
+foo.txt
+```
+
+但是在 Zsh 中,你可以使用 `**` 速记符号:
+
+```
+% ls **/foo.txt
+foo.txt
+foo.txt
+```
+
+你可以在任何命令中使用此语法,而不仅限于 `ls`。想象一下在这样的场景中提高的效率:将特定文件类型从一组目录中移动到单个位置、将文本片段串联到一个文件中,或对日志进行抽取。
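+
+如果你暂时还在使用 Bash,可以通过 `globstar` 选项得到与 `**` 类似的递归匹配效果。下面是一个可以直接运行的小示例(目录和文件名均为演示用的假设):
+
+```shell
+#!/usr/bin/env bash
+# 启用 Bash 的递归通配符,近似模拟 Zsh 的 ** 行为(需要 Bash 4.0+)
+shopt -s globstar
+
+# 构造演示用的目录结构
+mkdir -p demo/sub
+touch demo/foo.txt demo/sub/foo.txt
+
+# demo/**/foo.txt 会同时匹配当前层级和子目录中的 foo.txt
+ls demo/**/foo.txt
+```
+
+与 Zsh 不同,Bash 默认并不开启 `globstar`,所以每个会话(或在 `~/.bashrc` 中)都需要显式设置它。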
+
+### 使用 Zsh 的制表符补全
+
+制表符补全是 Bash 和其他一些 Shell 中的高级用户功能,它变得司空见惯,席卷了 Unix 世界。Unix 用户不再需要在输入冗长而乏味的路径时使用通配符(例如输入 `/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v`,比输入 `/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv` 要容易得多)。相反,他们只要输入足够的唯一字符串即可按 `Tab` 键。例如,如果你知道在系统的根目录下只有一个以 `h` 开头的目录,则可以键入 `/h`,然后单击 `Tab`。快速、简单、高效。它还会确认路径存在;如果 `Tab` 无法完成任何操作,则说明你在错误的位置或输入了错误的路径部分。
+
+但是,如果你有许多目录以五个或更多相同的字母开头,`Tab` 就会坚决拒绝进行补全。尽管在大多数现代终端中,它(至少)会显示那些让它无法猜出你的意图的候选文件,但通常需要按两次 `Tab` 键才能显示出来。因此,制表符补全常常变成在字母键和 `Tab` 键之间来回敲击,以至于你好像在接受钢琴独奏会的训练。
+
+Zsh 通过循环可能的补全来解决这个小问题。如果键入 `ls ~/D` 并按 `Tab`,则 Zsh 首先以 `Documents` 补全命令;如果再次按 `Tab`,它将提供 `Downloads`,依此类推,直到找到所需的选项。
+
+### Zsh 中的通配符
+
+在 Zsh 中,通配符的行为不同于 Bash 中用户所习惯的行为。首先,可以对其进行修改。例如,如果要列出当前目录中的所有文件夹,则可以使用修改后的通配符:
+
+```
+% ls
+dir0 dir1 dir2 file0 file1
+% ls *(/)
+dir0 dir1 dir2
+```
+
+在此示例中,`(/)` 限定了通配符的结果,因此 Zsh 仅显示目录。要仅列出文件,请使用 `(.)`。要列出符号链接,请使用 `(@)`。要列出可执行文件,请使用 `(*)`。
+
+```
+% ls ~/bin/*(*)
+fop exify tt
+```
+
+Zsh 不仅仅知道文件类型。它也可以使用相同的通配符修饰符约定根据修改时间列出。例如,如果要查找在过去八个小时内修改的文件,请使用 `mh` 修饰符(即 “modified hours” 的缩写)和小时的负整数:
+
+```
+% ls ~/Documents/*(mh-8)
+cal.org game.org home.org
+```
+
+要查找超过(例如)两天前修改过的文件,修饰符更改为 `md`(即 “modified day” 的缩写),并带上天数的正整数:
+
+```
+% ls ~/Documents/*(md+2)
+holiday.org
+```
+
+通配符修饰符和限定符还可以做很多事情,因此,请阅读 [Zsh 手册页][4],以获取全部详细信息。
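+
+作为对照,在不支持这些限定符的 shell(如 Bash)中,可以用 `find` 得到与 `*(mh-8)` 近似的效果(这只是一个近似替代,并非 Zsh 语法):
+
+```shell
+# 查找当前目录(不含子目录)下最近 8 小时(480 分钟)内修改过的普通文件
+find . -maxdepth 1 -type f -mmin -480
+```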
+
+#### 通配符的副作用
+
+要像在 Bash 中使用通配符一样使用它,有时必须在 Zsh 中对通配符进行转义。例如,如果要在 Bash 中将某些文件复制到服务器上,则可以使用如下通配符:
+
+```
+$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
+```
+
+这在 Bash 中有效,但是在 Zsh 中会返回错误,因为 Zsh 会在执行 `scp` 命令之前尝试扩展远程路径中的通配符。为避免这种情况,必须转义远程路径中的通配符:
+
+```
+% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
+```
+
+当你切换到新的 shell 时,这些小的差异可能会让你沮丧。使用 Zsh 时会遇到的这类问题不多(体验过 Zsh 后切换回 Bash 的人可能遇到更多),但是当它们发生时,请保持冷静并弄清原因。严格遵守 POSIX 的做法很少出错;如果出错了,查清问题、解决它,然后继续。对于许多在工作中用一个 shell、在家里用另一个 shell 的用户来说,[hyperpolyglot.org][5] 已被证明是无价的。
+
+在我的下一篇 Zsh 文章中,我将向你展示如何安装主题和插件以定制你的 Z-Shell 甚至 Z-ier。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/getting-started-zsh
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
+[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
+[3]: https://www.cygwin.com/
+[4]: https://linux.die.net/man/1/zsh
+[5]: http://hyperpolyglot.org/unix-shells
diff --git a/published/20190917 How to Check Linux Mint Version Number - Codename.md b/published/20190917 How to Check Linux Mint Version Number - Codename.md
new file mode 100644
index 0000000000..5f102dfa89
--- /dev/null
+++ b/published/20190917 How to Check Linux Mint Version Number - Codename.md
@@ -0,0 +1,139 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11360-1.html)
+[#]: subject: (How to Check Linux Mint Version Number & Codename)
+[#]: via: (https://itsfoss.com/check-linux-mint-version/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+如何查看 Linux Mint 版本号和代号
+======
+
+Linux Mint 每两年发布一次主版本(如 Mint 19),每六个月左右发布一次次版本(如 Mint 19.1、19.2 等)。你可以自己升级 Linux Mint 的主版本,而次版本会自动更新。
+
+在所有这些版本中,你可能想知道你正在使用的是哪个版本。了解 Linux Mint 版本号可以帮助你确定某个特定软件是否适用于你的系统,或者检查你的系统是否已达到使用寿命。
+
+你可能需要 Linux Mint 版本号有多种原因,你也有多种方法可以获取此信息。让我向你展示用图形和命令行的方式获取 Mint 版本信息。
+
+* [使用命令行查看 Linux Mint 版本信息][1]
+* [使用 GUI(图形用户界面)查看 Linux Mint 版本信息][2]
+
+### 使用终端查看 Linux Mint 版本号的方法
+
+![][3]
+
+我将介绍几种使用非常简单的命令查看 Linux Mint 版本号和代号的方法。你可以从“菜单”中打开终端,或按 `CTRL+ALT+T`(默认热键)打开。
+
+本文中的最后两个命令还会输出你当前的 Linux Mint 版本所基于的 Ubuntu 版本。
+
+#### 1、/etc/issue
+
+从最简单的 CLI 方法开始,你可以打印出 `/etc/issue` 的内容来检查你的版本号和代号:
+
+```
+$ cat /etc/issue
+Linux Mint 19.2 Tina \n \l
+```
+
+#### 2、hostnamectl
+
+![hostnamectl][4]
+
+`hostnamectl` 这一条命令打印的信息几乎与“系统信息”中的相同。你可以看到你的操作系统(带有版本号)以及你的内核版本。
+
+#### 3、lsb_release
+
+`lsb_release` 是一个非常简单的 Linux 实用程序,用于查看有关你的发行版本的基本信息:
+
+```
+$ lsb_release -a
+No LSB modules are available.
+Distributor ID: LinuxMint
+Description: Linux Mint 19.2 Tina
+Release: 19.2
+Codename: tina
+```
+
+**注:** 我使用 `-a` 选项打印所有参数,但你也可以使用 `-s` 获得简短格式、`-d` 获得描述等(用 `man lsb_release` 查看所有选项)。
+
+#### 4、/etc/linuxmint/info
+
+![/etc/linuxmint/info][5]
+
+这不是命令,而是 Linux Mint 系统上的文件。只需使用 `cat` 命令将其内容打印到终端,然后查看你的版本号和代号。
+
+#### 5、使用 /etc/os-release 命令也可以获取到 Ubuntu 代号
+
+![/etc/os-release][7]
+
+Linux Mint 基于 Ubuntu。每个 Linux Mint 版本都基于不同的 Ubuntu 版本。了解你的 Linux Mint 版本所基于的 Ubuntu 版本,有助于你在必须使用 Ubuntu 版本号的情况下进行操作(比如你需要为了[在 Linux Mint 中安装最新的 VirtualBox][8] 而添加仓库时)。
+
+`os-release` 则是另一个类似于 `info` 的文件,向你展示 Linux Mint 所基于的 Ubuntu 版本代号。
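+
+如果要在脚本中使用这些信息,可以把这类 `KEY=VALUE` 格式的文件直接 `source` 进 shell。下面的示例用一个假设的示例文件代替真实的 `/etc/os-release`,以便在任何机器上都能运行(`UBUNTU_CODENAME` 字段在 Linux Mint 上存在,其他发行版上未必有,所以做了兜底处理):
+
+```shell
+#!/bin/sh
+# 构造一个假设的 os-release 示例文件(真实系统上直接 . /etc/os-release 即可)
+cat > sample-os-release <<'EOF'
+NAME="Linux Mint"
+VERSION="19.2 (Tina)"
+UBUNTU_CODENAME=bionic
+EOF
+
+# os-release 本身就是 KEY=VALUE 格式,可以直接 source
+. ./sample-os-release
+echo "发行版:$NAME $VERSION"
+echo "基于的 Ubuntu 代号:${UBUNTU_CODENAME:-未知}"
+```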
+
+#### 6、使用 /etc/upstream-release/lsb-release 只获取 Ubuntu 的基本信息
+
+如果你只想要查看有关 Ubuntu 的基本信息,请输出 `/etc/upstream-release/lsb-release`:
+
+```
+$ cat /etc/upstream-release/lsb-release
+DISTRIB_ID=Ubuntu
+DISTRIB_RELEASE=18.04
+DISTRIB_CODENAME=bionic
+DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
+```
+
+特别提示:[你可以使用 uname 命令查看 Linux 内核版本][9]。
+
+```
+$ uname -r
+4.15.0-54-generic
+```
+
+**注:** `-r` 代表 release,你可以使用 `man uname` 查看其他信息。
+
+### 使用 GUI 查看 Linux Mint 版本信息
+
+如果你不习惯使用终端和命令行,可以使用图形方法。如你所料,这种方式非常直观。
+
+打开“菜单” (左下角),然后转到“偏好设置 > 系统信息”:
+
+![Linux Mint 菜单][10]
+
+或者,在菜单中,你可以搜索“System Info”:
+
+![Menu Search System Info][11]
+
+在这里,你可以看到你的操作系统(包括版本号),内核和桌面环境的版本号:
+
+![System Info][12]
+
+### 总结
+
+我已经介绍了一些不同的方法,用这些方法你可以快速查看你正在使用的 Linux Mint 的版本和代号(以及所基于的 Ubuntu 版本和内核)。我希望这个初学者教程对你有所帮助。请在评论中告诉我们你最喜欢哪个方法!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/check-linux-mint-version/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: tmp.pL5Hg3N6Qt#terminal
+[2]: tmp.pL5Hg3N6Qt#GUI
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/check-linux-mint-version.png?ssl=1
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hostnamectl.jpg?ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/linuxmint_info.jpg?ssl=1
+[6]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/os_release.jpg?ssl=1
+[8]: https://itsfoss.com/install-virtualbox-ubuntu/
+[9]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux_mint_menu.jpg?ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/menu_search_system_info.jpg?ssl=1
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/system_info.png?ssl=1
diff --git a/published/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md b/published/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md
new file mode 100644
index 0000000000..e8a658cebc
--- /dev/null
+++ b/published/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md
@@ -0,0 +1,146 @@
+[#]: collector: (lujun9972)
+[#]: translator: (name1e5s)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11358-1.html)
+[#]: subject: (Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President)
+[#]: via: (https://itsfoss.com/richard-stallman-controversy/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Richard Stallman 被迫辞去 FSF 主席的职务
+======
+
+> Richard Stallman,自由软件基金会的创建者以及主席,已经辞去主席及董事会职务。此前,因为 Stallman 对于爱泼斯坦事件中的受害者的观点,一小撮活动家以及媒体人发起了清除 Stallman 的运动。这份声明就是在这些活动后发生的。阅读全文以获得更多信息。
+
+![][1]
+
+### Stallman 事件的背景概述
+
+如果你不知道这次事件发生的前因后果,请看本段的详细信息。
+
+[Richard Stallman][2],66岁,是就职于 [MIT][3] 的计算机科学家。他最著名的成就就是在 1983 年发起了[自由软件运动][4]。他也开发了 GNU 项目旗下的部分软件,比如 GCC 和 Emacs。受自由软件运动影响选择使用 GPL 开源协议的项目不计其数。Linux 是其中最出名的项目之一。
+
+[Jeffrey Epstein][5](爱泼斯坦),美国亿万富翁,金融大佬。其涉嫌为社会上流精英提供性交易服务(其中有未成年少女)而被指控成为性犯罪者。在受审期间,爱泼斯坦在监狱中自杀身亡。
+
+[Marvin Lee Minsky][6],MIT 知名计算机科学家。他在 MIT 建立了人工智能实验室。2016 年,88 岁的 Minsky 逝世。在 Minsky 逝世后,爱泼斯坦事件的一位受害者声称,自己在未成年时曾被“引导”到爱泼斯坦的私人岛屿上,与 Minsky 发生性关系。
+
+但是这些与 Richard Stallman 有什么关系?这要从 Stallman 发给 MIT 计算机科学与人工智能实验室(CSAIL)的学生以及附属机构就爱泼斯坦的捐款提出抗议的邮件列表的邮件说起。邮件全文翻译如下:
+
+> 周五事件的公告对 Marvin Minsky 来说是不公正的。
+>
+> “已故的人工智能 ‘先锋’ Marvin Minsky (被控告侵害了爱泼斯坦事件的受害者之一\[2])”
+>
+> 不公正之处在于 “侵害” 这个用语。“性侵犯” 这个用语非常的糢糊,夸大了指控的严重性:宣称某人做了 X 但误导别人,让别人觉得这个人做了 Y,Y 远远比 X 严重。
+>
+> 上面引用的指控显然就是夸大。报导声称 Minksy 与爱泼斯坦的女眷之一发生了性关系(详见 )。我们假设这是真的(我找不到理由不相信)。
+>
+> “侵害” 这个词,意味着他使用了某种暴力。但那篇报道并没有提到这个,只说了他们发生了性关系。
+>
+> 我们可以想像很多种情况,但最合理的情况是,她在 Marvin 面前表现的像是完全自愿的。假设她是被爱泼斯坦强迫的,那爱泼斯坦有充足的理由让她对大多数人守口如瓶。
+>
+> 从各种的指控夸大事例中,我总结出,在指控时使用“性侵犯”是绝对错误的。
+>
+> 无论你想要批判什么行为,你都应该使用特定的词汇来描述,以此避免批判的本质的道德模糊性。
+
+### “清除 Stallman” 的呼吁
+
+爱泼斯坦事件在美国是颇具争议的话题。Stallman 对该敏感事件做出如此鲁莽的“知识陈述”不会有好结果,事实也是如此。
+
+一位机器人学工程师从她的朋友那里收到了转发的邮件,并发起了一个[清除 Stallman 的活动][7]。她要的不是澄清或者道歉,她只想要清除 Stallman,就算这意味着“将 MIT 夷为平地”也在所不惜。
+
+> 是,至少 Stallman 没有被控强奸任何人。但这就是我们的最高标准吗?这所声望极高的学院坚持的标准就是这样的吗?如果这是麻省理工学院想要捍卫的、想要代表的标准的话,还不如把这破学校夷为平地…
+>
+> 如果有必要的话,就把所有人都清除出去,之后从废墟中建立出更好的秩序。
+>
+> —— Salem,发起“清除 Stallman“运动的机器人学专业学生
+
+Salem 的声讨最初没有被主流媒体重视。但它还是被反对软件行业内的精英崇拜以及性别偏见的积极分子发现了。
+
+> [#epstein][8] [#MIT][9] 嗨 记者没有回复我我很生气就自己写了这么个故事。作为 MIT 的校友我还真是高兴啊🙃
+>
+> — SZJG (@selamjie) [September 12, 2019][10]
+
+.
+
+> 是不是对于性侵儿童的 “杰出混蛋” 我们也可以辩护说 “万一这是你情我愿的”
+>
+> — Tracy Chou 👩🏻💻 (@triketora) [September 13, 2019][11]
+
+.
+
+> 多年来我就一直发推说 Richard "RMS" Stallman 这人有多恶心 —— 恋童癖、厌女症、还残障歧视
+>
+> 不可避免的是,每次我这样做,都会有老哥检查我的数据来源,然后说 “这都是几年前的事了!他现在变了!”
+>
+> 变个屁。
+>
+> — Sarah Mei (@sarahmei) [September 12, 2019][12]
+
+下面是 Sage Sharp 开头的一篇关于 Stallman 的行为如何对科技人员产生负面影响的帖子:
+
+> 👇大家说下 Richard Stallman 对科技从业者的影响吧,尤其是女性。 [例如: 强奸、乱伦、残障歧视、性交易]
+>
+> [@fsf][13] 有必要永久禁止 Richard Stallman 担任自由软件基金会董事会主席。
+>
+> — Sage Sharp (@\_sagesharp\_) [September 16, 2019][14]
+
+Stallman 一直以来也不是一个圣人。他粗暴,不合时宜、多年来一直在开带有性别歧视的笑话。你可以在[这里][15]和[这里][16]读到。
+
+很快这个消息就被 [The Vice][17]、[每日野兽][18],[未来主义][19]等大媒体采访。他们把 Stallman 描绘成爱泼斯坦的捍卫者。在强烈的抗议声中,[GNOME 执行董事威胁要结束 GNOME 和 FSF 之间的关系][20]。
+
+最后,Stallman 先是从 MIT 辞职,现在又从 [自由软件基金会][21] 辞职。
+
+![][22]
+
+### 危险的特权?
+
+我们见识到了,把一个人从他创建并为之工作了三十多年的组织中驱逐出去仅仅需要五天。这甚至还是在 Stallman 没有参与性交易丑闻的情况下。
+
+其中一些 “活动家” 过去也曾[针对过 Linux 的作者 Linus Torvalds][23]。Linux 基金会背后的管理层预见到了科技行业激进主义的增长趋势,因此他们制定了[适用于 Linux 内核开发的行为准则][24]并[强制 Torvalds 接受培训以改善他的行为][25]。如果他们没有采取纠正措施,可能 Torvalds 也已经被批倒批臭了。
+
+忽视技术支持者的鲁莽行为和性别歧视是不可接受的,但是对于那些遇到不同意某种流行观点的人就进行声讨,施以私刑也是不道德的做法。我不支持 Stallman 和他过去的言论,但我也不能接受他以这种方式(被迫?)辞职。
+
+Techrights 对此有一些有趣的评论,你可以在[这里][26]和[这里][27]看到。
+
+*你对此事有何看法?请文明分享你的观点和意见。过激评论将不会公布。*
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/richard-stallman-controversy/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[name1e5s](https://github.com/name1e5s)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/stallman-conroversy.png?w=800&ssl=1
+[2]: https://en.wikipedia.org/wiki/Richard_Stallman
+[3]: https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
+[4]: https://en.wikipedia.org/wiki/Free_software_movement
+[5]: https://en.wikipedia.org/wiki/Jeffrey_Epstein
+[6]: https://en.wikipedia.org/wiki/Marvin_Minsky
+[7]: https://medium.com/@selamie/remove-richard-stallman-fec6ec210794
+[8]: https://twitter.com/hashtag/epstein?src=hash&ref_src=twsrc%5Etfw
+[9]: https://twitter.com/hashtag/MIT?src=hash&ref_src=twsrc%5Etfw
+[10]: https://twitter.com/selamjie/status/1172244207978897408?ref_src=twsrc%5Etfw
+[11]: https://twitter.com/triketora/status/1172443389536555009?ref_src=twsrc%5Etfw
+[12]: https://twitter.com/sarahmei/status/1172283772428906496?ref_src=twsrc%5Etfw
+[13]: https://twitter.com/fsf?ref_src=twsrc%5Etfw
+[14]: https://twitter.com/_sagesharp_/status/1173637138413318144?ref_src=twsrc%5Etfw
+[15]: https://geekfeminism.wikia.org/wiki/Richard_Stallman
+[16]: https://medium.com/@selamie/remove-richard-stallman-appendix-a-a7e41e784f88
+[17]: https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing
+[18]: https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing
+[19]: https://futurism.com/richard-stallman-epstein-scandal
+[20]: https://blog.halon.org.uk/2019/09/gnome-foundation-relationship-gnu-fsf/
+[21]: https://www.fsf.org/news/richard-m-stallman-resigns
+[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/richard-stallman.png?resize=800%2C94&ssl=1
+[23]: https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside
+[24]: https://itsfoss.com/linux-code-of-conduct/
+[25]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
+[26]: http://techrights.org/2019/09/15/media-attention-has-been-shifted/
+[27]: http://techrights.org/2019/09/16/stallman-removed/
diff --git a/published/20190918 How to remove carriage returns from text files on Linux.md b/published/20190918 How to remove carriage returns from text files on Linux.md
new file mode 100644
index 0000000000..2f746068d7
--- /dev/null
+++ b/published/20190918 How to remove carriage returns from text files on Linux.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11389-1.html)
+[#]: subject: (How to remove carriage returns from text files on Linux)
+[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何在 Linux 中删除文本中的回车字符
+======
+
+> 当回车字符(`Ctrl+M`)让你紧张时,别担心。有几种简单的方法消除它们。
+
+
+
+“回车”字符的历史可以追溯到很久以前:早在打字机上,就有一个机械装置或杠杆把承载纸张的滚筒机架移回一侧,以便可以重新从行首开始输入。这个字符在 Windows 的文本文件中保留了下来,但在 Linux 系统上从未使用过。当你尝试在 Linux 上处理在 Windows 上创建的文件时,这种不兼容性有时会导致问题,但这个问题非常容易解决。
+
+如果你使用 `od`(八进制转储)命令查看文件,那么回车(也用 `Ctrl+M` 代表)字符将显示为八进制的 15。`CRLF` 这个缩写通常用于表示 Windows 文本文件中结束一行的回车符和换行符序列。仔细观察八进制转储的人会看到 `\r\n`。相比之下,Linux 文本仅以换行符结尾。
+
+这有一个 `od` 输出的示例,高亮显示了行中的 `CRLF` 字符,以及它的八进制。
+
+```
+$ od -bc testfile.txt
+0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
+ T h i s i s a t e s t f
+0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
+ i l e f r o m W i n d o w s
+0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <==
+ . \r \n I t ' s d i f f e r e n <==
+0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
+ t t h a n a U n i x t e
+0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <==
+ x t f i l e \r \n w o u l d b <==
+```
+
+虽然这些字符不是大问题,但是当你想要以某种方式解析文本,并且不希望就它们是否存在进行编码时,这有时候会产生干扰。
+
+### 3 种从文本中删除回车符的方法
+
+幸运的是,有几种方法可以轻松删除回车符。这有三个选择:
+
+#### dos2unix
+
+你可能需要先安装它,但 `dos2unix` 可能是将 Windows 文本转换为 Unix/Linux 文本的最简单方法。一个命令带上一个参数就行了,不需要第二个文件名,该文件会被直接更改。
+
+```
+$ dos2unix testfile.txt
+dos2unix: converting file testfile.txt to Unix format...
+```
+
+你应该会发现文件长度减少,具体取决于它包含的行数。包含 100 行的文件可能会缩小 99 个字符,因为只有最后一行不会以 `CRLF` 字符结尾。
+
+之前:
+
+```
+-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
+```
+
+之后:
+
+```
+-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
+```
+
+如果你需要转换大量文件,不用每次修复一个。相反,将它们全部放在一个目录中并运行如下命令:
+
+```
+$ find . -type f -exec dos2unix {} \;
+```
+
+在此命令中,我们使用 `find` 查找常规文件,然后运行 `dos2unix` 命令一次转换一个。命令中的 `{}` 将被替换为文件名。运行时,你应该处于包含文件的目录中。此命令可能会损坏其他类型的文件,例如除了文本文件外在上下文中包含八进制 15 的文件(如,镜像文件中的字节)。
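+
+如果机器上没有安装 `dos2unix`,也可以在 `find` 中换用 GNU `sed` 批量删除回车符,并把范围限定到 `.txt` 文件以免误伤二进制文件(下面的文件名只是演示用的假设):
+
+```shell
+# 构造一个带 CRLF 行尾的示例文件
+printf 'hello\r\nworld\r\n' > sample.txt
+
+# 只处理 .txt 文件,用 GNU sed 原地删除每行末尾的回车符(\r)
+find . -type f -name '*.txt' -exec sed -i 's/\r$//' {} \;
+
+# 验证:输出中不应再出现八进制 015(\r)
+od -c sample.txt
+```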
+
+#### sed
+
+你还可以使用流编辑器 `sed` 来删除回车符。但是,你必须提供第二个文件名。以下是例子:
+
+```
+$ sed -e "s/^M//" before.txt > after.txt
+```
+
+一件需要注意的重要的事情是,请不要输入你看到的字符。你必须按下 `Ctrl+V` 后跟 `Ctrl+M` 来输入 `^M`。`s` 是替换命令。斜杠将我们要查找的文本(`Ctrl + M`)和要替换的文本(这里为空)分开。
+
+#### vi
+
+你甚至可以使用 `vi` 删除回车符(`Ctrl+M`),但这里假设你没有打开数百个文件,或许还同时在做一些其他修改。你可以键入 `:` 进入命令行,然后输入下面的字符串。与 `sed` 一样,其中的 `^M` 需要先按 `Ctrl+V`、再按 `Ctrl+M` 来输入。`%s` 是替换操作,斜杠再次将我们要删除的字符和要替换成的文本(这里为空)分开。`g`(全局)意味着在所有行上执行。
+
+```
+:%s/^M//g
+```
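+
+除了上述三种方法,POSIX 的 `tr` 命令也能完成同样的事情,并且在几乎所有类 Unix 系统上都可用(示例文件名为假设):
+
+```shell
+# 构造一个带 CRLF 行尾的示例文件
+printf 'line one\r\nline two\r\n' > win.txt
+
+# 删除所有回车符;tr 只能读标准输入,因此要输出到新文件
+tr -d '\r' < win.txt > unix.txt
+
+# 对比前后大小:每行少了一个字节
+wc -c win.txt unix.txt
+```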
+
+### 总结
+
+`dos2unix` 命令可能是最容易记住的,也是从文本中删除回车的最可靠的方法。其他选择使用起来有点困难,但它们提供相同的基本功能。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md b/published/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md
new file mode 100644
index 0000000000..4063ab4778
--- /dev/null
+++ b/published/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md
@@ -0,0 +1,55 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11375-1.html)
+[#]: subject: (Microsoft brings IBM iron to Azure for on-premises migrations)
+[#]: via: (https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Skytap 和微软将 IBM 机器搬到了 Azure
+======
+
+> 微软再次证明了其摒弃了“非我发明”这一态度来支持客户。
+
+
+
+当微软将 Azure 作为其 Windows 服务器操作系统的云计算版本发布时,它并没有使其成为仅支持 Windows 系统的版本,它还支持 Linux 系统,并且在短短几年内,[其 Linux 实例的数量现在已经超过了 Windows 实例的数量][1]。
+
+很高兴看到微软终于摆脱了这种长期以来非常有害的“非我发明”态度,该公司的最新举动确实令人惊讶。
+
+微软与一家名为 Skytap 的公司合作,以在 Azure 云服务上提供 IBM Power9 实例,可以在 Azure 云内运行基于 Power 的系统,该系统将与其已有的 Xeon 和 Epyc 实例一同作为 Azure 的虚拟机(VM)。
+
+Skytap 是一家有趣的公司。它由华盛顿大学的三位教授创立,专门研究本地遗留硬件(如 IBM System i 或 Sparc)的云迁移。该公司在西雅图拥有一个数据中心,以 IBM 的硬件运行 IBM 的 PowerVM 管理程序,并在美国和英格兰的 IBM 数据中心提供主机托管服务。
+
+该公司的座右铭是快速迁移,然后按照自己的节奏进行现代化。因此,它专注于帮助一些企业将遗留系统迁移到云,然后实现应用程序的现代化,这也是它与微软合作的目的。Azure 将通过为企业提供平台来提高传统应用程序的价值,而无需花费巨额费用重写一个新平台。
+
+Skytap 提供了预览,可以看到使用 Skytap 上的 DB2 提升和扩展原有的 IBM i 应用程序以及通过 Azure 的物联网中心进行扩展时可能发生的情况。该应用程序无缝衔接新旧架构,并证明了不需要完全重写可靠的 IBM i 应用程序即可从现代云功能中受益。
+
+### 迁移到 Azure
+
+根据协议,微软将把 IBM 的 Power S922 服务器部署在一个未声明的 Azure 区域。这些机器可以运行 PowerVM 管理程序,这些管理程序支持老式 IBM 操作系统以及 Linux 系统。
+
+Skytap 首席执行官布拉德·希克在一份声明中说道:“通过先替换旧技术来迁移上云既耗时又冒险。……Skytap 的愿景一直是通过一些小小的改变和较低的风险实现企业系统到云平台的迁移。与微软合作,我们将为各种遗留应用程序迁移到 Azure 提供本地支持,包括那些在 IBM i、AIX 和 Power Linux 上运行的程序。这将使企业能够通过使用 Azure 服务进行现代化来延长传统系统的寿命并增加其价值。”
+
+随着基于 Power 应用程序的现代化,Skytap 随后将引入 DevOps CI/CD 工具链来加快软件的交付。迁移到 Azure 的 Skytap 上后,客户将能够集成 Azure DevOps,以及 Power 的 CI/CD 工具链,例如 Eradani 和 UrbanCode。
+
+这些听起来像是迈出了第一步,但这意味着以后将会实现更多,尤其是在应用程序迁移方面。如果它仅在一个 Azure 区域中,听起来好像它们正在对该项目进行测试和验证,并可能在今年晚些时候或明年进行扩展。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://www.openwall.com/lists/oss-security/2019/06/27/7
+[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md b/published/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md
new file mode 100644
index 0000000000..8721fea8c8
--- /dev/null
+++ b/published/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11366-1.html)
+[#]: subject: (Oracle Unleashes World’s Fastest Database Machine ‘Exadata X8M’)
+[#]: via: (https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/)
+[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
+
+Oracle 发布全球最快的数据库机器 Exadata X8M
+======
+
+> Exadata X8M 是第一台具有集成持久内存和 RoCE 的数据库机器。Oracle 还宣布推出 Oracle 零数据丢失恢复设备 X8M(ZDLRA)。
+
+
+
+Oracle 发布了新的 Exadata 数据库机器 X8M,旨在为数据库基础架构市场树立新的标杆。
+
+Exadata X8M 结合了英特尔 Optane DC 持久存储器和通过融合以太网(RoCE)的 100 千兆的远程直接内存访问(RDMA)来消除存储瓶颈,并显著提高性能,其适用于最苛刻的工作负载,如在线事务处理(OLTP)、分析、物联网、欺诈检测和高频交易。
+
+“借助 Exadata X8M,我们可以提供内存级的性能,同时为 OLTP 和分析提供共享存储的所有优势,”Oracle 任务关键型数据库技术执行副总裁 Juan Loaiza 说。
+
+“使用对共享持久存储器的直接数据库访问将响应时间减少一个数量级,可加速每个 OLTP 应用程序,它是需要实时访问大量数据的应用程序的游戏规则改变者,例如欺诈检测和个性化购物,”官方补充。
+
+### 它有什么独特之处?
+
+Oracle Exadata X8M 使用 RDMA 让数据库直接访问智能存储服务器中的持久内存,从而绕过整个操作系统、IO 和网络软件堆栈。这导致更低的延迟和更高的吞吐量。使用 RDMA 绕过软件堆栈还可以释放存储服务器上的 CPU 资源,以执行更多智能扫描查询来支持分析工作负载。
+
+### 更少的存储瓶颈
+
+“高性能 OLTP 应用需要高的每秒输入/输出操作(IOPS)和低延迟。直接数据库访问共享持久性内存可将 SQL 读取的峰值性能提升至 1600 万 IOPS,比行业领先的 Exadata X8 高出 2.5 倍。”Oracle 在一份声明中表示。
+
+此外,Exadata X8M 通过使远程 IO 延迟低于 19 微秒,大大减少了关键数据库 IO 的延迟 —— 这比 Exadata X8 快 10 倍以上。即使对于每秒需要数百万 IO 的工作负载,也可实现这些超低延迟。
+
+### 比 AWS 和 Azure 更高效
+
+该公司声称,与用于 Oracle 数据库的最快的 Amazon RDS 存储相比,Exadata X8M 的延迟降低了 50 倍,IOPS 提高了 200 倍,容量提高了 15 倍。
+
+与 Azure SQL 数据库服务存储相比,Exadata X8M 的延迟降低了 100 倍,IOPS 提高了 150 倍,容量提高了 300 倍。
+
+据 Oracle 称,单机架 Exadata X8M 可提供高达 2 倍的 OLTP 读取 IOPS、3 倍的吞吐量,而延迟比具有持久内存的共享存储系统(如单机架的 Dell EMC PowerMax 8000)低 5 倍。
+
+“通过同时支持更快的 OLTP 查询和更高的分析工作负载吞吐量,Exadata X8M 是融合混合工作负载环境以降低 IT 成本和复杂性的理想平台,”该公司说。
+
+### Oracle 零数据丢失恢复设备 X8
+
+Oracle 当天还宣布推出 Oracle 零数据丢失恢复设备 X8M(ZDLRA),它使用新的 100Gb RoCE,用于计算和存储服务器之间的高吞吐量内部数据传输。
+
+Exadata 和 ZDLRA 客户现在可以在 RoCE 或基于 InfiniBand 的工程系统之间进行选择,以在其架构部署中实现最佳灵活性。
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/oracle-unleashes-worlds-fastest-database-machine-exadata-x8m/
+
+作者:[Longjam Dineshwori][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/dineshwori-longjam/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/02/Oracle-Cloud.jpg?resize=350%2C212&ssl=1
diff --git a/published/20190921 How to Remove (Delete) Symbolic Links in Linux.md b/published/20190921 How to Remove (Delete) Symbolic Links in Linux.md
new file mode 100644
index 0000000000..bbe57011eb
--- /dev/null
+++ b/published/20190921 How to Remove (Delete) Symbolic Links in Linux.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: (arrowfeng)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11382-1.html)
+[#]: subject: (How to Remove (Delete) Symbolic Links in Linux)
+[#]: via: (https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+在 Linux 中怎样移除(删除)符号链接
+======
+
+你可能有时需要在 Linux 上创建或者删除符号链接。如果是,你知道该怎样做吗?之前做过吗?遇到过麻烦吗?如果遇到过,那也没关系;如果还没有,别担心,我们将在这里帮助你。
+
+使用 `rm` 和 `unlink` 命令就能完成移除(删除)符号链接的操作。
+
+### 什么是符号链接?
+
+符号链接(symlink)又称软链接,它是一种特殊的文件类型,在 Linux 中该文件指向另一个文件或者目录。它类似于 Windows 中的快捷方式。它能在相同或者不同的文件系统或分区中指向一个文件或着目录。
+
+符号链接通常用来链接库文件。它也可用于链接日志文件和挂载的 NFS(网络文件系统)上的文件夹。
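+
+在对重要文件动手之前,可以先用临时文件做个小实验,确认删除符号链接并不会影响其指向的目标(文件名均为演示用的假设):
+
+```shell
+# 创建目标文件和指向它的符号链接
+touch target.txt
+ln -s target.txt mylink
+
+ls -l mylink    # 显示 mylink -> target.txt
+
+# 删除的只是链接本身,目标文件 target.txt 不受影响
+rm mylink
+ls target.txt
+```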
+
+### 什么是 rm 命令?
+
+[rm 命令][1] 被用来移除文件和目录。它非常危险,你每次使用 `rm` 命令的时候要非常小心。
+
+### 什么是 unlink 命令?
+
+`unlink` 命令被用来移除特殊的文件。它是作为 GNU Coreutils 的一部分安装的。
+
+### 1) 使用 rm 命令怎样移除符号链接文件
+
+`rm` 命令是在 Linux 中使用最频繁的命令,它允许我们像下列描述那样去移除符号链接。
+
+```
+# rm symlinkfile
+```
+
+始终将 `rm` 命令与 `-i` 一起使用以了解正在执行的操作。
+
+```
+# rm -i symlinkfile1
+rm: remove symbolic link ‘symlinkfile1’? y
+```
+
+它允许我们一次移除多个符号链接:
+
+```
+# rm -i symlinkfile2 symlinkfile3
+
+rm: remove symbolic link ‘symlinkfile2’? y
+rm: remove symbolic link ‘symlinkfile3’? y
+```
+
+#### 1a) 使用 rm 命令怎样移除符号链接目录
+
+这像移除符号链接文件那样。使用下列命令移除符号链接目录。
+
+```
+# rm -i symlinkdir
+
+rm: remove symbolic link ‘symlinkdir’? y
+```
+
+使用下列命令移除多个符号链接目录。
+
+```
+# rm -i symlinkdir1 symlinkdir2
+
+rm: remove symbolic link ‘symlinkdir1’? y
+rm: remove symbolic link ‘symlinkdir2’? y
+```
+
+如果你在结尾加上 `/`,这个符号链接目录将不会被删除,你会得到一个错误。
+
+```
+# rm -i symlinkdir/
+
+rm: cannot remove ‘symlinkdir/’: Is a directory
+```
+
+你可以增加 `-r` 去处理上述问题。**但如果你增加这个参数,它将会删除目标目录下的内容,并且它不会删除这个符号链接文件。**(LCTT 译注:这可能不是你的原意。)
+
+```
+# rm -ri symlinkdir/
+
+rm: descend into directory ‘symlinkdir/’? y
+rm: remove regular file ‘symlinkdir/file4.txt’? y
+rm: remove directory ‘symlinkdir/’? y
+rm: cannot remove ‘symlinkdir/’: Not a directory
+```
+
+### 2) 使用 unlink 命令怎样移除符号链接
+
+`unlink` 命令删除指定文件。它一次仅接受一个文件。
+
+删除符号链接文件:
+
+```
+# unlink symlinkfile
+```
+
+删除符号链接目录:
+
+```
+# unlink symlinkdir2
+```
+
+如果你在结尾增加 `/`,你不能使用 `unlink` 命令删除符号链接目录。
+
+```
+# unlink symlinkdir3/
+
+unlink: cannot unlink ‘symlinkdir3/’: Not a directory
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[arrowfeng](https://github.com/arrowfeng)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-remove-files-directories-folders-rm-command/
diff --git a/published/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md b/published/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md
new file mode 100644
index 0000000000..5390aee222
--- /dev/null
+++ b/published/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11370-1.html)
+[#]: subject: (Oracle Autonomous Linux: A Self Updating, Self Patching Linux Distribution for Cloud Computing)
+[#]: via: (https://itsfoss.com/oracle-autonomous-linux/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Oracle Autonomous Linux:用于云计算的自我更新、自我修补的 Linux 发行版
+======
+
+自动化是 IT 行业的增长趋势,其目的是消除重复任务中的手动干扰。Oracle 通过推出 Oracle Autonomous Linux 向自动化世界迈出了又一步,这无疑将使 IoT 和云计算行业受益。
+
+### Oracle Autonomous Linux:减少人工干扰,增多自动化
+
+![][1]
+
+周一,Oracle 联合创始人拉里·埃里森参加了在旧金山举行的 Oracle OpenWorld 全球大会。[他宣布了][2]一个新产品:世界上第一个自治 Linux。这是 Oracle 向第二代云迈进的第二步。第一步是两年前发布的 [Autonomous Database][3]。
+
+Oracle Autonomous Linux 的最大特性是降低了维护成本。根据 [Oracle 网站][4] 所述,Autonomous Linux “使用先进的机器学习和自治功能来提供前所未有的成本节省、安全性和可用性,并释放关键的 IT 资源来应对更多的战略计划”。
+
+Autonomous Linux 可以无需人工干预就安装更新和补丁。这些自动更新包括 “Linux 内核和关键用户空间库”的补丁。“不需要停机,而且可以免受外部攻击和内部恶意用户的攻击。”它们也可以在系统运行时进行,以减少停机时间。Autonomous Linux 还会自动处理伸缩,以确保满足所有计算需求。
+
+埃里森强调了新的自治系统将如何提高安全性。他特别提到了 [Capital One 数据泄露][5]是由于配置错误而发生的。他说:“一个防止数据被盗的简单规则:将数据放入自治系统。没有人为错误,没有数据丢失。那是我们与 AWS 之间的最大区别。”
+
+有趣的是,Oracle 还瞄准了这一新产品以与 IBM 竞争。埃里森说:“如果你付钱给 IBM,可以停了。”所有 Red Hat 应用程序都应该能够在 Autonomous Linux 上运行而无需修改。有趣的是,Oracle Linux 是从 Red Hat Enterprise Linux 的源代码中[构建][6]的。
+
+看起来,Oracle Autonomous Linux 目前只面向企业市场。
+
+### 关于 Oracle Autonomous Linux 的思考
+
+Oracle 是云服务市场的重要参与者。这种新的 Linux 产品将使其能够与 IBM 竞争。让人感兴趣的是 IBM 的反应会是如何,特别是当他们有来自 Red Hat 的新一批开源智能软件。
+
+如果你看一下市场数字,那么对于 IBM 或 Oracle 来说情况都不好。大多数云业务由 [Amazon Web Services、Microsoft Azure 和 Google Cloud Platform][7] 所占据。IBM 和 Oracle 落后于他们。[IBM 收购 Red Hat][8] 试图获得发展。这项新的自主云计划是 Oracle 争取统治地位(或至少试图获得更大的市场份额)的举动。让人感兴趣的是,到底有多少公司因为购买了 Oracle 的系统而在互联网的狂野西部变得更加安全?
+
+我必须简单提一下:当我第一次阅读该公告时,我的第一反应就是“好吧,我们离天网又近了一步”。如果从技术的角度来想象一下,我们就像是正在走向机器人末日。如果你要找我,我正打算去囤一些罐头食品。
+
+你对 Oracle 的新产品感兴趣吗?你会帮助他们赢得云战争吗?在下面的评论中让我们知道。
+
+如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][9] 上分享。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/oracle-autonomous-linux/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/oracle-autonomous-linux.png?resize=800%2C450&ssl=1
+[2]: https://www.zdnet.com/article/oracle-announces-oracle-autonomous-linux/
+[3]: https://www.oracle.com/in/database/what-is-autonomous-database.html
+[4]: https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html
+[5]: https://www.zdnet.com/article/100-million-americans-and-6-million-canadians-caught-up-in-capital-one-breach/
+[6]: https://distrowatch.com/table.php?distribution=oracle
+[7]: https://www.zdnet.com/article/top-cloud-providers-2019-aws-microsoft-azure-google-cloud-ibm-makes-hybrid-move-salesforce-dominates-saas/
+[8]: https://itsfoss.com/ibm-red-hat-acquisition/
+[9]: https://reddit.com/r/linuxusersgroup
diff --git a/scripts/check/common.inc.sh b/scripts/check/common.inc.sh
index 2bc0334930..905699a139 100644
--- a/scripts/check/common.inc.sh
+++ b/scripts/check/common.inc.sh
@@ -10,7 +10,7 @@ export TSL_DIR='translated' # 已翻译
export PUB_DIR='published' # 已发布
# 定义匹配规则
-export CATE_PATTERN='(talk|tech)' # 类别
+export CATE_PATTERN='(talk|tech|news)' # 类别
export FILE_PATTERN='[0-9]{8} [a-zA-Z0-9_.,() -]*\.md' # 文件名
# 获取用于匹配操作的正则表达式
diff --git a/sources/news/20190921 Samsung introduces SSDs it claims will -never die.md b/sources/news/20190921 Samsung introduces SSDs it claims will -never die.md
new file mode 100644
index 0000000000..09c1a52d7a
--- /dev/null
+++ b/sources/news/20190921 Samsung introduces SSDs it claims will -never die.md
@@ -0,0 +1,58 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Samsung introduces SSDs it claims will 'never die')
+[#]: via: (https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Samsung introduces SSDs it claims will 'never die'
+======
+New fail-in-place technology in Samsung's SSDs will allow the chips to gracefully recover from chip failure.
+Samsung
+
+[Solid-state drives][1] (SSDs) operate by writing to cells within the chip, and after so many writes, the cell eventually dies off and can no longer be written to. For that reason, SSDs have more actual capacity than listed. A 1TB drive, for example, has about 1.2TB of capacity, and as chips die off from repeated writes, new ones are brought online to keep the 1TB capacity.
+
+But that's for gradual wear. Sometimes SSDs just up and die completely and without warning, when a whole chip fails rather than just a few cells. So Samsung is trying to address that with a new generation of SSD memory chips with a technology it calls fail-in-place (FIP).
+
+**Also read: [Inside Hyperconvergence: Combining compute, storage and networking][2]**
+
+FIP technology allows a drive to cope with a failure by working around the dead chip and allowing the SSD to keep operating and just not using the bad chip. You will have less storage, but in all likelihood that drive will be replaced anyway, so this helps prevent data loss.
+
+FIP also scans the data for any damage before copying it to the remaining NAND, which would be the first time I've ever seen an SSD with built-in data recovery.
+
+### Built-in virtualization and machine learning technology
+
+The new Samsung SSDs come with two other software innovations. The first is built-in virtualization technology, which allows a single SSD to be divided up into up to 64 smaller drives for a virtual environment.
+
+The second is V-NAND machine learning technology, which helps to "accurately predict and verify cell characteristics, as well as detect any variation among circuit patterns through big data analytics," as Samsung put it. Doing so means much higher levels of performance from the drive.
+
+As you can imagine, this technology is aimed at enterprises and large-scale data centers, not consumers. All told, Samsung is launching 19 models of these new SSDs under the names PM1733 and PM1735.
+
+**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
+
+The PM1733 line features six models in a 2.5-inch U.2 form factor, offering storage capacity of between 960GB and 15.63TB, as well as four HHHL card-type drives with capacity ranging from 1.92TB to 30.72TB of storage. Each drive is guaranteed for one drive write per day (DWPD) for five years. In other words, the warranty is good for writing the equivalent of the drive's total capacity once per day every day for five years.
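As a quick sanity check on that warranty math (using the article's 15.63TB top capacity, treated here as 15,630GB so the arithmetic stays in integers), the total volume covered by a 1 DWPD, five-year guarantee works out to capacity × writes/day × 365 × 5:

```
# Total GB written covered by the warranty:
# 15,630 GB/day * 1 DWPD * 365 days * 5 years
echo $((15630 * 1 * 365 * 5))   # prints 28524750, i.e. roughly 28,500 TB
```

That is on the order of 28 petabytes of writes over the drive's warranted lifetime.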
+
+The PM1735 drives have lower capacity, maxing out at 12.8TB, but they are far more durable, guaranteeing three DWPD for five years. Both drives support PCI Express 4, which has double the throughput of the widely used PCI Express 3. The PM1735 offers nearly 14 times the sequential performance of a SATA-based SSD, with 8GB/s for read operations and 3.8GB/s for writes.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
+[2]: https://www.idginsiderpro.com/article/3409019/inside-hyperconvergence-combining-compute-storage-and-networking.html
+[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md b/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md
new file mode 100644
index 0000000000..f2525fa198
--- /dev/null
+++ b/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale)
+[#]: via: (https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
+[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
+
+Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
+======
+
+ * _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
+ * _**Presto’s architecture allows users to query a variety of data sources and move at scale and speed.**_
+
+
+
+![Facebook][1]
+
+Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
+
+Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
+
+The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
+
+“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
+
+**Presto can run on large clusters of machines**
+
+Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large scale data analytics. Presto’s architecture allows users to query a variety of data sources such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka, MongoDB and move at scale and speed.
+
+It can query data where it is stored without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
+
+“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
+
+Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
+
+**Expanding community for the benefit of all**
+
+Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
+
+“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
+
+Uber’s data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
+
+Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/
+
+作者:[Longjam Dineshwori][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/dineshwori-longjam/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1
diff --git a/sources/talk/20120911 Doug Bolden, Dunnet (IF).md b/sources/talk/20120911 Doug Bolden, Dunnet (IF).md
deleted file mode 100644
index c856dc5be0..0000000000
--- a/sources/talk/20120911 Doug Bolden, Dunnet (IF).md
+++ /dev/null
@@ -1,52 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Doug Bolden, Dunnet (IF))
-[#]: via: (http://www.wyrmis.com/games/if/dunnet.html)
-[#]: author: (W Doug Bolden http://www.wyrmis.com)
-
-Doug Bolden, Dunnet (IF)
-======
-
-### Dunnet (IF)
-
-#### Review
-
-When I began becoming a semi-serious hobbyist of IF last year, I mostly focused on Infocom, Adventures Unlimited, other Scott Adams based games, and freeware titles. I went on to buy some from Malinche. I picked up _1893_ and _Futureboy_ and (most recnetly) _Treasures of a Slave Kingdom_. I downloaded a lot of free games from various sites. With all of my research and playing, I never once read anything that talked about a game being bundled with Emacs.
-
-Partially, this is because I am a Vim guy. But I used to use Emacs. Kind of a lot. For probably my first couple of years with Linux. About as long as I have been a diehard Vim fan, now. I just never explored, it seems.
-
-I booted up Emacs tonight, and my fonts were hosed. Still do not know exactly why. I surfed some menus to find out what was going wrong and came across a menu option called "Adventure" under Games, which I assumed (I know, I know) meant the Crowther and Woods and 1977 variety. When I clicked it tonight, thinking that it has been a few months since I chased a bird around with a cage in a mine so I can fight off giant snakes or something, I was brought up text involving ends of roads and shovels. Trees, if shaken, that kill me with a coconut. This was not the game I thought it was.
-
-I dug around (or, in purely technical terms, typed "help") and got directed to [this website][1]. Well, here was an IF I had never touched before. Brand spanking new to me. I had planned to play out some _ToaSK_ tonight, but figured that could wait. Besides, I was not quite in the mood for the jocular fun of S. John Ross's commerical IF outing. I needed something a little more direct, and this apparently it.
-
-Most of the game plays out just like the _Colossal Cave Adventure_ cousins of the oldschool (generally commercial) IF days. There are items you pick. Each does a single task (well, there could be one exception to this, I guess). You collect treasures. Winning is a combination of getting to the end and turning in the treasures. The game slightly tweaks the formula by allowing multiple drop off points for the treasures. Since there is a weight limit, though, you usually have to drop them off at a particular time to avoid getting stuck. At several times, your "item cache" is flushed, so to speak, meaning you have to go back and replay earlier portions to find out how to bring things foward. Damage to items can occur to stop you from being able to play. Replaying is pretty much unavoidable, unless you guess outcomes just right.
-
-It also inherits many problems from the era it came. There is a twisty maze. I'm not sure how big it is. I just cheated and looked up a walkthrough for the maze portion. I plan on going back and replaying up to the maze bit and mapping it out, though. I was just mentally and physically beat when I played and knew that I was going to have to call it quits on the game for the night or cheat through the maze. I'm glad I cheated, because there are some interesting things after the maze.
-
-It also has the same sort of stilted syntax and variable levels of description that the original _Adventure_ had. Looking at one item might give you "there is nothing special about that" while looking at another might give you a sentence of flavor text. Several things mentioned in the background do not exist to the parser, which some do. Part of game play is putting up with experimenting. This includes, in cases, a tendency of room descriptions to be written from the perspective of the first time you enter. I know that the Classroom found towards the end of the game does not mention the South exit, either. There are possibly other times this occured that I didn't notice.
-
-It's final issue, again coming out of the era it was designed, is random death syndrome. This is not too common, but there are a few places where things that have no initially apparent fatal outcome lead to one anyhow. In some ways, this "fatal outcome" is just the game reaching an unwinnable state. For an example of the former, type "shake trees" in the first room. For an example of the latter, send either the lamp, the key, or the shovel through the ftp without switching ftp modes first. At least with the former, there is a sense of exploration in finding out new ways to die. In IF, creative deaths is a form of victory in their own right.
-
-_Dunnet_ has a couple of differences from most IF. The former difference is minor. There are little odd descriptions throughout the game. "This room is red" or "The towel has a picture of Snoopy one it" or "There is a cliff here" that do not seem to have an immediate effect on the game. Sure, you can jump over the cliff (and die, obviously) but but it still comes off as a bright spot in the standard description matrix. Towards the end, you will be forced to bring back these details. It makes a neat little diversion of looking around and exploring things. Most of the details are cute and/or add to the surreality of the game overall.
-
-The other big difference, and the one that greatly increased both my annoyance with and my enjoyment of the game, revolves around the two-three computer oriented scenes in the game. You have to type commands into two different computers throughout. One is a VAX and the other is, um, something like a PC (I forget). In both cases, there are clues to be found by knowing your way around the interface. This is a game for computer folk, so most who play it will have a sense of how to type "ls" or "dir" depending on the OS. But not all, will. Beating the game requires a general sense of computer literacy. You must know what types are in ftp. You must know how to determine what type a file is. You must know how to read a text file on a DOS style prompt. You must know something about protocols and etiquette for logging into ftp servers. All this sort of thing. If you do, or are willing to learn (I looked up some of the stuff online) then you can get past this portion with no problem. But this can be like the maze to some people, requiring several replays to get things right.
-
-The end result is a quirky but fun game that I wish I had known about before because now I have the feeling that my computer is hiding other secrets from me. Glad to have played. Will likely play again to see how many ways I can die.
-
---------------------------------------------------------------------------------
-
-via: http://www.wyrmis.com/games/if/dunnet.html
-
-作者:[W Doug Bolden][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.wyrmis.com
-[b]: https://github.com/lujun9972
-[1]: http://www.driver-aces.com/ronnie.html
diff --git a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md
deleted file mode 100644
index 7be913c3bf..0000000000
--- a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md
+++ /dev/null
@@ -1,111 +0,0 @@
-My Lisp Experiences and the Development of GNU Emacs
-======
-
-> (Transcript of Richard Stallman's Speech, 28 Oct 2002, at the International Lisp Conference).
-
-Since none of my usual speeches have anything to do with Lisp, none of them were appropriate for today. So I'm going to have to wing it. Since I've done enough things in my career connected with Lisp I should be able to say something interesting.
-
-My first experience with Lisp was when I read the Lisp 1.5 manual in high school. That's when I had my mind blown by the idea that there could be a computer language like that. The first time I had a chance to do anything with Lisp was when I was a freshman at Harvard and I wrote a Lisp interpreter for the PDP-11. It was a very small machine — it had something like 8k of memory — and I managed to write the interpreter in a thousand instructions. This gave me some room for a little bit of data. That was before I got to see what real software was like, that did real system jobs.
-
-I began doing work on a real Lisp implementation with JonL White once I started working at MIT. I got hired at the Artificial Intelligence Lab not by JonL, but by Russ Noftsker, which was most ironic considering what was to come — he must have really regretted that day.
-
-During the 1970s, before my life became politicized by horrible events, I was just going along making one extension after another for various programs, and most of them did not have anything to do with Lisp. But, along the way, I wrote a text editor, Emacs. The interesting idea about Emacs was that it had a programming language, and the user's editing commands would be written in that interpreted programming language, so that you could load new commands into your editor while you were editing. You could edit the programs you were using and then go on editing with them. So, we had a system that was useful for things other than programming, and yet you could program it while you were using it. I don't know if it was the first one of those, but it certainly was the first editor like that.
-
-This spirit of building up gigantic, complicated programs to use in your own editing, and then exchanging them with other people, fueled the spirit of free-wheeling cooperation that we had at the AI Lab then. The idea was that you could give a copy of any program you had to someone who wanted a copy of it. We shared programs to whomever wanted to use them, they were human knowledge. So even though there was no organized political thought relating the way we shared software to the design of Emacs, I'm convinced that there was a connection between them, an unconscious connection perhaps. I think that it's the nature of the way we lived at the AI Lab that led to Emacs and made it what it was.
-
-The original Emacs did not have Lisp in it. The lower level language, the non-interpreted language — was PDP-10 Assembler. The interpreter we wrote in that actually wasn't written for Emacs, it was written for TECO. It was our text editor, and was an extremely ugly programming language, as ugly as could possibly be. The reason was that it wasn't designed to be a programming language, it was designed to be an editor and command language. There were commands like ‘5l’, meaning ‘move five lines’, or ‘i’ and then a string and then an ESC to insert that string. You would type a string that was a series of commands, which was called a command string. You would end it with ESC ESC, and it would get executed.
-
-Well, people wanted to extend this language with programming facilities, so they added some. For instance, one of the first was a looping construct, which was < >. You would put those around things and it would loop. There were other cryptic commands that could be used to conditionally exit the loop. To make Emacs, we (1) added facilities to have subroutines with names. Before that, it was sort of like Basic, and the subroutines could only have single letters as their names. That was hard to program big programs with, so we added code so they could have longer names. Actually, there were some rather sophisticated facilities; I think that Lisp got its unwind-protect facility from TECO.
-
-We started putting in rather sophisticated facilities, all with the ugliest syntax you could ever think of, and it worked — people were able to write large programs in it anyway. The obvious lesson was that a language like TECO, which wasn't designed to be a programming language, was the wrong way to go. The language that you build your extensions on shouldn't be thought of as a programming language in afterthought; it should be designed as a programming language. In fact, we discovered that the best programming language for that purpose was Lisp.
-
-It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
-
-So Bernie saw that an application — a program that does something useful for you — which has Lisp inside it and which you could extend by rewriting the Lisp programs, is actually a very good way for people to learn programming. It gives them a chance to write small programs that are useful for them, which in most arenas you can't possibly do. They can get encouragement for their own practical use — at the stage where it's the hardest — where they don't believe they can program, until they get to the point where they are programmers.
-
-At that point, people began to wonder how they could get something like this on a platform where they didn't have full service Lisp implementation. Multics MacLisp had a compiler as well as an interpreter — it was a full-fledged Lisp system — but people wanted to implement something like that on other systems where they had not already written a Lisp compiler. Well, if you didn't have the Lisp compiler you couldn't write the whole editor in Lisp — it would be too slow, especially redisplay, if it had to run interpreted Lisp. So we developed a hybrid technique. The idea was to write a Lisp interpreter and the lower level parts of the editor together, so that parts of the editor were built-in Lisp facilities. Those would be whatever parts we felt we had to optimize. This was a technique that we had already consciously practiced in the original Emacs, because there were certain fairly high level features which we re-implemented in machine language, making them into TECO primitives. For instance, there was a TECO primitive to fill a paragraph (actually, to do most of the work of filling a paragraph, because some of the less time-consuming parts of the job would be done at the higher level by a TECO program). You could do the whole job by writing a TECO program, but that was too slow, so we optimized it by putting part of it in machine language. We used the same idea here (in the hybrid technique), that most of the editor would be written in Lisp, but certain parts of it that had to run particularly fast would be written at a lower level.
-
-Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design. The low level language was not machine language anymore, it was C. C was a good, efficient language for portable programs to run in a Unix-like operating system. There was a Lisp interpreter, but I implemented facilities for special purpose editing jobs directly in C — manipulating editor buffers, inserting leading text, reading and writing files, redisplaying the buffer on the screen, managing editor windows.
-
-Now, this was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, and was referred to as GosMacs. A strange thing happened with him. In the beginning, he seemed to be influenced by the same spirit of sharing and cooperation of the original Emacs. I first released the original Emacs to people at MIT. Someone wanted to port it to run on Twenex — it originally only ran on the Incompatible Timesharing System we used at MIT. They ported it to Twenex, which meant that there were a few hundred installations around the world that could potentially use it. We started distributing it to them, with the rule that “you had to send back all of your improvements” so we could all benefit. No one ever tried to enforce that, but as far as I know people did cooperate.
-
-Gosling did, at first, seem to participate in this spirit. He wrote in a manual that he called the program Emacs hoping that others in the community would improve it until it was worthy of that name. That's the right approach to take towards a community — to ask them to join in and make the program better. But after that he seemed to change the spirit, and sold it to a company.
-
-At that time I was working on the GNU system (a free software Unix-like operating system that many people erroneously call “Linux”). There was no free software Emacs editor that ran on Unix. I did, however, have a friend who had participated in developing Gosling's Emacs. Gosling had given him, by email, permission to distribute his own version. He proposed to me that I use that version. Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as ‘mocklisp’, which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.
-
-I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs.
-
-The one exception was redisplay. For a long time, redisplay was sort of an alternate world. The editor would enter the world of redisplay and things would go on with very special data structures that were not safe for garbage collection, not safe for interruption, and you couldn't run any Lisp programs during that. We've changed that since — it's now possible to run Lisp code during redisplay. It's quite a convenient thing.
-
-This second Emacs program was ‘free software’ in the modern sense of the term — it was part of an explicit political campaign to make software free. The essence of this campaign was that everybody should be free to do the things we did in the old days at MIT, working together on software and working with whoever wanted to work with us. That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge.
-
-At the time, you could make a computer that was about the same price range as other computers that weren't meant for Lisp, except that it would run Lisp much faster than they would, and with full type checking in every operation as well. Ordinary computers typically forced you to choose between execution speed and good typechecking. So yes, you could have a Lisp compiler and run your programs fast, but when they tried to take `car` of a number, it got nonsensical results and eventually crashed at some point.
-
-The Lisp machine was able to execute instructions about as fast as those other machines, but each instruction — a car instruction would do data typechecking — so when you tried to get the car of a number in a compiled program, it would give you an immediate error. We built the machine and had a Lisp operating system for it. It was written almost entirely in Lisp, the only exceptions being parts written in the microcode. People became interested in manufacturing them, which meant they should start a company.
-
-There were two different ideas about what this company should be like. Greenblatt wanted to start what he called a “hacker” company. This meant it would be a company run by hackers and would operate in a way conducive to hackers. Another goal was to maintain the AI Lab culture (3). Unfortunately, Greenblatt didn't have any business experience, so other people in the Lisp machine group said they doubted whether he could succeed. They thought that his plan to avoid outside investment wouldn't work.
-
-Why did he want to avoid outside investment? Because when a company has outside investors, they take control and they don't let you have any scruples. And eventually, if you have any scruples, they also replace you as the manager.
-
-So Greenblatt had the idea that he would find a customer who would pay in advance to buy the parts. They would build machines and deliver them; with profits from those parts, they would then be able to buy parts for a few more machines, sell those and then buy parts for a larger number of machines, and so on. The other people in the group thought that this couldn't possibly work.
-
-Greenblatt then recruited Russell Noftsker, the man who had hired me, who had subsequently left the AI Lab and created a successful company. Russell was believed to have an aptitude for business. He demonstrated this aptitude for business by saying to the other people in the group, “Let's ditch Greenblatt, forget his ideas, and we'll make another company.” Stabbing in the back, clearly a real businessman. Those people decided they would form a company called Symbolics. They would get outside investment, not have scruples, and do everything possible to win.
-
-But Greenblatt didn't give up. He and the few people loyal to him decided to start Lisp Machines Inc. anyway and go ahead with their plans. And what do you know, they succeeded! They got the first customer and were paid in advance. They built machines and sold them, and built more machines and more machines. They actually succeeded even though they didn't have the help of most of the people in the group. Symbolics also got off to a successful start, so you had two competing Lisp machine companies. When Symbolics saw that LMI was not going to fall flat on its face, they started looking for ways to destroy it.
-
-Thus, the abandonment of our lab was followed by “war” in our lab. The abandonment happened when Symbolics hired away all the hackers, except me and the few who worked at LMI part-time. Then they invoked a rule and eliminated people who worked part-time for MIT, so they had to leave entirely, which left only me. The AI lab was now helpless. And MIT had made a very foolish arrangement with these two companies. It was a three-way contract where both companies licensed the use of Lisp machine system sources. These companies were required to let MIT use their changes. But it didn't say in the contract that MIT was entitled to put them into the MIT Lisp machine systems that both companies had licensed. Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was.
-
-So Symbolics came up with a plan (4). They said to the lab, “We will continue making our changes to the system available for you to use, but you can't put it into the MIT Lisp machine system. Instead, we'll give you access to Symbolics' Lisp machine system, and you can run it, but that's all you can do.”
-
-This, in effect, meant that they demanded that we had to choose a side, and use either the MIT version of the system or the Symbolics version. Whichever choice we made determined which system our improvements went to. If we worked on and improved the Symbolics version, we would be supporting Symbolics alone. If we used and improved the MIT version of the system, we would be doing work available to both companies, but Symbolics saw that we would be supporting LMI because we would be helping them continue to exist. So we were not allowed to be neutral anymore.
-
-Up until that point, I hadn't taken the side of either company, although it made me miserable to see what had happened to our community and the software. But now, Symbolics had forced the issue. So, in an effort to help keep Lisp Machines Inc. going (5) — I began duplicating all of the improvements Symbolics had made to the Lisp machine system. I wrote the equivalent improvements again myself (i.e., the code was my own).
-
-After a while (6), I came to the conclusion that it would be best if I didn't even look at their code. When they made a beta announcement that gave the release notes, I would see what the features were and then implement them. By the time they had a real release, I did too.
-
-In this way, for two years, I prevented them from wiping out Lisp Machines Incorporated, and the two companies went on. But, I didn't want to spend years and years punishing someone, just thwarting an evil deed. I figured they had been punished pretty thoroughly because they were stuck with competition that was not leaving or going to disappear (7). Meanwhile, it was time to start building a new community to replace the one that their actions and others had wiped out.
-
-The Lisp community in the 70s was not limited to the MIT AI Lab, and the hackers were not all at MIT. The war that Symbolics started was what wiped out MIT, but there were other events going on then. There were people giving up on cooperation, and together this wiped out the community and there wasn't much left.
-
-Once I stopped punishing Symbolics, I had to figure out what to do next. I had to make a free operating system, that was clear — the only way that people could work together and share was with a free operating system.
-
-At first, I thought of making a Lisp-based system, but I realized that wouldn't be a good idea technically. To have something like the Lisp machine system, you needed special purpose microcode. That's what made it possible to run programs as fast as other computers would run their programs and still get the benefit of typechecking. Without that, you would be reduced to something like the Lisp compilers for other machines. The programs would be faster, but unstable. Now that's okay if you're running one program on a timesharing system — if one program crashes, that's not a disaster, that's something your program occasionally does. But that didn't make it good for writing the operating system in, so I rejected the idea of making a system like the Lisp machine.
-
-I decided instead to make a Unix-like operating system that would have Lisp implementations to run as user programs. The kernel wouldn't be written in Lisp, but we'd have Lisp. So the development of that operating system, the GNU operating system, is what led me to write the GNU Emacs. In doing this, I aimed to make the absolute minimal possible Lisp implementation. The size of the programs was a tremendous concern.
-
-There were people in those days, in 1985, who had one-megabyte machines without virtual memory. They wanted to be able to use GNU Emacs. This meant I had to keep the program as small as possible.
-
-For instance, at the time the only looping construct was ‘while’, which was extremely simple. There was no way to break out of the ‘while’ statement; you just had to do a catch and a throw, or test a variable that ran the loop. That shows how far I was pushing to keep things small. We didn't have ‘caar’ and ‘cadr’ and so on; “squeeze out everything possible” was the spirit of GNU Emacs, the spirit of Emacs Lisp, from the beginning.
-
-Obviously, machines are bigger now, and we don't do it that way any more. We put in ‘caar’ and ‘cadr’ and so on, and we might put in another looping construct one of these days. We're willing to extend it some now, but we don't want to extend it to the level of common Lisp. I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.
-
-That was not the end of the GNU projects involved with Lisp. Later on around 1995, we were looking into starting a graphical desktop project. It was clear that for the programs on the desktop, we wanted a programming language to write a lot of it in to make it easily extensible, like the editor. The question was what it should be.
-
-At the time, TCL was being pushed heavily for this purpose. I had a very low opinion of TCL, basically because it wasn't Lisp. It looks a tiny bit like Lisp, but semantically it isn't, and it's not as clean. Then someone showed me an ad where Sun was trying to hire somebody to work on TCL to make it the “de-facto standard extension language” of the world. And I thought, “We've got to stop that from happening.” So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large. The idea was that we would have a Scheme interpreter designed to be linked into applications in the same way TCL was linked into applications. We would then recommend that as the preferred extensibility package for all GNU programs.
-
-There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them. Our idea was that if each extensible application supported Scheme, you could write an implementation of TCL or Python or Perl in Scheme that translates that program into Scheme. Then you could load that into any application and customize it in your favorite language and it would work with other customizations as well.
-
-As long as the extensibility languages are weak, the users have to use only the language you provided them. Which means that people who love any given language have to compete for the choice of the developers of applications — saying “Please, application developer, put my language into your application, not his language.” Then the users get no choices at all — whichever application they're using comes with one language and they're stuck with [that language]. But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don't have to have a language war anymore. That's what we're hoping ‘Guile’, our scheme interpreter, will do. We had a person working last summer finishing up a translator from Python to Scheme. I don't know if it's entirely finished yet, but for anyone interested in this project, please get in touch. So that's the plan we have for the future.
-
-I haven't been speaking about free software, but let me briefly tell you a little bit about what that means. Free software does not refer to price; it doesn't mean that you get it for free. (You may have paid for a copy, or gotten a copy gratis.) It means that you have freedom as a user. The crucial thing is that you are free to run the program, free to study what it does, free to change it to suit your needs, free to redistribute copies to others and free to publish improved, extended versions. This is what free software means. If you are using a non-free program, you have lost crucial freedom, so don't ever do that.
-
-The purpose of the GNU project is to make it easier for people to reject freedom-trampling, user-dominating, non-free software by providing free software to replace it. For those who don't have the moral courage to reject the non-free software, when that means some practical inconvenience, what we try to do is give a free alternative so that you can move to freedom with less of a mess and less of a sacrifice in practical terms. The less sacrifice the better. We want to make it easier for you to live in freedom, to cooperate.
-
-This is a matter of the freedom to cooperate. We're used to thinking of freedom and cooperation with society as if they are opposites. But here they're on the same side. With free software you are free to cooperate with other people as well as free to help yourself. With non-free software, somebody is dominating you and keeping people divided. You're not allowed to share with them, you're not free to cooperate or help society, any more than you're free to help yourself. Divided and helpless is the state of users using non-free software.
-
-We've produced a tremendous range of free software. We've done what people said we could never do; we have two operating systems of free software. We have many applications and we obviously have a lot farther to go. So we need your help. I would like to ask you to volunteer for the GNU project; help us develop free software for more jobs. Take a look at [http://www.gnu.org/help][1] to find suggestions for how to help. If you want to order things, there's a link to that from the home page. If you want to read about philosophical issues, look in /philosophy. If you're looking for free software to use, look in /directory, which lists about 1900 packages now (which is a fraction of all the free software out there). Please write more and contribute to us. My book of essays, “Free Software and Free Society”, is on sale and can be purchased at [www.gnu.org][2]. Happy hacking!
-
---------------------------------------------------------------------------------
-
-via: https://www.gnu.org/gnu/rms-lisp.html
-
-Author: [Richard Stallman][a]
-Topic selection: [lujun9972](https://github.com/lujun9972)
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
-
-[a]:https://www.gnu.org
-[1]:https://www.gnu.org/help/
-[2]:http://www.gnu.org/
diff --git a/sources/talk/20170908 Betting on the Web.md b/sources/talk/20170908 Betting on the Web.md
deleted file mode 100644
index 84d70e164f..0000000000
--- a/sources/talk/20170908 Betting on the Web.md
+++ /dev/null
@@ -1,467 +0,0 @@
-[Betting on the Web][27]
-============================================================
-
-
-
- _Note: I just spoke at [Coldfront 2017][12] about why I’m such a big proponent of the Web. What follows is essentially that talk as a blog post (I’ll add a link to the video once it is published)._
-
- _Also: the Starbucks PWA mentioned in the talk has shipped! 🎉_
-
-I’m _not_ going to tell you what to do. Instead, I’m going to explain why I’ve chosen to bet my whole career on this crazy Web thing. "Betting" sounds a bit haphazard; it’s more calculated than that. It would probably be better described as "investing."
-
-Investing what? Our time and attention.
-
-Many of us only have maybe 6 or so _really_ productive hours per day when we’re capable of being super focused and doing our absolute best work. So how we choose to invest that very limited time is kind of a big deal. Even though I really enjoy programming, I rarely do it aimlessly just for the pure joy of it. Ultimately, I’m investing that productive time expecting to get _some kind of return_ even if it’s just mastering something or solving a difficult problem.
-
- [### "So what, what’s your point?"][28]
-
-> > More than most of us realize, we are _constantly_ investing
-
-Sure, someone may be paying for our time directly but there’s more to it than just trading hours for money. In the long run, what we choose to invest our professional efforts into has other effects:
-
-**1\. Building Expertise:** We learn as we work and gain valuable experience in the technologies and platform we’re investing in. That expertise impacts our future earning potential and what types of products we’re capable of building.
-
-**2\. Building Equity:** Hopefully we’re generating equity and adding value to whatever product we’re building.
-
-**3\. Shaping tomorrow’s job market:** We’re building tomorrow’s legacy code today™. Today’s new hotness is tomorrow’s maintenance burden. In many cases the people who initially build a product or service are not the ones that ultimately maintain it. This means the technology choices we make when building a new product or service determine whether or not there will be jobs later that require expertise in that particular platform/technology. So, those tech choices _literally shape tomorrow’s job market!_
-
-**4\. Body of knowledge:** As developers, we’re pretty good at sharing what we learn. We blog, we "Stack-Overflow", etc. These things all contribute to the corpus of knowledge available about that given platform which adds significant value by making it easier/faster for others to build things using these tools.
-
-**5\. Open Source:** We solve problems and share our work. When lots of developers do this it adds _tremendous value_ to the technologies and platforms these tools are for. The sheer volume of work that we _don’t have to do_ because we can use someone else’s library that already does it is mind-boggling. Millions and millions of hours of development work are available to us for free with a simple `npm install`.
-
-**6\. Building apps for users on that platform:** Last but not least, without apps there is no platform. By making more software available to end users, we’re contributing significant value to the platforms that run our apps.
-
-Looking at that list, the last four items are not about _us_ at all. They represent other significant long-term impacts.
-
-> > We often have a broader impact than we realize
-
-We’re not just investing time into a job; we’re also shaping the platform, community, and technologies we use.
-
-We’re going to come back to this, but hopefully, recognizing that greater impact can help us make better investments.
-
- [### With all investing comes _risk_][29]
-
-We can’t talk about investing without talking about risk. So what are some of the potential risks?
-
- [### Are we building for the right platform?][30]
-
-Platform stability is indeed A Thing™. Just ask a Flash developer, Windows Phone developer, or Blackberry developer. Platforms _can_ go away.
-
-If we look at those three platforms, what do they have in common? They’re _closed_ platforms. What I mean is there’s a single controlling interest. When you build for them, you’re building for a specific operating system and coding against a particular implementation as opposed to coding against a set of _open standards_. You could argue that, at least to some degree, Flash died because of its "closed-ness". Regardless, one thing is clear from a risk mitigation perspective: open is better than closed.
-
-The Web is _incredibly_ open. It would be quite difficult for any one entity to kill it off.
-
-Now, as for Windows Phone/Blackberry: they failed due to a lack of interested users... or was it a lack of interested developers?
-
-
-
-Maybe if Ballmer ☝️ had just yelled "developers" _one more time_ we’d all have Windows Phones in our pockets right now 😜.
-
-From a risk mitigation perspective, two things are clear with regard to platform stability:
-
-1. Having _many users_ is better than having few users
-
-2. Having _more developers_ building for the platform is better than having few developers
-
-> > There is no bigger, more popular open platform than the Web
-
- [### Are we building the right software?][31]
-
-Many of us are building apps. Well, we used to build "applications" but that wasn’t nearly cool enough. So now we build "apps" instead 😎.
-
-What does "app" mean to a user? This is important because I think it’s changed a bit over the years. To a user, I would suggest it basically means: "a thing I put on my phone."
-
-But for our purposes I want to get a bit more specific. I’d propose that an app is really:
-
-1. An "ad hoc" user interface
-
-2. That is local(ish) to the device
-
-The term "ad hoc" is Latin and translates to **"for this"**. This actually matches pretty closely with what Apple’s marketing campaigns have been teaching the masses:
-
-> There’s an app **for that**
->
-> – Apple
-
-The point is it helps you _do_ something. The emphasis is on action. I happen to think this is largely the difference between a "site" and an "app". A news site, for example, has articles that are resources in and of themselves, whereas a news app is software that runs on the device and helps you consume news articles.
-
-Another way to put it would be that a site is more like a book, while an app is a tool.
-
- [### Should we be building apps at all?!][32]
-
-Remember when chatbots were supposed to take over the world? Or perhaps we’ll all be walking around with augmented reality glasses and that’s how we’ll interact with the world?
-
-I’ve heard it said that "the future app is _no_ app" and virtual assistants will take over everything.
-
-
-
-I’ve had one of these sitting in my living room for a couple of years, but I find it all but useless. It’s just a nice bluetooth speaker that I can yell at to play me music.
-
-But I find it very interesting that:
-
-> > Even Alexa has an app!
-
-Why? Because there’s no screen! As it turns out these "ad hoc visual interfaces" are extremely efficient.
-
-Sure, I can yell out "Alexa, what’s the weather going to be like today" and I’ll hear a reply with the high and low and whether it’s cloudy, rainy, or sunny. But in that same amount of time, I can pull out my phone, tap the weather app, and before Alexa can finish telling me those 3 pieces of data, I can visually scan the entire week’s worth of data, air quality, sunrise/sunset times, etc. It’s just _so much more_ efficient as a mechanism for consuming this type of data.
-
-As a result of that natural efficiency, I believe that having a visual interface is going to continue to be useful for all sorts of things for a long time to come.
-
-That’s _not_ to say virtual assistants aren’t useful! Google Assistant on my Pixel is quite useful in part because it can show me answers and can tolerate vagueness in a way that an app with a fixed set of buttons never could.
-
-But, as is so often the case with useful new tech, rarely does it completely replace everything that came before it; instead, it augments what we already have.
-
- [### If apps are so great why are we so "apped out"?][33]
-
-How do we explain that supposed efficiency when there’s data like this?
-
-* [65% of smartphone users download zero apps per month][13]
-
-* [More than 75% of people who download an app open it once and never come back][14]
-
-I think to answer that we have to really look at what isn’t working well.
-
- [### What sucks about apps?][34]
-
-1. **Downloading them certainly sucks.** No one wants to open an app store, search for the app they’re trying to find, then wait to download the huge file. These days a 50MB app is pretty small: Facebook for iOS is 346MB, and Twitter for iOS is 212MB.
-
-2. **Updating them sucks.** Every night I plug in my phone I download a whole slew of app updates that I, as a user, **could not possibly care less about**. In addition, many of these apps are things I installed _once_ and will **never open again, ever!**. I’d love to know the global stats on how much bandwidth has been wasted on app updates for apps that were never opened again.
-
-3. **Managing them sucks.** Sure, when I first got an iPhone ages ago and could first download apps my home screen was impeccable. Then when we got folders!! Wow... what an amazing development! Now I could finally put all those pesky uninstallable Apple apps in a folder called "💩" and pretend they didn’t exist. But now, my home screen is a bit of a disaster. Sitting there dragging apps around is not my idea of a good time. So eventually things get all cluttered up again.
-
-The thing I’ve come to realize, is this:
-
-> > We don’t care how they got there. We only care that they’re _there_ when we need them.
-
-For example, I love to go mountain biking and I enjoy tracking my rides with an app called Strava. I get all geared up for my ride, get on my bike and then go, "Oh right, gotta start Strava." So I pull out my phone _with my gloves on_ and go: "Ok Google, open Strava".
-
-I _could not care less_ about where that app was or where it came from when I said that.
-
-I don’t care if it was already installed, I don’t care if it never existed on my home screen, or if it was generated out of thin air on the spot.
-
-> > Context is _everything_ !
-
-If I’m at a parking meter, I want the app _for that_ . If I’m visiting Portland, I want their public transit app.
-
-But I certainly _do not_ want it as soon as I’ve left.
-
-If I’m at a conference, I might want a conference app to see the schedule, post questions to speakers, or whatnot. But wow, talk about something that quickly becomes worthless as soon as that conference is over!
-
-As it turns out the more "ad hoc" these things are, the better! The more _disposable_ and _re-inflatable_ the better!
-
-Which also reminds me of something that I feel like we often forget. We always assume people want our shiny apps, and we measure things like "engagement" and "time spent in the app" when really (and there certainly are exceptions to this, such as apps that are essentially entertainment), often...
-
-> > People don’t want to use your app. They want _to be done_ using your app.
-
- [### Enter PWAs][35]
-
-I’ve been contracting with Starbucks for the past 18 months. They’ve taken on the ambitious project of essentially re-building a lot of their web stuff in Node.js and React. One of the things I’ve helped them with (and pushed hard for) was to build a PWA (Progressive Web App) that could provide similar functionality as their native apps. Coincidentally it was launched today: [https://preview.starbucks.com][18]!
-
-
-
-This gives us a nice real-world example:
-
-* Starbucks iOS: 146MB
-
-* Starbucks PWA: ~600KB
-
-The point is there’s a _tremendous_ size difference.
-
-It’s 0.4% of the size. To put it differently, I could download the PWA **243 times** in the same amount of time it would take to download the iOS app. And of course, the iOS app still has to install and boot up after it downloads!
-
-Personally, I’d have loved it if the app ended up even smaller and there are plans to shrink it further. But even still, they’re _not even on the same planet_ in terms of file-size!
-
-Market forces are _strongly_ aligned with PWAs here:
-
-* Few app downloads
-
-* User acquisition is _hard_
-
-* User acquisition is _expensive_
-
-If the goal is to get people to sign up for the rewards program, that type of size difference could very well determine whether or not someone is signed up and using the app experience (via PWA) by the time they reach the front of the line at Starbucks.
-
-User acquisition is hard enough already, the more time and barriers that can be removed from that process, the better.
-
- [### Quick PWA primer][36]
-
-As mentioned, PWA stands for "Progressive Web Apps" or, as I like to call them: "Web Apps" 😄
-
-Personally, I’ve been trying to build what a user would define as an "app" with web technology for _years_. But until PWAs came along, as hard as we tried, you couldn’t quite build a _real app_ with just web tech. Honestly, I kinda hate myself for saying that, but in terms of something that a user would understand as an "app", I’m afraid that statement was probably true until very recently.
-
-So what’s a PWA? As one of its primary contributors put it:
-
-> It’s just a website that took all the right vitamins.
->
-> – Alex Russell
-
-It involves a few specific technologies, namely:
-
-* Service Workers, which enable true reliability on the web. What I mean by that is: as long as you loaded the app while you were online, from then on it will _always_ open, even when you’re offline. This puts it on equal footing with other apps.
-
-* HTTPS. PWAs must be served over encrypted connections.
-
-* Web App Manifest. A simple JSON file that describes your application: what icons to use if someone adds it to their home screen, what its name is, etc.
-
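-To make that last item concrete, here’s a minimal sketch of what a Web App Manifest might look like (the member names come from the W3C Web App Manifest spec; the app name, icon path, and colors are made-up examples):
-
-```
-{
-  "name": "Example Coffee App",
-  "short_name": "Coffee",
-  "start_url": "/",
-  "display": "standalone",
-  "background_color": "#ffffff",
-  "theme_color": "#006241",
-  "icons": [
-    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
-  ]
-}
-```
-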
-There are plenty of other resources about PWAs on the web. The point for my purposes is:
-
-> > It is now possible to build PWAs that are _indistinguishable_ from their native counterparts
-
-They can be up and running in a fraction of the time, whether or not they were already "installed", and, unlike "apps", can be saved as an app on the device _at the user’s discretion!_
-
-Essentially they’re really great for creating "ad hoc" experiences that can be "cold started" on a whim nearly as fast as if it were already installed.
-
-I’ve said it before and I’ll say it again:
-
-> PWAs are the biggest thing to happen to the mobile web since the iPhone.
->
-> – Um... that was me
-
- [### Let’s talk Internet of things][37]
-
-I happen to think that PWAs + IoT = ✨ MAGIC ✨. As several smart folks have pointed out, the one-app-per-device approach to smart devices probably isn’t particularly smart.
-
-It doesn’t scale well and it completely fails in terms of "ad hoc"-ness. Sure, if I have a Nest thermostat and Phillips Hue lightbulbs, it’s reasonable to have two apps installed. But even that sucks as soon as I want someone else to be able to control them. If _I just let you into my house_, trust me... I’m perfectly happy to let you flip a light switch, you’re in my house, after all. But for the vast majority of these things there’s no concept of "nearby apps" and it’s silly for my guest (or a house-sitter) to download an app they don’t actually want, just so I can let them control my lights.
-
-The whole "nearby apps" thing has so many uses:
-
-* thermostat
-
-* lights
-
-* locks
-
-* garage doors
-
-* parking meter
-
-* setting refrigerator temp
-
-* conference apps
-
-Today there are lots of new capabilities being added to the web to enable web apps to interact with physical devices in the real world. Things like WebUSB, WebBluetooth, WebNFC, and efforts like [Physical Web][19]. Even for things like Augmented (and Virtual) reality, the idea of the items we want to interact with having URLs makes so much sense and I can’t imagine a better, more flexible use of those URLs than for them to point to a PWA that lets you interact with that device!
-
- [### Forward looking statements...][38]
-
-I’ve been talking about all this in terms of investing. If you’ve ever read any company statement that discusses the future, you always see a line explaining that the things about to be discussed contain "forward-looking statements" that may or may not ultimately happen.
-
-So, here are _my_ forward looking statements.
-
- [### 1\. PWA-only startups][39]
-
-Given the cost (and challenge) of user-acquisition and the quality of app you can build with PWAs these days, I feel like this is inevitable. If you’re trying to get something off the ground, it just isn’t very efficient to spin up _three whole teams_ to build for iOS, Android, and the Web.
-
- [### 2\. PWAs listed in App Stores][40]
-
-So, there’s a problem with "web only", which is that for the better part of a decade we’ve been training users to look for apps in the app store for their given platform. So if you’re already a recognized brand, and especially if you already have a native app that you’re trying to replace, it simply isn’t smart for you _not to exist_ in the app stores.
-
-So, some of this isn’t all that "forward looking", as it turns out: [Microsoft has already committed to listing PWAs in the Windows Store][20], more than once!
-
-**They haven’t even finished implementing Service Worker in Edge yet!** But they’re already committing hard to PWAs. In addition to the post linked above, one of their lead Developer Relations folks, Aaron Gustafson, just [wrote an article for A List Apart][21] telling everyone to build PWAs.
-
-But if you think about it from their perspective, of course they should do that! As I said earlier, they’ve struggled to attract developers to build for their mobile phones. In fact, they’ve at times _paid_ companies to write apps for them, simply to make sure apps exist so that users will have the apps they want when using a Windows Phone. Remember how I said developer time is a scarce resource and without apps, the platform is worthless? So _of course_ they should add first-class support for PWAs. If you build a PWA like a lot of folks are doing, then TADA!!! 🎉 You just made a Windows/Windows Phone app!
-
-I’m of the opinion that the writing is on the wall for Google to do the same thing. It’s pure speculation, but it certainly seems like they are taking steps that suggest they may be planning on listing PWAs too. Namely, the Chrome folks recently shipped a feature referred to as "WebAPKs" for Chrome stable on Android (yep, everyone). In the past I’ve [explained in more detail][22] why I think this is a big deal. But a shorter version would be that before this change, sure, you could save a PWA to your home screen... _But_, in reality it was actually a glorified bookmark. That’s what changes with WebAPKs. Instead, when you add a PWA to your home screen it generates and "side loads" an actual `.apk` file on the fly. This allows that PWA to enjoy some privileges that were simply impossible until the operating system recognized it as "an app." For example:
-
-* You can now mute push notifications for a specific PWA without muting it for all of Chrome.
-
-* The PWA is listed in the "app tray" that shows all installed apps (previously it was just the home screen).
-
-* You can see power usage, and permissions granted to the PWA just like any other app.
-
-* The app developer can now update the icon for the app by publishing an update to the app manifest. Before, there was no way to update the icon once it had been added.
-
-* And a slew of other similar benefits...
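That generated WebAPK is minted from the web app manifest the PWA already ships (its name, icons, start URL, and so on), so the developer doesn’t have to produce anything extra. A minimal manifest might look like this (all names and values here are illustrative):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/?source=homescreen",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```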
-
-If you’ve ever installed an Android app from a source other than the Play Store (or a carrier/OEM store) you know that you have to flip a switch in settings to allow installs from "untrusted sources". So, how then, you might ask, can they generate and install an actual `.apk` file for a PWA without requiring that you change that setting? As it turns out, the answer is quite simple: use a trusted source!
-
-> > As it turns out WebAPKs are managed through Google Play Services!
-
-I’m no rocket scientist, but based on their natural business alignment with the web, their promotion of PWAs, and the lengths they’ve gone to to grant PWAs equal status on the operating system with native apps, it only seems natural that they’d eventually _list them in the store_.
-
-Additionally, if Google did start listing PWAs in the Play Store, both they and Microsoft would be doing it, _leaving Apple sticking out like a sore thumb and looking like the laggard_. Essentially, app developers would be able to target a _massive_ number of users on a range of platforms with a single well-built PWA. But, just like developers grew to despise IE for not keeping up with the times and forcing them to jump through extra hoops to support it, the same thing would happen here. Apple does _not_ want to be the next IE, and I’ve already seen many prominent developers suggesting they already are.
-
-Which brings us to another forward-looking statement:
-
- [### 3\. PWAs on iOS][41]
-
-Just a few weeks ago the Safari folks announced that Service Worker is now [officially under development][23].
-
- [### 4\. PWAs everywhere][42]
-
-I really think we’ll start seeing them everywhere:
-
-* Inside VR/AR/MR experiences
-
-* Inside chat bots (again, pulling up an ad-hoc interface is so much more efficient).
-
-* Inside Xbox?!
-
-As it turns out, if you look at Microsoft’s status page for Edge about Service Worker, Xbox is listed right there among the target platforms.
-
-I hinted at this already, but I also think PWAs pair very nicely with virtual assistants: being able to pull up a PWA on a whim, without requiring it to already be installed, would add tremendous power to the virtual assistant. Incidentally, this also becomes easier if there’s a known "registered" name for a PWA listed in an app store.
-
-Some other fun use cases:
-
-* Apparently the new digital menu displays in McDonald’s Restaurants (at least in the U.S.) are actually a web app built with Polymer ([source][15]). I don’t know if there’s a Service Worker or not, but it would make sense for there to be.
-
-* Sports scoreboards!? I’m an [independent consultant][16], and someone approached me about potentially using a set of TVs and web apps to build a score-keeping system at an arena. Point is, there are so many cool examples!
-
-The web really is the universal platform!
-
- [### For those who think PWAs are just a Google thing][43]
-
-First off, I’m pretty sure Microsoft, Opera, Firefox, and Samsung folks would want to punch you for that. It [simply isn’t true][24] and increasingly we’re seeing a lot more compatibility efforts between browser vendors.
-
-For example: check out the [Web Platform Tests][25], which are essentially Continuous Integration for web features, run against new releases of major browsers. Some folks will recall that when Apple first claimed they implemented IndexedDB in Safari, the version they shipped was essentially unusable because it had major shortcomings and bugs.
-
-Now, with the WPTs, you can drill into these features (to quite some detail) and see whether a given browser passes or fails. No more claiming "we shipped!" but not actually shipping.
-
- [### What about feature "x" on platform "y" that we need?][44]
-
-It could well be that you have a need that isn’t yet covered by the web platform. In reality, that list is getting shorter and shorter, also... HAVE YOU ASKED?! Despite what it may feel like, browser vendors eagerly want to know what you’re trying to do that you can’t. If there are missing features, be loud, be nice, but from my experience it’s worth making your desires known.
-
-Also, it doesn’t take much to wrap a web view and add hooks into the native OS that your JavaScript can call to do things that aren’t _quite_ possible yet.
-
-But that also brings me to another point, in terms of investing, as the world’s greatest hockey player said:
-
-> Skate to where the puck is going, not where it has been.
->
-> – Wayne Gretzky
-
-Based on what I’ve outlined thus far, it could be more risky to build an entire application for a whole other platform that you ultimately may not need, than to at least exhaust your options and see what you can do with the Web first.
-
-So to line ’em up in terms of PWA support:
-
-* Chrome: yup
-
-* Firefox: yup
-
-* Opera: yup
-
-* Samsung Internet ([the 3rd largest browser, surprise!][17]): yup
-
-* Microsoft: huge public commitment
-
-* Safari: at least implementing Service Worker
-
- [### Ask them to add your feature!][45]
-
-Sure, it may not happen, it may take a long time but _at least_ try. Remember, developers have a lot more influence over platforms than we typically realize. Make. your. voice. heard.
-
- [### Side note about React-Native/Expo][46]
-
-These projects are run by awesome people, and the tech is incredibly impressive. If you’re Facebook and you’re trying to consolidate your development efforts, it makes sense, for the same basic reasons it made sense for them to create their own [VM for running PHP][26]. They have realities to deal with at a scale that most of us will never have to deal with. Personally, I’m not Facebook.
-
-As a side note, I find it interesting that building native apps, and having as many people as possible do that, plays nicely into their advertising competition with Google.
-
-It just so happens that Google is well positioned to capitalize off of people using the Web. Inversely, I’m fairly certain Facebook wouldn’t mind that ad revenue _not_ going to Google. Facebook, seemingly, would much rather _be_ your web than be part of the Web.
-
-Anyway, all that aside, for me it’s also about investing well.
-
-By building a native app you’re volunteering for a 30% app-store tax. Plus, as we covered earlier, odds are that no one wants to go download your app. Also, though it seems incredibly unlikely, I feel compelled to point out that in terms of "openness", Apple’s App Store is very clearly _anything_ but that. Apple could decide one day that they really don’t like how it’s possible to essentially circumvent their normal update/review process when you use Expo. One day they could just decide to reject all React Native apps. I really don’t think they would, because of the uproar it would cause. I’m simply pointing out that it’s _their_ platform and they would have _every_ right to do so.
-
- [### So is it all about investing for your own gain?][47]
-
-So far, I’ve presented all this from kind of a cold, heartless investor perspective: getting the most for your time.
-
-But, that’s not the whole story is it?
-
-Life isn’t all about me. Life isn’t all about us.
-
-I want to invest in platforms that increase opportunities **for others**. Personally, I really hope the next friggin’ Mark Zuckerberg isn’t an ivy-league dude. Wouldn’t it be amazing if instead the next huge success was, I don’t know, perhaps a young woman in Nairobi or something? The thing is, if owning an iPhone is a prerequisite for building apps, it _dramatically_ decreases the odds of something like that happening. I feel like the Web really is the closest thing we have to a level playing field.
-
-**I want to invest in and improve _that_ platform!**
-
-This quote really struck me and has stayed with me when thinking about these things:
-
-> If you’re the kind of person who tends to succeed in what you start,
->
-> changing what you start could be _the most extraordinary thing_ you could do.
->
-> – Anand Giridharadas
-
-Thanks for your valuable attention ❤️. I’ve presented the facts as I see them and I’ve done my best not to "should on you."
-
-Ultimately though, no matter how prepared we are or how much research we’ve done; investing is always a bit of a gamble.
-
-So I guess the only thing left to say is:
-
-> > I’m all in.
-
---------------------------------------------------------------------------------
-
-via: https://joreteg.com/blog/betting-on-the-web
-
-作者:[Joreteg][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://joreteg.com/
-[1]:https://twitter.com/davidbrunelle
-[2]:https://twitter.com/intent/tweet?in_reply_to=905931990444244995
-[3]:https://twitter.com/intent/retweet?tweet_id=905931990444244995
-[4]:https://twitter.com/intent/like?tweet_id=905931990444244995
-[5]:https://twitter.com/davidbrunelle/status/905931990444244995/photo/1
-[6]:https://twitter.com/davidbrunelle
-[7]:https://twitter.com/Starbucks
-[8]:https://t.co/tEUXM8BLgP
-[9]:https://twitter.com/davidbrunelle/status/905931990444244995
-[10]:https://twitter.com/davidbrunelle/status/905931990444244995/photo/1
-[11]:https://support.twitter.com/articles/20175256
-[12]:https://2017.coldfront.co/
-[13]:https://qz.com/253618/most-smartphone-users-download-zero-apps-per-month/
-[14]:http://fortune.com/2016/05/19/app-economy/
-[15]:https://twitter.com/AJStacy06/status/857628546507968512
-[16]:http://consulting.joreteg.com/
-[17]:https://medium.com/samsung-internet-dev/think-you-know-the-top-web-browsers-458a0a070175
-[18]:https://preview.starbucks.com/
-[19]:https://google.github.io/physical-web/
-[20]:https://blogs.windows.com/msedgedev/2016/07/08/the-progress-of-web-apps/
-[21]:https://alistapart.com/article/yes-that-web-project-should-be-a-pwa
-[22]:https://joreteg.com/blog/installing-web-apps-for-real
-[23]:https://webkit.org/status/#specification-service-workers
-[24]:https://jakearchibald.github.io/isserviceworkerready/
-[25]:http://wpt.fyi/
-[26]:http://hhvm.com/
-[27]:https://joreteg.com/blog/betting-on-the-web
-[28]:https://joreteg.com/blog/betting-on-the-web#quotso-what-whats-your-pointquot
-[29]:https://joreteg.com/blog/betting-on-the-web#with-all-investing-comes
-[30]:https://joreteg.com/blog/betting-on-the-web#are-we-building-for-the-right-platform
-[31]:https://joreteg.com/blog/betting-on-the-web#are-we-building-the-right-software
-[32]:https://joreteg.com/blog/betting-on-the-web#should-we-be-building-apps-at-all
-[33]:https://joreteg.com/blog/betting-on-the-web#if-apps-are-so-great-why-are-we-so-quotapped-outquot
-[34]:https://joreteg.com/blog/betting-on-the-web#what-sucks-about-apps
-[35]:https://joreteg.com/blog/betting-on-the-web#enter-pwas
-[36]:https://joreteg.com/blog/betting-on-the-web#quick-pwa-primer
-[37]:https://joreteg.com/blog/betting-on-the-web#lets-talk-internet-of-things
-[38]:https://joreteg.com/blog/betting-on-the-web#forward-looking-statements
-[39]:https://joreteg.com/blog/betting-on-the-web#1-pwa-only-startups
-[40]:https://joreteg.com/blog/betting-on-the-web#2-pwas-listed-in-app-stores
-[41]:https://joreteg.com/blog/betting-on-the-web#3-pwas-on-ios
-[42]:https://joreteg.com/blog/betting-on-the-web#4-pwas-everywhere
-[43]:https://joreteg.com/blog/betting-on-the-web#for-those-who-think-pwas-are-just-a-google-thing
-[44]:https://joreteg.com/blog/betting-on-the-web#what-about-feature-quotxquot-on-platform-quotyquot-that-we-need
-[45]:https://joreteg.com/blog/betting-on-the-web#ask-them-add-your-feature
-[46]:https://joreteg.com/blog/betting-on-the-web#side-note-about-react-nativeexpo
-[47]:https://joreteg.com/blog/betting-on-the-web#so-is-it-all-about-investing-for-your-own-gain
diff --git a/sources/talk/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md b/sources/talk/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
deleted file mode 100644
index 9eee39888a..0000000000
--- a/sources/talk/20171129 Inside AGL Familiar Open Source Components Ease Learning Curve.md
+++ /dev/null
@@ -1,70 +0,0 @@
-Inside AGL: Familiar Open Source Components Ease Learning Curve
-============================================================
-
-
-Konsulko’s Matt Porter (pictured) and Scott Murray ran through the major components of the AGL’s Unified Code Base at Embedded Linux Conference Europe.[The Linux Foundation][1]
-
-Among the sessions at the recent [Embedded Linux Conference Europe (ELCE)][5] -- 57 of which are [available on YouTube][2] -- are several reports on the Linux Foundation’s [Automotive Grade Linux project][6]. These include [an overview from AGL Community Manager Walt Miner][3] showing how AGL’s Unified Code Base (UCB) Linux distribution is expanding from in-vehicle infotainment (IVI) to ADAS. There was even a presentation on using AGL to build a remote-controlled robot (see links below).
-
-Here we look at the “State of AGL: Plumbing and Services,” from Konsulko Group’s CTO Matt Porter and senior staff software engineer Scott Murray. Porter and Murray ran through the components of the current [UCB 4.0 “Daring Dab”][7] and detailed major upstream components and API bindings, many of which will appear in the Electric Eel release due in Jan. 2018.
-
-Despite the automotive focus of the AGL stack, most of the components are already familiar to Linux developers. “It looks a lot like a desktop distro,” Porter told the ELCE attendees in Prague. “All these familiar friends.”
-
-Some of those friends include the underlying Yocto Project “Poky” with its OpenEmbedded foundation, which is topped with layers like oe-core, meta-openembedded, and meta-networking. Other components are based on familiar open source software like systemd (application control), Wayland and Weston (graphics), BlueZ (Bluetooth), oFono (telephony), PulseAudio and ALSA (audio), gpsd (location), ConnMan (Internet), and wpa_supplicant (WiFi), among others.
-
-UCB’s application framework is controlled through a WebSocket interface to the API bindings, thereby enabling apps to talk to each other. There’s also a new W3C widget for an alternative application packaging scheme, as well as support for SmartDeviceLink, a technology developed at Ford that automatically syncs up IVI systems with mobile phones.
-
-AGL UCB’s Wayland/Weston graphics layer is augmented with an “IVI shell” that works with the layer manager. “One of the unique requirements of automotive is the ability to separate aspects of the application in the layers,” said Porter. “For example, in a navigation app, the graphics rendering for the map may be completely different than the engine used for the UI decorations. One engine layers to a surface in Wayland to expose the map while the decorations and controls are handled by another layer.”
-
-For audio, ALSA and PulseAudio are joined by GENIVI AudioManager, which works together with PulseAudio. “We use AudioManager for policy driven audio routing,” explained Porter. “It allows you to write a very complex XML-based policy using a rules engine with audio routing.”
-
-UCB leans primarily on the well-known [Smack Project][8] for security, and also incorporates Tizen’s [Cynara][9] safe policy-checker service. A Cynara-enabled D-Bus daemon is used to control Cynara security policies.
-
-Porter and Murray went on to explain AGL’s API binding mechanism, which according to Murray “abstracts the UI from its back-end logic so you can replace it with your own custom UI.” You can re-use application logic with different UI implementations, such as moving from the default Qt to HTML5 or a native toolkit. Application binding requests and responses use JSON via HTTP or WebSocket. Binding calls can be made from applications or from other bindings, thereby enabling “stacking” of bindings.
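As a rough sketch of what a JSON binding call over a WebSocket can look like, here is a small TypeScript model. The message shape and all names below are purely illustrative, not AGL’s actual wire protocol:

```typescript
// Hypothetical shape of a binding request: which binding (api), which
// operation (verb), and a bag of JSON arguments.
interface BindingRequest {
  api: string;                  // e.g. "mediaplayer"
  verb: string;                 // e.g. "playback"
  args: Record<string, unknown>;
}

// Encode the request as the JSON text that would travel over the WebSocket.
function encodeRequest(req: BindingRequest): string {
  return JSON.stringify([`${req.api}/${req.verb}`, req.args]);
}

// Decode on the receiving side. Because requests are plain JSON, bindings
// can issue calls to each other the same way apps do, which is what makes
// the "stacking" of bindings possible.
function decodeRequest(wire: string): BindingRequest {
  const [route, args] = JSON.parse(wire) as [string, Record<string, unknown>];
  const slash = route.indexOf("/");
  return { api: route.slice(0, slash), verb: route.slice(slash + 1), args };
}
```

In the real framework the encoded payload would simply be written to an open WebSocket; the point is that both requests and responses are plain JSON, so any UI (Qt, HTML5, or native) can drive the same back-end logic.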
-
-Porter and Murray concluded with a detailed description of each binding. These include upstream bindings currently in various stages of development. The first is a Master binding that manages the application lifecycle, including tasks such as install, uninstall, start, and terminate. Other upstream bindings include the WiFi binding and the BlueZ-based Bluetooth binding, which in the future will be upgraded with Bluetooth [PBAP][10] (Phone Book Access Profile). PBAP can connect with contacts databases on your phone, and links to the Telephony binding to replicate caller ID.
-
-The oFono-based Telephony binding also makes calls to the Bluetooth binding for Bluetooth Hands-Free-Profile (HFP) support. In the future, Telephony binding will add support for sent dial tones, call waiting, call forwarding, and voice modem support.
-
-Support for AM/FM radio is not well developed in the Linux world, so for its Radio binding, AGL started by supporting [RTL-SDR][11] code for low-end radio dongles. Future plans call for supporting specific automotive tuner devices.
-
-The MediaPlayer binding is in very early development, and is currently limited to GStreamer based audio playback and control. Future plans call for adding playlist controls, as well as one of the most actively sought features among manufacturers: video playback support.
-
-Location bindings include the [gpsd][12] based GPS binding, as well as GeoClue and GeoFence. GeoClue, which is built around the [GeoClue][13] D-Bus geolocation service, “overlaps a little with GPS, which uses the same location data,” says Porter. GeoClue also gathers location data from WiFi AP databases, 3G/4G tower info, and the GeoIP database — sources that are useful “if you’re inside or don’t have a good fix,” he added.
-
-GeoFence depends on the GPS binding, as well. It lets you establish a bounding box, and then track ingress and egress events. GeoFence also tracks “dwell” status, which is determined by arriving at home and staying for 10 minutes. “It then triggers some behavior based on a timeout,” said Porter. Future plans call for a customizable dwell transition time.
-
-While most of these Upstream bindings are well established, there are also Work in Progress (WIP) bindings that are still in the early stages, including CAN, HomeScreen, and WindowManager bindings. Farther out, there are plans to add speech recognition and text-to-speech bindings, as well as a WWAN modem binding.
-
-In conclusion, Porter noted: “Like any open source project, we desperately need more developers.” The Automotive Grade Linux project may seem peripheral to some developers, but it offers a nice mix of familiarity -- grounded in many widely used open source projects -- along with the excitement of expanding into a new and potentially game-changing computing form factor: your automobile. AGL has also demonstrated success -- you can now [check out AGL in action in the 2018 Toyota Camry][14], followed in the coming months by most Toyota and Lexus vehicles sold in North America.
-
-Watch the complete video below:
-
-[Video][15]
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/event/elce/2017/11/inside-agl-familiar-open-source-components-ease-learning-curve
-
-作者:[ ERIC BROWN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/ericstephenbrown
-[1]:https://www.linux.com/licenses/category/linux-foundation
-[2]:https://www.youtube.com/playlist?list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
-[3]:https://www.youtube.com/watch?v=kfwEmjSjAzM&index=14&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk
-[4]:https://www.linux.com/files/images/porter-elce-aglpng
-[5]:http://events.linuxfoundation.org/events/embedded-linux-conference-europe
-[6]:https://www.automotivelinux.org/
-[7]:https://www.linux.com/blog/2017/8/automotive-grade-linux-moves-ucb-40-launches-virtualization-workgroup
-[8]:http://schaufler-ca.com/
-[9]:https://wiki.tizen.org/Security:Cynara
-[10]:https://wiki.maemo.org/Bluetooth_PBAP
-[11]:https://www.rtl-sdr.com/about-rtl-sdr/
-[12]:http://www.catb.org/gpsd/
-[13]:https://www.freedesktop.org/wiki/Software/GeoClue/
-[14]:https://www.linux.com/blog/event/automotive-linux-summit/2017/6/linux-rolls-out-toyota-and-lexus-vehicles
-[15]:https://youtu.be/RgI-g5h1t8I
diff --git a/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md b/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md
deleted file mode 100644
index 09223ccb21..0000000000
--- a/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md
+++ /dev/null
@@ -1,116 +0,0 @@
-18 Cyber-Security Trends Organizations Need to Brace for in 2018
-======
-
-
-Enterprises, end users and governments faced no shortage of security challenges in 2017. Some of those same challenges will continue into 2018, and there will be new problems to solve as well. Ransomware has been a concern for several years and will likely continue to be a big issue in 2018. The new year is also going to bring the formal introduction of the European Union's General Data Protection Regulation (GDPR), which will impact how organizations manage private information. A key trend that emerged in 2017 was an increasing use of artificial intelligence (AI) to help solve cyber-security challenges, and that's a trend that will continue to accelerate in 2018. What else will the new year bring? In this slide show, eWEEK presents 18 security predictions for the year ahead from 18 security experts.
-
-
-### Africa Emerges as New Area for Threat Actors and Targets
-
-"In 2018, Africa will emerge as a new focus area for cyber-threats--both targeting organizations based there and attacks originating from the continent. With its growth in technology adoption and operations and rising economy, and its increasing number of local resident threat actors, Africa has the largest potential for net-new impactful cyber events." -Steve Stone, IBM X-Force IRIS
-
-
-### AI vs. AI
-
-"2018 will see a rise in AI-based attacks as cyber-criminals begin using machine learning to spoof human behaviors. The cyber-security industry will need to tune their own AI tools to better combat the new threats. The cat and mouse game of cybercrime and security innovation will rapidly escalate to include AI-enabled tools on both sides." --Caleb Barlow, vice president of Threat Intelligence, IBM Security
-
-
-### Cyber-Security as a Growth Driver
-
-"CEOs view cyber-security as one of their top risks, but many also see it as an opportunity to innovate and find new ways to generate revenue. In 2018 and beyond, effective cyber-security measures will support companies that are transforming their security, privacy and continuity controls in an effort to grow their businesses." -Greg Bell, KMPG's Global Cyber Security Practice co-leader
-
-
-### GDPR Means Good Enough Isn't Good Enough
-
-"Too many professionals share a 'good enough' philosophy that they've adopted from their consumer mindset that they can simply upgrade and patch to comply with the latest security and compliance best practices or regulations. In 2018, with the upcoming enforcement of the EU GDPR 'respond fast' rules, organizations will quickly come to terms, and face fines, with why 'good enough' is not 'good' anymore." -Kris Lovejoy, CEO of BluVector
-
-
-### Consumerization of Cyber-Security
-
-"2018 will mark the debut of the 'consumerization of cyber-security.' This means consumers will be offered a unified, comprehensive suite of security offerings, including, in addition to antivirus and spyware protection, credit and identify abuse monitoring and identity restoration. This is a big step forward compared to what is available in one package today. McAfee Total Protection, which safeguards consumer identities in addition to providing virus and malware protection, is an early, simplified example of this. Consumers want to feel more secure." -Don Dixon, co-founder and managing director, Trident Capital Cybersecurity
-
-
-### Ransomware Will Continue
-
-"Ransomware will continue to plague organizations with 'old' attacks 'refreshed' and reused. The threat of ransomware will continue into 2018. This year we've seen ransomware wreak havoc across the globe with both WannaCry and NotPetya hitting the headlines. Threats of this type and on this scale will be a common feature of the next 12 months." -Andrew Avanessian, chief operating officer at Avecto
-
-
-### More Encryption Will Be Needed
-
-"It will become increasingly clear in the industry that HTTPS does not offer the robust security and end-to-end encryption as is commonly believed, and there will be a push to encrypt data before it is sent over HTTPS." -Darren Guccione, CEO and co-founder, Keeper Security
-
-
-### Denial of Service Will Become Financially Lucrative
-
-"Denial of service will become as financially lucrative as identity theft. Using stolen identities for new account fraud has been the major revenue driver behind breaches. However, in recent years ransomware attacks have caused as much if not more damage, as increased reliance on distributed applications and cloud services results in massive business damage when information, applications or systems are held hostage by attackers." -John Pescatore. SANS' director of emerging security trends
-
-
-### Goodbye Social Security Number
-
-"2018 is the turning point for the retirement of the Social Security number. At this point, the vast majority of SSNs are compromised, and we can no longer rely on them--nor should we have previously." -Michael Sutton, CISO, Zscaler
-
-
-### Post-Quantum Cyber-Security Discussion Warms Up the Boardroom
-
-"The uncertainty of cyber-security in a post-quantum world is percolating some circles, but 2018 is the year the discussions gain momentum in the top levels of business. As security experts grapple with preparing for a post-quantum world, top executives will begin to ask what can be done to ensure all of our connected 'things' remain secure." -Malte Pollmann, CEO of Utimaco
-
-
-### Market Consolidation Is Coming
-
-"There will be accelerated consolidation of cyber niche markets flooded with too many 'me-too' companies offering extremely similar products and services. As an example, authentication, end-point security and threat intelligence now boast a total of more than 25 competitors. Ultimately, only three to six companies in each niche can survive." -Mike Janke, co-founder of DataTribe
-
-
-### Health Care Will Be a Lucrative Target
-
-"Health records are highly valued on the black market because they are saturated with Personally Identifiable Information (PII). Health care institutions will continue to be a target as they have tighter allocations for security in their IT budgets. Also, medical devices are hard to update and often run on older operating system versions." -Larry Cashdollar, senior engineer, Security Intelligence Response Team, Akamai
-
-
-### 2018: The Year of Simple Multifactor Authentication for SMBs
-
-"Unfortunately, effective multifactor authentication (MFA) solutions have remained largely out of reach for the average small- and medium-sized business. Though enterprise multifactor technology is quite mature, it often required complex on-premises solutions and expensive hardware tokens that most small businesses couldn't afford or manage. However, the growth of SaaS and smartphones has introduced new multifactor solutions that are inexpensive and easy for small businesses to use. Next year, many SMBs will adopt these new MFA solutions to secure their more privileged accounts and users. 2018 will be the year of MFA for SMBs." -Corey Nachreiner, CTO at WatchGuard Technologies
-
-
-### Automation Will Improve the IT Skills Gap
-
-"The security skills gap is widening every year, with no signs of slowing down. To combat the skills gap and assist in the growing adoption of advanced analytics, automation will become an even higher priority for CISOs." -Haiyan Song, senior vice president of Security Markets at Splunk
-
-
-### Industrial Security Gets Overdue Attention
-
-"The high-profile attacks of 2017 acted as a wake-up call, and many plant managers now worry that they could be next. Plant manufacturers themselves will offer enhanced security. Third-party companies going on their own will stay in a niche market. The industrial security manufacturers themselves will drive a cooperation with the security industry to provide security themselves. This is because there is an awareness thing going on and impending government scrutiny. This is different from what happened in the rest of IT/IoT where security vendors just go to market by themselves as a layer on top of IT (i.e.: an antivirus on top of Windows)." -Renaud Deraison, co-founder and CTO, Tenable
-
-
-### Cryptocurrencies Become the New Playground for Identity Thieves
-
-"The rising value of cryptocurrencies will lead to greater attention from hackers and bad actors. Next year we'll see more fraud, hacks and money laundering take place across the top cryptocurrency marketplaces. This will lead to a greater focus on identity verification and, ultimately, will result in legislation focused on trader identity." -Stephen Maloney, executive vice president of Business Development & Strategy, Acuant
-
-
-### GDPR Compliance Will Be a Challenge
-
-"In 2018, three quarters of companies or apps will be ruled out of compliance with GDPR and at least one major corporation will be fined to the highest extent in 2018 to set an example for others. Most companies are preparing internally by performing more security assessments and recruiting a mix of security professionals with privacy expertise and lawyers, but with the deadline quickly approaching, it's clear the bulk of businesses are woefully behind and may not be able to avoid these consequences." -Sanjay Beri, founder and CEO, Netskope
-
-
-### Data Security Solidifies Its Spot in the IT Security Stack
-
-"Many businesses are stuck in the mindset that security of networks, servers and applications is sufficient to protect their data. However, the barrage of breaches in 2017 highlights a clear disconnect between what organizations think is working and what actually works. In 2018, we expect more businesses to implement data security solutions that complement their existing network security deployments." -Jim Varner, CEO of SecurityFirst
-
-
-### [Eight Cyber-Security Vendors Raise New Funding in November 2017][1]
-
-Though the pace of funding slowed in November, multiple firms raised new venture capital to develop and improve their cyber-security products.
-
---------------------------------------------------------------------------------
-
-via: http://voip.eweek.com/security/18-cyber-security-trends-organizations-need-to-brace-for-in-2018
-
-Author: [Sean Michael Kerner][a]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
-
-[a]:http://voip.eweek.com/Authors/sean-michael-kerner
-[1]:http://voip.eweek.com/security/eight-cyber-security-vendors-raise-new-funding-in-november-2017
diff --git a/sources/talk/20180104 How allowing myself to be vulnerable made me a better leader.md b/sources/talk/20180104 How allowing myself to be vulnerable made me a better leader.md
deleted file mode 100644
index 1cd6a22162..0000000000
--- a/sources/talk/20180104 How allowing myself to be vulnerable made me a better leader.md
+++ /dev/null
@@ -1,64 +0,0 @@
-How allowing myself to be vulnerable made me a better leader
-======
-
-
-Conventional wisdom suggests that leadership is strong, bold, decisive. In my experience, leadership does feel like that some days.
-
-Some days leadership feels more vulnerable. Doubts creep in: Am I making good decisions? Am I the right person for this job? Am I focusing on the most important things?
-
-The trick with these moments is to talk about them. When we keep them secret, our insecurity only grows. Being an open leader means pushing our vulnerability into the spotlight. Only then can we seek comfort from others who have experienced similar moments.
-
-To demonstrate how this works, I'll share a story.
-
-### A nagging question
-
-If you work in the tech industry, you'll note an obvious focus on creating [an organization that's inclusive][1]--a place for diversity to flourish. Long story short: I thought I was a "diversity hire," someone hired because of my gender, not my ability. Even after more than 15 years in the industry, with all of the focus on diversity in hiring, that possibility got under my skin. Along came the doubts: Was I hired because I was the best person for the job--or because I was a woman? After years of knowing I was hired because I was the best person, the fact that I was female suddenly seemed like it was more interesting to potential employers.
-
-I rationalized that it didn't matter why I was hired; I knew I was the best person for the job and would prove it. I worked hard, delivered results, made mistakes, learned, and did everything an employer would want from an employee.
-
-And yet the "diversity hire" question nagged. I couldn't shake it. I avoided the subject like the plague and realized that not talking about it was a signal that I had no choice but to deal with it. If I continued to avoid the subject, it was going to affect my work. And that's the last thing I wanted.
-
-### Speaking up
-
-Talking about diversity and inclusion can be awkward. So many factors enter into the decision to open up:
-
- * Can we trust our co-workers with a vulnerable moment?
- * Can a leader of a team be too vulnerable?
- * What if I overstep? Do I damage my career?
-
-
-
-In my case, I ended up at a lunch Q&A session with an executive who's a leader in many areas of the organization--especially candid conversations. A coworker asked the "Was I a diversity hire?" question. He stopped and spent a significant amount of time talking about this question to a room full of women. I'm not going to recount the entire discussion here; I will share the most salient point: If you know you're qualified for the job and you know the interview went well, don't doubt the outcome. Anyone who questions whether you're a diversity hire has their own questions to answer. You don't have to go on their journey.
-
-Mic drop.
-
-I wish I could say that I stopped thinking about this topic. I didn't. The question lingered: What if I am the exception to the rule? What if I was the one diversity hire? I realized that I couldn't avoid the nagging question.
-
-A few weeks later I had a one-on-one with the executive. At the end of the conversation, I mentioned that, as a woman, I appreciate his candid conversations about diversity and inclusion. It's easier to talk about these topics when a recognized leader is willing to have the conversation. I also returned to the "Was I a diversity hire?" question. He didn't hesitate: We talked. At the end of the conversation, I realized that I was hungry to talk about these things that require bravery; I only needed a nudge and someone who cared enough to talk and listen.
-
-Because I had the courage to be vulnerable--to go there with my question--I had the burden of my secret question lifted. Feeling physically lighter, I started to have constructive conversations around the questions of implicit bias, what we can do to be inclusive, and what diversity looks like. As I've learned, every person has a different answer when I ask the diversity question. I wouldn't have gotten to have all of these amazing conversations if I'd stayed stuck with my secret.
-
-I had courage to talk, and I hope you will too.
-
-Let's talk about these things that hold us back in terms of our ability to lead so we can be more open leaders in every sense of the phrase. Has allowing yourself to be vulnerable made you a better leader?
-
-### About The Author
-
-Angela Robertson works as a senior manager at Microsoft. She works with an amazing team of people passionate about community contributions and engaged in open organizations. Before joining Microsoft, Angela worked at Red Hat.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader
-
-Author: [Angela Robertson][a]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
-
-[a]:https://opensource.com/users/arobertson98
-[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity
diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md
deleted file mode 100644
index 1b67935509..0000000000
--- a/sources/talk/20180117 How technology changes the rules for doing agile.md
+++ /dev/null
@@ -1,95 +0,0 @@
-How technology changes the rules for doing agile
-======
-
-
-
-More companies are trying agile and [DevOps][1] for a clear reason: Businesses want more speed and more experiments - which lead to innovations and competitive advantage. DevOps helps you gain that speed. But doing DevOps in a small group or startup and doing it at scale are two very different things. Any of us who've worked in a cross-functional group of 10 people, come up with a great solution to a problem, and then tried to apply the same patterns across a team of 100 people know the truth: It often doesn't work. This path has been so hard, in fact, that it has been easy for IT leaders to put off agile methodology for another year.
-
-But that time is over. If you've tried and stalled, it's time to jump back in.
-
-Until now, DevOps required customized answers for many organizations - lots of tweaks and elbow grease. But today, [Linux containers][2] and Kubernetes are fueling standardization of DevOps tools and processes. That standardization will only accelerate. The technology we are using to practice the DevOps way of working has finally caught up with our desire to move faster.
-
-Linux containers and [Kubernetes][3] are changing the way teams interact. Moreover, on the Kubernetes platform, you can run any application you now run on Linux. What does that mean? You can run a tremendous number of enterprise apps (and handle even previously vexing coordination issues between Windows and Linux.) Finally, containers and Kubernetes will handle almost all of what you'll run tomorrow. They're being future-proofed to handle machine learning, AI, and analytics workloads - the next wave of problem-solving tools.
-
-**[ See our related article, [4 container adoption patterns: What you need to know][4]. ]**
-
-Think about machine learning, for example. Today, people still find the patterns in much of an enterprise's data. When machines find the patterns (think machine learning), your people will be able to act on them faster. With the addition of AI, machines can not only find but also act on patterns. Today, with people doing everything, three weeks is an aggressive software development sprint cycle. With AI, machines can change code multiple times per second. Startups will use that capability - to disrupt you.
-
-Consider how fast you have to be to compete. If you can't make a leap of faith now to DevOps and a one-week cycle, think of what will happen when that startup points its AI-fueled process at you. It's time to move to the DevOps way of working now, or get left behind as your competitors adopt it.
-
-### How are containers changing how teams work?
-
-DevOps has frustrated many groups trying to scale this way of working to a bigger group. Many IT (and business) people are suspicious of agile: They've heard it all before - languages, frameworks, and now models (like DevOps), all promising to revolutionize application development and IT process.
-
-**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][5]. ]**
-
-It's not easy to "sell" quick development sprints to your stakeholders, either. Imagine if you bought a house this way. You're not going to pay a fixed amount to your builder anymore. Instead, you get something like: "We'll pour the foundation in 4 weeks and it will cost x. Then we'll frame. Then we'll do electrical. But we only know the timing on the foundation right now." People are used to buying homes with a price up front and a schedule.
-
-The challenge is that building software is not like building a house. The same builder builds thousands of houses that are all the same. Software projects are never the same. This is your first hurdle to get past.
-
-Dev and operations teams really do work differently: I know because I've worked on both sides. We incent them differently. Developers are rewarded for changing and creating, while operations pros are rewarded for reducing cost and ensuring security. We put them in different groups and generally minimize interaction. And the roles typically attract technical people who think quite differently. This situation sets IT up to fail. You have to be willing to break down these barriers.
-
-Think of what has traditionally happened. You throw pieces over the wall, then the business throws requirements over the wall because they are operating in "house-buying" mode: "We'll see you in 9 months." Developers build to those requirements and make changes as needed for technical constraints. Then they throw it over the wall to operations to "figure out how to run this." Operations then works diligently to make a slew of changes to align the software with their infrastructure. And what's the end result?
-
-More often than not, the end result isn't even recognizable to the business when they see it in its final glory. We've watched this pattern play out time and time again in our industry for the better part of two decades. It's time for a change.
-
-It's Linux containers that truly crack the problem - because containers close the gap between development and operations. They allow both teams to understand and design to all of the critical requirements, but still uniquely fulfill their team's responsibilities. Basically, we take out the telephone game between developers and operations. With containers, we can have smaller operations teams, even teams responsible for millions of applications, but development teams that can change software as quickly as needed. (In larger organizations, the desired pace may be faster than humans can respond on the operations side.)
-
-With containers, you're separating what is delivered from where it runs. Your operations teams are responsible for the host that will run the containers and the security footprint, and that's all. What does this mean?
-
-First, it means you can get going on DevOps now, with the team you have. That's right. Keep teams focused on the expertise they already have: With containers, just teach them the bare minimum of the required integration dependencies.
-
-If you try and retrain everyone, no one will be that good at anything. Containers let teams interact, but alongside a strong boundary, built around each team's strengths. Your devs know what needs to be consumed, but don't need to know how to make it run at scale. Ops teams know the core infrastructure, but don't need to know the minutiae of the app. Also, Ops teams can update apps to address new security implications, before you become the next trending data breach story.
-
-Teaching a large IT organization of, say, 30,000 people both ops and dev skills? It would take you a decade. You don't have that kind of time.
-
-When people talk about "building new, cloud-native apps will get us out of this problem," think critically. You can build cloud-native apps in 10-person teams, but that doesn't scale for a Fortune 1000 company. You can't just build new microservices one by one until you're somehow not reliant on your existing team: You'll end up with a siloed organization. It's an alluring idea, but you can't count on these apps to redefine your business. I haven't met a company that could fund parallel development at this scale and succeed. IT budgets are already constrained; doubling or tripling them for an extended period of time just isn't realistic.
-
-### When the remarkable happens: Hello, velocity
-
-Linux containers were made to scale. Once you start to do so, [orchestration tools like Kubernetes come into play][6] - because you'll need to run thousands of containers. Applications won't consist of just a single container, they will depend on many different pieces, all running on containers, all running as a unit. If they don't, your apps won't run well in production.
-
-Think of how many small gears and levers come together to run your business: The same is true for any application. Developers are responsible for all the pulleys and levers in the application. (You could have an integration nightmare if developers don't own those pieces.) At the same time, your operations team is responsible for all the pulleys and levers that make up your infrastructure, whether on-premises or in the cloud. With Kubernetes as an abstraction, your operations team can give the application the fuel it needs to run - without being experts on all those pieces.
-
-Developers get to experiment. The operations team keeps infrastructure secure and reliable. This combination opens up the business to take small risks that lead to innovation. Instead of having to make only a couple of bet-the-farm size bets, real experimentation happens inside the company, incrementally and quickly.
-
-In my experience, this is where the remarkable happens inside organizations: Because people say "How do we change planning to actually take advantage of this ability to experiment?" It forces agile planning.
-
-For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video][7] in which John Rzeszotarski, director of Continuous Delivery and Feedback at KeyBank, explains the change.) Similarly, Macquarie Bank uses DevOps and containers to put something in production every day.
-
-Once you push software every day, it changes every aspect of how you plan - and [accelerates the rate of change to the business][8]. "An idea can get to a customer in a day," says Luis Uguina, CDO of Macquarie's banking and financial services group. (See this [case study][9] on Red Hat's work with Macquarie Bank).
-
-### The right time to build something great
-
-The Macquarie example demonstrates the power of velocity. How would that change your approach to your business? Remember, Macquarie is not a startup. This is the type of disruptive power that CIOs face, not only from new market entrants but also from established peers.
-
-The developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals within huge companies (even those not in the hottest industries or geographies) can have great impact. Macquarie uses this dynamic as a recruiting tool, promising developers that all new hires will push something live within the first week.
-
-At the same time, in this day of cloud-based compute and storage power, we have more infrastructure available than ever. That's fortunate, considering the [leaps that machine learning and AI tools will soon enable][10].
-
-This all adds up to this being the right time to build something great. Given the pace of innovation in the market, you need to keep building great things to keep customers loyal. So if you've been waiting to place your bet on DevOps, now is the right time. Containers and Kubernetes have changed the rules - in your favor.
-
-**Want more wisdom like this, IT leaders? [Sign up for our weekly email newsletter][11].**
-
---------------------------------------------------------------------------------
-
-via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
-
-Author: [Matt Hicks][a]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
-
-[a]:https://enterprisersproject.com/user/matt-hicks
-[1]:https://enterprisersproject.com/tags/devops
-[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA
-[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
-[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ
-[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
-[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
-[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA
-[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation
-[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA
-[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch
-[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
diff --git a/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md b/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md
deleted file mode 100644
index 0cb3755ca1..0000000000
--- a/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md
+++ /dev/null
@@ -1,255 +0,0 @@
-A review of Virtual Labs virtualization solutions for MOOCs – WebLog Pro Olivier Berger
-======
-### 1 Introduction
-
-This is a memo that tries to capture some of the experience gained in the [FLIRT project][3] on the topic of Virtual Labs for MOOCs (Massive Open Online Courses).
-
-In this memo, we try to draw an overview of some benefits and concerns with existing approaches to using virtualization techniques for running Virtual Labs, as distributions of tools made available to distant learners.
-
-We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.
-
-We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of modern Web browsers.
-
-Disclaimer: This memo doesn’t intend to point to extensive literature on the subject, so part of our analysis may be biased by our particular context.
-
-### 2 Context : MOOCs
-
-Many MOOCs (Massive Open Online Courses) include a kind of “virtual laboratory” for learners to experiment with tools, as a way to apply the knowledge, practice, and be more active in the learning process. In quite a few (technical) disciplines, this can consist of using a set of standard applications in a professional domain, which represent typical tools that would be used in real-life scenarios.
-
-Our main perspective will be that of a MOOC editor and of MOOC production teams which want to make “virtual labs” available for MOOC participants.
-
-Such a “virtual lab” would typically contain existing applications, pre-installed and configured, and loaded with scenario data needed to perform a lab.
-
-The main constraint here is that such labs would typically be fabricated with limited software development expertise and funds[1][4]. Thus we consider here only the assembly of existing “normal” applications and discard the option of developing novel “serious games” and simulator applications for such MOOCs.
-
-#### 2.1 The FLIRT project
-
-The [FLIRT project][5] groups a consortium of 19 partners in Industry, SMEs and Academia to work on a collection of MOOCs and SPOCs for professional development in Networks and Telecommunications. Led by Institut Mines Telecom, it benefits from the funding support of the French “Investissements d’avenir” programme.
-
-As part of the FLIRT roadmap, we’re leading an “innovation task” focused on Virtual Labs in the context of the Cloud. This memo was produced as part of this task.
-
-#### 2.2 Some challenges in virtual labs design for distant learning
-
-Virtual Labs used in distance learning contexts require learners to use software applications autonomously, running either on a personal or a professional computer. In general, the technical skills of participants may be diverse, as may the quality (bandwidth, QoS, filtering, limitations: firewalling) of the hardware and networks they use at home or at work. It’s thus very optimistic to seek a one-size-fits-all strategy.
-
-Most of the time there’s a learning curve in getting familiar with the tools which students will have to use, which presents many challenges for beginners to overcome. These tools may not be suited for beginners, but they will still be selected by the trainers as they’re representative of the professional context being taught.
-
-In theory, this usability challenge should be addressed by devising an adapted pedagogical approach, especially in a context of distance learning, so that learners can practice the labs on their own, without the presence of a tutor or professor. Or some particular prerequisite skills could be required (“please follow System Administration 101 before applying to this course”).
-
-Unfortunately, in many cases instructors simply translate lab resources previously devised for in-person learning into a distance learning scenario. This leaves learners facing many challenges to overcome. The only support resource is often a regular forum on the MOOC’s LMS (Learning Management System).
-
-My intuition[2][6] is that developing ad-hoc simulators for distance education would probably be more efficient and easier for learners to use. But that would require too high an investment for the designers of the courses.
-
-In the context of MOOCs, which are mainly free to participate in, not much investment is possible in devising ad-hoc lab applications, and instructors have to rely on existing applications, tools and scenarios to deliver a cheap enough environment. Furthermore, technical or licensing constraints[3][7] may lead to selecting lab tools which may not be easy to learn, but have the great advantage of being freely redistributable[4][8].
-
-### 3 Virtual Machines for Virtual Labs
-
-The learners who will try unattended learning in such typical virtual labs will face difficulties in making specialized applications run. They must overcome the technical details of downloading, installing and configuring programs, before even trying to perform a particular pedagogical scenario linked to the matter studied.
-
-To diminish these difficulties, one traditional approach for implementing labs in MOOCs has been to assemble a Virtual Machine image in advance. This pre-built image can then be downloaded and run with a virtual machine simulator (like [VirtualBox][9][5][10]).
-
-The pre-loaded VM will already have everything ready for use, so that the learners don’t have to install anything on their machines.
-
-An alternative is to let learners download and install the needed software tools themselves, but this leads to so many compatibility issues or technical skill prerequisites, that this is often not advised, and mentioned only as a fallback option.
-
-#### 3.1 Downloading and installation issues
-
-Experience shows[2][11] that such virtual machines also bring some issues. Even if installation of every piece of software is no longer required, learners still need to be able to run the VM simulator on a wide range of diverse hardware, OSes and configurations. Even managing to download the VMs causes many issues (lack of admin privileges, image size vs. download speed, memory or CPU load, disk space, screen configurations, firewall filtering, keyboard layout, etc.).
-
-These problems aren’t generally faced by the majority of learners, but the impacted minority is not marginal either, and they generally will produce a lot of support requests for the MOOC team (usually in the forums), which needs to be anticipated by the community managers.
-
-The use of VMs is no showstopper for most, but it can be a serious problem for a minority of learners, and is thus no silver bullet.
-
-Some general usability issues may also emerge if users aren’t used to the look and feel of the enclosed desktop. For instance, the VM may consist of a GNU/Linux desktop, whereas users would use a Windows or Mac OS system.
-
-#### 3.2 Fabrication issues for the VM images
-
-On the MOOC team’s side, the fabrication of a lightweight, fast, tested, license-free and easy to use VM image isn’t necessarily easy.
-
-Software configurations tend to rot as time passes, and maintenance may not be easy when evolutions in later MOOC editions require maintaining the virtual lab scenarios years later.
-
-Ideally, this would require adopting an “industrial” process in building (and testing) the lab VMs, but this requires quite an expertise (system administration, packaging, etc.) that may or may not have been anticipated at the time of building the MOOC (unlike video editing competence, for instance).
-
-Our experiment with the [Vagrant][12] technology [[0][13]] and Debian packaging was interesting in this respect, as it allowed us to use a well managed “script” to precisely control the build of a minimal VM image.
-
-### 4 Virtual Labs as a Service
-
-To overcome the difficulties in downloading and running Virtual Machines on one’s local computer, we have started exploring the possibility to run these applications in a kind of Software as a Service (SaaS) context, “on the cloud”.
-
-But not all applications typically used in MOOC labs are already available for remote execution on the cloud (unless the course deals precisely with managing email in GMail).
-
-We have then studied the option to use such an approach not for a single application, but for a whole virtual “desktop” which would be available on the cloud.
-
-#### 4.1 IaaS deployments
-
-A way to achieve this goal is to deploy Virtual Machine images quite similar to the ones described above on the cloud, in an Infrastructure as a Service (IaaS) context[6][14], to offer access to remote desktops to every learner.
-
-There are different technical options to achieve this goal, but a simplified description of the architecture is just running Virtual Machines on a single IaaS platform instead of on each learner’s computer. Access to the desktop and application interfaces is made possible through Web pages (or other dedicated lightweight clients) which display a full-screen view of the remote desktop running for the user on the cloud VM. Under the hood, the remote display of a Linux desktop session is made with technologies like [VNC][15] and [RDP][16] connecting to a [Guacamole][17] server on the remote VM.
-
-In the context of the FLIRT project, we have made early experiments with such an architecture. We used the CloVER solution by our partner [ProCAN][18] which provides a virtual desktops broker between [OpenEdX][19] and an [OpenStack][20] IaaS public platform.
-
-The expected benefit is that users don’t have to install anything locally, as the only tool needed locally is a Web browser (displaying a full-screen [HTML5 canvas][21] of the remote desktop run by the Guacamole server on the cloud VM).
-
-But there are still some issues with such an approach. First, the cost of operating such an infrastructure: Virtual Machines need to be hosted on a IaaS platform, and that cost of operation isn’t null[7][22] for the MOOC editor, compared to the cost of VirtualBox and a VM running on the learner’s side (basically zero for the MOOC editor).
-
-Another issue, which could be more problematic, lies in the need for a reliable connection to the Internet during the whole sequence of lab execution by the learners[8][23]. Even if Guacamole is quite efficient at compressing rendering traffic, some basic connectivity is needed during the whole lab work sessions, preventing some mobile uses for instance.
-
-One other potential annoyance is the delay in making a VM available to a learner (provisioning a VM): huge VM images need to be copied inside the IaaS platform when a learner connects to the Virtual Lab activity for the first time, causing delays of several minutes. This may be worse if the VM image is too big (hence the need to optimize its content[9][24]).
-
-However, the fact that all VMs run on a platform under the control of the MOOC editor allows new kinds of features for the MOOC. For instance, learners can submit the results of their labs directly to the LMS without needing to upload or copy-paste them manually. This can help monitor progress or perform evaluation or grading.
-
-The fact that their VMs run on the same platform also allows new kinds of pedagogical scenarios, as the VMs of multiple learners can be interconnected, allowing cooperative activities between learners. The VM images may then need to be instrumented and deployed in particular configurations, which may require the use of a dedicated broker like CloVER to manage such scenarios.
-
-For the record, we have yet to perform a rigorous benchmark of such a solution in order to evaluate its benefits and constraints in particular contexts. In FLIRT, our main focus will be on SPOCs for professional training (a somewhat different context than public MOOCs).
-
-Still, this approach doesn’t solve the VM fabrication issues for the MOOC staff. Installing software inside a VM, be it locally inside a VirtualBox hypervisor or over the cloud through a remote desktop display, makes little difference: it relies mainly on manual operations and may not be well managed in terms of process quality (reproducibility, optimization).
-
-#### 4.2 PaaS deployments using containers
-
-Some key issues in the IaaS context described above are the cost of operating full VMs and the long provisioning delays.
-
-We’re experimenting with new options to address these issues, through the use of [Linux containers][25] running on a PaaS (Platform as a Service) platform, instead of full-fledged Virtual Machines[10][26].
-
-The main difference with containers, compared to Virtual Machines, lies in the reduced size of images and much lower CPU load requirements, as containers remove the need for one layer of virtualization. Also, the deduplication techniques at the heart of some virtual file systems used by container platforms lead to really fast provisioning, avoiding the need to wait for the labs to start.
-
-The traditional making of VMs, done by installing packages and taking a snapshot, was affordable for the regular teacher, but involved manual operations. In this respect, one other major benefit of containers is the potential for better industrialization of the virtual lab fabrication, as they are generally not assembled manually. Instead, one uses a “scripting” approach to describe which applications, and their dependencies, need to be put inside a container image. But this requires new competences from the Lab creators, like learning the [Docker][27] technology (and the [OpenShift][28] PaaS, for instance), which may be quite specialized. Whereas Docker containers have become quite popular among Software Development faculty (through the “[devops][29]” hype), they may be rather new to instructors in other fields.
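
To give a concrete idea of this “scripting” approach, a lab image might be described with a short Dockerfile like the sketch below. The package names, paths, and course files here are purely illustrative assumptions, not the actual FLIRT images:

```dockerfile
# Hypothetical lab image: a Debian base plus the tools a lab might need.
FROM debian:stable-slim

# Install the (illustrative) applications required by the lab.
RUN apt-get update && \
    apt-get install -y --no-install-recommends sqlite3 && \
    rm -rf /var/lib/apt/lists/*

# Copy course material into the image (path is illustrative).
COPY lab-exercises/ /home/learner/lab-exercises/

# Run as an unprivileged user, which container PaaS platforms generally require.
RUN useradd --create-home learner
USER learner
WORKDIR /home/learner

CMD ["/bin/bash"]
```

Because the whole recipe is a text file, it can be versioned, reviewed, and rebuilt identically, which is precisely the reproducibility that manual VM snapshots lack.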
-
-The learning curve for mastering the automation of the whole container-based lab installation needs to be evaluated. There’s a trade-off to consider in adopting technology like Vagrant or Docker: acquiring container/PaaS expertise vs. the quality of industrialization and optimization. The production of a MOOC should then involve careful planning if one has to hire or contract a PaaS expert to set up the Virtual Labs.
-
-We may also expect interesting pedagogical benefits. As containers are lightweight, and platforms allow one to “easily” deploy multiple interlinked containers (over dedicated virtual networks), this enables the setup of more realistic scenarios, where each learner may be provided with multiple “nodes” over virtual networks (all running their individual containers). This would be particularly interesting for Computer Networks or Security teaching, for instance, where each learner may have access to both client and server nodes to study client-server protocols. This is particularly interesting for us in the context of our FLIRT project, where we produce a collection of Computer Networks courses.
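
The multi-node setup described above could be sketched, for instance, as a compose-style topology file. Everything here (image names, network layout) is a hypothetical illustration of one learner's client/server pair on a dedicated virtual network:

```yaml
# Hypothetical per-learner topology for a networking lab:
# one client and one server container, linked over a private network.
version: "3"
services:
  client:
    image: example/lab-client:latest   # illustrative image name
    networks: [labnet]
  server:
    image: example/lab-server:latest   # illustrative image name
    networks: [labnet]
networks:
  labnet:
    driver: bridge   # isolated virtual network per learner
```

A broker or PaaS template would then instantiate one such stack per learner, or interconnect the networks of several learners for cooperative exercises.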
-
-Still, this mode of operation relies on good connectivity of the learners to the Cloud. In poorly connected distance-learning contexts, the PaaS architecture doesn’t solve that particular issue any better than the previous IaaS architecture.
-
-### 5 Future server-less Virtual Labs with WebAssembly
-
-As we have seen, the IaaS- or PaaS-based Virtual Labs running on the Cloud offer alternatives to installing local virtual machines on the learner’s computer. But they both require the learner to stay connected for the whole duration of the Lab, as the applications are executed on remote servers on the Cloud (either inside VMs or containers).
-
-We have been thinking of another alternative which could allow the deployment of some Virtual Labs on the learners’ local computers without the hassle of downloading and installing a Virtual Machine manager and a VM image. We envision the possibility of using the infrastructure provided by modern Web browsers to run the lab’s applications.
-
-At the time of writing, this architecture is still highly experimental. The main idea is to rebuild the applications needed for the Lab so that they can run in the “generic” virtual machine present in modern browsers: the [WebAssembly][30] and JavaScript execution engine.
-
-WebAssembly is a modern language which aims for maximum portability and, as its name hints, is a kind of assembly language for the Web platform. What is of interest for us is that WebAssembly is supported by most modern Web browsers, making it a very interesting target for portable applications.
-
-Emerging toolchains allow recompiling applications written in languages like C or C++ so that they can run on the WebAssembly virtual machine in the browser. This is interesting as it doesn’t require modifying the source code of these programs. Of course, there are limitations in the kinds of underlying APIs and libraries compatible with that platform, and in the sandboxing of the WebAssembly execution engine enforced by the Web browser.
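
To make the execution model concrete, the following JavaScript sketch instantiates a tiny hand-written WebAssembly module (a single `add` function) entirely client-side; modules produced by the recompilation toolchains above are loaded the same way, just with much bigger binaries. This minimal module is a standard textbook example, not code from the project:

```javascript
// A minimal WebAssembly module, spelled out as raw bytes:
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Compile and instantiate; the same bytes run unchanged in any
// browser or runtime implementing the WebAssembly standard.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 40)); // 42
```

The sandboxing mentioned above is visible here: the module can only call what the host explicitly hands it through imports, which is both the main limitation and the main safety property for running labs on learners' machines.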
-
-Historically, WebAssembly was developed to allow running games written in C++, for frameworks like Unity, in the Web browser.
-
-In some contexts, for instance for tools with an interactive GUI which process data retrieved from files and don’t need very specific interaction with the underlying operating system, it seems possible to port these programs to WebAssembly and run them inside the Web browser.
-
-We have to experiment further with this technology to validate its potential for running Virtual Labs in the context of a Web browser.
-
-We used a similar approach in the past in porting a relational database course lab to the Web browser, for standalone execution. A real database would run in the minimal SQLite RDBMS, recompiled to JavaScript[11][31]. Instead of having to download, install and run a VM with an RDBMS, the students would only connect to a Web page, which would load the DBMS in memory and allow performing the lab’s SQL queries locally, disconnected from any third-party server.
-
-In a similar manner, we can think, for instance, of a Lab scenario where the packet inspection features of the Wireshark tool would run inside the WebAssembly virtual machine, directly in the Web browser, to allow dissecting provided capture files without having to install Wireshark.
-
-We expect to publish a report on that last experiment in the future with more details and results.
-
-### 6 Conclusion
-
-The most promising architecture for Virtual Lab deployments seems to be the use of containers on a PaaS platform for deploying virtual desktops or virtual application GUIs available in the Web browser.
-
-This would allow the controlled fabrication of Virtual Labs containing the exact bits needed for learners to practice while minimizing the delays.
-
-Still, the need for always-on connectivity can be a problem.
-
-Also, the potential for inter-networked containers, allowing the kinds of multi-node and collaborative scenarios we described, would require a lot of expertise to develop, as well as management platforms for the MOOC operators, which aren’t yet mature.
-
-We hope to be able to report on our progress in the coming months and years on those aspects.
-
-### 7 References
-
-
-
-[0]
-Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac, “Designing a virtual laboratory for a relational database MOOC”, International Conference on Computer Supported Education, SCITEPRESS, 23-25 May 2015, Lisbon, Portugal, 2015, vol. 7, pp. 260-268, ISBN 978-989-758-107-6 – [DOI: 10.5220/0005439702600268][1] ([preprint (HTML)][2])
-
-### 8 Copyright
-
- [][45]
-
-This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][46].
-
-### Footnotes:
-
-[1][32] – The FLIRT project also works on business-model aspects of MOOC or SPOC production in the context of professional development, but the present memo starts from a minimalistic hypothesis where funding for course production is quite limited.
-
-[2][33] – research-based evidence needed
-
-[3][34] – In typical MOOCs, which are free to participate in, the VM should include only gratis tools, which typically means a GNU/Linux distribution loaded with applications available under free and open source licenses.
-
-[4][35] – Typically, Free and Open Source software, aka Libre Software
-
-[5][36] – VirtualBox is portable across many operating systems, making it a very popular solution for such a need
-
-[6][37] – the IaaS platform could typically be an open cloud for MOOCs or a private cloud for SPOCs (for closer monitoring of student activity or security control reasons).
-
-[7][38] – Depending on the expected use of the lab by learners, this cost may vary a lot. The size and configuration required for the included software may have an impact (hence the need to minimize the footprint of the VM images). With costs diminishing in general, this may not be a show stopper. Refer to the marketing figures of commercial IaaS offerings for accurate numbers. Pay attention to additional licensing costs if the OS of the VM isn’t free software, or if other licenses must be provided for every learner.
-
-[8][39] – The need for always-on connectivity may not be a problem for professional development SPOCs where learners connect from enterprise networks, for instance. It may be detrimental when MOOCs are very popular in southern countries where high-bandwidth connectivity is both unreliable and expensive.
-
-[9][40] – In this respect, providing a full Linux desktop inside the VM doesn’t necessarily make sense. Instead, running applications full-screen may be better, avoiding the installation of whole desktop environments like Gnome or XFCE… but this has usability consequences. Careful tuning and testing is needed in any case.
-
-[10][41] – Container-based architectures are quite popular in the industry, but to our knowledge have not yet been deployed at a large scale on big public MOOC hosting platforms at the time of writing. There are interesting technical challenges which the FLIRT project tries to tackle together with its partner ProCAN.
-
-[11][42] – See the corresponding paragraph [http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env][43] in [0][44]
-
-
---------------------------------------------------------------------------------
-
-via: https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/
-
-作者:[Olivier Berger (Télécom Sudparis)][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www-public.tem-tsp.eu
-[1]:http://dx.doi.org/10.5220/0005439702600268
-[2]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/
-[3]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#org50fdc1a
-[4]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.1
-[5]:http://flirtmooc.wixsite.com/flirt-mooc-telecom
-[6]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
-[7]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.3
-[8]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.4
-[9]:http://virtualbox.org
-[10]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.5
-[11]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
-[12]:https://www.vagrantup.com/
-[13]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
-[14]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.6
-[15]:https://en.wikipedia.org/wiki/Virtual_Network_Computing
-[16]:https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
-[17]:http://guacamole.apache.org/
-[18]:https://www.procan-group.com/
-[19]:https://open.edx.org/
-[20]:http://openstack.org/
-[21]:https://en.wikipedia.org/wiki/Canvas_element
-[22]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.7
-[23]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.8
-[24]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.9
-[25]:https://www.redhat.com/en/topics/containers
-[26]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.10
-[27]:https://en.wikipedia.org/wiki/Docker_(software)
-[28]:https://www.openshift.com/
-[29]:https://en.wikipedia.org/wiki/DevOps
-[30]:http://webassembly.org/
-[31]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.11
-[32]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.1
-[33]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.2
-[34]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.3
-[35]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.4
-[36]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.5
-[37]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.6
-[38]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.7
-[39]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.8
-[40]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.9
-[41]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.10
-[42]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.11
-[43]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env
-[44]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
-[45]:http://creativecommons.org/licenses/by-nc-sa/4.0/
-[46]:http://creativecommons.org/licenses/by-nc-sa/4.0/
diff --git a/sources/talk/20180604 10 principles of resilience for women in tech.md b/sources/talk/20180604 10 principles of resilience for women in tech.md
index 3f451089bb..be1960d0c9 100644
--- a/sources/talk/20180604 10 principles of resilience for women in tech.md
+++ b/sources/talk/20180604 10 principles of resilience for women in tech.md
@@ -1,93 +1,93 @@
-10 principles of resilience for women in tech
-======
-
-
-
-Being a woman in tech is pretty damn cool. For every headline about [what Silicon Valley thinks of women][1], there are tens of thousands of women building, innovating, and managing technology teams around the world. Women are helping build the future despite the hurdles they face, and the community of women and allies growing to support each other is stronger than ever. From [BetterAllies][2] to organizations like [Girls Who Code][3] and communities like the one I met recently at [Red Hat Summit][4], there are more efforts than ever before to create an inclusive community for women in tech.
-
-But the tech industry has not always been this welcoming, nor is the experience for women always aligned with the aspiration. And so we're feeling the pain. The percentage of women in technology roles has dropped from its peak of 36% in 1991 to 25% today, [according to a report by NCWIT][5]. [Harvard Business Review estimates][6] that more than half of the women in tech will eventually leave due to hostile work conditions. Meanwhile, Ernst & Young recently shared [a study][7] which found that merely 11% of high school girls plan to pursue STEM careers.
-
-We have much work to do, lest we build a future that is less inclusive than the one we live in today. We need everyone at the table, in the lab, at the conference and in the boardroom.
-
-I've been interviewing both women and men for more than a year now about their experiences in tech, all as part of [The Chasing Grace Project][8], a documentary series about women in tech. The purpose of the series is to help recruit and retain female talent for the tech industry and to give women a platform to be seen, heard, and acknowledged for their experiences. We believe that a compelling story can begin to transform culture.
-
-### What Chasing Grace taught me
-
-What I've learned is that no matter the dismal numbers, women want to keep building and they collectively possess a resilience unmatched by anything I've ever seen. And this is inspiring me. I've found a power, a strength, and a beauty in every story I've heard that is the result of resilience. I recently shared with the attendees at the Red Hat Summit Women’s Leadership Luncheon the top 10 principles of resilience I've heard from throughout my interviews so far. I hope that by sharing them here the ideas and concepts can support and inspire you, too.
-
-#### 1\. Practice optimism
-
-When taken too far, optimism can give you blind spots. But a healthy dose of optimism allows you to see the best in people and situations and that positive energy comes back to you 100-fold. I haven’t met a woman yet as part of this project who isn’t an optimist.
-
-#### 2\. Build mental toughness
-
-I haven’t met a woman yet as part of this project who isn’t an optimist.
-
-When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required, she said _mental toughness_. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in her heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ.
-
-#### 3\. Recognize your power
-
-Most of the women I’ve interviewed don’t know their own power and so they give it away unknowingly. Too many women have told me that they willingly took on the housekeeping roles on their teams—picking up coffee, donuts, office supplies, and making the team dinner reservations. Usually the only woman on the team, each was thereby seen as less valuable than her male peers who didn’t readily volunteer for such tasks. All of us, men and women, have innate powers. Identify and know what your powers are and understand how to use them for good. You have so much more power than you realize. Know it, recognize it, use it strategically, and don’t give it away. It’s yours.
-
-#### 4\. Know your strength
-
-Not sure whether you can confront your boss about why you haven’t been promoted? You can. You don’t know your strength until you exercise it. Then, you’re unstoppable. Test your strength by pushing your fear aside and see what happens.
-
-#### 5\. Celebrate vulnerability
-
-Every single successful woman I've interviewed isn't afraid to be vulnerable. She finds her strength in acknowledging where she is vulnerable and she looks to connect with others in that same place. Exposing, sharing, and celebrating each other’s vulnerabilities allows us to tap into something far greater than simply asserting strength; it actually builds strength—mental and emotional muscle. One woman with whom we’ve talked shared how starting her own tech company made her feel like she was letting her husband down. She shared with us the details of that conversation with her husband. Honest conversations that share our doubts and our aspirations are what make women uniquely suited to lead in many cases. Allow yourself to be seen and heard. It’s where we grow and learn.
-
-#### 6\. Build community
-
-If it doesn't exist, build it.
-
-Building community seems like a no-brainer in the world of open source, right? But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. I've been so pleased with the number of companies that host these groups. If it doesn't exist, build it.
-
-#### 7\. Celebrate victories
-
-One of my favorite Facebook groups is [TechLadies][9] because of its recurring hashtag #YEPIDIDTHAT. It allows women to share their victories in a supportive community. No matter how big or small, don't let a victory go unrecognized. When you recognize your wins, you own them. They become a part of you and you build on top of each one.
-
-#### 8\. Be curious
-
-Being curious in the tech community often means asking questions: How does that work? What language is that written in? How can I make this do that? When I've managed teams over the years, my best employees have always been those who ask a lot of questions, those who are genuinely curious about things. But in this context, I mean be curious when your gut tells you something doesn't seem right. _The energy in the meeting was off. Did he/she just say what I think he said?_ Ask questions. Investigate. Communicate openly and clearly. It's the only way change happens.
-
-#### 9\. Harness courage
-
-One woman told me a story about a meeting in which the women in the room kept being dismissed and talked over. During the debrief roundtable portion of the meeting, she called it out and asked if others noticed it, too. Being a 20-year tech veteran, she'd witnessed and experienced this many times but she had never summoned the courage to speak up about it. She told me she was incredibly nervous and was texting other women in the room to see if they agreed it should be addressed. She didn't want to be a "troublemaker." But this kind of courage results in an increased understanding by everyone in that room and can translate into other meetings, companies, and across the industry.
-
-#### 10\. Share your story
-
-When people connect to a compelling story, they begin to change behaviors.
-
-Share your experience with a friend, a group, a community, or an industry. Be empowered by the act of sharing your experience. Stories change culture. When people connect to a compelling story, they begin to change behaviors. When people act, companies and industries begin to transform.
-
-If you would like to support [The Chasing Grace Project][8], email Jennifer Cloer to learn more about how to get involved: [jennifer@wickedflicksproductions.com][10]
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/6/being-woman-tech-10-principles-resilience
-
-作者:[Jennifer Cloer][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jennifer-cloer
-[1]:http://www.newsweek.com/2015/02/06/what-silicon-valley-thinks-women-302821.html
-[2]:https://opensource.com/article/17/6/male-allies-tech-industry-needs-you
-[3]:https://twitter.com/GirlsWhoCode
-[4]:http://opensource.com/tags/red-hat-summit
-[5]:https://www.ncwit.org/sites/default/files/resources/womenintech_facts_fullreport_05132016.pdf
-[6]:http://www.latimes.com/business/la-fi-women-tech-20150222-story.html
-[7]:http://www.ey.com/us/en/newsroom/news-releases/ey-news-new-research-reveals-the-differences-between-boys-and-girls-career-and-college-plans-and-an-ongoing-need-to-engage-girls-in-stem
-[8]:https://www.chasinggracefilm.com/
-[9]:https://www.facebook.com/therealTechLadies/
-[10]:mailto:jennifer@wickedflicksproductions.com
+10 principles of resilience for women in tech
+======
+
+
+
+Being a woman in tech is pretty damn cool. For every headline about [what Silicon Valley thinks of women][1], there are tens of thousands of women building, innovating, and managing technology teams around the world. Women are helping build the future despite the hurdles they face, and the community of women and allies growing to support each other is stronger than ever. From [BetterAllies][2] to organizations like [Girls Who Code][3] and communities like the one I met recently at [Red Hat Summit][4], there are more efforts than ever before to create an inclusive community for women in tech.
+
+But the tech industry has not always been this welcoming, nor is the experience for women always aligned with the aspiration. And so we're feeling the pain. The percentage of women in technology roles has dropped from its peak of 36% in 1991 to 25% today, [according to a report by NCWIT][5]. [Harvard Business Review estimates][6] that more than half of the women in tech will eventually leave due to hostile work conditions. Meanwhile, Ernst & Young recently shared [a study][7] which found that merely 11% of high school girls plan to pursue STEM careers.
+
+We have much work to do, lest we build a future that is less inclusive than the one we live in today. We need everyone at the table, in the lab, at the conference and in the boardroom.
+
+I've been interviewing both women and men for more than a year now about their experiences in tech, all as part of [The Chasing Grace Project][8], a documentary series about women in tech. The purpose of the series is to help recruit and retain female talent for the tech industry and to give women a platform to be seen, heard, and acknowledged for their experiences. We believe that a compelling story can begin to transform culture.
+
+### What Chasing Grace taught me
+
+What I've learned is that no matter the dismal numbers, women want to keep building and they collectively possess a resilience unmatched by anything I've ever seen. And this is inspiring me. I've found a power, a strength, and a beauty in every story I've heard that is the result of resilience. I recently shared with the attendees at the Red Hat Summit Women’s Leadership Luncheon the top 10 principles of resilience I've heard from throughout my interviews so far. I hope that by sharing them here the ideas and concepts can support and inspire you, too.
+
+#### 1\. Practice optimism
+
+When taken too far, optimism can give you blind spots. But a healthy dose of optimism allows you to see the best in people and situations and that positive energy comes back to you 100-fold. I haven’t met a woman yet as part of this project who isn’t an optimist.
+
+#### 2\. Build mental toughness
+
+I haven’t met a woman yet as part of this project who isn’t an optimist.
+
+When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required, she said _mental toughness_. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in her heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ.
+
+#### 3\. Recognize your power
+
+Most of the women I’ve interviewed don’t know their own power and so they give it away unknowingly. Too many women have told me that they willingly took on the housekeeping roles on their teams—picking up coffee, donuts, office supplies, and making the team dinner reservations. Usually the only woman on the team, each was thereby seen as less valuable than her male peers who didn’t readily volunteer for such tasks. All of us, men and women, have innate powers. Identify and know what your powers are and understand how to use them for good. You have so much more power than you realize. Know it, recognize it, use it strategically, and don’t give it away. It’s yours.
+
+#### 4\. Know your strength
+
+Not sure whether you can confront your boss about why you haven’t been promoted? You can. You don’t know your strength until you exercise it. Then, you’re unstoppable. Test your strength by pushing your fear aside and see what happens.
+
+#### 5\. Celebrate vulnerability
+
+Every successful woman I've interviewed is unafraid to be vulnerable. She finds her strength in acknowledging where she is vulnerable, and she looks to connect with others in that same place. Exposing, sharing, and celebrating each other’s vulnerabilities allows us to tap into something far greater than simply asserting strength; it actually builds strength—mental and emotional muscle. One woman with whom we’ve talked shared how starting her own tech company made her feel like she was letting her husband down, and she shared with us the details of that conversation with him. Honest conversations that share our doubts and our aspirations are what make women uniquely suited to lead in many cases. Allow yourself to be seen and heard. It’s where we grow and learn.
+
+#### 6\. Build community
+
+Building community seems like a no-brainer in the world of open source, right? But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. I've been so pleased with the number of companies that host these groups. If it doesn't exist, build it.
+
+#### 7\. Celebrate victories
+
+One of my favorite Facebook groups is [TechLadies][9] because of its recurring hashtag #YEPIDIDTHAT. It allows women to share their victories in a supportive community. No matter how big or small, don't let a victory go unrecognized. When you recognize your wins, you own them. They become a part of you and you build on top of each one.
+
+#### 8\. Be curious
+
+Being curious in the tech community often means asking questions: How does that work? What language is that written in? How can I make this do that? When I've managed teams over the years, my best employees have always been those who ask a lot of questions, those who are genuinely curious about things. But in this context, I mean be curious when your gut tells you something doesn't seem right. _The energy in the meeting was off. Did he just say what I think he said?_ Ask questions. Investigate. Communicate openly and clearly. It's the only way change happens.
+
+#### 9\. Harness courage
+
+One woman told me a story about a meeting in which the women in the room kept being dismissed and talked over. During the debrief roundtable portion of the meeting, she called it out and asked if others had noticed it, too. As a 20-year tech veteran, she'd witnessed and experienced this many times, but she had never summoned the courage to speak up about it. She told me she was incredibly nervous and was texting other women in the room to see if they agreed it should be addressed. She didn't want to be a "troublemaker." But this kind of courage results in an increased understanding by everyone in that room and can translate into other meetings, companies, and across the industry.
+
+#### 10\. Share your story
+
+Share your experience with a friend, a group, a community, or an industry. Be empowered by the act of sharing it. Stories change culture. When people connect to a compelling story, they begin to change behaviors. When people act, companies and industries begin to transform.
+
+Share your experience with a friend, a group, a community, or an industry. Be empowered by the experience of sharing your experience. Stories change culture. When people connect to compelling story, they begin to change behaviors. When people act, companies and industries begin to transform.
+
+If you would like to support [The Chasing Grace Project][8], email Jennifer Cloer to learn more about how to get involved: [jennifer@wickedflicksproductions.com][10]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/6/being-woman-tech-10-principles-resilience
+
+作者:[Jennifer Cloer][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jennifer-cloer
+[1]:http://www.newsweek.com/2015/02/06/what-silicon-valley-thinks-women-302821.html
+[2]:https://opensource.com/article/17/6/male-allies-tech-industry-needs-you
+[3]:https://twitter.com/GirlsWhoCode
+[4]:http://opensource.com/tags/red-hat-summit
+[5]:https://www.ncwit.org/sites/default/files/resources/womenintech_facts_fullreport_05132016.pdf
+[6]:http://www.latimes.com/business/la-fi-women-tech-20150222-story.html
+[7]:http://www.ey.com/us/en/newsroom/news-releases/ey-news-new-research-reveals-the-differences-between-boys-and-girls-career-and-college-plans-and-an-ongoing-need-to-engage-girls-in-stem
+[8]:https://www.chasinggracefilm.com/
+[9]:https://www.facebook.com/therealTechLadies/
+[10]:mailto:jennifer@wickedflicksproductions.com
diff --git a/sources/talk/20180629 Reflecting on the GPLv3 license for its 11th anniversary.md b/sources/talk/20180629 Reflecting on the GPLv3 license for its 11th anniversary.md
deleted file mode 100644
index af352aefe1..0000000000
--- a/sources/talk/20180629 Reflecting on the GPLv3 license for its 11th anniversary.md
+++ /dev/null
@@ -1,65 +0,0 @@
-Reflecting on the GPLv3 license for its 11th anniversary
-======
-
-
-
-Last year, I missed the opportunity to write about the 10th anniversary of [GPLv3][1], the third version of the GNU General Public License. GPLv3 was officially released by the Free Software Foundation (FSF) on June 29, 2007—better known in technology history as the date Apple launched the iPhone. Now, one year later, I feel some retrospection on GPLv3 is due. For me, much of what is interesting about GPLv3 goes back somewhat further than 11 years, to the public drafting process in which I was an active participant.
-
-In 2005, following nearly a decade of enthusiastic self-immersion in free software, yet having had little open source legal experience to speak of, I was hired by Eben Moglen to join the Software Freedom Law Center as counsel. SFLC was then outside counsel to the FSF, and my role was conceived as focusing on the incipient public phase of the GPLv3 drafting process. This opportunity rescued me from a previous career turn that I had found rather dissatisfying. Free and open source software (FOSS) legal matters would come to be my new specialty, one that I found fascinating, gratifying, and intellectually rewarding. My work at SFLC, and particularly the trial by fire that was my work on GPLv3, served as my on-the-job training.
-
-GPLv3 must be understood as the product of an earlier era of FOSS, the contours of which may be difficult for some to imagine today. By the beginning of the public drafting process in 2006, Linux and open source were no longer practically synonymous, as they might have been for casual observers several years earlier, but the connection was much closer than it is now.
-
-Reflecting the profound impact that Linux was already having on the technology industry, everyone assumed GPL version 2 was the dominant open source licensing model. We were seeing the final shakeout of a Cambrian explosion of open source (and pseudo-open source) business models. A frothy business-fueled hype surrounded open source (for me most memorably typified by the Open Source Business Conference) that bears little resemblance to the present-day embrace of open source development by the software engineering profession. Microsoft, with its expanding patent portfolio and its competitive opposition to Linux, was commonly seen in the FOSS community as an existential threat, and the [SCO litigation][2] had created a cloud of legal risk around Linux and the GPL that had not quite dissipated.
-
-That environment necessarily made the drafting of GPLv3 a high-stakes affair, unprecedented in free software history. Lawyers at major technology companies and top law firms scrambled for influence over the license, convinced that GPLv3 was bound to take over and thoroughly reshape open source and all its massive associated business investment.
-
-A similar mindset existed within the technical community; it can be detected in the fears expressed in the final paragraph of the Linux kernel developers' momentous September 2006 [denunciation][3] of GPLv3. Those of us close to the FSF knew a little better, but I think we assumed the new license would be either an overwhelming success or a resounding failure—where "success" meant something approximating an upgrade of the existing GPLv2 project ecosystem to GPLv3, though perhaps without the kernel. The actual outcome was something in the middle.
-
-I have no confidence in attempts to measure open source license adoption, which have in recent years typically been used to demonstrate a loss of competitive advantage for copyleft licensing. My own experience, which is admittedly distorted by proximity to Linux and my work at Red Hat, suggests that GPLv3 has enjoyed moderate popularity as a license choice for projects launched since 2007, though most GPLv2 projects that existed before 2007, along with their post-2007 offshoots, remained on the old license. (GPLv3's sibling licenses LGPLv3 and AGPLv3 never gained comparable popularity.) Most of the existing GPLv2 projects (with a few notable exceptions like the kernel and Busybox) were licensed as "GPLv2 or any later version." The technical community decided early on that "GPLv2 or later" was a politically neutral license choice that embraced both GPLv2 and GPLv3; this goes some way to explain why adoption of GPLv3 was somewhat gradual and limited, especially within the Linux community.
-
-During the GPLv3 drafting process, some expressed concerns about a "balkanized" Linux ecosystem, whether because of the overhead of users having to understand two different, strong copyleft licenses or because of GPLv2/GPLv3 incompatibility. These fears turned out to be entirely unfounded. Within mainstream server and workstation Linux stacks, the two licenses have peacefully coexisted for a decade now. This is partly because such stacks are made up of separate units of strong copyleft scope (see my discussion of [related issues in the container setting][4]). As for incompatibility inside units of strong copyleft scope, here, too, the prevalence of "GPLv2 or later" was seen by the technical community as neatly resolving the theoretical problem, despite the fact that nominal license upgrading of GPLv2-or-later to GPLv3 hardly ever occurred.
-
-I have alluded to the handwringing that some of us FOSS license geeks have brought to the topic of supposed copyleft decline. GPLv3 has taken its share of abuse from critics as far back as the beginning of the public drafting process, and some, predictably, have drawn a link between GPLv3 in particular and GPL or copyleft disfavor in general.
-
-I have viewed it somewhat differently: Largely because of its complexity and baroqueness, GPLv3 was a lost opportunity to create a strong copyleft license that would appeal very broadly to modern individual software authors and corporate licensors. I believe individual developers today tend to prefer short, simple, easy to understand, minimalist licenses, the most obvious example of which is the [MIT License][5].
-
-Some corporate decisionmakers around open source license selection may naturally share that view, while others may associate some parts of GPLv3, such as the patent provisions or the anti-lockdown requirements, as too risky or incompatible with their business models. The great irony is that the characteristics of GPLv3 that fail to attract these groups are there in part because of conscious attempts to make the license appeal to these same sorts of interests.
-
-How did GPLv3 come to be so baroque? As I have said, GPLv3 was the product of an earlier time, in which FOSS licenses were viewed as the primary instruments of project governance. (Today, we tend to associate governance with other kinds of legal or quasi-legal tools, such as structuring of nonprofit organizations, rules around project decision making, codes of conduct, and contributor agreements.)
-
-GPLv3, in its drafting, was the high point of an optimistic view of FOSS licenses as ambitious means of private regulation. This was already true of GPLv2, but GPLv3 took things further by addressing in detail a number of new policy problems—software patents, anti-circumvention laws, device lockdown. That was bound to make the license longer and more complex than GPLv2, as the FSF and SFLC noted apologetically in the first GPLv3 [rationale document][6].
-
-But a number of other factors at play in the drafting of GPLv3 unintentionally caused the complexity of the license to grow. Lawyers representing vendors' and commercial users' interests provided useful suggestions for improvements from a legal and commercial perspective, but these often took the form of making simply worded provisions more verbose, arguably without net increases in clarity. Responses to feedback from the technical community, typically identifying loopholes in license provisions, had a similar effect.
-
-The GPLv3 drafters also famously got entangled in a short-term political crisis—the controversial [Microsoft/Novell deal][7] of 2006—resulting in the permanent addition of new and unusual conditions in the patent section of the license, which arguably served little purpose after 2007 other than to make license compliance harder for conscientious patent-holding vendors. Of course, some of the complexity in GPLv3 was simply the product of well-intended attempts to make compliance easier, especially for community project developers, or to codify FSF interpretive practice. Finally, one can take issue with the style of language used in GPLv3, much of which had a quality of playful parody or mockery of conventional software license legalese; a simpler, straightforward form of phrasing would in many cases have been an improvement.
-
-The complexity of GPLv3 and the movement towards preferring brevity and simplicity in license drafting and unambitious license policy objectives meant that the substantive text of GPLv3 would have little direct influence on later FOSS legal drafting. But, as I noted with surprise and [delight][8] back in 2012, MPL 2.0 adapted two parts of GPLv3: the 30-day cure and 60-day repose language from the GPLv3 termination provision, and the assurance that downstream upgrading to a later license version adds no new obligations on upstream licensors.
-
-The GPLv3 cure language has come to have a major impact, particularly over the past year. Following the Software Freedom Conservancy's promulgation, with the FSF's support, of the [Principles of Community-Oriented GPL Enforcement][9], which calls for extending GPLv3 cure opportunities to GPLv2 violations, the Linux Foundation Technical Advisory Board published a [statement][10], endorsed by over a hundred Linux kernel developers, which incorporates verbatim the cure language of GPLv3. This in turn was followed by a Red Hat-led series of [corporate commitments][11] to extend the GPLv3 cure provisions to GPLv2 and LGPLv2.x noncompliance, a campaign to get individual open source developers to extend the same commitment, and an announcement by Red Hat that henceforth GPLv2 and LGPLv2.x projects it leads will use the commitment language directly in project repositories. I discussed these developments in a recent [blog post][12].
-
-One lasting contribution of GPLv3 concerns changed expectations for how revisions of widely-used FOSS licenses are done. It is no longer acceptable for such licenses to be revised entirely in private, without opportunity for comment from the community and without efforts to consult key stakeholders. The drafting of MPL 2.0 and, more recently, EPL 2.0 reflects this new norm.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/6/gplv3-anniversary
-
-作者:[Richard Fontana][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/fontana
-[1]:https://www.gnu.org/licenses/gpl-3.0.en.html
-[2]:https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes
-[3]:https://lwn.net/Articles/200422/
-[4]:https://opensource.com/article/18/1/containers-gpl-and-copyleft
-[5]:https://opensource.org/licenses/MIT
-[6]:http://gplv3.fsf.org/gpl-rationale-2006-01-16.html
-[7]:https://en.wikipedia.org/wiki/Novell#Agreement_with_Microsoft
-[8]:https://opensource.com/law/12/1/the-new-mpl
-[9]:https://sfconservancy.org/copyleft-compliance/principles.html
-[10]:https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html
-[11]:https://www.redhat.com/en/about/press-releases/technology-industry-leaders-join-forces-increase-predictability-open-source-licensing
-[12]:https://www.redhat.com/en/blog/gpl-cooperation-commitment-and-red-hat-projects?source=author&term=26851
diff --git a/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md b/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md
deleted file mode 100644
index dedbf748d6..0000000000
--- a/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md
+++ /dev/null
@@ -1,45 +0,0 @@
-New Training Options Address Demand for Blockchain Skills
-======
-
-
-
-Blockchain technology is transforming industries and bringing new levels of trust to contracts, payment processing, asset protection, and supply chain management. Blockchain-related jobs are the second-fastest growing in today’s labor market, [according to TechCrunch][1]. But, as in the rapidly expanding field of artificial intelligence, there is a pronounced blockchain skills gap and a need for expert training resources.
-
-### Blockchain for Business
-
-A new training option was recently announced from The Linux Foundation. Enrollment is now open for a free training course called[Blockchain: Understanding Its Uses and Implications][2], as well as a [Blockchain for Business][2] professional certificate program. Delivered through the edX training platform, the new course and program provide a way to learn about the impact of blockchain technologies and a means to demonstrate that knowledge. Certification, in particular, can make a difference for anyone looking to work in the blockchain arena.
-
-“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” [said][3] Linux Foundation General Manager, Training & Certification Clyde Seepersad. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.”
-
-TechCrunch [also reports][4] that venture capital is rapidly flowing toward blockchain-focused startups. And, this new program is designed for business professionals who need to understand the potential – or threat – of blockchain to their company and industry.
-
-“Professional Certificate programs on edX deliver career-relevant education in a flexible, affordable way, by focusing on the critical skills industry leaders and successful professionals are seeking today,” said Anant Agarwal, edX CEO and MIT Professor.
-
-### Hyperledger Fabric
-
-The Linux Foundation is steward to many valuable blockchain resources and includes some notable community members. In fact, a recent New York Times article — “[The People Leading the Blockchain Revolution][5]” — named Brian Behlendorf, Executive Director of The Linux Foundation’s [Hyperledger Project][6], one of the [top influential voices][7] in the blockchain world.
-
-Hyperledger offers proven paths for gaining credibility and skills in the blockchain space. For example, the project offers a free course titled Introduction to Hyperledger Fabric for Developers. Fabric has emerged as a key open source toolset in the blockchain world. Through the Hyperledger project, you can also take the B9-lab Certified Hyperledger Fabric Developer course. More information on both courses is available [here][8].
-
-“As you can imagine, someone needs to do the actual coding when companies move to experiment and replace their legacy systems with blockchain implementations,” states the Hyperledger website. “With training, you could gain serious first-mover advantage.”
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/2018/7/new-training-options-address-demand-blockchain-skills
-
-作者:[SAM DEAN][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/sam-dean
-[1]:https://techcrunch.com/2018/02/14/blockchain-engineers-are-in-demand/
-[2]:https://www.edx.org/course/understanding-blockchain-and-its-implications
-[3]:https://www.linuxfoundation.org/press-release/as-demand-skyrockets-for-blockchain-expertise-the-linux-foundation-and-edx-offer-new-introductory-blockchain-course-and-blockchain-for-business-professional-certificate-program/
-[4]:https://techcrunch.com/2018/05/20/with-at-least-1-3-billion-invested-globally-in-2018-vc-funding-for-blockchain-blows-past-2017-totals/
-[5]:https://www.nytimes.com/2018/06/27/business/dealbook/blockchain-stars.html
-[6]:https://www.hyperledger.org/
-[7]:https://www.linuxfoundation.org/blog/hyperledgers-brian-behlendorf-named-as-top-blockchain-influencer-by-new-york-times/
-[8]:https://www.hyperledger.org/resources/training
diff --git a/sources/talk/20180720 A brief history of text-based games and open source.md b/sources/talk/20180720 A brief history of text-based games and open source.md
deleted file mode 100644
index 2b8728fb39..0000000000
--- a/sources/talk/20180720 A brief history of text-based games and open source.md
+++ /dev/null
@@ -1,142 +0,0 @@
-A brief history of text-based games and open source
-======
-
-
-
-The [Interactive Fiction Technology Foundation][1] (IFTF) is a non-profit organization dedicated to the preservation and improvement of technologies enabling the digital art form we call interactive fiction. When a Community Moderator for Opensource.com suggested an article about IFTF, the technologies and services it supports, and how it all intersects with open source, I found it a novel angle to the decades-long story I’ve so often told. The history of IF is longer than—but quite enmeshed with—the modern FOSS movement. I hope you’ll enjoy my sharing it here.
-
-### Definitions and history
-
-To me, the term interactive fiction includes any video game or digital artwork whose audience interacts with it primarily through text. The term originated in the 1980s when parser-driven text adventure games—epitomized in the United States by [Zork][2], [The Hitchhiker’s Guide to the Galaxy][3], and the rest of [Infocom][4]’s canon—defined home-computer entertainment. Its mainstream commercial viability had guttered by the 1990s, but online hobbyist communities carried on the tradition, releasing both games and game-creation tools.
-
-After a quarter century, interactive fiction now comprises a broad and sparkling variety of work, from puzzle-laden text adventures to sprawling and introspective hypertexts. Regular online competitions and festivals provide a great place to peruse and play new work: The English-language IF world enjoys annual events including [Spring Thing][5] and [IFComp][6], the latter a centerpiece of modern IF since 1995—which also makes it the longest-lived continually running game showcase event of its kind in any genre. [IFComp’s crop of judged-and-ranked entries from 2017][7] shows the amazing diversity in form, style, and subject matter that text-based games boast today.
-
-(I specify "English-language" above because IF communities tend to self-segregate by language, perhaps due to the technology's focus on writing. There are also annual IF events in [French][8] and [Italian][9], for example, and I've heard of at least one Chinese IF festival. Happily, these borders are porous; during the four years I managed IFComp, it has welcomed English-translated work from all international communities.)
-
-![counterfeit monkey game screenshot][11]
-
-Starting a new game of Emily Short's "Counterfeit Monkey," running on the interpreter Lectrote (both open source software).
-
-Also due to its focus on text, IF presents some of the most accessible platforms for both play and authorship. Almost anyone who can read digital text—including users of assistive technology such as text-to-speech software—can play most IF works. Likewise, IF creation is open to all writers willing to learn and work with its tools and techniques.
-
-This brings us to IF’s long relationship with open source, which has helped enable the art form’s availability since its commercial heyday. I'll provide an overview of contemporary open-source IF creation tools, and then discuss the ancient and sometimes curious tradition of IF works that share their source code.
-
-### The world of open source IF tools
-
-A number of development platforms, most of which are open source, are available to create traditional parser-driven IF in which the user types commands—for example, `go north,` `get lamp`, `pet the cat`, or `ask Zoe about quantum mechanics`—to interact with the game’s world. The early 1990s saw the emergence of several hacker-friendly parser-game development kits; those still in use today include [TADS][12], [Alan][13], and [Quest][14]—all open, with the latter two bearing FOSS licenses.
-
-But by far the most prominent of these is [Inform][15], first released by Graham Nelson in 1993 and now maintained by a team Nelson still leads. Inform source is semi-open, in an unusual fashion: Inform 6, the previous major version, [makes its source available through the Artistic License][16]. This has more immediate relevance than may be obvious, since the otherwise proprietary Inform 7 holds Inform 6 at its core, translating its [remarkable natural-language syntax][17] into its predecessor’s more C-like code before letting it compile the work down into machine code.
-
-![inform 7 IDE screenshot][19]
-
-The Inform 7 IDE, loaded up with documentation and a sample project.
-
-Inform games run on a virtual machine, a relic of the Infocom era when that publisher targeted a VM so that it could write a single game that would run on Apple II, Commodore 64, Atari 800, and other flavors of the "[home computer][20]." Fewer popular operating systems exist today, but Inform’s virtual machines—the relatively modern [Glulx][21] or the charmingly antique [Z-machine][22], a reverse-engineered clone of Infocom’s historical VM—let Inform-created work run on any computer with an Inform interpreter. Currently, popular cross-platform interpreters include desktop programs like [Lectrote][23] and [Gargoyle][24] or browser-based ones like [Quixe][25] and [Parchment][26]. All are open source.
-
-If the pace of Inform’s development has slowed in its maturity, it remains vital through an active and transparent ecosystem—just like any other popular open source project. In Inform’s case, this includes the aforementioned interpreters, [a collection of language extensions][27] (usually written in a mix of Inform 6 and 7), and of course, all the work created with it and shared with the world, sometimes with source included (I’ll return to that topic later in this article).
-
-IF creation tools invented in the 21st century tend to explore player interactions outside of the traditional parser, generating hypertext-driven work that any modern web browser can load. Chief among these is [Twine][28], originally developed by Chris Klimas in 2009 and under active development by many contributors today as [a GNU-licensed open source project][29]. (In fact, [Twine][30] can trace its OSS lineage back to [TiddlyWiki][31], the project from which Klimas initially derived it.)
-
-Twine represents a sort of maximally [open and accessible approach][30] to IF development: Beyond its own FOSS nature, it renders its output as self-contained websites, relying not on machine code requiring further specialized interpretation but the open and well-exercised standards of HTML, CSS, and JavaScript. As a creative tool, Twine can match its own exposed complexity to the creator’s skill level. Users with little or no programming knowledge can create simple but playable IF work, while those with more coding and design skills—including those developing these skills by making Twine games—can develop more sophisticated projects. Little wonder that Twine’s visibility and popularity in educational contexts has grown quite a bit in recent years.
-
-Other noteworthy open source IF development projects include the MIT-licensed [Undum][32] by Ian Millington, and [ChoiceScript][33] by Dan Fabulich and the [Choice of Games][34] team—both of which also target the web browser as the gameplay platform. Looking beyond strict development systems like these, web-based IF gives us a rich and ever-churning ecosystem of open source work, such as furkle’s [collection of Twine-extending tools][35] and Liza Daly’s [Windrift][36], a JavaScript framework purpose-built for her own IF games.
-
-### Programs, games, and game-programs
-
-Twine benefits from [a standing IFTF program dedicated to its support][37], allowing the public to help fund its maintenance and development. IFTF also directly supports two long-time public services, IFComp and the IF Archive, both of which depend upon and contribute back into open software and technologies.
-
-![Harmonia opening screen shot][39]
-
-The opening of Liza Daly's "Harmonia," created with the Windrift open source IF-creation framework.
-
-The Perl- and JavaScript-based application that runs the IFComp’s website has been [a shared-source project][40] since 2014, and it reflects [the stew of FOSS licenses used by its IF-specific sub-components][41], including the various code libraries that allow parser-driven competition entries to run in a web browser. [The IF Archive][42]—online since 1992 and [an IFTF project since 2017][43]—is a set of mirrored repositories based entirely on ancient and stable internet standards, with [a little open source Python script][44] to handle indexing.
-
-### At last, the fun part: Let's talk about open source text games
-
-The bulk of the archive [comprises games][45], of course—years and years of games, reflecting decades of evolving game-design trends and IF tool development.
-
-Lots of IF work shares its source code, and the community’s quick-start solution for finding it is simple: [Search the IFDB for the tag "source available"][46]. (The IFDB is yet another long-running IF community service, run privately by TADS creator Mike Roberts.) Users who are comfortable with a more bare-bones interface may also wish to browse [the `/games/source` directory][47] of the IF Archive, which groups content by development platform and written language (there's also a lot of work either too miscellaneous or too ancient to categorize floating at the top).
-
-A little bit of random sampling of these code-sharing games reveals an interesting dilemma: Unlike the wider world of open source software, the IF community lacks a generally agreed-upon way of licensing all the code that it generates. Unlike a software tool—including all the tools we use to build IF—an interactive fiction game is a work of art in the most literal sense, meaning that an open source license intended for software would fit it no better than it would any other work of prose or poetry. But then again, an IF game is also a piece of software, and it exhibits source-code patterns and techniques that its creator may legitimately wish to share with the world. What is an open source-aware IF creator to do?
-
-Some games address this by passing their code into the public domain, either through explicit license or—as in the case of [the original 42-year-old Adventure by Crowther and Woods][48]—through community fiat. Some try to split the difference, rolling their own license that allows for free re-use of a game’s exposed business logic but prohibits the creation of work derived specifically from its prose. This is the tack I took when I opened up the source of my own game, [The Warbler’s Nest][49]. Lord knows how well that’d stand up in court, but I didn’t have any better ideas at the time.
-
-Naturally, you can find work that simply puts everything under a single common license and never mind the naysayers. A prominent example is [Emily Short’s epic Counterfeit Monkey][50], released in its entirety under a Creative Commons 4.0 license. [CC frowns at its application to code][51], but you could argue that [the strangely prose-like nature of Inform 7 source][52] makes it at least a little more compatible with a CC license than a more traditional software project would be.
-
-### What now, adventurer?
-
-If you are eager to start exploring the world of interactive fiction, here are a few links to check out:
-
-
-+ As mentioned above, IFDB and the IF Archive both present browsable interfaces to more than 40 years’ worth of collected interactive fiction work. Much of this is playable in a web browser, but some require additional interpreter programs. IFDB can help you find and install these.
-
- IFComp’s annual results pages provide another view into the best of this free and archive-available work.
-
-+ The Interactive Fiction Technology Foundation is a charitable non-profit organization that helps support Twine, IFComp, and the IF Archive, as well as improve the accessibility of IF, explore IF’s use in education, and more. Join its mailing list to receive IFTF’s monthly newsletter, peruse its blog, and browse some thematic merchandise.
-
-+ John Paul Wohlscheid wrote this article about open-source IF tools earlier this year. It covers some platforms not mentioned here, so if you’re still hungry for more, have a look.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/7/interactive-fiction-tools
-
-作者:[Jason McIntosh][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jmac
-[1]:http://iftechfoundation.org/
-[2]:https://en.wikipedia.org/wiki/Zork
-[3]:https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)
-[4]:https://en.wikipedia.org/wiki/Infocom
-[5]:http://www.springthing.net/
-[6]:http://ifcomp.org/
-[7]:https://ifcomp.org/comp/2017
-[8]:http://www.fiction-interactive.fr/
-[9]:http://www.oldgamesitalia.net/content/marmellata-davventura-2018
-[10]:/file/403396
-[11]:https://opensource.com/sites/default/files/uploads/monkey.png (counterfeit monkey game screenshot)
-[12]:http://tads.org/
-[13]:https://www.alanif.se/
-[14]:http://textadventures.co.uk/quest/
-[15]:http://inform7.com/
-[16]:https://github.com/DavidKinder/Inform6
-[17]:http://inform7.com/learn/man/RB_4_1.html#e307
-[18]:/file/403386
-[19]:https://opensource.com/sites/default/files/uploads/inform.png (inform 7 IDE screenshot)
-[20]:https://www.youtube.com/watch?v=bu55q_3YtOY
-[21]:http://ifwiki.org/index.php/Glulx
-[22]:http://ifwiki.org/index.php/Z-machine
-[23]:https://github.com/erkyrath/lectrote
-[24]:https://github.com/garglk/garglk/
-[25]:http://eblong.com/zarf/glulx/quixe/
-[26]:https://github.com/curiousdannii/parchment
-[27]:https://github.com/i7/extensions
-[28]:http://twinery.org/
-[29]:https://github.com/klembot/twinejs
-[30]:/article/18/7/twine-vs-renpy-interactive-fiction
-[31]:https://tiddlywiki.com/
-[32]:https://github.com/idmillington/undum
-[33]:https://github.com/dfabulich/choicescript
-[34]:https://www.choiceofgames.com/
-[35]:https://github.com/furkle
-[36]:https://github.com/lizadaly/windrift
-[37]:http://iftechfoundation.org/committees/twine/
-[38]:/file/403391
-[39]:https://opensource.com/sites/default/files/uploads/harmonia.png (Harmonia opening screen shot)
-[40]:https://github.com/iftechfoundation/ifcomp
-[41]:https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md
-[42]:https://www.ifarchive.org/
-[43]:http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html
-[44]:https://github.com/iftechfoundation/ifarchive-ifmap-py
-[45]:https://www.ifarchive.org/indexes/if-archiveXgames
-[46]:http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22
-[47]:https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html
-[48]:http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv
-[49]:https://github.com/jmacdotorg/warblers-nest/
-[50]:https://github.com/i7/counterfeit-monkey
-[51]:https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software
-[52]:https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x
diff --git a/sources/talk/20180802 How blockchain will influence open source.md b/sources/talk/20180802 How blockchain will influence open source.md
deleted file mode 100644
index 707f4fe033..0000000000
--- a/sources/talk/20180802 How blockchain will influence open source.md
+++ /dev/null
@@ -1,185 +0,0 @@
-How blockchain will influence open source
-======
-
-
-
-What [Satoshi Nakamoto][1] started as Bitcoin a decade ago has found a lot of followers and turned into a movement for decentralization. For some, blockchain technology is a religion that will have the same impact on humanity as the Internet has had. For others, it is hype and technology suitable only for Ponzi schemes. While blockchain is still evolving and trying to find its place, one thing is for sure: It is a disruptive technology that will fundamentally transform certain industries. And I'm betting open source will be one of them.
-
-### The open source model
-
-Open source is a collaborative software development and distribution model that allows people with common interests to gather and produce something that no individual can create on their own. It allows the creation of value that is bigger than the sum of its parts. Open source is enabled by distributed collaboration tools (IRC, email, git, wiki, issue trackers, etc.), distributed and protected by an open source licensing model and often governed by software foundations such as the [Apache Software Foundation][2] (ASF), [Cloud Native Computing Foundation][3] (CNCF), etc.
-
-One interesting aspect of the open source model is the lack of financial incentives in its core. Some people believe open source work should remain detached from money and remain a free and voluntary activity driven only by intrinsic motivators (such as "common purpose" and "for the greater good”). Others believe open source work should be rewarded directly or indirectly through extrinsic motivators (such as financial incentive). While the idea of open source projects prospering only through voluntary contributions is romantic, in reality, the majority of open source contributions are done through paid development. Yes, we have a lot of voluntary contributions, but these are on a temporary basis from contributors who come and go, or for exceptionally popular projects while they are at their peak. Creating and sustaining open source projects that are useful for enterprises requires developing, documenting, testing, and bug-fixing for prolonged periods, even when the software is no longer shiny and exciting. It is a boring activity that is best motivated through financial incentives.
-
-### Commercial open source
-
-Software foundations such as ASF survive on donations and other income streams such as sponsorships, conference fees, etc. But those funds are primarily used to run the foundations, to provide legal protection for the projects, and to ensure there are enough servers to run builds, issue trackers, mailing lists, etc.
-
-Similarly, CNCF has member fees and other income streams, which are used to run the foundation and provide resources for the projects. These days, most software is not built on laptops; it is run and tested on hundreds of machines on the cloud, and that requires money. Creating marketing campaigns, brand designs, distributing stickers, etc. takes money, and some foundations can assist with that as well. At their core, foundations implement the right processes and control mechanisms to interact with users and developers and to ensure the distribution of available financial resources to open source projects for the common good.
-
-If users of open source projects can donate money and the foundations can distribute it in a fair way, what is missing?
-
-What is missing is a direct, transparent, trusted, decentralized, automated bidirectional link for transfer of value between the open source producers and the open source consumer. Currently, the link is either unidirectional or indirect:
-
- * **Unidirectional**: A developer (think of a "developer" as any role that is involved in the production, maintenance, and distribution of software) can use their brain juice and devote time to make a contribution and share that value with all open source users. But there is no reverse link.
-
- * **Indirect**: If there is a bug that affects a specific user/company, the options are:
-
- * To have in-house developers fix the bug and submit a pull request. That is ideal, but it is not always possible to hire in-house developers who are knowledgeable about hundreds of open source projects used daily.
-
- * To hire a freelancer specializing in that specific open source project and pay for the services. Ideally, the freelancer is also a committer for the open source project and can directly change the project code quickly. Otherwise, the fix might not ever make it to the project.
-
- * To approach a company providing services around the open source project. Such companies typically employ open source committers to influence and gain credibility in the community and offer products, expertise, and professional services.
-
-
-
-
-The third option has been a successful [model][4] for sustaining many open source projects. Whether they provide services (training, consulting, workshops), support, packaging, open core, or SaaS, there are companies that employ hundreds of staff members who work on open source full time. There is a long [list of companies][5] that have managed to build a successful open source business model over the years, and that list is growing steadily.
-
-The companies that back open source projects play an important role in the ecosystem: They are the catalyst between the open source projects and the users. The ones that add real value do more than just package software nicely; they can identify user needs and technology trends, and they create a full stack and even an ecosystem of open source projects to address these needs. They can take a boring project and support it for years. If there is a missing piece in the stack, they can start an open source project from scratch and build a community around it. They can acquire a closed source software company and open source the projects (here I got a little carried away, but yes, I'm talking about my employer, [Red Hat][6]).
-
-To summarize, with the commercial open source model, projects are officially or unofficially managed and controlled by a very few individuals or companies that monetize them and give back to the ecosystem by ensuring the project is successful. It is a win-win-win for open source developers, managing companies, and end users. The alternative is inactive projects and expensive closed source software.
-
-### Self-sustaining, decentralized open source
-
-For a project to become part of a reputable foundation, it must conform to certain criteria. For example, ASF and CNCF require incubation and graduation processes, respectively, where apart from all the technical and formal requirements, a project must have a healthy number of active committers and users. And that is the essence of forming a sustainable open source project. Having source code on GitHub is not the same thing as having an active open source project. The latter requires committers who write the code and users who use the code, with both groups continuously reinforcing each other by exchanging value and forming an ecosystem where everybody benefits. Some project ecosystems might be tiny and short-lived, and some may consist of multiple projects and competing service providers, with very complex interactions lasting for many years. But as long as there is an exchange of value and everybody benefits from it, the project is developed, maintained, and sustained.
-
-If you look at the ASF [Attic][7], you will find projects that have reached their end of life. When a project is no longer technologically fit for its purpose, that is usually its natural end. Similarly, in the ASF [Incubator][8], you will find tons of projects that never graduated but were instead retired. Typically, these projects were not able to build a large enough community because they were too specialized or better alternatives were available.
-
-But there are also cases where projects with high potential and superior technology cannot sustain themselves because they cannot form or maintain a functioning ecosystem for the exchange of value. The open source model and the foundations do not provide a framework and mechanisms for developers to get paid for their work or for users to get their requests heard. There isn’t a common value commitment framework for either party. As a result, some projects can sustain themselves only in the context of commercial open source, where a company acts as an intermediary and value adder between developers and users. That adds another constraint and requires a service provider company to sustain some open source projects. Ideally, users should be able to express their interest in a project and developers should be able to show their commitment to the project in a transparent and measurable way, which forms a community with common interest and intent for the exchange of value.
-
-Imagine there is a model with mechanisms and tools that enable direct interaction between open source users and developers. This includes not only code contributions through pull requests, questions over the mailing lists, GitHub stars, and stickers on laptops, but also other ways that allow users to influence projects' destinies in a richer, more self-controlled and transparent manner.
-
-This model could include incentives for actions such as:
-
- * Funding open source projects directly rather than through software foundations
-
- * Influencing the direction of projects through voting (by token holders)
-
- * Feature requests driven by user needs
-
- * On-time pull request merges
-
- * Bounties for bug hunts
-
- * Better test coverage incentives
-
- * Up-to-date documentation rewards
-
- * Long-term support guarantees
-
- * Timely security fixes
-
- * Expert assistance, support, and services
-
- * Budget for evangelism and promotion of the projects
-
- * Budget for regular boring activities
-
- * Fast email and chat assistance
-
- * Full visibility of the overall project findings, etc.
-
-
-
-
-If you haven't guessed, I'm talking about using blockchain and [smart contracts][9] to allow such interactions between users and developers—smart contracts that will give power to the hand of token holders to influence projects.
-
-![blockchain_in_open_source_ecosystem.png][11]
-
-The usage of blockchain in the open source ecosystem
-
-Existing channels in the open source ecosystem provide ways for users to influence projects through financial commitments to service providers or other limited means through the foundations. But the addition of blockchain-based technology to the open source ecosystem could open new channels for interaction between users and developers. I'm not saying this will replace the commercial open source model; most companies working with open source do many things that cannot be replaced by smart contracts. But smart contracts can spark a new way of bootstrapping new open source projects, giving a second life to commodity projects that are a burden to maintain. They can motivate developers to apply boring pull requests, write documentation, get tests to pass, etc., providing a direct value exchange channel between users and open source developers. Blockchain can add new channels to help open source projects grow and become self-sustaining in the long term, even when company backing is not feasible. It can create a new complementary model for self-sustaining open source projects—a win-win.
-
-### Tokenizing open source
-
-There are already a number of initiatives aiming to tokenize open source. Some focus only on an open source model, and some are more generic but apply to open source development as well:
-
- * [Gitcoin][12] \- grow open source, one of the most promising ones in this area.
-
- * [Oscoin][13] \- cryptocurrency for open source
-
- * [Open collective][14] \- a platform for supporting open source projects.
-
- * [FundYourselfNow][15] \- Kickstarter and ICOs for projects.
-
- * [Kauri][16] \- support for open source project documentation.
-
- * [Liberapay][17] \- a recurrent donations platform.
-
- * [FundRequest][18] \- a decentralized marketplace for open source collaboration.
-
- * [CanYa][19] \- recently acquired [Bountysource][20], now the world’s largest open source P2P bounty platform.
-
- * [OpenGift][21] \- a new model for open source monetization.
-
- * [Hacken][22] \- a white hat token for hackers.
-
- * [Coinlancer][23] \- a decentralized job market.
-
- * [CodeFund][24] \- an open source ad platform.
-
- * [IssueHunt][25] \- a funding platform for open source maintainers and contributors.
-
- * [District0x 1Hive][26] \- a crowdfunding and curation platform.
-
- * [District0x Fixit][27] \- GitHub bug bounties.
-
-
-
-
-This list is varied and growing rapidly. Some of these projects will disappear, others will pivot, but a few will emerge as the [SourceForge][28], the ASF, or the GitHub of the future. That doesn't necessarily mean they'll replace these platforms, but they'll complement them with token models and create a richer open source ecosystem. Every project can pick its distribution model (license), governing model (foundation), and incentive model (token). In all cases, this will pump fresh blood into the open source world.
-
-### The future is open and decentralized
-
- * Software is eating the world.
-
- * Every company is a software company.
-
- * Open source is where innovation happens.
-
-
-
-
-Given that, it is clear that open source is too big to fail and too important to be controlled by a few or left to its own destiny. Open source is a shared-resource system that has value to all, and more importantly, it must be managed as such. It is only a matter of time until every company on earth wants to have a stake and a say in the open source world. Unfortunately, we don't have the tools and the habits to do it yet. Such tools would allow anybody to show their appreciation of (or indifference to) software projects. It would create a direct and faster feedback loop between producers and consumers, between developers and users. It would foster innovation—innovation driven by user needs and expressed through token metrics.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/open-source-tokenomics
-
-作者:[Bilgin Ibryam][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/bibryam
-[1]:https://en.wikipedia.org/wiki/Satoshi_Nakamoto
-[2]:https://www.apache.org/
-[3]:https://www.cncf.io/
-[4]:https://medium.com/open-consensus/3-oss-business-model-progressions-dafd5837f2d
-[5]:https://docs.google.com/spreadsheets/d/17nKMpi_Dh5slCqzLSFBoWMxNvWiwt2R-t4e_l7LPLhU/edit#gid=0
-[6]:http://jobs.redhat.com/
-[7]:https://attic.apache.org/
-[8]:http://incubator.apache.org/
-[9]:https://en.wikipedia.org/wiki/Smart_contract
-[10]:/file/404421
-[11]:https://opensource.com/sites/default/files/uploads/blockchain_in_open_source_ecosystem.png (blockchain_in_open_source_ecosystem.png)
-[12]:https://gitcoin.co/
-[13]:http://oscoin.io/
-[14]:https://opencollective.com/opensource
-[15]:https://www.fundyourselfnow.com/page/about
-[16]:https://kauri.io/
-[17]:https://liberapay.com/
-[18]:https://fundrequest.io/
-[19]:https://canya.io/
-[20]:https://www.bountysource.com/
-[21]:https://opengift.io/pub/
-[22]:https://hacken.io/
-[23]:https://www.coinlancer.com/home
-[24]:https://codefund.io/
-[25]:https://issuehunt.io/
-[26]:https://blog.district0x.io/district-proposal-spotlight-1hive-283957f57967
-[27]:https://github.com/district0x/district-proposals/issues/177
-[28]:https://sourceforge.net/
diff --git a/sources/talk/20180813 Using D Features to Reimplement Inheritance and Polymorphism.md b/sources/talk/20180813 Using D Features to Reimplement Inheritance and Polymorphism.md
new file mode 100644
index 0000000000..eed249b8bf
--- /dev/null
+++ b/sources/talk/20180813 Using D Features to Reimplement Inheritance and Polymorphism.md
@@ -0,0 +1,235 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using D Features to Reimplement Inheritance and Polymorphism)
+[#]: via: (https://theartofmachinery.com/2018/08/13/inheritance_and_polymorphism_2.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Using D Features to Reimplement Inheritance and Polymorphism
+======
+
+Some months ago I showed [how inheritance and polymorphism work in compiled languages][1] by reimplementing them with basic structs and function pointers. I wrote that code in D, but it could be translated directly to plain old C. In this post I’ll show how to take advantage of D’s features to make DIY inheritance a bit more ergonomic to use.
+
+Although [I have used these tricks in real code][2], I’m honestly just writing this because I think it’s neat what D can do, and because it helps explain how high-level features of D can be implemented — using the language itself.
+
+### `alias this`
+
+In the original version of the code, the `Run` command inherited from the `Command` base class by including a `Command` instance as its first member. `Run` and `Command` were still considered completely different types, so this meant explicit typecasting was needed every time a `Run` instance was polymorphically used as a `Command`.
+
+The D type system actually allows declaring a struct to be a subtype of another struct (or even of a primitive type) using a feature called “[`alias this`][3]”. Here’s a simple example of how it works:
+
+```
+struct Base
+{
+ int x;
+}
+
+struct Derived
+{
+ // Add an instance of Base as a member like before...
+ Base _base;
+ // ...but this time we declare that the member is used for subtyping
+ alias _base this;
+}
+
+void foo(Base b)
+{
+ // ...
+}
+
+void main()
+{
+ Derived d;
+
+ // Derived "inherits" members from Base
+ d.x = 42;
+
+ // Derived instances can be used where a Base instance is expected
+ foo(d);
+}
+```
+
+The code above works in the same way as the code in the previous blog post, but `alias this` tells the type system what we’re doing. This allows us to work _with_ the type system more, and do less typecasting. The example showed a `Derived` instance being passed by value as a `Base` instance, but passing by `ref` also works. Unfortunately, D version 2.081 won’t implicitly convert a `Derived*` to a `Base*`, but maybe it’ll be implemented in future.
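+
+Until then, one explicit workaround (a small sketch using the `_base` member from the example above) is to take the address of the aliased member yourself:
+
+```
+Derived d;
+Base* bp = &d._base;  // explicit, because Derived* won't implicitly become Base*
+assert(bp.x == d.x);
+```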
+
+Here’s an example of `alias this` being used to implement some slightly more realistic inheritance:
+
+```
+import io = std.stdio;
+
+struct Animal
+{
+ struct VTable
+ {
+ void function(Animal* instance) greet;
+ }
+ immutable(VTable)* vtable;
+
+ void greet()
+ {
+ vtable.greet(&this);
+ }
+}
+
+struct Penguin
+{
+ private:
+ static immutable Animal.VTable vtable = {greet: &greetImpl};
+ auto _base = Animal(&vtable);
+ alias _base this;
+
+ public:
+ string name;
+
+ this(string name) pure
+ {
+ this.name = name;
+ }
+
+ static void greetImpl(Animal* instance)
+ {
+ // We still need one typecast here because the type system can't guarantee this is okay
+ auto penguin = cast(Penguin*) instance;
+ io.writef("I'm %s the penguin and I can swim.\n", penguin.name);
+ }
+}
+
+void main()
+{
+ auto p = Penguin("Paul");
+
+ // p inherits members from Animal
+ p.greet();
+
+ // and can be passed to functions that work with Animal instances
+ doThings(p);
+}
+
+void doThings(ref Animal a)
+{
+ a.greet();
+}
+```
+
+Unlike the code in the previous blog post, this version uses a vtable, just like the polymorphic inheritance in normal compiled languages. As explained in the previous post, every `Penguin` instance will use the same list of function pointers for its virtual functions. So instead of repeating the function pointers in every instance, we can have one list of function pointers that’s shared across all `Penguin` instances (i.e., a list that’s a `static` member). That’s all the vtable is, but it’s how real-world compiled OOP languages work.
+
+### Template Mixins
+
+If we implemented another `Animal` subtype, we’d have to add exactly the same vtable and base member boilerplate as in `Penguin`:
+
+```
+struct Snake
+{
+ // This bit is exactly the same as before
+ private:
+ static immutable Animal.VTable vtable = {greet: &greetImpl};
+ auto _base = Animal(&vtable);
+ alias _base this;
+
+ public:
+
+ static void greetImpl(Animal* instance)
+ {
+ io.writeln("I'm an unfriendly snake. Go away.");
+ }
+}
+```
+
+D has another feature for dumping this kind of boilerplate code into things: [template mixins][4].
+
+```
+mixin template DeriveAnimal()
+{
+ private:
+ static immutable Animal.VTable vtable = {greet: &greetImpl};
+ auto _base = Animal(&vtable);
+ alias _base this;
+}
+
+struct Snake
+{
+ mixin DeriveAnimal;
+
+ static void greetImpl(Animal* instance)
+ {
+ io.writeln("I'm an unfriendly snake. Go away.");
+ }
+}
+```
+
+Actually, template mixins can take parameters, so it’s possible to create a generic `Derive` mixin that inherits from any struct that defines a `VTable` struct. Because template mixins can inject any kind of declaration, including template functions, the `Derive` mixin can even handle more complex things, like the typecast from `Animal*` to the subtype.
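+
+As a sketch of what that parameterized version might look like (assuming, as conventions for this sketch only, that each base struct names its vtable type `VTable`, takes the vtable pointer as its first field, and that the deriving struct provides a matching `<slot>Impl` function for every vtable slot):
+
+```
+import std.traits : FieldNameTuple;
+
+mixin template Derive(Base)
+{
+    private:
+    // Fill every vtable slot with the matching "<slot>Impl" function
+    // from the struct this template is mixed into
+    static immutable Base.VTable vtable = () {
+        Base.VTable vt;
+        static foreach (slot; FieldNameTuple!(Base.VTable))
+        {
+            mixin("vt." ~ slot ~ " = &" ~ slot ~ "Impl;");
+        }
+        return vt;
+    }();
+    auto _base = Base(&vtable);
+    alias _base this;
+}
+```
+
+With something like this in place, `struct Snake { mixin Derive!Animal; ... }` only needs to define `greetImpl`, and adding a new slot to `Animal.VTable` requires no change to the mixin itself.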
+
+By the way, [the `mixin` statement can also be used to “paste” code into places][5]. It’s like a hygienic version of the C preprocessor, and it’s used below (and also in this [compile-time Brainfuck compiler][6]).
+
+### `opDispatch()`
+
+There’s some highly redundant wrapper code inside the definition of `Animal`:
+
+```
+void greet()
+{
+ vtable.greet(&this);
+}
+```
+
+If we added another virtual method, we’d have to add another wrapper:
+
+```
+void eat(Food food)
+{
+ vtable.eat(&this, food);
+}
+```
+
+But D has `opDispatch()`, which provides a way to automatically add members to a struct. When an `opDispatch()` is defined in a struct, any time the compiler fails to find a member, it tries the `opDispatch()` template function. In other words, it’s a fallback for member lookup. A fallback to a fully generic `return vtable.MEMBER(&this, args)` will effectively fill in all the virtual function dispatchers for us:
+
+```
+auto opDispatch(string member_name, Args...)(auto ref Args args)
+{
+ mixin("return vtable." ~ member_name ~ "(&this, args);");
+}
+```
+
+The downside is that if the `opDispatch()` fails for any reason, the compiler gives up on the member lookup and we get a generic “Error: no property foo for type Animal”. This is confusing if `foo` is actually a valid virtual member but was called with arguments of the wrong type, or something, so `opDispatch()` needs some good error handling (e.g., with [`static assert`][7]).
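+
+One way to get friendlier errors (a sketch, not from the original code) is to guard the generated dispatcher with a `static assert` that checks the name against the `VTable` definition before pasting in the call:
+
+```
+auto opDispatch(string member_name, Args...)(auto ref Args args)
+{
+    // Fail early with a readable message for misspelled virtual members
+    static assert(__traits(hasMember, VTable, member_name),
+        "Animal has no virtual function named '" ~ member_name ~ "'");
+    mixin("return vtable." ~ member_name ~ "(&this, args);");
+}
+```
+
+A call to a misspelled member should now fail with the message above, while a valid name called with the wrong argument types still produces a normal error from the `mixin` line.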
+
+### `static foreach`
+
+An alternative is to use a newer feature of D: [`static foreach`][8]. This is a powerful tool that can create declarations inside a struct (and other places) using a loop. We can directly read a list of members from the `VTable` definition by using some compile-time reflection:
+
+```
+import std.traits : FieldNameTuple;
+static foreach (member; FieldNameTuple!VTable)
+{
+ mixin("auto " ~ member ~ "(Args...)(auto ref Args args) { return vtable." ~ member ~ "(&this, args); }");
+}
+```
+
+The advantage in this case is that we’re explicitly creating struct members. Now the compiler can distinguish between a member that shouldn’t exist at all, and a member that exists but isn’t used properly.
+
+### It’s all just like the C equivalent
+
+As I said, this is basically just a tour-de-force of ways that D can improve the code from the previous post. However, the original motivation for this blog post was people asking me about tricks I used to implement polymorphic inheritance in bare metal D code, so I’ll finish up by saying this: All this stuff works in [`-betterC`][9] code, and none of it requires extra runtime support. The code in this post implements the same kind of thing as in the [previous post][1]. It’s just expressed in a more compact and less error-prone way.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2018/08/13/inheritance_and_polymorphism_2.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: /2018/04/02/inheritance_and_polymorphism.html
+[2]: https://gitlab.com/sarneaud/xanthe/blob/master/src/game/rigid_body.d#L15
+[3]: https://dlang.org/spec/class.html#alias-this
+[4]: https://dlang.org/spec/template-mixin.html
+[5]: https://dlang.org/articles/mixin.html
+[6]: /2017/12/31/compile_time_brainfuck.html
+[7]: https://dlang.org/spec/version.html#StaticAssert
+[8]: https://dlang.org/spec/version.html#staticforeach
+[9]: https://dlang.org/blog/2018/06/11/dasbetterc-converting-make-c-to-d/
diff --git a/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md b/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md
deleted file mode 100644
index c90262dfee..0000000000
--- a/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md
+++ /dev/null
@@ -1,116 +0,0 @@
-Debian Turns 25! Here are Some Interesting Facts About Debian Linux
-======
-One of the oldest Linux distributions still in development, Debian has just turned 25. Let’s have a look at some interesting facts about this awesome FOSS project.
-
-### 10 Interesting facts about Debian Linux
-
-![Interesting facts about Debian Linux][1]
-
-The facts presented here have been collected from various sources available on the internet. They are true to my knowledge, but if you find any error, please let me know so I can update the article.
-
-#### 1\. One of the oldest Linux distributions still under active development
-
-The [Debian project][2] was announced on 16th August 1993 by its founder, Ian Murdock. Like Linux creator [Linus Torvalds][3], Ian was a college student when he announced the Debian project.
-
-
-
-#### 2\. Some people get tattoos while some name their project after their girlfriend
-
-The project was named by combining the names of Ian and his then-girlfriend Debra Lynn. Ian and Debra got married and had three children. They divorced in 2008.
-
-#### 3\. Ian Murdock: The Maverick behind the creation of Debian project
-
-![Debian Founder Ian Murdock][4]
-Ian Murdock
-
-[Ian Murdock][5] led the Debian project from August 1993 until March 1996. He shaped Debian into a community project based on the principles of Free Software. The [Debian Manifesto][6] and the [Debian Social Contract][7] still govern the project.
-
-He founded a commercial Linux company called [Progeny Linux Systems][8] and worked for a number of Linux related companies such as Sun Microsystems, Linux Foundation and Docker.
-
-Sadly, [Ian committed suicide in December 2015][9]. His contribution to Debian is certainly invaluable.
-
-#### 4\. Debian is a community project in the true sense
-
-Debian is a community-based project in the true sense. No one ‘owns’ Debian. It is developed by volunteers from all over the world. It is not a commercial project backed by corporations like many other Linux distributions.
-
-The Debian Linux distribution is composed of Free Software only. It’s one of the few Linux distributions that is true to the spirit of [Free Software][10] and takes pride in being called a GNU/Linux distribution.
-
-Debian has its own non-profit organization called [Software in the Public Interest][11] (SPI). Along with Debian, SPI supports many other open source projects financially.
-
-#### 5\. Debian and its 3 branches
-
-Debian has three branches or versions: Debian Stable, Debian Unstable (Sid) and Debian Testing.
-
-Debian Stable, as the name suggests, is the stable branch that has all the software and packages well tested to give you a rock solid stable system. Since it takes time before a well-tested software lands in the stable branch, Debian Stable often contains older versions of programs and hence people joke that Debian Stable means stale.
-
-[Debian Unstable][12], codenamed Sid, is the version where all the development of Debian takes place. This is where new packages first land or get developed. After that, these changes are propagated to the testing version.
-
-[Debian Testing][13] is the next release after the current stable release. If the current stable release is N, Debian testing would be the N+1 release. The packages from Debian Unstable are tested in this version. After all the new changes are well tested, Debian Testing is then ‘promoted’ as the new Stable version.
-
-#### 6\. There is no fixed release schedule
-
-There is no strict release schedule for Debian; a new version is released when it is ready.
-
-#### 7\. There was no Debian 1.0 release
-
-Debian 1.0 was never released. The CD vendor, InfoMagic, accidentally shipped a development release of Debian and entitled it 1.0 in 1996. To prevent confusion between the CD version and the actual Debian release, the Debian Project renamed its next release to “Debian 1.1”.
-
-#### 8\. Debian releases are codenamed after Toy Story characters
-
-![Toy Story Characters][14]
-
-Debian releases are codenamed after the characters from Pixar’s hit animation movie series [Toy Story][15].
-
-Debian 1.1 was the first release with a codename. It was named Buzz after the Toy Story character Buzz Lightyear.
-
-It was in 1996 and [Bruce Perens][16] had taken over leadership of the Project from Ian Murdock. Bruce was working at Pixar at the time.
-
-This trend continued and all subsequent releases were codenamed after Toy Story characters. For example, the current stable release is Stretch, while the upcoming release has been codenamed Buster.
-
-The unstable Debian version is codenamed Sid. This character in Toy Story is a kid with emotional problems and he enjoys breaking toys. This is symbolic in the sense that Debian Unstable might break your system with untested packages.
-
-#### 9\. Debian also has a BSD distribution
-
-Debian is not limited to Linux. Debian also has a distribution based on FreeBSD kernel. It is called [Debian GNU/kFreeBSD][17].
-
-#### 10\. Google uses Debian
-
-[Google uses Debian][18] as its in-house development platform. Earlier, Google used a customized version of Ubuntu as its development platform. Recently, they opted for the Debian-based gLinux.
-
-#### Happy 25th birthday Debian
-
-![Happy 25th birthday Debian][19]
-
-I hope you liked these little facts about Debian. Facts like these are among the reasons people love Debian.
-
-I wish a very happy 25th birthday to Debian. Please continue to be awesome. Cheers :)
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/debian-facts/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Interesting-facts-about-debian.jpeg
-[2]:https://www.debian.org
-[3]:https://itsfoss.com/linus-torvalds-facts/
-[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ian-murdock.jpg
-[5]:https://en.wikipedia.org/wiki/Ian_Murdock
-[6]:https://www.debian.org/doc/manuals/project-history/ap-manifesto.en.html
-[7]:https://www.debian.org/social_contract
-[8]:https://en.wikipedia.org/wiki/Progeny_Linux_Systems
-[9]:https://itsfoss.com/ian-murdock-dies-mysteriously/
-[10]:https://www.fsf.org/
-[11]:https://www.spi-inc.org/
-[12]:https://www.debian.org/releases/sid/
-[13]:https://www.debian.org/releases/testing/
-[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/toy-story-characters.jpeg
-[15]:https://en.wikipedia.org/wiki/Toy_Story_(franchise)
-[16]:https://perens.com/about-bruce-perens/
-[17]:https://wiki.debian.org/Debian_GNU/kFreeBSD
-[18]:https://itsfoss.com/goobuntu-glinux-google/
-[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/happy-25th-birthday-Debian.jpeg
diff --git a/sources/talk/20180904 How blockchain can complement open source.md b/sources/talk/20180904 How blockchain can complement open source.md
deleted file mode 100644
index 7712539f3f..0000000000
--- a/sources/talk/20180904 How blockchain can complement open source.md
+++ /dev/null
@@ -1,95 +0,0 @@
-How blockchain can complement open source
-======
-
-
-
-[The Cathedral and The Bazaar][1] is a classic open source story, written 20 years ago by Eric Steven Raymond. In the story, Eric describes a new revolutionary software development model in which complex software projects are built without (or with very little) central management. This new model is open source.
-
-Eric's story compares two models:
-
- * The classic model (represented by the cathedral), in which software is crafted by a small group of individuals in a closed and controlled environment through slow and stable releases.
- * And the new model (represented by the bazaar), in which software is crafted in an open environment where individuals can participate freely but still produce a stable and coherent system.
-
-
-
-Some of the reasons open source is so successful can be traced back to the founding principles Eric describes. Releasing early, releasing often, and accepting the fact that many heads are inevitably better than one allows open source projects to tap into the world’s pool of talent (and few companies can match that using the closed source model).
-
-Two decades after Eric's reflective analysis of the hacker community, we see open source becoming dominant. It is no longer a model only for scratching a developer’s personal itch, but instead, the place where innovation happens. Even the world's [largest][2] software companies are transitioning to this model in order to continue dominating.
-
-### A barter system
-
-If we look closely at how the open source model works in practice, we realize that it is a closed system, exclusive to open source developers and techies. The only way to influence the direction of a project is by joining the open source community, understanding the written and unwritten rules, learning how to contribute and what the coding standards are, and doing the work yourself.
-
-This is how the bazaar works, and it is where the barter system analogy comes from. A barter system is a method of exchanging services and goods for other services and goods. In the bazaar (where the software is built), that means in order to take something, you must also be a producer yourself and give something back in return: you exchange your time and knowledge for getting something done. A bazaar is a place where open source developers interact with other open source developers and produce open source software the open source way.
-
-The barter system is a great step forward and an evolution from the state of self-sufficiency where everybody must be a jack of all trades. The bazaar (open source model) using the barter system allows people with common interests and different skills to gather, collaborate, and create something that no individual can create on their own. The barter system is simple and lacks complex problems of the modern monetary systems, but it also has some limitations, such as:
-
- * Lack of divisibility: In the absence of a common medium of exchange, a large indivisible commodity/value cannot be exchanged for a smaller commodity/value. For example, if you want to make even a small change in an open source project, you may sometimes still need to go through a high entry barrier.
- * Storing value: If a project is important to your company, you may want to have a large investment/commitment in it. But since it is a barter system among open source developers, the only way to have a strong say is by employing many open source committers, and that is not always possible.
- * Transferring value: If you have invested in a project (trained employees, hired open source developers) and want to move focus to another project, it is not possible to transfer expertise, reputation, and influence quickly.
- * Temporal decoupling: The barter system does not provide a good mechanism for deferred or advance commitments. In the open source world, that means a user cannot express commitment or interest in a project in a measurable way in advance, or continuously for future periods.
-
-
-
-Below, we will explore how to address these limitations using the back door to the bazaar.
-
-### A currency system
-
-People hang around the bazaar for different reasons: Some are there to learn, some are there to scratch a personal developer's itch, and some work for large software firms. Because the only way to have a say in the bazaar is to become part of the open source community and join the barter system, many large software companies employ these developers and pay them in monetary value in order to gain credibility in the open source world. This represents the use of a currency system to influence the bazaar. Open source is no longer only for scratching the personal developer itch. It also accounts for a significant part of the overall software production worldwide, and there are many who want to have an influence.
-
-Open source sets the guiding principles through which developers interact and build a coherent system in a distributed way. It dictates how a project is governed, how software is built, and how the output is distributed to users. It is an open consensus model for decentralized entities building quality software together. But the open source model does not cover how open source is subsidized. Whether it is sponsored, directly or indirectly, through intrinsic or extrinsic motivators is irrelevant to the bazaar.
-
-
-
-Currently, there is no equivalent of the decentralized open source development model for subsidization purposes. The majority of open source subsidization is centralized, where typically one company dominates a project by employing the majority of the open source developers of that project. And to be honest, this is currently the best-case scenario, as it guarantees that the developers will be paid for a long period and the project will continue to flourish.
-
-There are also exceptions to the project monopoly scenario: For example, some Cloud Native Computing Foundation projects are developed by a large number of competing companies. Also, the Apache Software Foundation aims for its projects not to be dominated by a single vendor by encouraging diverse contributors, but most of the popular projects, in reality, are still single-vendor projects.
-
-What we are missing is an open and decentralized model that works like the bazaar without a central coordination and ownership, where consumers (open source users) and producers (open source developers) interact with each other, driven by market forces and open source value. In order to complement open source, such a model must also be open and decentralized, and this is why I think the blockchain technology would [fit best here][3].
-
-Most of the existing blockchain (and non-blockchain) platforms that aim to subsidize open source development are targeting primarily bug bounties, small and piecemeal tasks. A few also focus on funding new open source projects. But not many aim to provide mechanisms for sustaining continued development of open source projects—basically, a system that would emulate the behavior of an open source service provider company, or open core, open source-based SaaS product company: ensuring developers get continued and predictable incentives and guiding the project development based on the priorities of the incentivizers; i.e., the users. Such a model would address the limitations of the barter system listed above:
-
- * Allow divisibility: If you want something small fixed, you can pay a small amount rather than the full premium of becoming an open source developer for a project.
- * Storing value: You can invest a large amount into a project and ensure both its continued development and that your voice is heard.
- * Transferring value: At any point, you can stop investing in the project and move funds into other projects.
- * Temporal decoupling: Allow regular recurring payments and subscriptions.
-
-
-
-There would also be other benefits, purely from the fact that such a blockchain-based system is transparent and decentralized: quantifying a project’s value/usefulness based on its users’ commitment, open roadmap commitment, decentralized decision making, etc.
-
-### Conclusion
-
-On the one hand, we see large companies hiring open source developers and acquiring open source startups and even foundational platforms (such as Microsoft buying GitHub). Many, if not most, long-running successful open source projects are centralized around a single vendor. The significance of open source and its centralization is a fact.
-
-On the other hand, the challenges around [sustaining open source][4] software are becoming more apparent, and many are investigating this space and its foundational issues more deeply. There are a few projects with high visibility and a large number of contributors, but there are also many other still-important projects that lack enough contributors and maintainers.
-
-There are [many efforts][3] trying to address the challenges of open source through blockchain. These projects should improve the transparency, decentralization, and subsidization and establish a direct link between open source users and developers. This space is still very young, but it is progressing quickly, and with time, the bazaar is going to have a cryptocurrency system.
-
-Given enough time and adequate technology, decentralization is happening at many levels:
-
- * The internet is a decentralized medium that has unlocked the world’s potential for sharing and acquiring knowledge.
- * Open source is a decentralized collaboration model that has unlocked the world’s potential for innovation.
- * Similarly, blockchain can complement open source and become the decentralized open source subsidization model.
-
-
-
-Follow me on [Twitter][5] for other posts in this space.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/barter-currency-system
-
-作者:[Bilgin Ibryam][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/bibryam
-[1]: http://catb.org/
-[2]: http://oss.cash/
-[3]: https://opensource.com/article/18/8/open-source-tokenomics
-[4]: https://www.youtube.com/watch?v=VS6IpvTWwkQ
-[5]: http://twitter.com/bibryam
diff --git a/sources/talk/20180916 The Rise and Demise of RSS (Old Version).md b/sources/talk/20180916 The Rise and Demise of RSS (Old Version).md
new file mode 100644
index 0000000000..b6e1a4fdd9
--- /dev/null
+++ b/sources/talk/20180916 The Rise and Demise of RSS (Old Version).md
@@ -0,0 +1,278 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Rise and Demise of RSS (Old Version))
+[#]: via: (https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html)
+[#]: author: (Two-Bit History https://twobithistory.org)
+
+The Rise and Demise of RSS (Old Version)
+======
+
+_A newer version of this post was published on [December 18th, 2018][1]._
+
+There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
+
+In the late 1990s, in the go-go years between Netscape’s IPO and the Dot-com crash, everyone could see that the web was going to be an even bigger deal than it already was, even if they didn’t know exactly how it was going to get there. One theory was that the web was about to be revolutionized by syndication. The web, originally built to enable a simple transaction between two parties—a client fetching a document from a single host server—would be broken open by new standards that could be used to repackage and redistribute entire websites through a variety of channels. Kevin Werbach, writing for _Release 1.0_, a newsletter influential among investors in the 1990s, predicted that syndication “would evolve into the core model for the Internet economy, allowing businesses and individuals to retain control over their online personae while enjoying the benefits of massive scale and scope.”[1][2] He invited his readers to imagine a future in which fencing aficionados, rather than going directly to an “online sporting goods site” or “fencing equipment retailer,” could buy a new épée directly through e-commerce widgets embedded into their favorite website about fencing.[2][3] Just like in the television world, where big networks syndicate their shows to smaller local stations, syndication on the web would allow businesses and publications to reach consumers through a multitude of intermediary sites. This would mean, as a corollary, that consumers would gain significant control over where and how they interacted with any given business or publication on the web.
+
+RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was “the leading example of a lightweight syndication protocol.”[3][4] Another contemporaneous article called RSS the first protocol to realize the potential of XML.[4][5] It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, RSS [appears to be a dying technology][6], now used chiefly by podcasters and programmers with tech blogs. Moreover, among that latter group, RSS is perhaps used as much for its political symbolism as its actual utility. Though of course some people really do have RSS readers, stubbornly adding an RSS feed to your blog, even in 2018, is a reactionary statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbach’s imagining.
+
+The future once looked so bright for RSS. What happened? Was its downfall inevitable, or was it precipitated by the bitter infighting that thwarted the development of a single RSS standard?
+
+### Muddied Water
+
+RSS was invented twice. This meant it never had an obvious owner, a state of affairs that spawned endless debate and acrimony. But it also suggests that RSS was an important idea whose time had come.
+
+In 1998, Netscape was struggling to envision a future for itself. Its flagship product, the Netscape Navigator web browser—once preferred by 80% of web users—was quickly losing ground to Internet Explorer. So Netscape decided to compete in a new arena. In May, a team was brought together to start work on what was known internally as “Project 60.”[5][7] Two months later, Netscape announced “My Netscape,” a web portal that would fight it out with other portals like Yahoo, MSN, and Excite.
+
+The following year, in March, Netscape announced an addition to the My Netscape portal called the “My Netscape Network.” My Netscape users could now customize their My Netscape page so that it contained “channels” featuring the most recent headlines from sites around the web. As long as your favorite website published a special file in a format dictated by Netscape, you could add that website to your My Netscape page, typically by clicking an “Add Channel” button that participating websites were supposed to add to their interfaces. A little box containing a list of linked headlines would then appear.
+
+![A My Netscape Network Channel][8]
+
+The special file that participating websites had to publish was an RSS file. In the My Netscape Network announcement, Netscape explained that RSS stood for “RDF Site Summary.”[6][9] This was somewhat of a misnomer. RDF, or the Resource Description Framework, is basically a grammar for describing certain properties of arbitrary resources. (See [my article about the Semantic Web][10] if that sounds really exciting to you.) In 1999, a draft specification for RDF was being considered by the W3C. Though RSS was supposed to be based on RDF, the example RSS document Netscape actually released didn’t use any RDF tags at all, even if it declared the RDF XML namespace. In a document that accompanied the Netscape RSS specification, Dan Libby, one of the specification’s authors, explained that “in this release of MNN, Netscape has intentionally limited the complexity of the RSS format.”[7][11] The specification was given the 0.90 version number, the idea being that subsequent versions would bring RSS more in line with the W3C’s XML specification and the evolving draft of the RDF specification.
+
+RSS had been cooked up by Libby and another Netscape employee, Ramanathan Guha. Guha previously worked for Apple, where he came up with something called the Meta Content Framework. MCF was a format for representing metadata about anything from web pages to local files. Guha demonstrated its power by developing an application called [HotSauce][12] that visualized relationships between files as a network of nodes suspended in 3D space. After leaving Apple for Netscape, Guha worked with a Netscape consultant named Tim Bray to produce an XML-based version of MCF, which in turn became the foundation for the W3C’s RDF draft.[8][13] It’s no surprise, then, that Guha and Libby were keen to incorporate RDF into RSS. But Libby later wrote that the original vision for an RDF-based RSS was pared back because of time constraints and the perception that RDF was “‘too complex’ for the ‘average user.’”[9][14]
+
+While Netscape was trying to win eyeballs in what became known as the “portal wars,” elsewhere on the web a new phenomenon known as “weblogging” was being pioneered.[10][15] One of these pioneers was Dave Winer, CEO of a company called UserLand Software, which developed early content management systems that made blogging accessible to people without deep technical fluency. Winer ran his own blog, [Scripting News][16], which today is one of the oldest blogs on the internet. More than a year before Netscape announced My Netscape Network, on December 15th, 1997, Winer published a post announcing that the blog would now be available in XML as well as HTML.[11][17]
+
+Dave Winer’s XML format became known as the Scripting News format. It was supposedly similar to Microsoft’s Channel Definition Format (a “push technology” standard submitted to the W3C in March, 1997), but I haven’t been able to find a file in the original format to verify that claim.[12][18] Like Netscape’s RSS, it structured the content of Winer’s blog so that it could be understood by other software applications. When Netscape released RSS 0.90, Winer and UserLand Software began to support both formats. But Winer believed that Netscape’s format was “woefully inadequate” and “missing the key thing web writers and readers need.”[13][19] It could only represent a list of links, whereas the Scripting News format could represent a series of paragraphs, each containing one or more links.
+
+In June, 1999, two months after Netscape’s My Netscape Network announcement, Winer introduced a new version of the Scripting News format, called ScriptingNews 2.0b1. Winer claimed that he decided to move ahead with his own format only after trying but failing to get anyone at Netscape to care about RSS 0.90’s deficiencies.[14][20] The new version of the Scripting News format added several items to the `<header>` element that brought the Scripting News format to parity with RSS. But the two formats continued to differ in that the Scripting News format, which Winer nicknamed the “fat” syndication format, could include entire paragraphs and not just links.
+
+Netscape got around to releasing RSS 0.91 the very next month. The updated specification was a major about-face. RSS no longer stood for “RDF Site Summary”; it now stood for “Rich Site Summary.” All the RDF—and there was almost none anyway—was stripped out. Many of the Scripting News tags were incorporated. In the text of the new specification, Libby explained:
+
+> RDF references removed. RSS was originally conceived as a metadata format providing a summary of a website. Two things have become clear: the first is that providers want more of a syndication format than a metadata format. The structure of an RDF file is very precise and must conform to the RDF data model in order to be valid. This is not easily human-understandable and can make it difficult to create useful RDF files. The second is that few tools are available for RDF generation, validation and processing. For these reasons, we have decided to go with a standard XML approach.[15][21]
+
+Winer was enormously pleased with RSS 0.91, calling it “even better than I thought it would be.”[16][22] UserLand Software adopted it as a replacement for the existing ScriptingNews 2.0b1 format. For a while, it seemed that RSS finally had a single authoritative specification.
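
For reference, a minimal RSS 0.91 channel looked roughly like this (the element names follow the 0.91 specification; the feed content here is invented). Note that each `<item>` is essentially a linked headline with a short description, which is exactly the links-only limitation Winer's "fat" format was designed to overcome:

```
<rss version="0.91">
  <channel>
    <title>Scripting News</title>
    <link>http://www.scripting.com/</link>
    <description>A hypothetical channel description</description>
    <language>en-us</language>
    <item>
      <title>A headline</title>
      <link>http://www.example.com/a-post</link>
      <description>A short summary of the linked page</description>
    </item>
  </channel>
</rss>
```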
+
+### The Great Fork
+
+A year later, the RSS 0.91 specification had become woefully inadequate. There were all sorts of things people were trying to do with RSS that the specification did not address. There were other parts of the specification that seemed unnecessarily constraining—each RSS channel could only contain a maximum of 15 items, for example.
+
+By that point, RSS had been adopted by several more organizations. Other than Netscape, which seemed to have lost interest after RSS 0.91, the big players were Dave Winer’s UserLand Software; O’Reilly Net, which ran an RSS aggregator called Meerkat; and Moreover.com, which also ran an RSS aggregator focused on news.[17][23] Via mailing list, representatives from these organizations and others regularly discussed how to improve on RSS 0.91. But there were deep disagreements about what those improvements should look like.
+
+The mailing list in which most of the discussion occurred was called the Syndication mailing list. [An archive of the Syndication mailing list][24] is still available. It is an amazing historical resource. It provides a moment-by-moment account of how those deep disagreements eventually led to a political rupture of the RSS community.
+
+On one side of the coming rupture was Winer. Winer was impatient to evolve RSS, but he wanted to change it only in relatively conservative ways. In June, 2000, he published his own RSS 0.91 specification on the UserLand website, meant to be a starting point for further development of RSS. It made no significant changes to the 0.91 specification published by Netscape. Winer claimed in a blog post that accompanied his specification that it was only a “cleanup” documenting how RSS was actually being used in the wild, which was needed because the Netscape specification was no longer being maintained.[18][25] In the same post, he argued that RSS had succeeded so far because it was simple, and that by adding namespaces or RDF back to the format—some had suggested this be done in the Syndication mailing list—it “would become vastly more complex, and IMHO, at the content provider level, would buy us almost nothing for the added complexity.” In a message to the Syndication mailing list sent around the same time, Winer suggested that these issues were important enough that they might lead him to create a fork:
+
+> I’m still pondering how to move RSS forward. I definitely want ICE-like stuff in RSS2, publish and subscribe is at the top of my list, but I am going to fight tooth and nail for simplicity. I love optional elements. I don’t want to go down the namespaces and schema road, or try to make it a dialect of RDF. I understand other people want to do this, and therefore I guess we’re going to get a fork. I have my own opinion about where the other fork will lead, but I’ll keep those to myself for the moment at least.[19][26]
+
+Arrayed against Winer were several other people, including Rael Dornfest of O’Reilly, Ian Davis (responsible for a search startup called Calaba), and a precocious, 14-year-old Aaron Swartz, who all thought that RSS needed namespaces in order to accommodate the many different things everyone wanted to do with it. On another mailing list hosted by O’Reilly, Davis proposed a namespace-based module system, writing that such a system would “make RSS as extensible as we like rather than packing in new features that over-complicate the spec.”[20][27] The “namespace camp” believed that RSS would soon be used for much more than the syndication of blog posts, so namespaces, rather than being a complication, were the only way to keep RSS from becoming unmanageable as it supported more and more use cases.
+
+At the root of this disagreement about namespaces was a deeper disagreement about what RSS was even for. Winer had invented his Scripting News format to syndicate the posts he wrote for his blog. Guha and Libby at Netscape had designed RSS and called it “RDF Site Summary” because in their minds it was a way of recreating a site in miniature within Netscape’s online portal. Davis, writing to the Syndication mailing list, explained his view that RSS was “originally conceived as a way of building mini sitemaps,” and that now he and others wanted to expand RSS “to encompass more types of information than simple news headlines and to cater for the new uses of RSS that have emerged over the last 12 months.”[21][28] Winer wrote a prickly reply, stating that his Scripting News format was in fact the original RSS and that it had been meant for a different purpose. Given that the people most involved in the development of RSS disagreed about why RSS had even been created, a fork seems to have been inevitable.
+
+The fork happened after Dornfest announced a proposed RSS 1.0 specification and formed the RSS-DEV Working Group—which would include Davis, Swartz, and several others but not Winer—to get it ready for publication. In the proposed specification, RSS once again stood for “RDF Site Summary,” because RDF had been added back in to represent metadata properties of certain RSS elements. The specification acknowledged Winer by name, giving him credit for popularizing RSS through his “evangelism.”[22][29] But it also argued that just adding more elements to RSS without providing for extensibility with a module system—that is, what Winer was suggesting—“sacrifices scalability.” The specification went on to define a module system for RSS based on XML namespaces.
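+To make the namespace idea concrete, here is an illustrative (not taken from the specification) comparison of a plain RSS-style item with an RSS 1.0 item extended via the Dublin Core module, one of the modules published alongside RSS 1.0. The item content and URLs are invented for the example; only the `dc` namespace URI is real:
+
+```
+<!-- A plain RSS 0.91-style item: a fixed vocabulary, no namespaces -->
+<item>
+  <title>Example headline</title>
+  <link>http://example.com/post</link>
+</item>
+
+<!-- The same item in RSS 1.0: the Dublin Core module adds metadata
+     through the "dc" XML namespace instead of new core elements -->
+<item rdf:about="http://example.com/post"
+      xmlns:dc="http://purl.org/dc/elements/1.1/">
+  <title>Example headline</title>
+  <link>http://example.com/post</link>
+  <dc:creator>Jane Author</dc:creator>
+  <dc:date>2000-12-09</dc:date>
+</item>
+```
+
+Because each module lives in its own namespace, aggregators that don’t understand a module can simply ignore its elements—this is the extensibility the RSS-DEV Working Group argued the core-elements-only approach sacrificed.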
+
+Winer was furious that the RSS-DEV Working Group had arrogated the “RSS 1.0” name for themselves.[23][30] In another mailing list about decentralization, he described what the RSS-DEV Working Group had done as theft.[24][31] Other members of the Syndication mailing list also felt that the RSS-DEV Working Group should not have used the name “RSS” without unanimous agreement from the community on how to move RSS forward. But the Working Group stuck with the name. Dan Brickley, another member of the RSS-DEV Working Group, defended this decision by arguing that “RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc).”[25][32] He essentially felt that the RSS 1.0 effort had a better claim to the RSS name than Winer did, since RDF had originally been a part of RSS. The RSS-DEV Working Group published a final version of their specification in December. That same month, Winer published his own improvement to RSS 0.91, which he called RSS 0.92, on UserLand’s website. RSS 0.92 made several small optional improvements to RSS, among which was the addition of the `<enclosure>` tag soon used by podcasters everywhere. RSS had officially forked.
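+The podcast-enabling addition was the `<enclosure>` element, which attaches a media file to an item by URL. A minimal illustrative item (the feed content and URL are invented) looks like this:
+
+```
+<item>
+  <title>Episode 1</title>
+  <link>http://example.com/episode1</link>
+  <!-- enclosure points at the media file; length is its size in bytes -->
+  <enclosure url="http://example.com/episode1.mp3"
+             length="12345678"
+             type="audio/mpeg"/>
+</item>
+```
+
+A podcast client reads the `url` and `type` attributes to know what to download and play—which is why, decades later, RSS with enclosures is still the backbone of podcast distribution.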
+
+It’s not clear to me why a better effort was not made to involve Winer in the RSS-DEV Working Group. He was a prominent contributor to the Syndication mailing list and obviously responsible for much of RSS’ popularity, as the members of the Working Group themselves acknowledged. But Tim O’Reilly, founder and CEO of O’Reilly, explained in a UserLand discussion group that Winer more or less refused to participate:
+
+> A group of people involved in RSS got together to start thinking about its future evolution. Dave was part of the group. When the consensus of the group turned in a direction he didn’t like, Dave stopped participating, and characterized it as a plot by O’Reilly to take over RSS from him, despite the fact that Rael Dornfest of O’Reilly was only one of about a dozen authors of the proposed RSS 1.0 spec, and that many of those who were part of its development had at least as long a history with RSS as Dave had.[26][33]
+
+To this, Winer said:
+
+> I met with Dale [Dougherty] two weeks before the announcement, and he didn’t say anything about it being called RSS 1.0. I spoke on the phone with Rael the Friday before it was announced, again he didn’t say that they were calling it RSS 1.0. The first I found out about it was when it was publicly announced.
+>
+> Let me ask you a straight question. If it turns out that the plan to call the new spec “RSS 1.0” was done in private, without any heads-up or consultation, or for a chance for the Syndication list members to agree or disagree, not just me, what are you going to do?
+>
+> UserLand did a lot of work to create and popularize and support RSS. We walked away from that, and let your guys have the name. That’s the top level. If I want to do any further work in Web syndication, I have to use a different name. Why and how did that happen Tim?[27][34]
+
+I have not been able to find a discussion in the Syndication mailing list about using the RSS 1.0 name prior to the announcement of the RSS 1.0 proposal.
+
+RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by [a proposed IETF standard][35]. After the introduction of Atom, there were three competing versions of RSS: Winer’s RSS 0.92 (updated to RSS 2.0 in 2002 and renamed “Really Simple Syndication”), the RSS-DEV Working Group’s RSS 1.0, and Atom.
+
+### Decline
+
+The proliferation of competing RSS specifications may have hampered RSS in other ways that I’ll discuss shortly. But it did not stop RSS from becoming enormously popular during the 2000s. By 2004, the New York Times had started offering its headlines in RSS and had written an article explaining to the layperson what RSS was and how to use it.[28][36] Google Reader, an RSS aggregator ultimately used by millions, was launched in 2005. By 2013, RSS seemed popular enough that the New York Times, in its obituary for Aaron Swartz, called the technology “ubiquitous.”[29][37] For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.
+
+The New York Times published Swartz’ obituary in January, 2013. By that point, though, RSS had actually turned a corner and was well on its way to becoming an obscure technology. Google Reader was shut down in July, 2013, ostensibly because user numbers had been falling “over the years.”[30][38] This prompted several articles from various outlets declaring that RSS was dead. But people had been declaring that RSS was dead for years, even before Google Reader’s shuttering. Steve Gillmor, writing for TechCrunch in May, 2009, advised that “it’s time to get completely off RSS and switch to Twitter” because “RSS just doesn’t cut it anymore.”[31][39] He pointed out that Twitter was basically a better RSS feed, since it could show you what people thought about an article in addition to the article itself. It allowed you to follow people and not just channels. Gillmor told his readers that it was time to let RSS recede into the background. He ended his article with a verse from Bob Dylan’s “Forever Young.”
+
+Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+. Google might have been able to monetize Google+ in a way that it could never have monetized Google Reader. Marco Arment, the creator of Instapaper, wrote on his blog in 2013:
+
+> Google Reader is just the latest casualty of the war that Facebook started, seemingly accidentally: the battle to own everything. While Google did technically “own” Reader and could make some use of the huge amount of news and attention data flowing through it, it conflicted with their far more important Google+ strategy: they need everyone reading and sharing everything through Google+ so they can compete with Facebook for ad-targeting data, ad dollars, growth, and relevance.[32][40]
+
+So both users and technology companies realized that they got more out of using social networks than they did out of RSS.
+
+Another theory is that RSS was always too geeky for regular people. Even the New York Times, which seems to have been eager to adopt RSS and promote it to its audience, complained in 2006 that RSS is a “not particularly user friendly” acronym coined by “computer geeks.”[33][41] Before the RSS icon was designed in 2004, websites like the New York Times linked to their RSS feeds using little orange boxes labeled “XML,” which can only have been intimidating.[34][42] The label was perfectly accurate though, because back then clicking the link would take a hapless user to a page full of XML. [This great tweet][43] captures the essence of this explanation for RSS’ demise. Regular people never felt comfortable using RSS; it hadn’t really been designed as a consumer-facing technology and involved too many hurdles; people jumped ship as soon as something better came along.
+
+RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. But whereas a company like Facebook was able to “move fast and break things,” the RSS developer community was stuck trying to achieve consensus. The Great RSS Fork only demonstrates how difficult it was to do that. So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, “currently it’s the politics far more than the serialization that’s far from simple.”[35][44]
+
+So today we are left with centralized silos of information. In a way, we _do_ have the syndicated internet that Kevin Werbach foresaw in 1999. After all, _The Onion_ is a publication that relies on syndication through Facebook and Twitter the same way that Seinfeld relied on syndication to rake in millions after the end of its original run. But syndication on the web only happens through one of a very small number of channels, meaning that none of us “retain control over our online personae” the way that Werbach thought we would. One reason this happened is garden-variety corporate rapaciousness—RSS, an open format, didn’t give technology companies the control over data and eyeballs that they needed to sell ads, so they did not support it. But the more mundane reason is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
+
+_If you enjoyed this post, more posts like it come out every four weeks! Follow [@TwoBitHistory][45] on Twitter or subscribe to the [RSS feed][46] to make sure you know when a new post is out._
+
+_Previously on TwoBitHistory…_
+
+> New post: This week we're traveling back in time in our DeLorean to see what it was like learning to program on early home computers.
+>
+> — TwoBitHistory (@TwoBitHistory) [September 2, 2018][47]
+
+ 1. Kevin Werbach, “The Web Goes into Syndication,” Release 1.0, July 22, 1999, 1, accessed September 14, 2018, . [↩︎][48]
+
+ 2. Ibid. [↩︎][49]
+
+ 3. Werbach, 8. [↩︎][50]
+
+ 4. Peter Wiggin, “RSS Delivers the XML Promise,” Web Review, October 29, 1999, accessed September 14, 2018, . [↩︎][51]
+
+ 5. Ben Hammersley, RSS and Atom (O’Reilly), 8, accessed September 14, 2018, . [↩︎][52]
+
+ 6. “RSS 0.90 Specification,” RSS Advisory Board, accessed September 14, 2018, . [↩︎][53]
+
+ 7. “My Netscape Network Future Directions,” RSS Advisory Board, accessed September 14, 2018, . [↩︎][54]
+
+ 8. Tim Bray, “The RDF.net Challenge,” Ongoing by Tim Bray, May 21, 2003, accessed September 14, 2018, . [↩︎][55]
+
+ 9. Dan Libby, “RSS: Introducing Myself,” August 24, 2000, RSS-DEV Mailing List, accessed September 14, 2018, . [↩︎][56]
+
+ 10. Alexandra Krasne, “Browser Wars May Become Portal Wars,” CNN, accessed September 14, 2018, . [↩︎][57]
+
+ 11. Dave Winer, “Scripting News in XML,” Scripting News, December 15, 1997, accessed September 14, 2018, . [↩︎][58]
+
+ 12. Joseph Reagle, “RSS History,” 2004, accessed September 14, 2018, . [↩︎][59]
+
+ 13. Dave Winer, “A Faceoff with Netscape,” Scripting News, June 16, 1999, accessed September 14, 2018, . [↩︎][60]
+
+ 14. Ibid. [↩︎][61]
+
+ 15. Dan Libby, “RSS 0.91 Specification (Netscape),” RSS Advisory Board, accessed September 14, 2018, . [↩︎][62]
+
+ 16. Dave Winer, “Scripting News: 7/28/1999,” Scripting News, July 28, 1999, accessed September 14, 2018, . [↩︎][63]
+
+ 17. Oliver Willis, “RSS Aggregators?” June 19, 2000, Syndication Mailing List, accessed September 14, 2018, . [↩︎][64]
+
+ 18. Dave Winer, “Scripting News: 07/07/2000,” Scripting News, July 07, 2000, accessed September 14, 2018, . [↩︎][65]
+
+ 19. Dave Winer, “Re: RSS 0.91 Restarted,” June 9, 2000, Syndication Mailing List, accessed September 14, 2018, . [↩︎][66]
+
+ 20. Leigh Dodds, “RSS Modularization,” XML.com, July 5, 2000, accessed September 14, 2018, . [↩︎][67]
+
+ 21. Ian Davis, “Re: [syndication] RSS Modularization Demonstration,” June 28, 2000, Syndication Mailing List, accessed September 14, 2018, . [↩︎][68]
+
+ 22. “RDF Site Summary (RSS) 1.0,” December 09, 2000, accessed September 14, 2018, . [↩︎][69]
+
+ 23. Dave Winer, “Re: [syndication] Re: Thoughts, Questions, and Issues,” August 16, 2000, Syndication Mailing List, accessed September 14, 2018, . [↩︎][70]
+
+ 24. Mark Pilgrim, “History of the RSS Fork,” Dive into Mark, September 5, 2002, accessed September 14, 2018, . [↩︎][71]
+
+ 25. Dan Brickley, “RSS-Classic, RSS 1.0 and a Historical Debt,” November 7, 2000, Syndication Mailing List, accessed September 14, 2018, . [↩︎][72]
+
+ 26. Tim O’Reilly, “Re: Asking Tim,” UserLand, September 20, 2000, accessed September 14, 2018, . [↩︎][73]
+
+ 27. Dave Winer, “Re: Asking Tim,” UserLand, September 20, 2000, accessed September 14, 2018, . [↩︎][74]
+
+ 28. John Quain, “BASICS; Fine-Tuning Your Filter for Online Information,” The New York Times, 2004, accessed September 14, 2018, . [↩︎][75]
+
+ 29. John Schwartz, “Aaron Swartz, Internet Activist, Dies at 26,” The New York Times, January 12, 2013, accessed September 14, 2018, . [↩︎][76]
+
+ 30. “A Second Spring of Cleaning,” Official Google Blog, March 13, 2013, accessed September 14, 2018, . [↩︎][77]
+
+ 31. Steve Gillmor, “Rest in Peace, RSS,” TechCrunch, May 5, 2009, accessed September 14, 2018, . [↩︎][78]
+
+ 32. Marco Arment, “Lockdown,” Marco.org, July 3, 2013, accessed September 14, 2018, . [↩︎][79]
+
+ 33. Bob Tedeschi, “There’s a Popular New Code for Deals: RSS,” The New York Times, January 29, 2006, accessed September 14, 2018, . [↩︎][80]
+
+ 34. “NYTimes.com RSS Feeds,” The New York Times, accessed September 14, 2018, . [↩︎][81]
+
+ 35. Rael Dornfest, “RE: Re: [syndication] RE: RFC: Clearing Confusion for RSS, Agreement for Forward Motion,” May 31, 2001, Syndication Mailing List, accessed September 14, 2018, . [↩︎][82]
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html
+
+Author: [Two-Bit History][a]
+Topic selection: [lujun9972][b]
+Translator: [译者ID](https://github.com/译者ID)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
+
+[a]: https://twobithistory.org
+[b]: https://github.com/lujun9972
+[1]: https://twobithistory.org/2018/12/18/rss.html
+[2]: tmp.F599d8dnXW#fn:3
+[3]: tmp.F599d8dnXW#fn:4
+[4]: tmp.F599d8dnXW#fn:5
+[5]: tmp.F599d8dnXW#fn:6
+[6]: https://trends.google.com/trends/explore?date=all&geo=US&q=rss
+[7]: tmp.F599d8dnXW#fn:7
+[8]: https://twobithistory.org/images/mnn-channel.gif
+[9]: tmp.F599d8dnXW#fn:8
+[10]: https://twobithistory.org/2018/05/27/semantic-web.html
+[11]: tmp.F599d8dnXW#fn:9
+[12]: http://web.archive.org/web/19970703020212/http://mcf.research.apple.com:80/hs/screen_shot.html
+[13]: tmp.F599d8dnXW#fn:10
+[14]: tmp.F599d8dnXW#fn:11
+[15]: tmp.F599d8dnXW#fn:12
+[16]: http://scripting.com/
+[17]: tmp.F599d8dnXW#fn:13
+[18]: tmp.F599d8dnXW#fn:14
+[19]: tmp.F599d8dnXW#fn:15
+[20]: tmp.F599d8dnXW#fn:16
+[21]: tmp.F599d8dnXW#fn:17
+[22]: tmp.F599d8dnXW#fn:18
+[23]: tmp.F599d8dnXW#fn:19
+[24]: https://groups.yahoo.com/neo/groups/syndication/info
+[25]: tmp.F599d8dnXW#fn:20
+[26]: tmp.F599d8dnXW#fn:21
+[27]: tmp.F599d8dnXW#fn:22
+[28]: tmp.F599d8dnXW#fn:23
+[29]: tmp.F599d8dnXW#fn:24
+[30]: tmp.F599d8dnXW#fn:25
+[31]: tmp.F599d8dnXW#fn:26
+[32]: tmp.F599d8dnXW#fn:27
+[33]: tmp.F599d8dnXW#fn:28
+[34]: tmp.F599d8dnXW#fn:29
+[35]: https://tools.ietf.org/html/rfc4287
+[36]: tmp.F599d8dnXW#fn:30
+[37]: tmp.F599d8dnXW#fn:31
+[38]: tmp.F599d8dnXW#fn:32
+[39]: tmp.F599d8dnXW#fn:33
+[40]: tmp.F599d8dnXW#fn:34
+[41]: tmp.F599d8dnXW#fn:35
+[42]: tmp.F599d8dnXW#fn:36
+[43]: https://twitter.com/mgsiegler/status/311992206716203008
+[44]: tmp.F599d8dnXW#fn:37
+[45]: https://twitter.com/TwoBitHistory
+[46]: https://twobithistory.org/feed.xml
+[47]: https://twitter.com/TwoBitHistory/status/1036295112375115778?ref_src=twsrc%5Etfw
+[48]: tmp.F599d8dnXW#fnref:3
+[49]: tmp.F599d8dnXW#fnref:4
+[50]: tmp.F599d8dnXW#fnref:5
+[51]: tmp.F599d8dnXW#fnref:6
+[52]: tmp.F599d8dnXW#fnref:7
+[53]: tmp.F599d8dnXW#fnref:8
+[54]: tmp.F599d8dnXW#fnref:9
+[55]: tmp.F599d8dnXW#fnref:10
+[56]: tmp.F599d8dnXW#fnref:11
+[57]: tmp.F599d8dnXW#fnref:12
+[58]: tmp.F599d8dnXW#fnref:13
+[59]: tmp.F599d8dnXW#fnref:14
+[60]: tmp.F599d8dnXW#fnref:15
+[61]: tmp.F599d8dnXW#fnref:16
+[62]: tmp.F599d8dnXW#fnref:17
+[63]: tmp.F599d8dnXW#fnref:18
+[64]: tmp.F599d8dnXW#fnref:19
+[65]: tmp.F599d8dnXW#fnref:20
+[66]: tmp.F599d8dnXW#fnref:21
+[67]: tmp.F599d8dnXW#fnref:22
+[68]: tmp.F599d8dnXW#fnref:23
+[69]: tmp.F599d8dnXW#fnref:24
+[70]: tmp.F599d8dnXW#fnref:25
+[71]: tmp.F599d8dnXW#fnref:26
+[72]: tmp.F599d8dnXW#fnref:27
+[73]: tmp.F599d8dnXW#fnref:28
+[74]: tmp.F599d8dnXW#fnref:29
+[75]: tmp.F599d8dnXW#fnref:30
+[76]: tmp.F599d8dnXW#fnref:31
+[77]: tmp.F599d8dnXW#fnref:32
+[78]: tmp.F599d8dnXW#fnref:33
+[79]: tmp.F599d8dnXW#fnref:34
+[80]: tmp.F599d8dnXW#fnref:35
+[81]: tmp.F599d8dnXW#fnref:36
+[82]: tmp.F599d8dnXW#fnref:37
diff --git a/sources/talk/20180916 The Rise and Demise of RSS.md b/sources/talk/20180916 The Rise and Demise of RSS.md
deleted file mode 100644
index 8511d220d9..0000000000
--- a/sources/talk/20180916 The Rise and Demise of RSS.md
+++ /dev/null
@@ -1,122 +0,0 @@
-The Rise and Demise of RSS
-======
-There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
-
-In the late 1990s, in the go-go years between Netscape’s IPO and the Dot-com crash, everyone could see that the web was going to be an even bigger deal than it already was, even if they didn’t know exactly how it was going to get there. One theory was that the web was about to be revolutionized by syndication. The web, originally built to enable a simple transaction between two parties—a client fetching a document from a single host server—would be broken open by new standards that could be used to repackage and redistribute entire websites through a variety of channels. Kevin Werbach, writing for Release 1.0, a newsletter influential among investors in the 1990s, predicted that syndication “would evolve into the core model for the Internet economy, allowing businesses and individuals to retain control over their online personae while enjoying the benefits of massive scale and scope.” He invited his readers to imagine a future in which fencing aficionados, rather than going directly to an “online sporting goods site” or “fencing equipment retailer,” could buy a new épée directly through e-commerce widgets embedded into their favorite website about fencing. Just like in the television world, where big networks syndicate their shows to smaller local stations, syndication on the web would allow businesses and publications to reach consumers through a multitude of intermediary sites. This would mean, as a corollary, that consumers would gain significant control over where and how they interacted with any given business or publication on the web.
-
-RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was “the leading example of a lightweight syndication protocol.” Another contemporaneous article called RSS the first protocol to realize the potential of XML. It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, RSS [appears to be a dying technology][1], now used chiefly by podcasters and programmers with tech blogs. Moreover, among that latter group, RSS is perhaps used as much for its political symbolism as its actual utility. Though of course some people really do have RSS readers, stubbornly adding an RSS feed to your blog, even in 2018, is a reactionary statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbach’s imagining.
-
-The future once looked so bright for RSS. What happened? Was its downfall inevitable, or was it precipitated by the bitter infighting that thwarted the development of a single RSS standard?
-
-### Muddied Water
-
-RSS was invented twice. This meant it never had an obvious owner, a state of affairs that spawned endless debate and acrimony. But it also suggests that RSS was an important idea whose time had come.
-
-In 1998, Netscape was struggling to envision a future for itself. Its flagship product, the Netscape Navigator web browser—once preferred by 80% of web users—was quickly losing ground to Internet Explorer. So Netscape decided to compete in a new arena. In May, a team was brought together to start work on what was known internally as “Project 60.” Two months later, Netscape announced “My Netscape,” a web portal that would fight it out with other portals like Yahoo, MSN, and Excite.
-
-The following year, in March, Netscape announced an addition to the My Netscape portal called the “My Netscape Network.” My Netscape users could now customize their My Netscape page so that it contained “channels” featuring the most recent headlines from sites around the web. As long as your favorite website published a special file in a format dictated by Netscape, you could add that website to your My Netscape page, typically by clicking an “Add Channel” button that participating websites were supposed to add to their interfaces. A little box containing a list of linked headlines would then appear.
-
-![A My Netscape Network Channel][2]
-
-The special file that participating websites had to publish was an RSS file. In the My Netscape Network announcement, Netscape explained that RSS stood for “RDF Site Summary.” This was somewhat of a misnomer. RDF, or the Resource Description Framework, is basically a grammar for describing certain properties of arbitrary resources. (See [my article about the Semantic Web][3] if that sounds really exciting to you.) In 1999, a draft specification for RDF was being considered by the W3C. Though RSS was supposed to be based on RDF, the example RSS document Netscape actually released didn’t use any RDF tags at all, even if it declared the RDF XML namespace. In a document that accompanied the Netscape RSS specification, Dan Libby, one of the specification’s authors, explained that “in this release of MNN, Netscape has intentionally limited the complexity of the RSS format.” The specification was given the 0.90 version number, the idea being that subsequent versions would bring RSS more in line with the W3C’s XML specification and the evolving draft of the RDF specification.
-
-RSS had been cooked up by Libby and another Netscape employee, Ramanathan Guha. Guha previously worked for Apple, where he came up with something called the Meta Content Framework. MCF was a format for representing metadata about anything from web pages to local files. Guha demonstrated its power by developing an application called [HotSauce][4] that visualized relationships between files as a network of nodes suspended in 3D space. After leaving Apple for Netscape, Guha worked with a Netscape consultant named Tim Bray to produce an XML-based version of MCF, which in turn became the foundation for the W3C’s RDF draft. It’s no surprise, then, that Guha and Libby were keen to incorporate RDF into RSS. But Libby later wrote that the original vision for an RDF-based RSS was pared back because of time constraints and the perception that RDF was “‘too complex’ for the ‘average user.’”
-
-While Netscape was trying to win eyeballs in what became known as the “portal wars,” elsewhere on the web a new phenomenon known as “weblogging” was being pioneered. One of these pioneers was Dave Winer, CEO of a company called UserLand Software, which developed early content management systems that made blogging accessible to people without deep technical fluency. Winer ran his own blog, [Scripting News][5], which today is one of the oldest blogs on the internet. More than a year before Netscape announced My Netscape Network, on December 15th, 1997, Winer published a post announcing that the blog would now be available in XML as well as HTML.
-
-Dave Winer’s XML format became known as the Scripting News format. It was supposedly similar to Microsoft’s Channel Definition Format (a “push technology” standard submitted to the W3C in March, 1997), but I haven’t been able to find a file in the original format to verify that claim. Like Netscape’s RSS, it structured the content of Winer’s blog so that it could be understood by other software applications. When Netscape released RSS 0.90, Winer and UserLand Software began to support both formats. But Winer believed that Netscape’s format was “woefully inadequate” and “missing the key thing web writers and readers need.” It could only represent a list of links, whereas the Scripting News format could represent a series of paragraphs, each containing one or more links.
-
-In June, 1999, two months after Netscape’s My Netscape Network announcement, Winer introduced a new version of the Scripting News format, called ScriptingNews 2.0b1. Winer claimed that he decided to move ahead with his own format only after trying but failing to get anyone at Netscape to care about RSS 0.90’s deficiencies. The new version of the Scripting News format added several items to the `` element that brought the Scripting News format to parity with RSS. But the two formats continued to differ in that the Scripting News format, which Winer nicknamed the “fat” syndication format, could include entire paragraphs and not just links.
-
-Netscape got around to releasing RSS 0.91 the very next month. The updated specification was a major about-face. RSS no longer stood for “RDF Site Summary”; it now stood for “Rich Site Summary.” All the RDF—and there was almost none anyway—was stripped out. Many of the Scripting News tags were incorporated. In the text of the new specification, Libby explained:
-
-> RDF references removed. RSS was originally conceived as a metadata format providing a summary of a website. Two things have become clear: the first is that providers want more of a syndication format than a metadata format. The structure of an RDF file is very precise and must conform to the RDF data model in order to be valid. This is not easily human-understandable and can make it difficult to create useful RDF files. The second is that few tools are available for RDF generation, validation and processing. For these reasons, we have decided to go with a standard XML approach.
-
-Winer was enormously pleased with RSS 0.91, calling it “even better than I thought it would be.” UserLand Software adopted it as a replacement for the existing ScriptingNews 2.0b1 format. For a while, it seemed that RSS finally had a single authoritative specification.
-
-### The Great Fork
-
-A year later, the RSS 0.91 specification had become woefully inadequate. There were all sorts of things people were trying to do with RSS that the specification did not address. There were other parts of the specification that seemed unnecessarily constraining—each RSS channel could only contain a maximum of 15 items, for example.
-
-By that point, RSS had been adopted by several more organizations. Other than Netscape, which seemed to have lost interest after RSS 0.91, the big players were Dave Winer’s UserLand Software; O’Reilly Net, which ran an RSS aggregator called Meerkat; and Moreover.com, which also ran an RSS aggregator focused on news. Via mailing list, representatives from these organizations and others regularly discussed how to improve on RSS 0.91. But there were deep disagreements about what those improvements should look like.
-
-The mailing list in which most of the discussion occurred was called the Syndication mailing list. [An archive of the Syndication mailing list][6] is still available. It is an amazing historical resource. It provides a moment-by-moment account of how those deep disagreements eventually led to a political rupture of the RSS community.
-
-On one side of the coming rupture was Winer. Winer was impatient to evolve RSS, but he wanted to change it only in relatively conservative ways. In June, 2000, he published his own RSS 0.91 specification on the UserLand website, meant to be a starting point for further development of RSS. It made no significant changes to the 0.91 specification published by Netscape. Winer claimed in a blog post that accompanied his specification that it was only a “cleanup” documenting how RSS was actually being used in the wild, which was needed because the Netscape specification was no longer being maintained. In the same post, he argued that RSS had succeeded so far because it was simple, and that by adding namespaces or RDF back to the format—some had suggested this be done in the Syndication mailing list—it “would become vastly more complex, and IMHO, at the content provider level, would buy us almost nothing for the added complexity.” In a message to the Syndication mailing list sent around the same time, Winer suggested that these issues were important enough that they might lead him to create a fork:
-
-> I’m still pondering how to move RSS forward. I definitely want ICE-like stuff in RSS2, publish and subscribe is at the top of my list, but I am going to fight tooth and nail for simplicity. I love optional elements. I don’t want to go down the namespaces and schema road, or try to make it a dialect of RDF. I understand other people want to do this, and therefore I guess we’re going to get a fork. I have my own opinion about where the other fork will lead, but I’ll keep those to myself for the moment at least.
-
-Arrayed against Winer were several other people, including Rael Dornfest of O’Reilly, Ian Davis (responsible for a search startup called Calaba), and a precocious, 14-year-old Aaron Swartz, who all thought that RSS needed namespaces in order to accommodate the many different things everyone wanted to do with it. On another mailing list hosted by O’Reilly, Davis proposed a namespace-based module system, writing that such a system would “make RSS as extensible as we like rather than packing in new features that over-complicate the spec.” The “namespace camp” believed that RSS would soon be used for much more than the syndication of blog posts, so namespaces, rather than being a complication, were the only way to keep RSS from becoming unmanageable as it supported more and more use cases.
-
-At the root of this disagreement about namespaces was a deeper disagreement about what RSS was even for. Winer had invented his Scripting News format to syndicate the posts he wrote for his blog. Guha and Libby at Netscape had designed RSS and called it “RDF Site Summary” because in their minds it was a way of recreating a site in miniature within Netscape’s online portal. Davis, writing to the Syndication mailing list, explained his view that RSS was “originally conceived as a way of building mini sitemaps,” and that now he and others wanted to expand RSS “to encompass more types of information than simple news headlines and to cater for the new uses of RSS that have emerged over the last 12 months.” Winer wrote a prickly reply, stating that his Scripting News format was in fact the original RSS and that it had been meant for a different purpose. Given that the people most involved in the development of RSS disagreed about why RSS had even been created, a fork seems to have been inevitable.
-
-The fork happened after Dornfest announced a proposed RSS 1.0 specification and formed the RSS-DEV Working Group—which would include Davis, Swartz, and several others but not Winer—to get it ready for publication. In the proposed specification, RSS once again stood for “RDF Site Summary,” because RDF had been added back in to represent metadata properties of certain RSS elements. The specification acknowledged Winer by name, giving him credit for popularizing RSS through his “evangelism.” But it also argued that just adding more elements to RSS without providing for extensibility with a module system—that is, what Winer was suggesting—“sacrifices scalability.” The specification went on to define a module system for RSS based on XML namespaces.
-
-Winer was furious that the RSS-DEV Working Group had arrogated the “RSS 1.0” name for themselves. In another mailing list about decentralization, he described what the RSS-DEV Working Group had done as theft. Other members of the Syndication mailing list also felt that the RSS-DEV Working Group should not have used the name “RSS” without unanimous agreement from the community on how to move RSS forward. But the Working Group stuck with the name. Dan Brickley, another member of the RSS-DEV Working Group, defended this decision by arguing that “RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc).” He essentially felt that the RSS 1.0 effort had a better claim to the RSS name than Winer did, since RDF had originally been a part of RSS. The RSS-DEV Working Group published a final version of their specification in December. That same month, Winer published his own improvement to RSS 0.91, which he called RSS 0.92, on UserLand’s website. RSS 0.92 made several small optional improvements to RSS, among which was the addition of the `<enclosure>` tag soon used by podcasters everywhere. RSS had officially forked.
-
-It’s not clear to me why a better effort was not made to involve Winer in the RSS-DEV Working Group. He was a prominent contributor to the Syndication mailing list and obviously responsible for much of RSS’ popularity, as the members of the Working Group themselves acknowledged. But Tim O’Reilly, founder and CEO of O’Reilly, explained in a UserLand discussion group that Winer more or less refused to participate:
-
-> A group of people involved in RSS got together to start thinking about its future evolution. Dave was part of the group. When the consensus of the group turned in a direction he didn’t like, Dave stopped participating, and characterized it as a plot by O’Reilly to take over RSS from him, despite the fact that Rael Dornfest of O’Reilly was only one of about a dozen authors of the proposed RSS 1.0 spec, and that many of those who were part of its development had at least as long a history with RSS as Dave had.
-
-To this, Winer said:
-
-> I met with Dale [Dougherty] two weeks before the announcement, and he didn’t say anything about it being called RSS 1.0. I spoke on the phone with Rael the Friday before it was announced, again he didn’t say that they were calling it RSS 1.0. The first I found out about it was when it was publicly announced.
->
-> Let me ask you a straight question. If it turns out that the plan to call the new spec “RSS 1.0” was done in private, without any heads-up or consultation, or for a chance for the Syndication list members to agree or disagree, not just me, what are you going to do?
->
-> UserLand did a lot of work to create and popularize and support RSS. We walked away from that, and let your guys have the name. That’s the top level. If I want to do any further work in Web syndication, I have to use a different name. Why and how did that happen Tim?
-
-I have not been able to find a discussion in the Syndication mailing list about using the RSS 1.0 name prior to the announcement of the RSS 1.0 proposal.
-
-RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by [a proposed IETF standard][7]. After the introduction of Atom, there were three competing versions of RSS: Winer’s RSS 0.92 (updated to RSS 2.0 in 2002 and renamed “Really Simple Syndication”), the RSS-DEV Working Group’s RSS 1.0, and Atom.
-
-### Decline
-
-The proliferation of competing RSS specifications may have hampered RSS in other ways that I’ll discuss shortly. But it did not stop RSS from becoming enormously popular during the 2000s. By 2004, the New York Times had started offering its headlines in RSS and had written an article explaining to the layperson what RSS was and how to use it. Google Reader, an RSS aggregator ultimately used by millions, was launched in 2005. By 2013, RSS seemed popular enough that the New York Times, in its obituary for Aaron Swartz, called the technology “ubiquitous.” For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.
-
-The New York Times published Swartz’ obituary in January, 2013. By that point, though, RSS had actually turned a corner and was well on its way to becoming an obscure technology. Google Reader was shut down in July, 2013, ostensibly because user numbers had been falling “over the years.” This prompted several articles from various outlets declaring that RSS was dead. But people had been declaring that RSS was dead for years, even before Google Reader’s shuttering. Steve Gillmor, writing for TechCrunch in May, 2009, advised that “it’s time to get completely off RSS and switch to Twitter” because “RSS just doesn’t cut it anymore.” He pointed out that Twitter was basically a better RSS feed, since it could show you what people thought about an article in addition to the article itself. It allowed you to follow people and not just channels. Gillmor told his readers that it was time to let RSS recede into the background. He ended his article with a verse from Bob Dylan’s “Forever Young.”
-
-Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+. Google might have been able to monetize Google+ in a way that it could never have monetized Google Reader. Marco Arment, the creator of Instapaper, wrote on his blog in 2013:
-
-> Google Reader is just the latest casualty of the war that Facebook started, seemingly accidentally: the battle to own everything. While Google did technically “own” Reader and could make some use of the huge amount of news and attention data flowing through it, it conflicted with their far more important Google+ strategy: they need everyone reading and sharing everything through Google+ so they can compete with Facebook for ad-targeting data, ad dollars, growth, and relevance.
-
-So both users and technology companies realized that they got more out of using social networks than they did out of RSS.
-
-Another theory is that RSS was always too geeky for regular people. Even the New York Times, which seems to have been eager to adopt RSS and promote it to its audience, complained in 2006 that RSS is a “not particularly user friendly” acronym coined by “computer geeks.” Before the RSS icon was designed in 2004, websites like the New York Times linked to their RSS feeds using little orange boxes labeled “XML,” which can only have been intimidating. The label was perfectly accurate though, because back then clicking the link would take a hapless user to a page full of XML. [This great tweet][8] captures the essence of this explanation for RSS’ demise. Regular people never felt comfortable using RSS; it hadn’t really been designed as a consumer-facing technology and involved too many hurdles; people jumped ship as soon as something better came along.
-
-RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. But whereas a company like Facebook was able to “move fast and break things,” the RSS developer community was stuck trying to achieve consensus. The Great RSS Fork only demonstrates how difficult it was to do that. So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, “currently it’s the politics far more than the serialization that’s far from simple.”
-
-So today we are left with centralized silos of information. In a way, we do have the syndicated internet that Kevin Werbach foresaw in 1999. After all, The Onion is a publication that relies on syndication through Facebook and Twitter the same way that Seinfeld relied on syndication to rake in millions after the end of its original run. But syndication on the web only happens through one of a very small number of channels, meaning that none of us “retain control over our online personae” the way that Werbach thought we would. One reason this happened is garden-variety corporate rapaciousness—RSS, an open format, didn’t give technology companies the control over data and eyeballs that they needed to sell ads, so they did not support it. But the more mundane reason is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
-
-If you enjoyed this post, more posts like it come out every two weeks! Follow [@TwoBitHistory][9] on Twitter or subscribe to the [RSS feed][10] to make sure you know when a new post is out.
-
-Previously on TwoBitHistory…
-
-> New post: This week we're traveling back in time in our DeLorean to see what it was like learning to program on early home computers.
->
-> — TwoBitHistory (@TwoBitHistory) [September 2, 2018][11]
-
---------------------------------------------------------------------------------
-
-via: https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html
-
-作者:[Two-Bit History][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://twobithistory.org
-[b]: https://github.com/lujun9972
-[1]: https://trends.google.com/trends/explore?date=all&geo=US&q=rss
-[2]: https://twobithistory.org/images/mnn-channel.gif
-[3]: https://twobithistory.org/2018/05/27/semantic-web.html
-[4]: http://web.archive.org/web/19970703020212/http://mcf.research.apple.com:80/hs/screen_shot.html
-[5]: http://scripting.com/
-[6]: https://groups.yahoo.com/neo/groups/syndication/info
-[7]: https://tools.ietf.org/html/rfc4287
-[8]: https://twitter.com/mgsiegler/status/311992206716203008
-[9]: https://twitter.com/TwoBitHistory
-[10]: https://twobithistory.org/feed.xml
-[11]: https://twitter.com/TwoBitHistory/status/1036295112375115778?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md
deleted file mode 100644
index 5a0c1aabdd..0000000000
--- a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md
+++ /dev/null
@@ -1,97 +0,0 @@
-Interview With Peter Ganten, CEO of Univention GmbH
-======
-I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we got an interview with **Mr. Peter H. Ganten**, CEO of Univention GmbH. Despite his busy schedule, in this interview he shares what he thinks of the Univention project and its impact on the open source ecosystem, what open source developers and companies need to do to keep thriving, and what the biggest challenges for open source projects are.
-
-**OSTechNix: What’s your background and why have you founded Univention?**
-
-**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results have to be disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux.
-
-
-
-I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era where the first business models emerged on how to make money with Open Source. When the bubble burst, I had the plan to make OSS a solid business model without venture capital but with Hanseatic business style – seriously, steadily, no bling bling.
-
-**What were the biggest challenges at the beginning?**
-
-When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it’s not about Open Source software as an end to itself but always about customer value, and the benefits OSS offers its customers. We all had to learn a lot.
-
-In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers.
-
-**Why does UCS matter? And for whom does it make sense to use it?**
-
-There is cool OSS in all areas, but many organizations are not able to combine it all and make it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM etc. is connected. Without Univention this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations.
-
-[**UCS**][2] can be used out of the box and is scalable. That’s why it’s becoming more and more popular – more than 10,000 organizations are using UCS already today.
-
-**Who are your users and most important clients? What do they love most about UCS?**
-
-The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users. In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. UCS is used in many large cities and their school administrations, for example in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That’s what we offer, because we can guarantee these authorities full control over their users’ identities.
-
-Also, more and more cloud service providers and MSPs want to use UCS to deliver a selection of cloud-based app solutions.
-
-**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?**
-
-Yes, UCS is 100% Open Source, every line, the whole code is OSS. You can download and use UCS Core Edition for **FREE!**
-
-We know that in large, complex organizations, vendor support and liability is needed for LTS, SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don’t offer these in the Core Edition.
-
-**And what are you giving back to the OS community?**
-
-A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available.
-
-We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful.
-
-**How can I get started with UCS?**
-
-It’s easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. Just [**download Univention ISO**][6] and install it as described in the below link.
-
-Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system.
-
-**What do you think are the biggest challenges for Open Source?**
-
-There is a certain attitude you can see over and over again even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution – but just having decided to use OSS is no guarantee for success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I’ll just put it all together myself”. But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it’s okay if I have to spend another $300,000 on consultants.”
-
-With OSS this is different. If such projects fail and leave burnt ground behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: You have to draw up sensible contracts and involve partners just as in the proprietary world, and you’ll gain a better solution.
-
-Another challenge: We must stay innovative, move forward, attract new people who are enthusiastic about working on projects. That’s sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it’s hard to find and fund developers. For example, I can think of Evernote and OneNote for which there is no reasonable OSS alternative.
-
-**And what will the future bring for Univention?**
-
-I don’t have a crystal ball, but we are extremely optimistic. We see a very high growth potential in the education market. More OSS is being made in the public sector, because we have repeatedly experienced the dead ends that can be reached if we solely rely on Closed Source.
-
-Overall, we will continue our organic growth at double-digit rates year after year.
-
-UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation.
-
-**You have been the CEO of Univention for a long time. What keeps you motivated?**
-
-I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It’s totally cool to work with the team I have. It motivates and pushes me every time I need it.
-
-I’m a techie and nerd at heart, I enjoy dealing with technology. So I’m totally happy at this place and I’m grateful to the world that I can do whatever I want every day. Not everyone can say that.
-
-**Who gives you inspiration?**
-
-My employees, the customers and the Open Source projects. The exchange with other people.
-
-The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us, today and in the future, just the way we want it and think is good. We want to make a contribution to this. That is why Univention is there. That is important to us every day.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/introduction-univention-corporate-server/
-[2]: https://www.univention.com/products/ucs/
-[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/
-[4]: https://fsfe.org/
-[5]: https://osb-alliance.de/
-[6]: https://www.univention.com/downloads/download-ucs/
-[7]: https://www.univention.com/downloads/ucs-online-demo/
diff --git a/sources/talk/20181007 Why it-s Easier to Get a Payrise by Switching Jobs.md b/sources/talk/20181007 Why it-s Easier to Get a Payrise by Switching Jobs.md
new file mode 100644
index 0000000000..3baf2f7126
--- /dev/null
+++ b/sources/talk/20181007 Why it-s Easier to Get a Payrise by Switching Jobs.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why it's Easier to Get a Payrise by Switching Jobs)
+[#]: via: (https://theartofmachinery.com/2018/10/07/payrise_by_switching_jobs.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Why it's Easier to Get a Payrise by Switching Jobs
+======
+
+It’s an empirical fact that it’s easier to get a payrise if you’re negotiating a new job than if you’re negotiating within your current job. When I look back over my own career, every time I’ve worked somewhere longer term (over a year), payrises have been a hard struggle. But eventually I’d leave for a new position, and my new pay made all payrises at the previous job irrelevant. These days I make job switching upfront and official: I run my own business and most of my money comes from short contracts. Getting rewarded for new skills or extra work is nowhere near as difficult as before.
+
+I know I’m not the only one to notice this effect, but I’ve never heard anyone explain why things might be this way.
+
+Before I give my explanation, let me make a couple of things clear from the start. I’m not going to argue that everyone should quit their jobs. I don’t know your situation, and maybe you’re getting a good deal already. Also, I apply game theory here, but, no, I don’t assume that humans are slaves to simplistic, mechanical laws of behaviour. However, just like music composition, even if humans are free, there are still patterns that matter. If you understand this stuff, you’ll have a career advantage.
+
+But first, some background.
+
+### BATNA
+
+Many geeks think negotiation is like a role-playing game: roll the die, add your charisma score, and if the result is high enough you’re convincing. Geeks who think that way usually have low confidence in their “charisma score”, and they blame that for their struggle with things like asking for payrises.
+
+Charisma isn’t totally irrelevant, but the good news for geeks is that there’s a nerdy thing that’s much more important for negotiation: BATNA, or Best Alternative To Negotiated Agreement. Despite the jargony name, it’s a very simple idea: it’s about analysing the best outcome for both sides in a negotiation, assuming that at least one side says no to the other. Although most people don’t know it’s called “BATNA”, it’s the core of how any agreement works (or doesn’t work).
+
+It’s easy to explain with an example. Imagine you buy a couch for $500, but when you take it home, you discover that it doesn’t fit the place you wanted to put it. A silly mistake, but thankfully the shop offers you a full refund if you return it. Just as you’re taking it back to the shop, you meet a stranger who says they want a couch like that, and they offer to buy it. What’s the price? If you ask for $1,000,000, the deal won’t happen because their BATNA is that they go to the shop and buy one themselves for $500. If they offer $1, your BATNA is that you go to the shop and get the $500 refund. You’ll only come to an agreement if the price is something like $500. If transporting the couch to the shop costs significant time and money, you’ll accept less than $500 because your BATNA is worth $500 minus the cost of transport. On the other hand, if the stranger needs to cover up a stained carpet before the landlord does an inspection in half an hour, they’ll be willing to pay a heavy premium because their BATNA is so bad.
+
+You can’t expect a negotiation to go well unless you’ve considered the BATNA of both sides.
+
+### Employment and Self-Employment
+
+Most people of a certain socioeconomic class believe that the ideal, “proper” career is salaried, full-time employment at someone else’s business. Many people in this class never even imagine any other way to make a living, but there are alternatives. In Australia, like other countries, you’re free to register your own business number and then do whatever it is that people will pay for. That includes sitting at a desk and working on software and computer systems, or other work that’s more commonly done as an employee.
+
+So why is salaried employment so popular? As someone who’s done both kinds of employment, one answer is obvious: stability. You can be (mostly) sure about exactly how much money you’ll make in the next six months when you have a salary. The next obvious answer is simplicity: as long as you meet the minimum bar of “work” done ([whatever “work” means][1]), the company promises to look after you. You don’t have to think about where your next dollar comes from, or about marketing, or insurances, or accounting, or even how to find people to socialise with.
+
+That sums up the main reasons to like salaried employment (not that they’re bad reasons). I sometimes hear claims about other benefits of salaried employment, but they’re typically things that you can buy. If you’re self-employed and your work isn’t paying you enough to have the same lifestyle as you could under a salary (doing the same work), that means you’re not billing high enough. A lot of people make that mistake when they quit a salaried job for self-employment, but it’s still just a mistake.
+
+### Asking for that Payrise
+
+Let’s say you’ve been working as a salaried employee at a company for a while. As a curious, self-motivated person who regularly reads essays by nerds on the internet, you’ve learned a lot in that time. You’ve applied your new skills to your work, and proven yourself to be a much more valuable employee than when you were first hired. Is it time to ask for a payrise? You practise your most charismatic phrasing, and approach your manager with your d20 in hand. The response is that you’re doing great, and they’d love to give you a payrise, but the rules say
+
+ 1. You can’t get a payrise unless you’ve been working for more than N years
+ 2. You can’t get more than one payrise in N years
+ 3. That inflation adjustment on your salary counted as a payrise, so you can’t ask for a payrise now
+ 4. You can’t be paid more than [Peter][2]
+ 5. We need more time to see if you’re ready, so keep up the great work for another year or so and we’ll consider it then
+
+
+
+The thing to realise is that all these rules are completely arbitrary. If the company had a genuine motivation to give you a payrise, the rules would vanish. To see that, try replacing “payrise” with “workload increase”. Software projects are extremely expensive, require skill, and have a high failure rate. Software work therefore carries a non-trivial amount of responsibility, so you might argue that employers should be very conservative about increasing how much involvement someone has in a project. But I’ve never heard an employer say anything like, “Great job on getting that last task completed ahead of schedule, but we need more time to see if you’re ready to increase your workload. Just take a break until the next scheduled task, and if you do well at that one, too, maybe we can start giving you more work to do.”
+
+If you’re hearing feedback that you’re doing well, but there are various arbitrary reasons you can’t get rewarded for it, that’s a strong sign you’re being paid below market rates. Now, the term “market rates” gets used pretty loosely, so let me be super clear: that means someone else would agree to pay you more if you asked.
+
+Note that I’m not claiming that your manager is evil. At most larger companies, your manager really can’t do much against the company rules. I’m not writing this to call companies evil, either, because that won’t help you or me to get any payrises. What _will_ help is understanding why companies can afford to make payrises difficult.
+
+### Getting that Payrise
+
+You’ve probably seen this coming: it’s all about BATNA, and how you can’t expect your employer to agree to something that’s worse than their BATNA. So, what’s their BATNA? What happens if you ask for a payrise, and they say no?
+
+Sometimes you see a story online about someone who was burning themselves out working super hard as an obviously vital member of a team. This person asks for a small payrise and gets rejected for some silly reason. Shortly after that, they tell their employer that they have a much bigger offer from another company. Suddenly the reason for rejecting the payrise evaporates, and the employer comes up with a counteroffer, but it’s too late: the worker leaves for a better job. The original employer is left wailing and gnashing their teeth. If only companies appreciated their employees more!
+
+These stories are like hero stories in the movies. They tickle our sense of justice, but aren’t exactly representative of normal life. The reality is that most employees would just go back to their desks if they’re told, “No.” Sure, they’ll grumble, and they’ll upvote the next “Tech workers are underappreciated!” post on Reddit, but to many companies this is a completely acceptable BATNA.
+
+In short, the main bargaining chip a salaried employee has is quitting, but that negates the reasons to be a salaried employee in the first place.
+
+When you’re negotiating a contract with a new potential employer, however, the situation is totally different. Whatever conditions you ask for will be compared against the BATNA of searching for someone else who has your skills. Any reasonable request has a much higher chance of being accepted.
+
+### The Job Security Tax
+
+Now, something might be bothering you: despite what I’ve said, people _do_ get payrises. But all I’ve argued is that companies can make payrises difficult, not impossible. Sure, salaried employees might not quit when they’re a little underpaid. (They might not even realise they’re underpaid.) But if the underpayment gets big and obvious enough, maybe they will, so employers have to give out payrises eventually. Occasional payrises also make a good carrot for encouraging employees to keep working harder.
+
+At the scale of a large company, it’s just a matter of tuning. Payrises can be delayed a little here, and made a bit smaller there, and the company saves money. Go too far, and the employee attrition rate goes up, which is a sign to back off and start paying more again.
+
+Sure, the employee’s salary will tend to grow as their skills grow, but that growth will be slowed down. How much it is slowed down will depend (long term) on how strongly the employee values job security. It’s a job security tax.
+
+### What Should You Do?
+
+As I said before, I’m not going to tell you to quit (or not quit) without knowing what your situation is.
+
+Perhaps you read this thinking that it sounds nothing like your workplace. If so, you’re lucky to be in one of the better places. You now have solid reasons to appreciate your employer as much as they appreciate you.
+
+For the rest of you, I guess there are two broad options. Obviously, there’s the one I’m taking: not being a salaried employee. The other option is to understand the job security tax and try to optimise it. If you’re young and single, maybe you don’t need job security so much (at least for now). Even if you have good reasons to want job security (and there are plenty), maybe you can reduce your dependence on it by saving money in an emergency fund, and making sure your friendship group includes people who aren’t your current colleagues. That’s a good idea even if you aren’t planning to quit today — you never know what the future will be like.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2018/10/07/payrise_by_switching_jobs.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: /2017/09/14/busywork.html
+[2]: https://www.youtube.com/watch?v=zBfTrjPSShs
diff --git a/sources/talk/20181024 Why it matters that Microsoft released old versions of MS-DOS as open source.md b/sources/talk/20181024 Why it matters that Microsoft released old versions of MS-DOS as open source.md
deleted file mode 100644
index 3129e4b6f0..0000000000
--- a/sources/talk/20181024 Why it matters that Microsoft released old versions of MS-DOS as open source.md
+++ /dev/null
@@ -1,74 +0,0 @@
-Why it matters that Microsoft released old versions of MS-DOS as open source
-======
-
-Microsoft's release of MS-DOS 1.25 and 2.0 on GitHub adopts an open source license that's compatible with GNU GPL.
-
-
-One open source software project I work on is the FreeDOS Project. It's a complete, free, DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.
-
-So I took notice when Microsoft recently released the source code to MS-DOS 1.25 and 2.0 via a [GitHub repository][1]. This is a huge step for Microsoft, and I’d like to briefly explain why it is significant.
-
-### MS-DOS as open source software
-
-Some open source fans may recall that this is not the first time Microsoft has officially released the MS-DOS source code. On March 25, 2014, Microsoft posted the source code to MS-DOS 1.1 and 2.0 via the [Computer History Museum][2]. Unfortunately, this source code was released under a “look but do not touch” license that limited what you could do with it. According to the license from the 2014 source code release, users were barred from re-using it in other projects and could use it “[solely for non-commercial research, experimentation, and educational purposes.][3]”
-
-The museum license wasn’t friendly to open source software, and as a result, the MS-DOS source code was ignored. On the FreeDOS Project, we interpreted the “look but do not touch” license as a potential risk to FreeDOS, so we decided developers who had viewed the MS-DOS source code could not contribute to FreeDOS.
-
-But Microsoft’s recent MS-DOS source code release represents a significant change. This MS-DOS source code uses the MIT License (also called the Expat License). Quoting Microsoft’s [LICENSE.md][4] file on GitHub:
-
-> ## MS-DOS v1.25 and v2.0 Source Code
->
-> Copyright © Microsoft Corporation.
->
-> All rights reserved.
->
-> MIT License.
->
-> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the Software), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
->
-> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
->
-> THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,TORT OR OTHERWISE, ARISING FROM OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-If that text looks familiar to you, it is because that’s the same text as the MIT License recognized by the [Open Source Initiative][5]. It’s also the same as the Expat License recognized by the [Free Software Foundation][6].
-
-The Free Software Foundation (via GNU) says the Expat License is compatible with the [GNU General Public License][7]. Specifically, GNU describes the Expat License as “a lax, permissive non-copyleft free software license, compatible with the GNU GPL. It is sometimes ambiguously referred to as the MIT License.” Also according to GNU, when they say a license is [compatible with the GNU GPL][8], “you can combine code released under the other license [MIT/Expat License] with code released under the GNU GPL in one larger program.”
-
-Microsoft’s use of the MIT/Expat License for the original MS-DOS source code is significant because the license is not only open source software but free software.
-
-### What does it mean?
-
-This is great, but there’s a practical side to the source code release. You might think, “If Microsoft has released the MS-DOS source code under a license compatible with the GNU GPL, will that help FreeDOS?”
-
-Not really. Here's why: FreeDOS started from an original source code base, independent from MS-DOS. Certain functions and behaviors of MS-DOS were identified and documented in the comprehensive [Interrupt List by Ralf Brown][9], and we provided MS-DOS compatibility in FreeDOS by referencing the Interrupt List. But many significant fundamental technical differences remain between FreeDOS and MS-DOS. For example, FreeDOS uses a completely different memory structure and memory layout. You can’t simply forklift MS-DOS source code into FreeDOS and expect it to work. The code assumptions are quite different.
-
-There’s also the simple matter that these are very old versions of MS-DOS. For example, MS-DOS 2.0 was the first version to support directories and redirection. But these versions of MS-DOS did not yet include more advanced features, including networking, CDROM support, and ’386 support such as EMM386. These features have been standard in FreeDOS for a long time.
-
-So the MS-DOS source code release is interesting, but FreeDOS would not be able to reuse this code for any modern features anyway. FreeDOS has already surpassed these versions of MS-DOS in functionality and features.
-
-### Congratulations
-
-Still, it’s important to recognize the big step that Microsoft has taken in releasing these versions of MS-DOS as open source software. The new MS-DOS source code release on GitHub does away with the restrictive license from 2014 and adopts a recognized open source software license that is compatible with the GNU GPL. Congratulations to Microsoft for releasing MS-DOS 1.25 and 2.0 under an open source license!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/10/microsoft-open-source-old-versions-ms-dos
-
-作者:[Jim Hall][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jim-hall
-[b]: https://github.com/lujun9972
-[1]: https://github.com/Microsoft/MS-DOS
-[2]: http://www.computerhistory.org/press/ms-source-code.html
-[3]: http://www.computerhistory.org/atchm/microsoft-research-license-agreement-msdos-v1-1-v2-0/
-[4]: https://github.com/Microsoft/MS-DOS/blob/master/LICENSE.md
-[5]: https://opensource.org/licenses/MIT
-[6]: https://directory.fsf.org/wiki/License:Expat
-[7]: https://www.gnu.org/licenses/license-list.en.html#Expat
-[8]: https://www.gnu.org/licenses/gpl-faq.html#WhatDoesCompatMean
-[9]: http://www.cs.cmu.edu/~ralf/files.html
diff --git a/sources/talk/20181107 Understanding a -nix Shell by Writing One.md b/sources/talk/20181107 Understanding a -nix Shell by Writing One.md
new file mode 100644
index 0000000000..acad742117
--- /dev/null
+++ b/sources/talk/20181107 Understanding a -nix Shell by Writing One.md
@@ -0,0 +1,412 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Understanding a *nix Shell by Writing One)
+[#]: via: (https://theartofmachinery.com/2018/11/07/writing_a_nix_shell.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Understanding a *nix Shell by Writing One
+======
+
+A typical *nix shell has a lot of programming-like features, but works quite differently from languages like Python or C++. This can make a lot of shell features — like process management, argument quoting and the `export` keyword — seem like mysterious voodoo.
+
+But a shell is just a program, so a good way to learn how a shell works is to write one. I’ve written [a simple shell that fits in a few hundred lines of commented D source][1]. Here’s a post that walks through how it works and how you could write one yourself.
+
+### First (Cheating) Steps
+
+A shell is a kind of REPL (Read Evaluate Print Loop). At its heart is just a simple loop that reads commands from the input, processes them, and returns a result:
+
+```
+import std.process;
+import io = std.stdio;
+
+enum kPrompt = "> ";
+
+void main()
+{
+ io.write(kPrompt);
+ foreach (line; io.stdin.byLineCopy())
+ {
+ // "Cheating" by using the existing shell for now
+ auto result = executeShell(line);
+ io.write(result.output);
+ io.write(kPrompt);
+ }
+}
+
+$ dmd shell.d
+$ ./shell
+> head /usr/share/dict/words
+A
+a
+aa
+aal
+aalii
+aam
+Aani
+aardvark
+aardwolf
+Aaron
+> # Press Ctrl+D to quit
+>
+$
+```
+
+If you try this code out for yourself, you’ll soon notice that you don’t have any nice editing features like tab completion or command history. The popular Bash shell uses a library called [GNU Readline][2] for that. You can get most of the features of Readline when playing with these toy examples just by running them under [rlwrap][3] (probably already in your system’s package manager).
+
+### DIY Command Execution (First Attempt)
+
+That first example demonstrated the absolute basic structure of a shell, but it cheated by passing commands directly to the shell already running on the system. Obviously, that doesn’t explain anything about how a real shell processes commands.
+
+The basic idea, though, is very simple. Nearly everything that gets called a “shell command” (e.g., `ls` or `head` or `grep`) is really just a program on the filesystem. The shell just has to run it. At the operating system level, running a program is done using the `execve` system call (or one of its alternatives). For portability and convenience, the normal way to make a system call is to use one of the wrapper functions in the C library. Let’s try using `execv()`:
+
+```
+import core.sys.posix.stdio;
+import core.sys.posix.unistd;
+
+import io = std.stdio;
+import std.string;
+
+enum kPrompt = "> ";
+
+void main()
+{
+ io.write(kPrompt);
+ foreach (line; io.stdin.byLineCopy())
+ {
+ runCommand(line);
+ io.write(kPrompt);
+ }
+}
+
+void runCommand(string cmd)
+{
+ // Need to convert D string to null-terminated C string
+ auto cmdz = cmd.toStringz();
+
+ // We need to pass execv an array of program arguments
+ // By convention, the first element is the name of the program
+
+ // C arrays don't carry a length, just the address of the first element.
+ // execv starts reading memory from the first element, and needs a way to
+ // know when to stop. Instead of taking a length value as an argument,
+ // execv expects the array to end with a null as a stopping marker.
+
+ auto argsz = [cmdz, null];
+ auto error = execv(cmdz, argsz.ptr);
+ if (error)
+ {
+ perror(cmdz);
+ }
+}
+```
+
+Here’s a sample run:
+
+```
+> ls
+ls: No such file or directory
+> head
+head: No such file or directory
+> grep
+grep: No such file or directory
+> ಠ_ಠ
+ಠ_ಠ: No such file or directory
+>
+```
+
+Okay, so that’s not working so well. The problem is that the `execve` call isn’t as smart as a shell: it just literally executes the program it’s told to. In particular, it has no smarts for finding the programs that implement `ls` or `head`. For now, let’s do the finding ourselves, and then give `execve` the full path to the command:
+
+```
+$ which ls
+/bin/ls
+$ ./shell
+> /bin/ls
+shell shell.d shell.o
+$
+```
+
+This time the `ls` command worked, but our shell quit and we dropped straight back into the system’s shell. What’s going on? Well, `execve` really is a single-purpose call: it doesn’t spawn a new process for running the program separately from the current program, it _replaces_ the current program. (The toy shell actually quit when `ls` started, not when it finished.) Creating a new process is done with a different system call: traditionally `fork`. This isn’t how programming languages normally work, so it might seem like weird and annoying behaviour, but it’s actually really useful. Decoupling process creation from program execution allows a lot of flexibility, as will become clearer later.
+
+### Fork and Exec
+
+To keep the shell running, we’ll use the `fork()` C function to create a new process, and then make that new process `execv()` the program that implements the command. (On modern GNU/Linux systems, `fork()` is actually a wrapper around a system call called `clone`, but it still behaves like the classic `fork` system call.)
+
+`fork()` duplicates the current process. We get a second process that’s running the same program, at the same point, with a copy of everything in memory and all the same open files. Both the original process (parent) and the duplicate (child) keep running normally. Of course, we want the parent process to keep running the shell, and the child to `execv()` the command. The `fork()` function helps us differentiate them by returning zero in the child and a non-zero value in the parent. (This non-zero value is the process ID of the child.)
+
+Let’s try it out in a new version of the `runCommand()` function:
+
+```
+int runCommand(string cmd)
+{
+ // fork() duplicates the process
+ auto pid = fork();
+ // Both the parent and child keep running from here as if nothing happened
+ // pid will be < 0 if forking failed for some reason
+ // Otherwise pid == 0 for the child and != 0 for the parent
+ if (pid < 0)
+ {
+ perror("Can't create a new process");
+ exit(1);
+ }
+ if (pid == 0)
+ {
+ // Child process
+ auto cmdz = cmd.toStringz();
+ auto argsz = [cmdz, null];
+ execv(cmdz, argsz.ptr);
+
+ // Only get here if exec failed
+ perror(cmdz);
+ exit(1);
+ }
+ // Parent process
+ // This toy shell can only run one command at a time
+ // All the parent does is wait for the child to finish
+ int status;
+ wait(&status);
+ // This is the exit code of the child
+ // (Conventionally zero means okay, non-zero means error)
+ return WEXITSTATUS(status);
+}
+```
+
+Here it is in action:
+
+```
+> /bin/ls
+shell shell.d shell.o
+> /bin/uname
+Linux
+>
+```
+
+Progress! But it still doesn’t feel like a real shell if we have to tell it exactly where to find each command.
+
+### PATH
+
+If you try using `which` to find the implementations of various commands, you might notice they’re all in the same small set of directories. The list of directories that contains commands is stored in an environment variable called `PATH`. It looks something like this:
+
+```
+$ echo $PATH
+/home/user/bin:/home/user/local/bin:/home/user/.local/bin:/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/games/bin
+```
+
+As you can see, it’s a list of directories separated by colons. If you ask a shell to run `ls`, it’s supposed to search each directory in this list for a program called `ls`. The search should be done in order starting from the first directory, so a personal implementation of `ls` in `/home/user/bin` could override the one in `/bin`. Production-ready shells cache this lookup.
+
+`PATH` is only a default, though. If we type in a path to a program (anything containing a `/`), that program will be used directly.
+
+Here’s a simple implementation of a smarter conversion of a command name to a C string that points to the executable. It returns null if the command can’t be found.
+
+```
+const(char*) findExecutable(string cmd)
+{
+ if (cmd.canFind('/'))
+ {
+ if (exists(cmd)) return cmd.toStringz();
+ return null;
+ }
+
+ foreach (dir; environment["PATH"].splitter(":"))
+ {
+ import std.path : buildPath;
+ auto candidate = buildPath(dir, cmd);
+ if (exists(candidate)) return candidate.toStringz();
+ }
+ return null;
+}
+```
+
+Here’s what the shell looks like now:
+
+```
+> ls
+shell shell.d shell.o
+> uname
+Linux
+> head shell.d
+head shell.d: No such file or directory
+>
+```
+
+### Complex Commands
+
+That last command failed because the toy shell doesn’t handle program arguments yet, so it tries to find a command literally called “head shell.d”.
+
+If you look back at the implementation of `runCommand()`, you’ll see that `execv()` takes a C array of arguments, as well as the path to the program to run. All we have to do is process the command to make the array `["head", "shell.d", null]`. Something like this would do it:
+
+```
+// Key difference: split the command into pieces
+auto args = cmd.split();
+
+auto cmdz = findExecutable(args[0]);
+if (cmdz is null)
+{
+ io.stderr.writef("%s: No such file or directory\n", args[0]);
+ // 127 means "Command not found"
+ // http://tldp.org/LDP/abs/html/exitcodes.html
+ exit(127);
+}
+auto argsz = args.map!(toStringz).array;
+argsz ~= null;
+auto error = execv(cmdz, argsz.ptr);
+```
+
+That makes simple arguments work, but we quickly get into problems:
+
+```
+> head -n 5 shell.d
+import core.sys.posix.fcntl;
+import core.sys.posix.stdio;
+import core.sys.posix.stdlib;
+import core.sys.posix.sys.wait;
+import core.sys.posix.unistd;
+> echo asdf
+asdf
+> echo $HOME
+$HOME
+> ls *.d
+ls: cannot access '*.d': No such file or directory
+> ls '/home/user/file with spaces.txt'
+ls: cannot access "'/home/user/file": No such file or directory
+ls: cannot access 'with': No such file or directory
+ls: cannot access "spaces.txt'": No such file or directory
+>
+```
+
+As you might guess by looking at the above, shells like a POSIX Bourne shell (or Bash) do a _lot_ more than just `split()`. Take the `echo $HOME` example. It’s a common idiom to use `echo` for viewing environment variables (like `HOME`), but `echo` itself doesn’t actually do any environment variable handling. A POSIX shell processes a command like `echo $HOME` into an array like `["echo", "/home/user", null]` and passes it to `echo`, which does nothing but reflect its arguments back to the terminal.
+
+A POSIX shell also handles glob patterns like `*.d`. That’s why glob patterns work with _any_ command in *nix (unlike MS-DOS, for example): the commands don’t even see the globs.
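+
+The expansion step itself can be sketched with the C library’s `glob()` function (real shells implement their own globbing, but the visible effect is the same; this is a minimal illustration in C, and `count_matches` is just an illustrative name, not anything from the article’s code):
+
+```
+#include <glob.h>
+#include <stddef.h>
+
+// Count how many filesystem entries match a shell glob pattern.
+// This is the expansion a POSIX shell performs before exec:
+// the command only ever sees the resulting list of filenames.
+int count_matches(const char *pattern)
+{
+    glob_t g;
+    int rc = glob(pattern, 0, NULL, &g);
+    if (rc == GLOB_NOMATCH) return 0;
+    if (rc != 0) return -1;
+    int n = (int)g.gl_pathc;
+    globfree(&g);
+    return n;
+}
+```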
+
+The command `ls '/home/user/file with spaces.txt'` got split into `["ls", "'/home/user/file", "with", "spaces.txt'", null]`. Any useful shell lets you use quoting and escaping to prevent any processing (like splitting into arguments) that you don’t want. Once again, quotes are completely handled by the shell; commands don’t even see them. Also, unlike most programming languages, everything is a string in shell, so there’s no difference between `head -n 5 shell.d` and `head -n '5' shell.d` — both turn into `["head", "-n", "5", "shell.d", null]`.
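+
+A minimal quote-aware splitter, handling only single quotes (a tiny subset of the POSIX quoting rules), could look something like this in C; `tokenize` is a hypothetical helper, not part of the toy shell’s real code:
+
+```
+#include <stddef.h>
+#include <string.h>
+
+// Split `cmd` in place into words, honouring single quotes so that
+// '/home/user/file with spaces.txt' stays one argument. The quotes
+// themselves are stripped, just like a real shell does: the command
+// never sees them. Returns the number of arguments written to argv.
+size_t tokenize(char *cmd, char **argv, size_t max)
+{
+    size_t argc = 0;
+    char *out = cmd;    // write position (lags behind as quotes are dropped)
+    while (*cmd && argc < max)
+    {
+        while (*cmd == ' ' || *cmd == '\t') cmd++;   // skip separators
+        if (!*cmd) break;
+        argv[argc++] = out;
+        int quoted = 0;
+        while (*cmd && (quoted || (*cmd != ' ' && *cmd != '\t')))
+        {
+            if (*cmd == '\'') { quoted = !quoted; cmd++; }  // toggle, drop quote
+            else *out++ = *cmd++;
+        }
+        if (*cmd) cmd++;          // consume the separator
+        *out++ = '\0';            // terminate the token
+    }
+    return argc;
+}
+```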
+
+There’s something you might notice from that last example: the shell can’t treat flags like `-n 5` differently from positional arguments like `shell.d` because `execve` only takes a single array of all arguments. So that means argument types are one thing that programs _do_ have to figure out for themselves, which explains [the clichéd interview question about why quotes won’t help you delete a file called `-`][4] (i.e., the quotes are processed before the `rm` command sees them).
+
+A POSIX shell supports quite complex constructs like `while` loops and pipelines, but the toy shell only supports simple commands.
+
+### Tweaking the Child Process
+
+I said earlier that decoupling `fork` from `exec` allows extra flexibility. Let me give a couple of examples.
+
+#### I/O Redirection
+
+A key design principle of Unix is that commands should be agnostic about where their input and output are from, so that user input/output can be replaced with file input/output, or even input/output of other commands. E.g.:
+
+```
+sort events.txt | head -n 10 > /tmp/top_ten_events.txt
+```
+
+How does it work? Take the `head` command. The shell forks off a new child process. The child is a duplicate of the parent, so it inherits the same standard input and output. However, the child can replace its own standard input with a pipe shared with the process for `sort`, and replace its own standard output with a file handle for `/tmp/top_ten_events.txt`. After calling `execv()`, the process will become a `head` process that blindly reads/writes to/from whatever standard I/O it has.
+
+Getting down to the low-level details, *nix systems represent all file handles with so-called “file descriptors”, which are just integers as far as user programs are concerned, but point to data structures inside the operating system kernel. Standard input is file descriptor 0, and standard output is file descriptor 1. Replacing standard output for `head` looks something like this (minus error handling):
+
+```
+// The fork happens somewhere back here
+// Now running in the child process
+
+// Open the new file (no control over the file descriptor)
+auto new_fd = open("/tmp/top_ten_events.txt", O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
+// Copy the open file into file #1 (standard output)
+dup2(new_fd, 1);
+// Close the spare file descriptor
+close(new_fd);
+
+// The exec happens somewhere down here
+```
+
+The pipeline works in the same kind of way, except instead of using `open()` to open a file, we use `pipe()` to create _two_ connected file descriptors, and then let `sort` use one, and `head` use the other.
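+
+Here’s roughly what that wiring looks like in C. It’s a sketch without the error handling a production shell needs, and it uses `execvp` so that the PATH lookup comes for free; `run_pipeline` is an illustrative name, not from the toy shell:
+
+```
+#include <stdio.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+// Run `left | right`, each command given as a null-terminated argv array.
+// Returns the exit status of the right-hand command.
+int run_pipeline(char *const left[], char *const right[])
+{
+    int fds[2];
+    if (pipe(fds) < 0) { perror("pipe"); return -1; }
+
+    pid_t lpid = fork();
+    if (lpid == 0)
+    {
+        // Left child: replace standard output with the pipe's write end
+        dup2(fds[1], 1);
+        close(fds[0]); close(fds[1]);
+        execvp(left[0], left);
+        perror(left[0]); _exit(127);
+    }
+    pid_t rpid = fork();
+    if (rpid == 0)
+    {
+        // Right child: replace standard input with the pipe's read end
+        dup2(fds[0], 0);
+        close(fds[0]); close(fds[1]);
+        execvp(right[0], right);
+        perror(right[0]); _exit(127);
+    }
+    // The parent must close both ends, or the right-hand command
+    // never sees end-of-file on its standard input
+    close(fds[0]); close(fds[1]);
+    int status;
+    waitpid(lpid, &status, 0);
+    waitpid(rpid, &status, 0);
+    return WEXITSTATUS(status);
+}
+```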
+
+#### Environment Variables
+
+If you’ve ever had to deploy something using a command line, there’s a good chance you’ve had to set some of these configuration variables. Each process carries its own set of environment variables, so you can override, say, `AUDIODEV` for one running program without affecting others. The C standard library provides functions for manipulating environment variables, but they’re not actually managed by the operating system kernel — the [C runtime][5] manages them using the same user-space memory that other program variables use. That means they also get copied to child processes on a `fork`. The runtime and the kernel co-operate to preserve them on `execve`.
+
+There’s no reason we can’t manipulate the environment variables the child process ends up using. POSIX shells support this: just put any variable assignments you want directly in front of the command.
+
+```
+$ uname
+Linux
+$ # LD_DEBUG is an environment variable for enabling linker debugging
+$ # (Doesn't work on all systems.)
+$ LD_DEBUG=statistics uname
+12128:
+12128: runtime linker statistics:
+12128: total startup time in dynamic loader: 2591152 cycles
+12128: time needed for relocation: 816752 cycles (31.5%)
+12128: number of relocations: 153
+12128: number of relocations from cache: 3
+12128: number of relative relocations: 1304
+12128: time needed to load objects: 1196148 cycles (46.1%)
+Linux
+$ # LD_DEBUG was only set for uname
+$ echo $LD_DEBUG
+
+$ # Pop quiz: why doesn't this print "bar"?
+$ FOO=bar echo $FOO
+
+$
+```
+
+These temporary environment variables are useful and easy to implement.
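+
+A sketch of the implementation in C: fork as usual, then change the variable only in the child’s copy of the environment before calling exec (`run_with_env` is an illustrative helper name, not the toy shell’s API):
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+// Run `NAME=value cmd`: the assignment only exists in the child,
+// so the parent shell's environment is untouched.
+int run_with_env(const char *name, const char *value, char *const argv[])
+{
+    pid_t pid = fork();
+    if (pid < 0) { perror("fork"); return -1; }
+    if (pid == 0)
+    {
+        // Child only: fork() copied the environment, and the C runtime
+        // and kernel co-operate to preserve it across execvp()
+        setenv(name, value, 1);
+        execvp(argv[0], argv);
+        perror(argv[0]); _exit(127);
+    }
+    int status;
+    waitpid(pid, &status, 0);
+    return WEXITSTATUS(status);
+}
+```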
+
+### Builtins
+
+It’s great that the fork/exec pattern lets us reconfigure the child process as much as we like without affecting the parent shell. But some commands _need_ to affect the shell. A good example is the `cd` command for changing the current working directory. It would be pointless if it ran in a child process, changed its own working directory, then just quit, leaving the shell unchanged.
+
+The simple solution to this problem is builtins. I said that most shell commands are implemented as external programs on the filesystem. Well, some aren’t — they’re handled directly by the shell itself. Before searching PATH for a command implementation, the shell just checks if it has its own built-in implementation. A neat way to code this is [the function pointer approach I described in a previous post][6].
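+
+In C, such a builtin lookup table might look like the following sketch (with stripped-down hypothetical `cd` and `exit` handlers as examples):
+
+```
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+// A builtin runs inside the shell process itself: no fork, no exec,
+// so it can change the shell's own state (like the working directory)
+typedef int (*builtin_fn)(int argc, char **argv);
+
+static int builtin_cd(int argc, char **argv)
+{
+    const char *dir = argc > 1 ? argv[1] : getenv("HOME");
+    return chdir(dir) == 0 ? 0 : 1;
+}
+
+static int builtin_exit(int argc, char **argv)
+{
+    (void)argc; (void)argv;
+    _exit(0);
+}
+
+static const struct { const char *name; builtin_fn fn; } kBuiltins[] =
+{
+    { "cd",   builtin_cd   },
+    { "exit", builtin_exit },
+};
+
+// Returns the handler, or NULL so the caller falls back to a PATH search
+builtin_fn find_builtin(const char *name)
+{
+    for (size_t i = 0; i < sizeof(kBuiltins) / sizeof(kBuiltins[0]); i++)
+        if (strcmp(name, kBuiltins[i].name) == 0)
+            return kBuiltins[i].fn;
+    return NULL;
+}
+```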
+
+You can read [a list of Bash builtins in the Advanced Bash-Scripting Guide][7]. Some, like `cd`, are builtins because they’re highly coupled to the shell. Others, like `echo`, have built-in implementations for performance reasons (most systems also have a standalone `echo` program).
+
+There’s one builtin that confuses a lot of people: `export`. It makes sense if you realise that the POSIX shell scripting language has its own variables that are totally separate from environment variables. A variable assignment is just a shell variable by default, and `export` makes it into an environment variable (when spawning child processes, at least). The difference is that the C runtime doesn’t know anything about shell variables, so they get lost on `execve`.
+
+```
+$ uname
+Linux
+$ # Let's try setting LD_DEBUG
+$ LD_DEBUG=statistics
+$ # It has no effect because that's actually just a shell variable
+$ uname
+Linux
+$ # Let's try making it into an environment variable:
+$ export LD_DEBUG
+$ uname
+12128:
+12128: runtime linker statistics:
+12128: total startup time in dynamic loader: 2591152 cycles
+12128: time needed for relocation: 816752 cycles (31.5%)
+12128: number of relocations: 153
+12128: number of relocations from cache: 3
+12128: number of relative relocations: 1304
+12128: time needed to load objects: 1196148 cycles (46.1%)
+Linux
+$ # Now every non-builtin will dump debugging info
+$ # Let's stop that for sanity's sake
+$ unset LD_DEBUG
+$
+```
+
+### Putting it Together
+
+A POSIX-compliant shell does a lot more stuff (like signal handling and job management) but that’s enough to understand how to write an MVP *nix shell. You can see all the pieces together by checking out [the complete working example in my repository][1].
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2018/11/07/writing_a_nix_shell.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://gitlab.com/sarneaud/toyshell
+[2]: https://tiswww.case.edu/php/chet/readline/rltop.html
+[3]: https://github.com/hanslub42/rlwrap
+[4]: https://unix.stackexchange.com/questions/1519/how-do-i-delete-a-file-whose-name-begins-with-hyphen-a-k-a-dash-or-minus
+[5]: /2017/06/04/what_is_the_d_runtime.html#what-about-c--does-c-really-have-a-runtime-too
+[6]: /2018/04/02/inheritance_and_polymorphism.html
+[7]: https://www.tldp.org/LDP/abs/html/internal.html
diff --git a/sources/talk/20181205 Unfortunately, Garbage Collection isn-t Enough.md b/sources/talk/20181205 Unfortunately, Garbage Collection isn-t Enough.md
new file mode 100644
index 0000000000..0c08320139
--- /dev/null
+++ b/sources/talk/20181205 Unfortunately, Garbage Collection isn-t Enough.md
@@ -0,0 +1,44 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Unfortunately, Garbage Collection isn't Enough)
+[#]: via: (https://theartofmachinery.com/2018/12/05/gc_not_enough.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Unfortunately, Garbage Collection isn't Enough
+======
+
+Here’s a little story of some mysterious server failures I had to debug a year ago. The servers would run okay for a while, then eventually start crashing. After that, trying to run practically anything on the machines failed with “No space left on device” errors, but the filesystem only reported a few gigabytes of files on the ~20GB disks.
+
+The problem turned out to be caused by a log shipper. This was a Ruby app that read in log files, sent the data to a remote server, and deleted the old files. The bug was that the open log files weren’t being explicitly closed. The app was letting Ruby’s automatic garbage collector clean up the `File` objects, instead. Trouble is, `File` objects don’t use much memory, so the log shipper could theoretically keep millions of log files open before a collection was needed.
+
+*nix filesystems decouple filenames from file data. File data on disk can have multiple filenames pointing to it (i.e., hard links), and the data is only deleted when the last reference is removed. An open file descriptor counts as a reference, so if you delete a file while a program is reading it, the filename disappears from the directory listing, but the file data stays until the program closes it. That’s what was happening with the log shipper. The `du` (“disk usage”) command finds files using directory listings, so it didn’t see the gigabytes of file data for the thousands of log files the shipper had open. Those files only appeared after running `lsof` (“list open files”).
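+
+This behaviour is easy to reproduce. Here’s a short Python sketch (Python rather than the log shipper’s Ruby, and obviously not the original code) that deletes a file while a descriptor is still open:
+
+```
+import os
+
+with open("app.log", "w") as w:
+    w.write("some log data\n")
+
+f = open("app.log")   # an open descriptor counts as a reference
+os.unlink("app.log")  # the filename is gone from the directory listing...
+print(os.path.exists("app.log"))  # False
+print(f.read())       # ...but the data is still readable
+f.close()             # only now can the kernel free the disk blocks
+```
+
+Until that final `close()`, `du` over the directory won’t count the data, but `lsof` will show the process still holding it.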
+
+Of course, the same kind of bug happens with other things. A couple of months ago I had to deal with a Java app that was breaking in production after a few days because it leaked network connections.
+
+Once upon a time, I wrote most of my code in C and then C++. In those days, I thought manual resource management was enough. How hard could it be? Every `malloc()` needs a `free()`, and every `open()` needs a `close()`. Simple. Except not all programs are simple, so manual resource management became a straitjacket. Then one day I discovered reference counting and garbage collection. I thought that solved all my problems, and I stopped caring about resource management completely. Once again, that was okay for simple programs, but not all programs are simple.
+
+Relying on garbage collection doesn’t work because it only solves the _memory_ management problem, and complex programs have to deal with a lot more than just memory. There’s a popular meme that responds to that by saying that [memory is 95% of your resource problems][1]. Well, you could say that all resources are 0% of your problems — until you run out of one of them. Then that resource becomes 100% of your problems.
+
+But that kind of thinking still treats resources as a special case. The deeper problem is that as programs get more complex, everything tends to become a resource. For example, take a calendar program. A complex calendar program allows multiple users to manage multiple, shareable calendars, with events that can be shared across calendars. Any piece of data will eventually have multiple parts of the program depending on it being up-to-date and accurate. So all dynamic data needs an owner, and not just for memory management. As more features are added, more parts of the program will need to update data. If you’re sane, you’ll only allow one part of the program to update data at a time, so the right and responsibility to update data becomes a limited resource, itself. Modelling mutable data with immutable data structures doesn’t make these problems disappear; it just translates them into a different paradigm.
+
+Planning the ownership and lifespan of resources is an inescapable part of designing complex software. It’s easier if you exploit some common patterns. One pattern is fungible resources. An example is an immutable string “foo”, which is semantically the same as any other immutable string “foo”. This kind of resource doesn’t need a pre-determined lifespan or ownership. In fact, to keep the system as simple as possible, it’s better to have _no_ pre-determined lifespan or ownership. Another pattern is resources that are non-fungible, but have a deterministic lifespan. This includes network connections, as well as more abstract things like the ownership of a piece of data. It’s sanest to explicitly enforce the lifespan of these things in code.
+
+Notice that automatic garbage collection is really good for implementing the first pattern, but not the second, while manual resource management techniques (like RAII) are great for implementing the second pattern, but terrible for the first. The two approaches become complements in complex programs.
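+
+In Python terms (a sketch of the idea, not code from any of the programs above), the split might look like this: fungible, immutable values are left to the garbage collector, while non-fungible resources get an explicit, scoped lifespan:
+
+```
+# Fungible and immutable: any "foo" is as good as any other "foo",
+# so no pre-determined lifespan or owner is needed. Let GC handle it.
+greeting = "foo"
+
+# Non-fungible with a deterministic lifespan: the file handle is closed
+# at the end of the block, instead of whenever a collector gets around to it.
+with open("events.log", "a") as log:
+    log.write("started\n")
+```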
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2018/12/05/gc_not_enough.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://yosefk.com/c++fqa/dtor.html#fqa-11.1
diff --git a/sources/talk/20181218 The Rise and Demise of RSS.md b/sources/talk/20181218 The Rise and Demise of RSS.md
index e260070c5c..2dfea2074c 100644
--- a/sources/talk/20181218 The Rise and Demise of RSS.md
+++ b/sources/talk/20181218 The Rise and Demise of RSS.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/talk/20181220 D in the Browser with Emscripten, LDC and bindbc-sdl (translation).md b/sources/talk/20181220 D in the Browser with Emscripten, LDC and bindbc-sdl (translation).md
new file mode 100644
index 0000000000..b4dc33b434
--- /dev/null
+++ b/sources/talk/20181220 D in the Browser with Emscripten, LDC and bindbc-sdl (translation).md
@@ -0,0 +1,276 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (D in the Browser with Emscripten, LDC and bindbc-sdl (translation))
+[#]: via: (https://theartofmachinery.com/2018/12/20/emscripten_d.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+D in the Browser with Emscripten, LDC and bindbc-sdl (translation)
+======
+
+Here’s a tutorial about using Emscripten to run D code in a normal web browser. It uses a different approach from the [Dscripten game demo][1] and the [dscripten-tools][2] toolchain that’s based on it.
+
+ * Instead of porting the D runtime, it uses a lightweight, runtimeless `-betterC` build.
+ * It uses Docker to manage the Emscripten installation.
+
+
+
+LDC has recently gained support for [compiling directly to WebAssembly][3], but (unlike the Emscripten approach) that doesn’t automatically get you libraries.
+
+You can find [the complete working code on Github][4]. `./run.sh` starts a shell in a Docker image that contains the development environment. `dub build --build=release` generates the HTML and JavaScript assets and puts them into the `dist/` directory.
+
+[This tutorial is translated from a Japanese post by outlandkarasu][5], who deserves all the credit for figuring this stuff out.
+
+### Background
+
+#### What’s Emscripten?
+
+[Emscripten][6] is a compiler toolchain for asm.js and WebAssembly that comes with ported versions of the libc and SDL2 C libraries. It can compile regular Linux-based applications in languages like C to code that can run in a browser.
+
+### How do you use Emscripten with D?
+
+Emscripten is a toolchain designed for C/C++, but the C/C++ part is just a frontend. The toolchain actually compiles LLVM intermediate representation (IR). You can generate LLVM IR bitcode from D using [LDC][7], so it should be possible to feed that through Emscripten and run D in a browser, just like C/C++.
+
+#### Gotchas using Emscripten
+
+Ideally that’s all it would take, but there are some things that require special attention (or trial and error).
+
+ 1. D runtime library features like GC and Phobos can’t be used without an Emscripten port.
+ 2. It’s not enough to just produce LLVM IR. The code needs to meet Emscripten’s requirements.
+ * It needs to use ported libraries.
+ * Pointer sizes and data structure binary layouts need to match.
+ 3. Emscripten bugs need to be worked around.
+ * Debug information is particularly problematic.
+
+
+
+### Implementation
+
+#### Plan of attack
+
+Here’s the plan for making D+Emscripten development work:
+
+ 1. Use `-betterC` and the `@nogc` and `nothrow` attributes to avoid D runtime features.
+ 2. Use SDL2 functions directly by statically compiling with [`bindbc-sdl`][8].
+ 3. Keep on trying.
+
+
+
+#### Environment setup
+
+Emscripten is based on LLVM, clang and various other libraries, and is hard to set up, so I decided to [do the job with Docker][9]. I wrote a Dockerfile that would also add LDC and other tools at `docker build` time:
+
+```
+FROM trzeci/emscripten-slim:sdk-tag-1.38.21-64bit
+
+# Install D and tools, and enable them in the shell by adding them to .bashrc
+RUN apt-get -y update && \
+ apt-get -y install vim sudo curl && \
+ sudo -u emscripten /bin/sh -c "curl -fsS https://dlang.org/install.sh | bash -s ldc-1.12.0" && \
+ (echo 'source $(~/dlang/install.sh ldc -a)' >> /home/emscripten/.bashrc)
+
+# dub settings (explained later)
+ADD settings.json /var/lib/dub/settings.json
+```
+
+Docker makes these big toolchains pretty easy :)
+
+#### Coding
+
+Here’s a basic demo that displays an image:
+
+```
+// Import SDL2 and SDL_image
+// Both work with Emscripten
+import bindbc.sdl;
+import bindbc.sdl.image;
+import core.stdc.stdio : printf; // printf works in Emscripten, too
+
+// Function declarations for the main loop
+alias em_arg_callback_func = extern(C) void function(void*) @nogc nothrow;
+extern(C) void emscripten_set_main_loop_arg(em_arg_callback_func func, void *arg, int fps, int simulate_infinite_loop) @nogc nothrow;
+extern(C) void emscripten_cancel_main_loop() @nogc nothrow;
+
+// Log output
+void logError(size_t line = __LINE__)() @nogc nothrow {
+ printf("%d:%s\n", line, SDL_GetError());
+}
+
+struct MainLoopArguments {
+ SDL_Renderer* renderer;
+ SDL_Texture* texture;
+}
+
+// Language features restricted with @nogc and nothrow
+extern(C) int main(int argc, const char** argv) @nogc nothrow {
+ // Initialise SDL
+ if(SDL_Init(SDL_INIT_VIDEO) != 0) {
+ logError();
+ return -1;
+ }
+ scope(exit) SDL_Quit();
+
+ // Initialise SDL_image (with PNG support)
+ if(IMG_Init(IMG_INIT_PNG) != IMG_INIT_PNG) {
+ logError();
+ return -1;
+ }
+ scope(exit) IMG_Quit();
+
+ // Make the window and its renderer
+ SDL_Window* window;
+ SDL_Renderer* renderer;
+ if(SDL_CreateWindowAndRenderer(640, 480, SDL_WINDOW_SHOWN, &window, &renderer) != 0) {
+ logError();
+ return -1;
+ }
+ scope(exit) {
+ SDL_DestroyRenderer(renderer);
+ SDL_DestroyWindow(window);
+ }
+
+ // Load image file
+ auto dman = IMG_Load("images/dman.png");
+ if(!dman) {
+ logError();
+ return -1;
+ }
+ scope(exit) SDL_FreeSurface(dman);
+
+ // Make a texture from the image
+ auto texture = SDL_CreateTextureFromSurface(renderer, dman);
+ if(!texture) {
+ logError();
+ return -1;
+ }
+ scope(exit) SDL_DestroyTexture(texture);
+
+ // Start the image main loop
+ auto arguments = MainLoopArguments(renderer, texture);
+ emscripten_set_main_loop_arg(&mainLoop, &arguments, 60, 1);
+ return 0;
+}
+
+extern(C) void mainLoop(void* p) @nogc nothrow {
+ // Get arguments
+ auto arguments = cast(MainLoopArguments*) p;
+ auto renderer = arguments.renderer;
+ auto texture = arguments.texture;
+
+ // Clear background
+ SDL_SetRenderDrawColor(renderer, 0x00, 0x00, 0x00, 0x00);
+ SDL_RenderClear(renderer);
+
+ // Texture image
+ SDL_RenderCopy(renderer, texture, null, null);
+ SDL_RenderPresent(renderer);
+
+ // End of loop iteration
+ emscripten_cancel_main_loop();
+}
+```
+
+#### Building
+
+Now building is the tricky bit.
+
+##### `dub.json`
+
+Here’s the `dub.json` I made through trial and error. It runs the whole build from D to WebAssembly.
+
+```
+{
+ "name": "emdman",
+ "authors": [
+ "outland.karasu@gmail.com"
+ ],
+ "description": "A minimal emscripten D man demo.",
+ "copyright": "Copyright © 2018, outland.karasu@gmail.com",
+ "license": "BSL-1.0",
+ "dflags-ldc": ["--output-bc", "-betterC"], // Settings for bitcode output
+ "targetName": "app.bc",
+ "dependencies": {
+ "bindbc-sdl": "~>0.4.1"
+ },
+ "subConfigurations": {
+ "bindbc-sdl": "staticBC" // Statically-linked, betterC build
+ },
+ "versions": ["BindSDL_Image"], // Use SDL_image
+
+ // Run the Emscripten compiler after generating bitcode
+ // * Disable optimisations
+ // * Enable WebAssembly
+ // * Use SDL+SDL_image (with PNG)
+ // * Set web-only as the environment
+ // * Embed image file(s)
+ // * Generate HTML for running in browser
+ "postBuildCommands": ["emcc -v -O0 -s WASM=1 -s USE_SDL=2 -s USE_SDL_IMAGE=2 -s SDL2_IMAGE_FORMATS='[\"png\"]' -s ENVIRONMENT=web --embed-file images -o dist/index.html app.bc"]
+}
+```
+
+##### Switch to 32b (x86) code generation
+
+Compiling with 64b “worked” but I got a warning about different data layouts:
+
+```
+warning: Linking two modules of different data layouts: '/tmp/emscripten_temp_WwvmL5_archive_contents/mulsc3_20989819.c.o' is 'e-p:32:32-i64:64-v128:32:128-n32-S128' whereas '/src/app.bc' is 'e-m:e-i64:64-f80:128-n8:16:32:64-S128'
+
+warning: Linking two modules of different target triples: /tmp/emscripten_temp_WwvmL5_archive_contents/mulsc3_20989819.c.o' is 'asmjs-unknown-emscripten' whereas '/src/app.bc' is 'x86_64-unknown-linux-gnu'
+```
+
+Apparently Emscripten is basically for 32b code. Using mismatched pointer sizes sounds like a pretty bad idea, so I added this `/var/lib/dub/settings.json` to the Dockerfile:
+
+```
+{
+ "defaultArchitecture": "x86", // Set code generation to 32b
+ "defaultCompiler": "ldc" // Use LDC by default
+}
+```
+
+There’s an [open issue for documenting `dub`’s `settings.json`][10].
+
+##### Remove debug information
+
+Emscripten gave the following error when I ran a normal build with `dub`:
+
+```
+shared:ERROR: Failed to run llvm optimizations:
+```
+
+It looks like there’s [an issue related to debugging information][11]. I worked around it by using `dub --build=release`.
+
+### Results
+
+After lots of trial and error, I finally succeeded in getting my demo to run in a browser. Here’s how it looks:
+
+![Meet D-Man \(Demo\)][12]
+
+The Emscripten+D dev environment isn’t as stable as a normal dev environment. For example, rendering didn’t work if I used `SDL_LowerBlit` instead. But here’s D-Man in a browser.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2018/12/20/emscripten_d.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://github.com/Ace17/dscripten
+[2]: https://github.com/CyberShadow/dscripten-tools
+[3]: https://wiki.dlang.org/Generating_WebAssembly_with_LDC
+[4]: https://github.com/outlandkarasu-sandbox/emdman
+[5]: https://qiita.com/outlandkarasu@github/items/15e0f4b6d1b2a0eab846
+[6]: http://kripken.github.io/emscripten-site/
+[7]: https://wiki.dlang.org/LDC
+[8]: https://github.com/BindBC/bindbc-sdl
+[9]: https://hub.docker.com/r/trzeci/emscripten/
+[10]: https://github.com/dlang/dub/issues/1463
+[11]: https://github.com/kripken/emscripten/issues/4078
+[12]: /images/emscripten_d/d-man-browser.png
diff --git a/sources/talk/20190105 Why Sorting is O(N log N).md b/sources/talk/20190105 Why Sorting is O(N log N).md
new file mode 100644
index 0000000000..34a1d3bf97
--- /dev/null
+++ b/sources/talk/20190105 Why Sorting is O(N log N).md
@@ -0,0 +1,115 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why Sorting is O(N log N))
+[#]: via: (https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Why Sorting is O(N log N)
+======
+
+Any decent algorithms textbook will explain how fast sorting algorithms like quicksort and heapsort are, but it doesn’t take crazy maths to prove that they’re as asymptotically fast as you can possibly get.
+
+### A pedantic note about notation
+
+Most computer scientists use big-O notation to mean “asymptotically equal, up to a constant scaling factor”, which isn’t quite what it means to other mathematicians. Sorry, I’ll use big-O like in CS textbooks, but at least I won’t mix it with other mathematical notation.
+
+## Comparison-based sorting
+
+Let’s look at the special case of algorithms that compare values two at a time (like quicksort and heapsort, and most other popular algorithms). The ideas can be extended to all sorting algorithms later.
+
+### A simple counting argument for the worst case
+
+Suppose you have an array of four elements, all different, in random order. Can you sort it by comparing just one pair of elements? Obviously not, but here’s one good reason that proves you can’t: By definition, to sort the array, you need to know how to rearrange the elements to put them in order. In other words, you need to know which permutation is needed. How many possible permutations are there? The first element could be moved to one of four places, the second one could go to one of the remaining three, the third element has two options, and the last element has to take the one remaining place. So there are (4 \times 3 \times 2 \times 1 = 4! = 24) possible permutations to choose from, but there are only two possible results from comparing two different things: “BIGGER” and “SMALLER”. If you made a list of all the possible permutations, you might decide that “BIGGER” means you need permutation #8 and “SMALLER” means you need permutation #24, but there’s no way you could know when you need the other 22 permutations.
+
+With two comparisons, you have (2 \times 2 = 4) possible outputs, which still isn’t enough. You can’t sort every possible shuffled array unless you do at least five comparisons ((2^{5} = 32)). If (W(N)) is the worst-case number of comparisons needed to sort (N) different elements using some algorithm, we can say
+
+[2^{W(N)} \geq N!]
+
+Taking a logarithm base 2,
+
+[W(N) \geq \log_{2}{N!}]
+
+Asymptotically, (\log{N!}) grows like (\log{N^{N}}) (see also [Stirling’s formula][1]), so
+
+[W(N) \succeq \log N^{N} = N\log N]
+
+And that’s an (O(N\log N)) limit on the worst case just from counting outputs.
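+
+The four-element numbers are easy to check directly (a quick sketch, not part of the original argument):
+
+```
+import math
+
+# 4 distinct elements have 4! = 24 possible orderings, and c comparisons
+# can distinguish at most 2^c of them.
+assert math.factorial(4) == 24
+assert 2**4 < 24 <= 2**5  # four comparisons can't be enough; five can
+
+# The general lower bound W(N) >= log2(N!) for a few sizes:
+for n in (4, 8, 100):
+    print(n, math.ceil(math.log2(math.factorial(n))))
+```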
+
+### Average case from information theory
+
+We can get a stronger result if we extend that counting argument with a little information theory. Here’s how we could use a sorting algorithm as a code for transmitting information:
+
+ 1. I think of a number — say, 15
+ 2. I look up permutation #15 from the list of permutations of four elements
+ 3. I run the sorting algorithm on this permutation and record all the “BIGGER” and “SMALLER” comparison results
+ 4. I transmit the comparison results to you in binary code
+ 5. You re-enact my sorting algorithm run, step by step, referring to my list of comparison results as needed
+ 6. Now that you know how I rearranged my array to make it sorted, you can reverse the permutation to figure out my original array
+ 7. You look up my original array in the permutation list to figure out I transmitted the number 15
+
+
+
+Okay, it’s a bit strange, but it could be done. That means that sorting algorithms are bound by the same laws as normal encoding schemes, including the theorem proving there’s no universal data compressor. I transmitted one bit per comparison the algorithm does, so, on average, the number of comparisons must be at least the number of bits needed to represent my data, according to information theory. More technically, [the average number of comparisons must be at least the Shannon entropy of my input data, measured in bits][2]. Entropy is a mathematical measure of the information content, or unpredictability, of something.
+
+If I have an array of (N) elements that could be in any possible order without bias, then entropy is maximised and is (\log_{2}{N!}) bits. That proves that (O(N\log N)) is an optimal average for a comparison-based sort with arbitrary input.
+
+That’s the theory, but how do real sorting algorithms compare? Below is a plot of the average number of comparisons needed to sort an array. I’ve compared the theoretical optimum against naïve quicksort and the [Ford-Johnson merge-insertion sort][3], which was designed to minimise comparisons (though it’s rarely faster than quicksort overall because there’s more to life than minimising comparisons). Since it was developed in 1959, merge-insertion sort has been tweaked to squeeze a few more comparisons out, but the plot shows it’s already almost optimal.
+
+![Plot of average number of comparisons needed to sort randomly shuffled arrays of length up to 100. Bottom line is theoretical optimum. Within about 1% is merge-insertion sort. Naïve quicksort is within about 25% of optimum.][4]
+
+It’s nice when a little theory gives such a tight practical result.
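+
+A crude version of that experiment fits in a few lines of Python (counting comparisons made by the built-in sort, which is Timsort rather than either algorithm in the plot):
+
+```
+import math
+import random
+
+class Tracked:
+    """Wraps a value and counts how often it's compared."""
+    count = 0
+    def __init__(self, v):
+        self.v = v
+    def __lt__(self, other):
+        Tracked.count += 1
+        return self.v < other.v
+
+n, trials = 100, 50
+for _ in range(trials):
+    xs = [Tracked(v) for v in random.sample(range(10**6), n)]
+    sorted(xs)
+
+print("average comparisons:", Tracked.count / trials)
+print("lower bound log2(n!):", math.log2(math.factorial(n)))
+```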
+
+### Summary so far
+
+Here’s what’s been proven so far:
+
+ 1. If the array could start in any order, at least (O(N\log N)) comparisons are needed in the worst case
+ 2. The average number of comparisons must be at least the entropy of the array, which is (O(N\log N)) for random input
+
+
+
+Note that #2 allows comparison-based sorting algorithms to be faster than (O(N\log N)) if the input is low entropy (in other words, more predictable). Merge sort is close to (O(N)) if the input contains many sorted subarrays. Insertion sort is close to (O(N)) if the input is an array that was sorted before being perturbed a bit. None of them beat (O(N\log N)) in the worst case unless some array orderings are impossible as inputs.
+
+## General sorting algorithms
+
+Comparison-based sorts are an interesting special case in practice, but there’s nothing theoretically special about [`CMP`][5] as opposed to any other instruction on a computer. Both arguments above can be generalised to any sorting algorithm if you note a couple of things:
+
+ 1. Most computer instructions have more than two possible outputs, but still have a limited number
+ 2. The limited number of outputs means that one instruction can only process a limited amount of entropy
+
+
+
+That gives us the same (O(N\log N)) lower bound on the number of instructions. Any physically realisable computer can only process a limited number of instructions at a time, so that’s an (O(N\log N)) lower bound on the time required, as well.
+
+### But what about “faster” algorithms?
+
+The most useful practical implication of the general (O(N\log N)) bound is that if you hear about any asymptotically faster algorithm, you know it must be “cheating” somehow. There must be some catch that means it isn’t a general purpose sorting algorithm that scales to arbitrarily large arrays. It might still be a useful algorithm, but it’s a good idea to read the fine print closely.
+
+A well-known example is radix sort. It’s often called an (O(N)) sorting algorithm, but the catch is that it only works if all the numbers fit into (k) bits, and it’s really (O({kN})).
+
+What does that mean in practice? Suppose you have an 8-bit machine. You can represent (2^{8} = 256) different numbers in 8 bits, so if you have an array of thousands of numbers, you’re going to have duplicates. That might be okay for some applications, but for others you need to upgrade to at least 16 bits, which can represent (2^{16} = 65,536) numbers distinctly. 32 bits will support (2^{32} = 4,294,967,296) different numbers. As the size of the array goes up, the number of bits needed will tend to go up, too. To represent (N) different numbers distinctly, you’ll need (k \geq \log_{2}N). So, unless you’re okay with lots of duplicates in your array, (O({kN})) is effectively (O(N\log N)).
+
+The need for (O(N\log N)) bits of input data in the general case actually proves the overall result by itself. That argument isn’t so interesting in practice because we rarely need to sort billions of integers on a 32-bit machine, and [if anyone’s hit the limits of a 64-bit machine, they haven’t told the rest of us][6].
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: http://hyperphysics.phy-astr.gsu.edu/hbase/Math/stirling.html
+[2]: https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
+[3]: https://en.wikipedia.org/wiki/Merge-insertion_sort
+[4]: /images/sorting_is_nlogn/sorting_algorithms_num_comparisons.svg
+[5]: https://c9x.me/x86/html/file_module_x86_id_35.html
+[6]: https://sortbenchmark.org/
diff --git a/sources/talk/20190108 NSA to Open Source its Reverse Engineering Tool GHIDRA.md b/sources/talk/20190108 NSA to Open Source its Reverse Engineering Tool GHIDRA.md
deleted file mode 100644
index 78922f9525..0000000000
--- a/sources/talk/20190108 NSA to Open Source its Reverse Engineering Tool GHIDRA.md
+++ /dev/null
@@ -1,89 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (NSA to Open Source its Reverse Engineering Tool GHIDRA)
-[#]: via: (https://itsfoss.com/nsa-ghidra-open-source)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-NSA to Open Source its Reverse Engineering Tool GHIDRA
-======
-
-GHIDRA – NSA’s reverse engineering tool is getting ready for a free public release this March at the [RSA Conference 2019][1] to be held in San Francisco.
-
-The National Security Agency (NSA) did not officially announce this – however – a senior NSA advisor, Robert Joyce’s [session description][2] on the official RSA conference website revealed about it before any official statement or announcement.
-
-Here’s what it mentioned:
-
-![][3]
-Image Credits: [Twitter][4]
-
-In case the text in the image isn’t properly visible, let me quote the description here:
-
-> NSA has developed a software reverse engineering framework known as GHIDRA, which will be demonstrated for the first time at RSAC 2019. An interactive GUI capability enables reverse engineers to leverage an integrated set of features that run on a variety of platforms including Windows, Mac OS, and Linux and supports a variety of processor instruction sets. The GHISDRA platform includes all the features expected in high-end commercial tools, with new and expanded functionality NSA uniquely developed. and will be released for free public use at RSA.
-
-### What is GHIDRA?
-
-GHIDRA is a software reverse engineering framework developed by [NSA][5] that is in use by the agency for more than a decade.
-
-Basically, a software reverse engineering tool helps to dig up the source code of a proprietary program which further gives you the ability to detect virus threats or potential bugs. You should read how [reverse engineering][6] works to know more.
-
-The tool is is written in Java and quite a few people compared it to high-end commercial reverse engineering tools available like [IDA][7].
-
-A [Reddit thread][8] involves more detailed discussion where you will find some ex-employees giving good amount of details before the availability of the tool.
-
-![NSA open source][9]
-
-### GHIDRA was a secret tool, how do we know about it?
-
-The existence of the tool was uncovered in a series of leaks by [WikiLeaks][10] as part of [Vault 7 documents of CIA][11].
-
-### Is it going to be open source?
-
-We do think that the reverse engineering tool to be released could be made open source. Even though there is no official confirmation mentioning “open source” – but a lot of people do believe that NSA is definitely targeting the open source community to help improve their tool while also reducing their effort to maintain this tool.
-
-This way the tool can remain free and the open source community can help improve GHIDRA as well.
-
-You can also check out the existing [Vault 7 document at WikiLeaks][12] to come up with your prediction.
-
-### Is NSA doing a good job here?
-
-The reverse engineering tool is going to be available for Windows, Linux, and Mac OS for free.
-
-Of course, we care about the Linux platform here – which could be a very good option for people who do not want to or cannot afford a thousand dollar license for a reverse engineering tool with the best-in-class features.
-
-### Wrapping Up
-
-If GHIDRA becomes open source and is available for free, it would definitely help a lot of researchers and students and on the other side – the competitors will be forced to adjust their pricing.
-
-What are your thoughts about it? Is it a good thing? What do you think about the tool going open sources Let us know what you think in the comments below.
-
-![][13]
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/nsa-ghidra-open-source
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://www.rsaconference.com/events/us19
-[2]: https://www.rsaconference.com/events/us19/agenda/sessions/16608-come-get-your-free-nsa-reverse-engineering-tool
-[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/come-get-your-free-nsa.jpg?fit=800%2C337&ssl=1
-[4]: https://twitter.com/0xffff0800/status/1080909700701405184
-[5]: http://nsa.gov
-[6]: https://en.wikipedia.org/wiki/Reverse_engineering
-[7]: https://en.wikipedia.org/wiki/Interactive_Disassembler
-[8]: https://www.reddit.com/r/ReverseEngineering/comments/ace2m3/come_get_your_free_nsa_reverse_engineering_tool/
-[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/nsa-open-source.jpeg?resize=800%2C450&ssl=1
-[10]: https://www.wikileaks.org/
-[11]: https://en.wikipedia.org/wiki/Vault_7
-[12]: https://wikileaks.org/ciav7p1/cms/page_9536070.html
-[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/nsa-open-source.jpeg?fit=800%2C450&ssl=1
diff --git a/sources/talk/20190115 The Art of Unix Programming, reformatted.md b/sources/talk/20190115 The Art of Unix Programming, reformatted.md
deleted file mode 100644
index 73ffb4c955..0000000000
--- a/sources/talk/20190115 The Art of Unix Programming, reformatted.md
+++ /dev/null
@@ -1,54 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The Art of Unix Programming, reformatted)
-[#]: via: (https://arp242.net/weblog/the-art-of-unix-programming.html)
-[#]: author: (Martin Tournoij https://arp242.net/)
-
-The Art of Unix Programming, reformatted
-======
-
-tl;dr: I reformatted Eric S. Raymond’s The Art of Unix Programming for readability; [read it here][1].
-
-I recently wanted to look up a quote for an article I was writing, and I was fairly sure I had read it in The Art of Unix Programming. Eric S. Raymond (esr) has [kindly published it online][2], but it’s difficult to search as it’s distributed over many different pages, and the formatting is not exactly conducive for readability.
-
-I `wget --mirror`’d it to my drive, and started out with a simple [script][3] to join everything to a single page, but eventually ended up rewriting a lot of the HTML from crappy 2003 docbook-generated tagsoup to more modern standards, and I slapped on some CSS to make it more readable.
-
-The results are fairly nice, and it should work well in any version of any browser (I haven’t tested Internet Explorer and Edge, lacking access to a Windows computer, but I’m reasonably confident it should work without issues; if not, see the bottom of this page on how to get in touch).
-
-The HTML could be simplified further (so rms can read it too), but dealing with 360k lines of ill-formatted HTML is not exactly my idea of fun, so this will have to do for now.
-
-The entire page is self-contained. You can save it to your laptop or mobile phone and read it on a plane or whatnot.
-
-Why spend so much work on an IT book from 2003? I think a substantial part of the book still applies very much today, for all programmers (not just Unix programmers). For example the [Basics of the Unix Philosophy][4] was good advice in 1972, is still good advice in 2019, and will continue to be good advice well in to the future.
-
-Other parts have aged less gracefully; for example “since 2000, practice has been moving toward use of XML-DocBook as a documentation interchange format” doesn’t really represent the current state of things, and the [Data File Metaformats][5] section mentions XML and INI, but not JSON or YAML (as they weren’t invented until after the book was written).
-
-I find this adds, rather than detracts. It makes for an interesting window into the past. The downside is that the uninitiated will have a bit of a hard time distinguishing between the good and outdated parts; as a rule of thumb: if it talks about abstract concepts, it probably still applies today. If it talks about specific software, it may be outdated.
-
-I toyed with the idea of updating or annotating the text, but the license doesn’t allow derivative works, so that’s not going to happen. Perhaps I’ll email esr and ask nicely. Another project, for another weekend :-)
-
-You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc.
-
---------------------------------------------------------------------------------
-
-via: https://arp242.net/weblog/the-art-of-unix-programming.html
-
-作者:[Martin Tournoij][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://arp242.net/
-[b]: https://github.com/lujun9972
-[1]: https://arp242.net/the-art-of-unix-programming/
-[2]: http://catb.org/~esr/writings/taoup/html/
-[3]: https://arp242.net/the-art-of-unix-programming/fix-taoup.py
-[4]: https://arp242.net/the-art-of-unix-programming#ch01s06
-[5]: https://arp242.net/the-art-of-unix-programming/#ch05s02
-[6]: mailto:martin@arp242.net
-[7]: https://github.com/Carpetsmoker/arp242.net/issues/new
diff --git a/sources/talk/20190211 Introducing kids to computational thinking with Python.md b/sources/talk/20190211 Introducing kids to computational thinking with Python.md
deleted file mode 100644
index c877d3c212..0000000000
--- a/sources/talk/20190211 Introducing kids to computational thinking with Python.md
+++ /dev/null
@@ -1,69 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (WangYueScream )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Introducing kids to computational thinking with Python)
-[#]: via: (https://opensource.com/article/19/2/break-down-stereotypes-python)
-[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
-
-Introducing kids to computational thinking with Python
-======
-Coding program gives low-income students the skills, confidence, and knowledge to break free from economic and societal disadvantages.
-
-
-
-When the [Parkman Branch][1] of the Detroit Public Library was flooded with bored children taking up all the computers during summer break, the library saw it not as a problem but as an opportunity. They started a coding club, the [Parkman Coders][2], led by [Qumisha Goss][3], a librarian who is leveraging the power of Python to introduce disadvantaged children to computational thinking.
-
-When she started the Parkman Coders program about four years ago, "Q" (as she is known) didn't know much about coding. Since then, she's become a specialist in library instruction and technology and a certified Raspberry Pi instructor.
-
-The program began by using [Scratch][4], but the students got bored with the block coding interface, which they regarded as "baby stuff." She says, "I knew we need to make a change to something that was still beginner friendly, but that would be more challenging for them to continue to hold their attention." At this point, she started teaching them Python.
-
-Q first saw Python while playing a game with dungeons and skeleton monsters on [Code.org][5]. She began to learn Python by reading books like [Python Programming: An Introduction to Computer Science][6] and [Python for Kids][7]. She also recommends [Automate the Boring Stuff with Python][8] and [Lauren Ipsum: A Story about Computer Science and Other Improbable Things][9].
-
-### Setting up a Raspberry Pi makerspace
-
-Q decided to use [Raspberry Pi][10] computers to avoid the possibility that the students might be able to hack into the library system's computers, which weren't arranged in a way conducive to a makerspace anyway. The Pi's affordability, plus its flexibility and the included free software, lent more credibility to her decision.
-
-While the coder program was the library's effort to keep the peace and create a learning space that would engage the children, it quickly grew so popular that it ran out of space, computers, and adequate electrical outlets in a building built in 1921. They started with 10 Raspberry Pi computers shared among 20 children, but the library obtained funding from individuals, companies including Microsoft, the 4H, and the Detroit Public Library Foundation to get more equipment and expand the program.
-
-Currently, about 40 children participate in each session, and they have enough Raspberry Pis for one device per child and some to give away. Many of the Parkman Coders come from low socio-economic backgrounds and don't have a computer at home, so the library provides them with donated Chromebooks.
-
-Q says, "when kids demonstrate that they have a good understanding of how to use a Raspberry Pi or a [Microbit][11] and have been coming to programs regularly, we give them equipment to take home with them. This process is very challenging, however, because [they may not] have internet access at home [or] all the peripheral things they need like monitors, keyboards, and mice."
-
-### Learning life skills and breaking stereotypes with Python
-
-Q says, "I believe that the mainstays of learning computer science are learning critical thinking and problem-solving skills. My hope is that these lessons will stay with the kids as they grow and pursue futures in whatever field they choose. In addition, I'm hoping to inspire some pride in creatorship. It's a very powerful feeling to know 'I made this thing,' and once they've had these successes early, I hope they will approach new challenges with zeal."
-
-She also says, "in learning to program, you have to learn to be hyper-vigilant about spelling and capitalization, and for some of our kids, reading is an issue. To make sure that the program is inclusive, we spell aloud during our lessons, and we encourage kids to speak up if they don't know a word or can't spell it correctly."
-
-Q also tries to give extra attention to children who need it. She says, "if I recognize that someone has a more severe problem, we try to get them paired with a tutor at our library outside of program time, but still allow them to come to the program. We want to help them without discouraging them from participating."
-
-Most importantly, the Parkman Coders program seeks to help every child realize that each has a unique skill set and abilities. Most of the children are African-American and half are girls. Q says, "we live in a world where we grow up with societal stigmas that frequently limit our own belief of what we can accomplish." She believes that children need a nonjudgmental space where "they can try new things, mess up, and discover."
-
-The environment Q and the Parkman Coders program creates helps the participants break away from economic and societal disadvantages. She says that the secret sauce is to "make sure you have a welcoming space so anyone can come and that your space is forgiving and understanding. Let people come as they are, and be prepared to teach and to learn; when people feel comfortable and engaged, they want to stay."
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/break-down-stereotypes-python
-
-作者:[Don Watkins][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/don-watkins
-[b]: https://github.com/lujun9972
-[1]: https://detroitpubliclibrary.org/locations/parkman
-[2]: https://www.dplfound.org/single-post/2016/05/15/Parkman-Branch-Coders
-[3]: https://www.linkedin.com/in/qumisha-goss-b3bb5470
-[4]: https://scratch.mit.edu/
-[5]: http://Code.org
-[6]: https://www.amazon.com/Python-Programming-Introduction-Computer-Science/dp/1887902996
-[7]: https://nostarch.com/pythonforkids
-[8]: https://automatetheboringstuff.com/
-[9]: https://nostarch.com/laurenipsum
-[10]: https://www.raspberrypi.org/
-[11]: https://microbit.org/guide/
diff --git a/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md b/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md
deleted file mode 100644
index bb7dd14943..0000000000
--- a/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (No! Ubuntu is NOT Replacing Apt with Snap)
-[#]: via: (https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-No! Ubuntu is NOT Replacing Apt with Snap
-======
-
-Stop believing the rumors that Ubuntu is planning to replace Apt with Snap in the [Ubuntu 19.04 release][1]. These are only rumors.
-
-![Snap replacing apt rumors][2]
-
-Don’t get what I am talking about? Let me give you some context.
-
-There is a ‘blueprint’ on Ubuntu’s Launchpad website, titled ‘Replace APT with snap as default package manager’. It talks about replacing Apt (the package manager at the heart of Debian) with Snap (a new packaging system by Ubuntu).
-
-> Thanks to Snap, the need for APT is disappearing, fast… why don’t we use snap at the system level?
-
-The post further says “Imagine, for example, being able to run “sudo snap install cosmic” to upgrade to the current release, “sudo snap install –beta disco” (in March) to upgrade to a beta release, or, for that matter, “sudo snap install –edge disco” to upgrade to a pre-beta release. It would make the whole process much easier, and updates could simply be delivered as updates to the corresponding snap, which could then just be pushed to the repositories and there it is. This way, instead of having a separate release updater, it would be possible to A, run all system updates completely and silently in the background to avoid nagging the user (a la Chrome OS), and B, offer release upgrades in the GNOME software store, Mac-style, as banners, so the user can install them easily. It would make the user experience both more consistent and even more user-friendly than it currently is.”
-
-It might sound good and promising and if you take a look at [this link][3], even you might start believing the rumor. Why? Because at the bottom of the blueprint information, it lists Ubuntu-founder Mark Shuttleworth as the approver.
-
-![Apt being replaced with Snap blueprint rumor][4]Mark Shuttleworth’s name adds to the confusion
-
-The rumor got fanned when the Switch to Linux YouTube channel covered it. You can watch the video from around 11:30.
-
-
-
-When this ‘news’ was brought to my attention, I reached out to Alan Pope of Canonical and asked him if he or his colleagues at Canonical (Ubuntu’s parent company) could confirm it.
-
-Alan clarified that the so-called blueprint was not associated with the official Ubuntu team. It was created as a proposal by a community member not affiliated with Ubuntu.
-
-> That’s not anything official. Some random community person made it. Anyone can write a blueprint.
->
-> Alan Pope, Canonical
-
-Alan further elaborated that anyone can create such blueprints and tag Mark Shuttleworth or other Ubuntu members in them. Just because Mark’s name was listed as the approver doesn’t mean he approved the idea.
-
-Canonical has no such plans to replace Apt with Snap. It’s not as simple as the blueprint in question suggests.
-
-After talking with Alan, I decided to not write about this topic because I don’t want to fan baseless rumors and confuse people.
-
-Unfortunately, the ‘replace Apt with Snap’ blueprint is still being shared on various Ubuntu and Linux related groups and forums. Alan had to publicly dismiss these rumors in a series of tweets:
-
-> Seen this [#Ubuntu][5] blueprint being shared around the internet. It's not official, not a thing we're doing. Just because someone made a blueprint, doesn't make it fact.
->
-> — Alan Pope 🇪🇺🇬🇧 (@popey) [February 23, 2019][6]
-
-I don’t want you, the It’s FOSS reader, to fall for such silly rumors, so I quickly penned this article.
-
-If you come across ‘apt being replaced with snap’ discussion, you may tell people that it’s not true and provide them this link as a reference.
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/ubuntu-19-04-release-features/
-[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/snap-replacing-apt.png?resize=800%2C450&ssl=1
-[3]: https://blueprints.launchpad.net/ubuntu/+spec/package-management-default-snap
-[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/apt-snap-blueprint.jpg?ssl=1
-[5]: https://twitter.com/hashtag/Ubuntu?src=hash&ref_src=twsrc%5Etfw
-[6]: https://twitter.com/popey/status/1099238146393468931?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20190228 Why CLAs aren-t good for open source.md b/sources/talk/20190228 Why CLAs aren-t good for open source.md
deleted file mode 100644
index ca39619762..0000000000
--- a/sources/talk/20190228 Why CLAs aren-t good for open source.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Why CLAs aren't good for open source)
-[#]: via: (https://opensource.com/article/19/2/cla-problems)
-[#]: author: (Richard Fontana https://opensource.com/users/fontana)
-
-Why CLAs aren't good for open source
-======
-Few legal topics in open source are as controversial as contributor license agreements.
-
-
-Few legal topics in open source are as controversial as [contributor license agreements][1] (CLAs). Unless you count the special historical case of the [Fedora Project Contributor Agreement][2] (which I've always seen as an un-CLA), or, like [Karl Fogel][3], you classify the [DCO][4] as a [type of CLA][5], today Red Hat makes no use of CLAs for the projects it maintains.
-
-It wasn't always so. Red Hat's earliest projects followed the traditional practice I've called "inbound=outbound," in which contributions to a project are simply provided under the project's open source license with no execution of an external, non-FOSS contract required. But in the early 2000s, Red Hat began experimenting with the use of contributor agreements. Fedora started requiring contributors to sign a CLA based on the widely adopted [Apache ICLA][6], while a Free Software Foundation-derived copyright assignment agreement and a pair of bespoke CLAs were inherited from the Cygnus and JBoss acquisitions, respectively. We even took [a few steps][7] towards adopting an Apache-style CLA across the rapidly growing set of Red Hat-led projects.
-
-This came to an end, in large part because those of us on the Red Hat legal team heard and understood the concerns and objections raised by Red Hat engineers and the wider technical community. We went on to become de facto leaders of what some have called the anti-CLA movement, marked notably by our [opposition to Project Harmony][8] and our [efforts][9] to get OpenStack to replace its CLA with the DCO. (We [reluctantly][10] sign tolerable upstream project CLAs out of practical necessity.)
-
-### Why CLAs are problematic
-
-Our choice not to use CLAs is a reflection of our values as an authentic open source company with deep roots in the free software movement. Over the years, many in the open source community have explained why CLAs, and the very similar mechanism of copyright assignment, are a bad policy for open source.
-
-One reason is the red tape problem. Normally, open source development is characterized by frictionless contribution, which is enabled by inbound=outbound without imposition of further legal ceremony or process. This makes it relatively easy for new contributors to get involved in a project, allowing more effective growth of contributor communities and driving technical innovation upstream. Frictionless contribution is a key part of the advantage open source development holds over proprietary alternatives. But frictionless contribution is negated by CLAs. Having to sign an unusual legal agreement before a contribution can be accepted creates a bureaucratic hurdle that slows down development and discourages participation. This cost persists despite the growing use of automation by CLA-using projects.
-
-CLAs also give rise to an asymmetry of legal power among a project's participants, which also discourages the growth of strong contributor and user communities around a project. With Apache-style CLAs, the company or organization leading the project gets special rights that other contributors do not receive, while those other contributors must shoulder certain legal obligations (in addition to the red tape burden) from which the project leader is exempt. The problem of asymmetry is most severe in copyleft projects, but it is present even when the outbound license is permissive.
-
-When assessing the arguments for and against CLAs, bear in mind that today, as in the past, the vast majority of the open source code in any product originates in projects that follow the inbound=outbound practice. The use of CLAs by a relatively small number of projects causes collateral harm to all the others by signaling that, for some reason, open source licensing is insufficient to handle contributions flowing into a project.
-
-### The case for CLAs
-
-Since CLAs continue to be a minority practice and originate from outside open source community culture, I believe that CLA proponents should bear the burden of explaining why they are necessary or beneficial relative to their costs. I suspect that most companies using CLAs are merely emulating peer company behavior without critical examination. CLAs have an understandable, if superficial, appeal to risk-averse lawyers who are predisposed to favor greater formality, paper, and process regardless of the business costs. Still, some arguments in favor of CLAs are often advanced and deserve consideration.
-
-**Easy relicensing:** If administered appropriately, Apache-style CLAs give the project steward effectively unlimited power to sublicense contributions under terms of the steward's choice. This is sometimes seen as desirable because of the potential need to relicense a project under some other open source license. But the value of easy relicensing has been greatly exaggerated by pointing to a few historical cases involving major relicensing campaigns undertaken by projects with an unusually large number of past contributors (all of which were successful without the use of a CLA). There are benefits in relicensing being hard because it results in stable legal expectations around a project and encourages projects to consult their contributor communities before undertaking significant legal policy changes. In any case, most inbound=outbound open source projects never attempt to relicense during their lifetime, and for the small number that do, relicensing will be relatively painless because typically the number of past contributors to contact will not be large.
-
-**Provenance tracking:** It is sometimes claimed that CLAs enable a project to rigorously track the provenance of contributions, which purportedly has some legal benefit. It is unclear what is achieved by the use of CLAs in this regard that is not better handled through such non-CLA means as preserving Git commit history. And the DCO would seem to be much better suited to tracking contributions, given that it is normally used on a per-commit basis, while CLAs are signed once per contributor and are administratively separate from code contributions. Moreover, provenance tracking is often described as though it were a benefit for the public, yet I know of no case where a project provides transparent, ready public access to CLA acceptance records.
-
-**License revocation:** Some CLA advocates warn of the prospect that a contributor may someday attempt to revoke a past license grant. To the extent that the concern is about largely judgment-proof individual contributors with no corporate affiliation, it is not clear why an Apache-style CLA provides more meaningful protection against this outcome compared to the use of an open source license. And, as with so many of the legal risks raised in discussions of open source legal policy, this appears to be a phantom risk. I have heard of only a few purported attempts at license revocation over the years, all of which were resolved quickly when the contributor backed down in the face of community pressure.
-
-**Unauthorized employee contribution:** This is a special case of the license revocation issue and has recently become a point commonly raised by CLA advocates. When an employee contributes to an upstream project, normally the employer owns the copyrights and patents for which the project needs licenses, and only certain executives are authorized to grant such licenses. Suppose an employee contributed proprietary code to a project without approval from the employer, and the employer later discovers this and demands removal of the contribution or sues the project's users. This risk of unauthorized contributions is thought to be minimized by use of something like the [Apache CCLA][11] with its representations and signature requirement, coupled with some adequate review process to ascertain that the CCLA signer likely was authorized to sign (a step which I suspect is not meaningfully undertaken by most CLA-using companies).
-
-Based on common sense and common experience, I contend that in nearly all cases today, employee contributions are done with the actual or constructive knowledge and consent of the employer. If there were an atmosphere of high litigation risk surrounding open source software, perhaps this risk should be taken more seriously, but litigation arising out of open source projects remains remarkably uncommon.
-
-More to the point, I know of no case where an allegation of copyright or patent infringement against an inbound=outbound project, not stemming from an alleged open source license violation, would have been prevented by use of a CLA. Patent risk, in particular, is often cited by CLA proponents when pointing to the risk of unauthorized contributions, but the patent license grants in Apache-style CLAs are, by design, quite narrow in scope. Moreover, corporate contributions to an open source project will typically be few in number, small in size (and thus easily replaceable), and likely to be discarded as time goes on.
-
-### Alternatives
-
-If your company does not buy into the anti-CLA case and cannot get comfortable with the simple use of inbound=outbound, there are alternatives to resorting to an asymmetric and administratively burdensome Apache-style CLA requirement. The use of the DCO as a complement to inbound=outbound addresses at least some of the concerns of risk-averse CLA advocates. If you must use a true CLA, there is no need to use the Apache model (let alone a [monstrous derivative][10] of it). Consider the non-specification core of the [Eclipse Contributor Agreement][12]—essentially the DCO wrapped inside a CLA—or the Software Freedom Conservancy's [Selenium CLA][13], which merely ceremonializes an inbound=outbound contribution policy.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/cla-problems
-
-作者:[Richard Fontana][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/fontana
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/article/18/3/cla-vs-dco-whats-difference
-[2]: https://opensource.com/law/10/6/new-contributor-agreement-fedora
-[3]: https://www.red-bean.com/kfogel/
-[4]: https://developercertificate.org
-[5]: https://producingoss.com/en/contributor-agreements.html#developer-certificate-of-origin
-[6]: https://www.apache.org/licenses/icla.pdf
-[7]: https://www.freeipa.org/page/Why_CLA%3F
-[8]: https://opensource.com/law/11/7/trouble-harmony-part-1
-[9]: https://wiki.openstack.org/wiki/OpenStackAndItsCLA
-[10]: https://opensource.com/article/19/1/cla-proliferation
-[11]: https://www.apache.org/licenses/cla-corporate.txt
-[12]: https://www.eclipse.org/legal/ECA.php
-[13]: https://docs.google.com/forms/d/e/1FAIpQLSd2FsN12NzjCs450ZmJzkJNulmRC8r8l8NYwVW5KWNX7XDiUw/viewform?hl=en_US&formkey=dFFjXzBzM1VwekFlOWFWMjFFRjJMRFE6MQ#gid=0
diff --git a/sources/talk/20190311 Discuss everything Fedora.md b/sources/talk/20190311 Discuss everything Fedora.md
deleted file mode 100644
index 5795fbf3f7..0000000000
--- a/sources/talk/20190311 Discuss everything Fedora.md
+++ /dev/null
@@ -1,45 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Discuss everything Fedora)
-[#]: via: (https://fedoramagazine.org/discuss-everything-fedora/)
-[#]: author: (Ryan Lerch https://fedoramagazine.org/introducing-flatpak/)
-
-Discuss everything Fedora
-======
-
-
-Are you interested in how Fedora is being developed? Do you want to get involved, or see what goes into making a release? You want to check out [Fedora Discussion][1]. It is a relatively new place where members of the Fedora Community meet to discuss, ask questions, and interact. Keep reading for more information.
-
-Note that the Fedora Discussion system is mainly aimed at contributors. If you have questions on using Fedora, check out [Ask Fedora][2] (which is being migrated in the future).
-
-![][3]
-
-Fedora Discussion is a forum and discussion site that uses the [Discourse open source discussion platform][4].
-
-There are already several categories useful for Fedora users, including [Desktop][5] (covering Fedora Workstation, Fedora Silverblue, KDE, XFCE, and more) and the [Server, Cloud, and IoT][6] category. Additionally, some of the [Fedora Special Interest Groups (SIGs) have discussions as well][7]. Finally, the [Fedora Friends][8] category helps you connect with other Fedora users and Community members by providing discussions about upcoming meetups and hackfests.
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/discuss-everything-fedora/
-
-作者:[Ryan Lerch][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/introducing-flatpak/
-[b]: https://github.com/lujun9972
-[1]: https://discussion.fedoraproject.org/
-[2]: https://ask.fedoraproject.org
-[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/discussion-screenshot-1024x663.png
-[4]: https://www.discourse.org/about
-[5]: https://discussion.fedoraproject.org/c/desktop
-[6]: https://discussion.fedoraproject.org/c/server
-[7]: https://discussion.fedoraproject.org/c/sigs
-[8]: https://discussion.fedoraproject.org/c/friends
diff --git a/sources/talk/20190319 Hello World Marketing (or, How I Find Good, Boring Software).md b/sources/talk/20190319 Hello World Marketing (or, How I Find Good, Boring Software).md
new file mode 100644
index 0000000000..02f7e3fcac
--- /dev/null
+++ b/sources/talk/20190319 Hello World Marketing (or, How I Find Good, Boring Software).md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Hello World Marketing (or, How I Find Good, Boring Software))
+[#]: via: (https://theartofmachinery.com/2019/03/19/hello_world_marketing.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Hello World Marketing (or, How I Find Good, Boring Software)
+======
+
+Back in 2001 Joel Spolsky wrote his classic essay [“Good Software Takes Ten Years. Get Used To it”][1]. Nothing much has changed since then: software is still taking around a decade of development to get good, and the industry is still getting used to that fact. Unfortunately, the industry has investors who want to see hockey stick growth rates on software that’s a year old or less. The result is an antipattern I like to call “Hello World Marketing”. Once you start to notice it, you see it everywhere, and it’s a huge red flag when choosing software tools.
+
+
+Of course, by “Hello World”, I’m referring to the programmer’s traditional first program: the one that just displays the message “Hello World”. The aim isn’t to make a useful program; it’s to make a minimal starting point.
+
+Hello World Marketing is about doing the same thing, but pretending that it’s useful. You’re supposed to be distracted into admiring how neatly a tool solves trivial problems, and forget about features you’ll need in real applications. HWM emphasises what can be done in the first five minutes, and downplays what you might need after several months. HWMed software is optimised for looking good in demos, and sounding exciting in blog posts and presentations.
+
+For a good example, see Nemil Dalal’s [great series of articles about the early marketing for MongoDB][2]. Notice the heavy use of hackathons, and that a lot of the marketing was about how “SQL looks like COBOL”. Now, I can criticise SQL, too, but if `SELECT` and `WHERE` are serious problems for an application, there are already hundreds of solutions like [SQLAlchemy][3] and [LINQ][4] — solutions that don’t compromise on more advanced features of traditional databases. On the other hand, if you were wondering about those advanced features, you could read vomity-worthy, hand-wavey pieces like “[Living in the post-transactional database future][5]”.
+
+### How I Find Good, Boring Software
+
+Obviously, one way to avoid HWM is to stick to software that’s much more than ten years old, and has a good reputation. But sometimes that’s not possible because the tools for a problem only came out during the last decade. Also, sometimes newer tools really do bring new benefits.
+
+However, it’s much harder to rely on reputation for newer software because “good reputation” often just means “popular”, which often just means “current fad”. Thankfully, there’s a simple and effective trick to avoid being dazzled by hype: just look elsewhere. Instead of looking at the marketing for the core features, look at the things that are usually forgotten. Here are the kinds of things I look at:
+
+#### Backups and Disaster Recovery
+
+Backup support is both super important and regularly an afterthought.
+
+The minimum viable product is full data dump/import functionality, but longer term it’s nice to have things like incremental backups. Some vendors will try to tell you to just copy the data files from disk, but this isn’t guaranteed to give you a consistent snapshot if the software is running live.
+
+There’s no point backing up data if you can’t restore it, and restoration is the difficult part. Yet many people never test the restoration (until they actually need it). About five years ago I was working with a team that had started using a new, cutting-edge, big-data database. The database looked pretty good, but I suggested we do an end-to-end test of the backup support. We loaded a cluster with one of the multi-terabyte datasets we had, did a backup, wiped the data in the cluster and then tried to restore it. Turns out we were the first people to actually try to restore a dataset of that size — the backup “worked”, but the restoration caused the cluster to crash and burn. We filed a bug report with the original database developers and they fixed it.
+
+A backup process that works on small test datasets but fails on large production datasets is a recurring theme. I always recommend testing on production-sized datasets, and testing again as production data grows.
+
+For batch jobs, a related concept is restartability. If you’re copying large amounts of data from one place to another, and the job gets interrupted in the middle, what happens? Can you keep going from the middle? Alternatively, can you safely retry by starting from the beginning?
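Here's one rough way to get restartability (all names invented for illustration): checkpoint progress after every successfully written batch, so a retried job resumes from where it stopped instead of from the beginning.

```python
import json
import os

CHECKPOINT = "copy_job.checkpoint"  # hypothetical path, for illustration

def copy_batch(source, dest, batch_size=100):
    """Copy records from source to dest, resuming from a checkpoint if present."""
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = json.load(f)["offset"]

    for offset in range(start, len(source), batch_size):
        dest.extend(source[offset:offset + batch_size])
        # Record progress only after the batch is safely written, so a crash
        # between batches re-copies at most one batch.
        with open(CHECKPOINT, "w") as f:
            json.dump({"offset": offset + batch_size}, f)

    # A finished job leaves no checkpoint behind.
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)
    return dest
```

The safe-retry alternative is to make each batch idempotent (e.g., an upsert keyed on record ID), so restarting from the beginning is merely slow, not wrong.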
+
+#### Configuration
+
+A lot of HWMed software can only be configured using a GUI or web UI because that’s what’s obvious and looks good in demos and docs. For one thing, this usually means there’s no good way to back up or restore the configuration. So if a team of people use a shared instance over a year or so, forget about trying to restore it if (or when) it breaks. It’s also much more work to keep multiple deployments consistent (e.g., for dev, testing and prod environments) using separate GUIs. In practice, it just doesn’t happen.
+
+I prefer a well-commented config file for software I deploy, if nothing else because it can be checked into source control, and I know I can reproduce the deployment using nothing but what’s checked into source control. If something is configured using a UI, I look for a config export/import function. Even then, that feature is often an afterthought, and often incomplete, so it’s worth testing whether it’s possible to deploy the software without ever needing to manually tweak something in the UI.
+
+There seems to be a recent trend for software to be configured using a REST API instead. Honestly, this is the worst of both config files and GUI-based config, and most of the time people end up using [hacky ways to put the config into a file instead][6].
+
+#### Upgrades
+
+Life would be much easier if everything were static; software upgrade support makes everything more complicated. It’s also not usually shown in demos, so the first upgrade often ends the honeymoon with shiny, new software.
+
+For HA distributed systems, you’ll need support for graceful shutdown and a certain amount of forward _and_ backwards compatibility (because you’ll have multiple versions running during upgrades). It’s a common mistake to forget about downgrade support.
+
+Distributed systems are simpler when components have independent replicas that don’t communicate with each other. Anything with clustering (or, worse, consensus algorithms) is often extra tricky to upgrade, and worth testing.
+
+Things that support horizontal scaling don’t necessarily support rescaling without downtime. This is especially true whenever sharding is involved because live resharding isn’t trivial.
+
+Here’s a story from a certain popular container app platform. Demos showed how easy it was to launch an app on the platform, and then showed how easy it was to scale it to multiple replicas. What they didn’t show was the upgrade process: When you pushed a new version of your app, the first thing the platform did was _shut down all running instances of it_. Then it would upload the code to a build server and start building it — meaning downtime for however long the build took, plus the time needed to roll out the new version (if it worked). This problem has been fixed in newer releases of the platform.
+
+#### Security
+
+Even if software has no built-in access control, all-or-nothing access control is easy to implement (e.g., using a reverse proxy with HTTP basic auth). The harder problem is fine-grained access control. Sometimes you don’t care, but in some environments it makes a big difference to what features you can even use.
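For a sense of how little is involved, here's roughly the check that sits behind HTTP basic auth — a sketch only; a real deployment would put something like nginx plus an htpasswd file in front, and compare against hashed credentials rather than a plaintext pair:

```python
import base64
import hmac

def is_authorized(auth_header, expected_user, expected_password):
    """All-or-nothing check of an HTTP 'Authorization: Basic ...' header."""
    if not auth_header or not auth_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(auth_header[len("Basic "):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    user, _, password = decoded.partition(":")
    # Constant-time comparison, so the check doesn't leak via timing.
    return (hmac.compare_digest(user, expected_user)
            and hmac.compare_digest(password, expected_password))
```

Fine-grained access control is the hard part precisely because it has no ten-line equivalent of this.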
+
+Some immature software has a quick-and-dirty implementation of user-based access control, typically with a GUI for user management. For everything except the core business tool, this isn’t very useful. For human users, every project I’ve worked on has either been with a small team that just shared a single username/password, or with a large team that wanted integration with OpenID Connect, or LDAP, or whatever centralised single-sign-on (SSO) system was used by the organisation. No one wants to manually manage credentials for every tool, every time someone joins or leaves. Similarly, credentials for applications or other non-human users are better generated using an automatable approach — like a config file or API.
+
+Immature implementations of access control are often missing anything like user groups, but managing permissions at the user level is a time waster. Some SSO integrations only integrate users, not groups, which is a “so close yet so far” situation when it comes to avoiding permissions busywork.
+
+#### Others
+
+I talked about ignoring the hype, but there’s one good signal you can get from the marketing: whether the software is branded as “enterprise” software. Enterprise software is normally bought by someone other than the end user, so it’s usually pleasant to buy but horrible to use. The only exceptions I know of are enterprise versions of normal consumer software, and enterprise software that the buyer will also have to use. Be warned: even if a company sells enterprise software alongside consumer software, there’s no guarantee that they’re just different versions of the same product. Often they’ll be developed by separate teams with different priorities.
+
+A lot of the stuff in this post can be checked just by skimming through the documentation. If a tool stores data, but the documentation doesn’t mention backups, there probably isn’t any backup support. Even if there is and it’s just not documented, that’s not exactly a good sign either. So, sure, documentation quality is worth evaluating by itself. On the other hand, sometimes the documentation is better than the product, so I never trust a tool until I’ve actually tried it out.
+
+When I first saw Python, I knew that it was a terrible programming language because of the way it used whitespace indentation. Yeah, that was stupid. Later on I learned that 1) the syntax wasn’t a big deal, especially when I’m already indenting C-like languages in the same way, and 2) a lot of practical problems can be solved just by gluing libraries together with a few dozen lines of Python, and that was really useful. We often have strong opinions about syntax that are just prejudice. Syntax can matter, but it’s less important than how the tool integrates with the rest of the system.
+
+### Weighing Pros and Cons
+
+You never need to do deep analysis to detect the most overhyped products. Just check a few of these things and they’ll fail spectacularly.
+
+Even with software that looks solid, I still like to do more tests before entrusting a serious project with it. That’s not because I’m looking for excuses to nitpick and use my favourite tool instead. New tools often really do bring new benefits. But it’s much better to understand the pros and cons of new software, and to use it because the pros outweigh the cons, not because of how slick the Hello World demo is.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/03/19/hello_world_marketing.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/
+[2]: https://www.nemil.com/mongo/
+[3]: https://www.sqlalchemy.org/
+[4]: https://msdn.microsoft.com/en-us/library/bb308959.aspx
+[5]: https://www.mongodb.com/post/36151042528/post-transactional-future
+[6]: /2017/07/15/use_terraform_with_vault.html
diff --git a/sources/talk/20190322 How to save time with TiDB.md b/sources/talk/20190322 How to save time with TiDB.md
deleted file mode 100644
index 534c04de1f..0000000000
--- a/sources/talk/20190322 How to save time with TiDB.md
+++ /dev/null
@@ -1,143 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to save time with TiDB)
-[#]: via: (https://opensource.com/article/19/3/how-save-time-tidb)
-[#]: author: (Morgan Tocker https://opensource.com/users/morgo)
-
-How to save time with TiDB
-======
-
-TiDB, an open source-compatible, cloud-based database engine, simplifies many of MySQL database administrators' common tasks.
-
-![Team checklist][1]
-
-Last November, I wrote about key [differences between MySQL and TiDB][2], an open source-compatible, cloud-based database engine, from the perspective of scaling both solutions in the cloud. In this follow-up article, I'll dive deeper into the ways [TiDB][3] streamlines and simplifies administration.
-
-If you come from a MySQL background, you may be used to doing a lot of manual tasks that are either not required or much simpler with TiDB.
-
-The inspiration for TiDB came from the founders managing sharded MySQL at scale at some of China's largest internet companies. Since requirements for operating a large system at scale are a key concern, I'll look at some typical MySQL database administrator (DBA) tasks and how they translate to TiDB.
-
-[![TiDB architecture][4]][5]
-
-In [TiDB's architecture][5]:
-
- * SQL processing is separated from data storage. The SQL processing (TiDB) and storage (TiKV) components independently scale horizontally.
- * PD (Placement Driver) acts as the cluster manager and stores metadata.
- * All components natively provide high availability, with PD and TiKV using the [Raft consensus algorithm][6].
- * You can access your data via either MySQL (TiDB) or Spark (TiSpark) protocols.
-
-
-
-### Adding/fixing replication slaves
-
-**tl;dr:** It doesn't happen in the same way as in MySQL.
-
-Replication and redundancy of data are automatically managed by TiKV. You also don't need to worry about creating initial backups to seed replicas, as _both_ the provisioning and replication are handled for you.
-
-Replication is also quorum-based using the Raft consensus algorithm, so you don't have to worry about the inconsistency problems surrounding failures that you do with asynchronous replication (the default in MySQL and what many users are using).
-
-TiDB does support its own binary log, so it can be used for asynchronous replication between clusters.
-
-### Optimizing slow queries
-
-**tl;dr:** Still happens in TiDB
-
-There is no real way out of optimizing slow queries that have been introduced by development teams.
-
-As a mitigating factor though, if you need to add breathing room to your database's capacity while you work on optimization, the TiDB's architecture allows you to horizontally scale.
-
-### Upgrades and maintenance
-
-**tl;dr:** Still required, but generally easier
-
-Because the TiDB server is stateless, you can roll through an upgrade and deploy new TiDB servers. Then you can remove the older TiDB servers from the load balancer pool, shutting them down once connections have drained.
-
-Upgrading PD is also quite straightforward since only the PD leader actively answers requests at a time. You can perform a rolling upgrade and upgrade PD's non-leader peers one at a time, and then change the leader before upgrading the final PD server.
-
-For TiKV, the upgrade is marginally more complex. If you want to remove a node, I recommend first setting it to be a follower on each of the regions where it is currently a leader. After that, you can bring down the node without impacting your application. If the downtime is brief, TiKV will recover with its regional peers from the Raft log. In a longer downtime, it will need to re-copy data. This can all be managed for you, though, if you choose to deploy using Ansible or Kubernetes.
-
-### Manual sharding
-
-**tl;dr:** Not required
-
-Manual sharding is mainly a pain on the part of the application developers, but as a DBA, you might have to get involved if the sharding is naive or has problems such as hotspots (many workloads do) that require re-balancing.
-
-In TiDB, re-sharding or re-balancing happens automatically in the background. The PD server observes when data regions (TiKV's term for chunks of data in key-value form) get too small, too big, or too frequently accessed.
-
-You can also explicitly configure PD to store regions on certain TiKV servers. This works really well when combined with MySQL partitioning.
-
-### Capacity planning
-
-**tl;dr:** Much easier
-
-Capacity planning on a MySQL database can be a little bit hard because you need to plan your physical infrastructure requirements two to three years from now. As data grows (and the working set changes), this can be a difficult task. I wouldn't say it completely goes away in the cloud either, since changing a master server's hardware is always hard.
-
-TiDB splits data into approximately 100MiB chunks that it distributes among TiKV servers. Because this increment is much smaller than a full server, it's much easier to move around and redistribute data. It's also possible to add new servers in smaller increments, which is easier on planning.
-
-### Scaling
-
-**tl;dr:** Much easier
-
-This is related to capacity planning and sharding. When we talk about scaling, many people think about very large _systems,_ but that is not exclusively how I think of the problem:
-
- * Scaling is being able to start with something very small, without having to make huge investments upfront on the chance it could become very large.
- * Scaling is also a people problem. If a system requires too much internal knowledge to operate, it can become hard to grow as an engineering organization. The barrier to entry for new hires can become very high.
-
-
-
-Thus, by providing automatic sharding, TiDB can scale much easier.
-
-### Schema changes (DDL)
-
-**tl;dr:** Mostly better
-
-The data definition language (DDL) supported in TiDB is all online, which means it doesn't block other reads or writes to the system. It also doesn't block the replication stream.
-
-That's the good news, but there are a couple of limitations to be aware of:
-
- * TiDB does not currently support all DDL operations, such as changing the primary key or some "change data type" operations.
- * TiDB does not currently allow you to chain multiple DDL changes in the same command, e.g., _ALTER TABLE t1 ADD INDEX (x), ADD INDEX (y)_. You will need to break these queries up into individual DDL queries.
-
-
-
-This is an area that we're looking to improve in [TiDB 3.0][7].
-
-### Creating one-off data dumps for the reporting team
-
-**tl;dr:** May not be required
-
-DBAs loathe manual tasks that create one-off exports of data to be consumed by another team, perhaps in an analytics tool or data warehouse.
-
-This is often required when the types of queries that are executed on the dataset are analytical. TiDB has hybrid transactional/analytical processing (HTAP) capabilities, so in many cases, these queries should work fine. If your analytics team is using Spark, you can also use the [TiSpark][8] connector to allow them to connect directly to TiKV.
-
-This is another area we are improving with [TiFlash][7], a column store accelerator. We are also working on a plugin system to support external authentication. This will make it easier to manage access by the reporting team.
-
-### Conclusion
-
-In this post, I looked at some common MySQL DBA tasks and how they translate to TiDB. If you would like to learn more, check out our [TiDB Academy course][9] designed for MySQL DBAs (it's free!).
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/how-save-time-tidb
-
-作者:[Morgan Tocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/morgo
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
-[2]: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb
-[3]: https://github.com/pingcap/tidb
-[4]: https://opensource.com/sites/default/files/uploads/tidb_architecture.png (TiDB architecture)
-[5]: https://pingcap.com/docs/architecture/
-[6]: https://raft.github.io/
-[7]: https://pingcap.com/blog/tidb-3.0-beta-stability-at-scale/
-[8]: https://github.com/pingcap/tispark
-[9]: https://pingcap.com/tidb-academy/
diff --git a/sources/talk/20190405 D as a C Replacement.md b/sources/talk/20190405 D as a C Replacement.md
new file mode 100644
index 0000000000..36b60bb278
--- /dev/null
+++ b/sources/talk/20190405 D as a C Replacement.md
@@ -0,0 +1,247 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (D as a C Replacement)
+[#]: via: (https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+D as a C Replacement
+======
+
+Sircmpwn (the main developer behind the [Sway Wayland compositor][1]) recently wrote a blog post about how he thinks [Rust is not a good C replacement][2]. I don’t know if he’d like the [D programming language][3] either, but it’s become a C replacement for me.
+
+### My C to D Story
+
+My story is like a lot of systems programmers’ stories. At one time, C was my go-to language for most programming. One day I realised that most of my C programs kept reimplementing things from C++: dynamic arrays, better strings, polymorphic classes, etc. So I tried using C++ instead, and at first I loved it. RAII and classes and generics made programming fun again. Even better was the promise that if I read all these books on C++, and learned to master things like template metaprogramming, I’d become an almighty god of systems programming and my code would be amazing. But learning more eventually had the opposite effect: (in hindsight) my code actually got worse, and I fell out of love. I remember reading Scott Meyers’ Effective C++ and realising it was really more about _ineffective_ C++ — and that most of my C++ code until then was broken. Let’s face it: C might be fiddly to use, but it has a kind of elegance, and “elegant” is rarely a word you hear when C++ is involved.
+
+Apparently, a lot of ex-C C++ programmers end up going back to C. In my case, I discovered D. It’s also not perfect, but I use it because it feels to me a lot more like the `C += 1` that C++ was meant to be. Here’s an example that’s very superficial, but I think is representative. Take this simple C program:
+
+```
+#include <stdio.h>
+
+int main()
+{
+ printf("1 + 1 = %d!\n", 1 + 1);
+ return 0;
+}
+```
+
+Here’s a version using the C++ standard library:
+
+```
+#include <iostream>
+
+int main()
+{
+ std::cout << "1 + 1 = " << 1 + 1 << "!" << std::endl;
+ return 0;
+}
+```
+
+Here’s an idiomatic D version:
+
+```
+import std.stdio;
+
+void main()
+{
+ writef("1 + 1 = %d!\n", 1 + 1);
+}
+```
+
+As I said, it’s a superficial example, but I think it shows a general difference in philosophy between C++ and D. (If I wanted to make the difference even clearer, I’d use an example that needed `iomanip` in C++.)
+
+Update: Unlike in C, [D’s format strings can work with custom types][4]. Stefan Rohe has also pointed out that [D supports compile-time checking of format strings][5] using its metaprogramming features — unlike C which does it through built-in compiler special casing that can’t be used with custom code.
+
+This [article about C++ member function pointers][6] happens to also be a good explanation of the origins of D. It’s a good read if you’re a programming language nerd like me, but here’s my TL;DR for everyone else: C++ member function pointers are supposed to feel like a low-level feature (like normal function pointers are), but the complexity and diversity of implementations means they’re really high level. The complexity of the implementations is because of the subtleties of the rules about what you can do with them. The author explains the implementations from several C++ compilers, including what’s “easily [his] favorite implementation”: the elegantly simple Digital Mars C++ implementation. (“Why doesn’t everyone else do it this way?”) The DMC compiler was written by Walter Bright, who invented D.
+
+D has classes and templates and other core features of C++, but designed by someone who has spent a heck of a lot of time thinking about the C++ spec and how things could be simpler. Walter once said that his experiences implementing C++ templates made him consider not including them in D at all, until he realised they didn’t need to be so complex.
+
+Here’s a quick tour of D from the point of view of incrementally improving C.
+
+### `-betterC`
+
+D compilers support a `-betterC` switch that disables [the D runtime][7] and all high-level features that depend on it. The example C code above can be translated directly into betterC:
+
+```
+import core.stdc.stdio;
+
+extern(C):
+
+int main()
+{
+ printf("1 + 1 = %d!\n", 1 + 1);
+ return 0;
+}
+```
+
+```
+$ dmd -betterC example.d
+$ ./example
+1 + 1 = 2!
+```
+
+The resulting binary looks a lot like the equivalent C binary. In fact, if you rewrote a C library in betterC, it could still link to code that had been compiled against the C version, and work without changes. Walter Bright wrote a good article walking through all [the changes needed to convert a real C program to betterC][8].
+
+You don’t actually need the `-betterC` switch just to write C-like code in D. It’s only needed in special cases where you simply can’t have the D runtime. But let me point out some of my favourite D features that still work with `-betterC`.
+
+#### `static assert()`
+
+This allows verifying some assumption at compile time.
+
+```
+static assert(kNumInducers < 16);
+```
+
+Systems code often makes assumptions about alignment or structure size or other things. With `static assert`, it’s possible to not only document these assumptions, but trigger a compilation error if someone breaks them by adding a struct member or something.
+
+#### Slices
+
+Typical C code is full of pointer/length pairs, and it’s a common bug for them to go out of sync. Slices are a simple and super-useful abstraction for a range of memory defined by a pointer and length. Instead of code like this:
+
+```
+buffer_p += offset;
+buffer_len -= offset; // Got to update both
+```
+
+You can use this much-less-bug-prone alternative:
+
+```
+buffer = buffer[offset..$];
+```
+
+A slice is nothing but a pointer/length pair with first-class syntactic support.
+
+Update: [Walter Bright has written more about the pointer/length pair problem in C][9].
+
+#### Compile Time Function Evaluation (CTFE)
+
+[Many functions can be evaluated at compile time.][10]
+
+```
+long factorial(int n) pure
+{
+ assert (n >= 0 && n <= 20);
+ long ret = 1;
+ foreach (j; 2..n+1) ret *= j;
+ return ret;
+}
+
+// Statically allocated array
+// Size is calculated at compile time
+Permutation[factorial(kNumThings)] permutation_table;
+```
+
+#### `scope` Guards
+
+Code in one part of a function is often coupled to cleanup code in a later part. Failing to match this code up correctly is another common source of bugs (especially when multiple control flow paths are involved). D’s scope guards make it simple to get this stuff right:
+
+```
+p = malloc(128);
+// free() will be called when the current scope exits
+scope (exit) free(p);
+// Put whatever if statements, or loops, or early returns you like here
+```
+
+You can even have multiple scope guards in a scope, or have nested scopes. The cleanup routines will be called when needed, in the right order.
+
+D also supports RAII using struct destructors.
+
+#### `const` and `immutable`
+
+It’s a popular myth that `const` in C and C++ is useful for compiler optimisations. Walter Bright has complained that every time he thought of a new `const`-based optimisation for C++, he eventually discovered it didn’t work in real code. So he made some changes to `const` semantics for D, and added `immutable`. You can read more in the [D `const` FAQ][11].
+
+#### `pure`
+
+Functional purity can be enforced. I’ve written about [some of the benefits of the `pure` keyword before][12].
+
+#### `@safe`
+
+SafeD is a subset of D that forbids risky language features like pointer typecasts and inline assembly. Code marked `@safe` is enforced by the compiler to not use these features, so that risky code can be limited to the small percentage of the application that needs it. You can [read more about SafeD in this article][13].
+
+#### Metaprogramming
+
+Like I hinted earlier, metaprogramming has got a bad reputation among some C++ programmers. But [D has the advantage of making metaprogramming less interesting][14], so D programmers tend to just do it when it’s useful, and not as a fun puzzle.
+
+D has great support for [compile-time reflection][15]. In most cases, compile-time reflection can solve the same problems as run-time reflection, but with compile-time guarantees. Compile-time reflection can also be used to implement run-time reflection where it’s truly needed.
+
+Need the names of an enumerated type as an array?
+
+```
+enum State
+{
+ stopped,
+ starting,
+ running,
+ stopping,
+}
+
+string[] state_names = [__traits(allMembers, State)];
+```
+
+Thanks to D’s metaprogramming, the standard library has many nice, type-safe tools, like this [compile-time checked bit flag enum][16].
+
+I’ve written more about [using metaprogramming in `-betterC` code][17].
+
+#### No Preprocessor
+
+Okay, this is a non-feature as a feature, but D has no equivalent to C’s preprocessor. All its sane use-cases are replaced with native language features, like [manifest constants][18] and [templates][19]. That includes proper [modules][20] support, which means D can break free of the limitations of that old `#include` hack.
+
+### Normal D
+
+C-like D code can be written and compiled as normal D code without the `-betterC` switch. The difference is that normal D code is linked to the D runtime, which supports higher-level features, the most obvious ones being garbage collection and object-oriented classes. Some people have confused the D runtime with something like the Java virtual machine, so I once wrote [an article explaining exactly what it is][7] (spoiler: it’s like the C and C++ runtimes, but with more features).
+
+Even with the runtime, compiled D is not much different from compiled C++. Sometimes I like to write throwaway code to, say, experiment with a new Linux system call or something. I used to think the best language for that is plain old C, but now I always use D.
+
+D doesn’t natively support `#include`ing C code, but for nice APIs that don’t have a lot of preprocessor craziness (like most of Linux) I usually just write [ad-hoc bindings][21]. Many popular C libraries have maintained D bindings, which can be found in the [Dub registry][22], or in [the Derelict project][23], or in the newer [BindBC project][24]. There are also tools for automated bindings, including the awesome [dpp tool][25] that brings `#include` support directly to D code.
+
+Update: This post has got a lot of attention from people who’ve never heard of D before. If you’re interested in learning D, I recommend
+
+ * [The DLang Tour][26] for a quick dive into the language
+ * [Ali Çehreli’s Programming in D book][27] if you prefer something in-depth
+ * [The D forum Learn group][28] or [IRC channel][29] to get answers to your questions
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/04/05/d_as_c_replacement.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://swaywm.org/
+[2]: https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html
+[3]: https://dlang.org
+[4]: https://wiki.dlang.org/Defining_custom_print_format_specifiers
+[5]: https://dlang.org/phobos/std_format.html#format
+[6]: https://www.codeproject.com/Articles/7150/Member-Function-Pointers-and-the-Fastest-Possible
+[7]: /2017/06/04/what_is_the_d_runtime.html
+[8]: https://dlang.org/blog/2018/06/11/dasbetterc-converting-make-c-to-d/
+[9]: https://www.digitalmars.com/articles/b44.html
+[10]: https://dlang.org/spec/function.html#interpretation
+[11]: https://dlang.org/articles/const-faq.html
+[12]: /2016/03/28/dirtying_pure_functions_can_be_useful.html
+[13]: https://dlang.org/blog/2016/09/28/how-to-write-trusted-code-in-d/
+[14]: https://epi.github.io/2017/03/18/less_fun.html
+[15]: https://dlang.org/spec/traits.html
+[16]: https://dlang.org/phobos/std_typecons.html#BitFlags
+[17]: /2018/08/13/inheritance_and_polymorphism_2.html
+[18]: https://dlang.org/spec/enum.html#manifest_constants
+[19]: https://tour.dlang.org/tour/en/basics/templates
+[20]: https://ddili.org/ders/d.en/modules.html
+[21]: https://wiki.dlang.org/Bind_D_to_C
+[22]: https://code.dlang.org/
+[23]: https://github.com/DerelictOrg
+[24]: https://github.com/BindBC
+[25]: https://github.com/atilaneves/dpp
+[26]: https://tour.dlang.org/
+[27]: https://ddili.org/ders/d.en/index.html
+[28]: https://forum.dlang.org/group/learn
+[29]: irc://irc.freenode.net/d
diff --git a/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md b/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md
deleted file mode 100644
index 5603086a53..0000000000
--- a/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md
+++ /dev/null
@@ -1,60 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud)
-[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Google partners with Intel, HPE and Lenovo for hybrid cloud
-======
-Google boosted its on-premises and cloud connections with Kubernetes and serverless computing.
-![Ilze Lucero \(CC0\)][1]
-
-Still struggling to get its Google Cloud business out of single-digit marketshare, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google’s Kubernetes container technology.
-
-At Google’s Next ’19 show this week, Intel and Google said they will collaborate on Google's Anthos, a new reference design based on the second-Generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack designed to deliver increased workload portability between public and private cloud environments.
-
-**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
-
-As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can’t be far behind.
-
-Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments – either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.
-
-### What is Google Anthos?
-
-Google formally introduced [Anthos][4] at this year’s show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.
-
-Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VM) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.
-
-Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.
-
-### Serverless environments
-
-Google isn’t stopping with Kubernetes containers; it’s also pushing ahead with serverless environments. [Cloud Run][5] is Google’s implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just aren’t using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.
-
-Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.
-
-Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
-[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
-[3]: https://www.networkworld.com/newsletters/signup.html
-[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world
-[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack
-[6]: https://www.facebook.com/NetworkWorld/
-[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md b/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md
deleted file mode 100644
index 76f908c68b..0000000000
--- a/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md
+++ /dev/null
@@ -1,60 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems)
-[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-HPE and Nutanix partner for hyperconverged private cloud systems
-======
-Both companies will sell HP ProLiant appliances with Nutanix software but to different markets.
-![Hewlett Packard Enterprise][1]
-
-Hewlett Packard Enterprise (HPE) has partnered with Nutanix to offer Nutanix’s hyperconverged infrastructure (HCI) software available as a managed private cloud service and on HPE-branded appliances.
-
-As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPE’s GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.
-
-If you want an appliance model where you buy the hardware outright, like in the traditional sense of server sales, you would get the same ProLiant through Nutanix.
-
-**[ Read also:[What is hybrid cloud computing?][2] and [Multicloud mania: what to know][3] ]**
-
-As it is, HPE GreenLake offers multiple cloud offerings to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanix’s free Acropolis hypervisor to its offerings.
-
-“Customers get to choose an alternative to VMware with this,” said Pradeep Kumar, senior vice president and general manager of HPE’s Pointnext consultancy. “They like the Acropolis license model, since it’s license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think it’s our job to offer them both and they can pick and choose.”
-
-Kumar added that the whole Nutanix stack costs 15 to 18% less with Acropolis than with a VMware-powered system, since customers save on the hypervisor.
-
-The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in customers’ data centers or co-location facility. The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.
-
-### HPE GreenLake's private cloud services promise to significantly reduce costs
-
-HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPE’s business in the next few years.
-
-GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from a 30% Capex savings due to the eliminated need for overprovisioning and a 90% reduction in support and professional services costs.
-
-By shifting to an IT as a Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.
-
-The two new offerings from the partnership – HPE GreenLake’s private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software – are expected to be available during the 2019 third quarter, the companies said.
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
-[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
-[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md b/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md
deleted file mode 100644
index b5d5c21ee6..0000000000
--- a/sources/talk/20190412 Gov-t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software)
-[#]: via: (https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software
-======
-VPN packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies
-![Getty Images][1]
-
-The Department of Homeland Security has issued a warning that some [VPN][2] packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies, allowing nefarious actors an opening to invade and take control of an end user’s system.
-
-The DHS’s Cybersecurity and Infrastructure Security Agency (CISA) [warning][3] comes on the heels of a notice from Carnegie Mellon's CERT that multiple VPN applications store the authentication and/or session cookies insecurely in memory and/or log files.
-
-**[Also see:[What to consider when deploying a next generation firewall][4]. Get regularly scheduled insights by [signing up for Network World newsletters][5]]**
-
-“If an attacker has persistent access to a VPN user's endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” [CERT wrote][6]. “An attacker would then have access to the same applications that the user does through their VPN session.”
-
-According to the CERT warning, the following products and versions store the cookie insecurely in log files:
-
- * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS ([CVE-2019-1573][7])
- * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
-
-
-
-The following products and versions store the cookie insecurely in memory:
-
- * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS.
- * Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
- * Cisco AnyConnect 4.7.x and prior.
-
-
-
-CERT says that Palo Alto Networks GlobalProtect version 4.1.1 [patches][8] this vulnerability.
-
-In the CERT warning, F5 stated it has been aware of the insecure memory storage since 2013, and it has not yet been patched. More information can be found [here][9]. F5 also stated it has been aware of the insecure log storage since 2017 and fixed it in versions 12.1.3 and 13.1.0 and onwards. More information can be found [here][10].
-
-**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11] ]**
-
-CERT said it is unaware of any patches at the time of publishing for Cisco AnyConnect and Pulse Secure Connect Secure.
-
-CERT credited the [National Defense ISAC Remote Access Working Group][12] for reporting the vulnerability.
-
-Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/10/broken-chain_metal_link_breach_security-100777433-large.jpg
-[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
-[3]: https://www.us-cert.gov/ncas/current-activity/2019/04/12/Vulnerability-Multiple-VPN-Applications
-[4]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
-[5]: https://www.networkworld.com/newsletters/signup.html
-[6]: https://www.kb.cert.org/vuls/id/192371/
-[7]: https://nvd.nist.gov/vuln/detail/CVE-2019-1573
-[8]: https://securityadvisories.paloaltonetworks.com/Home/Detail/146
-[9]: https://support.f5.com/csp/article/K14969
-[10]: https://support.f5.com/csp/article/K45432295
-[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
-[12]: https://ndisac.org/workinggroups/
-[13]: https://www.facebook.com/NetworkWorld/
-[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md b/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md
deleted file mode 100644
index db534e4457..0000000000
--- a/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md
+++ /dev/null
@@ -1,130 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco Talos details exceptionally dangerous DNS hijacking attack)
-[#]: via: (https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco Talos details exceptionally dangerous DNS hijacking attack
-======
-Cisco Talos says state-sponsored attackers are battering DNS to gain access to sensitive networks and systems
-![Peshkova / Getty][1]
-
-Security experts at Cisco Talos have released a [report detailing][2] what it calls the “first known case of a domain name registry organization that was compromised for cyber espionage operations.”
-
-Talos calls the ongoing cyber threat campaign “Sea Turtle” and said that state-sponsored attackers are abusing DNS to harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect, which displays unique knowledge of how to manipulate DNS.
-
-**More about DNS:**
-
- * [DNS in the cloud: Why and why not][3]
- * [DNS over HTTPS seeks to make internet use more private][4]
- * [How to protect your infrastructure from DNS cache poisoning][5]
- * [ICANN housecleaning revokes old DNS security key][6]
-
-
-
-By obtaining control of victims’ DNS, the attackers can change or falsify any data on the Internet and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reported.
-
-DNS, routinely known as the Internet’s phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email.
-
-### Threat to DNS could spread
-
-At this point Talos says Sea Turtle isn't compromising organizations in the U.S.
-
-“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated.
-
-Talos reports that the ongoing operation likely began as early as January 2017 and has continued through the first quarter of 2019. “Our investigation revealed that approximately 40 different organizations across 13 different countries were compromised during this campaign,” Talos stated. “We assess with high confidence that this activity is being carried out by an advanced, state-sponsored actor that seeks to obtain persistent access to sensitive networks and systems.”
-
-**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**
-
-Talos says the attackers directing the Sea Turtle campaign show signs of being highly sophisticated and have continued their attacks despite public reports of their activities. In most cases, threat actors typically stop or slow down their activities once their campaigns are publicly revealed, suggesting the Sea Turtle actors are unusually brazen and may be difficult to deter going forward, Talos stated.
-
-In January the Department of Homeland Security (DHS) [issued an alert][8] about this activity, warning that an attacker could redirect user traffic and obtain valid encryption certificates for an organization’s domain names.
-
-At that time the DHS’s [Cybersecurity and Infrastructure Security Agency][9] said in its [Emergency Directive][9] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”
-
-### DNS hijacking
-
-CISA said that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records. Then the attacker alters DNS records, like Address, Mail Exchanger, or Name Server records, replacing the legitimate address of the services with an address the attacker controls.
-
-To achieve their nefarious goals, Talos stated the Sea Turtle accomplices:
-
- * Use DNS hijacking through the use of actor-controlled name servers.
- * Are aggressive in their pursuit targeting DNS registries and a number of registrars, including those that manage country-code top-level domains (ccTLD).
-
-
- * Use Let’s Encrypt, Comodo, Sectigo, and self-signed certificates in their man-in-the-middle (MitM) servers to gain the initial round of credentials.
-
-
- * Steal victim organizations’ legitimate SSL certificates and use them on actor-controlled servers.
-
-
-
-Such actions also distinguish Sea Turtle from an earlier DNS exploit known as DNSpionage, which [Talos reported][10] on in November 2018.
-
-Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by [DNSpionage.][11]
-
-In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
-
-In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. These certificates provide X.509 certificates for [Transport Layer Security (TLS)][12] free of charge to the user, Talos said.
-
-The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization. Talos research further shows the following known exploits of Sea Turtle include:
-
- * CVE-2009-1151: PHP code injection vulnerability affecting phpMyAdmin
- * CVE-2014-6271: RCE affecting GNU Bash, specifically via SMTP (this was part of the Shellshock CVEs)
- * CVE-2017-3881: RCE by an unauthenticated user with elevated privileges on Cisco switches
- * CVE-2017-6736: Remote Code Exploit (RCE) for Cisco integrated Service Router 2811
- * CVE-2017-12617: RCE affecting Apache web servers running Tomcat
- * CVE-2018-0296: Directory traversal allowing unauthorized access to Cisco Adaptive Security Appliances (ASAs) and firewalls
- * CVE-2018-7600: RCE for websites built with Drupal, aka “Drupalgeddon”
-
-
-
-“As with any initial access involving a sophisticated actor, we believe this list of CVEs to be incomplete,” Talos stated. “The actor in question can leverage known vulnerabilities as they encounter a new threat surface. This list only represents the observed behavior of the actor, not their complete capabilities.”
-
-Talos says that the Sea Turtle campaign continues to be highly successful for several reasons. “First, the actors employ a unique approach to gain access to the targeted networks. Most traditional security products such as IDS and IPS systems are not designed to monitor and log DNS requests,” Talos stated. “The threat actors were able to achieve this level of success because the DNS domain space system added security into the equation as an afterthought. Had more ccTLDs implemented security features such as registrar locks, attackers would be unable to redirect the targeted domains.”
-
-Talos said the attackers also used previously undisclosed techniques such as certificate impersonation. “This technique was successful in part because the SSL certificates were created to provide confidentiality, not integrity. The attackers stole organizations’ SSL certificates associated with security appliances such as [Cisco's Adaptive Security Appliance] to obtain VPN credentials, allowing the actors to gain access to the targeted network, and have long-term persistent access,” Talos stated.
-
-### Cisco Talos DNS attack mitigation strategy
-
-To protect against Sea Turtle, Cisco recommends:
-
- * Use a registry lock service, which will require an out-of-band message before any changes can occur to an organization's DNS record.
- * If your registrar does not offer a registry-lock service, Talos recommends implementing multi-factor authentication, such as DUO, to access your organization's DNS records.
- * If you suspect you were targeted by this type of intrusion, Talos recommends instituting a network-wide password reset, preferably from a computer on a trusted network.
- * Apply patches, especially on internet-facing machines. Network administrators can monitor passive DNS records on their domains to check for abnormalities.
-
-
-
-Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/02/man-in-boat-surrounded-by-sharks_risk_fear_decision_attack_threat_by-peshkova-getty-100786972-large.jpg
-[2]: https://blog.talosintelligence.com/2019/04/seaturtle.html
-[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
-[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
-[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
-[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
-[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
-[8]: https://www.networkworld.com/article/3336201/batten-down-the-dns-hatches-as-attackers-strike-feds.html
-[9]: https://cyber.dhs.gov/ed/19-01/
-[10]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
-[11]: https://krebsonsecurity.com/tag/dnspionage/
-[12]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
-[13]: https://www.facebook.com/NetworkWorld/
-[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md b/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md
deleted file mode 100644
index 5abcb3bcba..0000000000
--- a/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes)
-[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes
-======
-Cisco says unpatched vulnerabilities could lead to DoS attacks, arbitrary code execution, and takeover of devices.
-![Woolzian / Getty Images][1]
-
-Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.
-
-The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.
-
-**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
-
-The problem has a Common Vulnerability Scoring System (CVSS) score of 9.8 out of 10.
-
-According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
-
- * The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members and instead accept and process such options over any Telnet connection to an affected device
- * The incorrect processing of malformed CMP-specific Telnet options.
-
-
-
-Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6. This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.
-
-The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found on the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information [go here][6].
-
-The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco’s ASR 9000 carrier-class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This CVSS also has a 9.8 rating.
-
-**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
-
-Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.
-
-Cisco has released [free software updates][6] that address the vulnerability described in this advisory.
-
-Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device. The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated.
-
-“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated.
-
-“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.”
-
-Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds.
-
-Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg
-[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp
-[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
-[4]: https://www.networkworld.com/newsletters/signup.html
-[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html
-[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
-[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr
-[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
-[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp
-[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html
-[11]: https://www.facebook.com/NetworkWorld/
-[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md b/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md
deleted file mode 100644
index 59978d555c..0000000000
--- a/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md
+++ /dev/null
@@ -1,58 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it)
-[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Fujitsu completes design of exascale supercomputer, promises to productize it
-======
-Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors.
-![Riken Advanced Institute for Computational Science][1]
-
-Fujitsu and Japanese research institute Riken announced that the design for the post-K supercomputer, to be launched in 2021, is complete, and that they will productize the design for sale later this year.
-
-The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010.
-
-**[ Also read:[10 of the world's fastest supercomputers][2] ]**
-
-It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM’s Sequoia, but even now, eight years later, it’s still among the top 20 supercomputers in the world.
-
-### What's in the Post-K supercomputer?
-
-The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, and has 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip.
-
-A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims the A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to roughly one petaflop per rack.
-
-Contrast that with Summit, the top supercomputer in the world built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops.
-
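A quick back-of-the-envelope check of the per-rack numbers quoted above. All the figures are the article's; only the arithmetic is added here:

```python
# Per-rack peak FP64 throughput, using the article's quoted figures.
A64FX_PEAK_TFLOPS = 2.7    # Fujitsu's claimed peak per chip
NODES_PER_RACK = 384       # one CPU per node, 384 nodes per rack
SUMMIT_RACK_TFLOPS = 864   # quoted peak for a Summit rack

post_k_rack_tflops = A64FX_PEAK_TFLOPS * NODES_PER_RACK

print(f"Post-K rack peak: {post_k_rack_tflops:.1f} TF (~{post_k_rack_tflops / 1000:.2f} PF)")
print(f"Summit rack peak: {SUMMIT_RACK_TFLOPS} TF")
print(f"Post-K/Summit per-rack ratio: {post_k_rack_tflops / SUMMIT_RACK_TFLOPS:.2f}x")
```

2.7 TF × 384 nodes is just over 1,036 TF, which is where the "one petaflop per rack" figure comes from, about 1.2x a Summit rack's quoted peak.
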
-Let me put it another way: IBM’s Power processor and Nvidia’s Tesla are about to get pwned by a derivative of the chip in your iPhone.
-
-**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
-
-Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors.
-
-Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first if only for bragging rights.
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg
-[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
-[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md b/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md
deleted file mode 100644
index 9685591b2c..0000000000
--- a/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md
+++ /dev/null
@@ -1,61 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Intel follows AMD’s lead (again) into single-socket Xeon servers)
-[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Intel follows AMD’s lead (again) into single-socket Xeon servers
-======
-Intel's new U series of processors are aimed at the low-end market where one processor is good enough.
-![Intel][1]
-
-I’m really starting to wonder who the leader in x86 really is these days because it seems Intel is borrowing another page out of AMD’s playbook.
-
-Intel launched a whole lot of new Xeon Scalable processors earlier this month, but it neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, saying only that it “didn’t broadly promote them.”
-
-**[ Read also:[Intel makes a play for high-speed fiber networking for data centers][2] ]**
-
-To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and Intel (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system.
-
-The new U series isn’t available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but they do show up if you search. Intel says it is looking into that. There are three processors for now: one with 24 cores and two with 20 cores.
-
-The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed, a 3.9GHz turbo clock, and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with only a single socket, that’s 165 fewer watts of power per server.
-
-Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260’s $4,702 list price. So, that will translate into much cheaper servers.
-
-The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz, and a 150-watt TDP. Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210U except that its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed.
-
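The prices, core counts, and TDPs quoted above, tabulated to make the single-socket discount explicit. The figures are the article's; the per-core arithmetic is the only addition, as a rough illustration:

```python
# Quoted list/suggested prices, core counts, and TDPs from the article.
chips = {
    "Xeon Platinum 8260": {"cores": 24, "price": 4702, "tdp_w": 165},
    "Xeon Gold 6212U":    {"cores": 24, "price": 2000, "tdp_w": 165},
    "Xeon Gold 6210U":    {"cores": 20, "price": 1500, "tdp_w": 150},
    "Xeon Gold 6209U":    {"cores": 20, "price": 1000, "tdp_w": 125},
}

for name, c in chips.items():
    per_core = c["price"] / c["cores"]
    print(f"{name}: ${c['price']:,} list, ${per_core:.0f}/core, {c['tdp_w']} W TDP")
```

At roughly $83 per core versus $196 for the Platinum 8260, the 6212U is less than half the price for the same core count, which is the basis for the "much cheaper servers" claim.
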
-**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**
-
-All of the processors support up to 1TB of DDR4-2933 memory and Intel’s Optane persistent memory.
-
-In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD’s lead.
-
-Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg
-[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html
-[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html
-[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html
-[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html
-[6]: https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html
-[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html
-[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
-[9]: https://www.facebook.com/NetworkWorld/
-[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190423 Edge computing is in most industries- future.md b/sources/talk/20190423 Edge computing is in most industries- future.md
deleted file mode 100644
index 3f5a6d4c00..0000000000
--- a/sources/talk/20190423 Edge computing is in most industries- future.md
+++ /dev/null
@@ -1,63 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (ninifly )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Edge computing is in most industries’ future)
-[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all)
-[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
-
-Edge computing is in most industries’ future
-======
-Nearly every industry can take advantage of edge computing in the journey to speed digital transformation efforts
-![iStock][1]
-
-The growth of edge computing is about to take a huge leap. Right now, companies are generating about 10% of their data outside a traditional data center or cloud. But within the next six years, that will increase to 75%, [according to Gartner][2].
-
-That’s largely down to the need to process data emanating from devices, such as Internet of Things (IoT) sensors. Early adopters include:
-
- * **Manufacturers:** Devices and sensors seem endemic to this industry, so it’s no surprise to see the need to find faster processing methods for the data produced. A recent [_Automation World_][3] survey found that 43% of manufacturers have deployed edge projects. Most popular use cases have included production/manufacturing data analysis and equipment data analytics.
-
- * **Retailers:** Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” [writes Dave Johnson][4], executive vice president of the IT division at Schneider Electric. He cites examples such as augmented-reality mirrors in fitting rooms that offer different clothing options without the consumer having to try on the items, and beacon-based heat maps that show in-store traffic.
-
-
-
- * **Healthcare organizations:** As healthcare costs continue to escalate, this industry is ripe for innovation that improves productivity and cost efficiencies. Management consulting firm [McKinsey & Co. has identified][5] at least 11 healthcare use cases that benefit patients, the facility, or both. Two examples: tracking mobile medical devices for nursing efficiency as well as optimization of equipment, and wearable devices that track user exercise and offer wellness advice.
-
-
-
-While these are strong use cases, as the edge computing market grows, so too will the number of industries adopting it.
-
-**Getting the edge on digital transformation**
-
-Faster processing at the edge fits perfectly into the objectives and goals of digital transformation — improving efficiencies, productivity, speed to market, and the customer experience. Here are just a few of the potential applications and industries that will be changed by edge computing:
-
-**Agriculture:** Farmers and organizations already use drones to transmit field and climate conditions to watering equipment. Other applications might include monitoring and location tracking of workers, livestock, and equipment to improve productivity, efficiencies, and costs.
-
-**Energy:** There are multiple potential applications in this sector that could benefit both consumers and providers. For example, smart meters help homeowners better manage energy use while reducing grid operators’ need for manual meter reading. Similarly, sensors on water pipes would detect leaks, while providing real-time consumption data.
-
-**Financial services:** Banks are adopting interactive ATMs that quickly process data to provide better customer experiences. At the organizational level, transactional data can be more quickly analyzed for fraudulent activity.
-
-**Logistics:** As consumers demand faster delivery of goods and services, logistics companies will need to transform mapping and routing capabilities to get real-time data, especially in terms of last-mile planning and tracking. That could involve street-, package-, and car-based sensors transmitting data for processing.
-
-All industries have the potential for transformation, thanks to edge computing. But it will depend on how they address their computing infrastructure. Discover how to overcome any IT obstacles at [APC.com][6].
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all
-
-作者:[Anne Taylor][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Anne-Taylor/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
-[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
-[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
-[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
-[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
-[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
diff --git a/sources/talk/20190424 Cisco- DNSpionage attack adds new tools, morphs tactics.md b/sources/talk/20190424 Cisco- DNSpionage attack adds new tools, morphs tactics.md
deleted file mode 100644
index e202384558..0000000000
--- a/sources/talk/20190424 Cisco- DNSpionage attack adds new tools, morphs tactics.md
+++ /dev/null
@@ -1,97 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco: DNSpionage attack adds new tools, morphs tactics)
-[#]: via: (https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco: DNSpionage attack adds new tools, morphs tactics
-======
-Cisco's Talos security group says DNSpionage tools have been upgraded to be more stealthy
-![Calvin Dexter / Getty Images][1]
-
-The group behind the Domain Name System attacks known as DNSpionage have upped their dark actions with new tools and malware to focus their attacks and better hide their activities.
-
-Cisco Talos security researchers, who discovered [DNSpionage][2] in November, this week warned of new exploits and capabilities of the nefarious campaign.
-
-**More about DNS:**
-
- * [DNS in the cloud: Why and why not][3]
- * [DNS over HTTPS seeks to make internet use more private][4]
- * [How to protect your infrastructure from DNS cache poisoning][5]
- * [ICANN housecleaning revokes old DNS security key][6]
-
-
-
-“The threat actor's ongoing development of DNSpionage malware shows that the attacker continues to find new ways to avoid detection. DNS tunneling is a popular method of exfiltration for some actors and recent examples of DNSpionage show that we must ensure DNS is monitored as closely as an organization's normal proxy or weblogs,” [Talos wrote][7]. “DNS is essentially the phonebook of the internet, and when it is tampered with, it becomes difficult for anyone to discern whether what they are seeing online is legitimate.”
-
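Talos's point about monitoring DNS as closely as proxy logs can be made concrete with a toy heuristic. Talos does not publish its detection logic, so the function names, thresholds, and example domain below are purely illustrative assumptions, not the actual DNSpionage indicators:

```python
import math

def shannon_entropy(s: str) -> float:
    """Shannon entropy (bits per character) of a string; high values suggest encoded data."""
    if not s:
        return 0.0
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((v / n) * math.log2(v / n) for v in counts.values())

def looks_like_tunnel(qname: str, max_label: int = 40, entropy_cutoff: float = 3.8) -> bool:
    """Flag a DNS query whose longest label is unusually long or random-looking.

    Thresholds are illustrative guesses, not published indicators.
    """
    longest = max(qname.rstrip(".").split("."), key=len)
    return len(longest) > max_label or shannon_entropy(longest) > entropy_cutoff

# Hypothetical query names: one ordinary, one carrying base64-like payload data.
queries = [
    "www.example.com",
    "aGVsbG8gd29ybGQgdGhpcyBpcyBleGZpbHRyYXRlZCBkYXRh0.baddomain.example",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunnel(q) else "ok")
```

Real DNS-tunneling detection (query volume per client, label entropy over time, record-type mix) is far more involved, but even a crude filter like this shows why DNS logs deserve the same scrutiny as web proxy logs.
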
-In Talos’ initial report, researchers said a DNSpionage campaign targeted various businesses in the Middle East as well as United Arab Emirates government domains. It also utilized two malicious websites containing job postings that were used to compromise targets via crafted Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
-
-In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated “Let's Encrypt” certificates for the redirected domains. These certificates provide X.509 certificates for [Transport Layer Security (TLS)][8] free of charge to the user, Talos said.
-
-This week Cisco said DNSpionage actors have created a new remote administrative tool that supports HTTP and DNS communication with the attackers' command and control (C2).
-
-“In our previous post concerning DNSpionage, we showed that the malware author used malicious macros embedded in a Microsoft Word document. In the new sample from Lebanon identified at the end of February, the attacker used an Excel document with a similar macro.”
-
-**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**
-
-Talos wrote: “The malware supports HTTP and DNS communication to the C2 server. The HTTP communication is hidden in the comments in the HTML code. This time, however, the C2 server mimics the GitHub platform instead of Wikipedia. While the DNS communication follows the same method we described in our previous article, the developer added some new features in this latest version and, this time, the actor removed the debug mode.”
-
-Talos added that the domain used for the C2 campaign is “bizarre.”
-
-“The previous version of DNSpionage attempted to use legitimate-looking domains in an attempt to remain undetected. However, this newer version uses the domain ‘coldfart[.]com,’ which would be easier to spot than other APT campaigns which generally try to blend in with traffic more suitable to enterprise environments. The domain was also hosted in the U.S., which is unusual for any espionage-style attack.”
-
-Talos researchers said they discovered that DNSpionage added a reconnaissance phase that ensures the payload is dropped on specific targets rather than indiscriminately downloaded to every machine.
-
-This level of attack also returns information about the workstation environment, including platform-specific information, the name of the domain and the local computer, and information concerning the operating system, Talos wrote. This information is key to helping the malware select only its intended victims and avoid researchers or sandboxes. Again, it shows the actor's improved abilities, as they now fingerprint the victim.
-
-This new tactic indicates an improved level of sophistication and is likely in response to the significant amount of public interest in the campaign.
-
-Talos noted that there have been several other public reports of DNSpionage attacks, and in January, the U.S. Department of Homeland Security issued an [alert][10] warning users about this threat activity.
-
-“In addition to increased reports of threat activity, we have also discovered new evidence that the threat actors behind the DNSpionage campaign continue to change their tactics, likely in an attempt to improve the efficacy of their operations,” Talos stated.
-
-In April, Cisco Talos identified a previously undocumented malware developed in .NET. In the analyzed samples, the malware author left two different internal names in plain text: "DropperBackdoor" and "Karkoff."
-
-“The malware is lightweight compared to other malware due to its small size and allows remote code execution from the C2 server. There is no obfuscation and the code can be easily disassembled,” Talos wrote.
-
-The Karkoff malware searches for two specific antivirus platforms, Avira and Avast, and will work around them.
-
-“The discovery of Karkoff also shows the actor is pivoting and is increasingly attempting to avoid detection while remaining very focused on the Middle Eastern region,” Talos wrote.
-
-Talos distinguished DNSpionage from another DNS attack method, “[Sea Turtle][11],” which it detailed this month. Sea Turtle involves state-sponsored attackers that are abusing DNS to target organizations and harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect. This displays unique knowledge about how to manipulate DNS, Talos stated.
-
-By obtaining control of victims’ DNS, attackers can change or falsify any data victims receive from the Internet and illicitly modify DNS name records to point users to actor-controlled servers, and users visiting those sites would never know, Talos reported.
-
-“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated about Sea Turtle.
-
-Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/02/cyber_attack_threat_danger_breach_hack_security_by_calvindexter_gettyimages-860363294_2400x800-100788395-large.jpg
-[2]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
-[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
-[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
-[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
-[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
-[7]: https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-karkoff.html
-[8]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
-[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
-[10]: https://www.us-cert.gov/ncas/alerts/AA19-024A
-[11]: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html
-[12]: https://www.facebook.com/NetworkWorld/
-[13]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md b/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md
deleted file mode 100644
index 90f4ebf5f1..0000000000
--- a/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md
+++ /dev/null
@@ -1,69 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (IoT roundup: VMware, Nokia beef up their IoT)
-[#]: via: (https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all)
-[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
-
-IoT roundup: VMware, Nokia beef up their IoT
-======
-Everyone wants in on the ground floor of the internet of things, and companies including Nokia, VMware and Silicon Labs are sharpening their offerings in anticipation of further growth.
-![Getty Images][1]
-
-When attempting to understand the world of IoT, it’s easy to get sidetracked by all the fascinating use cases: Automated oil and gas platforms! Connected pet feeders! Internet-enabled toilets! (Is “the Internet of Toilets” a thing yet?) But the most important IoT trend to follow may be the way that major tech vendors are vying to make large portions of the market their own.
-
-VMware’s play for a significant chunk of the IoT market is called Pulse IoT Center, and the company released version 2.0 of it this week. It follows the pattern set by other big companies getting into IoT: Leveraging their existing technological strengths and applying them to the messier, more heterodox networking environment that IoT represents.
-
-Unsurprisingly, given that it’s VMware we’re talking about, there’s now a SaaS option, and the company was also eager to talk up that Pulse IoT Center 2.0 has simplified device-onboarding and centralized management features.
-
-**More about edge networking**
-
- * [How edge networking and IoT will reshape data centers][2]
- * [Edge computing best practices][3]
- * [How edge computing can help secure the IoT][4]
-
-
-
-That might sound familiar, and for good reason – companies with any kind of a background in network management, from HPE/Aruba to Amazon, have been pushing to promote their system as the best framework for managing a complicated and often decentralized web of IoT devices from a single platform. By rolling features like software updates, onboarding and security into a single-pane-of-glass management console, those companies are hoping to be the organizational base for customers trying to implement IoT.
-
-Whether they’re successful or not remains to be seen. While major IT companies have been trying to capture market share by competing across multiple verticals, the operational orientation of the IoT also means that non-traditional tech vendors with expertise in particular fields (particularly industrial and automotive) are suddenly major competitors.
-
-**Nokia spreads the IoT network wide**
-
-As a giant carrier-equipment vendor, Nokia is an important company in the overall IoT landscape. While some types of enterprise-focused IoT are heavily localized, like connected factory floors or centrally managed office buildings, others are so geographically disparate that carrier networks are the only connectivity medium that makes sense.
-
-The Finnish company earlier this month broadened its footprint in the IoT space, announcing that it had partnered with Nordic Telecom to create a wide-area network focused on enabling IoT and emergency services. The network, which Nokia is billing as the first mission-critical communications network, operates using LTE technology in the 410-430MHz band – a relatively low frequency, which allows for better propagation and a wide effective range.
-
-The idea is to provide a high-throughput, low-latency network option to any user on the network, whether it’s an emergency services provider needing high-speed video communication or an energy or industrial company with a low-delay-tolerance application.
-
-**Silicon Labs packs more onto IoT chips**
-
-The heart of any IoT implementation remains the SoCs that make devices intelligent in the first place, and Silicon Labs announced that it's building more muscle into its IoT-focused product lineup.
-
-The Austin-based chipmaker said that version 2 of its Wireless Gecko platform will pack more than double the wireless connectivity range of previous entries, which could seriously ease design requirements for companies planning out IoT deployments. The chipsets support Zigbee, Thread and Bluetooth mesh networking, and are designed for line-powered IoT devices, using Arm Cortex-M33 processors for relatively strong computing capacity and high energy efficiency.
-
-Chipset advances aren’t the type of thing that will pay off immediately in terms of making IoT devices more capable, but improvements like these make designing IoT endpoints for particular uses that much easier, and new SoCs will begin to filter into line-of-business equipment over time.
-
-Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all
-
-作者:[Jon Gold][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Jon-Gold/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home5-100768494-large.jpg
-[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
-[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
-[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
-[5]: https://www.facebook.com/NetworkWorld/
-[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md b/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md
deleted file mode 100644
index 8d3ad041db..0000000000
--- a/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md
+++ /dev/null
@@ -1,52 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Dell EMC and Cisco renew converged infrastructure alliance)
-[#]: via: (https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Dell EMC and Cisco renew converged infrastructure alliance
-======
-Dell EMC and Cisco renewed their agreement to collaborate on converged infrastructure (CI) products for a few more years even though the momentum is elsewhere.
-![Dell EMC][1]
-
-Dell EMC and Cisco have renewed a collaboration on converged infrastructure (CI) products that has run for more than a decade, even as the momentum shifts elsewhere. The news was announced via a [blog post][2] by Pete Manca, senior vice president for solutions engineering at Dell EMC.
-
-The deal is centered around Dell EMC’s VxBlock product line, which originally started out in 2009 as a joint venture between EMC and Cisco called VCE (Virtual Computing Environment). EMC bought out Cisco’s stake in the venture before Dell bought EMC.
-
-The devices offered UCS servers and networking from Cisco, EMC storage, and VMware virtualization software in pre-configured, integrated bundles. VCE was retired in favor of new brands, VxBlock, VxRail, and VxRack. The lineup has been pared down to one device, the VxBlock 1000.
-
-**[ Read also:[How to plan a software-defined data-center network][3] ]**
-
-“The newly inked agreement entails continued company alignment across multiple organizations: executive, product development, marketing, and sales,” Manca wrote in the blog post. “This means we’ll continue to share product roadmaps and collaborate on strategic development activities, with Cisco investing in a range of Dell EMC sales, marketing and training initiatives to support VxBlock 1000.”
-
-Dell EMC cites IDC research that it holds a 48% market share in converged systems, nearly 1.5 times that of any other vendor. But IDC's April edition of the Converged Systems Tracker said the CI category is on the downswing. CI sales fell 6.4% year over year, while the market for hyperconverged infrastructure (HCI) grew 57.2% year over year.
-
-For the unfamiliar, the primary difference between converged and hyperconverged infrastructure is that CI relies on hardware building blocks, while HCI is software-defined and considered more flexible and scalable than CI and operates more like a cloud system with resources spun up and down as needed.
-
-Despite this, Dell is committed to CI systems. Just last month it announced an update and expansion of the VxBlock 1000, including higher scalability, a broader choice of components, and the option to add new technologies. It featured updated VMware vRealize and vSphere support, the option to consolidate high-value, mission-critical workloads with new storage and data protection options and support for Cisco UCS fabric and servers.
-
-For customers who prefer to build their own infrastructure solutions, Dell EMC introduced Ready Stack, a portfolio of validated designs with sizing, design, and deployment resources that offer VMware-based IaaS, vSphere on Dell EMC PowerEdge servers and Dell EMC Unity storage, and Microsoft Hyper-V on Dell EMC PowerEdge servers and Dell EMC Unity storage.
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
-[2]: https://blog.dellemc.com/en-us/dell-technologies-cisco-reaffirm-joint-commitment-converged-infrastructure/
-[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190426 Profiling D-s Garbage Collection with Bpftrace.md b/sources/talk/20190426 Profiling D-s Garbage Collection with Bpftrace.md
new file mode 100644
index 0000000000..ce0a408ae6
--- /dev/null
+++ b/sources/talk/20190426 Profiling D-s Garbage Collection with Bpftrace.md
@@ -0,0 +1,412 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Profiling D's Garbage Collection with Bpftrace)
+[#]: via: (https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Profiling D's Garbage Collection with Bpftrace
+======
+
+Recently I’ve been playing around with using [`bpftrace`][1] to trace and profile D’s garbage collector. Here are some examples of the cool stuff that’s possible.
+
+### What is `bpftrace`?
+
+It’s a high-level debugging tool based on Linux’s eBPF. “eBPF” stands for “extended Berkeley packet filter”, but that’s just a historical name and doesn’t mean much today. It’s really a virtual machine (like the [JVM][2]) that sits inside the Linux kernel and runs code in a special eBPF instruction set similar to normal machine code. Users are expected to write short programs in high-level languages (including C and others) that get compiled to eBPF and loaded into the kernel on the fly to do interesting things.
+
+As you might guess, eBPF is powerful for instrumenting a running kernel, but it also supports instrumenting user-space programs.
+
+### What you need
+
+First you need a Linux kernel. Sorry BSD, macOS and Windows users. (But some of you can use [DTrace][3].)
+
+Also, not just any Linux kernel will work. This stuff is relatively new, so you’ll need a modern kernel with BPF-related features enabled. You might need to use the newest (or even testing) version of a distro. Here’s how to check if your kernel meets the requirements:
+
+```
+$ uname -r
+4.19.27-gentoo-r1sub
+$ # 4.9+ recommended by bpftrace
+$ zgrep CONFIG_UPROBES /proc/config.gz
+CONFIG_UPROBES=y
+$ # Also need
+$ # CONFIG_BPF=y
+$ # CONFIG_BPF_SYSCALL=y
+$ # CONFIG_BPF_JIT=y
+$ # CONFIG_HAVE_EBPF_JIT=y
+$ # CONFIG_BPF_EVENTS=y
+```
+
+Of course, [you also need to install the `bpftrace` tool itself][4].
+
+### `bpftrace` D “Hello World”
+
+Here’s a quick test you can do to make sure you’ve got everything working. First, let’s make a Hello World D binary:
+
+```
+$ pwd
+/tmp/
+$ cat hello.d
+import std.stdio;
+
+void main()
+{
+ writeln("Hello World");
+}
+$ dmd hello.d
+$ ./hello
+Hello World
+$
+```
+
+Now let’s `bpftrace` it. `bpftrace` uses a high-level language that’s obviously inspired by AWK. I’ll explain enough to understand the post, but you can also check out the [`bpftrace` reference guide][5] and [one-liner tutorial][6]. The minimum you need to know is that a bpftrace program is a list of `event:name /filter predicate/ { program(); code(); }` blocks that define code snippets to be run on events.
+
+This time I’m only using Linux uprobes, which trigger on functions in user-space programs. The syntax is `uprobe:/path/to/binary:functionName`. One gotcha is that D “[mangles][7]” (encodes) function names before inserting them into the binary. If we want to trigger on the D code’s `main()` function, we need to use the mangled name: `_Dmain`. (By the way, `nm program | grep ' _D.*functionName'` is one quick trick for finding mangled names.)
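+
+For instance, running that trick on the Hello World binary from above looks something like this (the address will differ on your machine; here the pattern is narrowed to the exact symbol to avoid matching druntime internals):
+
+```
+$ nm hello | grep ' _Dmain$'
+0000000000045b30 T _Dmain
+```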
+
+Run this `bpftrace` invocation in a terminal as root user:
+
+```
+# bpftrace -e 'uprobe:/tmp/hello:_Dmain { printf("D Hello World run with process ID %d\n", pid); }'
+Attaching 1 probe...
+```
+
+While this is running, it’ll print a message every time the D Hello World program is executed by any user in any terminal. Press `Ctrl+C` to quit.
+
+All `bpftrace` code can be run directly from the command line like in the example above. But to make things easier to read from now on, I’ll make neatly formatted scripts.
+
+### Tracing some real code
+
+I’m using [D-Scanner][8], the D code analyser, as an example of a simple but non-trivial D workload. One nice thing about `bpftrace` and uprobes is that no modification of the program is needed. I’m just using a normal build of the `dscanner` tool, and using the [D runtime source code][9] as a codebase to analyse.
+
+Before using `bpftrace`, let’s try using [the profiling that’s built into the D GC implementation itself][10]:
+
+```
+$ dscanner --DRT-gcopt=profile:1 --etags
+...
+ Number of collections: 85
+ Total GC prep time: 0 milliseconds
+ Total mark time: 17 milliseconds
+ Total sweep time: 6 milliseconds
+ Total page recovery time: 3 milliseconds
+ Max Pause Time: 1 milliseconds
+ Grand total GC time: 28 milliseconds
+GC summary: 35 MB, 85 GC 28 ms, Pauses 17 ms < 1 ms
+```
+
+(If you can make a custom build, you can also use [the D runtime GC API to get stats][11].)
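+
+As a minimal D sketch of what querying those stats directly could look like (this assumes a druntime recent enough to expose `GC.stats` in `core.memory`):
+
+```
+import core.memory;
+import std.stdio;
+
+void main()
+{
+    auto s = GC.stats();  // GC.Stats: usedSize and freeSize, in bytes
+    writeln("used: ", s.usedSize, " free: ", s.freeSize);
+}
+```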
+
+There’s one more gotcha when using `bpftrace` on `dscanner` to trace GC functions: the binary file we specify for the uprobe needs to be the binary file that actually contains the GC functions. That could be the D binary itself, or it could be a shared D runtime library. Try running `ldd /path/to/d_program` to list any linked shared libraries, and if the output contains `druntime`, use that full path when specifying uprobes. My `dscanner` binary doesn’t link to a shared D runtime, so I just use the full path to `dscanner`. (Running `which dscanner` gives `/usr/local/bin/dscanner` for me.)
+
+Anyway, all the GC functions live in a `gc` module, so their mangled names start with `_D2gc`. Here’s a `bpftrace` invocation that tallies GC function calls. For convenience, it also includes a uretprobe to automatically exit when `main()` returns. The output is sorted to make it a little easier to read.
+
+```
+# cat dumpgcfuncs.bt
+uprobe:/usr/local/bin/dscanner:_D2gc*
+{
+ @[probe] = count();
+}
+
+uretprobe:/usr/local/bin/dscanner:_Dmain
+{
+ exit();
+}
+# bpftrace dumpgcfuncs.bt | sort
+
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC10freeNoSyncMFNbNiPvZv]: 31
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC10initializeFKCQCd11gcinterface2GCZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC11queryNoSyncMFNbPvZS4core6memory8BlkInfo_]: 44041
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC11removeRangeMFNbNiPvZv]: 2
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC12extendNoSyncMFNbPvmmxC8TypeInfoZm]: 251946
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC14collectNoStackMFNbZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC18fullCollectNoStackMFNbZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC4freeMFNbNiPvZv]: 31
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC5queryMFNbPvZS4core6memory8BlkInfo_]: 47704
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6__ctorMFZCQBzQBzQBxQCiQBn]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6callocMFNbmkxC8TypeInfoZPv]: 80
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6extendMFNbPvmmxC8TypeInfoZm]: 251946
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6mallocMFNbmkxC8TypeInfoZPv]: 12423
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_]: 948995
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC7getAttrMFNbPvZ2goFNbPSQClQClQCjQCu3GcxQBbZk]: 5615
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC7getAttrMFNbPvZk]: 5615
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC8addRangeMFNbNiPvmxC8TypeInfoZv]: 2
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs10freeNoSyncMFNbNiPvZvS_DQDsQDsQDqQEb8freeTimelS_DQErQErQEpQFa8numFreeslTQCdZQEbMFNbNiKQCrZv]: 31
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs11queryNoSyncMFNbPvZS4core6memory8BlkInfo_S_DQEmQEmQEkQEv9otherTimelS_DQFmQFmQFkQFv9numOtherslTQDaZQExMFNbKQDmZQDn]: 44041
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs12extendNoSyncMFNbPvmmxC8TypeInfoZmS_DQEfQEfQEdQEo10extendTimelS_DQFhQFhQFfQFq10numExtendslTQCwTmTmTxQDaZQFdMFNbKQDrKmKmKxQDvZm]: 251946
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs12mallocNoSyncMFNbmkKmxC8TypeInfoZPvS_DQEgQEgQEeQEp10mallocTimelS_DQFiQFiQFgQFr10numMallocslTmTkTmTxQCzZQFcMFNbKmKkKmKxQDsZQDl]: 961498
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs18fullCollectNoStackMFNbZ2goFNbPSQEaQEaQDyQEj3GcxZmTQvZQDfMFNbKQBgZm]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs7getAttrMFNbPvZ2goFNbPSQDqQDqQDoQDz3GcxQBbZkS_DQEoQEoQEmQEx9otherTimelS_DQFoQFoQFmQFx9numOtherslTQCyTQDlZQFdMFNbKQDoKQEbZk]: 5615
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool10allocPagesMFNbmZm]: 5597
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool13updateOffsetsMFNbmZv]: 10745
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15LargeObjectPool7getInfoMFNbPvZS4core6memory8BlkInfo_]: 3844
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15SmallObjectPool7getInfoMFNbPvZS4core6memory8BlkInfo_]: 40197
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw15SmallObjectPool9allocPageMFNbhZPSQChQChQCfQCq4List]: 15022
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx10smallAllocMFNbhKmkZ8tryAllocMFNbZb]: 955967
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx10smallAllocMFNbhKmkZPv]: 955912
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11ToScanStack4growMFNbZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm]: 85
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11removeRangeMFNbNiPvZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx23updateCollectThresholdsMFNbZv]: 84
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx4markMFNbNlPvQcZv]: 253
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx5sweepMFNbZm]: 84
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7markAllMFNbbZ14__foreachbody3MFNbKSQCm11gcinterface5RangeZi]: 85
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7markAllMFNbbZv]: 85
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7newPoolMFNbmbZPSQBtQBtQBrQCc4Pool]: 6
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx7recoverMFNbZm]: 84
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8addRangeMFNbNiPvQcxC8TypeInfoZv]: 2
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZ15tryAllocNewPoolMFNbZb]: 5
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZ8tryAllocMFNbZb]: 5616
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZPv]: 5586
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx8isMarkedMFNbNlPvZi]: 635
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx9allocPageMFNbhZPSQBuQBuQBsQCd4List]: 15024
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw4Pool10initializeMFNbmbZv]: 6
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw4Pool12freePageBitsMFNbmKxG4mZv]: 16439
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC4termMFZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl5protoQo7ProtoGC8addRangeMFNbNiPvmxC8TypeInfoZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc4impl6manualQp8ManualGC10initializeFKCQBp11gcinterface2GCZv]: 1
+@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr6insertMFNbNiPQBxZb]: 6
+@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr8findPoolMFNaNbNiPvZPQCe]: 302268
+@[uprobe:/usr/local/bin/dscanner:_D2gc9pooltable__T9PoolTableTSQBc4impl12conservativeQBy4PoolZQBr8minimizeMFNaNbNjZAPQCd]: 30
+Attaching 231 probes...
+```
+
+All these functions are in [`src/gc/`][12], and most of the interesting ones here are in [`src/gc/impl/conservative/`][13]. There are 85 calls to `_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm`, which [`ddemangle`][14] translates to `nothrow ulong gc.impl.conservative.gc.Gcx.fullcollect(bool)`. That matches up with the report from `--DRT-gcopt=profile:1`.
+
+The heart of the `bpftrace` program is `@[probe] = count();`. `@` prefixes a global variable, in this case a variable with an empty name (allowed by `bpftrace`). We’re using the variable as a map (like an associative array in D), and indexing it with `probe`, a built-in variable containing the name of the uprobe that was triggered. The tally is kept using the magic `count()` function.
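+
+In D terms, the script is building up something like an associative array of counters keyed by probe name. A rough (purely illustrative) analogy:
+
+```
+ulong[string] tally;              // plays the role of bpftrace’s @ map
+
+void onProbeFired(string probe)
+{
+    tally[probe]++;               // plays the role of @[probe] = count();
+}
+```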
+
+### Garbage collection timings
+
+How about something more interesting, like generating a profile of collection timings? This time, to get more data, I won’t make `bpftrace` exit as soon as the `dscanner` exits. I’ll keep it running and run `dscanner` 100 times before quitting `bpftrace` with `Ctrl+C`:
+
+```
+# cat gcprofile.bt
+uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm
+{
+ @t = nsecs;
+}
+
+uretprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm / @t /
+{
+ @gc_times = hist(nsecs - @t);
+}
+# bpftrace gcprofile.bt
+Attaching 2 probes...
+^C
+
+@gc_times:
+[64K, 128K) 138 |@ |
+[128K, 256K) 1367 |@@@@@@@@@@ |
+[256K, 512K) 6687 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
+[512K, 1M) 7 | |
+[1M, 2M) 301 |@@ |
+```
+
+Et voilà! A log-scale histogram of the `nsecs` timestamp difference between entering and exiting `fullcollect()`. The times are in nanoseconds, so we see that most collections are taking less than half a millisecond, but we have tail cases that take 1-2ms.
+
+### Function arguments
+
+`bpftrace` provides `arg0`, `arg1`, `arg2`, etc. built-in variables for accessing the arguments to a traced function. There are a couple of complications with using them with D code, however.
+
+The first is that (at the binary level) `dmd` makes `extern(D)` functions (i.e., normal D functions) take arguments in the reverse order of `extern(C)` functions, which is the order `bpftrace` expects. Suppose you have a simple three-argument function. If it’s using the C calling convention, `bpftrace` will recognise the first argument as `arg0`. If it’s using the D calling convention, however, it’ll be picked up as `arg2`.
+
+```
+extern(C) void cfunc(int arg0, int arg1, int arg2)
+{
+ // ...
+}
+
+// (extern(D) is the default)
+extern(D) void dfunc(int arg2, int arg1, int arg0)
+{
+ // ...
+}
+```
+
+If you look at [the D ABI spec][15], you’ll notice that (just like in C++) there can be a couple of hidden arguments if the function is more complex. If `dfunc` above returned a large struct, there can be an extra hidden argument for implementing [copy elision][16], which means the first argument would actually be `arg3`, and `arg0` would be the hidden argument. If `dfunc` were also a member function, it would have a hidden `this` argument, which would bump up the first argument to `arg4`.
+
+To get the hang of this, you might need to experiment with tracing function calls with known arguments.
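+
+For example, if the `dfunc` above were compiled into a module named `hello`, something along these lines would print the raw `argN` values so you can see which parameter each one maps to (the mangled name here is illustrative; look up the real one with `nm`):
+
+```
+# bpftrace -e 'uprobe:/tmp/hello:_D5hello5dfuncFiiiZv { printf("%d %d %d\n", arg0, arg1, arg2); }'
+```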
+
+### Allocation sizes
+
+Let’s get a histogram of the memory allocation request sizes. Looking at the list of GC functions traced earlier, and comparing it with the GC source code, it looks like we need to trace these functions and grab the `size` argument:
+
+```
+class ConservativeGC : GC
+{
+ // ...
+ void *malloc(size_t size, uint bits, const TypeInfo ti) nothrow;
+ void *calloc(size_t size, uint bits, const TypeInfo ti) nothrow;
+ BlkInfo qalloc( size_t size, uint bits, const TypeInfo ti) nothrow;
+ // ...
+}
+```
+
+As class member functions, they have a hidden `this` argument as well. The last one, `qalloc()`, returns a struct, so it also has a hidden argument for copy elision. So `size` is `arg3` for the first two functions, and `arg4` for `qalloc()`. Time to run a trace:
+
+```
+# cat allocsizeprofile.bt
+uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6mallocMFNbmkxC8TypeInfoZPv,
+uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6callocMFNbmkxC8TypeInfoZPv
+{
+ @ = hist(arg3);
+}
+
+uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_
+{
+ @ = hist(arg4);
+}
+
+uretprobe:/usr/local/bin/dscanner:_Dmain
+{
+ exit();
+}
+# bpftrace allocsizeprofile.bt
+Attaching 4 probes...
+@:
+[2, 4) 2489 | |
+[4, 8) 9324 |@ |
+[8, 16) 46527 |@@@@@ |
+[16, 32) 206324 |@@@@@@@@@@@@@@@@@@@@@@@ |
+[32, 64) 448020 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
+[64, 128) 147053 |@@@@@@@@@@@@@@@@@ |
+[128, 256) 88072 |@@@@@@@@@@ |
+[256, 512) 2519 | |
+[512, 1K) 1830 | |
+[1K, 2K) 3749 | |
+[2K, 4K) 1668 | |
+[4K, 8K) 256 | |
+[8K, 16K) 2533 | |
+[16K, 32K) 312 | |
+[32K, 64K) 239 | |
+[64K, 128K) 209 | |
+[128K, 256K) 164 | |
+[256K, 512K) 124 | |
+[512K, 1M) 48 | |
+[1M, 2M) 30 | |
+[2M, 4M) 7 | |
+[4M, 8M) 1 | |
+[8M, 16M) 2 | |
+```
+
+So, we have a lot of small allocations, with a very long tail of larger allocations. Remember, size is on a log scale, so that long tail represents a very skewed distribution.
+
+### Small allocation hotspots
+
+Now for something more complex. Suppose we’re profiling our code and looking for low-hanging fruit for reducing the number of memory allocations. Code that makes a lot of small allocations tends to be a good candidate for this kind of refactoring. `bpftrace` lets us grab stack traces, which can be used to see what part of the main program caused an allocation.
+
+As of this writing, there’s one little complication because of a limitation of `bpftrace`’s stack trace handling: it can only show meaningful function symbol names (as opposed to raw memory addresses) if `bpftrace` quits while the target program is still running. There’s [an open bug report for improving this behaviour][17], but in the meantime I just made sure `dscanner` took a long time, and that I shut down `bpftrace` first.
+
+Here’s how to grab the top three stack traces that lead to small (<16B) memory allocations with `qalloc()`:
+
+```
+# cat smallallocs.bt
+uprobe:/usr/local/bin/dscanner:_D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_
+{
+ if (arg4 < 16)
+ {
+ @[ustack] = count();
+ }
+}
+
+END
+{
+ print(@, 3);
+ clear(@);
+}
+# bpftrace smallallocs.bt
+Attaching 2 probes...
+^C@[
+ _D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
+ _D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
+ _d_arraysetlengthT+248
+ _D8dscanner8analysis25label_var_same_name_check17LabelVarNameCheck9pushScopeMFZv+29
+ _D8dscanner8analysis25label_var_same_name_check17LabelVarNameCheck9__mixin175visitMFxC6dparse3ast6ModuleZv+21
+ _D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+11343
+ _D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
+ _Dmain+3618
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _d_run_main+463
+ main+16
+ __libc_start_main+235
+ 0x41fd89415541f689
+]: 450
+@[
+ _D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
+ _D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
+ _d_arrayappendcTX+1944
+ _D8dscanner8analysis10unmodified16UnmodifiedFinder9pushScopeMFZv+61
+ _D8dscanner8analysis10unmodified16UnmodifiedFinder5visitMFxC6dparse3ast6ModuleZv+21
+ _D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+11343
+ _D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
+ _Dmain+3618
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _d_run_main+463
+ main+16
+ __libc_start_main+235
+ 0x41fd89415541f689
+]: 450
+@[
+ _D2gc4impl12conservativeQw14ConservativeGC6qallocMFNbmkxC8TypeInfoZS4core6memory8BlkInfo_+0
+ _D2rt8lifetime12__arrayAllocFNaNbmxC8TypeInfoxQlZS4core6memory8BlkInfo_+236
+ _d_arrayappendcTX+1944
+ _D8dscanner8analysis3run7analyzeFAyaxC6dparse3ast6ModulexSQCeQBy6config20StaticAnalysisConfigKS7dsymbol11modulecache11ModuleCacheAxS3std12experimental5lexer__T14TokenStructureThVQFpa305_0a20202020737472696e6720636f6d6d656e743b0a20202020737472696e6720747261696c696e67436f6d6d656e743b0a0a20202020696e74206f70436d702873697a655f7420692920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202069662028696e646578203c2069292072657475726e202d313b0a202020202020202069662028696e646578203e2069292072657475726e20313b0a202020202020202072657475726e20303b0a202020207d0a0a20202020696e74206f70436d702872656620636f6e737420747970656f66287468697329206f746865722920636f6e73742070757265206e6f7468726f77204073616665207b0a202020202020202072657475726e206f70436d70286f746865722e696e646578293b0a202020207d0aZQYobZCQZv9container6rbtree__T12RedBlackTreeTSQBGiQBGd4base7MessageVQBFza62_20612e6c696e65203c20622e6c696e65207c7c2028612e6c696e65203d3d20622e6c696e6520262620612e636f6c756d6e203c20622e636f6c756d6e2920Vbi1ZQGt+680
+ _D8dscanner8analysis3run7analyzeFAAyaxSQBlQBf6config20StaticAnalysisConfigQBoKS6dparse5lexer11StringCacheKS7dsymbol11modulecache11ModuleCachebZb+337
+ _Dmain+3618
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+40
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv+139
+ _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv+32
+ _d_run_main+463
+ main+16
+ __libc_start_main+235
+ 0x41fd89415541f689
+]: 450
+```
+
+It looks like a lot of the small allocations are due to a red-black tree in `ModuleCache`.
+
+### What’s next?
+
+I think these examples already show that `bpftrace` is a pretty powerful tool. There’s a lot more that can be done, and I highly recommend reading [Brendan Gregg’s eBPF tutorials][18].
+
+I used uprobes to trace arbitrary functions in the D runtime. The upside is the freedom to do anything, but the downside is that I had to refer to the D runtime source code and manually deal with the D ABI. There’s also no guarantee that a script I write today will work with future versions of the runtime. Linux also supports making well-defined tracepoints in user code using a feature called [USDT][19]. That should let D code export stable tracepoints that can be used without worrying about the D ABI. I might do more experiments in future.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://github.com/iovisor/bpftrace
+[2]: https://en.wikipedia.org/wiki/Java_virtual_machine
+[3]: http://dtrace.org/blogs/about/
+[4]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md
+[5]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
+[6]: https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
+[7]: https://dlang.org/spec/abi.html#name_mangling
+[8]: https://github.com/dlang-community/D-Scanner
+[9]: https://github.com/dlang/druntime/
+[10]: https://dlang.org/spec/garbage.html#gc_config
+[11]: https://dlang.org/phobos/core_memory.html#.GC.stats
+[12]: https://github.com/dlang/druntime/tree/v2.081.1/src/gc
+[13]: https://github.com/dlang/druntime/tree/v2.081.1/src/gc/impl/conservative
+[14]: https://github.com/dlang/tools
+[15]: https://dlang.org/spec/abi.html#parameters
+[16]: https://en.wikipedia.org/wiki/Copy_elision
+[17]: https://github.com/iovisor/bpftrace/issues/246
+[18]: http://www.brendangregg.com/blog/2019-01-01/learn-ebpf-tracing.html
+[19]: https://lwn.net/Articles/753601/
diff --git a/sources/talk/20190429 Cisco goes all in on WiFi 6.md b/sources/talk/20190429 Cisco goes all in on WiFi 6.md
deleted file mode 100644
index decd25500a..0000000000
--- a/sources/talk/20190429 Cisco goes all in on WiFi 6.md
+++ /dev/null
@@ -1,87 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco goes all in on WiFi 6)
-[#]: via: (https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco goes all in on WiFi 6
-======
-Cisco rolls out Catalyst and Meraki WiFi 6-based access points, Catalyst 9000 switch
-![undefined / Getty Images][1]
-
-Cisco has taken the wraps off a family of WiFi 6 access points, roaming technology and developer-community support all to make wireless a solid enterprise equal with the wired world.
-
-“‘Best-effort’ wireless for enterprise customers doesn’t cut it any more. There’s been a change in customer expectations that there will be an uninterrupted unplugged experience,” said Scott Harrell, senior vice president and general manager of enterprise networking at Cisco. “It is now a wireless-first world.”
-
-**More about 802.11ax (Wi-Fi 6)**
-
- * [Why 802.11ax is the next big thing in wireless][2]
- * [FAQ: 802.11ax Wi-Fi][3]
- * [Wi-Fi 6 (802.11ax) is coming to a router near you][4]
- * [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][5]
- * [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][6]
-
-
-
-Bringing a wireless-first enterprise world together is one of the drivers behind a new family of WiFi 6-based access points (AP) for Cisco’s Catalyst and Meraki portfolios. WiFi 6 (802.11ax) is designed for high-density public or private environments. But it also will be beneficial in internet of things (IoT) deployments, and in offices that use bandwidth-hogging applications like videoconferencing.
-
-The Cisco Catalyst 9100 family and Meraki [MR 45/55][7] WiFi 6 access points are built on Cisco silicon and communicate via pre-802.11ax protocols. The silicon in these access points now acts as a rich sensor, providing IT with real-time insight into what is going on in the wireless network, which enables faster reactions to problems and security concerns, Harrell said.
-
-Aside from WiFi 6, the boxes include support for visibility and communications with Zigbee, BLE and Thread protocols. The Catalyst APs support uplink speeds of 2.5 Gbps, in addition to 100 Mbps and 1 Gbps. All speeds are supported on Category 5e cabling for an industry first, as well as 10GBASE-T (IEEE 802.3bz) cabling, Cisco said.
-
-Wireless traffic aggregates onto wired networks, so the wired network must also evolve. Technology like multi-gigabit Ethernet must be driven into the access layer, which in turn drives higher bandwidth needs at the aggregation and core layers, [Harrell said][8].
-
-Handling this influx of wireless traffic was part of the reason Cisco also upgraded its iconic Catalyst 6000 with the [Catalyst 9600 this week][9]. The 9600 brings with it Cat 6000 features such as support for MPLS, virtual switching and IPv6, while adding or bolstering support for wireless networks as well as Intent-based networking (IBN) and security segmentation. The 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the 9500 aggregation switch and the 9800 wireless controller.
-
-“WiFi doesn’t exist in a vacuum – how it connects to the enterprise and the data center or the Internet is key and in Cisco’s case that key is now the 9600 which has been built to handle the increased traffic,” said Lee Doyle, principal analyst with Doyle Research.
-
-The new 9600 ties in with the recently [released Catalyst 9800][10], which features 40Gbps to 100Gbps performance, depending on the model, hot-patching to simplify updates and eliminate update-related downtime, Encrypted Traffic Analytics (ETA), policy-based micro- and macro-segmentation and Trustworthy solutions to detect malware on wired or wireless connected devices, Cisco said.
-
-All Catalyst 9000 family members support other Cisco products such as [DNA Center][11] , which controls automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise wired and wireless networks.
-
-The new APs are pre-standard, but other vendors, including Aruba and NetGear, are also selling pre-standard 802.11ax devices. Cisco getting into the market solidifies the validity of this strategy, said Brandon Butler, a senior research analyst with IDC.
-
-Many experts [expect the standard][12] to be ratified late this year.
-
-“We expect to see volume shipments of WiFi 6 products by early next year and it being the de facto WiFi standard by 2022.”
-
-On top of the APs and 9600 switch, Cisco extended its software development community – [DevNet][13] – to offer WiFi 6 learning labs, sandboxes and developer resources.
-
-The Cisco Catalyst and Meraki access platforms are open and programmable all the way down to the chipset level, allowing applications to take advantage of network programmability, Cisco said.
-
-Cisco also said it had added more vendors, now including Apple, Samsung, Boingo, Presidio and Intel, to its ongoing [OpenRoaming][14] project. OpenRoaming, which is in beta, promises to let users move seamlessly between wireless networks and LTE without interruption.
-
-Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/04/cisco_catalyst_wifi_coffee-cup_coffee-beans_-100794990-large.jpg
-[2]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
-[3]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
-[4]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
-[5]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
-[6]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
-[7]: https://meraki.cisco.com/lib/pdf/meraki_datasheet_MR55.pdf
-[8]: https://blogs.cisco.com/news/unplugged-and-uninterrupted
-[9]: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html
-[10]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
-[11]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
-[12]: https://www.networkworld.com/article/3336263/is-jumping-ahead-to-wi-fi-6-the-right-move.html
-[13]: https://developer.cisco.com/wireless/?utm_campaign=colaunch-wireless19&utm_source=pressrelease&utm_medium=ciscopress-wireless-main
-[14]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/802-11ax-solution/openroaming.html
-[15]: https://www.facebook.com/NetworkWorld/
-[16]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md b/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md
deleted file mode 100644
index 965d2a0e51..0000000000
--- a/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md
+++ /dev/null
@@ -1,86 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600)
-[#]: via: (https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600
-======
-Cisco introduced Catalyst 9600 switches that let customers automate, set policy, provide security and gain assurance across wired and wireless networks.
-![Martyn Williams][1]
-
-Few events in the tech industry are truly transformative, but Cisco’s replacement of its core Catalyst 6000 family could be one of those actions for customers and the company.
-
-Introduced in 1999, [iterations of the Catalyst 6000][2] have nestled into the core of scores of enterprise networks, with the model 6500 becoming the company’s largest selling box ever.
-
-**Learn about edge networking**
-
- * [How edge networking and IoT will reshape data centers][3]
- * [Edge computing best practices][4]
- * [How edge computing can help secure the IoT][5]
-
-
-
-There is no question that migrating these customers alone to the new switch – the Catalyst 9600, which the company introduced today – will be of monumental importance to Cisco as it looks to revamp and continue to dominate large campus-core deployments. The first [Catalyst 9000][6], introduced in June 2017, is already the fastest-ramping product line in Cisco’s history.
-
-“There are at least tens of thousands of Cat 6000s running in campus cores all over the world,” said [Sachin Gupta][7], senior vice president for product management at Cisco. “It is the Swiss Army knife of switches in terms of features, and we have taken great care over two years developing feature parity and an easy migration path for those users to the Cat 9000.”
-
-Indeed, the 9600 brings with it Cat 6000 features such as support for MPLS, virtual switching and IPv6, while adding or bolstering support for newer items such as Intent-based networking (IBN), wireless networks and security segmentation. Strategically, the 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the [9500][8] aggregation switch and the [9800 wireless controller][9].
-
-Some of the nitty-gritty details about the 9600:
-
- * It is a purpose-built 40 Gigabit and 100 Gigabit Ethernet line of modular switches targeted for the enterprise campus with a wired switching capacity of up to 25.6 Tbps, with up to 6.4 Tbps of bandwidth per slot.
- * The switch supports granular port densities that fit diverse campus needs, including nonblocking 40 Gigabit and 100 Gigabit Ethernet Quad Small Form-Factor Pluggable (QSFP+, QSFP28) and 1, 10, and 25 GE Small Form-Factor Pluggable Plus (SFP, SFP+, SFP28).
- * It can be configured to support up to 48 nonblocking 100 Gigabit Ethernet QSFP28 ports with the Cisco Catalyst 9600 Series Supervisor Engine 1; up to 96 nonblocking 40 Gigabit Ethernet QSFP+ ports with the Supervisor Engine 1; and up to 192 nonblocking 25 Gigabit/10 Gigabit Ethernet SFP28/SFP+ ports with the Supervisor Engine 1.
- * It supports advanced routing and infrastructure services (MPLS, Layer 2 and Layer 3 VPNs, Multicast VPN, and Network Address Translation).
- * Cisco Software-Defined Access capabilities (such as a host-tracking database, cross-domain connectivity, and VPN Routing and Forwarding [VRF]-aware Locator/ID Separation Protocol), and network system virtualization with Cisco StackWise virtual technology.
-
-
-
-The 9600 series runs Cisco’s IOS XE software which now runs across all Catalyst 9000 family members. The software brings with it support for other key products such as Cisco’s [DNA Center][10] which controls automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks. What that means is that with one user interface, DNA Center, customers can automate, set policy, provide security and gain assurance across the entire wired and wireless network fabric, Gupta said.
-
-“The 9600 is a big deal for Cisco and customers as it brings together the campus core and lets users establish standards access and usage policies across their wired and wireless environments,” said Brandon Butler, a senior research analyst with IDC. “It was important that Cisco add a powerful switch to handle the increasing amounts of traffic wireless and cloud applications are bringing to the network.”
-
-IOS XE brings with it automated device provisioning and a wide variety of automation features including support for the network configuration protocol NETCONF and RESTCONF using YANG data models. The software offers near-real-time monitoring of the network, leading to quick detection and rectification of failures, Cisco says.
-
-The software also supports hot patching which provides fixes for critical bugs and security vulnerabilities between regular maintenance releases. This support lets customers add patches without having to wait for the next maintenance release, Cisco says.
-
-As with the rest of the Catalyst family, the 9600 is available via subscription-based licensing. Cisco says the [base licensing package][11] includes Network Advantage licensing options that are tied to the hardware. The base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features. These base licenses are perpetual.
-
-An add-on licensing package includes the Cisco DNA Premier and Cisco DNA Advantage options. The Cisco DNA add-on licenses are available as a subscription.
-
-IDC’S Butler noted that there are competitors such as Ruckus, Aruba and Extreme that offer switches capable of melding wired and wireless environments.
-
-The new switch is built for the next two decades of networking, Gupta said. “If any of our competitors thought they could just go in and replace the Cat 6k, they were misguided.”
-
-Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.techhive.com/images/article/2017/02/170227-mwc-02759-100710709-large.jpg
-[2]: https://www.networkworld.com/article/2289826/133715-The-illustrious-history-of-Cisco-s-Catalyst-LAN-switches.html
-[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
-[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
-[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
-[6]: https://www.networkworld.com/article/3256264/cisco-ceo-we-are-still-only-on-the-front-end-of-a-new-version-of-the-network.html
-[7]: https://blogs.cisco.com/enterprise/looking-forward-catalyst-9600-switch-and-9100-access-point-meraki
-[8]: https://www.networkworld.com/article/3202105/cisco-brings-intent-based-networking-to-the-end-to-end-network.html
-[9]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
-[10]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
-[11]: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9600/software/release/16-11/release_notes/ol-16-11-9600.html#id_67835
-[12]: https://www.facebook.com/NetworkWorld/
-[13]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190501 Vapor IO provides direct, high-speed connections from the edge to AWS.md b/sources/talk/20190501 Vapor IO provides direct, high-speed connections from the edge to AWS.md
deleted file mode 100644
index 0ddef36770..0000000000
--- a/sources/talk/20190501 Vapor IO provides direct, high-speed connections from the edge to AWS.md
+++ /dev/null
@@ -1,69 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Vapor IO provides direct, high-speed connections from the edge to AWS)
-[#]: via: (https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Vapor IO provides direct, high-speed connections from the edge to AWS
-======
-With a direct fiber line, latency between the edge and the cloud can be dramatically reduced.
-![Vapor IO][1]
-
-Edge computing startup Vapor IO now offers a direct connection between its edge containers to Amazon Web Services (AWS) via a high-speed fiber network link.
-
-The company said the connection between its Kinetic Edge containers and AWS will be provided by Crown Castle's Cloud Connect fiber network, which uses Amazon Direct Connect Services. This would help reduce network latency by essentially drawing a straight fiber line from Vapor IO's edge computing data centers to Amazon's cloud computing data centers.
-
-“When combined with Crown Castle’s high-speed Cloud Connect fiber, the Kinetic Edge lets AWS developers build applications that span the entire continuum from core to edge. By enabling new classes of applications at the edge, we make it possible for any AWS developer to unlock the next generation of real-time, innovative use cases,” wrote Matt Trifiro, chief marketing officer of Vapor IO, in a [blog post][2].
-
-**[ Read also:[What is edge computing and how it’s changing the network][3] ]**
-
-Vapor IO claims that the connection will lower latency by as much as 75%. “Connecting workloads and data at the Kinetic Edge with workloads and data in centralized AWS data centers makes it possible to build edge applications that leverage the full power of AWS,” wrote Trifiro.
-
-Developers building applications at the Kinetic Edge will have access to the full suite of AWS cloud computing services, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic Cloud Compute (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Relational Database Service (Amazon RDS).
-
-Crown Castle is the largest provider of shared communications infrastructure in the U.S., with 40,000 cell towers and 60,000 miles of fiber, offering 1Gbps to 10Gbps private fiber connectivity between the Kinetic Edge and AWS.
-
-AWS Direct Connect is essentially a private connection between Amazon's AWS customers and the AWS data centers, so customers don’t have to route their traffic over the public internet and compete with Netflix and YouTube, for example, for bandwidth.
-
-### How edge computing works
-
-The structure of [edge computing][3] is the reverse of the standard internet design. Rather than sending all the data up to central servers, as much processing as possible is done at the edge. This is to reduce the sheer volume of data coming upstream and thus reduce latency.
-
-With things like smart cars, even if 95% of the data is eliminated, the remaining 5% can still be a lot, so moving it fast is essential. Vapor IO said it will shuttle workloads to Amazon’s USEAST and USWEST data centers, depending on location.
-
-This shows how the edge is up-ending the traditional internet design and moving more computing outside the traditional data center, although a connection upstream is still important because it allows for rapid movement of necessary data from the edge to the cloud, where it can be stored or processed.
-
-**More about edge networking:**
-
- * [How edge networking and IoT will reshape data centers][4]
- * [Edge computing best practices][5]
- * [How edge computing can help secure the IoT][6]
-
-
-
-Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/09/vapor-io-kinetic-edge-data-center-100771510-large.jpg
-[2]: https://www.vapor.io/powering-amazon-web-services-at-the-kinetic-edge/
-[3]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
-[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
-[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
-[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
-[7]: https://www.facebook.com/NetworkWorld/
-[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190506 Cisco boosts SD-WAN with multicloud-to-branch access system.md b/sources/talk/20190506 Cisco boosts SD-WAN with multicloud-to-branch access system.md
deleted file mode 100644
index c676e5effb..0000000000
--- a/sources/talk/20190506 Cisco boosts SD-WAN with multicloud-to-branch access system.md
+++ /dev/null
@@ -1,89 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco boosts SD-WAN with multicloud-to-branch access system)
-[#]: via: (https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco boosts SD-WAN with multicloud-to-branch access system
-======
-Cisco's SD-WAN Cloud onRamp for CoLocation can tie branch offices to private data centers in regional corporate headquarters via colocation facilities for shorter, faster, possibly more secure connections.
-![istock][1]
-
-Cisco is looking to give traditional or legacy wide-area network users another reason to move to the [software-defined WAN world][2].
-
-The company has rolled out an integrated hardware/software package called SD-WAN Cloud onRamp for CoLocation that lets customers tie distributed multicloud applications back to a local branch office or local private data center. The idea is that a cloud-to-branch link would be shorter, faster and possibly more secure than tying cloud-based applications directly all the way to the data center.
-
-**More about SD-WAN**
-
- * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
- * [How to pick an off-site data-backup method][4]
- * [SD-Branch: What it is and why you’ll need it][5]
- * [What are the options for security SD-WAN?][6]
-
-
-
-“With Cisco SD-WAN Cloud onRamp for CoLocation operating regionally, connections from colocation facilities to branches are set up and configured according to traffic loads (such as video vs. web browsing vs. email), SLAs (requirements for low latency/jitter), and Quality of Experience for optimizing cloud application performance,” wrote Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business, in a [blog about the new service][7].
-
-According to Oswal, each branch or private data center is equipped with a network interface that provides a secure tunnel to the regional colocation facility. In turn, the Cloud onRamp for CoLocation establishes secure tunnels to SaaS application platforms, multi-cloud platform services, and enterprise data centers, he stated.
-
-Traffic is securely routed through the Cloud onRamp for CoLocation stack which includes security features such as application-aware firewalls, URL-filtering, intrusion detection/prevention, DNS-layer security, and Advanced Malware Protection (AMP) Threat Grid, as well as other network services such as load-balancing and Wide Area Application Services, Oswal wrote.
-
-A typical use case for the package is an enterprise that has dozens of distributed branch offices, clustered around major cities, spread over several countries. The goal is to tie each branch to enterprise data center databases, SaaS applications, and multi-cloud services while meeting service level agreements and application quality of experience, Oswal stated.
-
-“With virtualized Cisco SD-WAN running on regional colocation centers, the branch workforce has access to applications and data residing in AWS, Azure, and Google cloud platforms as well as SaaS providers such as Microsoft 365 and Salesforce—transparently and securely,” Oswal said. “Distributing SD-WAN features over a regional architecture also brings processing power closer to where data is being generated—at the cloud edge.”
-
-The idea is that paths to designated SaaS applications will be monitored continuously for performance, and the application traffic will be dynamically routed to the best-performing path, without requiring human intervention, Oswal stated.
-
-For a typical configuration, a region covering a target city uses a colocation IaaS provider that hosts the Cisco Cloud onRamp for CoLocation, which includes:
-
- * Cisco vManage software that lets customers manage applications and provision, monitor and troubleshoot the WAN.
- * [Cisco Cloud Services Platform (CSP) 5000][8]: The systems are x86 Linux Kernel-based Virtual Machine (KVM) software and hardware platforms for the data center, regional hub, and colocation Network Functions Virtualization (NFV). The platforms let enterprise IT teams or service providers deploy any Cisco or third-party network virtual service with Cisco’s [Network Services Orchestrator (NSO)][9] or any other northbound management and orchestration system.
- * The Cisco [Catalyst 9500 Series][10] aggregation switches. Based on an x86 CPU, the Catalyst 9500 Series is Cisco’s lead purpose-built fixed core and aggregation enterprise switching platform, built for security, IoT, and cloud. The switches come with a 4-core x86, 2.4-GHz CPU, 16-GB DDR4 memory, and 16-GB internal storage.
-
-
-
-If the features of the package sound familiar, that’s because the [Cloud onRamp for CoLocation][11] package is the second generation of a similar SD-WAN package offered by Viptela which Cisco [bought in 2017][12].
-
-SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
-
-It's a profoundly hot market with tons of players including [Cisco][13], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa. IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
-
-[SD-WAN][14] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
-
-"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Oswal said in a Network World [article][15] earlier this year.
-
-Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/02/istock-578801262-100750453-large.jpg
-[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
-[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
-[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
-[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
-[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
-[7]: https://blogs.cisco.com/enterprise/cisco-sd-wan-cloud-onramp-for-colocation-multicloud
-[8]: https://www.cisco.com/c/en/us/products/collateral/switches/cloud-services-platform-5000/nb-06-csp-5k-data-sheet-cte-en.html#ProductOverview
-[9]: https://www.cisco.com/go/nso
-[10]: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9500-series-switches/data_sheet-c78-738978.html
-[11]: https://www.networkworld.com/article/3207751/viptela-cloud-onramp-optimizes-cloud-access.html
-[12]: https://www.networkworld.com/article/3193784/cisco-grabs-up-sd-wan-player-viptela-for-610m.html?nsdr=true
-[13]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
-[14]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
-[15]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
-[16]: https://www.facebook.com/NetworkWorld/
-[17]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190507 Server shipments to pick up in the second half of 2019.md b/sources/talk/20190507 Server shipments to pick up in the second half of 2019.md
deleted file mode 100644
index 8169c594ef..0000000000
--- a/sources/talk/20190507 Server shipments to pick up in the second half of 2019.md
+++ /dev/null
@@ -1,56 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Server shipments to pick up in the second half of 2019)
-[#]: via: (https://www.networkworld.com/article/3393167/server-shipments-to-pick-up-in-the-second-half-of-2019.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Server shipments to pick up in the second half of 2019
-======
-Server sales slowed in anticipation of the new Intel Xeon processors, but they are expected to start up again before the end of the year.
-![Thinkstock][1]
-
-Global server shipments are not expected to return to growth momentum until the third quarter or even the fourth quarter of 2019, according to Taiwan-based tech news site DigiTimes, which cited unnamed server supply chain sources. The one bright spot remains cloud providers like Amazon, Google, and Facebook, which continue their buying binge.
-
-Normally I’d be reluctant to cite such a questionable source, but given most of the OEMs and ODMs are based in Taiwan and DigiTimes (the article is behind a paywall so I cannot link) has shown it has connections to them, I’m inclined to believe them.
-
-Quanta Computer chairman Barry Lam told the publication that Quanta's shipments of cloud servers have risen steadily, compared to sharp declines in shipments of enterprise servers. Lam continued that enterprise servers account for only 1-2% of the firm's total server shipments.
-
-**[ Also read:[Gartner: IT spending to drop due to falling equipment prices][2] ]**
-
-[Server shipments began to slow down in the first quarter][3] thanks in part to the impending arrival of second-generation Xeon Scalable processors from Intel. And since it takes a while to get parts and qualify them, this quarter won’t be much better.
-
-In its latest quarterly earnings, Intel's data center group (DCG) said sales declined 6% year over year, the first decline of its kind since the first quarter of 2012, reversing an average growth of over 20% in the past.
-
-[The Osborne Effect][4] wasn’t the sole reason. An economic slowdown in China and the trade war, which will add significant tariffs to Chinese-made products, are also hampering sales.
-
-DigiTimes says Inventec, Intel's largest server motherboard supplier, expects shipments of enterprise server motherboards to further lose steam for the rest of the year, while sales of data center servers are expected to grow 10-15% year over year in 2019.
-
-**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
-
-It went on to say server shipments may concentrate in the second half or even the fourth quarter of the year, while demand for cloud-based data center servers from the cloud giants will remain positive as edge computing, new artificial intelligence (AI) applications, and 5G applications proliferate beginning in 2020.
-
-Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3393167/server-shipments-to-pick-up-in-the-second-half-of-2019.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.techhive.com/images/article/2017/04/2_data_center_servers-100718306-large.jpg
-[2]: https://www.networkworld.com/article/3391062/it-spending-to-drop-due-to-falling-equipment-prices-gartner-predicts.html
-[3]: https://www.networkworld.com/article/3332144/server-sales-projected-to-slow-while-memory-prices-drop.html
-[4]: https://en.wikipedia.org/wiki/Osborne_effect
-[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
-[6]: https://www.facebook.com/NetworkWorld/
-[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190509 Cisco adds AMP to SD-WAN for ISR-ASR routers.md b/sources/talk/20190509 Cisco adds AMP to SD-WAN for ISR-ASR routers.md
deleted file mode 100644
index a5ec6212d8..0000000000
--- a/sources/talk/20190509 Cisco adds AMP to SD-WAN for ISR-ASR routers.md
+++ /dev/null
@@ -1,74 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco adds AMP to SD-WAN for ISR/ASR routers)
-[#]: via: (https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco adds AMP to SD-WAN for ISR/ASR routers
-======
-Cisco SD-WAN now sports Advanced Malware Protection on its popular edge routers, adding to their routing, segmentation, security, policy and orchestration capabilities.
-![vuk8691 / Getty Images][1]
-
-Cisco has added support for Advanced Malware Protection (AMP) to its million-plus ISR/ASR edge routers, in an effort to [reinforce branch and core network malware protection][2] across the SD-WAN.
-
-Cisco last year added its Viptela SD-WAN technology to the IOS XE version 16.9.1 software that runs its core ISR/ASR routers such as the ISR models 1000, 4000 and ASR 5000, in use by organizations worldwide. Cisco bought Viptela in 2017.
-
-**More about SD-WAN**
-
- * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
- * [How to pick an off-site data-backup method][4]
- * [SD-Branch: What it is and why you’ll need it][5]
- * [What are the options for security SD-WAN?][6]
-
-
-
-The release of Cisco IOS XE offered an instant upgrade path for creating cloud-controlled SD-WAN fabrics to connect distributed offices, people, devices and applications operating on the installed base, Cisco said. At the time, the company said that Cisco SD-WAN on edge routers builds a secure virtual IP fabric by combining routing, segmentation, security, policy and orchestration.
-
-With the recent release of [IOS-XE SD-WAN 16.11][7], Cisco has brought AMP and other enhancements to its SD-WAN.
-
-“Together with Cisco Talos [Cisco’s security-intelligence arm], AMP imbues your SD-WAN branch, core and campus locations with threat intelligence from millions of worldwide users, honeypots, sandboxes, and extensive industry partnerships,” wrote Patrick Vitalone, a Cisco product marketing manager, in a [blog][8] about the security portion of the new software. “In total, AMP identifies more than 1.1 million unique malware samples a day.” When AMP in Cisco SD-WAN spots malicious behavior, it automatically blocks it, he wrote.
-
-The idea is to use integrated preventative engines, exploit prevention and intelligent signature-based antivirus to stop malicious attachments and fileless malware before they execute, Vitalone wrote.
-
-AMP support is added to a menu of security features already included in the SD-WAN software including support for URL filtering, [Cisco Umbrella][9] DNS security, Snort Intrusion Prevention, the ability to segment users across the WAN and embedded platform security, including the [Cisco Trust Anchor][10] module.
-
-**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11] ]**
-
-The software also supports [SD-WAN Cloud onRamp for CoLocation][12], which lets customers tie distributed multicloud applications back to a local branch office or local private data center. That way a cloud-to-branch link would be shorter, faster and possibly more secure than tying cloud-based applications directly to the data center.
-
-“The idea that this kind of security technology is now integrated into Cisco’s SD-WAN offering is critical for Cisco and customers looking to evaluate SD-WAN offerings,” said Lee Doyle, principal analyst at Doyle Research.
-
-IOS-XE SD-WAN 16.11 is available now.
-
-Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2018/09/shimizu_island_el_nido_palawan_philippines_by_vuk8691_gettyimages-155385042_1200x800-100773533-large.jpg
-[2]: https://www.networkworld.com/article/3285728/what-are-the-options-for-securing-sd-wan.html
-[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
-[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
-[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
-[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
-[7]: https://www.cisco.com/c/en/us/td/docs/routers/sdwan/release/notes/xe-16-11/sd-wan-rel-notes-19-1.html
-[8]: https://blogs.cisco.com/enterprise/enabling-amp-in-cisco-sd-wan
-[9]: https://www.networkworld.com/article/3167837/cisco-umbrella-cloud-service-shapes-security-for-cloud-mobile-resources.html
-[10]: https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/trustworthy-technologies-datasheet.pdf
-[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
-[12]: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html
-[13]: https://www.facebook.com/NetworkWorld/
-[14]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190510 Supermicro moves production from China.md b/sources/talk/20190510 Supermicro moves production from China.md
deleted file mode 100644
index 21739fa416..0000000000
--- a/sources/talk/20190510 Supermicro moves production from China.md
+++ /dev/null
@@ -1,58 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Supermicro moves production from China)
-[#]: via: (https://www.networkworld.com/article/3394404/supermicro-moves-production-from-china.html#tk.rss_all)
-[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
-
-Supermicro moves production from China
-======
-Supermicro was cleared of any activity related to the Chinese government and secret chips in its motherboards, but it is taking no chances and is moving its facilities.
-![Frank Schwichtenberg \(CC BY 4.0\)][1]
-
-Server maker Supermicro, based in Fremont, California, is reportedly moving production out of China over customer concerns that the Chinese government had secretly inserted chips for spying into its motherboards.
-
-The claims were made by Bloomberg late last year in a story that cited more than 100 sources in government and private industry, including Apple and Amazon Web Services (AWS). However, Apple CEO Tim Cook and AWS CEO Andy Jassy denied the claims and called for Bloomberg to retract the article. And a few months later, the third-party investigations firm Nardello & Co examined the claims and [cleared Supermicro][2] of any surreptitious activity.
-
-At first it seemed like Supermicro was weathering the storm, but the story did have a negative impact. Server sales have fallen since the Bloomberg story, and the company is forecasting a near 10% decline in total revenues for the March quarter compared to the previous three months.
-
-**[ Also read:[Who's developing quantum computers][3] ]**
-
-And now, Nikkei Asian Review reports that despite the strong rebuttals, some customers remain cautious about the company's products. To address those concerns, Nikkei says Supermicro has told suppliers to [move production out of China][4], citing industry sources familiar with the matter.
-
-Moving production also has the side benefit of mitigating the effects of the U.S.-China trade war, which is only getting worse. Since the tariffs are based on the dollar amount of the product, they can quickly add up even for a low-end system, as Serve The Home noted in [this analysis][5].
-
-Supermicro is the world's third-largest server maker by shipments, selling primarily to cloud providers like Amazon and Facebook. It does its own assembly in its Fremont facility but outsources motherboard production to numerous suppliers, mostly in China and Taiwan.
-
-"We have to be more self-reliant [to build in-house manufacturing] without depending only on those outsourcing partners whose production previously has mostly been in China," an executive told Nikkei.
-
-Nikkei notes that roughly 90% of the motherboards shipped worldwide in 2017 were made in China, but that percentage dropped to less than 50% in 2018, according to Digitimes Research, a tech supply chain specialist based in Taiwan.
-
-Supermicro just held a groundbreaking ceremony for an 800,000-square-foot manufacturing plant in Taiwan and is expanding its San Jose, California, plant as well. So, they must be anxious to be free of China if they are willing to expand in one of the most expensive real estate markets in the world.
-
-A Supermicro spokesperson said via email, “We have been expanding our manufacturing capacity for many years to meet increasing customer demand. We are currently constructing a new Green Computing Park building in Silicon Valley, where we are the only Tier 1 solutions vendor manufacturing in Silicon Valley, and we proudly broke ground this week on a new manufacturing facility in Taiwan. To support our continued global growth, we look forward to expanding in Europe as well.”
-
-Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3394404/supermicro-moves-production-from-china.html#tk.rss_all
-
-作者:[Andy Patrizio][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Andy-Patrizio/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/05/supermicro_-_x11sae__cebit_2016_01-100796121-large.jpg
-[2]: https://www.networkworld.com/article/3326828/investigator-finds-no-evidence-of-spy-chips-on-super-micro-motherboards.html
-[3]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
-[4]: https://asia.nikkei.com/Economy/Trade-war/Server-maker-Super-Micro-to-ditch-made-in-China-parts-on-spy-fears
-[5]: https://www.servethehome.com/how-tariffs-hurt-intel-xeon-d-atom-and-amd-epyc-3000/
-[6]: https://www.facebook.com/NetworkWorld/
-[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md b/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md
new file mode 100644
index 0000000000..5adf5f65a7
--- /dev/null
+++ b/sources/talk/20190513 Top auto makers rely on cloud providers for IoT.md
@@ -0,0 +1,53 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top auto makers rely on cloud providers for IoT)
+[#]: via: (https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Top auto makers rely on cloud providers for IoT
+======
+
+For the companies looking to implement the biggest and most complex [IoT][1] setups in the world, the idea of pairing up with [AWS][2], [Google Cloud][3] or [Azure][4] seems to be one whose time has come. Within the last two months, BMW and Volkswagen have both announced large-scale deals with Microsoft and Amazon, respectively, to help operate their extensive network of operational technology.
+
+According to Alfonso Velosa, vice president and analyst at Gartner, part of the impetus behind those two deals is that the automotive sector fits in very well with the architecture of the public cloud. Public clouds are great at collecting and processing data from a diverse array of different sources, whether they’re in-vehicle sensors, dealerships, mechanics, production lines or anything else.
+
+**[ RELATED:[What hybrid cloud means in practice][5]. | Get regularly scheduled insights by [signing up for Network World newsletters][6]. ]**
+
+“What they’re trying to do is create a broader ecosystem. They think they can leverage the capabilities from these folks,” Velosa said.
+
+### Cloud providers as IoT partners
+
+The idea is automated analytics for service and reliability data, manufacturing and a host of other operational functions. And while the full realization of that type of service is still very much a work in progress, it has clear-cut advantages for big companies – a skilled partner handling tricky implementation work, built-in capability for sophisticated analytics and security, and, of course, the ability to scale up in a big way.
+
+Hence, the structure of the biggest public clouds has upside for many large-scale IoT deployments, not just the ones taking place in the auto industry. The cloud giants have vast infrastructures, with multiple points of presence all over the world.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3395137/top-auto-makers-rely-on-cloud-providers-for-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[2]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
+[3]: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
+[4]: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html
+[5]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
+[6]: https://www.networkworld.com/newsletters/signup.html
+[7]: javascript://
+[8]: /learn-about-insider/
diff --git a/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md b/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md
new file mode 100644
index 0000000000..1ecd68fa41
--- /dev/null
+++ b/sources/talk/20190514 Mobility and SD-WAN, Part 1- SD-WAN with 4G LTE is a Reality.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality)
+[#]: via: (https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html)
+[#]: author: (Francisca Segovia )
+
+Mobility and SD-WAN, Part 1: SD-WAN with 4G LTE is a Reality
+======
+
+![istock][1]
+
+Without a doubt, 5G — the fifth generation of mobile wireless technology — is the hottest topic in wireless circles today. You can’t throw a stone without hitting 5G news. While telecommunications providers are in a heated competition to roll out 5G, it’s important to reflect on current 4G LTE (Long Term Evolution) business solutions as a preview of what we have learned and what’s possible.
+
+This is part one of a two-part blog series that will explore the [SD-WAN][2] journey through the evolution of these wireless technologies.
+
+### **Mobile SD-WAN is a reality**
+
+4G LTE commercialization continues to expand. According to [the GSM (Groupe Spéciale Mobile) Association][3], 710 operators have rolled out 4G LTE in 217 countries, reaching 83 percent of the world’s population. The evolution of 4G is transforming the mobile industry and is setting the stage for the advent of 5G.
+
+Mobile connectivity is increasingly integrated with SD-WAN, along with MPLS and broadband WAN services today. 4G LTE represents a very attractive transport alternative, as a backup or even an active member of the WAN transport mix to connect users to critical business applications. And in some cases, 4G LTE might be the only choice in locations where fixed lines aren’t available or reachable. Furthermore, an SD-WAN can optimize 4G LTE connectivity and bring new levels of performance and availability to mobile-based business use cases by selecting the best path available across several 4G LTE connections.
+
+### **Increasing application performance and availability with 4G LTE**
+
+Silver Peak has partnered with [BEC Technologies][4] to create a joint solution that enables customers to incorporate one or more low-cost 4G LTE services into any [Unity EdgeConnect™][5] SD-WAN edge platform deployment. All the capabilities of the EdgeConnect platform are supported across LTE links including packet-based link bonding, dynamic path control, path conditioning along with the optional [Unity Boost™ WAN Optimization][6] performance pack. This ensures always-consistent, always-available application performance even in the event of an outage or degraded service.
+
+EdgeConnect also incorporates sophisticated NAT traversal technology that eliminates the requirement for provisioning the LTE service with extra-cost static IP addresses. The Silver Peak [Unity Orchestrator™][7] management software enables the prioritization of LTE bandwidth usage based on branch and application requirements – active-active or backup-only. This solution is ideal in retail point-of-sale and other deployment use cases where always-available WAN connectivity is critical for the business.
+
+### **Automated SD-WAN enables innovative services**
+
+An example of an innovative mobile SD-WAN service is [swyMed’s DOT Telemedicine Backpack][8] powered by the EdgeConnect [Ultra Small][9] hardware platform. This integrated telemedicine solution enables first responders to connect to doctors and communicate patient vital statistics and real-time video anywhere, any time, greatly improving and expediting care for emergency patients. Using a lifesaving backpack provisioned with two LTE services from different carriers, EdgeConnect continuously monitors the underlying 4G LTE services for packet loss, latency and jitter. In the case of transport failure or brownout, EdgeConnect automatically initiates a sub-second failover so that voice, video and data connections continue without interruption over the remaining active 4G service. By bonding the two LTE links together with the EdgeConnect SD-WAN, swyMed can achieve an aggregate signal quality in excess of 90 percent, bringing mobile telemedicine to areas that would have been impossible in the past due to poor signal strength.
+
+To learn more about SD-WAN and the unique advantages that SD-WAN provides to enterprises across all industries, visit the [SD-WAN Explained][2] page on our website.
+
+### **Prepare for the 5G future**
+
+In summary, the adoption of 4G LTE is a reality. Service providers are taking advantage of the distinct benefits of SD-WAN to offer managed SD-WAN services that leverage 4G LTE.
+
+As the race for 5G gains momentum, service providers are sure to look for ways to drive new revenue streams to capitalize on their initial investments. Stay tuned for part two of this two-part blog series, where I will discuss how SD-WAN is one of the technologies that can help service providers transition from 4G to 5G and enable the monetization of a new wave of managed 5G services.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3394866/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality.html
+
+作者:[Francisca Segovia][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/istock-952414660-100796279-large.jpg
+[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[3]: https://www.gsma.com/futurenetworks/resources/all-ip-statistics/
+[4]: https://www.silver-peak.com/resource-center/edgeconnect-4glte-solution-bec-technologies
+[5]: https://www.silver-peak.com/products/unity-edge-connect
+[6]: https://www.silver-peak.com/products/unity-boost
+[7]: https://www.silver-peak.com/products/unity-orchestrator
+[8]: https://www.silver-peak.com/resource-center/mobile-telemedicine-helps-save-lives-streaming-real-time-clinical-data-and-patient
+[9]: https://www.silver-peak.com/resource-center/edgeconnect-us-ec-us-specification-sheet
diff --git a/sources/talk/20190515 Extreme addresses networked-IoT security.md b/sources/talk/20190515 Extreme addresses networked-IoT security.md
new file mode 100644
index 0000000000..1ad756eded
--- /dev/null
+++ b/sources/talk/20190515 Extreme addresses networked-IoT security.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Extreme addresses networked-IoT security)
+[#]: via: (https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Extreme addresses networked-IoT security
+======
+The ExtremeAI security app features machine learning that can understand typical behavior of IoT devices and alert when it finds anomalies.
+![Getty Images][1]
+
+[Extreme Networks][2] has taken the wraps off a new security application it says will use machine learning and artificial intelligence to help customers effectively monitor, detect and automatically remediate security issues with networked IoT devices.
+
+The application, ExtremeAI Security, features machine-learning technology that can understand the typical behavior of IoT devices and automatically trigger alerts when endpoints act in unusual or unexpected ways, Extreme said.
+
+**More about edge networking**
+
+ * [How edge networking and IoT will reshape data centers][3]
+ * [Edge computing best practices][4]
+ * [How edge computing can help secure the IoT][5]
+
+
+
+Extreme said that the ExtremeAI Security application can tie into all leading threat intelligence feeds and has close integration with its existing [Extreme Workflow Composer][6] to enable automatic threat mitigation and remediation.
+
+The application integrates with the company’s ExtremeAnalytics application, which lets customers view threats by severity, category, high-risk endpoints and geography. An automated ticketing feature integrates with a variety of popular IT tools such as Slack, Jira, and ServiceNow, and the application interoperates with many popular security tools, including existing network taps, the vendor stated.
+
+There has been an explosion of new endpoints ranging from million-dollar smart MRI machines to five-dollar sensors, which creates a complex and difficult job for network and security administrators, said Abby Strong, vice president of product marketing for Extreme. “We need smarter, secure and more self-healing networks especially where IT cybersecurity resources are stretched to the limit.”
+
+Extreme is trying to address an issue that is important to enterprise-networking customers: how to get actionable, usable insights as close to real-time as possible, said Rohit Mehra, Vice President of Network Infrastructure at IDC. “Extreme is melding automation, analytics and security that can look at network traffic patterns and allow the system to take action when needed.”
+
+The ExtremeAI application, which will be available in October, is but one layer of IoT security Extreme offers. Already on the market, its [Defender for IoT][7] package, which includes a Defender application and adapter, lets customers monitor, set policies and isolate IoT devices across an enterprise.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
+
+The Extreme AI and Defender packages are now part of what the company calls Extreme Elements, which is a menu of its new and existing Smart OmniEdge, Automated Campus and Agile Data Center software, hardware and services that customers can order to build a manageable, secure system.
+
+Aside from the applications, the Elements include Extreme Management Center, the company’s network management software; the company’s x86-based intelligent appliances, including the ExtremeCloud Appliance; and [ExtremeSwitching X465 premium][9], a stackable multi-rate gigabit Ethernet switch.
+
+The switch and applications are just the beginning of a very busy time for Extreme. In its [3Q earnings call][10] this month, company CEO Ed Meyercord noted Extreme was in the “early stages of refreshing 70 percent of our products,” and seven different products will become generally available this quarter – a record for Extreme, he said.
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3395539/extreme-addresses-networked-iot-security.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_security_tablet_conference_digital-100787102-large.jpg
+[2]: https://www.networkworld.com/article/3289508/extreme-facing-challenges-girds-for-future-networking-battles.html
+[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[6]: https://www.extremenetworks.com/product/workflow-composer/
+[7]: https://www.extremenetworks.com/product/extreme-defender-for-iot/
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[9]: https://community.extremenetworks.com/extremeswitching-exos-223284/extremexos-30-2-and-smart-omniedge-premium-x465-switches-are-now-available-7823377
+[10]: https://seekingalpha.com/news/3457137-extreme-networks-minus-15-percent-quarterly-miss-light-guidance
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190516 Will 5G be the first carbon-neutral network.md b/sources/talk/20190516 Will 5G be the first carbon-neutral network.md
new file mode 100644
index 0000000000..decacfac5d
--- /dev/null
+++ b/sources/talk/20190516 Will 5G be the first carbon-neutral network.md
@@ -0,0 +1,88 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Will 5G be the first carbon-neutral network?)
+[#]: via: (https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Will 5G be the first carbon-neutral network?
+======
+Increased energy consumption in new wireless networks could become ecologically unsustainable. Engineers think they have solutions that apply to 5G, but all is not certain.
+![Dushesina/Getty Images][1]
+
+If wireless networks transfer 1,000 times more data, does that mean they will use 1,000 times more energy? It probably would with the old 4G LTE wireless technologies — LTE doesn’t have much of a sleep-standby mode. But with 5G, we might have a more energy-efficient option.
+
+More customers want Earth-friendly options, and engineers are now working on how to achieve it — meaning 5G might introduce the first zero-carbon networks. It’s not all certain, though.
+
+**[ Related:[What is 5G wireless? And how it will change networking as we know it][2] ]**
+
+“When the 4G technology for wireless communication was developed, not many people thought about how much energy is consumed in transmitting bits of information,” says Emil Björnson, associate professor of communication systems at Linkoping University, [in an article on the school’s website][3].
+
+Standby was never built into 4G, Björnson explains. Reasons include overbuilding — the architects wanted to ensure connections didn’t fail, so they just kept the power up. The downside to that redundancy is that almost the same amount of energy is used whether the system is transmitting data or not.
+
+“We now know that this is not necessary,” Björnson says. 5G networks don’t use much power during periods of low traffic, and that reduces power consumption.
+
+Björnson says he knows how to make future networks — those 5G networks that may one day become the enterprise broadband replacement — super efficient even when there is heavy use. Massive-MIMO (multiple-input, multiple-output) antennas are the answer, he says. That’s hundreds of connected antennas taking advantage of multipath.
+
+I’ve written before about some of Björnson's Massive-MIMO ideas. He thinks [Massive-MIMO will remove all capacity ceilings from wireless networks][4]. However, he now adds calculations to his research that he claims prove that the Massive-MIMO antenna technology will also reduce power use. He and his group are actively promoting their academic theories in a paper ([pdf][5]).
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
+
+### Nokia's plan to reduce wireless networks' CO2 emissions
+
+Björnson’s isn’t the only 5G-aimed eco-concept out there. Nokia points out that it isn’t just radios transmitting that use electricity. Cooling is actually the main electricity hog, says the telecommunications company, which is one of the world’s principal manufacturers of mobile network equipment.
+
+Nokia says the global energy cost of radio access networks (RANs) in 2016 (the last year for which numbers were available), which includes the base transceiver stations (BTSs) needed by mobile networks, was around $80 billion. That figure increases as more users come on stream, which is probable. About 90% of a BTS’s electricity use “converts to waste heat,” [Harry Kuosa, a marketing executive, writes on Nokia’s blog][7], and base station sites account for about 80% of a mobile network’s entire energy use, Nokia adds on its website.
+
+“A thousand-times more traffic that creates a thousand-times higher energy costs is unsustainable,” Nokia says in its [ebook][8] on the subject, “Turning the zero carbon vision into business opportunity,” and it’s why Nokia plans liquid-cooled 5G base stations among other things, including chip improvements. It says the liquid-cooling can reduce CO2 emissions by up to 80%.
+
+### Will those ideas work?
+
+Not all agree power consumption can be reduced when implementing 5G, though. Gabriel Brown of Heavy Reading quotes [in a tweet][9] a China Mobile executive as saying that 5G BTSs will use three times as much power as 4G LTE ones, because the higher frequencies used in 5G mean more BTS units are needed to provide the same geographic coverage: for physics reasons, higher frequencies mean shorter range.
+
+If, as is projected, 5G develops into the new enterprise broadband for the internet of things (IoT), along with associated private networks covering everything else, then these eco- and cost-important questions are going to be salient — and they need answers quickly. 5G will soon be here, and [Gartner estimates that 60% of organizations will adopt it][10].
+
+**More about 5G networks:**
+
+ * [How enterprises can prep for 5G networks][11]
+ * [5G vs 4G: How speed, latency and apps support differ][12]
+ * [Private 5G networks are coming][13]
+ * [5G and 6G wireless have security issues][14]
+ * [How millimeter-wave wireless could help support 5G and IoT][15]
+
+
+
+Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3395465/will-5g-be-the-first-carbon-neutral-network.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/01/4g-versus-5g_horizon_sunrise-100784230-large.jpg
+[2]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
+[3]: https://liu.se/en/news-item/okningen-av-mobildata-kraver-energieffektivare-nat
+[4]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html
+[5]: https://arxiv.org/pdf/1812.01688.pdf
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[7]: https://www.nokia.com/blog/nokia-has-ambitious-plans-reduce-network-power-consumption/
+[8]: https://pages.nokia.com/2364.Zero.Emissions.ebook.html?did=d000000001af&utm_campaign=5g_in_action_&utm_source=twitter&utm_medium=organic&utm_term=0dbf430c-1c94-47d7-8961-edc4f0ba3270
+[9]: https://twitter.com/Gabeuk/status/1099709788676636672?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1099709788676636672&ref_url=https%3A%2F%2Fwww.lightreading.com%2Fmobile%2F5g%2Fpower-consumption-5g-basestations-are-hungry-hungry-hippos%2Fd%2Fd-id%2F749979
+[10]: https://www.gartner.com/en/newsroom/press-releases/2018-12-18-gartner-survey-reveals-two-thirds-of-organizations-in
+[11]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
+[12]: https://www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html
+[13]: https://www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html
+[14]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html
+[15]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html
+[16]: https://www.facebook.com/NetworkWorld/
+[17]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md b/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md
new file mode 100644
index 0000000000..02063687a0
--- /dev/null
+++ b/sources/talk/20190517 The modern data center and the rise in open-source IP routing suites.md
@@ -0,0 +1,140 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The modern data center and the rise in open-source IP routing suites)
+[#]: via: (https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+The modern data center and the rise in open-source IP routing suites
+======
+Open source enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything.
+![fdecomite \(CC BY 2.0\)][1]
+
+As the cloud service providers and search engines began structuring their businesses, they quickly ran into problems managing their networking equipment. Ultimately, after a few rounds of trying to get the network vendors to understand their problems, these hyperscale network operators revolted.
+
+Primarily, what the operators were looking for was a level of control in managing their networks that the network vendors couldn’t offer. The revolt blazed the trail that introduced open networking and network disaggregation to the world of networking. Let us first learn about disaggregation, followed by open networking.
+
+### Disaggregation
+
+The concept of network disaggregation involves breaking up the vertically integrated networking stack into individual pieces, each of which can be used in the best way possible. The hardware can be separated from the software, and from open or closed IP routing suites. This enables network operators to use the best of breed for the hardware, the software, and the applications.
+
+**[ Now see[7 free network tools you must have][2]. ]**
+
+Networking has always been built as an appliance and not as a platform. The mindset is that the network vendor builds a specialized appliance and completely controls what you can and cannot do on that box. In plain words, they will not enable anything that is not theirs. As a result, they act as gatekeepers, not gate-enablers.
+
+Network disaggregation empowers network operators to lay their hands on the features they need, when they need them. This is impossible with non-disaggregated hardware.
+
+### Disaggregation leads to using best-of-breed
+
+In the traditional vertically integrated networking market, you’re forced to live with the software because you like the hardware, or vice versa. But network disaggregation lets different people develop the things that matter to them. This allows multiple groups of people to connect, with each one focused on doing what it does best. Switching silicon manufacturers can provide the best merchant silicon. Routing suites can be provided by those who are best at that. And the OS vendors can provide the glue that makes all of these work well together.
+
+With disaggregation, people are driven to do what they are good at. One company does the hardware, another does the software, and yet another does the IP routing suites. Hence, today the networking world looks more like the server world.
+
+### Open source
+
+Within this rise of the modern data center, there is another element driving network disaggregation: the notion of open source. Open source is “denoting software for which the original source code is made freely available and may be redistributed and modified.” It enables passionate people to come together and fabricate work of phenomenal quality. This is in contrast to a single vendor doing everything.
+
+As a matter of fact, the networking world has always been very vendor driven. However, the advent of open source gives like-minded people, rather than the vendor, control over the features. This eliminates vendor lock-in, thereby enabling interesting work. Open source allows more than one company to be involved.
+
+### Open source in the data center
+
+The traditional enterprise and data center networks were designed primarily around bridging and the Spanning Tree Protocol (STP). However, the modern data center is driven by IP routing and the Clos topology. As a result, you need a strong IP routing suite.
+
+That was the point where the need for an open-source routing suite surfaced: a suite that could help drive the modern data center. The primary open-source routing suites are [FRRouting (FRR)][3], BIRD, GoBGP and ExaBGP.
+
+Open-source IP routing protocol suites are slowly but steadily gaining acceptance and are used in data centers of various sizes. Why? Because they allow a community of developers and users to work together on solutions to common problems. Open-source IP routing protocol suites equip them to develop the specific features they need. They also help network operators create simple designs that make sense to them, as opposed to having everything controlled by the vendor. And they enable routing suites to run on compute nodes; Kubernetes, among others, uses this model of running a routing protocol on a compute node.
+
+Today many startups are using FRR. Out of all of the IP routing suites, FRR is preferred in the data center as the primary open-source IP routing protocol suite. Some traditional network vendors have even demonstrated the use of FRR on their networking gear.
+
+There are lots of new features currently being developed for FRR, not just by the developers but also by the network operators.
+
+### Use cases for open-source routing suites
+
+When it comes to use cases, where do IP routing protocol suites sit? First and foremost, if you want to do any type of routing in the disaggregated networking world, you need an IP routing suite.
+
+Some operators are using FRR at the edge of the network as well, receiving full BGP feeds. Many solutions that use Intel’s DPDK for packet forwarding use FRR as the control plane, receiving full BGP feeds. In addition, there are vendors using FRR as the core IP routing suite for a full leaf-and-spine data center architecture. You can even get a version of FRR on pfSense, which is a free and open-source firewall.
+
+We need to keep in mind that reference implementations are important. Open source allows you to test at scale; vendors don’t allow you to do that. With FRR, however, we have the ability to spin up virtual machines (VMs) or even containers, using software like Vagrant, to test your network. Some vendors do offer software versions, but they are not fully feature-compatible.
+
+Also, with open source you do not need to wait. This empowers you with the flexibility and speed that drive the modern data center.
+
+### Deep dive on FRRouting (FRR)
+
+FRR is a Linux Foundation project. In a technical Linux sense, FRR is a group of daemons that work together to provide a complete routing suite that includes BGP, IS-IS, LDP, OSPF, BFD, PIM, and RIP.
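+
+As an illustrative sketch (the file location and defaults vary by distribution and FRR version), each of these protocol daemons is typically switched on or off in the `/etc/frr/daemons` file, while `zebra` always runs:
+
+```
+# /etc/frr/daemons (fragment): enable only the protocols you need
+bgpd=yes
+ospfd=yes
+isisd=no
+pimd=no
+```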
+
+Each of these daemons communicates with a common routing information base (RIB) daemon, called Zebra, in order to interface with the OS and to resolve conflicts between multiple routing protocols providing the same information. Interfacing with the OS is used to receive link up/down events, to add and delete routes, and so on.
+
+### FRRouting (FRR) components: Zebra
+
+Zebra is the RIB of the routing system. It knows everything about the state of the system relevant to routing and is able to pass and disseminate this information to all interested parties.
+
+The RIB in FRR acts just like a traditional RIB. When a route wins, it goes into the Linux kernel data plane where the forwarding occurs. All of the routing protocols run as separate processes and each of them have their source code in FRR.
+
+For example, when BGP starts up, it needs to know which virtual routing and forwarding (VRF) instances and IP interfaces are available. Zebra collects this information and passes it back to the interested daemons, along with all the relevant information about the state of the machine.
+
+Furthermore, a daemon can also register with Zebra for notifications. For example, if a particular route changes, the daemon can be informed. This can also be used for reverse path forwarding (RPF). FRR doesn’t need to poll for changes that happen on the network.
+
+There are myriad ways to control Linux and read its state. Sometimes you have to use the Netlink bus, and sometimes you may need to read state from Linux’s /proc file system. The goal of Zebra is to gather all this data for the upper-level protocols.
+
+### FRR supports remote data planes
+
+FRR also has the ability to manage remote data planes. So, what does this mean? Typically, the data forwarding plane and the routing protocols run on the same box. In another model, adopted by OpenFlow and SDN for example, the data forwarding plane can be on one box while FRR runs on a different box on behalf of the first, pushing the computed routing state to it. In other words, the data plane and the control plane run on different boxes.
+
+If you examine the traditional world, it’s like having one large chassis with different line cards, and the ability to install routes in those line cards. FRR operates with the same model: one control plane with the capability to serve several boxes, if needed. It does this via the forwarding plane manager.
+
+### Forwarding plane manager
+
+Zebra can either install routes directly into the data plane of the box it is running on or use a forwarding plane manager to install routes on a remote box. When it installs a route, the forwarding plane manager abstracts the data that describes the route and its next hops, and then pushes it to a remote system, where the remote machine processes it and programs the ASIC appropriately.
+
+After the data is abstracted, you can use whatever protocol you want in order to push the data to the remote machine. You can even include the data in an email.
+
+### What is holding people back from open source?
+
+For the last 30 years, the networking world has meant going to a vendor to solve a problem. But now, with open-source routing suites such as FRR, there is a major shift in the mindset of how you approach troubleshooting.
+
+That shift creates a fear of not being able to use the software properly, because with open source you are the one who has to fix it. This can be scary and daunting at first, but it doesn’t have to be. Also, to run FRR on traditional network gear, you need the vendor to enable it, but they may be reluctant since it competes with their own platform, which can be another roadblock.
+
+### The future of FRR
+
+If we examine FRR from the use-case perspective of the data center, FRR is feature-complete. Anyone building an IP-based data center will find that FRR has everything they need. The latest 7.0 release of FRR adds YANG/NETCONF support, BGP enhancements, and OpenFabric.
+
+FRR is not just about providing features, boosting the performance or being the same as or better than the traditional network vendor’s software, it is also about simplifying the process for the end user.
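+
+For a sense of that simplification, all of the FRR daemons are managed from one integrated shell, `vtysh` (a sketch only; the prompt shown here is illustrative, since it reflects the machine’s hostname):
+
+```
+$ sudo vtysh
+router# show ip route          # state collected by zebra
+router# configure terminal     # one place to configure BGP, OSPF, etc.
+```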
+
+Since the modern data center is focused on automation and ease of use, FRR has made progress that the vendors have not yet caught up with. FRR is very automation-friendly. For example, FRR takes BGP and makes it automation-friendly without changing the protocol: it supports BGP unnumbered, which is unmatched by any other vendor suite. This is where the vendors are trying to catch up.
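+
+As a minimal sketch of what this looks like in practice (the AS number and the interface name `swp1` here are illustrative), a BGP unnumbered session in FRR is configured against an interface rather than a peer IP address:
+
+```
+router bgp 65001
+ neighbor swp1 interface remote-as external
+```
+
+Because no peer addresses appear in the configuration, the same snippet can be templated across every leaf and spine, which is what makes it so automation-friendly.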
+
+Also, while troubleshooting, FRR shows peers’ and hosts’ names, not just their IP addresses. This lets you understand what is happening without spending much time. Vendors, by contrast, show only the peers’ IP addresses, which can be daunting when you need to troubleshoot.
+
+FRR provides the features you need to run an efficient network and data center, and it makes the IP routing suite easier to configure and manage. Vendors just keep adding feature after feature, whether significant or not, and then you need to travel certification paths that teach you how to twiddle 20 million knobs. How many of those networks are robust and stable?
+
+FRR is about supporting the features that matter, not every imaginable feature. FRR is an open-source project that brings like-minded people together, and good work that is offered isn’t turned away. As a case in point, FRR has an open-source implementation of EIGRP.
+
+The problem surfaces when you see a bunch of things and think you need them. In reality, you should try to keep the network as simple as possible. FRR is laser-focused on ease of use and on simplifying operation, rather than on implementing features that are mostly not needed to drive the modern data center.
+
+For more information and to contribute, why not join the [FRR mailing list][4]?
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][5]**
+
+Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396136/the-modern-data-center-and-the-rise-in-open-source-ip-routing-suites.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/12/modular_humanoid_polyhedra_connections_structure_building_networking_by_fdecomite_cc_by_2-0_via_flickr_1200x800-100782334-large.jpg
+[2]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
+[3]: https://frrouting.org/community/7.0-launch.html
+[4]: https://frrouting.org/#participate
+[5]: /contributor-network/signup.html
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md
new file mode 100644
index 0000000000..9df4495f05
--- /dev/null
+++ b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Enterprise IoT: Companies want solutions in these 4 areas)
+[#]: via: (https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Enterprise IoT: Companies want solutions in these 4 areas
+======
+Based on customer pain points, PwC identified four areas where companies are seeking enterprise solutions, including energy use and sustainability.
+![Jackie Niam / Getty Images][1]
+
+Internet of things (IoT) vendors and pundits like to crow about the billions and billions of connected devices that make the IoT so ubiquitous and powerful. But how much of that installed base is really relevant to the enterprise?
+
+To find out, I traded emails with Rob Mesirow, principal at [PwC’s Connected Solutions][2], the firm’s new one-stop shop for IoT solutions, who suggests that consumer adoption may not paint a true picture of the enterprise opportunities. If you remove the health trackers and the smart thermostats from the market, he suggested, there are very few connected devices left.
+
+So, I wondered, what is actually happening on the enterprise side of IoT? What kinds of devices are we talking about, and in what kinds of numbers?
+
+**[ Read also:[Forget 'smart homes,' the new goal is 'autonomous buildings'][3] ]**
+
+“When people talk about the IoT,” Mesirow told me, “they usually focus on [consumer devices, which far outnumber business devices][4]. Yet [connected buildings currently represent only 12% of global IoT projects][5],” he noted, “and that’s without including wearables and smart home projects.” (Mesirow is talking about buildings that “use various IoT devices, including occupancy sensors that determine when people are present in a room in order to keep lighting and temperature controls at optimal levels, lowering energy costs and aiding sustainability goals. Sensors can also detect water and gas leaks and aid in predictive maintenance for HVAC systems.”)
+
+### 4 key enterprise IoT opportunities
+
+More specifically, based on customer pain points, PwC’s Connected Solutions is focusing on a few key opportunities, which Mesirow laid out in a [blog post][6] earlier this year. (Not surprisingly, the opportunities seem tied to [the group’s products][7].)
+
+“A lot of these solutions came directly from our customers’ request,” he noted. “We pre-qualify our solutions with customers before we build them.”
+
+Let’s take a look at the top four areas, along with a quick reality check on how important they are and whether the technology is ready for prime time.
+
+#### **1\. Energy use and sustainability**
+
+The IoT makes it possible to manage buildings and spaces more efficiently, with savings of 25% or more. Occupancy sensors can tell whether anyone is actually in a room, adjusting lighting and temperature to save money and conserve energy.
+
+Connected buildings can also help determine when meeting spaces are available, which can boost occupancy at large businesses and universities by 40% while cutting infrastructure and maintenance costs. Other sensors, meanwhile, can detect water and gas leaks and aid in predictive maintenance for HVAC systems.
+
+**Reality check:** Obviously, much of this technology is not new, but there’s a real opportunity to make it work better by integrating disparate systems and adding better analytics to the data to make planning more effective.
+
+#### **2\. Asset tracking**
+
+“Businesses can also use the IoT to track their assets,” Mesirow told me, “which can range from trucks to hotel luggage carts to medical equipment. It can even assist with monitoring trash by alerting appropriate people when dumpsters need to be emptied.”
+
+Asset trackers can instantly identify the location of all kinds of equipment (saving employee time and productivity), and they can reduce the number of lost, stolen, and misplaced devices and machines as well as provide complete visibility into the location of your assets.
+
+Such trackers can also save employees from wasting time hunting down the devices and machines they need. For example, PwC noted that during an average hospital shift, more than one-third of nurses spend at least an hour looking for equipment such as blood pressure monitors and insulin pumps. Just as important, location tracking often improves asset optimization, reduces inventory needs, and improves the customer experience.
+
+**Reality check:** Asset tracking offers clear value. The real question is whether a given use case is cost effective or not, as well as how the data gathered will actually be used. Too often, companies spend a lot of money and effort tracking their assets, but don’t do much with the information.
+
+#### **3\. Security and compliance**
+
+Connected solutions can create better working environments, Mesirow said. “In a hotel, for example, these smart devices can ensure that air and water quality is up to standards, provide automated pest traps, monitor dumpsters and recycling bins, detect trespassers, determine when someone needs assistance, or discover activity in an unauthorized area. Monitoring the water quality of hotel swimming pools can lower chemical and filtering costs,” he said.
+
+Mesirow cited an innovative use case where, in response to workers’ complaints about harassment, hotel operators—in conjunction with the [American Hotel and Lodging Association][8]—are giving their employees portable devices that alert security staff when workers request assistance.
+
+**Reality check:** This seems useful, but the ROI might be difficult to calculate.
+
+#### **4\. Customer experience**
+
+According to PwC, “Sensors, facial recognition, analytics, dashboards, and notifications can elevate and even transform the customer experience. … Using connected solutions, you can identify and reward your best customers by offering perks, reduced wait times, and/or shorter lines.”
+
+Those kinds of personalized customer experiences can potentially boost customer loyalty and increase revenue, Mesirow said, adding that the technology can also make staff deployments more efficient and “enhance safety by identifying trespassers and criminals who are tampering with company property.”
+
+**Reality check:** Creating a great customer experience is critical for businesses today, and this kind of personalized targeting promises to make it more efficient and effective. However, it has to be done in a way that makes customers comfortable and not creeped out. Privacy concerns are very real, especially when it comes to working with facial recognition and other kinds of surveillance technology. For example, [San Francisco recently banned city agencies from using facial recognition][9], and others may follow.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][10]
+ * [What is edge computing and how it’s changing the network][11]
+ * [Most powerful Internet of Things companies][12]
+ * [10 Hot IoT startups to watch][13]
+ * [The 6 ways to make money in IoT][14]
+ * [What is digital twin technology? [and why it matters]][15]
+ * [Blockchain, service-centric networking key to IoT success][16]
+ * [Getting grounded in IoT networking and security][17]
+ * [Building IoT-ready networks must become a priority][18]
+ * [What is the Industrial IoT? [And why the stakes are so high]][19]
+
+
+
+Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_by_jackie_niam_gettyimages-996958260_2400x1600-100788446-large.jpg
+[2]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html#get-connected
+[3]: https://www.networkworld.com/article/3309420/forget-smart-homes-the-new-goal-is-autonomous-buildings.html
+[4]: https://www.statista.com/statistics/370350/internet-of-things-installed-base-by-category/
+[5]: https://iot-analytics.com/top-10-iot-segments-2018-real-iot-projects/
+[6]: https://www.digitalpulse.pwc.com.au/five-unexpected-ways-internet-of-things/
+[7]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
+[8]: https://www.ahla.com/
+[9]: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
+[10]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[11]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[12]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[13]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[14]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[15]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[16]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[17]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[20]: https://www.facebook.com/NetworkWorld/
+[21]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md b/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md
new file mode 100644
index 0000000000..86d7bf0efe
--- /dev/null
+++ b/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Experts: Enterprise IoT enters the mass-adoption phase)
+[#]: via: (https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Experts: Enterprise IoT enters the mass-adoption phase
+======
+Dropping hardware prices and 5G boost business internet-of-things deployments; technical complexity encourages partnerships.
+![Avgust01 / Getty Images][1]
+
+[IoT][2] in general has taken off quickly over the past few years, but experts at the recent IoT World highlighted that the enterprise part of the market has been particularly robust of late – it’s not just an explosion of connected home gadgets anymore.
+
+Donna Moore, chairwoman of the LoRa Alliance, an industry group that works to develop and scale low-power WAN technology for mass usage, said on a panel that she’s never seen growth this fast in the sector. “I’d say we’re now in the early mass adopters [stage],” she said.
+
+**More on IoT:**
+
+ * [Most powerful Internet of Things companies][3]
+ * [10 Hot IoT startups to watch][4]
+ * [The 6 ways to make money in IoT][5]
+ * [What is digital twin technology? [and why it matters]][6]
+ * [Blockchain, service-centric networking key to IoT success][7]
+ * [Getting grounded in IoT networking and security][8]
+ * [Building IoT-ready networks must become a priority][9]
+ * [What is the Industrial IoT? [And why the stakes are so high]][10]
+
+
+
+The technology itself has pushed adoption to these heights, said Graham Trickey, head of IoT for the GSMA, a trade organization for mobile network operators. Along with price drops for wireless connectivity modules, the array of upcoming technologies nestling under the umbrella label of [5G][11] could simplify the process of connecting devices to [edge-computing][12] hardware – and the edge to the cloud or [data center][13].
+
+“Mobile operators are not just providers of connectivity now, they’re farther up the stack,” he said. Technologies like narrow-band IoT and support for highly demanding applications like telehealth are all set to be part of the final 5G spec.
+
+### Partnerships needed to deal with IoT complexity
+
+
+That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge.
+
+“That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey.
+
+Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said.
+
+One of the central problems – [IoT security][14] – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities. More than one panelist praised the passage of the EU’s General Data Protection Regulation, saying that it offered concrete guidelines for entities developing IoT tech – a crucial consideration for some companies that may not have a lot of in-house expertise in that area.
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
+[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[11]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[12]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html?nsdr=true
+[13]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[14]: https://www.networkworld.com/article/3269736/getting-grounded-in-iot-networking-and-security.html
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md
new file mode 100644
index 0000000000..be8a4833cc
--- /dev/null
+++ b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever)
+[#]: via: (https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever
+======
+Burger King uses real-time IoT data to deliver burgers to drivers stuck in traffic — and it seems to be working.
+![Mike Mozart \(CC BY 2.0\)][1]
+
+People love to eat in their cars. That’s why we invented the drive-in and the drive-thru.
+
+But despite a fast-food outlet on the corner of every major intersection, it turns out we were only scratching the surface of this idea. Burger King is taking this concept to the next logical step with its new IoT-powered Traffic Jam Whopper project.
+
+I have to admit, when I first heard about this, I thought it was a joke, but apparently the [Traffic Jam Whopper project is totally real][2] and has already passed a month-long test in Mexico City. While the company hasn’t specified a timeline, it plans to roll out the Traffic Jam Whopper project in Los Angeles (where else?) and other traffic-plagued megacities such as São Paulo and Shanghai.
+
+**[ Also read:[Is IoT in the enterprise about making money or saving money?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+### How Burger King's Traffic Jam Whopper project works
+
+According to [Nations Restaurant News][5], this is how Burger King's Traffic Jam Whopper project works:
+
+The project uses real-time data to target hungry drivers along congested roads and highways for food delivery by couriers on motorcycles.
+
+The system leverages push notifications to the Burger King app and personalized messaging on digital billboards positioned along busy roads close to a Burger King restaurant.
+
+[According to the We Believers agency][6] that put it all together, “By leveraging traffic and drivers’ real-time data [location and speed], we adjusted our billboards’ location and content, displaying information about the remaining time in traffic to order, and personalized updates about deliveries in progress.” The menu is limited to Whopper Combos to speed preparation (though the company plans to offer a wider menu as it works out the kinks).
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]**
+
+The company said orders in Mexico City were delivered in an average of 15 minutes. Fortunately (or unfortunately, depending on how you look at it) many traffic jams hold drivers captive for far longer than that.
+
+Once the order is ready, the motorcyclist uses Google Maps and GPS technology embedded into the app to locate the car that made the order. The delivery person then weaves through traffic to hand over the Whopper. (Lane-splitting is legal in California, but I have no idea if there are other potential safety or law-enforcement issues involved here. For drivers ordering burgers, at least, the Burger King app supports voice ordering. I also don’t know what happens if traffic somehow clears up before the burger arrives.)
+
+Here’s a video of the pilot program in Mexico City:
+
+#### New technology => new opportunities
+
+Even more amazing, this is not _just_ a publicity stunt. NRN quotes Bruno Cardinali, head of marketing for Burger King Latin America and Caribbean, claiming the project boosted sales during rush hour, when app orders are normally slow:
+
+“Thanks to The Traffic Jam Whopper campaign, we’ve increased deliveries by 63% in selected locations across the month of April, adding a significant amount of orders per restaurant per day, just during rush hours.”
+
+If nothing else, this project shows that creative thinking really can leverage IoT technology into new businesses. In this case, it’s turning notoriously bad traffic—pretty much required for this process to work—from a problem into an opportunity to generate additional sales during slow periods.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][8]
+ * [What is edge computing and how it’s changing the network][9]
+ * [Most powerful Internet of Things companies][10]
+ * [10 Hot IoT startups to watch][11]
+ * [The 6 ways to make money in IoT][12]
+ * [What is digital twin technology? [and why it matters]][13]
+ * [Blockchain, service-centric networking key to IoT success][14]
+ * [Getting grounded in IoT networking and security][15]
+ * [Building IoT-ready networks must become a priority][16]
+ * [What is the Industrial IoT? [And why the stakes are so high]][17]
+
+
+
+Join the Network World communities on [Facebook][18] and [LinkedIn][19] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/burger-king-gift-card-100797164-large.jpg
+[2]: https://abc7news.com/food/burger-king-to-deliver-to-drivers-stuck-in-traffic/5299073/
+[3]: https://www.networkworld.com/article/3343917/the-big-picture-is-iot-in-the-enterprise-about-making-money-or-saving-money.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.nrn.com/technology/tech-tracker-burger-king-deliver-la-motorists-stuck-traffic?cid=
+[6]: https://www.youtube.com/watch?v=LXNgEZV7lNg
+[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
+[8]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[9]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[10]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[11]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[12]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[13]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[14]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[15]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[16]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[17]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[18]: https://www.facebook.com/NetworkWorld/
+[19]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md
new file mode 100644
index 0000000000..61ae9e656b
--- /dev/null
+++ b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md
@@ -0,0 +1,55 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Benchmarks of forthcoming Epyc 2 processor leaked)
+[#]: via: (https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Benchmarks of forthcoming Epyc 2 processor leaked
+======
+Benchmarks of AMD's second-generation Epyc server processor briefly found their way online and show the chip is larger but a little slower than the Epyc 7601 on the market now.
+![Gordon Mah Ung][1]
+
+Benchmarks of engineering samples of AMD's second-generation Epyc server processor, code-named “Rome,” briefly found their way online and show a very beefy chip running a little slower than its predecessor.
+
+Rome is based on the Zen 2 architecture, believed to be more of an incremental improvement over the prior generation than a major leap. It was already known that Rome would feature a 64-core, 128-thread design, but that was about the extent of the details.
+
+**[ Also read:[Who's developing quantum computers][2] ]**
+
+The details came courtesy of SiSoftware's Sandra PC analysis and benchmarking tool. It’s very popular and has been used by hobbyists and benchmarkers alike for more than 20 years. New benchmarks are uploaded to the Sandra database all the time, and what I suspect happened is that someone running a Rome sample ran the benchmark, not realizing the results would be uploaded to the Sandra database.
+
+The benchmarks were from two different servers, a Dell PowerEdge R7515 and a Super Micro Super Server. The Dell product number is not on the market, so this would indicate a future server with Rome processors. The entry has since been deleted, but several sites, including the hobbyist site Tom’s Hardware Guide, managed to [take a screenshot][3].
+
+According to the entry, the chip is a mid-range processor with a base clock speed of 1.4GHz, jumping up to 2.2GHz in turbo mode, with 16MB of Level 2 cache and 256MB of Level 3 cache, the latter of which is crazy. The first-generation Epyc had just 32MB of L3 cache.
+
+That’s a little slower than the Epyc 7601 on the market now, but when you double the number of cores in the same space, something’s gotta give, and in this case, it’s electricity. The thermal envelope was not revealed by the benchmark. Previous Epyc processors ranged from 120 watts to 180 watts.
+
+Sandra ranked the processor at #3 for arithmetic and #5 for multimedia processing, which makes me wonder what on Earth beat the Rome chip. Interestingly, the servers were running Windows 10, not Windows Server 2019.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**
+
+Rome is expected to be officially launched at the massive Computex trade show in Taiwan on May 27 and will begin shipping in the third quarter of the year.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/11/rome_2-100779395-large.jpg
+[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
+[3]: https://www.tomshardware.co.uk/amd-epyc-rome-processor-data-center,news-60265.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md
new file mode 100644
index 0000000000..54ddf76db3
--- /dev/null
+++ b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Edge-based caching and blockchain-nodes speed up data transmission)
+[#]: via: (https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Edge-based caching and blockchain-nodes speed up data transmission
+======
+Using a combination of edge-based data caches and blockchain-like distributed networks, Bluzelle claims it can significantly speed up the delivery of data across the globe.
+![OlgaSalt / Getty][1]
+
+The combination of a blockchain-like distributed network and the ability to locate data at the edge will massively speed up future networks, such as those used by the internet of things (IoT), claims Bluzelle in announcing what it says is the first decentralized data delivery network (DDN).
+
+Distributed DDNs will be like content delivery networks (CDNs) that now cache content around the world to speed up the web, but in this case, it will be for data, the Singapore-based company explains. Distributed key-value (blockchain) networks and edge computing built into Bluzelle's system will provide significantly faster delivery than existing caching, the company claims in a press release announcing its product.
+
+“The future of data delivery can only ever be de-centrally distributed,” says Pavel Bains, CEO and co-founder of Bluzelle. It’s because the world requires instant access to data that’s being created at the edge, he argues.
+
+“But delivery is hampered by existing technology,” he says.
+
+**[ Also read:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3]. ]**
+
+Bluzelle says decentralized caching is the logical next step to generalized data caching, used for reducing latency. “Decentralized caching expands the theory of caching,” the company writes in a [report][4] (Dropbox pdf) on its [website][5]. It says the cache must be expanded from simply being located at one unique location.
+
+“Using a combination of distributed networks, the edge and the cloud, [it’s] thereby increasing the transactional throughput of data,” the company says.
+
+This kind of thing is particularly important in consumer gaming now, where split-second responses from players around the world make or break a game experience, but it will likely be crucial for the IoT, higher-definition media, artificial intelligence, and virtual reality as they gain more of a role in digitization—including in critical enterprise applications.
+
+“Currently applications are limited to data caching technologies that require complex configuration and management of 10-plus-year-old technology constrained to a few data centers,” Bains says. “These were not designed to handle the ever-increasing volumes of data.”
+
+Bains says one of the key selling points of Bluzelle's network is that developers should be able to implement and run networks without having to also physically expand the networks manually.
+
+“Software developers don’t want to react to where their customers come from. Our architecture is designed to always have the data right where the customer is. This provides a superior consumer experience,” he says.
+
+Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache both connected to the same MySQL database in, say, Virginia, its users would get the data three to 16 times faster than with a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says.
+
+The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms over 223ms using a traditional cache.
+
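The speedups Bluzelle cites follow directly from its published latencies; the sketch below is just arithmetic on the company's own numbers, not an independent measurement:

```python
def speedup(traditional_ms: float, edge_ms: float) -> float:
    """Ratio of traditional-cache latency to edge-cache latency."""
    return traditional_ms / edge_ms

# Latencies Bluzelle published for its Virginia-based gaming test (milliseconds).
singapore = speedup(727, 22)  # traditional cloud cache vs. Bluzelle edge node
ireland = speedup(223, 16)
print(f"Singapore: ~{singapore:.0f}x faster; Ireland: ~{ireland:.0f}x faster")
```

The Singapore ratio reproduces the 33x figure the company quotes.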
+An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.
+
+**More about edge networking**
+
+ * [How edge networking and IoT will reshape data centers][3]
+ * [Edge computing best practices][6]
+ * [How edge computing can help secure the IoT][7]
+
+
+
+Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/blockchain_crypotocurrency_bitcoin-by-olgasalt-getty-100787949-large.jpg
+[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[4]: https://www.dropbox.com/sh/go5bnhdproy1sk5/AAC5MDoafopFS7lXUnmiLAEFa?dl=0&preview=Bluzelle+Report+-+The+Decentralized+Internet+Is+Here.pdf
+[5]: https://bluzelle.com/
+[6]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[7]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[8]: https://www.facebook.com/NetworkWorld/
+[9]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md
new file mode 100644
index 0000000000..829fb127f8
--- /dev/null
+++ b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md
@@ -0,0 +1,80 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Online performance benchmarks all companies should try to achieve)
+[#]: via: (https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+Online performance benchmarks all companies should try to achieve
+======
+With digital performance more important than ever, companies must ensure their online performance meets customers’ needs. A new ThousandEyes report can help them determine that.
+![Thinkstock][1]
+
+There's no doubt about it: We have entered the experience economy, and digital performance is more important than ever.
+
+Customer experience is the top brand differentiator, topping price and every other factor. And businesses that provide a poor digital experience will find customers will actively seek a competitor. In fact, a recent ZK Research study found that in 2018, about two-thirds of millennials changed loyalties to a brand because of a bad experience. (Note: I am an employee of ZK Research.)
+
+To help companies determine if their online performance is leading, lacking, or on par with some of the top companies, ThousandEyes this week released its [2019 Digital Experience Performance Benchmark Report][2]. This document provides a comparative analysis of web, infrastructure, and network performance from the top 20 U.S. digital retail, travel, and media websites. Although this is a small sampling of companies, those three industries are the most competitive when it comes to using their digital platforms for competitive advantage. The aggregated data from this report can be used as an industry-agnostic performance benchmark that all companies should strive to meet.
+
+**[ Read also:[IoT providers need to take responsibility for performance][3] ]**
+
+The methodology of the study was for ThousandEyes to use its own platform to provide an independent view of performance. It uses active monitoring and a global network of monitoring agents to measure application and network layer performance for websites, applications, and services. The company collected data from 36 major cities scattered across the U.S. Six of the locations (Ashburn, Chicago, Dallas, Los Angeles, San Jose, and Seattle) also included vantage points connected to six major broadband ISPs (AT&T, CenturyLink, Charter, Comcast, Cox, and Verizon). This acts as a good proxy for what a user would experience.
+
+The test involved page load tests against the websites of the major companies in retail, media, and travel and looked at several factors, including DNS response time, round-trip latency, network time (one-way latency), HTTP response time, and page load. The averages and median times can be seen in the table below. Those can be considered the average benchmarks that all companies should try to attain.
+
+![][4]
+
+### Choice of content delivery network matters by location
+
+ThousandEyes' report also looked at how the various services that companies use impacts web performance. For example, the study measured the performance of the content delivery network (CDN) providers in the 36 markets. It found that in Albuquerque, Akamai and Fastly had the most latency, whereas Edgecast had the least. It also found that in Boston, all of the CDN providers were close. Companies can use this type of data to help them select a CDN. Without it, decision makers are essentially guessing and hoping.
+
+### CDN performance is impacted by ISP
+
+Another useful set of data was cross-referencing CDN performance by ISP, which led to some fascinating information. With Comcast, Akamai, CloudFront, Google, and Incapsula all had high amounts of latency. Only Edgecast and Fastly offered average latency. On the other hand, all of the CDNs worked great with CenturyLink. This tells a buyer, "If my customer base is largely in Comcast’s footprint, I should look at Edgecast or Fastly or my customers will be impacted."
+
+### DNS and latency directly impact page load times
+
+The ThousandEyes study also confirmed some points that many people believe to be true but until now had no quantifiable evidence to support. For example, it's widely accepted that DNS response time and network latency to the CDN edge correlate to web performance; the data in the report now supports that belief. ThousandEyes did some regression analysis and fancy math and found that, in general, companies in the top quartile of HTTP performance had above-average DNS response time and network performance. There were a few exceptions, but in most cases, this is true.
+
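The kind of correlation the report describes can be illustrated with a toy calculation; the measurements below are invented for demonstration and are not from the ThousandEyes data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-site measurements: DNS response time (ms) vs. page load (s).
dns_ms = [18, 22, 25, 31, 40, 55]
page_load_s = [1.9, 2.1, 2.3, 2.6, 3.1, 3.8]
r = pearson(dns_ms, page_load_s)
print(f"correlation between DNS time and page load: r = {r:.2f}")
```

A coefficient near 1.0 is what "DNS response time correlates to web performance" looks like numerically.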
+Based on all the data, below are the benchmarks for the three infrastructure metrics gathered; they are what businesses, even ones outside the three verticals studied, should hope to achieve to support a high-quality digital experience.
+
+ * DNS response time: 25 ms
+ * Round-trip network latency: 15 ms
+ * HTTP response time: 250 ms
+
+
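Those targets can be encoded as a simple pass/fail check against measured values; the function and the sample measurements are illustrative, not part of the report:

```python
# Benchmark targets from the ThousandEyes report (milliseconds).
BENCHMARKS_MS = {
    "dns_response": 25,
    "round_trip_latency": 15,
    "http_response": 250,
}

def check_site(measured_ms: dict) -> dict:
    """Return, per metric, whether a measured value meets its benchmark."""
    return {metric: measured_ms[metric] <= target
            for metric, target in BENCHMARKS_MS.items()}

# Hypothetical measurements for one site: latency misses the 15 ms target.
result = check_site({"dns_response": 21,
                     "round_trip_latency": 19,
                     "http_response": 240})
print(result)
```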
+
+### Operations teams need to focus on digital performance
+
+Benchmarking certainly provides value, but the report also offers some recommendations on how operations teams can use the data to improve digital performance. Those include:
+
+  * **Measure your site from distributed user vantage points.** There is no single point that will provide a view of digital performance everywhere. Instead, measure from a range of ISPs in different regions and take a multi-layered approach to visibility (application, network and routing).
+  * **Use internet performance information as a baseline.** Compare your organization's data to the baselines, and if you’re not meeting them in some markets, focus on improvement there.
+ * **Compare performance to industry peers**. In highly competitive industries, it’s important to understand how you rank versus the competition. Don’t be satisfied with hitting the benchmarks if your key competitors exceed them.
+ * **Build a strong performance stack.** The data shows that solid DNS and HTTP response times and low latency are correlated to solid page load times. Focus on optimizing those factors and consider them foundational to digital performance.
+
+
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/07/racing_speed_runners_internet-speed-100728363-large.jpg
+[2]: https://www.thousandeyes.com/research/digital-experience
+[3]: https://www.networkworld.com/article/3340318/iot-providers-need-to-take-responsibility-for-performance.html
+[4]: https://images.idgesg.net/images/article/2019/05/thousandeyes-100797290-large.jpg
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md
new file mode 100644
index 0000000000..51098dad33
--- /dev/null
+++ b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md
@@ -0,0 +1,93 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Study: Most enterprise IoT transactions are unencrypted)
+[#]: via: (https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html)
+[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
+
+Study: Most enterprise IoT transactions are unencrypted
+======
+A Zscaler report finds 91.5% of IoT communications within enterprises are in plaintext and so susceptible to interference.
+![HYWARDS / Getty Images][1]
+
+Of the millions of enterprise-[IoT][2] transactions examined in a recent study, the vast majority were sent without benefit of encryption, leaving the data vulnerable to theft and tampering.
+
+The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with [SSL][3]. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed.
+
+**[ For more on IoT security, see[our corporate guide to addressing IoT security concerns][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
+
+Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud, seeking the digital footprints of IoT devices. They found and analyzed 56 million IoT-device transactions over that time, and identified the types of devices, the protocols they used, the servers they communicated with, how often communication went in and out, and general IoT traffic patterns.
+
+The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).
+
+While they represented only 8 percent of the devices, data-collection terminals generated 80 percent of the traffic.
+
+The breakdown is that 18 percent of the IoT devices use SSL to communicate all the time; of the remaining 82 percent, half used it part of the time and half never used it.
+
+The study also found cases of plaintext HTTP being used to authenticate devices and to update software and firmware, as well as use of outdated crypto libraries and weak default credentials.
+
+While IoT devices are common in enterprises, “many of the devices are employee owned, and this is just one of the reasons they are a security concern,” the report says. Without strict policies and enforcement, these devices represent potential vulnerabilities.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
+
+Another reason employee-owned IoT devices are a concern is that many businesses don’t consider them a threat because no data is stored on them. But if the data they gather is transmitted insecurely, it is at risk.
+
+### 5 tips to protect enterprise IoT
+
+Zscaler recommends these security precautions:
+
+ * Change default credentials to something more secure. As employees bring in devices, encourage them to use strong passwords and to keep their firmware current.
+ * Isolate IoT devices on networks and restrict inbound and outbound network traffic.
+ * Restrict access to IoT devices from external networks and block unnecessary ports from external access.
+ * Apply regular security and firmware updates to IoT devices, and secure network traffic.
+ * Deploy tools to gain visibility of shadow-IoT devices already inside the network so they can be protected.
+
+
+
+**More on IoT:**
+
+ * [What is edge computing and how it’s changing the network][7]
+ * [Most powerful Internet of Things companies][8]
+ * [10 Hot IoT startups to watch][9]
+ * [The 6 ways to make money in IoT][10]
+ * [What is digital twin technology? [and why it matters]][11]
+ * [Blockchain, service-centric networking key to IoT success][12]
+ * [Getting grounded in IoT networking and security][13]
+ * [Building IoT-ready networks must become a priority][14]
+ * [What is the Industrial IoT? [And why the stakes are so high]][15]
+
+
+
+Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html
+
+作者:[Tim Greene][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Tim-Greene/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/network_security_network_traffic_scanning_by_hywards_gettyimages-673891964_2400x1600-100796830-large.jpg
+[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3045953/5-things-you-need-to-know-about-ssl.html
+[4]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
+[5]: https://www.networkworld.com/newsletters/signup.html
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[7]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[13]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[14]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[15]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[16]: https://www.facebook.com/NetworkWorld/
+[17]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190528 Analysing D Code with KLEE.md b/sources/talk/20190528 Analysing D Code with KLEE.md
new file mode 100644
index 0000000000..c93f6e2b8d
--- /dev/null
+++ b/sources/talk/20190528 Analysing D Code with KLEE.md
@@ -0,0 +1,680 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Analysing D Code with KLEE)
+[#]: via: (https://theartofmachinery.com/2019/05/28/d_and_klee.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Analysing D Code with KLEE
+======
+
+[KLEE][1] is a symbolic execution engine that can rigorously verify or find bugs in software. It’s designed for C and C++, but it’s just an interpreter for LLVM bitcode combined with theorem prover backends, so it can work with bitcode generated by `ldc2`. One catch is that it needs a compatible bitcode port of the D runtime to run normal D code. I’m still interested in getting KLEE to work with normal D code, but for now I’ve done some experiments with `-betterC` D.
+
+### How KLEE works
+
+What makes KLEE special is its support for two kinds of variables: concrete and symbolic. Concrete variables are just like the normal variables in normal code: they have a deterministic value at any given point in the program. On the other hand, symbolic variables contain a bundle of logical constraints instead of values. Take this code:
+
+```
+int x = klee_int("x");
+klee_assume(x >= 0);
+if (x > 42)
+{
+ doA(x);
+}
+else
+{
+ doB(x);
+ assert (3 * x != 21);
+}
+```
+
+`klee_int("x")` creates a symbolic integer that will be called “`x`” in output reports. Initially it has no constraints and can take any value that a 32b signed integer can have. `klee_assume(x >= 0)` tells KLEE to add `x >= 0` as a constraint, so now we’re only analysing the code for non-negative 32b signed integers. On hitting the `if`, KLEE checks if both branches are possible. Sure enough, `x > 42` can be true or false even with the constraint `x >= 0`, so KLEE has to _fork_. We now have two processes being interpreted on the VM: one executing `doA()` while `x` holds the constraints `x >= 0, x > 42`, and another executing `doB()` while `x` holds the constraints `x >= 0, x <= 42`. The second process will hit the `assert` statement, and KLEE will try to prove or disprove `3 * x != 21` using the assumptions `x >= 0, x <= 42` — in this case it will disprove it and report a bug with `x = 7` as a crashing example.
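KLEE hands each path's constraint set to an SMT solver (Z3 in the runs below). Conceptually, disproving `3 * x != 21` under the path constraints just means finding a value that satisfies the constraints but violates the assertion. A brute-force Python sketch of that query (illustrative only; a real solver reasons over the constraints symbolically instead of enumerating values):

```python
# Illustrative sketch of the query KLEE poses on the doB() path.
# Path constraints: x >= 0 and x <= 42. Assertion: 3 * x != 21.
def find_counterexample():
    """Return an x satisfying the constraints but violating the assertion."""
    for x in range(0, 43):        # enumerate x >= 0, x <= 42
        if not (3 * x != 21):     # does the assertion fail for this x?
            return x
    return None                   # assertion holds on every value

print(find_counterexample())  # 7, the same crashing example KLEE reports
```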
+
+### First steps
+
+Here’s a toy example just to get things working. Suppose we have a function that makes an assumption for a performance optimisation. Thankfully the assumption is made explicit with `assert` and is documented with a comment. Is the assumption valid?
+
+```
+int foo(int x)
+{
+ // 17 is a prime number, so let's use it as a sentinel value for an awesome optimisation
+ assert (x * x != 17);
+ // ...
+ return x;
+}
+```
+
+Here’s a KLEE test rig. The KLEE function declarations and the `main()` entry point need to have `extern(C)` linkage, but anything else can be normal D code as long as it compiles under `-betterC`:
+
+```
+extern(C):
+
+int klee_int(const(char*) name);
+
+int main()
+{
+ int x = klee_int("x");
+ foo(x);
+ return 0;
+}
+```
+
+It turns out there’s just one (frustrating) complication with running `-betterC` D under KLEE. In D, `assert` is handled specially by the compiler. By default, it throws an `Error`, but for compatibility with KLEE, I’m using the `-checkaction=C` flag. In C, `assert` is usually a macro that translates to code that calls some backend implementation. That implementation isn’t standardised, so of course various C libraries work differently. `ldc2` actually has built-in logic for implementing `-checkaction=C` correctly depending on the C library used.
+
+KLEE uses a port of [uClibc][2], which translates `assert()` to a four-parameter `__assert()` function, which conflicts with the three-parameter `__assert()` function in other implementations. `ldc2` uses LLVM’s (target) `Triple` type for choosing an `assert()` implementation configuration, but that doesn’t recognise uClibc. As a hacky workaround, I’m telling `ldc2` to compile for Musl, which “tricks” it into using an `__assert_fail()` implementation that KLEE happens to support as well. I’ve opened [an issue report][3].
+
+Anyway, if we put all that code above into a file, we can compile it to KLEE-ready bitcode like this:
+
+```
+ldc2 -g -checkaction=C -mtriple=x86_64-linux-musl -output-bc -betterC -c first.d
+```
+
+`-g` is optional, but adds debug information that can be useful for later analysis. The KLEE developers recommend disabling compiler optimisations and letting KLEE do its own optimisations instead.
+
+Now to run KLEE:
+
+```
+$ klee first.bc
+KLEE: output directory is "/tmp/klee-out-1"
+KLEE: Using Z3 solver backend
+warning: Linking two modules of different target triples: klee_int.bc' is 'x86_64-pc-linux-gnu' whereas 'first.bc' is 'x86_64--linux-musl'
+
+KLEE: ERROR: first.d:4: ASSERTION FAIL: x * x != 17
+KLEE: NOTE: now ignoring this error at this location
+
+KLEE: done: total instructions = 35
+KLEE: done: completed paths = 2
+KLEE: done: generated tests = 2
+```
+
+Straight away, KLEE has found two execution paths through the program: a happy path, and a path that fails the assertion. Let’s see the results:
+
+```
+$ ls klee-last/
+assembly.ll
+info
+messages.txt
+run.istats
+run.stats
+run.stats-journal
+test000001.assert.err
+test000001.kquery
+test000001.ktest
+test000002.ktest
+warnings.txt
+```
+
+Here’s the example that triggers the happy path:
+
+```
+$ ktest-tool klee-last/test000002.ktest
+ktest file : 'klee-last/test000002.ktest'
+args : ['first.bc']
+num objects: 1
+object 0: name: 'x'
+object 0: size: 4
+object 0: data: b'\x00\x00\x00\x00'
+object 0: hex : 0x00000000
+object 0: int : 0
+object 0: uint: 0
+object 0: text: ....
+```
+
+Here’s the example that causes an assertion error:
+
+```
+$ cat klee-last/test000001.assert.err
+Error: ASSERTION FAIL: x * x != 17
+File: first.d
+Line: 4
+assembly.ll line: 32
+Stack:
+ #000000032 in _D5first3fooFiZi () at first.d:4
+ #100000055 in main (=1, =94262044506880) at first.d:16
+$ ktest-tool klee-last/test000001.ktest
+ktest file : 'klee-last/test000001.ktest'
+args : ['first.bc']
+num objects: 1
+object 0: name: 'x'
+object 0: size: 4
+object 0: data: b'\xe9&\xd33'
+object 0: hex : 0xe926d333
+object 0: int : 869476073
+object 0: uint: 869476073
+object 0: text: .&.3
+```
+
+So, KLEE has deduced that when `x` is 869476073, `x * x` does a 32b overflow to 17 and breaks the code.
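The wraparound is easy to confirm independently; a quick Python check of the same 32-bit arithmetic:

```python
# Confirm KLEE's counterexample: 869476073 squared, reduced to its
# low 32 bits (what a 32b int keeps after overflow), is exactly 17.
x = 869476073
wrapped = (x * x) & 0xFFFFFFFF
print(wrapped)  # 17
```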
+
+It’s overkill for this simple example, but `run.istats` can be opened with [KCachegrind][4] to view things like call graphs and source code coverage. (Unfortunately, coverage stats can be misleading because correct code won’t ever hit boundary check code inserted by the compiler.)
+
+### MurmurHash preimage
+
+Here’s a slightly more useful example. D currently uses 32b MurmurHash3 as its standard non-cryptographic hash function. What if we want to find strings that hash to a given special value? In general, we can solve problems like this by asserting that something doesn’t exist (i.e., a string that hashes to a given value) and then challenging the theorem prover to prove us wrong with a counterexample.
+
+Unfortunately, we can’t just use `hashOf()` directly without the runtime, but we can copy [the hash code from the runtime source][5] into its own module, and then import it into a test rig like this:
+
+```
+import dhash;
+
+extern(C):
+
+void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
+int klee_assume(ulong condition);
+
+int main()
+{
+ // Create a buffer for 8-letter strings and let KLEE manage it symbolically
+ char[8] s;
+ klee_make_symbolic(s.ptr, s.sizeof, "s");
+
+ // Constrain the string to be letters from a to z for convenience
+ foreach (j; 0..s.length)
+ {
+ klee_assume(s[j] > 'a' && s[j] <= 'z');
+ }
+
+ assert (dHash(cast(ubyte[])s) != 0xdeadbeef);
+ return 0;
+}
+```
+
+Here’s how to compile and run it. Because we’re not checking correctness, we can use `-boundscheck=off` for a slight performance boost. It’s also worth enabling KLEE’s optimiser.
+
+```
+$ ldc2 -g -boundscheck=off -checkaction=C -mtriple=x86_64-linux-musl -output-bc -betterC -c dhash.d dhash_klee.d
+$ llvm-link -o dhash_test.bc dhash.bc dhash_klee.bc
+$ klee -optimize dhash_test.bc
+```
+
+It takes just over 4s:
+
+```
+$ klee-stats klee-last/
+-------------------------------------------------------------------------
+| Path | Instrs| Time(s)| ICov(%)| BCov(%)| ICount| TSolver(%)|
+-------------------------------------------------------------------------
+|klee-last/| 168| 4.37| 87.50| 50.00| 160| 99.95|
+-------------------------------------------------------------------------
+```
+
+And it actually works:
+
+```
+$ ktest-tool klee-last/test000001.ktest
+ktest file : 'klee-last/test000001.ktest'
+args : ['dhash_test.bc']
+num objects: 1
+object 0: name: 's'
+object 0: size: 8
+object 0: data: b'psgmdxvq'
+object 0: hex : 0x7073676d64787671
+object 0: int : 8175854546265273200
+object 0: uint: 8175854546265273200
+object 0: text: psgmdxvq
+$ rdmd --eval 'writef("%x\n", hashOf("psgmdxvq"));'
+deadbeef
+```
+
+For comparison, here’s a simple brute force version in plain D:
+
+```
+import std.stdio;
+
+void main()
+{
+ char[8] buffer;
+
+ bool find(size_t idx)
+ {
+ if (idx == buffer.length)
+ {
+ auto hash = hashOf(buffer[]);
+ if (hash == 0xdeadbeef)
+ {
+ writeln(buffer[]);
+ return true;
+ }
+ return false;
+ }
+
+ foreach (char c; 'a'..'z')
+ {
+ buffer[idx] = c;
+ auto is_found = find(idx + 1);
+ if (is_found) return true;
+ }
+
+ return false;
+ }
+
+ find(0);
+}
+```
+
+This takes ~17s:
+
+```
+$ ldc2 -O3 -boundscheck=off hash_brute.d
+$ time ./hash_brute
+aexkaydh
+
+real 0m17.398s
+user 0m17.397s
+sys 0m0.001s
+$ rdmd --eval 'writef("%x\n", hashOf("aexkaydh"));'
+deadbeef
+```
+
+The constraint solver implementation is simpler to write, but is still faster because it can automatically do smarter things than calculating hashes of strings from scratch every iteration.
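The same pattern (assert that no preimage exists, then demand a counterexample) works with any backend that can search the input space. As a toy illustration in Python, using a made-up 16-bit hash rather than MurmurHash, brute force plays the role of the theorem prover:

```python
from itertools import product

def toy_hash(s):
    # A made-up 16-bit multiplicative hash, for illustration only.
    h = 0
    for byte in s.encode():
        h = (h * 31 + byte) & 0xFFFF
    return h

target = toy_hash("klee")  # a target known to have a 4-letter preimage

# The "assertion" is that no 4-letter lowercase preimage exists;
# the search refutes it by producing a witness.
witness = next(
    "".join(chars)
    for chars in product("abcdefghijklmnopqrstuvwxyz", repeat=4)
    if toy_hash("".join(chars)) == target
)
print(witness, toy_hash(witness) == target)
```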
+
+### Binary search
+
+Now for an example of testing and debugging. Here’s an implementation of [binary search][6]:
+
+```
+bool bsearch(const(int)[] haystack, int needle)
+{
+ while (haystack.length)
+ {
+ auto mid_idx = haystack.length / 2;
+ if (haystack[mid_idx] == needle) return true;
+ if (haystack[mid_idx] < needle)
+ {
+ haystack = haystack[mid_idx..$];
+ }
+ else
+ {
+ haystack = haystack[0..mid_idx];
+ }
+ }
+ return false;
+}
+```
+
+Does it work? Here’s a test rig:
+
+```
+extern(C):
+
+void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
+int klee_range(int begin, int end, const(char*) name);
+int klee_assume(ulong condition);
+
+int main()
+{
+ // Making an array arr and an x to find in it.
+ // This time we'll also parameterise the array length.
+ // We have to apply klee_make_symbolic() to the whole buffer because of limitations in KLEE.
+ int[8] arr_buffer;
+ klee_make_symbolic(arr_buffer.ptr, arr_buffer.sizeof, "a");
+ int len = klee_range(0, arr_buffer.length+1, "len");
+ auto arr = arr_buffer[0..len];
+ // Keeping the values in [0, 32) makes the output easier to read.
+ // (The binary-friendly limit 32 is slightly more efficient than 30.)
+ int x = klee_range(0, 32, "x");
+ foreach (j; 0..arr.length)
+ {
+ klee_assume(arr[j] >= 0);
+ klee_assume(arr[j] < 32);
+ }
+
+ // Make the array sorted.
+ // We don't have to actually sort the array.
+ // We can just tell KLEE to constrain it to be sorted.
+ foreach (j; 1..arr.length)
+ {
+ klee_assume(arr[j - 1] <= arr[j]);
+ }
+
+ // Test against simple linear search
+ bool has_x = false;
+ foreach (a; arr[])
+ {
+ has_x |= a == x;
+ }
+
+ assert (bsearch(arr, x) == has_x);
+
+ return 0;
+}
+```
+
+When run in KLEE, it keeps running for a long, long time. How do we know it’s doing anything? By default KLEE writes stats every 1s, so we can watch the live progress in another terminal:
+
+```
+$ watch klee-stats --print-more klee-last/
+Every 2.0s: klee-stats --print-more klee-last/
+
+---------------------------------------------------------------------------------------------------------------------
+| Path | Instrs| Time(s)| ICov(%)| BCov(%)| ICount| TSolver(%)| States| maxStates| Mem(MB)| maxMem(MB)|
+---------------------------------------------------------------------------------------------------------------------
+|klee-last/| 5834| 637.27| 79.07| 68.75| 172| 100.00| 22| 22| 24.51| 24|
+---------------------------------------------------------------------------------------------------------------------
+```
+
+`bsearch()` should be pretty fast, so we should see KLEE discovering new states rapidly. But instead it seems to be stuck. [At least one fork of KLEE has heuristics for detecting infinite loops][7], but plain KLEE doesn’t. There are timeout and batching options for making KLEE work better with code that might have infinite loops, but let’s just take another look at the code. In particular, the loop condition:
+
+```
+while (haystack.length)
+{
+ // ...
+}
+```
+
+Binary search is supposed to reduce the search space by about half each iteration. `haystack.length` is an unsigned integer, so the loop must terminate as long as it goes down every iteration. Let’s rewrite the code slightly so we can verify if that’s true:
+
+```
+bool bsearch(const(int)[] haystack, int needle)
+{
+ while (haystack.length)
+ {
+ auto mid_idx = haystack.length / 2;
+ if (haystack[mid_idx] == needle) return true;
+ const(int)[] next_haystack;
+ if (haystack[mid_idx] < needle)
+ {
+ next_haystack = haystack[mid_idx..$];
+ }
+ else
+ {
+ next_haystack = haystack[0..mid_idx];
+ }
+ // This lets us verify that the search terminates
+ assert (next_haystack.length < haystack.length);
+ haystack = next_haystack;
+ }
+ return false;
+}
+```
+
+Now KLEE can find the bug!
+
+```
+$ klee -optimize bsearch.bc
+KLEE: output directory is "/tmp/klee-out-2"
+KLEE: Using Z3 solver backend
+warning: Linking two modules of different target triples: klee_range.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'
+
+warning: Linking two modules of different target triples: memset.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'
+
+KLEE: ERROR: bsearch.d:18: ASSERTION FAIL: next_haystack.length < haystack.length
+KLEE: NOTE: now ignoring this error at this location
+
+KLEE: done: total instructions = 2281
+KLEE: done: completed paths = 42
+KLEE: done: generated tests = 31
+```
+
+Using the failing example as input and stepping through the code, it’s easy to find the problem:
+
+```
+/// ...
+if (haystack[mid_idx] < needle)
+{
+ // If mid_idx == 0, next_haystack is the same as haystack
+ // Nothing changes, so the loop keeps repeating
+ next_haystack = haystack[mid_idx..$];
+}
+/// ...
+```
+
+Thinking about it, the `if` statement already excludes `haystack[mid_idx]` from being `needle`, so there’s no reason to include it in `next_haystack`. Here’s the fix:
+
+```
+// The +1 matters
+next_haystack = haystack[mid_idx+1..$];
+```
+
+But is the code correct now? Terminating isn’t enough; it needs to get the right answer, of course.
+
+```
+$ klee -optimize bsearch.bc
+KLEE: output directory is "/tmp/klee-out-3"
+KLEE: Using Z3 solver backend
+warning: Linking two modules of different target triples: klee_range.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'
+
+warning: Linking two modules of different target triples: memset.bc' is 'x86_64-pc-linux-gnu' whereas 'bsearch.bc' is 'x86_64--linux-musl'
+
+KLEE: done: total instructions = 3152
+KLEE: done: completed paths = 81
+KLEE: done: generated tests = 81
+```
+
+In just under 7s, KLEE has verified every possible execution path reachable with arrays of length from 0 to 8. Note, that’s not just coverage of individual code lines, but coverage of full pathways through the code. KLEE hasn’t ruled out stack corruption or integer overflows with large arrays, but I’m pretty confident the code is correct now.
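The same exhaustive guarantee can be reproduced without KLEE by brute-forcing every case directly. Here is a Python sketch (not the article's D code) of the fixed algorithm with a scaled-down exhaustive check, using the linear-search oracle from the test rig:

```python
from itertools import combinations_with_replacement

def bsearch(haystack, needle):
    # Port of the fixed implementation: note the mid_idx + 1.
    while haystack:
        mid_idx = len(haystack) // 2
        if haystack[mid_idx] == needle:
            return True
        if haystack[mid_idx] < needle:
            haystack = haystack[mid_idx + 1:]
        else:
            haystack = haystack[:mid_idx]
    return False

# Exhaustively compare against linear search over every sorted array
# of length 0..4 with values in [0, 8), and every needle in [0, 8).
for length in range(5):
    for arr in combinations_with_replacement(range(8), length):
        for x in range(8):
            assert bsearch(list(arr), x) == (x in arr)
print("all cases pass")
```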
+
+KLEE has generated test cases that trigger each path, which we can keep and use as a faster-than-7s regression test suite. Trouble is, the output from KLEE loses all type information and isn’t in a convenient format:
+
+```
+$ ktest-tool klee-last/test000042.ktest
+ktest file : 'klee-last/test000042.ktest'
+args : ['bsearch.bc']
+num objects: 3
+object 0: name: 'a'
+object 0: size: 32
+object 0: data: b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
+object 0: hex : 0x0000000000000000000000000000000001000000100000000000000000000000
+object 0: text: ................................
+object 1: name: 'x'
+object 1: size: 4
+object 1: data: b'\x01\x00\x00\x00'
+object 1: hex : 0x01000000
+object 1: int : 1
+object 1: uint: 1
+object 1: text: ....
+object 2: name: 'len'
+object 2: size: 4
+object 2: data: b'\x06\x00\x00\x00'
+object 2: hex : 0x06000000
+object 2: int : 6
+object 2: uint: 6
+object 2: text: ....
+```
+
+But we can write our own pretty-printing code and put it at the end of the test rig:
+
+```
+char[256] buffer;
+char* output = buffer.ptr;
+output += sprintf(output, "TestCase([");
+foreach (a; arr[])
+{
+ output += sprintf(output, "%d, ", klee_get_value_i32(a));
+}
+sprintf(output, "], %d, %s),\n", klee_get_value_i32(x), klee_get_value_i32(has_x) ? "true".ptr : "false".ptr);
+fputs(buffer.ptr, stdout);
+```
+
+Ugh, in full D that would be just one format call using the `%(` array formatting specs. The output needs to be buffered up and printed all at once to stop output from different parallel executions getting mixed up. `klee_get_value_i32()` is needed to get a concrete example from a symbolic variable (remember that a symbolic variable is just a bundle of constraints).
+
+```
+$ klee -optimize bsearch.bc > tests.d
+...
+$ # Sure enough, 81 test cases
+$ wc -l tests.d
+81 tests.d
+$ # Look at the first 10
+$ head tests.d
+TestCase([], 0, false),
+TestCase([0, ], 0, true),
+TestCase([16, ], 1, false),
+TestCase([0, ], 1, false),
+TestCase([0, 0, ], 0, true),
+TestCase([0, 0, ], 1, false),
+TestCase([1, 16, ], 1, true),
+TestCase([0, 0, 0, ], 0, true),
+TestCase([16, 16, ], 1, false),
+TestCase([1, 16, ], 3, false),
+```
+
+Nice! An autogenerated regression test suite that’s better than anything I would write by hand. This is my favourite use case for KLEE.
+
+### Change counting
+
+One last example:
+
+In Australia, coins come in 5c, 10c, 20c, 50c, $1 (100c) and $2 (200c) denominations. So you can make 70c using 14 5c coins, or using a 50c coin and a 20c coin. Obviously, fewer coins is usually more convenient. There’s a simple [greedy algorithm][8] to make a small pile of coins that adds up to a given value: just keep adding the biggest coin you can to the pile until you’ve reached the target value. It turns out this trick is optimal — at least for Australian coins. Is it always optimal for any set of coin denominations?
+
+The hard thing about testing optimality is that you don’t know what the correct optimal values are without a known-good algorithm. Without a constraints solver, I’d compare the output of the greedy algorithm with some obviously correct brute force optimiser, run over all possible cases within some small-enough limit. But with KLEE, we can use a different approach: comparing the greedy solution to a non-deterministic solution.
+
+The greedy algorithm takes the list of coin denominations and the target value as input, so (like in the previous examples) we make those symbolic. Then we make another symbolic array that represents an assignment of coin counts to each coin denomination. We don’t specify anything about how to generate this assignment, but we constrain it to be a valid assignment that adds up to the target value. It’s [non-deterministic][9]. Then we just assert that the total number of coins in the non-deterministic assignment is at least the number of coins needed by the greedy algorithm, which would be true if the greedy algorithm were universally optimal. Finally we ask KLEE to prove the program correct or incorrect.
+
+Here’s the code:
+
+```
+// Greedily break value into coins of values in denominations
+// denominations must be in strictly decreasing order
+int greedy(const(int[]) denominations, int value, int[] coins_used_output)
+{
+ int num_coins = 0;
+ foreach (j; 0..denominations.length)
+ {
+ int num_to_use = value / denominations[j];
+ coins_used_output[j] = num_to_use;
+ num_coins += num_to_use;
+ value = value % denominations[j];
+ }
+ return num_coins;
+}
+
+extern(C):
+
+void klee_make_symbolic(void* addr, size_t nbytes, const(char*) name);
+int klee_int(const(char*) name);
+int klee_assume(ulong condition);
+int klee_get_value_i32(int expr);
+
+int main(int argc, char** argv)
+{
+ enum kNumDenominations = 6;
+ int[kNumDenominations] denominations, coins_used;
+ klee_make_symbolic(denominations.ptr, denominations.sizeof, "denominations");
+
+ // We're testing the algorithm itself, not implementation issues like integer overflow
+ // Keep values small
+ foreach (d; denominations)
+ {
+ klee_assume(d >= 1);
+ klee_assume(d <= 1024);
+ }
+ // Make the smallest denomination 1 so that all values can be represented
+ // This is just for simplicity so we can focus on optimality
+ klee_assume(denominations[$-1] == 1);
+
+ // Greedy algorithm expects values in descending order
+ foreach (j; 1..denominations.length)
+ {
+ klee_assume(denominations[j-1] > denominations[j]);
+ }
+
+ // What we're going to represent
+ auto value = klee_int("value");
+
+ auto num_coins = greedy(denominations[], value, coins_used[]);
+
+ // The non-deterministic assignment
+ int[kNumDenominations] nd_coins_used;
+ klee_make_symbolic(nd_coins_used.ptr, nd_coins_used.sizeof, "nd_coins_used");
+
+ int nd_num_coins = 0, nd_value = 0;
+ foreach (j; 0..kNumDenominations)
+ {
+ klee_assume(nd_coins_used[j] >= 0);
+ klee_assume(nd_coins_used[j] <= 1024);
+ nd_num_coins += nd_coins_used[j];
+ nd_value += nd_coins_used[j] * denominations[j];
+ }
+
+ // Making the assignment valid is 100% up to KLEE
+ klee_assume(nd_value == value);
+
+ // If we find a counterexample, dump it and fail
+ if (nd_num_coins < num_coins)
+ {
+ import core.stdc.stdio;
+
+ puts("Counterexample found.");
+
+ puts("Denominations:");
+ foreach (ref d; denominations)
+ {
+ printf("%d ", klee_get_value_i32(d));
+ }
+ printf("\nValue: %d\n", klee_get_value_i32(value));
+
+ void printAssignment(const ref int[kNumDenominations] coins)
+ {
+ foreach (j; 0..kNumDenominations)
+ {
+ printf("%d * %dc\n", klee_get_value_i32(coins[j]), klee_get_value_i32(denominations[j]));
+ }
+ }
+
+ printf("Greedy \"optimum\": %d\n", klee_get_value_i32(num_coins));
+ printAssignment(coins_used);
+
+ printf("Better assignment for %d total coins:\n", klee_get_value_i32(nd_num_coins));
+ printAssignment(nd_coins_used);
+ assert (false);
+ }
+
+ return 0;
+}
+```
+
+And here’s the counterexample it found after 14s:
+
+```
+Counterexample found.
+Denominations:
+129 12 10 3 2 1
+Value: 80
+Greedy "optimum": 9
+0 * 129c
+6 * 12c
+0 * 10c
+2 * 3c
+1 * 2c
+0 * 1c
+Better assignment for 8 total coins:
+0 * 129c
+0 * 12c
+8 * 10c
+0 * 3c
+0 * 2c
+0 * 1c
+```
+
+Note that this isn’t proven to be the new optimum; it’s just a witness that the greedy algorithm isn’t always optimal. There’s a well-known [dynamic programming][10] [solution][11] that always works.
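The dynamic-programming approach referenced above is straightforward to sketch (a standard unbounded coin-change DP in Python, not code from the article). Run on KLEE's counterexample denominations it does even better than the eight-coin witness, finding a seven-coin assignment (five 12c coins plus two 10c coins):

```python
def min_coins(denominations, value):
    # Classic unbounded coin-change DP: best[v] is the fewest coins
    # from `denominations` that sum to exactly v.
    INF = float("inf")
    best = [0] + [INF] * value
    for v in range(1, value + 1):
        for d in denominations:
            if d <= v and best[v - d] + 1 < best[v]:
                best[v] = best[v - d] + 1
    return best[value]

# KLEE's counterexample: greedy uses 9 coins for 80c; the true optimum is 7.
print(min_coins([129, 12, 10, 3, 2, 1], 80))  # 7
```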
+
+### What’s next?
+
+As I said, I’m interested in getting this to work with full D code. I’m also interested in using [one of the floating point forks of KLEE][12] on some D code because floating point is much harder to test thoroughly than integer and string code.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/05/28/d_and_klee.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://klee.github.io/
+[2]: https://www.uclibc.org/
+[3]: https://github.com/ldc-developers/ldc/issues/3078
+[4]: https://kcachegrind.github.io/html/Home.html
+[5]: https://github.com/dlang/druntime/blob/4ad638f61a9b4a98d8ed6eb9f9429c0ef6afc8e3/src/core/internal/hash.d#L670
+[6]: https://www.calhoun.io/lets-learn-algorithms-an-intro-to-binary-search/
+[7]: https://github.com/COMSYS/SymbolicLivenessAnalysis
+[8]: https://en.wikipedia.org/wiki/Greedy_algorithm
+[9]: http://people.clarkson.edu/~alexis/PCMI/Notes/lectureB03.pdf
+[10]: https://www.algorithmist.com/index.php/Dynamic_Programming
+[11]: https://www.topcoder.com/community/competitive-programming/tutorials/dynamic-programming-from-novice-to-advanced/
+[12]: https://github.com/srg-imperial/klee-float
diff --git a/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md
new file mode 100644
index 0000000000..026b5d8e81
--- /dev/null
+++ b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Managed WAN and the cloud-native SD-WAN)
+[#]: via: (https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Managed WAN and the cloud-native SD-WAN
+======
+The motivation for WAN transformation is clear: today, organizations require improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs.
+![Gerd Altmann \(CC0\)][1]
+
+In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud presence across on-premises data centers and remote site locations.
+
+The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.
+
+A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.
+
+**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2]**
+
+### Motivations for transforming the WAN
+
+The motivation for WAN transformation is clear: today, organizations prefer improved internet access and last-mile connectivity, plus additional bandwidth, along with a reduction in WAN costs. Replacing Multiprotocol Label Switching (MPLS) with SD-WAN has of course been the main driver of the SD-WAN evolution, but it is only a single piece of the jigsaw puzzle.
+
+Many SD-WAN vendors are quickly brought to their knees when they try to address security and gain direct internet access from remote site locations. The problem is how to ensure optimized cloud access that is secure, offers improved visibility, and delivers predictable performance without the high costs associated with MPLS. SD-WAN is not just about connecting locations. Primarily, it needs to combine many other important network and security elements into one seamless worldwide experience.
+
+According to a recent report from [Cato Networks][3] surveying enterprise IT managers, a staggering 85% will confront use cases in 2019 that are poorly addressed or outright ignored by SD-WAN. Examples include providing secure internet access from any location (50%) and improving visibility into and control over mobile access to cloud applications, such as Office 365 (46%).
+
+### Issues with traditional SD-WAN vendors
+
+First and foremost, SD-WAN is unable to address the security challenges that arise during the WAN transformation. Such security challenges include protecting against malware and ransomware and implementing the necessary security policies. Besides, there is a lack of the visibility required to police mobile users and remote site locations accessing resources in the public cloud.
+
+To combat this, organizations have to purchase additional equipment. There has always been and will always be a high cost associated with buying such security appliances. Furthermore, the additional tools that are needed to protect the remote site locations increase the network complexity and reduce visibility. Let’s not forget that the variety of physical appliances requires talented engineers for design, deployment and maintenance.
+
+There will often be a single network cowboy. This means the network and security configuration, along with the design essentials, are stored in the mind of the engineer, not in a central database from which the knowledge can be accessed if the engineer leaves his or her employment.
+
+The physical appliance approach to SD-WAN makes it hard, if not impossible, to accommodate the future. If the current SD-WAN vendors continue to focus just on connecting devices with physical appliances, they will have limited ability to accommodate, for example, the future of networked IoT devices. With these factors in mind, what are the available options to overcome the SD-WAN shortcomings?
+
+One can opt for a do-it-yourself (DIY) solution or a managed service, which can fall into the category of telcos, with improvements offered by either co-managed or self-managed service categories.
+
+### Option 1: The DIY solution
+
+First, the DIY solution: from the experience of trying to stitch together a global network, this is not only costly but also complex, and it is a very constrained approach to network transformation. We started with physical appliances decades ago, and they were sufficient to an extent. The reason they worked was that they suited the requirements of the time, but our environment has changed since then. Hence, we need to reconcile those changes with the current requirements.
+
+Even back in those days, we always had a breachable perimeter. The perimeter-approach to networking and security never really worked and it was just a matter of time before the bad actor would penetrate the guarded walls.
+
+Securing a global network involves more than just firewalling the devices. A solid security perimeter requires URL filtering, anti-malware and IPS to secure the internet traffic. If you try to deploy all these functions in a single device, such as a unified threat management (UTM) appliance, you will hit scaling problems. As a result, you will be left with appliance sprawl.
+
+Back in my early days as an engineer, I recall stitching together a global network with a mixture of security and network appliances from a variety of vendors. It was me and just two others who used to get the job done on time and for a production network, our uptime levels were superior to most.
+
+However, it involved too many late nights, daily flights to our PoPs and of course the major changes required a forklift. A lot of work had to be done at that time, which made me want to push some or most of the work to a 3rd party.
+
+### Option 2: The managed service solution
+
+Today, there is a growing need for the managed service approach to SD-WAN. Notably, it simplifies the network design, deployment and maintenance activities while offloading the complexity, in line with what most CIOs are talking about today.
+
+Managed service provides a number of benefits, such as the elimination of backhauling to centralized cloud connectors or VPN concentrators. Evidently, backhauling is never favored by a network architect. More often than not, it will result in increased latency, congested links, internet chokepoints, and last-mile outages.
+
+Managed service can also authenticate mobile users at the local communication hub rather than at a centralized point, which would increase latency. So what options are available when considering a managed service?
+
+### Telcos: An average service level
+
+Let’s be honest, telcos have a mixed track record and enterprises rely on them with caution. Essentially, you are building a network with 3rd party appliances and services that put the technical expertise outside of the organization.
+
+Secondly, the telco must orchestrate, monitor and manage numerous technical domains which are likely to introduce further complexity. As a result, troubleshooting requires close coordination with the suppliers which will have an impact on the customer experience.
+
+### Time equals money
+
+Resolving a query can easily take two or three attempts. It’s rare that you will get to the right person straight away. This eventually increases the time to resolve problems. Even for a minor feature change, you have to open tickets. Hence, telcos increase the time required to solve a problem.
+
+In addition, it takes time to make major network changes such as opening new locations, which could take up to 45 days. In the same report mentioned above, 71% of the respondents were frustrated with the telco customer-service time to resolve problems, 73% indicated that deploying new locations requires at least 15 days, and 47% claimed that “high bandwidth costs” are their biggest frustration when working with telcos.
+
+When it comes to lead times for projects, an engineer does not care. Does a project manager care if you have an optimum network design? No, many don’t; most just care about the timeframes. During my career, now spanning 18 years, I have never seen comments from any of my contacts saying “you must adhere to your project manager’s timelines”.
+
+However, in practice, project managers have their ways, and lead times do become a big part of your daily job. So as an engineer, a 45-day lead time will certainly hit your brand hard, especially if you are an external consultant.
+
+There is also a problem with bandwidth costs. Telcos need to charge more because of their complexity. There is always going to be a series of problems when working with them. Let’s face it, they offer an average service level.
+
+### Co-management and self-service management
+
+What is needed is a service that brings the visibility and control of DIY to managed services. This, ultimately, opens the door to co-management and self-service management.
+
+Co-management allows both the telco and the enterprise to make changes to the WAN. Then we have self-service management of the WAN, which gives enterprises sole control over aspects of their network.
+
+However, these are just sticking plasters covering up the flaws. We need a managed service that not only connects locations but also synthesizes the site connectivity, along with security, mobile access, and cloud access.
+
+### Introducing the cloud-native approach to SD-WAN
+
+There should be a new style of managed services that combines the best of both worlds. It should offer the uptime, predictability and reach of the best telcos along with the cost structure and versatility of cloud providers. All such requirements can be met by what is known as the cloud-native carrier.
+
+Therefore, we should be looking for a platform that can connect and secure all the users and resources at scale, no matter where they are positioned. Eventually, such a platform will limit the costs and increase the velocity and agility.
+
+This is what a cloud-native carrier can offer you. You could say it’s a new kind of managed service, which is what enterprises are now looking for. A cloud-native carrier service brings the best of cloud services to the world of networking. This new style of managed service brings to SD-WAN the global reach, self-service, and agility of the cloud with the ability to easily migrate from MPLS.
+
+In summary, a cloud-native carrier service will improve global connectivity to on-premises and cloud applications, enable secure branch to internet access, and both securely and optimally integrate cloud datacenters.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/03/network-wan-100713693-large.jpg
+[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
+[3]: https://www.catonetworks.com/news/digital-transformation-survey
+[4]: /contributor-network/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md
new file mode 100644
index 0000000000..8f6f46b6f2
--- /dev/null
+++ b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Moving to the Cloud? SD-WAN Matters!)
+[#]: via: (https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Moving to the Cloud? SD-WAN Matters!
+======
+
+![istock][1]
+
+This is the first in a two-part blog series that will explore how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. The focus for this installment will be on automating secure IPsec connectivity and intelligently steering traffic to cloud providers.
+
+Over the past several years we’ve seen a major shift in data center strategies where enterprise IT organizations are shifting applications and workloads to cloud, whether private or public. More and more, enterprises are leveraging software as-a-service (SaaS) applications and infrastructure as-a-service (IaaS) cloud services from leading providers like [Amazon AWS][3], [Google Cloud][4], [Microsoft Azure][5] and [Oracle Cloud Infrastructure][6]. This represents a dramatic shift in enterprise data traffic patterns as fewer and fewer applications are hosted within the walls of the traditional corporate data center.
+
+There are several drivers for the shift to IaaS cloud services and SaaS apps, but business agility tops the list for most enterprises. The traditional IT model for provisioning and deprovisioning applications is rigid and inflexible and is no longer able to keep pace with changing business needs.
+
+According to [LogicMonitor’s Cloud Vision 2020][7] study, more than 80 percent of enterprise workloads will run in the cloud by 2020 with more than 40 percent running on public cloud platforms. This major shift in the application consumption model is having a huge [impact on organizations and infrastructure][8]. A recent article entitled “[How Amazon Web Services is luring banks to the cloud][9],” published by CNBC, reported that some companies already have completely migrated all of their applications and IT workloads to public cloud infrastructures. An interesting fact is that while many enterprises must comply with stringent regulatory compliance mandates such as PCI-DSS or HIPAA, they still have made the move to the cloud. This tells us two things – the maturity of using public cloud services and the trust these organizations have in using them is at an all-time high. Again, it is all about speed and agility – without compromising performance, security and reliability.
+
+### **Is there a direct correlation between moving to the cloud and adopting SD-WAN?**
+
+As the cloud enables businesses to move faster, an SD-WAN architecture where top-down business intent is the driver is critical to ensuring success, especially when branch offices are geographically distributed across the globe. Traditional router-centric WAN architectures were never designed to support today’s cloud consumption model for applications in the most efficient way. With a conventional router-centric WAN approach, access to applications residing in the cloud means traversing unnecessary hops, resulting in wasted bandwidth, additional cost, added latency and potentially higher packet loss. In addition, under the existing, traditional WAN model where management tends to be rigid, complex network changes can be lengthy, whether setting up new branches or troubleshooting performance issues. This leads to inefficiencies and a costly operational model. Therefore, enterprises greatly benefit from taking a business-first WAN approach toward achieving greater agility in addition to realizing substantial CAPEX and OPEX savings.
+
+A business-driven SD-WAN platform is purpose-built to tackle the challenges inherent to the traditional router-centric model and more aptly support today’s cloud consumption model. This means application policies are defined based on business intent, connecting users securely and directly to applications wherever they reside without unnecessary extra hops or security compromises. For example, if the application is hosted in the cloud and is trusted, a business-driven SD-WAN can automatically connect users to it without backhauling traffic to a POP or HQ data center. Now, in general this traffic is usually going across an internet link which, on its own, may not be secure. However, the right SD-WAN platform will have a unified stateful firewall built in for local internet breakout, allowing only branch-initiated sessions to enter the branch and providing the ability to service chain traffic to a cloud-based security service if necessary, before forwarding it to its final destination. If the application is moved and becomes hosted by another provider, or perhaps moves back to a company’s own data center, traffic must be intelligently redirected to wherever the application is being hosted. Without automation and embedded machine learning, dynamic and intelligent traffic steering is impossible.
+
+### **A closer look at how the Silver Peak EdgeConnect™ SD-WAN edge platform addresses these challenges**
+
+**Automate traffic steering and connectivity to cloud providers**
+
+An [EdgeConnect][10] virtual instance is easily spun up in any of the [leading cloud providers][11] through their respective marketplaces. For an SD-WAN to intelligently steer traffic to its destination, it requires insights into both HTTP and HTTPS traffic; it must be able to identify apps on the first packet received in order to steer traffic to the right destination in accordance with business intent. This is a critical capability because once a TCP connection is NAT’d with a public IP address, it cannot be switched, and thus it can’t be re-routed once the connection is established. So, the ability of EdgeConnect to identify, classify and automatically steer traffic based on the first packet – and not the second or tenth packet – to the correct destination will assure application SLAs, minimize wasting expensive bandwidth and deliver the highest quality of experience.
+
+Another critical capability is automatic performance optimization. Irrespective of which link the traffic ends up traversing based on business intent and the unique requirements of the application, EdgeConnect automatically optimizes application performance without human intervention, correcting for out-of-order packets using Packet Order Correction (POC) even under high-latency conditions that can be related to distance or other issues. This is done using adaptive Forward Error Correction (FEC) and tunnel bonding, where a virtual tunnel is created, resulting in a single logical overlay across which traffic can be dynamically moved between the different paths as conditions change on each underlay WAN service. In this [lightboard video][12], Dinesh Fernando, a technical marketing engineer at Silver Peak, explains how EdgeConnect automates tunnel creation between sites and cloud providers, how it simplifies data transfers between multi-clouds, and how it improves application performance.
+
+If your business is global and increasingly dependent on the cloud, the business-driven EdgeConnect SD-WAN edge platform enables seamless multi-cloud connectivity, turning the network into a business accelerant. EdgeConnect delivers:
+
+ 1. A consistent deployment from the branch to the cloud, extending the reach of the SD-WAN into virtual private cloud environments
+ 2. Multi-cloud flexibility, making it easier to initiate and distribute resources across multiple cloud providers
+ 3. Investment protection by confidently migrating on-premises IT resources to any combination of the leading public cloud platforms, knowing their cloud-hosted instances will be fully supported by EdgeConnect
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/istock-899678028-100797709-large.jpg
+[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[3]: https://www.silver-peak.com/company/tech-partners/cloud/aws
+[4]: https://www.silver-peak.com/company/tech-partners/cloud/google-cloud
+[5]: https://www.silver-peak.com/company/tech-partners/cloud/microsoft-azure
+[6]: https://www.silver-peak.com/company/tech-partners/cloud/oracle-cloud
+[7]: https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/?utm_medium=pr&utm_source=businesswire&utm_campaign=cloudsurvey
+[8]: http://www.networkworld.com/article/3152024/lan-wan/in-the-age-of-digital-transformation-why-sd-wan-plays-a-key-role-in-the-transition.html
+[9]: http://www.cnbc.com/2016/11/30/how-amazon-web-services-is-luring-banks-to-the-cloud.html?__source=yahoo%257cfinance%257cheadline%257cheadline%257cstory&par=yahoo&doc=104135637
+[10]: https://www.silver-peak.com/products/unity-edge-connect
+[11]: https://www.silver-peak.com/company/tech-partners?strategic_partner_type=69
+[12]: https://www.silver-peak.com/resource-center/automate-connectivity-to-cloud-networking-with-sd-wan
diff --git a/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md
new file mode 100644
index 0000000000..383fac66ca
--- /dev/null
+++ b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Satellite-based internet possible by year-end, says SpaceX)
+[#]: via: (https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Satellite-based internet possible by year-end, says SpaceX
+======
+Amazon, Tesla-associated SpaceX and OneWeb are emerging as just some of the potential suppliers of a new kind of data-friendly satellite internet service that could bring broadband IoT connectivity to most places on Earth.
+![Getty Images][1]
+
+With SpaceX’s successful launch of an initial array of broadband-internet-carrying satellites last week, and Amazon’s surprising posting of numerous satellite engineering-related job openings on its [job board][2] this month, one might well be asking if the next-generation internet space race is finally getting going. (I first wrote about [OneWeb’s satellite internet plans][3], which it was concocting with Airbus, four years ago.)
+
+This new batch of satellite-driven internet systems, if they work and are eventually switched on, could provide broadband to most places, including previously internet-barren locations, such as rural areas. That would be good for high-bandwidth, low-latency remote-internet of things (IoT) and increasingly important edge-server connections for verticals like oil and gas and maritime. [Data could even end up getting stored in compliance-friendly outer space, too][4]. Leaky ground-based connections could, perhaps, also become a thing of the past.
+
+Of the principal new internet suppliers, SpaceX has gotten farthest along. That’s in part because it has commercial impetus. It needed to create payload for its numerous rocket projects. The Tesla electric-car-associated company (the two firms share materials science) has not only launched its first tranche of 60 satellites for its own internet constellation, called Starlink, but also successfully launched numerous batches (making up the full constellation of 75 satellites) for Iridium’s replacement, an upgraded constellation called Iridium NEXT.
+
+[The time of 5G is almost here][5]
+
+Potential competitor OneWeb launched its first six Airbus-built satellites in February. [It has plans for 900 more][6]. SpaceX has been approved for 4,365 more by the FCC, and Project Kuiper, as Amazon’s space internet project is known, wants to place 3,236 satellites in orbit, according to International Telecommunication Union filings [discovered by _GeekWire_][7] earlier this year. [Startup LeoSat, which I wrote about last year, aims to build an internet backbone constellation][8]. Facebook, too, is exploring [space-delivered internet][9].
+
+### Why the move to space?
+
+Laser technical progress, where data is sent in open, free space rather than via a restrictive, land-based cable or traditional radio paths, is partly behind this space-internet rush. “Bits travel faster in free space than in glass-fiber cable,” LeoSat explained last year. Improving microprocessor tech is also part of the mix.
+
+One important difference from existing older-generation satellite constellations is that this new generation of internet satellites will be located in low Earth orbit (LEO). Initial Starlink satellites will be placed at about 350 miles above Earth, with later launches deployed at 710 miles.
+
+There’s an advantage to that. Traditional satellites in geostationary orbit, or GSO, have been deployed about 22,000 miles up. That extra distance versus LEO introduces latency and is one reason earlier generations of Internet satellites are plagued by slow round-trip times. Latency didn’t matter when GSO was introduced in 1964, and commercial satellites, traditionally, have been pitched as one-way video links, such as are used by sporting events for broadcast, and not for data.
+
+And when will we get to experience these new ISPs? “Starlink is targeted to offer service in the Northern U.S. and Canadian latitudes after six launches,” [SpaceX says on its website][10]. Each launch would deliver about 60 satellites. “SpaceX is targeting two to six launches by the end of this year.”
+
+Global penetration of the “populated world” could be obtained after 24 launches, the company thinks.
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/network_iot_world-map_us_globe_nodes_global-100777483-large.jpg
+[2]: https://www.amazon.jobs/en/teams/projectkuiper
+[3]: https://www.itworld.com/article/2938652/space-based-internet-starts-to-get-serious.html
+[4]: https://www.networkworld.com/article/3200242/data-should-be-stored-data-in-space-firm-says.html
+[5]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[6]: https://www.airbus.com/space/telecommunications-satellites/oneweb-satellites-connection-for-people-all-over-the-globe.html
+[7]: https://www.geekwire.com/2019/amazon-lists-scores-jobs-bellevue-project-kuiper-broadband-satellite-operation/
+[8]: https://www.networkworld.com/article/3328645/space-data-backbone-gets-us-approval.html
+[9]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
+[10]: https://www.starlink.com/
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md
new file mode 100644
index 0000000000..9b65a6c8dd
--- /dev/null
+++ b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Survey finds SD-WANs are hot, but satisfaction with telcos is not)
+[#]: via: (https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+Survey finds SD-WANs are hot, but satisfaction with telcos is not
+======
+A recent survey of over 400 IT executives by Cato Networks found that legacy telcos might be on the outside looking in for SD-WANs.
+![istock][1]
+
+This week SD-WAN vendor Cato Networks announced the results of its [Telcos and the Future of the WAN in 2019 survey][2]. The study was a mix of companies of all sizes, with 42% being enterprise-class (over 2,500 employees). More than 70% had a network with more than 10 locations, and almost a quarter (24%) had over 100 sites. All of the respondents have a cloud presence, and almost 80% have at least two data centers. The survey had good geographic diversity, with 57% of respondents coming from the U.S. and 24% from Europe.
+
+Highlights of the survey include the following key findings:
+
+## **SD-WANs are hot but not a panacea to all networking challenges**
+
+The survey found that 44% of respondents have already deployed or will deploy an SD-WAN within the next 12 months. This number is up sharply from 25% when Cato ran the survey a year ago. Another 33% are considering SD-WAN but have no immediate plans to deploy. The primary drivers for the evolution of the WAN are improved internet access (46%), increased bandwidth (39%), improved last-mile availability (38%) and reduced WAN costs (37%). It’s good to see cost savings drop to fourth in motivation, since there is so much more to SD-WAN.
+
+[The time of 5G is almost here][3]
+
+It’s interesting that the majority of respondents believe SD-WAN alone can’t address all challenges facing the WAN. A whopping 85% stated they would be confronting issues not addressed by SD-WAN alone. This includes secure, local internet breakout, improved visibility, and control over mobile access to cloud apps. This indicates that customers are looking for SD-WAN to be the foundation of the WAN but understand that other technologies need to be deployed as well.
+
+## **Telco dissatisfaction is high**
+
+The traditional telco has been a point of frustration for network professionals for years, and the survey spelled that out loud and clear. Prior to being an analyst, I held a number of corporate IT positions and found telcos to be the single most frustrating group of companies to deal with. The problem was, there was no choice. If you need MPLS services, you need a telco. The same can’t be said for SD-WANs, though; businesses have more choices.
+
+Respondents to the survey ranked telco service as “average.” It’s been well documented that we are now in the customer-experience era and “good enough” service is no longer good enough. Regarding pricing, 54% gave telcos a failing grade. Although price isn’t everything, this will certainly open the door to competitive SD-WAN vendors. Respondents gave the highest marks for overall experience to SaaS providers, followed by cloud computing suppliers. Global telcos scored the lowest of all vendor types.
+
+A deeper look explains the frustration level. The network is now mission-critical for companies, but 48% stated they are able to reach support personnel with the right expertise only on a second attempt. No retailer, airline, hotel or other type of company could survive this, but telco customers had no other options for years.
+
+
+Another interesting set of data points is the speed at which telcos address customer needs. Digital businesses compete on speed, but telco process is the antithesis of fast. Moves, adds and changes take at least one business day for half of the respondents. Also, 70% indicated that opening a new location takes 15 days, and 38% stated it requires 45 days or more.
+
+## **Security is now part of SD-WAN**
+
+The use of broadband, cloud access and other trends raise the bar on security for SD-WAN, and the survey confirmed that respondents are skeptical that SD-WANs could address these issues. Seventy percent believe SD-WANs can’t address malware/ransomware, and 49% don’t think SD-WAN helps with enforcing company policies on mobile users. Because of this, network professionals are forced to buy additional security tools from other vendors, but that can drive up complexity. SD-WAN vendors that have intrinsic security capabilities can use that as a point of differentiation.
+
+## **Managed services are critical to the growth of SD-WANs**
+
+The survey found that 75% of respondents are using some kind of managed service provider, versus only 25% using an appliance vendor. This latter number was 32% last year. I’m not surprised by this shift and expect it to continue. Legacy WANs were inefficient but straightforward to deploy. SD-WANs are highly agile and more cost-effective, but complexity has gone through the roof. Network engineers need to factor in cloud connectivity, distributed security, application performance, broadband connectivity and other issues. Managed services can help businesses enjoy the benefits of SD-WAN while masking the complexity.
+
+Despite the desire to use an MSP, respondents don’t want to give up total control. Eighty percent stated they preferred self-service or co-managed models. This further explains the shift away from telcos, since they typically work with fully managed models.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/istock-465661573-100750447-large.jpg
+[2]: https://www.catonetworks.com/news/digital-transformation-survey/
+[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md b/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md
new file mode 100644
index 0000000000..97eb611ef8
--- /dev/null
+++ b/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md
@@ -0,0 +1,28 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric)
+[#]: via: (https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html)
+[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)
+
+True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric
+======
+
+Many hyperconverged solutions only focus on software-defined storage. However, many networking functions and technologies can be consolidated for simplicity and scale in the data center. This video describes how HPE SimpliVity with Composable Fabric gives organizations the power to run any virtual machine anywhere, anytime. Read more about HPE SimpliVity [here][1].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html
+
+作者:[HPE][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://hpe.com/info/simplivity
diff --git a/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md
new file mode 100644
index 0000000000..6d955c6485
--- /dev/null
+++ b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT Roundup: New research on IoT security, Microsoft leans into IoT)
+[#]: via: (https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+IoT Roundup: New research on IoT security, Microsoft leans into IoT
+======
+Verizon sets up widely available narrow-band IoT service, while most Americans think IoT manufacturers should ensure their products protect personal information.
+As with any technology whose use is expanding at such speed, it can be tough to track exactly what’s going on in the [IoT][1] world – everything from basic usage numbers to customer attitudes to more in-depth slices of the market is constantly changing. Fortunately, the month of May brought several new pieces of research to light, which should help provide at least a partial outline of what’s really happening in IoT.
+
+### Internet of things polls
+
+Not all of the news is good. An Ipsos MORI poll performed on behalf of the Internet Society and Consumers International (respectively, an umbrella organization for open development and Internet use and a broad-based consumer advocacy group) found that, despite the skyrocketing numbers of smart devices in circulation around the world, more than half of users in large parts of the western world don’t trust those devices to safeguard their privacy.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][2]
+ * [What is edge computing and how it’s changing the network][3]
+ * [Most powerful Internet of Things companies][4]
+ * [10 Hot IoT startups to watch][5]
+ * [The 6 ways to make money in IoT][6]
+ * [What is digital twin technology? [and why it matters]][7]
+ * [Blockchain, service-centric networking key to IoT success][8]
+ * [Getting grounded in IoT networking and security][9]
+ * [Building IoT-ready networks must become a priority][10]
+ * [What is the Industrial IoT? [And why the stakes are so high]][11]
+
+
+
+While almost 70 percent of respondents owned connected devices, 55 percent said they didn’t feel their personal information was adequately protected by manufacturers. A further 28 percent said they had avoided using connected devices – smart home, fitness tracking and similar consumer gadgetry – primarily because they were concerned over privacy issues, and a whopping 85 percent of Americans agreed with the argument that manufacturers had a responsibility to produce devices that protected personal information.
+
+Those concerns are understandable, according to data from the Ponemon Institute, a tech-research organization. Its survey of corporate risk and security personnel, released in early May, found that there have been few concerted efforts to limit exposure to IoT-based security threats, and that those threats are sharply on the rise when compared to past years, with the percentage of organizations that had experienced a data breach related to unsecured IoT devices rising from 15 percent in fiscal 2017 to 26 percent in fiscal 2019.
+
+Beyond a lack of organizational wherewithal to address those threats, part of the problem in some verticals is technical. Security vendor Forescout said earlier this month that its research showed 40 percent of all healthcare IT environments had more than 20 different operating systems, and more than 30 percent had more than 100 – hardly an ideal situation for smooth patching and updating.
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[12]: javascript://
+[13]: /learn-about-insider/
diff --git a/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md
new file mode 100644
index 0000000000..cc5aa9db7c
--- /dev/null
+++ b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (It’s time for the IoT to 'optimize for trust')
+[#]: via: (https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+It’s time for the IoT to 'optimize for trust'
+======
+If we can't trust the internet of things (IoT) to gather accurate data and use it appropriately, IoT adoption and innovation are likely to suffer.
+![Bose][1]
+
+One of the strengths of internet of things (IoT) technology is that it can do so many things well. From smart toothbrushes to predictive maintenance on jetliners, the IoT has more use cases than you can count. The result is that various IoT use cases require optimization for particular characteristics, from cost to speed to long life, as well as myriad others.
+
+But in a recent post, "[How the internet of things will change advertising][2]" (which you should definitely read), the always-insightful Stacey Higginbotham tossed in a line that I can’t stop thinking about: “It’s crucial that the IoT optimizes for trust.”
+
+**[ Read also: Network World's [corporate guide to addressing IoT security][3] ]**
+
+### Trust is the IoT's most important attribute
+
+Higginbotham was talking about optimizing for trust as opposed to clicks, but really, trust is more important than just about any other value in the IoT. It’s more important than bandwidth usage, more important than power usage, more important than cost, more important than reliability, and even more important than security and privacy (though they are obviously related). In fact, trust is the critical factor in almost every aspect of the IoT.
+
+Don’t believe me? Let’s take a quick look at some recent developments in the field:
+
+For one thing, IoT devices often don’t take good care of the data they collect from you. Over 90% of data transactions on IoT devices are not fully encrypted, according to a new [study from security company Zscaler][4]. The [problem][5], apparently, is that many companies have large numbers of consumer-grade IoT devices on their networks. In addition, many IoT devices are attached to the companies’ general networks, and if that network is breached, the IoT devices and data may also be compromised.
+
+In some cases, ownership of IoT data can raise surprisingly serious trust concerns. According to [Kaiser Health News][6], smartphone sleep apps, as well as smart beds and smart mattress pads, gather amazingly personal information: “It knows when you go to sleep. It knows when you toss and turn. It may even be able to tell when you’re having sex.” And while companies such as Sleep Number say they don’t share the data they gather, their written privacy policies clearly state that they _can_.
+
+### **Lack of trust may lead to new laws**
+
+In California, meanwhile, “lawmakers are pushing for new privacy rules affecting smart speakers” such as the Amazon Echo. According to the _[LA Times][7]_, the idea is “to ensure that the devices don’t record private conversations without permission,” requiring a specific opt-in process. Why is this an issue? Because consumers—and their elected representatives—don’t trust that Amazon, or any IoT vendor, will do the right thing with the data it collects from the IoT devices it sells—perhaps because it turns out that thousands of [Amazon employees have been listening in on what Alexa users are][8] saying to their Echo devices.
+
+The trust issues get even trickier when you consider that Amazon reportedly considered letting Alexa listen to users even without a wake word like “Alexa” or “computer,” and is reportedly working on [wearable devices designed to read human emotions][9] from listening to your voice.
+
+“The trust has been breached,” said California Assemblyman Jordan Cunningham (R-Templeton) to the _LA Times_.
+
+As critics of the bill ([AB 1395][10]) point out, the restrictions matter because voice assistants require this data to improve their ability to correctly understand and respond to requests.
+
+### **Some first steps toward increasing trust**
+
+Perhaps recognizing that the IoT needs to be optimized for trust so that we are comfortable letting it do its job, Amazon recently introduced a new Alexa voice command: “[Delete what I said today][11].”
+
+Moves like that, while welcome, will likely not be enough.
+
+For example, a [new United Nations report][12] suggests that “voice assistants reinforce harmful gender stereotypes” when using female-sounding voices and names like Alexa and Siri. Put simply, “Siri’s ‘female’ obsequiousness—and the servility expressed by so many other digital assistants projected as young women—provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.” I'm not sure IoT vendors are eager—or equipped—to tackle issues like that.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][13]
+ * [What is edge computing and how it’s changing the network][14]
+ * [Most powerful Internet of Things companies][15]
+ * [10 Hot IoT startups to watch][16]
+ * [The 6 ways to make money in IoT][17]
+ * [What is digital twin technology? [and why it matters]][18]
+ * [Blockchain, service-centric networking key to IoT success][19]
+ * [Getting grounded in IoT networking and security][20]
+ * [Building IoT-ready networks must become a priority][21]
+ * [What is the Industrial IoT? [And why the stakes are so high]][22]
+
+
+
+Join the Network World communities on [Facebook][23] and [LinkedIn][24] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/09/bose-sleepbuds-2-100771579-large.jpg
+[2]: https://mailchi.mp/iotpodcast/stacey-on-iot-how-iot-changes-advertising?e=6bf9beb394
+[3]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
+[4]: https://www.zscaler.com/blogs/research/iot-traffic-enterprise-rising-so-are-threats
+[5]: https://www.csoonline.com/article/3397044/over-90-of-data-transactions-on-iot-devices-are-unencrypted.html
+[6]: https://khn.org/news/a-wake-up-call-on-data-collecting-smart-beds-and-sleep-apps/
+[7]: https://www.latimes.com/politics/la-pol-ca-alexa-google-home-privacy-rules-california-20190528-story.html
+[8]: https://www.usatoday.com/story/tech/2019/04/11/amazon-employees-listening-alexa-customers/3434732002/
+[9]: https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions
+[10]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1395
+[11]: https://venturebeat.com/2019/05/29/amazon-launches-alexa-delete-what-i-said-today-voice-command/
+[12]: https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1
+[13]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[14]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[15]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[16]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[17]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[18]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[19]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[20]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[21]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[22]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[23]: https://www.facebook.com/NetworkWorld/
+[24]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md b/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md
new file mode 100644
index 0000000000..31d127e77d
--- /dev/null
+++ b/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Data center workloads become more complex despite promises to the contrary)
+[#]: via: (https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Data center workloads become more complex despite promises to the contrary
+======
+The data center is shouldering a greater burden than ever, despite promises of ease and the cloud.
+![gorodenkoff / Getty Images][1]
+
+Data centers are becoming more complex and still run the majority of workloads despite the promises of simplicity of deployment through automation and hyperconverged infrastructure (HCI), not to mention how the cloud was supposed to take over workloads.
+
+That’s the finding of the Uptime Institute's latest [annual global data center survey][2] (registration required). The majority of IT loads still run on enterprise data centers even in the face of cloud adoption, pressuring administrators to manage workloads across a hybrid infrastructure.
+
+**[ Learn [how server disaggregation can boost data center efficiency][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That puts strain on data center administrators and power and cooling vendors alike to keep up with the growth in demand.
+
+On top of it all, everyone is struggling to get enough staff with the right skills.
+
+### Outages, staffing problems, lack of public cloud visibility among top concerns
+
+Among the key findings of Uptime's report:
+
+ * The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
+ * The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
+ * Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.
+ * Ten percent of all respondents said their most recent significant outage cost more than $1 million.
+ * A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
+ * Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years. The average PUE for 2019 is 1.67.
+ * Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
+ * Power loss was the single biggest cause of outages, accounting for one-third of them. Sixty percent of respondents said their data center’s outage could have been prevented with better management, processes or configuration.
+
+
+
+Traditionally, data centers have improved their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution, though, is pricey. Data center operators are getting distributed resiliency through active-active data centers, where at least two active data centers replicate data to each other. Uptime found up to 40% of those surveyed were using this method.
+
+The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers and the suppliers, designers, and consultants that service the industry.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
+[2]: https://uptimeinstitute.com/2019-data-center-industry-survey-results
+[3]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md b/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md
new file mode 100644
index 0000000000..2f68bd6f59
--- /dev/null
+++ b/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Moving to the Cloud? SD-WAN Matters! Part 2)
+[#]: via: (https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Moving to the Cloud? SD-WAN Matters! Part 2
+======
+
+![istock][1]
+
+This is the second installment of the blog series exploring how enterprises can realize the full transformation promise of the cloud by shifting to a business-first networking model powered by a business-driven [SD-WAN][2]. The first installment explored automating secure IPsec connectivity and intelligently steering traffic to cloud providers. We also framed the direct correlation between moving to the cloud and adopting an SD-WAN. In this blog, we will expand upon several additional challenges that can be addressed with a business-driven SD-WAN when embracing the cloud:
+
+### Simplifying and automating security zone-based segmentation
+
+Securing cloud-first branches requires a robust multi-level approach that addresses the following considerations:
+
+ * Restricting outside traffic entering the branch to sessions initiated by internal users, using a built-in stateful firewall that avoids appliance sprawl and lowers operational costs; this is referred to as the app whitelist model
+ * Encrypting communications between end points within the SD-WAN fabric and between branch locations and public cloud instances
+ * Service chaining traffic to a cloud-hosted security service like [Zscaler][3] for Layer 7 inspection and analytics for internet-bound traffic
+ * Segmenting traffic spanning the branch, WAN and data center/cloud
+ * Centralizing policy orchestration and automation of zone-based firewall, VLAN and WAN overlays
+
+
+
+A traditional device-centric WAN approach for security segmentation requires the time-consuming manual configuration of routers and/or firewalls on a device-by-device and site-by-site basis. This is not only complex and cumbersome, but it simply can’t scale to 100s or 1000s of sites. Anusha Vaidyanathan, director of product management at Silver Peak, explains how to automate end-to-end zone-based segmentation, emphasizing the advantages of a business-driven approach in this [lightboard video][4].
+
+### Delivering the Highest Quality of Experience to IT teams
+
+The goal for enterprise IT is enabling business agility and increasing operational efficiency. The traditional router-centric WAN approach doesn’t provide the best quality of experience for IT, as management and ongoing network operations are manual, time-consuming, device-centric, cumbersome, error-prone and inefficient.
+
+A business-driven SD-WAN such as the Silver Peak [Unity EdgeConnect™][5] unified SD-WAN edge platform centralizes the orchestration of business-driven policies. EdgeConnect automation, machine learning and open APIs easily integrate with third-party management tools and real-time visibility tools to deliver the highest quality of experience for IT, enabling them to reclaim nights and weekends. Manav Mishra, vice president of product management at Silver Peak, explains the latest Silver Peak innovations in this [lightboard video][6].
+
+As enterprises become increasingly dependent on the cloud and embrace a multi-cloud strategy, they must address a number of new challenges:
+
+ * Taking a centralized approach to securely embracing the cloud and the internet
+ * Extending the on-premises data center to a public cloud and migrating workloads between private and public clouds, taking application portability into account
+ * Delivering consistently high application performance and availability to hosted applications, whether they reside in the data center, in private or public clouds, or are delivered as SaaS services
+ * Proactively and quickly resolving complex issues that span the data center, the cloud and multiple WAN transport services by harnessing the power of advanced visibility and analytics tools
+
+
+
+The business-driven EdgeConnect SD-WAN edge platform enables enterprise IT organizations to easily and consistently embrace the public cloud. Unified security and performance capabilities with automation deliver the highest quality of experience for both users and IT while lowering overall WAN expenditures.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/istock-909772962-100797711-large.jpg
+[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[3]: https://www.silver-peak.com/company/tech-partners/zscaler
+[4]: https://www.silver-peak.com/resource-center/how-to-create-sd-wan-security-zones-in-edgeconnect
+[5]: https://www.silver-peak.com/products/unity-edge-connect
+[6]: https://www.silver-peak.com/resource-center/how-to-optimize-quality-of-experience-for-it-using-sd-wan
diff --git a/sources/talk/20190604 Why Emacs.md b/sources/talk/20190604 Why Emacs.md
new file mode 100644
index 0000000000..0d9b12ba1a
--- /dev/null
+++ b/sources/talk/20190604 Why Emacs.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why Emacs)
+[#]: via: (https://saurabhkukade.github.io/Why-Emacs/)
+[#]: author: (Saurabh Kukade http://saurabhkukade.github.io/)
+
+Why Emacs
+======
+![Image of Emacs][1]
+
+> “Emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.”
+
+> – Neal Stephenson, “In the Beginning was the Command Line”
+
+### Introduction
+
+This is my first blog post about Emacs. I want to discuss, step by step, customizing Emacs for beginners. If you’re new to Emacs, you are in the right place; if you’re already familiar with Emacs, that is even better. I assure you that we will get to know many new things here.
+
+Before getting into how to customize Emacs and what its exciting features are, I want to write about “why Emacs”.
+
+### Why Emacs?
+
+This was the first question that crossed my mind when a wise man asked me to try Emacs instead of Vim. Well, I am not writing this article to stage a battle between the two editors; that is another story for another day. But why Emacs? Here are some things that show how powerful and highly customizable Emacs is.
+
+### 41 Years!
+
+Emacs was first released in 1976, which means Emacs has been standing, and adapting to change, for over 41 years.
+
+41 years is a huge lifetime for a piece of software, and that makes Emacs one of the finest products of software engineering.
+
+### Lisp (Emacs Lisp)
+
+If you are a Lisp programmer (a lisper), then I don’t need to explain this to you. But for those who don’t know Lisp and its dialects, such as Scheme and Clojure: Lisp (and all of its dialects) is a powerful programming language, and it stands apart from other languages because of its unique property of “homoiconicity”.
+
+Emacs is implemented in C and Emacs Lisp (a dialect of the Lisp programming language), and that makes Emacs what it is, because:
+
+ * The simple syntax of Lisp, together with the powerful editing features made possible by that simple syntax, add up to a more convenient programming system than is practical with other languages. Lisp and extensible editors are made for each other.
+
+ * The simplicity of Lisp syntax makes intelligent editing operations easier to implement, while the complexity of other languages discourages their users from implementing similar operations for them.
+
+
+
+
+### Highly Customizable
+
+To any programmer, tools give the power and convenience to read, write and manage code.
+
+Hence, if a tool is programmatically customizable, that makes it even more powerful.
+
+Emacs has this property, and is in fact one of the tools best known for flexibility and easy customization. Emacs provides basic commands and key bindings for editing text, and these commands and key bindings are editable and extensible.
+
+Beyond its basic configuration, Emacs is not biased towards any specific language for customization. One can customize Emacs for any programming language, or easily extend an existing customization.
+
+Emacs provides a consistent environment for multiple programming languages, email, an organizer (via org-mode), a shell/interpreter, note taking and document writing.
+
+For customizing, you don’t need to learn Emacs Lisp from scratch. You can use the existing packages available, and that’s it. Installing and managing packages in Emacs is easy: it has a built-in package manager for that.
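+
+As a small taste, a minimal `init.el` sketch might look like the following. The archive URL, the command name and the key binding are illustrative choices of mine, not part of any standard setup:
+
+```lisp
+;; Point the built-in package manager at MELPA, a popular community
+;; package archive, and initialize the package system.
+(require 'package)
+(add-to-list 'package-archives
+             '("melpa" . "https://melpa.org/packages/") t)
+(package-initialize)
+
+;; Extending Emacs is just defining Lisp functions and binding keys.
+(defun my/insert-date ()
+  "Insert the current date at point."
+  (interactive)
+  (insert (format-time-string "%Y-%m-%d")))
+(global-set-key (kbd "C-c d") #'my/insert-date)
+```
+
+Because it is just this one file, copying it to `~/.emacs.d/init.el` on another machine is all the “installation” the customization needs.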
+
+Customization is very portable: one just needs to place a file or directory containing the personal customization file(s) in the right place, and the same customization is available in the new place.
+
+### Huge Platform Support
+
+Emacs supports Lisp, Ruby, Python, PHP, Java, Erlang, JavaScript, C, C++, Prolog, Tcl, AWK, PostScript, Clojure, Scala, Perl, Haskell and Elixir, plus more, including modes for tools such as MySQL and PostgreSQL. Because of the powerful Lisp core, Emacs is easy to extend with support for new languages when needed.
+
+Also, one can use the built-in IRC client ERC along with BitlBee to connect to your favorite chat services, or use the Jabber package to hop on any XMPP service.
+
+### Org-mode
+
+No matter whether you are a programmer or not, Org mode is for everyone. It lets you plan projects and organize your schedule, and it can also be used to publish notes and documents in different formats, such as LaTeX->PDF, HTML and Markdown.
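+
+A tiny Org file gives the flavor; the headings, TODO keywords and tag below are arbitrary examples of mine:
+
+```
+* Website relaunch
+** TODO Draft the landing page          :writing:
+   SCHEDULED: <2019-06-10 Mon>
+** DONE Pick a color scheme
+```
+
+Headings like these fold, refile and reorder with a few keystrokes, and `C-c C-e` opens the export dispatcher that can turn the same file into HTML, LaTeX/PDF or Markdown.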
+
+In fact, Org-mode is awesome enough that many non-Emacs users have started learning Emacs for it.
+
+### Final note
+
+There are a number of reasons to argue that Emacs is cool and awesome to use, but here I just wanted to give you a glimpse of why you should try it. In an upcoming post I will describe, step by step, how to customize Emacs from scratch into an awesome IDE.
+
+Thank you!
+
+Please don’t forget to comment your thoughts and suggestions below.
+
+--------------------------------------------------------------------------------
+
+via: https://saurabhkukade.github.io/Why-Emacs/
+
+作者:[Saurabh Kukade][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://saurabhkukade.github.io/
+[b]: https://github.com/lujun9972
+[1]: https://saurabhkukade.github.io/img/emacs.jpeg
diff --git a/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md b/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md
new file mode 100644
index 0000000000..d7b22353c4
--- /dev/null
+++ b/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md
@@ -0,0 +1,67 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cloud adoption drives the evolution of application delivery controllers)
+[#]: via: (https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+Cloud adoption drives the evolution of application delivery controllers
+======
+Application delivery controllers (ADCs) are on the precipice of shifting from traditional hardware appliances to software form factors.
+![Aramyan / Getty Images / Microsoft][1]
+
+Migrating to a cloud computing model will obviously have an impact on the infrastructure that’s deployed. This shift has already been seen in the areas of servers, storage, and networking, as those technologies have evolved to a “software-defined” model. And it appears that application delivery controllers (ADCs) are on the precipice of a similar shift.
+
+In fact, a new ZK Research [study about cloud computing adoption and the impact on ADCs][2] found that, when looking at the deployment model, hardware appliances are the most widely deployed, with 55% of respondents having fully deployed them or currently testing them and only 15% currently researching hardware. (Note: I am an employee of ZK Research.)
+
+Juxtapose this with containerized ADCs, where only 34% have deployed or are testing but 24% are currently researching, and it shows that software in containers will outpace hardware in growth. Not surprisingly, software on bare metal and in virtual machines showed similar, although lower, “researching” numbers, supporting the thesis that the market is undergoing a shift from hardware to software.
+
+**[ Read also:[How to make hybrid cloud work][3] ]**
+
+The study, conducted in collaboration with Kemp Technologies, surveyed 203 respondents from the U.K. and U.S. The demographic split was done to understand regional differences. A mix of mid-size and large enterprises was examined, with 44% of respondents coming from companies with over 5,000 employees and the other 56% from companies with 300 to 5,000 people.
+
+### Incumbency helps but isn’t a fait accompli for future ADC purchases
+
+The primary tenet of my research has always been that incumbents are threatened when markets transition, and this is something I wanted to investigate in the study. The survey asked whether buyers would consider an alternative as they evolve their applications from legacy (mode 1) to cloud-native (mode 2). The results offer a bit of good news and bad news for the incumbent providers. Only 8% said they would definitely select a new vendor, but 35% said they would not change. That means the other 57% will look at alternatives. This is sensible, as the requirements for cloud ADCs are different from those that support traditional applications.
+
+### IT pros want better automation capabilities
+
+This raises the question of what features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically what features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation), and visibility.
+
+The desire to automate was a positive sign for the evolution of buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT can’t operate effectively without automation, and technology professionals are starting to understand that.
+
+The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors.
+
+![][4]
+
+### ADC upgrades and service provisioning benefit most from automation
+
+The survey also drilled down on specific ADC tasks to see where automation would have the most impact. Respondents were asked how long certain tasks took, answering in minutes, days, weeks, or months. Shockingly, there wasn’t a single task where the majority said it could be done in minutes. The closest was adding DNS entries for new virtual IP addresses (VIPs) where 46% said they could do that in minutes.
+
+Upgrading, provisioning new load balancers, and provisioning new VIPs took the longest. Looking ahead, this foreshadows big problems. As the data center gets more disaggregated and distributed, IT will deploy more software-based ADCs in more places. Taking days, weeks or months to perform these functions will cause organizations to fall behind.
+
+The study clearly shows changes are in the air for the ADC market. For IT pros, I strongly recommend that as the environment shifts to the cloud, it’s prudent to evaluate new vendors. By all means, see what your incumbent vendor has, but look at at least two others that offer software-based solutions. Also, there should be a focus on automating as much as possible, so a primary evaluation criterion for ADCs should be how easy it is to implement automation.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/cw_microsoft_sharepoint_vs_onedrive_clouds_and_hands_by_aramyan_gettyimages-909772962_2400x1600-100796932-large.jpg
+[2]: https://kemptechnologies.com/research-papers/adc-market-research-study-zeus-kerravala/?utm_source=zkresearch&utm_medium=referral&utm_campaign=zkresearch&utm_term=zkresearch&utm_content=zkresearch
+[3]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
+[4]: https://images.idgesg.net/images/article/2019/06/adc-survey-zk-research-100798593-large.jpg
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md b/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md
new file mode 100644
index 0000000000..3da91bb311
--- /dev/null
+++ b/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (For enterprise storage, persistent memory is here to stay)
+[#]: via: (https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html)
+[#]: author: (John Edwards )
+
+For enterprise storage, persistent memory is here to stay
+======
+Persistent memory – also known as storage class memory – has tantalized data center operators for many years. A new technology promises the key to success.
+![Thinkstock][1]
+
+It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious [data center][2] operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.
+
+High-capacity persistent memory, also known as storage class memory ([SCM][3]), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as [hard disk drives][4] (HDD) and [solid-state drives][5] (SSD).
+
+**Learn more about enterprise storage**
+
+ * [Why NVMe over Fabric matters][6]
+ * [What is hyperconvergence?][7]
+ * [How NVMe is changing enterprise storage][8]
+ * [Making the right hyperconvergence choice: HCI hardware or software?][9]
+
+
+
+Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other expected and unanticipated reboot situations.
+
+Many different types of strategic operational applications and databases, particularly those that require low-latency, high durability and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications.
+
+In a sense, persistent memory marks a rebirth of core memory. "Computers in the ‘50s to ‘70s used magnetic core memory, which was direct access, non-volatile memory," says Doug Wong, a senior member of [Toshiba Memory America's][10] technical staff. "Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories."
+
+One of the first persistent memory devices to come to market is [Intel’s Optane DC][11]. Other vendors that have released persistent memory products or are planning to do so include [Samsung][12], Toshiba America Memory and [SK Hynix][13].
+
+### Persistent memory: performance + reliability
+
+With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. "It's faster than regular solid-state NAND flash-type storage, but you're also getting the benefit that it’s persistent," says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm [StorageIO.][14] "It's the best of both worlds."
+
+Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. "This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails," observes Scott Nelson, senior vice president and general manager of Toshiba Memory America's memory business unit.
+
+Persistent memory transports compatible applications to an "always on" status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, [Intel's][15] director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three times more system memory capacity (as much as 36TB), system restarts in seconds versus minutes, 36% more virtual machines per node, and up to eight times better performance on [Apache Spark][16], a widely used open-source distributed general-purpose cluster-computing framework.
+
+System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. "This value will dramatically change memory/storage economics and accelerate the data-centric era," she predicts.
+
+### Where will persistent memory infiltrate enterprise storage?
+
+Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a high-performance caching layer for high-performance SSDs. "This could be adopted relatively quickly," Nelson observes. Yet this intermediary role promises to be merely a stepping stone to increasingly crucial applications.
+
+Over the next few years, persistent memory technology will impact data centers serving enterprises across an array of sectors. "Anywhere time is money," Schulz says. "It could be financial services, but it could also be consumer-facing or sales-facing operations."
+
+Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at [Aerospike][17], which delivered the first commercially available open database optimized for use with Intel Optane DC.
+
+Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers' reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. "They’re collecting information as users within an ad campaign browse the web," he says. "If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time."
+
+Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. "In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they're using self-training machine-learning algorithms to help with that," Gooding explains. "There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy... the list goes on."
+
+Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and [Redis Labs][18], Mann says.
+
+High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. [CERN][19], the European Organization for Nuclear Research, is using Intel's Optane DC to significantly reduce wait times for scientific computing. "The efficiency of their algorithms depends on ... persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing," Mann observes.
+
+### How to prepare storage infrastructure for persistent memory
+
+Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. "If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance," he explains. Yet some applications will probably not benefit from persistent memory, such as compute-bound applications where CPU performance is the bottleneck.
+
+Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. "They will need to know how to program with persistent memory," he notes. "How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache."
+
+To leverage all of persistent memory's potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. "Right now, if they write code that leaks memory, that leaked memory is recovered on restart," Gooding explains. With persistent memory, that isn't necessarily the case. "Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts," he notes. "You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory."
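+
+The flush discipline Gooding describes can be sketched in ordinary C. Real persistent-memory code would use a library call such as PMDK’s `pmem_persist()`; this stand-in uses a memory-mapped file and `msync()`, which plays the same role of forcing a write out of the volatile layer before the program treats it as durable:
+
+```c
+#include <fcntl.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+int main(void) {
+    const char *path = "/tmp/pmem_demo.bin";
+    int fd = open(path, O_CREAT | O_RDWR, 0644);
+    if (fd < 0 || ftruncate(fd, 4096) != 0) {
+        perror("setup");
+        return 1;
+    }
+
+    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+    if (p == MAP_FAILED) {
+        perror("mmap");
+        return 1;
+    }
+
+    /* The store initially lands in a volatile layer (CPU cache /
+     * page cache); a crash right here could lose it. */
+    strcpy(p, "committed state");
+
+    /* Explicitly flush before counting the write as durable --
+     * the moral equivalent of pmem_persist() on real NVDIMMs. */
+    if (msync(p, 4096, MS_SYNC) != 0) {
+        perror("msync");
+        return 1;
+    }
+    printf("flushed: %s\n", p);
+
+    munmap(p, 4096);
+    close(fd);
+    unlink(path);
+    return 0;
+}
+```
+
+The other half of Gooding’s point, reconstructing a consistent state on restart, then becomes a matter of only ever flushing self-consistent snapshots, so whatever the program finds in the mapping after a crash is safe to resume from.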
+
+Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. "In other words, if they detect that persistent memory is available, then they know how to utilize that either as a cache, or some other memory."
+
+Hypervisors, such as [Hyper-V][20] and [VMware][21], now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs to fit on a single host, says Ian McClarty, CEO and president of data center operator [PhoenixNAP Global IT Services][22]. "This is a great use case for companies who want to own less hardware or service providers who want to maximize hardware to virtual machine deployments."
+
+Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and [SAP’s][23] flagship [HANA][24] database management platform have both embraced persistent memory. "The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics ... to deliver real-time insights," Mann observes.
+
+In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. "You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory."
+
+Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html
+
+作者:[John Edwards][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/08/file_folder_storage_sharing_thinkstock_477492571_3x2-100732889-large.jpg
+[2]: https://www.networkworld.com/article/3353637/the-data-center-is-being-reimagined-not-disappearing.html
+[3]: https://www.networkworld.com/article/3026720/the-next-generation-of-storage-disruption-storage-class-memory.html
+[4]: https://www.networkworld.com/article/2159948/hard-disk-drives-vs--solid-state-drives--are-ssds-finally-worth-the-money-.html
+[5]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
+[6]: https://www.networkworld.com/article/3273583/why-nvme-over-fabric-matters.html
+[7]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence
+[8]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
+[9]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software
+[10]: https://business.toshiba-memory.com/en-us/top.html
+[11]: https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html
+[12]: https://www.samsung.com/semiconductor/
+[13]: https://www.skhynix.com/eng/index.jsp
+[14]: https://storageio.com/
+[15]: https://www.intel.com/content/www/us/en/homepage.html
+[16]: https://spark.apache.org/
+[17]: https://www.aerospike.com/
+[18]: https://redislabs.com/
+[19]: https://home.cern/
+[20]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/
+[21]: https://www.vmware.com/
+[22]: https://phoenixnap.com/
+[23]: https://www.sap.com/index.html
+[24]: https://www.sap.com/products/hana.html
+[25]: https://www.facebook.com/NetworkWorld/
+[26]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190606 Self-learning sensor chips won-t need networks.md b/sources/talk/20190606 Self-learning sensor chips won-t need networks.md
new file mode 100644
index 0000000000..c5abec5426
--- /dev/null
+++ b/sources/talk/20190606 Self-learning sensor chips won-t need networks.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Self-learning sensor chips won’t need networks)
+[#]: via: (https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Self-learning sensor chips won’t need networks
+======
+Scientists working on new, machine-learning networks aim to embed everything needed for artificial intelligence (AI) onto a processor, eliminating the need to transfer data to the cloud or computers.
+![Jiraroj Praditcharoenkul / Getty Images][1]
+
+Tiny, intelligent microelectronics should be used to perform as much sensor processing as possible on-chip rather than wasting resources by sending often un-needed, duplicated raw data to the cloud or computers. So say scientists behind new, machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor.
+
+“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says [Fraunhofer Institute for Microelectronic Circuits and Systems][2] on its website. No delays sending unnecessary data onwards, along with speedy processing, means theoretically there is zero latency.
+
+Plus, on-microprocessor, self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. “An embedded system with different tasks is possible.”
+
+**[ Also read:[What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
+
+Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes, say, when the ambient temperature hasn’t changed, is one example. In fact, one only needs to know when the temperature has changed, and maybe then only when thresholds have been met.
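+
+That report-only-on-change rule is simple enough to sketch; the 0.5-degree threshold and the sample readings below are made-up values for illustration:
+
+```c
+#include <stdio.h>
+
+/* Transmit a reading only when it has moved past a threshold since
+ * the last transmitted value -- the on-device filtering described
+ * above, which discards redundant samples at the source. */
+static double last_sent = -1e9; /* sentinel: first reading is always sent */
+
+static int should_send(double reading, double threshold) {
+    double delta = reading - last_sent;
+    if (delta < 0)
+        delta = -delta;
+    if (delta >= threshold) {
+        last_sent = reading;
+        return 1;
+    }
+    return 0;
+}
+
+int main(void) {
+    double samples[] = {21.0, 21.1, 21.0, 22.3, 22.4, 25.0};
+    int sent = 0;
+    for (int i = 0; i < 6; i++)
+        if (should_send(samples[i], 0.5))
+            sent++;
+    printf("%d of 6 readings transmitted\n", sent);
+    return 0;
+}
+```
+
+Here only three of six readings ever leave the device; the rest is noise the network never has to carry.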
+
+### Neural network-on-sensor chip
+
+The commercial German research organization says it’s developing a specific RISC-V microprocessor with a special hardware accelerator designed for a [brain-copying, artificial neural network (ANN) it has developed][5]. The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT).
+
+Key to Fraunhofer IMS’s [Artificial Intelligence for Embedded Systems (AIfES)][6] is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.”
+
+It’s “decentralized AI,” says Fraunhofer IMS. "It’s not focused towards big-data processing.”
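+
+In its simplest form, learning without any network in the loop is just local weight updates. The single-neuron sketch below, trained on the AND function, illustrates that idea only; it is not Fraunhofer’s AIfES code:
+
+```c
+#include <stdio.h>
+
+/* A lone perceptron learns the AND function entirely on-device:
+ * no cloud, no host computer, just local weight updates. */
+int main(void) {
+    int w0 = 0, w1 = 0, b = 0;
+    int x0[] = {0, 0, 1, 1}, x1[] = {0, 1, 0, 1}, y[] = {0, 0, 0, 1};
+
+    for (int epoch = 0; epoch < 100; epoch++) {
+        for (int i = 0; i < 4; i++) {
+            int out = (w0 * x0[i] + w1 * x1[i] + b) > 0;
+            int err = y[i] - out;    /* perceptron update rule */
+            w0 += err * x0[i];
+            w1 += err * x1[i];
+            b  += err;
+        }
+    }
+
+    int correct = 0;
+    for (int i = 0; i < 4; i++)
+        correct += ((w0 * x0[i] + w1 * x1[i] + b) > 0) == y[i];
+    printf("%d/4 patterns learned on-device\n", correct);
+    return 0;
+}
+```
+
+The same weights could later be reset and retrained on a different target, which is the “completely reconfigured to perform a totally different task” property the institute describes.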
+
+Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved.
+
+“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says.
+
+Other benefits in decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.
+
+### Other edge computing research
+
+The Fraunhofer researchers aren’t the only academics who believe entire networks become redundant with neuristor, brain-like AI chips. Binghamton University and Georgia Tech are working together on similar edge-oriented tech.
+
+“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when [I wrote about the university's work last year][7].
+
+One of the advantages of eliminating the major communications link: not only do you not have to worry about internet resilience, but you also save the energy that would have been spent maintaining the link. Energy efficiency is an ambition in the sensor world — replacing batteries is time consuming, expensive, and sometimes, in the case of remote locations, extremely difficult.
+
+Memory or storage for swaths of raw data awaiting transfer to be processed at a data center, or similar, doesn’t have to be provided either — it’s been processed at the source, so it can be discarded.
+
+**More about edge networking:**
+
+ * [How edge networking and IoT will reshape data centers][4]
+ * [Edge computing best practices][8]
+ * [How edge computing can help secure the IoT][9]
+
+
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_by_jiraroj_praditcharoenkul_gettyimages-902668940_2400x1600-100788458-large.jpg
+[2]: https://www.ims.fraunhofer.de/en.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[5]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/News/AIfES-Artificial_Intelligence_for_Embedded_Systems.html
+[6]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/technologies/Artificial-Intelligence-for-Embedded-Systems-AIfES.html
+[7]: https://www.networkworld.com/article/3326557/edge-chips-could-render-some-networks-useless.html
+[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md b/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md
new file mode 100644
index 0000000000..622537f2f9
--- /dev/null
+++ b/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md
@@ -0,0 +1,53 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What to do when yesterday’s technology won’t meet today’s support needs)
+[#]: via: (https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html)
+[#]: author: (Anand Rajaram )
+
+What to do when yesterday’s technology won’t meet today’s support needs
+======
+
+![iStock][1]
+
+You probably already know that end user technology is exploding and are feeling the effects of it in your support organization every day. Remember when IT sanctioned and standardized every hardware and software instance in the workplace? Those days are long gone. Today, it’s the driving force of productivity that dictates what will or won’t be used – and that can be hard on a support organization.
+
+Whatever users need to do their jobs better, faster, more efficiently is what you are seeing come into the workplace. So naturally, that’s what comes into your service desk too. Support organizations see all kinds of [devices, applications, systems, and equipment][2], and it’s adding a great deal of complexity and demand to keep up with. In fact, four of the top five factors causing support ticket volumes to rise are attributed to new and current technology.
+
+To keep up with the steady [rise of tickets][3] and stay out in front of this surge, support organizations need to take a good, hard look at the processes and technologies they use. Yesterday’s methods won’t cut it. The landscape is simply changing too fast. Supporting today’s users and getting them back to work fast requires an expanding set of skills and tools.
+
+So where do you start with a new technology project? Just because a technology is new or hyped doesn’t mean it’s right for your organization. It’s important to understand your project goals and the experience you really want to create and match your technology choices to those goals. But don’t go it alone. Talk to your teams. Get intimately familiar with how your support organization works today. Understand your customers’ needs at a deep level. And bring the right people to the table to cover:
+
+ * Business problem analysis: What existing business issue are stakeholders unhappy with?
+ * The impact of that problem: How does that issue justify making a change?
+ * Process automation analysis: What area(s) can technology help automate?
+ * Other solutions: Have you considered any other options besides technology?
+
+
+
+With these questions answered, you’re ready to entertain your technology options. Put together your “must-haves” in a requirements document and reach out to potential suppliers. During the initial information-gathering stage, assess if the supplier understands your goals and how their technology helps you meet them. To narrow the field, compare solutions side by side against your goals. Select the top two or three for more in-depth product demos before moving into product evaluations. By the time you’re ready for implementation, you have empirical, practical knowledge of how the solution will perform against your business goals.
+
+The key takeaway is this: Technology for technology’s sake is just technology. But technology that drives business value is a solution. If you want a solution that drives results for your organization and your customers, it’s worth following a strategic selection process to match your goals with the best technology for the job.
+
+For more insight, check out the [LogMeIn Rescue][4] and HDI webinar “[Technology and the Service Desk: Expanding Mission, Expanding Skills”][5].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html
+
+作者:[Anand Rajaram][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/06/istock-1019006240-100798168-large.jpg
+[2]: https://www.logmeinrescue.com/resources/datasheets/infographic-mobile-support-are-your-employees-getting-what-they-need?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
+[3]: https://www.logmeinrescue.com/resources/analyst-reports/the-importance-of-remote-support-in-a-shift-left-world?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
+[4]: https://www.logmeinrescue.com/?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
+[5]: https://www.brighttalk.com/webcast/8855/312289?utm_source=LogMeIn7&utm_medium=brighttalk&utm_campaign=312289
diff --git a/sources/talk/20190611 6 ways to make enterprise IoT cost effective.md b/sources/talk/20190611 6 ways to make enterprise IoT cost effective.md
new file mode 100644
index 0000000000..492262c617
--- /dev/null
+++ b/sources/talk/20190611 6 ways to make enterprise IoT cost effective.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (6 ways to make enterprise IoT cost effective)
+[#]: via: (https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+6 ways to make enterprise IoT cost effective
+======
+Rob Mesirow, a principal at PwC’s Connected Solutions unit, offers tips for successfully implementing internet of things (IoT) projects without breaking the bank.
+![DavidLeshem / Getty][1]
+
+There’s little question that the internet of things (IoT) holds enormous potential for the enterprise, in everything from asset tracking to compliance.
+
+But enterprise uses of IoT technology are still evolving, and it’s not yet entirely clear which use cases and practices currently make economic and business sense. So, I was thrilled to trade emails recently with [Rob Mesirow][2], a principal at [PwC’s Connected Solutions][3] unit, about how to make enterprise IoT implementations as cost effective as possible.
+
+“The IoT isn’t just about technology (hardware, sensors, software, networks, communications, the cloud, analytics, APIs),” Mesirow said, “though tech is obviously essential. It also includes ensuring cybersecurity, managing data governance, upskilling the workforce and creating a receptive workplace culture, building trust in the IoT, developing interoperability, and creating business partnerships and ecosystems—all part of a foundation that’s vital to a successful IoT implementation.”
+
+**[ Also read:[Enterprise IoT: Companies want solutions in these 4 areas][4] ]**
+
+Yes, that sounds complicated—and a lot of work for a still-hard-to-quantify return. Fortunately, though, Mesirow offered up some tips on how companies can make their IoT implementations as cost effective as possible.
+
+### 1\. Don’t wait for better technology
+
+Mesirow advised against waiting to implement IoT projects until you can deploy emerging technology such as [5G networks][5]. That makes sense, as long as your implementation doesn’t specifically require capabilities available only in the new technology.
+
+### 2\. Start with the basics, and scale up as needed
+
+“Companies need to start with the basics—building one app/task at a time—instead of jumping ahead with enterprise-wide implementations and ecosystems,” Mesirow said.
+
+“There’s no need to start an IoT initiative by tackling a huge, expensive ecosystem. Instead, begin with one manageable use case, and build up and out from there. The IoT can inexpensively automate many everyday tasks to increase effectiveness, employee productivity, and revenue.”
+
+After you pick the low-hanging fruit, it’s time to become more ambitious.
+
+“After getting a few successful pilots established, businesses can then scale up as needed, building on the established foundation of business processes, people experience, and technology,” Mesirow said.
+
+### 3\. Make dumb things smart
+
+Of course, identifying the ripest low-hanging fruit isn’t always easy.
+
+“Companies need to focus on making dumb things smart, deploying infrastructure that’s not going to break the bank, and providing enterprise customers the opportunity to experience what data intelligence can do for their business,” Mesirow said. “Once they do that, things will take off.”
+
+### 4\. Leverage lower-cost networks
+
+“One key to building an IoT inexpensively is to use low-power, low-cost networks (Low-Power Wide-Area Networks (LPWAN)) to provide IoT services, which reduces costs significantly,” Mesirow said.
+
+Naturally, he mentioned that PwC has three separate platforms with some 80 products that hang off those platforms, which he said cost “a fraction of traditional IoT offerings, with security and privacy built in.”
+
+Despite the product pitch, though, Mesirow is right to call out the efficiencies involved in using low-cost, low-power networks instead of more expensive existing cellular.
+
+### 5\. Balance security vs. cost
+
+Companies need to plan their IoT network with costs vs. security in mind, Mesirow said. “Open-source networks will be less expensive, but there may be security concerns,” he said.
+
+That’s true, of course, but there may be security concerns in _any_ network, not just open-source solutions. Still, Mesirow’s overall point remains valid: Enterprises need to carefully consider all the trade-offs they’re making in their IoT efforts.
+
+### 6\. Account for _all_ the value IoT provides
+
+Finally, Mesirow pointed out that “much of the cost-effectiveness comes from the _value_ the IoT provides,” and it’s important to consider the return, not just the investment.
+
+“For example,” Mesirow said, the IoT “increases productivity by enabling the remote monitoring and control of business operations. It saves on energy costs by automatically turning off lights and HVAC when spaces are vacant, and predictive maintenance alerts lead to fewer machine repairs. And geolocation can lead to personalized marketing to customer smartphones, which can increase sales to nearby stores.”
+
+**[ Now read this:[5 reasons the IoT needs its own networks][6] ]**
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/money_financial_salary_growth_currency_by-davidleshem-100787975-large.jpg
+[2]: https://twitter.com/robmesirow
+[3]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
+[4]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
+[5]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[6]: https://www.networkworld.com/article/3284506/5-reasons-the-iot-needs-its-own-networks.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190611 The carbon footprints of IT shops that train AI models are huge.md b/sources/talk/20190611 The carbon footprints of IT shops that train AI models are huge.md
new file mode 100644
index 0000000000..b440b8d65b
--- /dev/null
+++ b/sources/talk/20190611 The carbon footprints of IT shops that train AI models are huge.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The carbon footprints of IT shops that train AI models are huge)
+[#]: via: (https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+The carbon footprints of IT shops that train AI models are huge
+======
+Artificial intelligence (AI) model training can generate five times more carbon dioxide than a car does in a lifetime, researchers at the University of Massachusetts, Amherst find.
+![ipopba / Getty Images][1]
+
+A new research paper from the University of Massachusetts, Amherst looked at the carbon dioxide (CO2) generated over the course of training several common large artificial intelligence (AI) models and found that the process can generate nearly five times the amount an average American car emits over its lifetime, including the manufacturing of the car itself.
+
+The [paper][2] specifically examined the model training process for natural-language processing (NLP), which is how AI handles natural language interactions. The study found that during the training process, more than 626,000 pounds of carbon dioxide is generated.
+
+This is significant, since AI training is one IT process that has remained firmly on-premises and not moved to the cloud. Very expensive equipment is needed, as are large volumes of data, so the cloud isn’t the right fit for most AI training, and the report notes this. Plus, IT shops want to keep that kind of IP in house. So, if you are experimenting with AI, that power bill is going to go up.
+
+**[ Read also:[How to plan a software-defined data-center network][3] ]**
+
+While the report used carbon dioxide as a measure, that’s still the product of electricity generation. Training involves the use of the most powerful processors, typically Nvidia GPUs, and they are not known for being low-power draws. And as the paper notes, “model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time.”
+
+Training is the most processor-intensive portion of AI. It can take days, weeks, or even months for a model to “learn” what it needs to know. That means power-hungry Nvidia GPUs running at full utilization for the entire time. In this case, the models are learning how to handle and process natural-language questions, rather than the broken strings of keywords typical of a Google search.
+
+The report said training one model with a neural architecture generated 626,155 pounds of CO2. By contrast, one passenger flying round trip between New York and San Francisco would generate 1,984 pounds of CO2, an average American would generate 11,023 pounds in one year, and a car would generate 126,000 pounds over the course of its lifetime.
+
+### How the researchers calculated the CO2 amounts
+
+The researchers used four models in the NLP field that have been responsible for the biggest leaps in performance. They are Transformer, ELMo, BERT, and GPT-2. They trained all of the models on a single Nvidia Titan X GPU, with the exception of ELMo which was trained on three Nvidia GTX 1080 Ti GPUs. Each model was trained for a maximum of one day.
+
+**[[Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][4] ]**
+
+They then used the number of training hours listed in the model’s original papers to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the U.S.
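The conversion the researchers describe can be sketched as a simple calculation: total energy is average power draw times training hours, scaled up by a data-center overhead factor, then multiplied by the grid's carbon intensity. The coefficients below (a PUE of 1.58 and 0.954 lbs CO2 per kWh for the average U.S. energy mix) and the 10 kW example cluster are illustrative assumptions, not values taken verbatim from the paper.

```python
def training_co2_lbs(avg_power_watts, hours, pue=1.58, lbs_co2_per_kwh=0.954):
    """Estimate pounds of CO2 emitted by a training run.

    avg_power_watts  -- average combined draw of GPUs, CPU, and DRAM
    hours            -- total wall-clock training time
    pue              -- data-center power usage effectiveness (overhead)
    lbs_co2_per_kwh  -- assumed average U.S. grid carbon intensity
    """
    kwh = avg_power_watts * hours / 1000 * pue  # energy incl. overhead
    return kwh * lbs_co2_per_kwh

# e.g. a hypothetical 10 kW cluster training for one month (720 hours):
print(round(training_co2_lbs(10_000, 720)))  # → 10853
```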
+
+The big takeaway is that computational costs start out relatively inexpensive, but they mushroom when additional tuning steps are used to increase the model’s final accuracy. A tuning process known as neural architecture search ([NAS][5]) is the worst offender because it does so much processing. NAS is an algorithm that searches for the best neural network architecture. It is seriously advanced AI and requires the most processing time and power.
+
+The researchers suggest it would be beneficial to directly compare different models to perform a cost-benefit (accuracy) analysis.
+
+“To address this, when proposing a model that is meant to be re-trained for downstream use, such as re-training on a new domain or fine-tuning on a new task, authors should report training time and computational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subsequent consumers of these models to accurately assess whether the required computational resources,” the authors wrote.
+
+They also say researchers who are cost-constrained should pool resources and avoid the cloud, as cloud compute time is more expensive. In an example, it said a GPU server with eight Nvidia 1080 Ti GPUs and supporting hardware is available for approximately $20,000. To develop the sample models used in their study, that hardware would cost $145,000, plus electricity to run the models, about half the estimated cost to use on-demand cloud GPUs.
+
+“Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers,” they wrote.
+
+Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_artificial-intelligence_hand-on-virtual-screen-100795246-large.jpg
+[2]: https://arxiv.org/abs/1906.02243
+[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
+[5]: https://www.oreilly.com/ideas/what-is-neural-architecture-search
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190612 IoT security vs. privacy- Which is a bigger issue.md b/sources/talk/20190612 IoT security vs. privacy- Which is a bigger issue.md
new file mode 100644
index 0000000000..2f06f6afc1
--- /dev/null
+++ b/sources/talk/20190612 IoT security vs. privacy- Which is a bigger issue.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT security vs. privacy: Which is a bigger issue?)
+[#]: via: (https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+IoT security vs. privacy: Which is a bigger issue?
+======
+When it comes to the internet of things (IoT), security has long been a key concern. But privacy issues could be an even bigger threat.
+![Ring][1]
+
+If you follow the news surrounding the internet of things (IoT), you know that security issues have long been a key concern for IoT consumers, enterprises, and vendors. Those issues are very real, but I’m becoming increasingly convinced that related but fundamentally different _privacy_ vulnerabilities may well be an even bigger threat to the success of the IoT.
+
+In June alone, we’ve seen a flood of IoT privacy issues inundate the news cycle, and observers are increasingly sounding the alarm that IoT users should be paying attention to what happens to the data collected by IoT devices.
+
+**[ Also read:[It’s time for the IoT to 'optimize for trust'][2] and [A corporate guide to addressing IoT security][2] ]**
+
+Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprise users are immune to the issue. On the one hand, just like consumers, companies are vulnerable to their proprietary information being improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal—leading almost [two-thirds (63%) of consumers to describe IoT data collection as “creepy,”][3] while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.”
+
+### Ring becoming the poster child for IoT privacy issues
+
+As a case in point, let’s look at the case of [Ring, the IoT doorbell company now owned by Amazon][4]. Ring is [reportedly working with police departments to build a video surveillance network in residential neighborhoods][5]. Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. (While [Ring touts the security benefits][6] of working with law enforcement, it has asked police departments to end the practice of _requiring_ users to hand over footage, as it appears to violate the devices’ terms of service.)
+
+Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, [Ring workers in Ukraine reportedly watched customer feeds][7]. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.
+
+### Guilty by video?
+
+According to [Motherboard][8], “Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.” While the police are apparently appreciative of the “additional eyes that may see this woman and recognize her,” the ad calls the woman a thief even though she has not been charged with a crime, much less convicted!
+
+Ring may be today’s poster child for IoT privacy issues, but IoT privacy complaints are widespread. In many cases, it comes down to what IoT users—or others nearby—are getting in return for giving up their privacy. According to the [Guardian][9], for example, Google’s Sidewalk Labs smart city project is little more than “surveillance capitalism.” And while car owners may get a discount on auto insurance in return for sharing their driving data, that relationship is hardly set in stone. It may not be long before drivers have to give up their data just to get insurance at all.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**
+
+And as the recent [data breach at the U.S. Customs and Border Protection][11] once again demonstrates, private data is “[a genie without a bottle][12].” No matter what legal or technical protections are put in place, the data may always be revealed or used in unforeseen ways. Heck, when you put it all together, it’s enough to make you wonder [whether doorbells really need to be smart][13] at all.
+
+**Read more about IoT:**
+
+ * [Google’s biggest, craziest ‘moonshot’ yet][14]
+ * [What is the IoT? How the internet of things works][15]
+ * [What is edge computing and how it’s changing the network][16]
+ * [Most powerful internet of things companies][17]
+ * [10 Hot IoT startups to watch][18]
+ * [The 6 ways to make money in IoT][19]
+ * [What is digital twin technology? [and why it matters]][20]
+ * [Blockchain, service-centric networking key to IoT success][21]
+ * [Getting grounded in IoT networking and security][22]
+ * [Building IoT-ready networks must become a priority][23]
+ * [What is the Industrial IoT? [And why the stakes are so high]][24]
+
+
+
+Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/ringvideodoorbellpro-100794084-large.jpg
+[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
+[3]: https://www.cpomagazine.com/data-privacy/consumers-still-concerned-about-iot-security-and-privacy-issues/
+[4]: https://www.cnbc.com/2018/02/27/amazon-buys-ring-a-former-shark-tank-reject.html
+[5]: https://www.cnet.com/features/amazons-helping-police-build-a-surveillance-network-with-ring-doorbells/
+[6]: https://blog.ring.com/2019/02/14/how-rings-neighbors-creates-safer-more-connected-communities/
+[7]: https://www.theinformation.com/go/b7668a689a
+[8]: https://www.vice.com/en_us/article/pajm5z/amazon-home-surveillance-company-ring-law-enforcement-advertisements
+[9]: https://www.theguardian.com/cities/2019/jun/06/toronto-smart-city-google-project-privacy-concerns
+[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[11]: https://www.washingtonpost.com/technology/2019/06/10/us-customs-border-protection-says-photos-travelers-into-out-country-were-recently-taken-data-breach/?utm_term=.0f3a38aa40ca
+[12]: https://smartbear.com/blog/test-and-monitor/data-scientists-are-sexy-and-7-more-surprises-from/
+[13]: https://slate.com/tag/should-this-thing-be-smart
+[14]: https://www.networkworld.com/article/3058036/google-s-biggest-craziest-moonshot-yet.html
+[15]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[16]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[17]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[18]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[19]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[20]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[21]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[22]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[23]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[24]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[25]: https://www.facebook.com/NetworkWorld/
+[26]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190612 Software Defined Perimeter (SDP)- Creating a new network perimeter.md b/sources/talk/20190612 Software Defined Perimeter (SDP)- Creating a new network perimeter.md
new file mode 100644
index 0000000000..88a540e875
--- /dev/null
+++ b/sources/talk/20190612 Software Defined Perimeter (SDP)- Creating a new network perimeter.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Software Defined Perimeter (SDP): Creating a new network perimeter)
+[#]: via: (https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Software Defined Perimeter (SDP): Creating a new network perimeter
+======
+The way networks work today, with traffic patterns changing both internally and to the cloud, limits the effectiveness of the fixed perimeter.
+![monsitj / Getty Images][1]
+
+Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this design still serves as the foundation for most networking professionals, even though a lot has changed since its inception.
+
+More often than not, the fixed perimeter consists of a number of network and security appliances, creating a service-chained stack and resulting in appliance sprawl. The appliances a user must pass through to get to the internal LAN vary, but generally the stack consists of global load balancers, an external firewall, a DDoS appliance, a VPN concentrator, an internal firewall and, eventually, the LAN segments.
+
+The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it's just a matter of time. Someone with enough skill will eventually get through.
+
+**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2]**
+
+### Environmental changes – the cloud and mobile workforce
+
+The way networks work today, with traffic patterns changing both internally and to the cloud, limits the effectiveness of the fixed perimeter. Nowadays, we have a very fluid network perimeter with many points of entry.
+
+Imagine a castle with a portcullis controlling access. Gaining entry through the portcullis was easy: you only needed to get past one guard. There was only one way in and one way out. But today, in this digital world, we have many small doors and ways to enter, all of which need to be individually protected.
+
+This boils down to the introduction of cloud-based application services, which changes the location of the perimeter. As a result, the existing networking equipment used for the perimeter is topologically ill-located. Nowadays, everything that is important is outside the perimeter: remote-access workers and SaaS, IaaS and PaaS-based applications.
+
+Users require access to resources in various cloud services regardless of where those resources are located, resulting in complex-to-control multi-cloud environments. Objectively, users do not and should not care where the applications are located; they just require access to them. Also, an increasingly mobile workforce that demands anytime, anywhere access from a variety of devices has challenged enterprises to support it.
+
+There is also a growing number of entities internal to the network, such as BYOD devices, on-site contractors, and partners, and their numbers will continue to rise. This ultimately leads to a world of over-connected networks.
+
+### Over-connected networks
+
+Over-connected networks result in complex configurations of network appliances, which in turn produce large and complex policies without any context.
+
+These appliances provide coarse-grained access to a variety of services, where the IP address does not correspond to the actual user. Traditional appliances that use static configurations to limit incoming and outgoing traffic commonly rely on information in the IP packet and the port number.
+
+Essentially, there is no notion of policy, and no explanation of why a given source IP address is on the list. This approach fails to take into account any notion of trust, and cannot dynamically adjust access in relation to device, user and application request events.
+
+### Problems with IP addresses
+
+Back in the early 1990s, RFC 1597 declared three IP ranges reserved for private use: 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. If an end host was configured with one of these addresses, it was considered more secure. However, this assumption of trust was shattered with the passage of time and it still haunts us today.
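
These reserved ranges are easy to check programmatically. A minimal Python sketch, using only the standard-library `ipaddress` module (the helper name here is ours, not from any RFC), shows which addresses fall inside them:

```python
import ipaddress

# The three blocks RFC 1597 (later RFC 1918) reserved for private use.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1597_private(addr: str) -> bool:
    """Return True if the address lies in one of the reserved ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_rfc1597_private("192.168.1.10"))  # True: inside 192.168.0.0/16
print(is_rfc1597_private("8.8.8.8"))       # False: a public address
```

Of course, being inside one of these ranges says nothing about trustworthiness; that assumption is exactly what broke down.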
+
+Network Address Translation (NAT) also changed things to a great extent. NAT allowed internal, trusted hosts to communicate directly with external, untrusted hosts. However, since the Transmission Control Protocol (TCP) is bidirectional, it allows external hosts to inject data while connecting back to the internal hosts.
+
+Also, there is no contextual information attached to an IP address, as its sole purpose is connectivity. If you have someone’s IP address, you can connect to them; authentication is handled higher up the stack.
+
+Not only do users’ IP addresses change regularly, but there is also no one-to-one correspondence between users and IP addresses. Anyone can communicate from any IP address they please and can also insert themselves between you and the trusted resource.
+
+Have you ever heard of a 20-year-old computer that responds to an Internet Control Message Protocol (ICMP) request, yet no one knows where it is? Such a machine would not exist on a zero trust network, where the network stays dark until the administrator turns the lights on with a whitelist policy rule set. This is contrary to the legacy blacklist policy rule set. You can find more information on zero trust in my course: [Zero Trust Networking: The Big Picture][3].
+
+Therefore, we can’t rely on IP addresses to do much more than connect. We have to move away from IP addresses and network location as proxies for access trust. Network location can no longer be the driver of network access levels; it is not equipped to decide the trustworthiness of a device, user or application.
+
+### Visibility – a major gap
+
+When we analyze networking and its flaws, visibility is a major gap in today’s hybrid environments. By and large, enterprise networks are complex beasts, and more often than not networking pros do not have accurate data or insights into who or what is accessing the network resource.
+
+IT does not have the visibility in place to detect, for example, insecure devices, unauthorized users and potentially harmful connections that could propagate malware or perform data exfiltration.
+
+Also, once you know how network elements connect, how do you ensure that they don’t reconnect through a broader definition of connectivity? For this, you need contextual visibility: full visibility into the network to see who and what is connecting, when, and how.
+
+### What’s the workaround?
+
+A new approach is needed that enables the application owners to protect the infrastructure located in a public or private cloud and on-premise data center. This new network architecture is known as [software-defined perimeter][4] (SDP). Back in 2013, Cloud Security Alliance (CSA) launched the SDP initiative, a project designed to develop the architecture for creating more robust networks.
+
+The principles behind SDPs are not entirely new. Organizations within the DoD and Intelligence Communities (IC) have implemented a similar network architecture that is based on authentication and authorization prior to network access.
+
+Typically, every internal resource is hidden behind an appliance. And a user must authenticate before visibility of the authorized services is made available and access is granted.
+
+### Applying the zero trust framework
+
+SDP is an extension of [zero trust][5], which removes the implicit trust from the network. The concept of SDP started with Google’s BeyondCorp, which is the general direction the industry is heading in right now.
+
+Google’s BeyondCorp puts forward the idea that the corporate network perimeter no longer has any meaning. Traditionally, the trust for accessing an application was set by a static network perimeter containing a central appliance, which permitted inbound and outbound access based on a very coarse policy.
+
+However, access to the application should be based on other parameters such as who the user is, the judgment of the security stance of the device, followed by some continuous assessment of the session. Rationally, only then should access be permitted.
+
+Let’s face it, the assumption that internal traffic can be trusted is flawed; zero trust assumes that all hosts internal to the network are internet-facing and therefore hostile.
+
+### What is software-defined perimeter (SDP)?
+
+SDP aims to deploy perimeter functionality as dynamically provisioned perimeters for clouds, hybrid environments, and on-premise data center infrastructures. A dynamic tunnel is often created automatically during the session: a one-to-one mapping between the requesting entity and the trusted resource. The important point to note here is that perimeters are formed dynamically, not solely to match a fixed location already designed by the network team.
+
+SDP relies on two major pillars: the authentication and authorization stages. SDPs require endpoints to be authenticated and authorized before obtaining network access to the protected entities. Then, encrypted connections are created in real time between the requesting systems and the application infrastructure.
+
+Authenticating and authorizing users and their devices before allowing even a single packet to reach the target service enforces what's known as least privilege at the network layer. Essentially, the concept of least privilege is that an entity is granted only the minimum privileges it needs to get its work done. Within a zero trust network, privilege is more dynamic than in traditional networks, since many different attributes of activity are used to determine the trust score.
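
The ordering matters: authenticate first, authorize second, connect last. A hypothetical Python sketch of such a gate (the policy table, token check, and names are illustrative assumptions, not any real SDP product's API) might look like this:

```python
class AccessDenied(Exception):
    pass

# Hypothetical policy: which authenticated (user, device) pairs may reach
# which services. Least privilege: only explicitly granted services are visible.
POLICY = {
    ("alice", "managed-laptop"): {"payroll-app"},
    ("bob", "byod-phone"): {"mail-gateway"},
}

def request_access(user, device, credentials, service):
    # Stage 1: authenticate the user and device before anything else.
    if credentials != "valid-token":  # stand-in for a real identity check
        raise AccessDenied("authentication failed")
    # Stage 2: authorize against policy; unauthorized services stay "dark".
    allowed = POLICY.get((user, device), set())
    if service not in allowed:
        raise AccessDenied("not authorized for this service")
    # Only now is a (nominally encrypted) one-to-one tunnel established.
    return f"tunnel: {user}/{device} -> {service}"

print(request_access("alice", "managed-laptop", "valid-token", "payroll-app"))
```

Note that a failed check raises before any connection detail is returned, which is the sketch's analogue of never letting a packet reach the target service.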
+
+### The dark network
+
+Connectivity is based on a need-to-know model. Under this model, no DNS information, internal IP addresses or visible ports of internal network infrastructure are transmitted. This is why SDP assets are considered “dark”. As a result, SDP decouples concerns about the network from the application: applications and users are treated abstractly, and whether they sit on-premise or in the cloud is irrelevant to the assigned policy.
+
+Access is granted directly between the users and their devices to the application and resource, regardless of the underlying network infrastructure. There simply is no concept of inside and outside of the network. This ultimately removes the network location point as a position of advantage and also eliminates the excessive implicit trust that IP addresses offer.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][6]**
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
+[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
+[3]: http://pluralsight.com/courses/zero-trust-networking-big-picture
+[4]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
+[5]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
+[6]: /contributor-network/signup.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md b/sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md
new file mode 100644
index 0000000000..a2271052c9
--- /dev/null
+++ b/sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (When to use 5G, when to use Wi-Fi 6)
+[#]: via: (https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html)
+[#]: author: (Lee Doyle )
+
+When to use 5G, when to use Wi-Fi 6
+======
+5G is a cellular service and Wi-Fi 6 a short-range wireless access technology; each has attributes that make it useful in specific enterprise roles.
+![Thinkstock][1]
+
+We have seen hype about whether [5G][2] cellular or [Wi-Fi 6][3] will win in the enterprise, but the reality is that the two are largely complementary with an overlap for some use cases, which will make for an interesting competitive environment through the early 2020s.
+
+### The potential for 5G in enterprises
+
+The promise of 5G for enterprise users is higher speed connectivity with lower latency. Cellular technology uses licensed spectrum which largely eliminates potential interference that may occur with unlicensed Wi-Fi spectrum. Like current 4G LTE technologies, 5G can be supplied by cellular wireless carriers or built as a private network.
+
+The architecture for 5G requires many more radio access points and can suffer from poor or no connectivity indoors. So, the typical organization needs to assess its [current 4G and potential 5G service][4] for its PCs, routers and other devices. Deploying indoor microcells, repeaters and distributed antennas can help solve indoor 5G service issues. As with 4G, the best enterprise 5G use case is for truly mobile connectivity such as public safety vehicles and in non-carpeted environments like mining, oil and gas extraction, transportation, farming and some manufacturing.
+
+In addition to broad mobility, 5G offers advantages in terms of authentication while roaming and speed of deployment as might be needed to provide WAN connectivity to a pop-up office or retail site. 5G will have the capacity to offload traffic in cases of data congestion such as live video. As 5G standards mature, the technology will improve its options for low-power IoT connectivity.
+
+5G will gradually roll out over the next four to five years starting in large cities and specific geographies; 4G technology will remain prevalent for a number of years. Enterprise users will need new devices, dongles and routers to connect to 5G services. For example, Apple iPhones are not expected to support 5G until 2020, and IoT devices will need specific cellular compatibility to connect to 5G.
+
+Doyle Research expects the 1Gbps and higher bandwidth promised by 5G will have a significant impact on the SD-WAN market. 4G LTE already enables cellular services to become a primary WAN link. 5G is likely to be cost competitive or cheaper than many wired WAN options such as MPLS or the internet. 5G gives enterprise WAN managers more options to provide increased bandwidth to their branch sites and remote users – potentially displacing MPLS over time.
+
+### The potential for Wi-Fi 6 in enterprises
+
+Wi-Fi is nearly ubiquitous for connecting mobile laptops, tablets and other devices to enterprise networks. Wi-Fi 6 (802.11ax) is the latest version of Wi-Fi and brings the promise of increased speed, low latency, improved aggregate bandwidth and advanced traffic management. While it has some similarities with 5G (both are based on orthogonal frequency division multiple access), Wi-Fi 6 is less prone to interference, requires less power (which prolongs device battery life) and has improved spectral efficiency.
+
+**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][5] ]**
+
+As is typical for Wi-Fi, early [vendor-specific versions of Wi-Fi 6][6] are currently available from many manufacturers. The Wi-Fi Alliance plans certification of Wi-Fi 6-standard gear in 2020. Most enterprises will upgrade to Wi-Fi 6 along standard access-point life cycles of three years or so unless they have specific performance/latency requirements that prompt an upgrade sooner.
+
+Wi-Fi access points continue to be subject to interference, and it can be challenging to design and site APs to provide appropriate coverage. Enterprise LAN managers will continue to need vendor-supplied tools and partners to configure optimal Wi-Fi coverage for their organizations. Wi-Fi 6 solutions must be integrated with wired campus infrastructure. Wi-Fi suppliers need to do a better job at providing unified network management across wireless and wired solutions in the enterprise.
+
+### Need for wired backhaul
+
+For both technologies, wireless is combined with wired-network infrastructure to deliver high-speed communications end-to-end. In the enterprise, Wi-Fi is typically paired with wired Ethernet switches for campus and larger branches. Some devices are connected via cable to the switch, others via Wi-Fi – and laptops may use both methods. Wi-Fi access points are connected via Ethernet inside the enterprise and to the WAN or internet by fiber connections.
+
+The architecture for 5G makes extensive use of fiber optics to connect the distributed radio access network back to the core of the 5G network. Fiber is typically required to provide the high bandwidth needed to connect 5G endpoints to SaaS-based applications, and to provide live video and high-speed internet access. Private 5G networks will also have to meet high-speed wired-connectivity requirements.
+
+### Handoff issues
+
+Enterprise IT managers need to be concerned with handoff challenges as phones switch between 5G and Wi-Fi 6. These issues can affect performance and user satisfaction. Several groups are working towards standards to promote better interoperability between Wi-Fi 6 and 5G. As the architectures of Wi-Fi 6 align with 5G, the experience of moving between cellular and Wi-Fi networks should become more seamless.
+
+### 5G vs Wi-Fi 6 depends on locations, applications and devices
+
+Wi-Fi 6 and 5G are competitive with each other for specific situations in the enterprise environment that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments and cellular wins for broad outdoor coverage.
+
+Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment with many overlapping use cases.
+
+### Recommendations for IT Leaders
+
+While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks – both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and for some IoT connectivity.
+
+4G LTE moving to 5G will remain the truly mobile technology for phone and tablet connectivity, an option (via dongle) for PC connections, and increasingly popular for connecting some IoT devices. 5G WAN links will increasingly become standard as a backup for improved SD-WAN reliability and as primary links for remote offices.
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html
+
+作者:[Lee Doyle][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2017/07/wi-fi_wireless_communication_network_abstract_thinkstock_610127984_1200x800-100730107-large.jpg
+[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[3]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
+[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
+[6]: https://www.networkworld.com/article/3309439/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190613 Data centers should sell spare UPS capacity to the grid.md b/sources/talk/20190613 Data centers should sell spare UPS capacity to the grid.md
new file mode 100644
index 0000000000..69b4356661
--- /dev/null
+++ b/sources/talk/20190613 Data centers should sell spare UPS capacity to the grid.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Data centers should sell spare UPS capacity to the grid)
+[#]: via: (https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Data centers should sell spare UPS capacity to the grid
+======
+Distributed Energy is gaining traction, providing an opportunity for data centers to sell the excess power stored in data center UPS batteries to the grid.
+![Getty Images][1]
+
+The energy storage capacity in uninterruptible power supply (UPS) batteries, often languishing dormant in data centers, could provide new revenue streams for those data centers, says Eaton, a major electrical power management company.
+
+Excess grid-generated power, created during times of low demand, should be stored on the now-proliferating lithium backup power systems strewn worldwide in data centers, Eaton says. Then, using an algorithm tied to grid demand, electricity would be withdrawn as necessary for grid use and fed back onto the backup batteries when not needed.
+
+**[ Read also:[How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+The concept is called Distributed Energy, and it has been gaining traction in part because electrical generation is changing: emerging green power sources such as wind and solar, now used at grid level, have considerations that differ from those of the now-retiring fossil-fuel generation. You can generate solar only in daylight, for example, yet much demand takes place on dark evenings.
+
+Coal, gas, and oil deliveries have always been, to a great extent, pre-planned, just-in-time, and used for electrical generation in real time. Nowadays, though, fluctuations between supply, storage, and demand are kicking in. Electricity storage on the grid is required.
+
+Eaton says that by piggy-backing on existing power banks, electricity distribution could be evened out better. The utilities would deliver power more efficiently, despite the peaks and troughs in demand—with the data center UPS, in effect, acting like a quasi-grid-power storage battery bank, or virtual power plant.
+
+The objective of this UPS use case, called EnergyAware, is to regulate frequency in the grid. That’s related to the tolerances needed to make the grid work—the cycles per second, or hertz, inherent in electrical current can’t deviate too much. Abnormalities happen if there’s a sudden spike in demand but no power on hand to supply the surge.
+
+### How the Distributed Energy concept works
+
+The distributed energy resource (DER), which can be added to any existing lithium-ion battery bank in any building, allows energy to be consumed or distributed based on a frequency-regulation grid-demand algorithm. It charges or discharges the backup battery, connected to the grid, thereby balancing the grid frequency.
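
As a rough illustration, the charge-or-discharge decision could be sketched as below; the 50 Hz nominal frequency and the deadband value are illustrative assumptions, not Eaton's actual EnergyAware algorithm:

```python
NOMINAL_HZ = 50.0    # assumed nominal grid frequency (60.0 in North America)
DEADBAND_HZ = 0.05   # assumed tolerance before the UPS reacts

def regulation_action(grid_hz):
    """Decide whether the UPS battery should charge, discharge, or idle.

    Frequency below nominal means demand exceeds supply, so the battery
    discharges a micro-burst onto the grid; frequency above nominal means
    surplus supply, so the battery absorbs energy by charging.
    """
    deviation = grid_hz - NOMINAL_HZ
    if abs(deviation) <= DEADBAND_HZ:
        return "idle"
    return "discharge" if deviation < 0 else "charge"

print(regulation_action(49.90))  # grid under-frequency: discharge
print(regulation_action(50.02))  # within tolerance: idle
```

In practice the response would also be bounded by the battery's state of charge, so the UPS always keeps enough reserve to fulfil its primary backup role.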
+
+Often, not much power will need to be removed, just “micro-bursts of energy,” explains Sean James, director of Energy Research at Microsoft, in an Eaton-published promotional video. Microsoft Innovation Center in Virginia has been working with Eaton on the project. Those bursts are enough to get the frequency tolerances back on track, but the UPS still functions as designed.
+
+Eaton says data centers should start participating in energy markets. That could mean bidding, as a producer of power, to those who need to buy it—the electricity market, also known as the grid. Data centers could conceivably even switch on generators to run the data halls if the price for their battery-stored power were particularly lucrative at certain times.
+
+“A data center in the future wouldn’t just be a huge load on the grid,” James says. “In the future, you don’t have a data center or a power plant. It’s something in the middle. A data plant,” he says on the Eaton [website][4].
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/business_continuity_server-100777720-large.jpg
+[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.eaton.com/us/en-us/products/backup-power-ups-surge-it-power-distribution/backup-power-ups/dual-purpose-ups-technology.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190617 5 transferable higher-education skills.md b/sources/talk/20190617 5 transferable higher-education skills.md
new file mode 100644
index 0000000000..db0f584aaf
--- /dev/null
+++ b/sources/talk/20190617 5 transferable higher-education skills.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 transferable higher-education skills)
+[#]: via: (https://opensource.com/article/19/6/5-transferable-higher-education-skills)
+[#]: author: (Stephon Brown https://opensource.com/users/stephb)
+
+5 transferable higher-education skills
+======
+If you're moving from the Ivory Tower to the Matrix, you already have
+the foundation for success in the developer role.
+![Two hands holding a resume with computer, clock, and desk chair ][1]
+
+My transition from a higher-education professional into the tech realm was comparable to moving from a pond into an ocean. There was so much to learn, and after learning, there was still so much more to learn!
+
+Rather than going down the rabbit hole and being overwhelmed by what I did not know, in the last two to three months, I have been able to take comfort in the realization that I was not entirely out of my element as a developer. The skills I acquired during my six years as a university professional gave me the foundation to be successful in the developer role.
+
+These skills are transferable in any direction you plan to go within or outside tech, and it's valuable to reflect on how they apply to your new position.
+
+### 1\. Composition and documentation
+
+Higher education is replete with opportunities to develop skills related to composition and communication. In most cases, clear writing and communication are mandatory requirements for university administrative and teaching positions. Although you may not yet be well-versed in deep technical concepts, learning documentation and writing your progress may be two of the strongest skills you bring as a former higher education administrator. All of those "In response to…" emails will finally come in handy when describing the inner workings of a class or leaving succinct comments for other developers to follow what you have implemented.
+
+### 2\. Problem-solving and critical thinking
+
+Whether you've been an adviser who sits with students and painstakingly develops class schedules for graduation or a finance buff who balances government funds, you will not leave critical thinking behind as you transition into a developer role. Although your critical thinking may have seemed specialized for your work, the skill of turning problems into opportunities is not lost when contributing to code. The experience gained while spending long days and nights revising recruitment strategies will be necessary when composing algorithms and creative ways of delivering data. Continue to foster a passion for solving problems, and you will not have any trouble becoming an efficient and skillful developer.
+
+### 3\. Communication
+
+Though it may seem to overlap with writing (above), communication spans verbal and written disciplines. When you're interacting with clients and leadership, you may have a leg up over your peers because of your higher-education experience. Being approachable and understanding how to manage interactions are skills that some software practitioners may not have fostered to an impactful level. Although you will experience days of staring at a screen and banging your head against the keyboard, you can rest well in knowing you can describe technical concepts and interact with a wide range of audiences, from clients to peers.
+
+### 4\. Leadership
+
+Sitting on that panel; planning that event; leading that workshop. All of those experiences give you the grounding to plan and lead smaller projects as a new developer. Leadership is not limited to heading up large and small teams; its essence lies in taking initiative. This can mean volunteering to research a new feature or to write more extensive unit tests for your code. However you use it, your foundation as an educator will allow you to go further in technology development and maintenance.
+
+### 5\. Research
+
+You can Google with the best of them. Being able to distill your query down to the idea you are searching for is characteristic of a higher-education professional. Most administrator or educator jobs focus on solving problems through a defined process for qualitative, quantitative, or mixed results; therefore, cultivating your scientific mind is valuable when providing software solutions. Your research skills also open opportunities for branching into data science and machine learning.
+
+### Bonus: Collaboration
+
+Being able to reach across various offices and fields for event planning and program implementation fits well with team collaboration—both within your new team and across development teams. This may cross into the project-management realm, but being able to plan and divide work between teams and establish accountability will allow you, as a new developer, to understand the software development lifecycle a little more intimately because of your past related experience.
+
+### Summary
+
+As a developer who jumped head-first into technology after years of walking students through the process of navigating higher education, I have felt [imposter syndrome][2] as a constant fear. However, I have been able to take heart in knowing my experience as an educator and an administrator has not gone in vain. If you are like me, be encouraged in knowing that these transferable skills, many of which are classed as soft skills, will continue to benefit you as a developer and a professional.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/6/5-transferable-higher-education-skills
+
+作者:[Stephon Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/stephb
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
+[2]: https://en.wikipedia.org/wiki/Impostor_syndrome
diff --git a/sources/talk/20190618 17 predictions about 5G networks and devices.md b/sources/talk/20190618 17 predictions about 5G networks and devices.md
new file mode 100644
index 0000000000..d8833f9887
--- /dev/null
+++ b/sources/talk/20190618 17 predictions about 5G networks and devices.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (17 predictions about 5G networks and devices)
+[#]: via: (https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+17 predictions about 5G networks and devices
+======
+Not surprisingly, the new Ericsson Mobility Report is bullish on the potential of 5G technology. Here’s a quick look at the most important numbers.
+![Vertigo3D / Getty Images][1]
+
+_“As market after market switches on 5G, we are at a truly momentous point in time. No previous generation of mobile technology has had the potential to drive economic growth to the extent that 5G promises. It goes beyond connecting people to fully realizing the Internet of Things (IoT) and the Fourth Industrial Revolution.”_ —The opening paragraph of the [June 2019 Ericsson Mobility Report][2]
+
+Almost every significant technology advancement now goes through what [Gartner calls the “hype cycle.”][3] These days, everyone expects a new technology to be met with gushing optimism and dreamy visions of how it’s going to change the world in the blink of an eye. After a while, we all come to expect the vendors and the press to go overboard with excitement, at least until reality and disappointment set in when things don’t pan out exactly as expected.
+
+**[ Also read:[The time of 5G is almost here][4] ]**
+
+Even with all that in mind, though, Ericsson’s whole-hearted embrace of 5G in its Mobility Report is impressive. The optimism is backed up by lots of numbers, but they can be hard to tease out of the 36-page document. So, let’s recap some of the most important top-line predictions (with my comments at the end).
+
+### Worldwide 5G growth projections
+
+ 1. “More than 10 million 5G subscriptions are projected worldwide by the end of 2019.”
+ 2. “[We] now expect there to be 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024. This will account for over 20 percent of all mobile subscriptions at that time. The peak of LTE subscriptions is projected for 2022, at around 5.3 billion subscriptions, with the number declining slowly thereafter.”
+ 3. “In 2024, 5G networks will carry 35 percent of mobile data traffic globally.”
+ 4. “5G can cover up to 65 percent of the world’s population in 2024.”
+ 5. ”NB-IoT and Cat-M technologies will account for close to 45 percent of cellular IoT connections in 2024.”
+ 6. “By the end of 2024, nearly 35 percent of cellular IoT connections will be Broadband IoT, with 4G connecting the majority.” But 5G connections will support more advanced use cases.
+ 7. “Despite challenging 5G timelines, device suppliers are expected to be ready with different band and architecture support in a range of devices during 2019.”
+ 8. “Spectrum sharing … chipsets are currently in development and are anticipated to be in 5G commercial devices in late 2019.”
+ 9. “[VoLTE][5] is the foundation for enabling voice and communication services on 5G devices. Subscriptions are expected to reach 2.1 billion by the end of 2019. … The number of VoLTE subscriptions is projected to reach 5.9 billion by the end of 2024, accounting for more than 85 percent of combined LTE and 5G subscriptions.”
+
+
+
+![][6]
+
+### Regional 5G projections
+
+ 1. “In North America, … service providers have already launched commercial 5G services, both for fixed wireless access and mobile. … By the end of 2024, we anticipate close to 270 million 5G subscriptions in the region, accounting for more than 60 percent of mobile subscriptions.”
+ 2. “In Western Europe … The momentum for 5G in the region was highlighted by the first commercial launch in April. By the end of 2024, 5G is expected to account for around 40 percent of mobile subscriptions.”
+ 3. “In Central and Eastern Europe, … The first 5G subscriptions are expected in 2019, and will make up 15 percent of subscriptions in 2024.”
+ 4. “In North East Asia, … the region’s 5G subscription penetration is projected to reach 47 percent [by the end of 2024].”
+ 5. “[In India,] 5G subscriptions are expected to become available in 2022 and will represent 6 percent of mobile subscriptions at the end of 2024.”
+ 6. “In the Middle East and North Africa, we anticipate commercial 5G deployments with leading communications service providers during 2019, and significant volumes in 2021. … Around 60 million 5G subscriptions are forecast for the end of 2024, representing 3 percent of total mobile subscriptions.”
+ 7. “Initial 5G commercial devices are expected in the [South East Asia and Oceania] region during the first half of 2019. By the end of 2024, it is anticipated that almost 12 percent of subscriptions in the region will be for 5G.”
+ 8. “In Latin America … the first 5G deployments will be possible in the 3.5GHz band during 2019. Argentina, Brazil, Chile, Colombia, and Mexico are anticipated to be the first countries in the region to deploy 5G, with increased subscription uptake forecast from 2020. By the end of 2024, 5G is set to make up 7 percent of mobile subscriptions.”
+
+
+
+### Is 5G really so inevitable?
+
+Considered individually, these predictions all seem perfectly reasonable. Heck, 10 million 5G subscriptions is only a drop in the global bucket. And rumors are already flying that Apple’s next round of iPhones will include 5G capability. Also, 2024 is still five years in the future, so why wouldn’t the faster connections drive impressive traffic stats? Similarly, it seems plausible that North America and North East Asia will experience the fastest 5G penetration.
+
+But when you look at them all together, these numbers project a sense of 5G inevitability that could well be premature. It will take a _lot_ of spending, by a lot of different parties—carriers, chip makers, equipment vendors, phone manufacturers, and consumers—to make this kind of growth a reality.
+
+I’m not saying 5G won’t take over the world. I’m just saying that when so many things have to happen in a relatively short time, there are a lot of opportunities for the train to jump the tracks. Don’t be surprised if it takes longer than expected for 5G to turn into the worldwide default standard Ericsson—and everyone else—seems to think it will inevitably become.
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/5g_wireless_technology_network_connections_by_credit-vertigo3d_gettyimages-1043302218_3x2-100787550-large.jpg
+[2]: https://www.ericsson.com/assets/local/mobility-report/documents/2019/ericsson-mobility-report-june-2019.pdf
+[3]: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
+[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[5]: https://www.gsma.com/futurenetworks/technology/volte/
+[6]: https://images.idgesg.net/images/article/2019/06/ericsson-mobility-report-june-2019-graph-100799481-large.jpg
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190618 Why your workplace arguments aren-t as effective as you-d like.md b/sources/talk/20190618 Why your workplace arguments aren-t as effective as you-d like.md
new file mode 100644
index 0000000000..54a0cca26f
--- /dev/null
+++ b/sources/talk/20190618 Why your workplace arguments aren-t as effective as you-d like.md
@@ -0,0 +1,144 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why your workplace arguments aren't as effective as you'd like)
+[#]: via: (https://opensource.com/open-organization/19/6/barriers-productive-arguments)
+[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland/users/ron-mcfarland)
+
+Why your workplace arguments aren't as effective as you'd like
+======
+Open organizations rely on open conversations. These common barriers to
+productive argument often get in the way.
+![Arrows pointing different directions][1]
+
+Transparent, frank, and often contentious arguments are part of life in an open organization. But how can we be sure those conversations are _productive_ —not _destructive_?
+
+This is the second installment of a two-part series on how to argue and actually achieve something. In the [first article][2], I mentioned what arguments are (and are not), according to author Sinnott-Armstrong in his book _Think Again: How to Reason and Argue._ I also offered some suggestions for making arguments as productive as possible.
+
+In this article, I'll examine three barriers to productive arguments that Sinnott-Armstrong elaborates in his book: incivility, polarization, and language issues. Finally, I'll explain his suggestions for addressing those barriers.
+
+### Incivility
+
+"Incivility" has become a social concern in recent years. Consider this: As a tactic in arguments, incivility _can_ have an effect in certain situations—and that's why it's a common strategy. Sinnott-Armstrong notes that incivility:
+
+ * **Attracts attention:** Incivility draws people’s attention in one direction, sometimes to misdirect it from, or outright obscure, other issues. It redirects focus to shocking statements. Incivility, exaggeration, and extremism can increase the size of an audience.
+
+ * **Energizes:** Sinnott-Armstrong writes that seeing someone being uncivil on a topic of interest can generate energy from a state of powerlessness.
+
+ * **Stimulates memory:** Forgetting shocking statements is difficult; they stick in our memory more easily than statements that are less surprising to us.
+
+ * **Excites the powerless:** The groups most likely to believe and invest in someone being uncivil are those that feel they're powerless and being treated unfairly.
+
+
+
+
+Unfortunately, incivility as a tactic in arguments has its costs. One such cost is polarization.
+
+### Polarization
+
+Sinnott-Armstrong writes about several forms of polarization:
+
+ * **Distance:** If two people’s or groups’ views are far apart on some relevant scale, with significant disagreements and little common ground, then they’re polarized.
+
+ * **Differences:** If two people or groups have fewer values and beliefs _in common_ than they _don't have in common_ , then they're polarized.
+
+ * **Antagonism:** Groups are more polarized the more they feel hatred, disdain, fear, or other negative emotions toward other people or groups.
+
+ * **Incivility:** Groups tend to be more polarized when they talk more negatively about people of the other groups.
+
+ * **Rigidity:** Groups tend to be more polarized when they treat their values as indisputable and will not compromise.
+
+ * **Gridlock:** Groups tend to be more polarized when they're unable to cooperate and work together toward common goals.
+
+
+
+
+And I'll add one more form of polarization to Sinnott-Armstrong's list:
+
+ * **Non-disclosure:** Groups tend to be more polarized when one or both of the groups refuses to share valid, verifiable information—or when they distract each other with useless or irrelevant information. One of the ways people polarize is by not talking to each other and withholding information. Similarly, they raise subjects that distract from the issue at hand. Some issues are difficult to talk about, but discussing them is how solutions can be explored.
+
+
+
+### Language issues
+
+Language issues can be argument-stoppers, Sinnott-Armstrong says. In particular, he outlines the following language-related barriers to productive argument.
+
+ * **Guarding:** Using words like "all" can make a statement unbelievable; words like "sometimes" can make a statement too vague.
+
+ * **Assuring:** Simply stating "trust me, I know what I'm talking about," without offering evidence that this is the case, can impede arguments.
+
+ * **Evaluating:** Offering an evaluation of something—like saying "It is good"―without any supporting reasoning.
+
+ * **Discounting:** This involves anticipating what the other person will say and attempting to weaken it as much as possible by framing an argument in a negative way. (Contrast these two sentences, for example: "Ramona is smart but boring" and "Ramona is boring but smart." The difference is subtle, but you'd probably want to spend less time with Ramona if you heard the first statement about her than if you heard the second.)
+
+
+
+
+Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes. In addition, Sinnott-Armstrong specifically draws readers' attention to two other language problems that can kill productive debates: vagueness and ambiguity.
+
+ * **Vagueness:** This occurs when a word or sentence is not precise enough and has too many possible interpretations, which leads to confusion. Consider the sentence "It is big." "It" must be defined if it's not already obvious to everyone in the conversation. And a word like "big" must be clarified through comparison to something that everyone has agreed upon.
+
+ * **Ambiguity:** This occurs when a sentence could have two distinct meanings. For example: "Police killed man with axe." Who was holding the axe, the man or the police? "My neighbor had a friend for dinner." Did your neighbor invite a friend to share a meal—or did she eat her friend?
+
+
+
+
+### Overcoming barriers
+
+To help readers avoid these common roadblocks to productive arguments, Sinnott-Armstrong recommends a simple, four-step process for evaluating another person's argument.
+
+ 1. **Observation:** First, observe a stated opinion and its related evidence to determine the precise nature of the claim. This might require you to ask some questions for clarification (you'll remember I employed this technique when arguing with my belligerent uncle, which I described [in the first article of this series][2]).
+
+ 2. **Hypothesis:** Develop some hypothesis about the argument. In this case, the hypothesis should be an inference based on generally acceptable standards (for more on the structure of arguments themselves, also see [the first part of this series][2]).
+
+ 3. **Comparison:** Compare that hypothesis with others and evaluate which is more accurate. More important issues will require you to conduct more comparisons. In other cases, premises are so obvious that no further explanation is required.
+
+ 4. **Conclusion:** From the comparison analysis, reach a conclusion about whether your hypothesis about a competing argument is correct.
+
+
+
+
+In many cases, the question is not whether a particular claim is _correct_ or _incorrect_ , but whether it is _believable._ So Sinnott-Armstrong also offers a four-step "believability test" for evaluating claims of this type.
+
+ 1. **Expertise:** Does the person presenting the argument have authority in an appropriate field? Being a specialist in one field doesn't necessarily make that person an expert in another.
+
+ 2. **Motive:** Would self-interest or other personal motives compel a person to withhold information or make false statements? To confirm one's statements, it might be wise to seek a totally separate, independent authority for confirmation.
+
+ 3. **Sources:** Are the sources the person offers as evidence of a claim recognized experts? Do those sources have the expertise on the specific issue addressed?
+
+ 4. **Agreement:** Is there agreement among many experts within the same specialty?
+
+
+
+
+### Let's argue
+
+When I was a university student, I would usually sit toward the front of the classroom. When I didn't understand something, I would start asking questions for clarification. Everyone else in the class would just sit silently, saying nothing. After class, however, other students would come up to me and thank me for asking those questions—because everyone else in the room was confused, too.
+
+Clarification is a powerful act—not just in the classroom, but during arguments anywhere. Building an organizational culture in which people feel empowered to ask for clarification is critical for productive arguments (I've [given presentations on this topic][3] before). If members have the courage to clarify premises, and they can do so in an environment where others don't think they're being belligerent, then this might be the key to a successful and productive argument.
+
+If you really want to strengthen your ability to argue, find someone that totally disagrees with you but wants to learn and understand your beliefs. Then, practice some of Sinnott-Armstrong's suggestions. Arguing productively will enhance [transparency, inclusivity, and collaboration][4] in your organization—leading to a more open culture.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/6/barriers-productive-arguments
+
+作者:[Ron McFarland][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ron-mcfarland/users/ron-mcfarland
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/directions-arrows.png?itok=EE3lFewZ (Arrows pointing different directions)
+[2]: https://opensource.com/open-organization/19/5/productive-arguments
+[3]: https://www.slideshare.net/RonMcFarland1/argue-successfully-achieve-something
+[4]: https://opensource.com/open-organization/resources/open-org-definition
diff --git a/sources/talk/20190619 With Tableau, SaaS king Salesforce becomes a hybrid cloud company.md b/sources/talk/20190619 With Tableau, SaaS king Salesforce becomes a hybrid cloud company.md
new file mode 100644
index 0000000000..d0d1d24cb6
--- /dev/null
+++ b/sources/talk/20190619 With Tableau, SaaS king Salesforce becomes a hybrid cloud company.md
@@ -0,0 +1,67 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (With Tableau, SaaS king Salesforce becomes a hybrid cloud company)
+[#]: via: (https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+With Tableau, SaaS king Salesforce becomes a hybrid cloud company
+======
+Once dismissive of software, Salesforce acknowledges the inevitability of the hybrid cloud.
+![Martyn Williams/IDGNS][1]
+
+I remember a time when people at Salesforce events would hand out pins that read “Software” inside a red circle with a slash through it. The High Priest of SaaS (a.k.a. CEO Marc Benioff) was so adamant against installed, on-premises software that his keynotes were always comical.
+
+Now, Salesforce is prepared to [spend $15.7 billion to acquire Tableau Software][2], the leader in on-premises data analytics.
+
+On the hell-freezes-over scale, this is up there with Microsoft embracing Linux or Apple PR people returning a phone call. Well, we know at least one of those has happened.
+
+**[ Also read:[Hybrid Cloud: The time for adoption is upon us][3] | Stay in the know: [Subscribe and get daily newsletter updates][4] ]**
+
+So, why would a company that is so steeped in the cloud, so anti-on-premises software, make such a massive purchase?
+
+Partly it is because Benioff and company are finally coming to the same conclusion as most everyone else: The hybrid cloud, a mix of on-premises systems and public cloud, is the wave of the future, and pure cloud plays are in the minority.
+
+The reality is that data is hybrid and does not sit in a single location, and Salesforce is finally acknowledging this, said Tim Crawford, president of Avoa, a strategic CIO advisory firm.
+
+“I see the acquisition of Tableau by Salesforce as less about getting into the on-prem game as it is a reality of the world we live in. Salesforce needed a solid analytics tool that went well beyond their existing capability. Tableau was that tool,” he said.
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][5] ]**
+
+Salesforce also understands that it needs a better grasp of customers and the data insights that drive customer decisions. That data is both on-prem and in the cloud, Crawford noted. It is in Salesforce, in other solutions, and in the myriad of Excel spreadsheets spread across employee systems. Tableau crosses the hybrid boundaries and brings a straightforward way to visualize data.
+
+Salesforce had analytics features as part of its SaaS platform, but they were geared toward its own platform, whereas Tableau is widely used and supports all manner of analytics.
+
+“There’s a huge overlap between Tableau customers and Salesforce customers,” Crawford said. “The data is everywhere in the enterprise, not just in Salesforce. Salesforce does a great job with its own data, but Tableau does great with data in a lot of places because it’s not tied to one platform. So, it opens up where the data comes from and the insights you get from the data.”
+
+Crawford said that once the deal is done and Tableau is under some deeper pockets, the organization may be able to innovate faster or do things they were unable to do prior. That hardly indicates Tableau was struggling, though. It pulled in [$1.16 billion in revenue][6] in 2018.
+
+Crawford also expects Salesforce to push Tableau to open up new possibilities for customer insights by unlocking customer data inside and outside of Salesforce. One challenge for the two companies is to maintain that neutrality so that they don’t lose the ability to use Tableau for non-customer centric activities.
+
+“It’s a beautiful way to visualize large sets of data that have nothing to do with customer centricity,” he said.
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2015/09/150914-salesforce-dreamforce-2-100614575-large.jpg
+[2]: https://www.cio.com/article/3402026/how-salesforces-tableau-acquisition-will-impact-it.html
+[3]: http://www.networkworld.com/article/2172875/cloud-computing/hybrid-cloud--the-year-of-adoption-is-upon-us.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
+[6]: https://www.geekwire.com/2019/tableau-hits-841m-annual-recurring-revenue-41-transition-subscription-model-continues/
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190620 Carrier services help expand healthcare, with 5G in the offing.md b/sources/talk/20190620 Carrier services help expand healthcare, with 5G in the offing.md
new file mode 100644
index 0000000000..072b172fda
--- /dev/null
+++ b/sources/talk/20190620 Carrier services help expand healthcare, with 5G in the offing.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Carrier services help expand healthcare, with 5G in the offing)
+[#]: via: (https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Carrier services help expand healthcare, with 5G in the offing
+======
+Many telehealth initiatives tap into wireless networking supplied by service providers that may start offering services such as private networks on CBRS spectrum and 5G to support remote medical care.
+![Thinkstock][1]
+
+There are connectivity options aplenty for most types of [IoT][2] deployment, but the idea of simply handing the networking part of the equation off to a national licensed wireless carrier could be the best one for certain kinds of deployments in the medical field.
+
+Telehealth systems, for example, are still a relatively new facet of modern medicine, but they’re already among the most important applications that use carrier networks to deliver care. One such system is operated by the University of Mississippi Medical Center, for the treatment and education of diabetes patients.
+
+**[More on wireless:[The time of 5G is almost here][3]]**
+
+**[ Now read[20 hot jobs ambitious IT pros should shoot for][4]. ]**
+
+Greg Hall is the director of IT at UMMC’s center for telehealth. He said that the remote patient monitoring system is relatively simple by design – diabetes patients receive a tablet computer that they can use to input and track their blood sugar levels, alert clinicians to symptoms like nerve pain or foot sores, and even videoconference with their doctors directly. The tablet connects via Verizon, AT&T or CSpire – depending on who’s got the best coverage in a given area – back to UMMC’s servers.
+
+According to Hall, there are multiple advantages to using carrier connectivity instead of unlicensed (i.e. purpose-built [Wi-Fi][5] or other technology) to connect patients – some of whom live in remote parts of the state – to their caregivers.
+
+“We weren’t expecting everyone who uses the service to have Wi-Fi,” he said, “and they can take their tablet with them if they’re traveling.”
+
+The system serves about 250 patients in Mississippi, up from roughly 175 in the 2015 pilot program that got the effort off the ground. Nor is it strictly limited to diabetes care – Hall said that it’s already been extended to patients suffering from chronic obstructive pulmonary disease and asthma, and has even been used for prenatal care, with further expansion in the offing.
+
+“The goal of our program isn’t just the monitoring piece, but also the education piece, teaching a person to live with their [condition] and thrive,” he said.
+
+It hasn’t all been smooth sailing. One issue was caused by the natural foliage of the area, as dense areas of pine trees can cause transmission problems, thanks to their needles being a particularly troublesome length and interfering with 2.5GHz wireless signals. But Hall said that the team has been able to install signal boosters or repeaters to overcome that obstacle.
+
+Neurologist Dr. Allen Gee’s practice in Wyoming attempts to address a similar issue – far-flung patients with medical needs that might not be addressed by the sparse local-care options. From his main office in Cody, he said, he can cover half the state via telepresence, using a purpose-built system that is based on cellular-data connectivity from TCT, Spectrum and AT&T, as well as remote audiovisual equipment and a link to electronic health records stored in distant locations. That allows him to receive patient data, audio/visual information and even imaging diagnostics remotely. Some specialists in the state are able to fly to those remote locations, others are not.
+
+While Gee’s preference is to meet with patients in person, that’s just not always possible, he said.
+
+“Medical specialists don’t get paid for windshield time,” he noted. “Being able to transfer information from an EHR facilitates the process of learning about the patient.”
+
+### 5G is coming
+
+According to Alan Stewart-Brown, vice president at infrastructure management vendor Opengear, there’s a lot to like about current carrier networks for medical use – particularly wide coverage and a lack of interference – but there are bigger things to come.
+
+“We have customers that have equipment in ambulances for instance, where they’re livestreaming patients’ vital signs to consoles that doctors can monitor,” he said. “They’re using carrier 4G for that right now and it works well enough, but there are limitations, namely latency, which you don’t get on [5G][6].”
+
+Beyond the simple fact of increased throughput and lower latency, widespread 5G deployments could open a wide array of new possibilities for medical technology, mostly involving real-time, very-high-definition video streaming. These include medical VR, remote surgery and the like.
+
+“The process you use to do things like real-time video – right now on a 4G network, that may or may not have a delay,” said Stewart-Brown. “Once you can get rid of the delay, the possibilities are endless as to what you can use the technology for.”
+
+### Citizens band
+
+Ron Malenfant, chief architect for service provider IoT at Cisco, agreed that the future of 5G for medical IoT is bright, but said that the actual applications of the technology have to be carefully thought out.
+
+“The use cases need to be worked on,” he said. “The innovative [companies] are starting to say ‘OK, what does 5G mean to me’ and starting to plan use cases.”
+
+One area that the carriers themselves have been eyeing recently is the CBRS band of radio frequencies, which sits around 3.5GHz. It’s what’s referred to as “lightly licensed” spectrum, in that parts of it are used for things like CB radio and other parts are the domain of the U.S. armed forces, and it could be used to build private networks for institutional users like hospitals, instead of deploying small but expensive 4G cells. The idea is that the institutions would be able to lease those frequencies for their specific area from the carrier directly for private LTE/CBRS networks, and, eventually 5G, Malenfant said.
+
+There’s also the issue, of course, that there are still a huge number of unknowns around 5G, which isn’t expected to supplant LTE in the U.S. for at least another year or so. The medical field’s stiff regulatory requirements could also prove a stumbling block for the adoption of newer wireless technology.
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/stethoscope_mobile_healthcare_ipad_tablet_doctor_patient-100765655-large.jpg
+[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[4]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
+[5]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
+[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190620 Cracks appear in Intel-s grip on supercomputing.md b/sources/talk/20190620 Cracks appear in Intel-s grip on supercomputing.md
new file mode 100644
index 0000000000..5ff550d3fc
--- /dev/null
+++ b/sources/talk/20190620 Cracks appear in Intel-s grip on supercomputing.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cracks appear in Intel’s grip on supercomputing)
+[#]: via: (https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Cracks appear in Intel’s grip on supercomputing
+======
+New competitors threaten to take Intel’s dominance in the high-performance computing (HPC) world, and we’re not even talking about AMD (yet).
+![Randy Wong/LLNL][1]
+
+It’s June, so it’s that time again for the twice-yearly Top 500 supercomputer list, where bragging rights are established or, in most cases, reaffirmed. The list constantly shifts as new trends appear, and one of them might be a break in Intel’s dominance.
+
+[Supercomputers in the top 10 list][2] include a lot of IBM Power-based systems, and almost all run Nvidia GPUs. But there’s more going on than that.
+
+For starters, an ARM supercomputer has shown up, at #156. [Astra][3] at Sandia National Laboratories is an HPE system running Cavium (now Marvell) ThunderX2 processors. It debuted on the list at #204 last November, but thanks to upgrades, it has moved up the list. It won’t be the last ARM server to show up, either.
+
+**[ Also see:[10 of the world's fastest supercomputers][2] | Get daily insights: [Sign up for Network World newsletters][4] ]**
+
+Second is the appearance of four Nvidia DGX servers, with the [DGX SuperPOD][5] ranking the highest at #22. [DGX systems][6] are basically compact GPU boxes with a Xeon just to boot the thing. The GPUs do all the heavy lifting.
+
+AMD hasn’t shown up yet with its Epyc processors, but it will, given that Cray is building Epyc-based systems for the government.
+
+This signals a breaking up of the hold Intel has had on the high-performance computing (HPC) market for a long time, said Ashish Nadkarni, group vice president in IDC's worldwide infrastructure practice. “The Intel hold has already been broken up by all the accelerators in the supercomputing space. The more accelerators they use, the less need they have for Xeons. They can go with other processors that do justice to those accelerators,” he told me.
+
+With so much work in HPC and artificial intelligence (AI) being done by GPUs, the x86 processor becomes just a boot processor in a way. I wasn’t kidding about the DGX box. It’s got one Xeon and eight Tesla GPUs. And the Xeon is an E5, a midrange part.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][7] ]**
+
+“They don’t need high-end Xeons in servers any more, although there’s a lot of supercomputers that just use CPUs. The fact is there are so many options now,” said Nadkarni. One example of an all-CPU system is [Frontera][8], a Dell-based system at the Texas Advanced Computing Center in Austin.
+
+The top two computers, Sierra and Summit, both run IBM Power9 RISC processors, as well as Nvidia GPUs. All told, Nvidia is in 125 of the 500 supercomputers, including five of the top 10, the fastest computer in the world, the fastest in Europe (Piz Daint) and the fastest in Japan (ABCI).
+
+Lenovo was the top hardware provider, beating out Dell, HPE, and IBM combined. That’s because of its large presence in its native China. Nadkarni said Lenovo, which acquired the IBM x86 server business in 2014, has benefitted from the IBM installed base, which has continued wanting the same tech from Lenovo under new ownership.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/10/sierra875x500-100778404-large.jpg
+[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
+[3]: https://www.top500.org/system/179565
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.top500.org/system/179691
+[6]: https://www.networkworld.com/article/3196088/nvidias-new-volta-based-dgx-1-supercomputer-puts-400-servers-in-a-box.html
+[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[8]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html#slide7
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190620 Several deals solidify the hybrid cloud-s status as the cloud of choice.md b/sources/talk/20190620 Several deals solidify the hybrid cloud-s status as the cloud of choice.md
new file mode 100644
index 0000000000..ade07dcb10
--- /dev/null
+++ b/sources/talk/20190620 Several deals solidify the hybrid cloud-s status as the cloud of choice.md
@@ -0,0 +1,77 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Several deals solidify the hybrid cloud’s status as the cloud of choice)
+[#]: via: (https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Several deals solidify the hybrid cloud’s status as the cloud of choice
+======
+On-premises and cloud connections are being built by all the top vendors to bridge legacy and modern systems, creating hybrid cloud environments.
+![Getty Images][1]
+
+The hybrid cloud market is expected to grow from $38.27 billion in 2017 to $97.64 billion by 2023, at a Compound Annual Growth Rate (CAGR) of 17.0% during the forecast period, according to Markets and Markets.
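As a quick sanity check (not from the article, just back-of-the-envelope arithmetic), the stated 17.0% CAGR follows from the two endpoint figures over the six-year 2017–2023 window:

```python
# Markets and Markets figures: $38.27B (2017) growing to $97.64B (2023),
# i.e. six years of compound growth.
start, end, years = 38.27, 97.64, 6

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 16.9%, consistent with the stated 17.0%
```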
+
+The research firm said the hybrid cloud is rapidly becoming a leading cloud solution, as it provides various benefits, such as cost, efficiency, agility, mobility, and elasticity. One of the many reasons is the need for interoperability standards between cloud services and existing systems.
+
+Unless you are a startup company and can be born in the cloud, you have legacy data systems that need to be bridged, which is where the hybrid cloud comes in.
+
+So, in very short order we’ve seen a bunch of new alliances involving the old and new guard, reiterating that the need for hybrid solutions remains strong.
+
+**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+### HPE/Google
+
+In April, the Hewlett Packard Enterprise (HPE) and Google announced a deal where HPE introduced a variety of server solutions for Google Cloud’s Anthos, along with a consumption-based model for the validated HPE on-premises infrastructure that is integrated with Anthos.
+
+Following up with that, the two just announced a strategic partnership to create a hybrid cloud for containers by combining HPE’s on-premises infrastructure, Cloud Data Services, and GreenLake consumption model with Anthos. This allows for:
+
+ * Bi-directional data mobility and consistent data services between on-premises and cloud
+ * Application workload mobility to move containerized app workloads across on-premises and multi-cloud environments
+ * Multi-cloud flexibility, offering the choice of HPE Cloud Volumes and Anthos for what works best for the workload
+ * Unified hybrid management through Anthos, so customers can get a unified and consistent view of their applications and workloads regardless of where they reside
+ * Charged as a service via HPE GreenLake
+
+
+
+### IBM/Cisco
+
+This is a furthering of an already existing partnership between IBM and Cisco designed to deliver a common and secure developer experience across on-premises and public cloud environments for building modern applications.
+
+[Cisco said it will support IBM Cloud Private][4], an on-premises container application development platform, on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure. This includes support for IBM Cloud Pak for Applications. IBM Cloud Paks deliver enterprise-ready containerized software solutions and developer tools for building apps and then easily moving to any cloud—public or private.
+
+This architecture delivers a common and secure Kubernetes experience across on-premises (including edge) and public cloud environments. IBM’s Multicloud Manager covers monitoring and management of clusters and container-based applications running from on-premises to the edge, while Cisco’s Virtual Application Centric Infrastructure (ACI) will allow customers to extend their network fabric from on-premises to the IBM Cloud.
+
+### IBM/Equinix
+
+Equinix expanded its collaboration with IBM Cloud to bring private and scalable connectivity to global enterprises via Equinix Cloud Exchange Fabric (ECX Fabric). This provides private connectivity to IBM Cloud, including Direct Link Exchange, Direct Link Dedicated and Direct Link Dedicated Hosting, that is secure and scalable.
+
+ECX Fabric is an on-demand, SDN-enabled interconnection service that allows any business to connect between its own distributed infrastructure and any other company’s distributed infrastructure, including cloud providers. Direct Link provides IBM customers with a connection between their network and IBM Cloud. So ECX Fabric provides IBM customers with a secured and scalable network connection to the IBM Cloud service.
+
+At the same time, ECX Fabric provides secure connections to other cloud providers, and most customers prefer a multi-vendor approach to avoid vendor lock-in.
+
+“Each of the partnerships focus on two things: 1) supporting a hybrid-cloud platform for their existing customers by reducing the friction to leveraging each solution and 2) leveraging the unique strength that each company brings. Each of the solutions are unique and would be unlikely to compete directly with other partnerships,” said Tim Crawford, president of Avoa, an IT consultancy.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/cloud_hand_plus_sign_private-100787051-large.jpg
+[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190626 Where are all the IoT experts going to come from.md b/sources/talk/20190626 Where are all the IoT experts going to come from.md
new file mode 100644
index 0000000000..22a303d2f6
--- /dev/null
+++ b/sources/talk/20190626 Where are all the IoT experts going to come from.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Where are all the IoT experts going to come from?)
+[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Where are all the IoT experts going to come from?
+======
+The fast growth of the internet of things (IoT) is creating a need to train cross-functional experts who can combine traditional networking and infrastructure expertise with database and reporting skills.
+![Kevin \(CC0\)][1]
+
+If the internet of things (IoT) is going to fulfill its enormous promise, it’s going to need legions of smart, skilled, _trained_ workers to make everything happen. And right now, it’s not entirely clear where those people are going to come from.
+
+That’s why I was interested in trading emails with Keith Flynn, senior director of product management, R&D at asset-optimization software company [AspenTech][2], who says that when dealing with the slew of new technologies that fall under the IoT umbrella, you need people who can understand how to configure the technology and interpret the data. Flynn sees a growing need for existing educational institutions to house IoT-specific programs, as well as an opportunity for new IoT-focused private colleges offering a well-rounded curriculum.
+
+“In the future,” Flynn told me, “IoT projects will differ tremendously from the general data management and automation projects of today. … The future requires a more holistic set of skills and cross-trading capabilities so that we’re all speaking the same language.”
+
+**[ Also read: [20 hot jobs ambitious IT pros should shoot for][3] ]**
+
+With the IoT growing 30% a year, Flynn added, rather than a few specific skills, “everything from traditional deployment skills, like networking and infrastructure, to database and reporting skills and, frankly, even basic data science, need to be understood together and used together.”
+
+### Calling all IoT consultants
+
+“The first big opportunity for IoT-educated people is in the consulting field,” Flynn predicted. “As consulting companies adapt or die to the industry trends … having IoT-trained people on staff will help position them for IoT projects and make a claim in the new line of business: IoT consulting.”
+
+The problem is especially acute for startups and smaller companies. “The bigger the organization, the more likely they have a means to hire different people across different lines of skillsets,” Flynn said. “But for smaller organizations and smaller IoT projects, you need someone who can do both.”
+
+Both? Or _everything?_ The IoT “requires a combination of all knowledge and skillsets,” Flynn said, noting that “many of the skills aren’t new, they’ve just never been grouped together or taught together before.”
+
+**[ [Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]**
+
+### The IoT expert of the future
+
+True IoT expertise starts with foundational instrumentation and electrical skills, Flynn said, which can help workers implement new wireless transmitters and boost technology for better battery life and power consumption.
+
+“IT skills, like networking, IP addressing, subnet masks, cellular and satellite are also pivotal IoT needs,” Flynn said. He also sees a need for database management skills and cloud management and security expertise, “especially as things like [advanced process control] APC and sending sensor data directly to databases and data lakes become the norm.”
+
+### Where will IoT experts come from?
+
+Flynn said standardized formal education courses would be the best way to make sure that graduates or certificate holders have the right set of skills. He even laid out a sample curriculum: “Start in chronological order with the basics like [Electrical & Instrumentation] E&I and measurement. Then teach networking, and then database administration and cloud courses should follow that. This degree could even be looped into an existing engineering course, and it would probably take two years … to complete the IoT component.”
+
+While corporate training could also play a role, “that’s easier said than done,” Flynn warned. “Those trainings will need to be organization-specific efforts and pushes.”
+
+Of course, there are already [plenty of online IoT training courses and certificate programs][5]. But, ultimately, the responsibility lies with the workers themselves.
+
+“Upskilling is incredibly important in this world as tech continues to transform industries,” Flynn said. “If that upskilling push doesn’t come from your employer, then online courses and certifications would be an excellent way to do that for yourself. We just need those courses to be created. ... I could even see organizations partnering with higher-education institutions that offer these courses to give their employees better access to it. Of course, the challenge with an IoT program is that it will need to constantly evolve to keep up with new advancements in tech.”
+
+**[ For more on IoT, see [tips for securing IoT on your network][6], our list of [the most powerful internet of things companies][7] and learn about the [industrial internet of things][8]. | Get regularly scheduled insights by [signing up for Network World newsletters][9]. ]**
+
+Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg
+[2]: https://www.aspentech.com/
+[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
+[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training
+[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb
+[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
+[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
+[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
+[10]: https://www.facebook.com/NetworkWorld/
+[11]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190630 Data Still Dominates.md b/sources/talk/20190630 Data Still Dominates.md
new file mode 100644
index 0000000000..e9a86acf68
--- /dev/null
+++ b/sources/talk/20190630 Data Still Dominates.md
@@ -0,0 +1,100 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Data Still Dominates)
+[#]: via: (https://theartofmachinery.com/2019/06/30/data_still_dominates.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Data Still Dominates
+======
+
+Here’s [a quote from Linus Torvalds in 2006][1]:
+
+> I’m a huge proponent of designing your code around the data, rather than the other way around, and I think it’s one of the reasons git has been fairly successful… I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
+
+Which sounds a lot like [Eric Raymond’s “Rule of Representation” from 2003][2]:
+
+> Fold knowledge into data, so program logic can be stupid and robust.
+
+Which was just his summary of ideas like [this one from Rob Pike in 1989][3]:
+
+> Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
+
+Which cites [Fred Brooks from 1975][4]:
+
+> ### Representation is the Essence of Programming
+>
+> Beyond craftsmanship lies invention, and it is here that lean, spare, fast programs are born. Almost always these are the result of strategic breakthrough rather than tactical cleverness. Sometimes the strategic breakthrough will be a new algorithm, such as the Cooley-Tukey Fast Fourier Transform or the substitution of an n log n sort for an n² set of comparisons.
+>
+> Much more often, strategic breakthrough will come from redoing the representation of the data or tables. This is where the heart of your program lies. Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.
+
+So, smart people have been saying this again and again for nearly half a century: focus on the data first. But sometimes it feels like the most famous piece of smart programming advice that everyone forgets.
+
+Let me give some real examples.
+
+### The Highly Scalable System that Couldn’t
+
+This system was designed from the start to handle CPU-intensive loads with incredible scalability. Nothing was synchronous. Everything was done with callbacks, task queues and worker pools.
+
+But there were two problems: The first was that the “CPU-intensive load” turned out not to be that CPU-intensive after all — a single task took a few milliseconds at worst. So most of the architecture was doing more harm than good. The second problem was that although it sounded like a highly scalable distributed system, it wasn’t one — it only ran on one machine. Why? Because all communication between asynchronous components was done using files on the local filesystem, which was now the bottleneck for any scaling. The original design didn’t say much about data at all, except to advocate local files in the name of “simplicity”. Most of the document was about all the extra architecture that was “obviously” needed to handle the “CPU-intensiveness” of the load.
+
+### The Service-Oriented Architecture that was Still Data-Oriented
+
+This system followed a microservices design, made up of single-purpose apps with REST-style APIs. One component was a database that stored documents (basically responses to standard forms, and other electronic paperwork). Naturally it exposed an API for basic storage and retrieval, but pretty quickly there was a need for more complex search functionality. The designers felt that adding this search functionality to the existing document API would have gone against the principles of microservices design. They could talk about “search” as being a different kind of service from “get/put”, so their architecture shouldn’t couple them together. Besides, the tool they were planning to use for search indexing was separate from the database itself, so creating a new service made sense for implementation, too.
+
+In the end, a search API was created containing a search index that was essentially a duplicate of the data in the main database. This data was being updated dynamically, so any component that mutated document data through the main database API had to also update the search API. It’s impossible to do this with REST APIs without race conditions, so the two sets of data kept going out of sync every now and then, anyway.
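The dual-write hazard described above can be sketched in a few lines. The in-memory dicts below are hypothetical stand-ins for the document database and the search service, and the fixed interleaving mimics one unlucky scheduling of two concurrent clients:

```python
# Two independent "services" with no shared transaction boundary.
database = {}
search_index = {}

def put_document(doc_id, body):
    database[doc_id] = body

def index_document(doc_id, body):
    search_index[doc_id] = body

# Clients A and B each do a two-step write (database, then index),
# but their steps interleave:
put_document("doc1", "version-A")    # A: database write
put_document("doc1", "version-B")    # B: database write (wins)
index_document("doc1", "version-B")  # B: index write
index_document("doc1", "version-A")  # A: delayed index write (wins)

# The "loosely coupled" services now silently disagree.
print(database["doc1"], search_index["doc1"])  # version-B version-A
```

Without a transaction or an ordering guarantee spanning both stores, no amount of retrying on the caller’s side closes this window, which is why the two APIs were coupled in practice no matter what the diagram said.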
+
+Despite what the architecture diagram promised, the two APIs were tightly coupled through their data dependencies. Later on it was recognised that the search index should be an implementation detail of a unified document service, and this made the system much more maintainable. “Do one thing” works at the data level, not the verb level.
+
+### The Fantastically Modular and Configurable Ball of Mud
+
+This system was a kind of automated deployment pipeline. The original designers wanted to make a tool that was flexible enough to solve deployment problems across the company. It was written as a set of pluggable components, with a configuration file system that not only configured the components, but acted as a [DSL][5] for programming how the components fitted into the pipeline.
+
+Fast forward a few years and it’s turned into “that program”. There was a long list of known bugs that no one was ever fixing. No one wanted to touch the code out of fear of breaking things. No one used any of the flexibility of the DSL. Everyone who used the program copy-pasted the same known-working configuration that everyone else used.
+
+What had gone wrong? Although the original design document used words like “modular”, “decoupled”, “extensible” and “configurable” a lot, it never said anything about data. So, data dependencies between components ended up being handled in an ad-hoc way using a globally shared blob of JSON. Over time, components made more and more undocumented assumptions about what was in or not in the JSON blob. Sure, the DSL allowed rearranging components into any order, but most configurations didn’t work.
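A toy illustration of that failure mode (the component names are invented, not taken from the system described): two “pluggable” components that are decoupled on paper but coupled through an undocumented key in the shared blob:

```python
# Hypothetical pipeline components, "decoupled" on paper but coupled
# through undocumented keys in a shared JSON-like blob.
def build(blob):
    blob["artifact"] = "build-of-" + blob["source"]

def deploy(blob):
    # Undocumented assumption: build() has already run and set "artifact".
    return "deploying " + blob["artifact"]

blob = {"source": "repo.git"}
build(blob)
print(deploy(blob))  # works only in this order

# The DSL happily lets you reorder the components...
try:
    deploy({"source": "repo.git"})
except KeyError as err:
    print("reordered pipeline crashed:", err)
```

The data dependency is real either way; leaving it implicit in a shared blob just means the only “working configuration” is the one everyone copy-pastes.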
+
+### Lessons
+
+I chose these three examples because they’re easy to explain, not to pick on others. I once tried to build a website, and failed trying to instead build some cringe-worthy XML database that didn’t even solve the data problems I had. Then there’s the project that turned into a broken mockery of half the functionality of `make`, again because I didn’t think about what I really needed. I’ve also written before about a time I wrote [a castle-in-the-sky OOP class hierarchy that should have been encoded in data instead][6].
+
+Update:
+
+Apparently many people still thought I wrote this to make fun of others. People who’ve actually worked with me will know I’m much more interested in the things I’m fixing than in blaming the people who did most of the work building them, but, okay, here’s what I think of the engineers involved.
+
+Honestly, the first example obviously happened because the designer was more interested in bringing a science project to work than in solving the problem at hand. Most of us have done that (mea culpa), but it’s really annoying to our colleagues who’ll probably have to help maintain them when we’re bored of them. If this sounds like you, please don’t get offended; please just stop. (I’d still rather work on the single-node distributed system than anything built around my “XML database”.)
+
+There’s nothing personal in the second example. Sometimes it feels like everyone is talking about how wonderful it is to split up services, but no one is talking about exactly when not to. People are learning the hard way all the time.
+
+The third example was actually from some of the smartest people I’ve ever had the chance to work with.
+
+(End update.)
+
+“Does this talk about the problems created by data?” turns out to be a pretty useful litmus test for good systems design. It’s also pretty handy for detecting false expert advice. The hard, messy systems design problems are data problems, so false experts love to ignore them. They’ll show you a wonderfully beautiful architecture, but without talking about what kind of data it’s appropriate for, and (crucially) what kind of data it isn’t.
+
+For example, a false expert might tell you that you should use a pub/sub system because pub/sub systems are loosely coupled, and loosely coupled components are more maintainable. That sounds nice and results in pretty diagrams, but it’s backwards thinking. Pub/sub doesn’t _make_ your components loosely coupled; pub/sub _is_ loosely coupled, which may or may not match your data needs.
+
+On the flip side, a well-designed data-oriented architecture goes a long way. Functional programming, service meshes, RPCs, design patterns, event loops, whatever, all have their merits, but personally I’ve seen tools like [boring old databases][7] be responsible for a lot more successfully shipped software.
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2019/06/30/data_still_dominates.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://lwn.net/Articles/193245/
+[2]: http://www.catb.org/~esr/writings/taoup/html/ch01s06.html
+[3]: http://doc.cat-v.org/bell_labs/pikestyle
+[4]: https://archive.org/stream/mythicalmanmonth00fred/mythicalmanmonth00fred_djvu.txt
+[5]: https://martinfowler.com/books/dsl.html
+[6]: https://theartofmachinery.com/2016/06/21/code_vs_data.html
+[7]: https://theartofmachinery.com/2017/10/28/rdbs_considered_useful.html
diff --git a/sources/talk/20190702 SD-WAN Buyers Should Think Application Performance as well as Resiliency.md b/sources/talk/20190702 SD-WAN Buyers Should Think Application Performance as well as Resiliency.md
new file mode 100644
index 0000000000..3bddd4cdc3
--- /dev/null
+++ b/sources/talk/20190702 SD-WAN Buyers Should Think Application Performance as well as Resiliency.md
@@ -0,0 +1,49 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (SD-WAN Buyers Should Think Application Performance as well as Resiliency)
+[#]: via: (https://www.networkworld.com/article/3406456/sd-wan-buyers-should-think-application-performance-as-well-as-resiliency.html)
+[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
+
+SD-WAN Buyers Should Think Application Performance as well as Resiliency
+======
+
+![istock][1]
+
+As an industry analyst, not since the days of WAN Optimization have I seen a technology gain as much interest as I am seeing with [SD-WANs][2] today. Although full deployments are still limited, nearly every network manager, and many of the IT leaders I talk to, are interested in it. The reason for this is two-fold: the WAN has grown in importance for cloud-first enterprises and is badly in need of an overhaul. This hasn’t gone unnoticed by the vendor community, and there has been an explosion of companies bringing a broad range of SD-WAN offerings to market. The great news for buyers is that there is no shortage of choices. The bad news is that there are so many choices that making the right decision is difficult.
+
+One area of differentiation for SD-WAN vendors is how they handle application performance. I think of the SD-WAN market as being split into two categories – basic and advanced SD-WANs. A good analogy is to think of the virtualization market. There are many vendors that offer hypervisors – in fact there are a number of free ones. So why do companies pay a premium for VMware? It’s because VMware offers many advanced features and capabilities that make its solution do more than just virtualize servers.
+
+Similarly, basic SD-WAN solutions do a great job of helping to lower costs and to increase application resiliency through path selection capabilities but do nothing to improve application performance. One myth that needs busting is that all SD-WANs make your applications perform better. That’s simply not true as application availability and performance are two different things. It’s possible to have great performance and poor availability or high availability with lackluster performance.
+
+Consider the case where a business runs a hybrid WAN: voice and video traffic is sent over the MPLS connection, and broadband is used for other traffic. If the MPLS link becomes congested but doesn’t go down, most SD-WAN solutions will continue to send voice and video over it, which obviously degrades the performance. If multiple broadband connections are used, congestion-related issues are even more likely.
+
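The distinction between availability-based and performance-based path selection can be sketched as two toy policies (a hypothetical model; the link statistics and quality thresholds are invented for illustration):

```python
# Two toy SD-WAN path-selection policies over a hybrid WAN.
# Link stats (latency in ms, loss in %) are invented for illustration.
links = {
    "mpls":      {"up": True, "latency_ms": 180, "loss_pct": 2.0},  # congested, still up
    "broadband": {"up": True, "latency_ms": 35,  "loss_pct": 0.1},
}

def basic_select(links):
    """Basic SD-WAN: availability-only failover -- use MPLS whenever it is up."""
    return "mpls" if links["mpls"]["up"] else "broadband"

def advanced_select(links, max_latency_ms=100, max_loss_pct=1.0):
    """Advanced SD-WAN: also steer away from congested-but-alive links."""
    usable = [name for name, s in links.items()
              if s["up"] and s["latency_ms"] <= max_latency_ms
              and s["loss_pct"] <= max_loss_pct]
    # Fall back to any link that is up if none meets the quality bar.
    candidates = usable or [name for name, s in links.items() if s["up"]]
    return min(candidates, key=lambda name: links[name]["latency_ms"])

print(basic_select(links))     # mpls: voice/video keeps riding the congested link
print(advanced_select(links))  # broadband: traffic is steered around the congestion
```
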
+This is an important point for IT professionals to understand. The business justification for SD-WAN was initially built around saving money, but if application performance suffers, the entire return on investment (ROI) for the project might as well be tossed out the window. For many companies, the network is the business, so a poorly performing network means equally poorly performing applications, which results in lost productivity, lower revenues and possibly brand damage from customer-experience issues.
+
+I’ve talked to many organizations that had digital initiatives fail because the network wasn’t transformed. For example, a luxury retailer implemented a tablet program so in-store personnel could show merchandise to customers. High-end retail is almost wholly impulse purchases, so the more inventory that can be shown to a customer, the larger the resulting sales. The WAN that was in place caused the mobile application to perform so poorly that the digital initiative had a negative effect: instead of driving sales, the mobile initiative was chasing customers from the store. The idea was right, but the poorly performing WAN caused the project to fail.
+
+SD-WAN decision makers need to look for suppliers that integrate specific technologies into their platforms to act when congestion occurs. A great example of this is the Silver Peak [Unity EdgeConnect™][3] SD-WAN edge platform with [path conditioning][4], [traffic shaping][5] and sub-second link failover. This ensures the best possible quality for all critical applications, even when an underlying link experiences congestion or an outage, and even for [voice and video over broadband][6]. This is a foundational component for advanced SD-WAN providers, as they offer the same resiliency and cost benefits as a basic SD-WAN but also ensure application performance remains high.
+
+The SD-WAN era is here, and organizations should be aggressive with deployments as it will transform the WAN and make it a digital transformation enabler. Decision makers should choose their provider carefully and ensure the vendor also improves application performance. Without it, the digital initiatives will likely fail and negate any ROI the company was hoping to realize.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3406456/sd-wan-buyers-should-think-application-performance-as-well-as-resiliency.html
+
+作者:[Zeus Kerravala][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Zeus-Kerravala/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/07/istock-157647179-100800860-large.jpg
+[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
+[3]: https://www.silver-peak.com/products/unity-edge-connect
+[4]: https://www.silver-peak.com/products/unity-edge-connect/path-conditioning
+[5]: https://www.silver-peak.com/products-solutions/unity/traffic-shaping
+[6]: https://www.silver-peak.com/sd-wan/voice-video-over-broadband
diff --git a/sources/talk/20190703 An eco-friendly internet of disposable things is coming.md b/sources/talk/20190703 An eco-friendly internet of disposable things is coming.md
new file mode 100644
index 0000000000..8d1e827aa8
--- /dev/null
+++ b/sources/talk/20190703 An eco-friendly internet of disposable things is coming.md
@@ -0,0 +1,92 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An eco-friendly internet of disposable things is coming)
+[#]: via: (https://www.networkworld.com/article/3406462/an-eco-friendly-internet-of-disposable-things-is-coming.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+An eco-friendly internet of disposable things is coming
+======
+Researchers are creating a non-hazardous, bacteria-powered miniature battery that can be implanted into shipping labels and packaging to monitor temperature and track packages in real time.
+![Thinkstock][1]
+
+Get ready for a future of disposable internet of things (IoT) devices, one that will mean everything is connected to networks. It will be particularly useful in logistics, embedded in single-use plastics in retail packaging and in shippers’ throw-away cardboard boxes.
+
+How will it happen? It will happen when non-hazardous, disposable bio-batteries make it possible. And that moment might be approaching. Researchers say they’re closer to commercializing a bacteria-powered miniature battery that they say will propel the internet of disposable things (IoDT).
+
+**[ Learn more: [Download a PDF bundle of five essential articles about IoT in the enterprise][2] ]**
+
+The “internet of disposable things is a new paradigm for the rapid evolution of wireless sensor networks,” says Seokheun Choi, an associate professor at Binghamton University, [in an article on the school’s website][3].
+
+“Current IoDTs are mostly powered by expensive and environmentally hazardous batteries,” he says. Those costs can be significant in any kind of large-scale deployment, he says. Furthermore, with exponential growth, the environmental concerns would escalate rapidly.
+
+The miniaturized battery that Choi’s team has come up with is uniquely charged through power created by bacteria. It doesn’t have metals and acids in it. And it’s designed specifically to provide energy to sensors and radios in single-use IoT devices. Those could be the kinds of sensors ideal for supply-chain logistics where the container is ultimately going to end up in a landfill, creating a hazard.
+
+Another use case is real-time analysis of packaged food, with sensors monitoring temperature and location, preventing spoilage and providing safer food handling. For example, a farm product could be tracked for on-time delivery, as well as have its temperature measured, all within the packaging, as it moves from packaging facility to consumer. In the event of a food-borne illness outbreak, say, one can quickly find out where the product originated—which apparently is hard to do now.
+
+Other use cases could be battery-impregnated shipping labels that send real-time data to the internet. Importantly, in both use cases, packaging can be discarded without added environmental concerns.
+
+### How the bacteria-powered batteries work
+
+A slow release of nutrients provides the energy to the bacteria-powered batteries, which the researchers say can last up to eight days. “Slow and continuous reactions” convert the microbial nutrients into “long standing power,” they say in [their paper's abstract][4].
+
+“Our biobattery is low-cost, disposable, and environmentally-friendly,” Choi says.
+
+Origami, the Japanese paper-folding skill used to create objects, was an inspiration for a similar microbial-based battery project the group wrote about last year in a paper. This one is liquid-based and not as long lasting. A bacteria-containing liquid was absorbed along the porous creases in folded paper, creating the paper-delivered power source, perhaps to be used in a shipping label.
+
+“Low-cost microbial fuel cells (MFCs) can be done efficiently by using a paper substrate and origami techniques,” [the group wrote then][5].
+
+Scientists also envisage electronics that are now printed on circuit boards (PCBs), which can be toxic on disposal, being printed entirely on eco-friendly paper. Product cycles, such as those found now in mobile devices and likely in future IoT devices, are continually getting tighter, so more and more PCBs are being disposed of. Solutions are needed, experts say.
+
+The argument here is to put the battery in the paper, too. And while you’re at it, let the biodegradation of the used-up biobattery help break down the organic-matter paper.
+
+Ultimately, Choi believes that the power-creating bacteria could even be introduced naturally by the environment—right now it’s added on by the scientists.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][6]
+ * [What is edge computing and how it’s changing the network][7]
+ * [Most powerful Internet of Things companies][8]
+ * [10 Hot IoT startups to watch][9]
+ * [The 6 ways to make money in IoT][10]
+ * [What is digital twin technology? [and why it matters]][11]
+ * [Blockchain, service-centric networking key to IoT success][12]
+ * [Getting grounded in IoT networking and security][2]
+ * [Building IoT-ready networks must become a priority][13]
+ * [What is the Industrial IoT? [And why the stakes are so high]][14]
+
+
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3406462/an-eco-friendly-internet-of-disposable-things-is-coming.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/04/green-data-center-intro-100719502-large.jpg
+[2]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[3]: https://www.binghamton.edu/news/story/1867/everything-will-connect-to-the-internet-someday-and-this-biobattery-could-h
+[4]: https://www.sciencedirect.com/science/article/abs/pii/S0378775319305580
+[5]: https://www.sciencedirect.com/science/article/pii/S0960148117311606
+[6]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[7]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[13]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[14]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190705 Lessons in Vendor Lock-in- Google and Huawei.md b/sources/talk/20190705 Lessons in Vendor Lock-in- Google and Huawei.md
new file mode 100644
index 0000000000..fe92b389a9
--- /dev/null
+++ b/sources/talk/20190705 Lessons in Vendor Lock-in- Google and Huawei.md
@@ -0,0 +1,58 @@
+[#]: collector: "lujun9972"
+[#]: translator: " "
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+[#]: subject: "Lessons in Vendor Lock-in: Google and Huawei"
+[#]: via: "https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei"
+[#]: author: "Kyle Rankin https://www.linuxjournal.com/users/kyle-rankin"
+
+Lessons in Vendor Lock-in: Google and Huawei
+======
+
+
+What happens when you're locked in to a vendor that's too big to fail, but is on the opposite end of a trade war?
+
+The story of Google no longer giving Huawei access to Android updates is still developing, so by the time you read this, the situation may have changed. At the moment, Google has granted Huawei a 90-day window whereby it will have access to Android OS updates, the Google Play store and other Google-owned Android assets. After that point, due to trade negotiations between the US and China, Huawei no longer will have that access.
+
+Whether or not this new policy between Google and Huawei is still in place when this article is published, this article isn't about trade policy or politics. Instead, I'm going to examine this as a new lesson in vendor lock-in that I don't think many have considered before: what happens when the vendor you rely on is forced by its government to stop you from being a customer?
+
+### Too Big to Fail
+
+Vendor lock-in isn't new, but until the last decade or so, it generally was thought of by engineers as a bad thing. Companies would take advantage of the fact that you used one of their products that was legitimately good to sell you the rest of their products, which may or may not have been as good as those from their competitors. People felt the pain of being stuck with inferior products and rebelled.
+
+These days, a lot of engineers have entered the industry in a world where the new giants of lock-in are still growing and have only flexed their lock-in powers a bit. Many engineers shrug off worries about choosing a solution that requires you to use only products from one vendor, in particular if that vendor is a large enough company. There is an assumption that those companies are too big ever to fail, so why would it matter that you rely on them (as many companies in the cloud do) for every aspect of their technology stack?
+
+Many people who justify lock-in with companies that are too big to fail point to all of the even more important companies using that vendor, which would have even bigger problems should that vendor suffer a major bug or outage or go out of business. It would take so much effort to use cross-platform technologies, the thinking goes, when the risk of going all-in with a single vendor seems so small.
+
+Huawei also probably figured (rightly) that Google and Android were too big to fail. Why worry about the risks of being beholden to a single vendor for your OS when that vendor was used by other large companies and would have even bigger problems if the vendor went away?
+
+### The Power of Updates
+
+Google held a particularly interesting and subtle bit of lock-in power over Huawei (and any phone manufacturer who uses Android)—the power of software updates. This form of lock-in isn't new. Microsoft famously used the fact that software updates in Microsoft Office cost money (naturally, as it was selling that software) along with the fact that new versions of Office had this tendency to break backward compatibility with older document formats to encourage everyone to upgrade. The common scenario was that the upper-level folks in the office would get brand-new, cutting-edge computers with the latest version of Office on them. They would start saving new documents and sharing them, and everyone else wouldn't be able to open them. It ended up being easier to upgrade everyone's version of Office than to have the bosses remember to save new documents in old formats every time.
+
+The main difference with Android is that updates are critical not because of compatibility, but for security. Without OS updates, your phone ultimately will become vulnerable to exploits that attackers continue to find in your software. The Android OS that ships on phones is proprietary and therefore requires permission from Google to get those updates.
+
+Many people still don't think of the Android OS as proprietary software. Although people talk about the FOSS underpinnings in Android, only people who go to the extra effort of getting a pure-FOSS version of Android, like LineageOS, on their phones actually experience it. The version of Android most people tend to use has a bit of FOSS in the center, surrounded by proprietary Google Apps code.
+
+It's this Google Apps code that gives Google the kind of powerful leverage over a company like Huawei. With traditional Android releases, Google controls access to OS updates including security updates. All of this software is signed with Google's signing keys. This system is built with security in mind—attackers can't easily build their own OS update to install on your phone—but it also has a convenient side effect of giving Google control over the updates.
+
+What's more, the Google Apps suite isn't just a convenient way to load Gmail or Google Docs, it also includes the tight integration with your Google account and the Google Play store. Without those hooks, you don't have access to the giant library of applications that everyone expects to use on their phones. As anyone with a LineageOS phone that uses F-Droid can attest, while a large number of applications are available in the F-Droid market, you can't expect to see those same apps as on Google Play. Although you can side-load some Google Play apps, many applications, such as Google Maps, behave differently without a Google account. Note that this control isn't unique to Google. Apple uses similar code-signing features with similar restrictions on its own phones and app updates.
+
+### Conclusion
+
+Without access to these OS updates, Huawei now will have to decide whether to create its own LineageOS-style Android fork or a whole new phone OS of its own. In either case, it will have to abandon the Google Play Store ecosystem and use F-Droid-style app repositories, or if it goes 100% alone, it will need to create a completely new app ecosystem. If its engineers planned for this situation, then they likely are working on this plan right now; otherwise, they are all presumably scrambling to address an event that "should never happen". Here's hoping that if you find yourself in a similar case of vendor lock-in with an overseas company that's too big to fail, you never get caught in the middle of a trade war.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei
+
+作者:[Kyle Rankin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxjournal.com/users/kyle-rankin
+[b]: https://github.com/lujun9972
diff --git a/sources/talk/20190708 Colocation facilities buck the cloud-data-center trend.md b/sources/talk/20190708 Colocation facilities buck the cloud-data-center trend.md
new file mode 100644
index 0000000000..add6ef0093
--- /dev/null
+++ b/sources/talk/20190708 Colocation facilities buck the cloud-data-center trend.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Colocation facilities buck the cloud-data-center trend)
+[#]: via: (https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Colocation facilities buck the cloud-data-center trend
+======
+Lower prices and latency plus easy access to multiple cloud providers make colocation facilities an attractive option compared to building on-site data centers.
+![gorodenkoff / Getty Images][1]
+
+[Data center][2] workloads are moving but not only to the cloud. Increasingly, they are shifting to colocation facilities as an alternative to privately owned data centers.
+
+### What is colocation?
+
+A colocation facility or colo is a data center in which a business can rent space for servers and other computing hardware that they purchase but that the colo provider manages.
+
+[Read about IPv6 and cloud-access security brokers][3]
+
+The colo company provides the building, cooling, power, bandwidth and physical security. Space is leased by the rack, cabinet, cage or room. Many colos started out as managed services and continue to offer those specialized services.
+
+Some prominent providers include Equinix, Digital Realty Trust, CenturyLink, and NTT Communications, and there are several Chinese providers that only serve the China market. Unlike the data centers of cloud vendors like Amazon and Microsoft, these colo facilities are generally in large metropolitan areas.
+
+“Colos have been around a long time, but their initial use case was Web servers,” said Rick Villars, vice president of data centers and cloud research at IDC. “What’s changed now is the ratio of what’s customer-facing is much greater than in 2000, [with the] expansion of companies needing to have more assets that are network-facing.”
+
+### Advantages of colos: Cost, cloud interconnect
+
+Homegrown data centers are often sized incorrectly, with either too much capacity or too little, said Jim Poole, vice president of business development at Equinix. “Customers come to us all the time and say, ‘Would you buy my data center? Because I only use 25 percent of it,’” he said.
+
+Poole said the average capital expenditure for a stand-alone enterprise data center that is not a part of the corporate campus is $9 million. Companies are increasingly realizing that it makes sense to buy the racks of hardware but place it in someone else’s secure facility that handles the power and cooling. “It’s the same argument for doing cloud computing but at the physical-infrastructure level,” he said.
+
+Mike Satter, vice president for OceanTech, a data-center-decommissioning service provider, says enterprises should absolutely outsource data-center construction or go the colo route. Just as there are contractors who specialize in building houses, there are experts who specialize in data-center design, he said.
+
+He added that with many data-center closures there is subsequent consolidation. “For every decommissioning we do, that same company is adding to another environment somewhere else. With the new hardware out there now, the servers can do the same work in 20 racks as they did in 80 racks five years ago. That means a reduced footprint and energy cost,” he said.
+
+Often these closures mean moving to a colo. OceanTech recently decommissioned a private data center for a major media outlet Satter declined to identify, shutting down a New Jersey facility that held 70 racks of gear. The firm was going to move its apps to the cloud but ended up expanding to a colo facility in New York City.
+
+### Cloud isn't cheaper than private data centers
+
+Satter said he’s had conversations with companies that planned to go to the cloud but changed their minds when they saw what it would cost if they later decided to move workloads out. Cloud providers can “kill you with guidelines and costs” because your data is in their infrastructure, and they can set fees that make it expensive to move it to another provider, he said. “The cloud is not a money saver.”
+
+That can drive companies to keep data in-house or in a colo in order to keep tighter possession of their data. “Early on, when people weren’t hip to the game for how much it cost to move to the cloud, you had decision makers with influence say the cloud sounded good. Now they are realizing it costs a lot more dollars to do that vs. doing something on-prem, on your own,” said Satter.
+
+Guy Churchward, CEO of Datera, a developer of software-defined storage platforms for enterprises, has noticed a new trend among CIOs making a cloud vs. private decision for apps based on the lifespan of the app.
+
+“Organizations don’t know how much resource they need to throw at a task. The cloud makes more sense for [short-term apps],” he said. For applications that will be used for five years or more, it makes more sense to place them in company-controlled facilities, he said. That's because with three-to-five-year hardware-refresh cycles, the hardware lasts the entire lifespan of the app, and the hardware and app can be retired at the same time.
+
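The lifespan argument boils down to a break-even calculation. Here is a minimal sketch with entirely hypothetical dollar figures (real costs vary widely by workload; only the shape of the trade-off matters):

```python
def total_costs(monthly_cloud, upfront_hw, monthly_ops, months):
    """Return (cloud, on-prem) totals over the app's lifespan, in dollars."""
    return monthly_cloud * months, upfront_hw + monthly_ops * months

# Short-lived app (6 months): cloud avoids the up-front hardware spend.
print(total_costs(monthly_cloud=2000, upfront_hw=60000, monthly_ops=500, months=6))
# -> (12000, 63000): cloud wins

# Long-lived app (5 years): owned hardware amortizes over its refresh cycle,
# and hardware and app can be retired together.
print(total_costs(monthly_cloud=2000, upfront_hw=60000, monthly_ops=500, months=60))
# -> (120000, 90000): on-prem/colo wins
```
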
+Another force driving the decision of private data center vs. the cloud is machine learning. Churchward said that’s because machine learning is often done using large amounts of highly sensitive data, so customers wanted data kept securely in house. They also wanted a low-latency loop between their ML apps and the data lake from which they draw.
+
+### Colos connect to multiple cloud providers
+
+Another allure of colocation providers is that they can act as a pipeline between enterprises and multiple cloud providers. So rather than directly connecting to AWS, Azure, etc., businesses can connect to a colo, and that colo acts like a giant switch, connecting them to cloud providers through dedicated, high-speed networks.
+
+Villars notes the typical corporate data center is either inside corporate HQ or someplace remote, like South Dakota where land was cheap. But the trade-off is that network connectivity to remote locations is often slower and more expensive.
+
+That’s where data-center colo providers with large footprints come in, since they have points of presence in major cities. No one would fault a New York City-based firm for putting its data center in upstate New York or even further away. But when Equinix, Digital Realty Trust, and others all have data centers right in New York City, customers might get faster and sometimes cheaper connections, plus lower latency.
+
+Steve Cretney, vice president and CIO for food distributor Colony Brands, is in the midst of migrating the company to the cloud and moving everything he can from his data center to AWS. Rather than connect directly to AWS, Colony’s Wisconsin headquarters is connected to an Equinix data center in Chicago.
+
+Going with Equinix provides more and cheaper bandwidth to the cloud than buying direct connectivity on his own. “I effectively moved my data center into Chicago. Now I can compete with a better price on data communication and networks,” he said.
+
+Cretney estimates that by moving Colony’s networking from a smaller, local provider to Chicago, the company is seeing an annual cost savings of 50 percent for network connectivity that includes telecommunications.
+
+Also, Colony wants to adopt a multi-cloud-provider strategy to avoid vendor lock-in, and it gets that by using Equinix as its network connection. As the company eventually uses Microsoft Azure, Google Cloud and other providers, Equinix can provide flexible and economical interconnections, he said.
+
+### Colos reduce the need for enterprise data-center real estate
+
+In 2014, 80 percent of data centers were owned by enterprises, while colos and the early cloud accounted for 20 percent, said Villars. Today that’s a 50-50 split, and by 2022-2023, IDC projects service providers will own 70 percent of the large-data-center space.
+
+For the past five years, the amount of new data-center construction by enterprises has been falling steadily at 5 to 10 percent per year, said Villars. “They are not building new ones because they are coming to the realization that being an expert at data-center construction is not something a company has in-house.”
+
+Enterprises across many sectors are looking at their data-center environment and leveraging things like virtual machines and SSD, thereby compressing the size of their data centers and getting more work done within smaller physical footprints. “So at some point they ask if they are spending appropriately for this space. That’s when they look at colo,” said Villars.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
+[2]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[3]: https://www.networkworld.com/article/3391380/does-your-cloud-access-security-broker-support-ipv6-it-should.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190709 Improving IT Operations - Key to Business Success in Digital Transformation.md b/sources/talk/20190709 Improving IT Operations - Key to Business Success in Digital Transformation.md
new file mode 100644
index 0000000000..5ce4f7edfe
--- /dev/null
+++ b/sources/talk/20190709 Improving IT Operations - Key to Business Success in Digital Transformation.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Improving IT Operations – Key to Business Success in Digital Transformation)
+[#]: via: (https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html)
+[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
+
+Improving IT Operations – Key to Business Success in Digital Transformation
+======
+
+![Artem Peretiatko][1]
+
+Forty-seven percent of CEOs say they are being “challenged” by their board of directors to show progress in shifting toward a digital business model, according to the [Gartner 2018 CIO][2] Agenda Industry Insights Report. By improving IT operations, organizations can progress and even accelerate their digital transformation initiatives efficiently and successfully. The biggest barrier to success is that IT currently spends around 78 percent of its budget and 80 percent of its time just maintaining IT operations, leaving little time and few resources for innovation, according to ZK Research[*][3].
+
+### Do you cut the operations budget or invest more in transforming operations?
+
+The Cisco IT Operations Readiness Index 2018 predicted a dramatic change in IT operations as CIOs embrace analytics and automation. The study reported that 88 percent of respondents identify investing in IT operations as key to driving preemptive practices and enhancing customer experience.
+
+### What does this have to do with the wide area network?
+
+According to the IT Operations Readiness Index, 73 percent of respondents will collect WAN operational or performance data, and 70 percent will analyze WAN data and leverage the results to further automate network operations. However, compared with other IT infrastructure functions (e.g., IoT, IP telephony, network infrastructure, data center infrastructure, WAN connectivity), security is today the most data-driven. The big questions are:
+
+ * How do you collect operations data and what data should you collect?
+ * How do you analyze it?
+ * How do you then automate IT operations based on the results?
+
+
+
+By no means is this a simple task. IT departments use a combination of data collected internally and by outside vendors to aggregate information used to transform operations and make better business decisions.
+
+In a recent [survey][4] by Frost & Sullivan, 94 percent of respondents indicated they will deploy a Software-defined Wide Area Network ([SD-WAN][5]) in the next 24 months. SD-WAN fills a gap that router-centric WAN architectures were never designed to address. A business-driven SD-WAN, designed from the ground up to support a cloud-first business model, provides significantly more network and application performance visibility, helping enterprises realize the transformational promise of a digital business model. In fact, Gartner indicates that 90 percent of WAN edge decisions will be based on SD-WAN by 2023.
+
+### How an SD-WAN can improve IT operations leading to successful digital transformation
+
+Not all SD-WAN solutions are created alike. One of the key capabilities organizations need to consider and evaluate is complete observability across the network and applications through a single pane of glass. Without visibility, IT risks running inefficient operations that will stifle digital transformation initiatives. This real-time visibility must provide:
+
+ * Operational metrics that enable IT and CIOs to shift from a reactive toward a predictive practice
+ * A centralized dashboard that allows IT to monitor, in real-time, all aspects of network operations – a dashboard that has flexible knobs to adjust and collect metrics from all WAN edge appliances to accelerate problem resolution
+
+
+
+The Silver Peak Unity [EdgeConnect™][6] SD-WAN edge platform provides granular visibility into network and application performance. The EdgeConnect platform ensures the highest quality of experience for both end users and IT. End users enjoy always-consistent, always-available application performance including the highest quality of voice and video, even over broadband. Utilizing the [Unity Orchestrator™][7] comprehensive management dashboard as shown below, IT gains complete observability into the performance attributes of the network and applications in real-time. Customizable widgets provide a wealth of operational data including a health heatmap for every SD-WAN appliance deployed, flow counts, active tunnels, logical topologies, top talkers, alarms, bandwidth consumed by each application and location, application latency and jitter and much more. Furthermore, the platform maintains a week’s worth of data with context allowing IT to playback and see what has transpired at a specific time and location, analogous to a DVR.
+
+By providing complete observability of the entire WAN, IT spends less time troubleshooting network and application bottlenecks and fielding support/help desk calls day and night, and more time focused on strategic business initiatives.
+
+![][8]
+
+This solution brief, “[Simplify SD-WAN Operations with Greater Visibility][9]”, provides additional detail on the capabilities offered in the business-driven EdgeConnect SD-WAN edge platform that enables businesses to accelerate their shift toward a digital business model.
+
+![][10]
+
+* ZK Research quote from [Cisco IT Operations Readiness Index 2018][11]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3407698/improving-it-operations-key-to-business-success-in-digital-transformation.html
+
+作者:[Rami Rammaha][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Rami-Rammaha/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/07/istock-1096811078_1200x800-100801264-large.jpg
+[2]: https://www.gartner.com/smarterwithgartner/is-digital-a-priority-for-your-industry/
+[3]: https://blog.silver-peak.com/improving-it-operations-key-to-business-success-in-digital-transformation#footnote
+[4]: https://www.silver-peak.com/sd-wan-edge-survey
+[5]: https://www.silver-peak.com/sd-wan
+[6]: https://www.silver-peak.com/products/unity-edge-connect
+[7]: https://www.silver-peak.com/products/unity-orchestrator
+[8]: https://images.idgesg.net/images/article/2019/07/silver-peak-unity-edgeconnect-sdwan-100801265-large.jpg
+[9]: https://www.silver-peak.com/resource-center/simplify-sd-wan-operations-greater-visibility
+[10]: https://images.idgesg.net/images/article/2019/07/simplify-sd-wan-operations-with-greater-visibility-100801266-large.jpg
+[11]: https://s3-us-west-1.amazonaws.com/connectedfutures-prod/wp-content/uploads/2018/11/CF_transforming_IT_operations_report_3-2.pdf
diff --git a/sources/talk/20190709 Linux a key player in the edge computing revolution.md b/sources/talk/20190709 Linux a key player in the edge computing revolution.md
new file mode 100644
index 0000000000..df1dba9344
--- /dev/null
+++ b/sources/talk/20190709 Linux a key player in the edge computing revolution.md
@@ -0,0 +1,112 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux a key player in the edge computing revolution)
+[#]: via: (https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Linux a key player in the edge computing revolution
+======
+Edge computing is augmenting the role that Linux plays in our day-to-day lives. A conversation with Jaromir Coufal from Red Hat helps to define what the edge has become.
+![Dominic Smith \(CC BY 2.0\)][1]
+
+In the past few years, [edge computing][2] has been revolutionizing how some very familiar services are provided to individuals like you and me, as well as how services are managed within major industries. Try to get your arms around what edge computing is today, and you might just discover that your arms aren’t nearly as long or as flexible as you’d imagined. And Linux is playing a major role in this ever-expanding edge.
+
+One reason why edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges – depending on what compute features are needed. He suggests that we can think of the edge as something of a continuum of capabilities with the problem being resolved determining where along that particular continuum any edge solution will rest.
+
+**[ Also read: [What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
+
+Some forms of edge computing include consumer electronics installed in millions of homes, others that help tens of thousands of small businesses operate their facilities, and still others that tie large companies to their remote sites. Key to this elusive definition is the idea that edge computing always distributes the workload so that the bulk of the computing work is done remotely from the central core of the business and close to the business problem being addressed.
+
+Done properly, edge computing can provide services that are both faster and more reliable. Applications running on the edge can be more resilient and run considerably faster because their required data resources are local. In addition, data can be processed or analyzed locally, often requiring only periodic transfer of results to central sites.
+
+While physical security might be lower at the edge, edge devices often implement security features that allow them to detect 1) manipulation of the device, 2) malicious software, and 3) a physical breach, and to wipe data when one occurs.
+
+### Benefits of edge computing
+
+Some of the benefits of edge computing include:
+
+ * A quick response to intrusion detection, including the ability for a remote device to detach or self-destruct
+ * The ability to instantly stop communication when needed
+ * Constrained functionality and fewer generic entry points
+ * Rugged and reliable problem resistance
+ * Making the overall computing system harder to attack because computing is distributed
+ * Less data-in-transit exposure
+
+
+
+Some examples of edge computing devices include those that provide:
+
+ * Video surveillance – watching for activity, reporting only if seen
+ * Controlling autonomous vehicles
+ * Production monitoring and control
+
+
+
+### Edge computing success story: Chick-fil-A
+
+One impressive example of highly successful edge computing caught me by surprise. It turns out Chick-fil-A uses edge computing devices to help manage its food preparation services. At Chick-fil-A, edge devices:
+
+ 1. Analyze a fryer’s cleaning and cooking
+ 2. Aggregate data as a failsafe in case internet connectivity is lost
+ 3. Help with decision-making about cooking – how much and how long to cook
+ 4. Enhance business operations
+ 5. Help automate the complex food cooking and holding decisions so that even newbies get things right
+ 6. Function even when the connection with the central site is down
+
+
+
+As Coufal pointed out, Chick-fil-A runs [Kubernetes][5] at the edge in every one of its restaurants. Its key motivators are low latency, scale of operations, and business continuity. And it seems to be working extremely well.
+
+[Chick-fil-A’s hypothesis][6] captures it all: By making smarter kitchen equipment, we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.
+
+### Are you edge-ready?
+
+There’s no quick answer as to whether your organization is “edge ready.” Many factors determine what kind of services can be deployed on the edge and whether and when those services need to communicate with more central devices. Some of these include:
+
+ * Whether your workload can be functionally distributed
+ * If it’s OK for devices to have infrequent contact with the central services
+ * If devices can work properly when cut off from their connection back to central services
+ * Whether the devices can be secured (e.g., trusted not to provide an entry point)
+
+
+
+Implementing an edge computing network will likely take a long time from initial planning to implementation. Still, this kind of technology is taking hold and offers some strong advantages.
+
+Coufal noted that it's been 15 or more years since edge computing concepts and technologies were first introduced, but renewed interest has come about due to tech advances enabling new uses that require this technology.
+
+**More about edge computing:**
+
+ * [How edge networking and IoT will reshape data centers][4]
+ * [Edge computing best practices][7]
+ * [How edge computing can help secure the IoT][8]
+
+
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3407702/linux-a-key-player-in-the-edge-computing-revolution.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/07/telecom-100801330-large.jpg
+[2]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[5]: https://www.infoworld.com/article/3268073/what-is-kubernetes-container-orchestration-explained.html
+[6]: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-a-7d67242675e2
+[7]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[8]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190711 Smarter IoT concepts reveal creaking networks.md b/sources/talk/20190711 Smarter IoT concepts reveal creaking networks.md
new file mode 100644
index 0000000000..86150bc580
--- /dev/null
+++ b/sources/talk/20190711 Smarter IoT concepts reveal creaking networks.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Smarter IoT concepts reveal creaking networks)
+[#]: via: (https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Smarter IoT concepts reveal creaking networks
+======
+Today’s networks don’t meet the needs of emergent internet of things systems. IoT systems need their own modern infrastructure, researchers at the University of Magdeburg say.
+![Thinkstock][1]
+
+The internet of things (IoT) needs its own infrastructure ecosystem — one that doesn't use external clouds at all, researchers at the University of Magdeburg say.
+
+The computer scientists recently obtained funding from the German government to study how to build a future generation of revolutionary, emergent IoT systems. They say networks must be fault tolerant, secure, and able to traverse disparate protocols, which today's networks are not.
+
+**[ Read also: [What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3] ]**
+
+The researchers say a smarter, unique, and organic infrastructure needs to be developed for the IoT and that simply adapting the IoT to traditional networks won't work. They say services must self-organize and function autonomously and that people must accept the fact that we are using the internet in ways never originally intended.
+
+"The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications,” the researchers say in their [media release][4]. The internet has centralized security, which causes choke points, and an inherent lack of dynamic controls, which translates to inflexibility in access rights — all of which make it difficult to adapt the IoT to it.
+
+Device, data, and process management must be integrated into IoT systems, say the group behind the project, called [DoRIoT][5] (Dynamische Laufzeitumgebung für Organisch (dis-)Aggregierende IoT-Prozesse), translated as Dynamic Runtime Environment for Organic dis-Aggregating IoT Processes.
+
+“In order to close this gap, concepts [will be] developed in the project that transparently realize the access to the data,” says Professor Sebastian Zug of the University of Freiberg, a partner in DoRIoT. “For the application, it should make no difference whether the specific information requirement is answered by a server or an IoT node.”
+
+### Extreme edge computing
+
+In other words, servers and nodes, conceptually, should merge. One could argue it’s a form of extreme [edge computing][6], which is when processing and data storage are taken out of traditional, centralized data center environments and placed close to where the resources are required. It reduces latency, among other advantages.
+
+DoRIoT may take edge computing one step further. Detecting failures ahead of time and seamlessly migrating devices are goals, too; services can’t fail just because a new kind of device is introduced.
+
+“The systems [will] benefit from each other, for example, they can share computing power, data and so on,” says Mesut Güneş of Magdeburg’s [Faculty of Computer Science Institute for Intelligent Cooperating Systems][7].
+
+“The result is an enormous data pool,” the researchers explain. “Which, in turn, makes it possible to make much more precise statements, for example when predicting climate models, observing traffic flows, or managing large factories in Industry 4.0.”
+
+[Industry 4.0][8] refers to smart factories that have connected machines autonomously self-managing their own supply chain, production output, and logistics without human intervention.
+
+Managing risks better than the current internet is one of DoRIoT's goals. The idea is to “guarantee full sovereignty over proprietary data.” To get there, though, one has to eliminate dependency on the cloud and access to data via third parties, they say.
+
+“This allows companies to be independent of the server infrastructures of external service providers such as Google, Microsoft or Amazon, which are subject to constant changes and even may not be accessible,” they say.
+
+**More about edge networking**
+
+ * [How edge networking and IoT will reshape data centers][3]
+ * [Edge computing best practices][9]
+ * [How edge computing can help secure the IoT][10]
+
+
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/industry_4-0_industrial_iot_internet_of_things_network_thinkstock_613880008-100749946-large.jpg
+[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[4]: http://www.ovgu.de/en/University/In+Profile/Key+Profile+Areas/Research/Secure+data+protection+in+the+new+internet+of+things.html
+[5]: http://www.doriot.net/
+[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[7]: http://iks.cs.ovgu.de/iks/en/ICS.html
+[8]: https://www.networkworld.com/article/3199671/what-is-industry-4-0.html
+[9]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[10]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190716 Server hardware makers shift production out of China.md b/sources/talk/20190716 Server hardware makers shift production out of China.md
new file mode 100644
index 0000000000..be29283977
--- /dev/null
+++ b/sources/talk/20190716 Server hardware makers shift production out of China.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Server hardware makers shift production out of China)
+[#]: via: (https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Server hardware makers shift production out of China
+======
+Tariffs on Chinese products and unstable U.S./China relations cause server makers to speed up their move out of China.
+![Etereuti \(CC0\)][1]
+
+The supply chain of vendors that build servers and network communication devices is accelerating its shift of production out of China to Taiwan and North America, along with other nations not subject to the trade war between the U.S. and China.
+
+Last May, the Trump Administration levied tariffs on a number of imported Chinese goods, computer components among them. The tariffs ranged from 10% to 25%. Consumers were hit hardest, since they are more price sensitive than IT buyers. PC World said the [average laptop price could rise by $120][2] due to the tariffs alone.
+
+But since the tariffs were based on the value of the product, server hardware prices could skyrocket, because servers cost much more than PCs.
+
+**[ Read also: [HPE’s CEO lays out his technology vision][3] ]**
+
+### Companies that are moving production out of China
+
+The Taiwanese tech publication DigiTimes reported (article now locked behind a paywall) that Mitac Computing Technology, a server ODM, reactivated an old production line at Hsinchu Science Park (HSP) in Taiwan at the end of 2018 and restarted another for motherboard surface-mount technology (SMT) processing in March 2019. The company plans to establish one more SMT production line before the end of 2019.
+
+It went on to say Mitac plans to produce all of its high-end U.S.-bound servers in Taiwan and is looking to move 30% of its overall server production lines back to Taiwan in the next three years.
+
+Wiwynn, a cloud computing server subsidiary of Wistron, is primarily assembling its U.S.-bound servers in Mexico and has also recently established a production site in southern Taiwan per clients' requests.
+
+Taiwan-based server chassis and assembly player AIC recently expanded the number of its factories in Taiwan to four and has been aggressively forming cooperation with its partners to expand its capacity. Many Taiwan-based component suppliers are also expanding their capacity in Taiwan.
+
+**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**
+
+Several ODMs, such as Inventec, Wiwynn, Wistron, and Foxconn, have plants in Mexico, while Quanta Computer has production lines in the U.S. Wiwynn also plans to open manufacturing facilities in the eastern U.S.
+
+“This is not something that just happened overnight, it’s a process that started a few years ago. The tariffs just accelerated the desire of ODMs to do it,” said Ashish Nadkarni, group vice president for infrastructure systems, platforms and technologies at IDC. “Since [President] Trump has come into office there has been saber rattling about China and a trade war. There has also been a focus on margins.”
+
+He added that component makers are definitely moving out of China to other parts of Asia, like Korea, the Philippines, and Vietnam.
+
+### HPE, Dell and Lenovo should remain unaffected
+
+The big three branded server makers are all largely immunized against the tariffs. HP Enterprise, Dell, and Lenovo all have U.S.-based assemblies and their contract manufacturers are in Taiwan, said Nadkarni. So, their costs should remain unaffected by tariffs.
+
+The tariffs are not affecting sales so much as they are squeezing the revenue of hyperscale whitebox vendors. Hyperscale companies such as Amazon Web Services (AWS), Microsoft, and Google have contracts with vendors such as Inspur and Super Micro, and if prices fluctuate, that’s not their problem. The hardware vendor is expected to deliver at the agreed cost.
+
+So the added costs can’t be passed on to the customer, and already paper-thin margins must absorb them, unlike in the aforementioned laptop example.
+
+“It’s not the end customers who are affected by it, it’s the vendors who are affected by it. Certain things they can pass on, like component prices. But if the build value goes up, that’s not the customer’s problem, that’s the vendor’s problem,” said Nadkarni.
+
+So while it may cost you more to buy a laptop as this trade fracas goes on, it shouldn’t cost more to buy a server.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/asia_china_flag_grunge-stars_pixabay_etereuti-100763424-large.jpg
+[2]: https://www.pcworld.com/article/3403405/trump-tariffs-on-chinese-goods-could-cost-you-120-more-for-notebook-pcs-say-dell-hp-and-cta.html
+[3]: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html
+[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190717 How edge computing is driving a new era of CDN.md b/sources/talk/20190717 How edge computing is driving a new era of CDN.md
new file mode 100644
index 0000000000..643d3aa713
--- /dev/null
+++ b/sources/talk/20190717 How edge computing is driving a new era of CDN.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How edge computing is driving a new era of CDN)
+[#]: via: (https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+How edge computing is driving a new era of CDN
+======
+A CDN is an edge application and an edge application is a superset of what your CDN is doing.
+![geralt \(CC0\)][1]
+
+We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from a management perspective, is now redundant. Today, users and data are omnipresent.
+
+Customer expectations have surged because of this evolution. There is now a higher expectation of quality of service and less patience from customers. In the past, one could patiently wait 10 hours to download content, but that is certainly not the case today. Nowadays we have high expectations and high performance requirements, but there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, bufferbloat and a list of other [performance-related problems][2] that I wrote about on Network Insight. _[Disclaimer: the author is employed by Network Insight.]_
+
+The internet is also growing at an accelerated rate. By 2020, the internet is expected to reach 1.5 gigabytes of traffic per day per person. The world of the Internet of Things (IoT), driven by connected objects, will far exceed these figures. For example, a connected airplane will generate around 5 terabytes of data per day. This spiraling volume requires a new approach to data management and forces us to rethink how we deliver applications.
+
+[RELATED: How Notre Dame is going all in with Amazon’s cloud][3]
+
+Why? Because all this information cannot be processed by a single cloud or an on-premises location. Latency will always be a problem. For example, in virtual reality (VR), anything over 7 milliseconds will cause motion sickness. When decisions must be made in real time, you cannot send data to the cloud. You can, however, make use of edge computing and a multi-CDN design.
+
+### Introducing edge computing and multi-CDN
+
+The rate of cloud adoption, all-things-video, IoT, and edge computing are bringing life back to CDNs and multi-CDN designs. Typically, a multi-CDN is an implementation pattern that includes more than one CDN vendor. Traffic is directed using different metrics, whereby it can be either load balanced or failed over across the different vendors.
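The load-balance-or-failover pattern described above can be sketched in a few lines. Everything here is illustrative: the vendor names, the latency samples, and the `pick_cdn` helper are invented for this sketch, and a real deployment would feed the selector from a traffic-management system's live measurements rather than hard-coded lists.

```python
# Hypothetical per-vendor latency samples (ms); in practice these would come
# from a traffic-management system's real measurements, not constants.
measurements = {
    "cdn-a": [42, 38, 45],
    "cdn-b": [55, 60, 51],
}

def pick_cdn(measurements, unavailable):
    """Pick the healthy vendor with the lowest mean latency.

    Combines the two multi-CDN policies from the text: performance-based
    steering (lowest latency wins) and failover (skip unhealthy vendors).
    """
    healthy = {v: s for v, s in measurements.items() if v not in unavailable}
    if not healthy:
        raise RuntimeError("no CDN vendor available")
    return min(healthy, key=lambda v: sum(healthy[v]) / len(healthy[v]))

print(pick_cdn(measurements, set()))        # fastest vendor wins: cdn-a
print(pick_cdn(measurements, {"cdn-a"}))    # failover to the remaining vendor
```

In a real multi-CDN, this decision is usually made in DNS or at request-routing time, but the selection logic reduces to the same idea.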
+
+Edge computing moves actions as close as possible to the source. It is the point where the physical world interacts with the digital world. Logically, the decentralized approach of edge computing will not take over the centralized approach. They will be complementary to each other, so that the application can run at its peak level, depending on its position in the network.
+
+For example, in IoT, saving battery life is crucial. Let’s assume an IoT device can conduct a transaction in a 10 ms round-trip time (RTT) instead of 100 ms. As a result, it can use 10 times less battery.
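As a sanity check on that arithmetic, here is a toy model of the battery argument. It assumes the radio must stay awake for the full round trip, so energy per transaction scales linearly with RTT; the power-draw constant is invented for illustration, not a measured value.

```python
RADIO_POWER_MW = 100  # hypothetical active-radio power draw, milliwatts

def energy_per_transaction_uj(rtt_ms, power_mw=RADIO_POWER_MW):
    """Energy (microjoules) ~= power (mW) x radio-on time (ms)."""
    return power_mw * rtt_ms

edge = energy_per_transaction_uj(10)    # 10 ms RTT to a nearby edge node
cloud = energy_per_transaction_uj(100)  # 100 ms RTT to a distant cloud
print(cloud / edge)  # 10.0: the "10 times less battery" in the text
```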
+
+### The internet, a performance bottleneck
+
+The internet is designed on the principle that everyone can talk to everyone, thereby providing universal connectivity whether required or not. There have been a number of design changes, with network address translation (NAT) being the biggest. However, the role of the internet has essentially remained the same in terms of connectivity, regardless of location.
+
+With this type of connectivity model, distance is an important determinant of application performance. Users on the other side of the planet will suffer regardless of buffer sizes or other device optimizations. A long RTT is experienced as packets go back and forth before the actual data transmission. Caching and traffic redirection are used, but with limited success so far.
+
+### The principles of application delivery
+
+When transmission control protocol (TCP) starts, it thinks it is back in the late 1970s. It assumes that all services are on a local area network (LAN) and that there is no packet loss, and it works backward from there. Back when it was designed, we didn’t have real-time traffic, such as voice and video, which is latency- and jitter-sensitive.
+
+TCP was designed for ease of use and reliability, not to boost performance; to get performance, you actually need to optimize the TCP stack, and this is a task CDNs are very good at. For example, if a connection is received from a mobile phone, a CDN will start with the assumption that there is going to be high jitter and packet loss. This allows it to size the TCP window to accurately match network conditions.
+
+How do you magnify performance, and what options do you have? Generally, many look to lowering latency. However, with applications such as video streaming, latency does not tell you whether the video is going to buffer. One can only assume that lower latency will lead to less buffering. In such a scenario, a throughput-based measurement is a far better performance metric, since it tells you how fast an object will load.
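A minimal sketch of the distinction: throughput is derived from the object's size and the time it actually took to load, which a latency probe alone cannot tell you. The object sizes and load times below are hypothetical numbers chosen for illustration.

```python
def throughput_mbps(object_bytes, load_time_s):
    """Effective throughput of a fetched object, in megabits per second.

    Unlike a ping, this reflects how fast the object actually loaded,
    which is what predicts buffering for streaming workloads.
    """
    return (object_bytes * 8) / (load_time_s * 1_000_000)

# Two hypothetical fetches of a 5 MB video segment: the lower-latency path
# is not necessarily the one that delivers the segment faster.
print(throughput_mbps(5_000_000, 2.0))  # 20.0 Mbit/s
print(throughput_mbps(5_000_000, 8.0))  # 5.0 Mbit/s, likely to buffer
```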
+
+We also have to consider page load times. At the network level, the common measures are time to first byte (TTFB) and ping. However, these mechanisms don’t tell you much about the user experience, because everything fits into one packet. Using ping will not inform you about bandwidth problems.
+
+And if a web page slows down by 25% once packet loss exceeds 5%, and you are measuring time to first byte - which is the 4th packet - what exactly can you learn? TTFB is comparable to an internet control message protocol (ICMP) request just one layer up the stack. It's good if something is broken, but not if there is an underperformance issue.
+
+When you examine the history of TTFB measuring, you will find that it was deployed due to the lack of Real User Monitoring (RUM) measurements. Previously, TTFB was a reasonable approximation of how fast something was going to load, but we don't have to approximate anymore, because we can measure it with RUM. RUM consists of measurements from end-users - for example, the metrics generated from a webpage that is being served to an actual user.
+
+In conclusion, TTFB, ping, and page load times are not sophisticated measurements. We should prefer RUM measurements wherever we can, as they provide a more accurate picture of the user experience. This is something that has become critical over the last decade.
+
+Now we are living in a world of RUM, which lets us build our network based on what matters to the business users. All CDNs should aim for RUM measurements. For this, they may need to integrate with traffic management systems that intelligently measure what the end-user really sees.
+
+### The need for multi-CDN
+
+Primarily, the reasons one would opt for a multi-CDN environment are availability and performance. No single CDN can be the fastest to everyone and everywhere in the world. It is impossible due to the internet's connectivity model. However, combining the best of two or even more CDN providers will increase the performance.
+
+A multi-CDN will give faster performance and higher availability than can be achieved with a single CDN. A good design runs two availability zones. A better design runs two availability zones with a single CDN provider. A superior design, however, runs two availability zones in a multi-CDN environment.
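The availability argument can be checked with a little probability. Assuming, as an idealization, that each CDN fails independently, the chance that all of them are down at once is the product of their individual failure probabilities (the 99.9% figures below are hypothetical):

```python
# Sketch: combined availability of independent CDNs, assuming any
# one of them can serve a request. The 99.9% figures are hypothetical.

def combined_availability(*availabilities: float) -> float:
    """1 minus the probability that every CDN is down at once."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= 1.0 - a
    return 1.0 - p_all_down

single = 0.999                                # one CDN at 99.9%
multi = combined_availability(0.999, 0.999)   # two such CDNs combined

print(f"single CDN: {single:.4%}")
print(f"multi-CDN:  {multi:.4%}")
```

Independence is an idealization - correlated failures such as a shared DNS outage or a common upstream reduce the benefit - but the direction of the effect is why two providers beat one.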
+
+### Edge applications will be the new norm
+
+It’s not that long ago that there was a transition from the heavy physical monolithic architecture to the agile cloud. But all that really happened was the transition from the physical appliance to a virtual cloud-based appliance. Maybe now is the time that we should ask, is this the future that we really want?
+
+One of the main issues in introducing edge applications is the mindset. It is challenging to convince yourself or your peers that the infrastructure you have spent all your time working on and investing in is not the best way forward for your business.
+
+Although the cloud has created a big buzz, just because you migrate to the cloud does not mean that your applications will run faster. In fact, all you are really doing is abstracting the physical pieces of the architecture and paying someone else to manage it. The cloud has, however, opened the door for the edge application conversation. We have already taken the first step to the cloud and now it's time to make the second move.
+
+Basically, an edge application at its simplest is a programmable CDN. A CDN is an edge application, and an edge application is a superset of what your CDN is doing. Edge applications denote cloud computing at the edge: a paradigm that distributes the application closer to the source for lower latency, additional resilience, and simplified infrastructure, while you still retain control and privacy.
+
+From an architectural point of view, an edge application provides more resilience than deploying centralized applications. In today's world of high expectations, resilience is a necessity for the continuity of business. Edge applications allow you to collapse the infrastructure into an architecture that is cheaper, simpler, and more attentive to the application. The less infrastructure you have to manage, the more time you can focus on what really matters to your business - the customer.
+
+### An example of an edge architecture
+
+In one example of an edge architecture, within each PoP every application has its own isolated JavaScript (JS) environment. JavaScript is great for security isolation, and its performance guarantees scale. Each JavaScript environment is a dedicated, isolated instance that executes the code at the edge.
+
+Most likely, each JavaScript environment has its own virtual machine (VM). The only thing the VM runs is the JavaScript runtime engine, and the only code that engine executes is the customer's. One could use Google's V8, the open-source high-performance JavaScript and WebAssembly engine.
+
+Let’s face it: if you continue building more PoPs, you will hit the law of diminishing returns. When it comes to applications such as mobile, you quickly max out what throwing more PoPs at the problem can achieve, so we need to find another solution.
+
+In the coming times, we are going to witness a trend where most applications become global, which means edge applications. It makes little sense to place the entire application in one location when your users are everywhere else.
+
+**This article is published as part of the IDG Contributor Network. [Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2017/02/network-traffic-100707086-large.jpg
+[2]: https://network-insight.net/2016/12/buffers-packet-drops/
+[3]: https://www.networkworld.com/article/3014599/cloud-computing/how-notre-dame-is-going-all-in-with-amazon-s-cloud.html#tk.nww-fsb
+[4]: https://www.networkworld.com/contributor-network/signup.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190717 Public internet should be all software-defined.md b/sources/talk/20190717 Public internet should be all software-defined.md
new file mode 100644
index 0000000000..3b834bea66
--- /dev/null
+++ b/sources/talk/20190717 Public internet should be all software-defined.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Public internet should be all software-defined)
+[#]: via: (https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Public internet should be all software-defined
+======
+Having a programmable public internet will correct inefficiencies in the current system, engineers at NOIA say.
+![Thinkstock][1]
+
+The public internet should migrate to a programmable backbone-as-a-service architecture, says a team of network engineers behind NOIA, a startup promising to revolutionize global traffic. They say the internet will be more efficient if internet protocols and routing technologies are re-worked and then combined with a traffic-trading blockchain.
+
+It’s “impossible to use internet for modern applications,” the company says on its website. “Almost all global internet companies struggle to ensure uptime and reliable user experience.”
+
+That’s because modern techniques aren’t being introduced fully, NOIA says. The engineers say algorithms should be implemented to route traffic, that segment routing technology should be adopted, and that blockchain should be used to trade internet transit capacity. A “programmable internet solves the web’s inefficiencies,” a representative from NOIA told me.
+
+**[ Read also: [What is IPv6, and why aren’t we there yet?][2] ]**
+
+### Deprecate the public internet
+
+NOIA has started introducing a caching, distributed content delivery application to improve website loading times, but it wants to ultimately deprecate the existing internet completely.
+
+The company currently has 353 active cache nodes around the world, with a total of 27 terabytes of storage for that caching system—NOIA clients contribute spare bandwidth and storage. It’s also testing a network backbone using four providers with European and American locations that it says will be the [development environment for its envisaged software-defined and radical internet replacement][3].
+
+### The problem with today's internet
+
+The “internet is a mesh of tangled up cables,” [NOIA says][4]. “Thousands of physically connected networks” are involved. Any configuration alteration in any of that jumble of networks causes issues with the protocols, it explains. The company is referring to Border Gateway Protocol (BGP), which lets routers discover paths to IP addresses through the disparate network. Because BGP only forwards to a neighboring router, it doesn’t manage the entire route. That introduces “severe variability,” or unreliability.
+
+“It is impossible to guarantee service reliability without using overlay networks. Low-latency, performance-critical applications, and games cannot operate on public Internet,” the company says.
+
+### How a software-defined internet works
+
+NOIA's idea is to use [IPv6][5], the latest internet protocol. IPv6 features an expanded address space and allows custom extension headers. The company then adds segment routing to create Segment Routing over IPv6 (SRv6). That SRv6 combo adds routing information to each data packet sent—a packet-level programmable network, in other words.
+
+Segment routing, roughly, is an updated internet protocol that lets routers comprehend routing information in packet headers and then perform the routing. Cisco has been using it, too.
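As a rough sketch of the mechanics described above, the toy model below mimics how an SRv6 segment list steers a packet: the segment list rides in an extension header in reverse order, and each segment endpoint decrements a Segments Left counter and copies the next segment into the IPv6 destination address. This is an illustration of the idea, not the real wire encoding (which RFC 8754 defines), and the addresses are hypothetical documentation-prefix values.

```python
from dataclasses import dataclass

@dataclass
class SRv6Packet:
    """Toy model of an IPv6 packet carrying a segment routing header."""
    destination: str    # current IPv6 destination address
    segments: list      # segment list, encoded in reverse order
    segments_left: int  # index of the next segment to visit

def steer(path: list) -> SRv6Packet:
    """Build a packet that will visit every address in `path` in order."""
    return SRv6Packet(
        destination=path[0],
        segments=list(reversed(path)),
        segments_left=len(path) - 1,
    )

def endpoint_step(pkt: SRv6Packet) -> SRv6Packet:
    """What a segment endpoint does on receipt: advance to the next hop."""
    if pkt.segments_left > 0:
        pkt.segments_left -= 1
        pkt.destination = pkt.segments[pkt.segments_left]
    return pkt

pkt = steer(["2001:db8::a", "2001:db8::b", "2001:db8::c"])
while pkt.segments_left > 0:
    endpoint_step(pkt)
print(pkt.destination)  # 2001:db8::c, the final segment
```

The point of the model is that the sender chooses the whole path up front, packet by packet, rather than leaving each hop's decision to BGP.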
+
+NOIA’s network then adds the SRv6 amalgamation to distributed ledger technology (blockchain) in order to let ISPs and data centers buy and sell the routes—buyers can choose their routes in the exchange, too.
+
+In addition to trade, blockchain introduces security. It's worth noting that routings aren’t the only internet technologies that could be disrupted due to blockchain. In April I wrote about [organizations that propose moving data storage transactions over to distributed ledgers][6]. They say that will be more secure than anything seen before. [Ethernet’s lack of inherent security could be corrected by smart contract, trackable verifiable transactions][7], say some. And, of course, supply chain, the automotive vertical, and the selling of sensor data overall may emerge as [use-contenders for secure, blockchain in the internet of things][8].
+
+In NOIA’s case, with SRv6 blended with distributed ledgers, the encrypted ledger holds the IP addresses, but it is architecturally decentralized—no one controls it. That’s one element of added security, along with the aforementioned trading, provided by the ledger.
+
+That trading could handle the question of who’s paying for all this. However, NOIA says current internet hardware will be able to understand the segment routings, so no new equipment investments are needed.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/05/dns_browser_http_web_internet_thinkstock-100758191-large.jpg
+[2]: https://www.networkworld.com/article/3254575/lan-wan/what-is-ipv6-and-why-aren-t-we-there-yet.html
+[3]: https://medium.com/noia/development-update-06-20-07-04-2879f9fce3cb
+[4]: https://noia.network/
+[5]: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html
+[6]: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html
+[7]: https://www.networkworld.com/article/3356496/how-blockchain-will-manage-networks.html
+[8]: https://www.networkworld.com/article/3330937/how-blockchain-will-transform-the-iot.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190718 Worst DNS attacks and how to mitigate them.md b/sources/talk/20190718 Worst DNS attacks and how to mitigate them.md
new file mode 100644
index 0000000000..b490eeeea1
--- /dev/null
+++ b/sources/talk/20190718 Worst DNS attacks and how to mitigate them.md
@@ -0,0 +1,151 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Worst DNS attacks and how to mitigate them)
+[#]: via: (https://www.networkworld.com/article/3409719/worst-dns-attacks-and-how-to-mitigate-them.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Worst DNS attacks and how to mitigate them
+======
+DNS threats, including DNS hijacking, tunneling, phishing, cache poisoning and DDoS attacks, are all on the rise.
+![Max Bender \(CC0\)][1]
+
+The Domain Name System remains under constant attack, and there seems to be no end in sight as threats grow increasingly sophisticated.
+
+DNS, known as the internet’s phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email. While DNS has long been the target of assailants looking to steal all manner of corporate and private information, the threats in the [past year][2] or so indicate a worsening of the situation.
+
+**More about DNS:**
+
+ * [DNS in the cloud: Why and why not][3]
+ * [DNS over HTTPS seeks to make internet use more private][4]
+ * [How to protect your infrastructure from DNS cache poisoning][5]
+ * [ICANN housecleaning revokes old DNS security key][6]
+
+
+
+IDC reports that 82% of companies worldwide have faced a DNS attack over the past year. The research firm recently published its fifth annual [Global DNS Threat Report][7], which is based on a survey IDC conducted on behalf of DNS security vendor EfficientIP of 904 organizations across the world during the first half of 2019.
+
+According to IDC's research, the average costs associated with a DNS attack rose by 49% compared to a year earlier. In the U.S., the average cost of a DNS attack tops out at more than $1.27 million. Almost half of respondents (48%) report losing more than $500,000 to a DNS attack, and nearly 10% say they lost more than $5 million on each breach. In addition, the majority of U.S. organizations say that it took more than one day to resolve a DNS attack.
+
+“Worryingly, both in-house and cloud applications were damaged, with growth of over 100% for in-house application downtime, making it now the most prevalent damage suffered,” IDC wrote. "DNS attacks are moving away from pure brute-force to more sophisticated attacks acting from the internal network. This will force organizations to use intelligent mitigation tools to cope with insider threats."
+
+### Sea Turtle DNS hijacking campaign
+
+An ongoing DNS hijacking campaign known as Sea Turtle is one example of what's occurring in today's DNS threat landscape.
+
+This month, [Cisco Talos][8] security researchers said the people behind the Sea Turtle campaign have been busy [revamping their attacks][9] with new infrastructure and going after new victims.
+
+**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**
+
+In April, Talos released a [report detailing][11] Sea Turtle and calling it the “first known case of a domain name registry organization that was compromised for cyber espionage operations.” Talos says the ongoing DNS threat campaign is a state-sponsored attack that abuses DNS to harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect, which displays unique knowledge on how to manipulate DNS.
+
+By obtaining control of victims’ DNS, the attackers can change or falsify any data on the Internet and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reports.
+
+The hackers behind Sea Turtle appear to have regrouped after the April report from Talos and are redoubling their efforts with new infrastructure – a move Talos researchers find to be unusual: “While many actors will slow down once they are discovered, this group appears to be unusually brazen, and will be unlikely to be deterred going forward,” Talos [wrote][9] in July.
+
+“Additionally, we discovered a new DNS hijacking technique that we assess with moderate confidence is connected to the actors behind Sea Turtle. This new technique is similar in that the threat actors compromise the name server records and respond to DNS requests with falsified A records,” Talos stated.
+
+“This new technique has only been observed in a few highly targeted operations. We also identified a new wave of victims, including a country code top-level domain (ccTLD) registry, which manages the DNS records for every domain [that] uses that particular country code; that access was used to then compromise additional government entities. Unfortunately, unless there are significant changes made to better secure DNS, these sorts of attacks are going to remain prevalent,” Talos wrote.
+
+### DNSpionage attack upgrades its tools
+
+Another newer threat to DNS comes in the form of an attack campaign called [DNSpionage][12].
+
+DNSpionage initially used two malicious websites containing job postings to compromise targets via crafted Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. And the attackers are continuing to develop new assault techniques.
+
+“The threat actor's ongoing development of DNSpionage malware shows that the attacker continues to find new ways to avoid detection. DNS tunneling is a popular method of exfiltration for some actors, and recent examples of DNSpionage show that we must ensure DNS is monitored as closely as an organization's normal proxy or weblogs,” [Talos wrote][13]. “DNS is essentially the phonebook of the internet, and when it is tampered with, it becomes difficult for anyone to discern whether what they are seeing online is legitimate.”
+
+The DNSpionage campaign targeted various businesses in the Middle East as well as United Arab Emirates government domains.
+
+“One of the biggest problems with DNS attacks or the lack of protection from them is complacency,” said Craig Williams, director of Talos outreach. Companies think DNS is stable and that they don’t need to worry about it. “But what we are seeing with attacks like DNSpionage and Sea Turtle are kind of the opposite, because attackers have figured out how to use it to their advantage – how to use it to do damage to credentials in a way, in the case of Sea Turtle, that the victim never even knows it happened. And that’s a real potential problem.”
+
+If you know, for example, that your name server has been compromised, then you can force everyone to change their passwords. But if instead the attackers go after the registrar, and the registrar points to the bad guy’s name server, you never know it happened, because nothing of yours was touched – that’s why these new threats are so nefarious, Williams said.
+
+“Once attackers start using it publicly, successfully, other bad guys are going to look at it and say, ‘Hey, why don't I use that to harvest a bunch of credentials from the sites I am interested in,’” Williams said.
+
+### **The DNS IoT risk**
+
+Another developing risk would be the proliferation of IoT devices. The Internet Corporation for Assigned Names and Numbers (ICANN) recently wrote a [paper on the risk that IoT brings to DNS][14].
+
+“The IoT is a risk to the DNS because various measurement studies suggest that IoT devices could stress the DNS infrastructure in ways that we have not seen before,” ICANN stated. “For example, a software update for a popular IP-enabled IoT device that causes the device to use the DNS more frequently (e.g., regularly lookup random domain names to check for network availability) could stress the DNS in individual networks when millions of devices automatically install the update at the same time.”
+
+While this is a programming error from the perspective of individual devices, it could result in a significant attack vector from the perspective of DNS infrastructure operators. Incidents like this have already occurred on a small scale, but they may occur more frequently in the future due to the growth of heterogeneous IoT devices from manufacturers that equip their IoT devices with controllers that use the DNS, ICANN stated.
+
+ICANN also suggested that IoT botnets will represent an increased threat to DNS operators: “Larger DDoS attacks, partly because IoT bots are more difficult to eradicate. Current botnet sizes are on the order of hundreds of thousands. The most well-known example is the Mirai botnet, which involved 400K (steady-state) to 600K (peak) infected IoT devices. The Hajime botnet hovers around 400K infected IoT devices, but has not launched any DDoS attacks yet. With the growth of the IoT, these attacks may grow to involve millions of bots and as a result larger DDoS attacks.”
+
+### **DNS security warnings grow**
+
+The UK's [National Cyber Security Centre (NCSC)][15] issued a warning this month about ongoing DNS attacks, particularly focusing on DNS hijacking. It cited a number of risks associated with the uptick in DNS hijacking including:
+
+**Creating malicious DNS records.** A malicious DNS record could be used, for example, to create a phishing website that is present within an organization’s familiar domain. This may be used to phish employees or customers.
+
+**Obtaining SSL certificates.** Domain-validated SSL certificates are issued based on the creation of DNS records; thus an attacker may obtain valid SSL certificates for a domain name, which could be used to create a phishing website intended to look like an authentic website, for example.
+
+**Transparent proxying.** One serious risk employed recently involves transparently proxying traffic to intercept data. The attacker modifies an organization’s configured domain zone entries (such as “A” or “CNAME” records) to point traffic to their own IP address, which is infrastructure they manage.
+
+“An organization may lose total control of their domain and often the attackers will change the domain ownership details making it harder to recover,” the NCSC wrote.
+
+These new threats, as well as other dangers, led the U.S. government to issue a warning earlier this year about DNS attacks on federal agencies.
+
+The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) told all federal agencies to bolt down their DNS in the face of a series of global hacking campaigns.
+
+CISA said in its [Emergency Directive][16] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”
+
+CISA says that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records. Then the attacker alters DNS records, like Address, Mail Exchanger, or Name Server records, replacing the legitimate address of the services with an address the attacker controls.
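A minimal defensive check along the lines CISA describes is to compare the records a resolver returns against a known-good allow list and alert on anything unexpected. The sketch below is illustrative, not a hardened monitor; the domain name and IP addresses in it are hypothetical placeholders, not real records.

```python
# Sketch of a record-tampering check: flag any resolved A record
# that falls outside a known-good set. Domain and IPs are hypothetical.
import socket

KNOWN_GOOD = {
    "www.example.com": {"93.184.216.34"},
}

def resolve_a(name: str) -> set:
    """Look up the current IPv4 A records for a name."""
    return {info[4][0] for info in socket.getaddrinfo(name, None, socket.AF_INET)}

def looks_tampered(name: str, observed: set) -> bool:
    """True if any observed address is outside the allow list."""
    return not observed <= KNOWN_GOOD.get(name, set())

# A real monitor would run looks_tampered(name, resolve_a(name)) on a
# schedule and alert on True.
print(looks_tampered("www.example.com", {"203.0.113.9"}))  # True
```

This catches the record swap itself; it does not detect the upstream registrar compromise, which is exactly why CISA pairs such monitoring with credential hardening and DNSSEC.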
+
+These actions let the attacker direct user traffic to their own infrastructure for manipulation or inspection before passing it on to the legitimate service, should they choose. This creates a risk that persists beyond the period of traffic redirection, CISA stated.
+
+“Because the attacker can set DNS record values, they can also obtain valid encryption certificates for an organization’s domain names. This allows the redirected traffic to be decrypted, exposing any user-submitted data. Since the certificate is valid for the domain, end users receive no error warnings,” CISA stated.
+
+### **Get on the DNSSEC bandwagon**
+
+“Enterprises that are potential targets – in particular those that capture or expose user and enterprise data through their applications – should heed this advisory by the NSCS and should pressure their DNS and registrar vendors to make DNSSEC and other domain security best practices easy to implement and standardized,” said Kris Beevers, co-founder and CEO of DNS security vendor [NS1][17]. “They can easily implement DNSSEC signing and other domain security best practices with technologies in the market today. At the very least, they should work with their vendors and security teams to audit their implementations.”
+
+DNSSEC was in the news earlier this year when in response to increased DNS attacks, ICANN called for an intensified community effort to install stronger DNS security technology.
+
+Specifically, ICANN wants full deployment of the Domain Name System Security Extensions ([DNSSEC][18]) across all unsecured domain names. DNSSEC adds a layer of security on top of DNS. Full deployment of DNSSEC ensures end users are connecting to the actual web site or other service corresponding to a particular domain name, ICANN said. “Although this will not solve all the security problems of the Internet, it does protect a critical piece of it – the directory lookup – complementing other technologies such as SSL (https:) that protect the ‘conversation’, and provide a platform for yet-to-be-developed security improvements,” ICANN stated.
+
+DNSSEC technologies have been around since about 2010 but are not widely deployed, with less than 20% of the world’s DNS registrars having deployed it, according to the regional internet address registry for the Asia-Pacific region ([APNIC][19]).
+
+DNSSEC adoption has been lagging because it was viewed as optional and can require a tradeoff between security and functionality, said NS1's Beevers.
+
+### **Traditional DNS threats**
+
+While DNS hijacking may be the front line attack method, other more traditional threats still exist.
+
+The IDC/EfficientIP study found most popular DNS threats have changed compared with last year. Phishing (47%) is now more popular than last year’s favorite, DNS-based malware (39%), followed by DDoS attacks (30%), false positive triggering (26%), and lock-up domain attacks (26%).
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3409719/worst-dns-attacks-and-how-to-mitigate-them.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/08/anonymous_faceless_hooded_mand_in_scary_halloween_mask_finger_to_lips_danger_threat_stealth_attack_hacker_hush_silence_warning_by_max_bender_cc0_via_unsplash_1200x800-100766358-large.jpg
+[2]: https://www.fireeye.com/blog/threat-research/2019/01/global-dns-hijacking-campaign-dns-record-manipulation-at-scale.html
+[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
+[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
+[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
+[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
+[7]: https://www.efficientip.com/resources/idc-dns-threat-report-2019/
+[8]: https://www.talosintelligence.com/
+[9]: https://blog.talosintelligence.com/2019/07/sea-turtle-keeps-on-swimming.html
+[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[11]: https://blog.talosintelligence.com/2019/04/seaturtle.html
+[12]: https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html
+[13]: https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-karkoff.html
+[14]: https://www.icann.org/en/system/files/files/sac-105-en.pdf
+[15]: https://www.ncsc.gov.uk/news/ongoing-dns-hijacking-and-mitigation-advice
+[16]: https://cyber.dhs.gov/ed/19-01/
+[17]: https://ns1.com/
+[18]: https://www.icann.org/resources/pages/dnssec-qaa-2014-01-29-en
+[19]: https://www.apnic.net/
diff --git a/sources/talk/20190724 Data centers may soon recycle heat into electricity.md b/sources/talk/20190724 Data centers may soon recycle heat into electricity.md
new file mode 100644
index 0000000000..92298e1a01
--- /dev/null
+++ b/sources/talk/20190724 Data centers may soon recycle heat into electricity.md
@@ -0,0 +1,67 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Data centers may soon recycle heat into electricity)
+[#]: via: (https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Data centers may soon recycle heat into electricity
+======
+Rice University researchers are developing a system that converts waste heat into light and then that light into electricity, which could help data centers reduce computing costs.
+![Gordon Mah Ung / IDG][1]
+
+Waste heat is the scourge of computing. In fact, much of the cost of powering a computer comes from creating unwanted heat, because the inefficiencies in electronic circuits, caused by resistance in the materials, generate that heat. The processors are essentially converting expensively produced electrical energy into waste energy without computing anything.
+
+It’s a fundamental problem, and one that hasn’t been going away. But what if you could convert the unwanted heat back into electricity—recycle the heat back into its original energy form? The data center heat, instead of simply being disgorged into the atmosphere, with dubious eco-effects, could actually run more machines. Plus, your cooling costs would be taken care of—there’s nothing to cool, because you’ve already captured the hot air.
+
+**[ Read also: [How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
+
+Scientists at Rice University are trying to make that a reality by developing heat-scavenging and conversion solutions.
+
+Currently, the most efficient way to convert heat into electricity is through the use of traditional turbines.
+
+Turbines “can give you nearly 50% conversion efficiency,” says Chloe Doiron, a graduate student at Rice University and co-lead on the project, in a [news article][4] on the school’s website. Turbines convert the kinetic energy of moving fluids, like steam or combustion gases, into mechanical energy. The moving steam then shifts blades mounted on a shaft, which turns a generator, thus creating the power.
+
+Not a bad solution. The problem, though, is “those systems are not easy to implement,” the researchers explain. The issue is that turbines are full of moving parts, and they’re big, noisy, and messy.
+
+### Thermal emitter better than turbines for converting heat to energy
+
+A better option would be a solid-state, thermal device that could absorb heat at the source and simply convert it, perhaps straight into attached batteries.
+
+The researchers say a thermal emitter could absorb heat, jam it into tight, easy-to-capture bandwidth and then emit it as light. Cunningly, they would then simply turn the light into electricity, as we see all the time now in solar systems.
+
+“Thermal photons are just photons emitted from a hot body,” says Rice University professor Junichiro Kono in the article. “If you look at something hot with an infrared camera, you see it glow. The camera is capturing these thermally excited photons.” Indeed, all heated surfaces, to some extent, send out light as thermal radiation.
+
+The Rice team wants to use a film of aligned carbon nanotubes to do the job. The test system will be structured as an actual solar panel. That’s because solar panels, too, lose energy through heat, so are a good environment in which to work. The concept applies to other inefficient technologies, too. “Anything else that loses energy through heat [would become] far more efficient,” the researchers say.
+
+Around 20% of industrial energy consumption is unwanted heat, Doiron says. That's a lot of wasted energy.
+
+### Other heat conversion solutions
+
+Other heat scavenging devices are making inroads, too. Now-commercially available thermoelectric devices can convert a temperature difference into power, also with no moving parts. They function by exposing a specially made material to heat. [Electrons flow when one part is cold and one is hot][5]. And the University of Utah is working on [silicon for chips that generates electricity][6] as one of two wafers heats up.
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/07/flir_20190711t191326-100801627-large.jpg
+[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://news.rice.edu/2019/07/12/rice-device-channels-heat-into-light/
+[5]: https://www.networkworld.com/article/2861438/how-to-convert-waste-data-center-heat-into-electricity.html
+[6]: https://unews.utah.edu/beat-the-heat/
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190724 Reports- As the IoT grows, so do its threats to DNS.md b/sources/talk/20190724 Reports- As the IoT grows, so do its threats to DNS.md
new file mode 100644
index 0000000000..d9647304b9
--- /dev/null
+++ b/sources/talk/20190724 Reports- As the IoT grows, so do its threats to DNS.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Reports: As the IoT grows, so do its threats to DNS)
+[#]: via: (https://www.networkworld.com/article/3411437/reports-as-the-iot-grows-so-do-its-threats-to-dns.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Reports: As the IoT grows, so do its threats to DNS
+======
+ICANN and IBM's security researchers separately spell out how the growth of the internet of things will increase opportunities for malicious actors to attack the Domain Name System with hyperscale botnets and worm their malware into the cloud.
+The internet of things is shaping up to be a more significant threat to the Domain Name System through larger IoT botnets, unintentional adverse effects of IoT-software updates and the continuing development of bot-herding software.
+
+The Internet Corporation for Assigned Names and Numbers (ICANN) and IBM’s X-Force security researchers have recently issued reports outlining the interplay between DNS and IoT that includes warnings about the pressure IoT botnets will put on the availability of DNS systems.
+
+**More about DNS:**
+
+ * [DNS in the cloud: Why and why not][1]
+ * [DNS over HTTPS seeks to make internet use more private][2]
+ * [How to protect your infrastructure from DNS cache poisoning][3]
+ * [ICANN housecleaning revokes old DNS security key][4]
+
+
+
+ICANN’s Security and Stability Advisory Committee (SSAC) wrote in a [report][5] that “a significant number of IoT devices will likely be IP enabled and will use the DNS to locate the remote services they require to perform their functions. As a result, the DNS will continue to play the same crucial role for the IoT that it has for traditional applications that enable human users to interact with services and content,” ICANN stated. “The role of the DNS might become even more crucial from a security and stability perspective with IoT devices interacting with people’s physical environment.”
+
+IoT represents both an opportunity and a risk to the DNS, ICANN stated. “It is an opportunity because the DNS provides functions and data that can help make the IoT more secure, stable, and transparent, which is critical given the IoT's interaction with the physical world. It is a risk because various measurement studies suggest that IoT devices may stress the DNS, for instance, because of complex DDoS attacks carried out by botnets that grow to hundreds of thousands or in the future millions of infected IoT devices within hours,” ICANN stated.
+
+### Unintentional DDoS attacks
+
+One risk is that the IoT could place new burdens on the DNS. “For example, a software update for a popular IP-enabled IoT device that causes the device to use the DNS more frequently (e.g., regularly lookup random domain names to check for network availability) could stress the DNS in individual networks when millions of devices automatically install the update at the same time,” ICANN stated.
+
+While this is a programming error from the perspective of individual devices, it could result in a significant attack vector from the perspective of DNS infrastructure operators. Incidents like this have already occurred on a small scale, but they may occur more frequently in the future due to the growth of heterogeneous IoT devices from manufacturers that equip their IoT devices with controllers that use the DNS, ICANN stated.
+
+### Massively larger botnets, threat to clouds
+
+The report also suggested that the scale of IoT botnets could grow from hundreds of thousands of devices to millions. The best known IoT botnet is Mirai, responsible for DDoS attacks involving 400,000 to 600,000 devices. The Hajime botnet hovers around 400K infected IoT devices but has not launched any DDoS attacks yet. But as the IoT grows, so will the botnets and as a result larger DDoS attacks.
+
+Cloud-connected IoT devices could endanger cloud resources. “IoT devices connected to cloud architecture could allow Mirai adversaries to gain access to cloud servers. They could infect a server with additional malware dropped by Mirai or expose all IoT devices connected to the server to further compromise,” wrote Charles DeBeck, a senior cyber threat intelligence strategic analyst with [IBM X-Force Incident Response][6] in a recent report.
+
+“As organizations increasingly adopt cloud architecture to scale efficiency and productivity, disruption to a cloud environment could be catastrophic.”
+
+For enterprises that are rapidly adopting both IoT technology and cloud architecture, insufficient security controls could expose the organization to elevated risk, calling for the security committee to conduct an up-to-date risk assessment, DeBeck stated.
+
+### Attackers continue malware development
+
+“Since this activity is highly automated, there remains a strong possibility of large-scale infection of IoT devices in the future,” DeBeck stated. “Additionally, threat actors are continuing to expand their targets to include new types of IoT devices and may start looking at industrial IoT devices or connected wearables to increase their footprint and profits.”
+
+Botnet bad guys are also developing new Mirai variants and IoT botnet malware outside of the Mirai family to target IoT devices, DeBeck stated.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3411437/reports-as-the-iot-grows-so-do-its-threats-to-dns.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
+[2]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
+[3]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
+[4]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
+[5]: https://www.icann.org/en/system/files/files/sac-105-en.pdf
+[6]: https://securityintelligence.com/posts/i-cant-believe-mirais-tracking-the-infamous-iot-malware-2/?cm_mmc=OSocial_Twitter-_-Security_Security+Brand+and+Outcomes-_-WW_WW-_-SI+TW+blog&cm_mmca1=000034XK&cm_mmca2=10009814&linkId=70790642
+[7]: javascript://
+[8]: https://www.networkworld.com/learn-about-insider/
diff --git a/sources/talk/20190724 When it comes to the IoT, Wi-Fi has the best security.md b/sources/talk/20190724 When it comes to the IoT, Wi-Fi has the best security.md
new file mode 100644
index 0000000000..2b5014dfa8
--- /dev/null
+++ b/sources/talk/20190724 When it comes to the IoT, Wi-Fi has the best security.md
@@ -0,0 +1,88 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (When it comes to the IoT, Wi-Fi has the best security)
+[#]: via: (https://www.networkworld.com/article/3410563/when-it-comes-to-the-iot-wi-fi-has-the-best-security.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+When it comes to the IoT, Wi-Fi has the best security
+======
+It’s easy to dismiss good ol’ Wi-Fi’s role in internet of things networking. But Wi-Fi has more security advantages than other IoT networking choices.
+![Ralph Gaithe / Soifer / Getty Images][1]
+
+When it comes to connecting internet of things (IoT) devices, there is a wide variety of networks to choose from, each with its own set of capabilities, advantages and disadvantages, and ideal use cases. Good ol’ Wi-Fi is often seen as a default networking choice, available in many places, but of limited range and not particularly suited for IoT implementations.
+
+According to [Aerohive Networks][2], however, Wi-Fi is “evolving to help IT address security complexities and challenges associated with IoT devices.” Aerohive sells cloud-managed networking solutions and was [acquired recently by software-defined networking company Extreme Networks for some $272 million][3]. And Aerohive's director of product marketing, Mathew Edwards, told me via email that Wi-Fi brings a number of security advantages compared to other IoT networking choices.
+
+It’s not a trivial problem. According to Gartner, in just the last three years, [approximately one in five organizations have been subject to an IoT-based attack][4]. And as more and more IoT devices come online, the attack surface continues to grow quickly.
+
+**[ Also read: [Extreme targets cloud services, SD-WAN, Wi-Fi 6 with $210M Aerohive grab][3] and [Smart cities offer window into the evolution of enterprise IoT technology][5] ]**
+
+### What makes Wi-Fi more secure for IoT?
+
+What exactly are Wi-Fi’s IoT security benefits? Some of it is simply 20 years of technological maturity, Edwards said.
+
+“Extending beyond the physical boundaries of organizations, Wi-Fi has always had to be on the front foot when it comes to securely onboarding and monitoring a range of corporate, guest, and BYOD devices, and is now prepared with the next round of connectivity complexities with IoT,” he said.
+
+Specifically, Edwards said, “Wi-Fi has evolved … to increase the visibility, security, and troubleshooting of edge devices by combining edge security with centralized cloud intelligence.”
+
+Just as important, though, new Wi-Fi capabilities from a variety of vendors are designed to help identify and isolate IoT devices to integrate them into the wider network while limiting the potential risks. The goal is to incorporate IoT device awareness and protection mechanisms to prevent breaches and attacks through vulnerable headless devices. Edwards cited Aerohive’s work to “securely onboard IoT devices with its PPSK (private pre-shared key) technology, an authentication and encryption method providing 802.1X-equivalent role-based access, without the equivalent management complexities.”
+
+**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
+
+### The IoT is already here—and so is Wi-Fi
+
+Unfortunately, enterprise IoT security is not always a carefully planned and monitored operation.
+
+“Much like BYOD,” Edwards said, “many organizations are dealing with IoT without them even knowing it.” On the plus side, even as “IoT devices have infiltrated many networks, ... administrators are already leveraging some of the tools to protect against IoT threats without them even realizing it.”
+
+He noted that customers who have already deployed PPSK to secure guest and BYOD networks can easily extend those capabilities to cover IoT devices such as “smart TVs, projectors, printers, security systems, sensors and more.”
+
+In addition, Edwards said, “vendors have introduced methods to assign performance and security limits through context-based profiling, which is easily extended to IoT devices once the vendor can utilize signatures to identify an IoT device.”
+
+Once an IoT device is identified and tagged, Wi-Fi networks can assign it to a particular VLAN, set minimum and maximum data rates, data limits, application access, firewall rules, and other protections. That way, Edwards said, “if the device is lost, stolen, or launches a DDoS attack, the Wi-Fi network can kick it off, restrict it, or quarantine it.”
+
+### Wi-Fi still isn’t for every IoT deployment
+
+All that hardly turns Wi-Fi into the perfect IoT network. Relatively high costs and limited range mean it won’t find a place in many large-scale IoT implementations. But Edwards says Wi-Fi’s mature identification and control systems can help enterprises incorporate new IoT-based systems and sensors into their networks with more confidence.
+
+**More about 802.11ax (Wi-Fi 6)**
+
+ * [Why 802.11ax is the next big thing in wireless][7]
+ * [FAQ: 802.11ax Wi-Fi][8]
+ * [Wi-Fi 6 (802.11ax) is coming to a router near you][9]
+ * [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][10]
+ * [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][11]
+
+
+
+Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3410563/when-it-comes-to-the-iot-wi-fi-has-the-best-security.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/hack-your-own-wi-fi_neon-wi-fi_keyboard_hacker-100791531-large.jpg
+[2]: http://www.aerohive.com/
+[3]: https://www.networkworld.com/article/3405440/extreme-targets-cloud-services-sd-wan-wifi-6-with-210m-aerohive-grab.html
+[4]: https://www.gartner.com/en/newsroom/press-releases/2018-03-21-gartner-says-worldwide-iot-security-spending-will-reach-1-point-5-billion-in-2018
+[5]: https://www.networkworld.com/article/3409787/smart-cities-offer-window-into-the-evolution-of-enterprise-iot-technology.html
+[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[7]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
+[8]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
+[9]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
+[10]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
+[11]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
+[12]: https://www.facebook.com/NetworkWorld/
+[13]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190725 IoT-s role in expanding drone use.md b/sources/talk/20190725 IoT-s role in expanding drone use.md
new file mode 100644
index 0000000000..a9281dcf40
--- /dev/null
+++ b/sources/talk/20190725 IoT-s role in expanding drone use.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IoT’s role in expanding drone use)
+[#]: via: (https://www.networkworld.com/article/3410564/iots-role-in-expanding-drone-use.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+IoT’s role in expanding drone use
+======
+Collision avoidance technology that uses internet of things (IoT) connectivity, AI, machine learning, and computer vision could be the key to expanding drone applications.
+![Thinkstock][1]
+
+As faithful readers of [TechWatch][2] (love you, Mom) may know, the rollout of many companies’ ambitious drone delivery services has not gone as quickly as promised. Despite recent signs of progress in Australia and the United States—not to mention [clever ideas for burger deliveries to cars stuck in traffic][3]—drone delivery remains a long way from becoming a viable option in the vast majority of use cases. And the problem affects many areas of drone usage, not just the heavily hyped drone delivery applications.
+
+According to [Grace McKenzie][4], director of operations and controller at [Iris Automation][5], one key restriction to economically viable drone deliveries is that the “skies are not safe enough for many drone use cases.”
+
+Speaking at a recent [SF New Tech “Internet of Everything” event in San Francisco][6], McKenzie said fear of collisions with manned aircraft is the big reason why the Federal Aviation Administration (FAA) and international regulators typically prohibit drones from flying beyond the line of sight of the remote pilot. Obviously, she added, that restriction greatly constrains where and how drones can make deliveries and keeps the market from growing beyond test and pilot programs into full-scale commercial adoption.
+
+**[ Read also: [No, drone delivery still isn’t ready for prime time][7] | Get regularly scheduled insights: [Sign up for Network World newsletters][8] ]**
+
+### Detect and avoid technology is critical
+
+Iris Automation, not surprisingly, is in the business of creating workable collision avoidance systems for drones in an attempt to solve this issue. Variously called “detect and avoid” or “sense and avoid” technologies, these automated solutions are required for “beyond visual line of sight” (BVLOS) drone operations. There are multiple issues in play.
+
+As explained on Iris’ website, “Drone pilots are skilled aviators, but even they struggle to see and avoid obstacles and aircraft when operating drones at extended range [and] no pilot on board means low situational awareness. This risk is huge, and the potential conflicts can be extremely dangerous.”
+
+As “a software company with a hardware problem,” McKenzie said, Iris’ systems use artificial intelligence (AI), machine learning, computer vision, and IoT connectivity to identify and focus on the “small group of pixels that could be a risk.” Working together, those technologies are creating an “exponential curve” in detect-and-avoid technology improvements, she added. The result? Drones that “see better than a human pilot,” she claimed.
+
+### Bigger market and new use cases for drones
+
+It’s hardly an academic issue. “Not being able to show adequate mitigation of operational risk means regulators are forced to limit drone uses and applications to closed environments,” the company says.
+
+Solving this problem would open up a wide range of industrial and commercial applications for drones. Far beyond delivering burritos, McKenzie said that with confidence in drone “sense and avoid” capabilities, drones could be used for all kinds of aerial data gathering, from inspecting hydro-electric dams, power lines, and railways to surveying crops to fighting forest fires and conducting search-and-rescue operations.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3410564/iots-role-in-expanding-drone-use.html
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/01/drone_delivery_package_future-100745961-large.jpg
+[2]: https://www.networkworld.com/blog/techwatch/
+[3]: https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html
+[4]: https://www.linkedin.com/in/withgracetoo/
+[5]: https://www.irisonboard.com/
+[6]: https://sfnewtech.com/event/iot/
+[7]: https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html
+[8]: https://www.networkworld.com/newsletters/signup.html
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190725 Report- Smart-city IoT isn-t smart enough yet.md b/sources/talk/20190725 Report- Smart-city IoT isn-t smart enough yet.md
new file mode 100644
index 0000000000..da6d4ee57a
--- /dev/null
+++ b/sources/talk/20190725 Report- Smart-city IoT isn-t smart enough yet.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Report: Smart-city IoT isn’t smart enough yet)
+[#]: via: (https://www.networkworld.com/article/3411561/report-smart-city-iot-isnt-smart-enough-yet.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Report: Smart-city IoT isn’t smart enough yet
+======
+A report from Forrester Research details vulnerabilities affecting smart-city internet of things (IoT) infrastructure and offers some methods of mitigation.
+![Aleksandr Durnov / Getty Images][1]
+
+Security arrangements for smart-city IoT technology around the world are in an alarming state of disrepair, according to a report from Forrester Research that argues serious changes are needed in order to avoid widespread compromises.
+
+Much of what’s wrong has to do with a lack of understanding on the part of the people in charge of those systems and a failure to follow well-known security best practices, such as centralized management, network visibility and limiting attack surfaces.
+
+**More on IoT:**
+
+ * [What is the IoT? How the internet of things works][2]
+ * [What is edge computing and how it’s changing the network][3]
+ * [Most powerful Internet of Things companies][4]
+ * [10 Hot IoT startups to watch][5]
+ * [The 6 ways to make money in IoT][6]
+ * [What is digital twin technology? [and why it matters]][7]
+ * [Blockchain, service-centric networking key to IoT success][8]
+ * [Getting grounded in IoT networking and security][9]
+ * [Building IoT-ready networks must become a priority][10]
+ * [What is the Industrial IoT? [And why the stakes are so high]][11]
+
+
+
+Those all pose stiff challenges, according to “Making Smart Cities Safe And Secure,” the Forrester report by Merritt Maxim and Salvatore Schiano. The attack surface for a smart city is, by default, enormous, given the volume of Internet-connected hardware involved. Some device, somewhere, is likely to be vulnerable, and with the devices geographically spread out it’s difficult to secure all types of access to them.
+
+Worse still, some legacy systems can be downright impossible to manage and update in a safe way. Older technology often contains no provision for live updates, and its vulnerabilities can be severe, according to the report. Physical access to some types of devices also remains a serious challenge. The report gives the example of wastewater treatment plants in remote locations in Australia, which were sabotaged by a contractor who accessed the SCADA systems directly.
+
+In addition to the risk of compromised control systems, the generalized insecurity of smart city IoT makes the vast amounts of data that it generates highly suspect. Improperly configured devices could collect more information than they’re supposed to, including personally identifiable information, which could violate privacy regulations. Also, the data collected is analyzed to glean useful information about such things as parking patterns, water flow and electricity use, and inaccurate or compromised information can badly undercut the value of smart city technology to a given user.
+
+“Security teams are just gaining maturity in the IT environment with the necessity for data inventory, classification, and flow mapping, together with thorough risk and privacy impact assessments, to drive appropriate protection,” the report says. “In OT environments, they’re even further behind.”
+
+Yet, despite the fact that IoT planning and implementation doubled between 2017 and 2018, according to Forrester’s data, comparatively little work has been done on the security front. The report lists 13 cyberattacks on smart-city technology between 2014 and 2019 that had serious consequences, including widespread electricity outages, ransomware infections on hospital computers and emergency-service interruptions.
+
+Still, there are ways forward, according to Forrester. Careful log monitoring can keep administrators abreast of what’s normal and what’s suspicious on their networks. Asset mapping and centralizing control-plane functionality should make it much more difficult for bad actors to insert malicious devices into a smart-city network or take control of less-secure items. And intelligent alerting – the kind that provides contextual information, differentiating between “this system just got rained on and has poor connectivity” and “someone is tampering with this system” – should help cities be more responsive to security threats when they arise.
+
+Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3411561/report-smart-city-iot-isnt-smart-enough-yet.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/smart_city_smart_cities_iot_internet_of_things_by_aleksandr_durnov_gettyimages-971455374_2400x1600-100788363-large.jpg
+[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[12]: https://www.facebook.com/NetworkWorld/
+[13]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190725 Storage management a weak area for most enterprises.md b/sources/talk/20190725 Storage management a weak area for most enterprises.md
new file mode 100644
index 0000000000..859c1caa32
--- /dev/null
+++ b/sources/talk/20190725 Storage management a weak area for most enterprises.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Storage management a weak area for most enterprises)
+[#]: via: (https://www.networkworld.com/article/3411400/storage-management-a-weak-area-for-most-enterprises.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Storage management a weak area for most enterprises
+======
+Survey finds companies are adopting technology for such things as AI, machine learning, edge computing and IoT, but still use legacy storage that can't handle those workloads.
+![Miakievy / Getty Images][1]
+
+Stop me if you’ve heard this before: Companies are racing to a new technological paradigm but are using yesterday’s tech to do it.
+
+I know. Shocking.
+
+A survey of more than 300 storage professionals by storage vendor NGD Systems found only 11% of the companies they talked to would give themselves an “A” grade for their compute and storage capabilities.
+
+Why? The chief reason given is that while enterprises are rapidly deploying technologies for edge networks, real-time analytics, machine learning, and internet of things (IoT) projects, they are still using legacy storage solutions that are not designed for such data-intensive workloads. More than half — 54% — said their processing of edge applications is a bottleneck, and they want faster and more intelligent storage solutions.
+
+**[ Read also: [What is NVMe, and how is it changing enterprise storage][2] ]**
+
+### NVMe SSD use increases, but doesn't solve all needs
+
+It’s not all bad news. The study, entitled ["The State of Storage and Edge Computing"][3] and conducted by Dimensional Research, found 60% of storage professionals are using NVMe SSDs to speed up the processing of large data sets being generated at the edge.
+
+However, this has not solved their needs. As artificial intelligence (AI) and other data-intensive deployments increase, data needs to be moved over increasingly longer distances, which causes network bottlenecks and delays analytic results. And edge computing systems tend to have a smaller footprint than a traditional data center, so they are performance constrained.
+
+The solution is to process the data where it is ingested, in this case at the edge device. Separate the wheat from the chaff and send only the relevant data upstream to a data center for processing. This is called computational storage: processing data where it is stored rather than moving it around.
+
+According to the survey, 89% of respondents said they expect real value from computational storage. Conveniently, NGD is a vendor of computational storage systems. So, yes, this is a self-serving finding. This happens a lot. That doesn’t mean they don’t have a valid point, though. Processing the data where it lies is the point of edge computing.
+
+Among the survey’s findings:
+
+ * 55% use edge computing
+ * 71% use edge computing for real-time analytics
+ * 61% said the cost of traditional storage solutions continues to plague their applications
+ * 57% said faster access to storage would improve their compute abilities
+
+
+
+The study also found that [NVMe][2] is being adopted very quickly but is being hampered by price.
+
+ * 86% expect storage’s future to rely on NVMe SSDs
+ * 60% use NVMe SSDs in their work environments
+ * 63% said NVMe SSDs helped with superior storage speed
+ * 67% reported budget and cost as issues preventing the use of NVMe SSDs
+
+
+
+That last finding is why so many enterprises are hampered in their work. For whatever reason they are using old storage systems rather than new NVMe systems, and it hurts them.
+
+### GPUs won't improve workload performance
+
+One interesting finding: 70% of respondents said they are using GPUs to help improve workload performance, but NGD said those are no good.
+
+“We were not surprised to find that while more than half of respondents are actively using edge computing, more than 70% are using legacy GPUs, which will not reduce the network bandwidth, power and footprint necessary to analyze mass data-sets in real time,” said Nader Salessi, CEO and founder of NGD Systems, in a statement.
+
+That’s because GPUs lend themselves well to repetitive tasks and parallel processing jobs, while computational storage is very much a serial processing job, with the task constantly changing. So while some processing jobs will benefit from a GPU, a good number will not and the GPU is essentially wasted.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3411400/storage-management-a-weak-area-for-most-enterprises.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/edge_computing_by_miakievy_gettyimages-957694592_2400x1600-100788315-large.jpg
+[2]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
+[3]: https://ngd.dnastaging.net/brief/NGD_Systems_Storage_Edge_Computing_Survey_Report
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190726 NVMe over Fabrics enterprise storage spec enters final review process.md b/sources/talk/20190726 NVMe over Fabrics enterprise storage spec enters final review process.md
new file mode 100644
index 0000000000..f14dcd7d67
--- /dev/null
+++ b/sources/talk/20190726 NVMe over Fabrics enterprise storage spec enters final review process.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NVMe over Fabrics enterprise storage spec enters final review process)
+[#]: via: (https://www.networkworld.com/article/3411958/nvme-over-fabrics-enterprise-storage-spec-enters-final-review-process.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+NVMe over Fabrics enterprise storage spec enters final review process
+======
+The NVMe over Fabrics (NVMe-oF) architecture is closer to becoming a formal specification. It's expected to improve storage network fabric communications and network performance.
+![Gremlin / Getty Images][1]
+
+NVM Express Inc., the developer of the [NVMe][2] spec for enterprise SSDs, announced that its NVMe-oF architecture has entered a final 45-day review, an important step toward release of a formal specification for enterprise SSD makers.
+
+NVMe-oF stands for [NVMe over Fabrics][3], a mechanism to transfer data between a host computer and a target SSD or system over a network, such as Ethernet, Fibre Channel (FC), or InfiniBand. NVM Express first released the 1.0 spec of NVMe-oF in 2016, so this is long overdue.
+
+**[ Read also: [NVMe over Fabrics creates data-center storage disruption][3] ]**
+
+NVMe has become an important advance in enterprise storage because it allows for intra-network data sharing. Before, when PCI Express-based SSDs first started being used in servers, they could not easily share data with another physical server. The SSD was basically for the machine it was in, and moving data around was difficult.
+
+With NVMe over Fabrics, it’s possible for one machine to directly reach out to another for data and have it transmitted over a variety of high-speed fabrics rather than just Ethernet.
+
+### How NVMe-oF 1.1 improves storage network fabric communication
+
+The NVMe-oF 1.1 architecture is designed to improve storage network fabric communications in several ways:
+
+ * Adds a TCP transport that supports NVMe-oF on current data center TCP/IP network infrastructure.
+ * Asynchronous discovery events inform hosts of addition or removal of target ports in a fabric-independent manner.
+ * Fabric I/O Queue Disconnect enables finer-grain I/O resource management.
+ * End-to-end (command to response) flow control improves concurrency.
+
+
+
+### New enterprise features for NVMe 1.4
+
+The organization also announced the release of the NVMe 1.4 base specification with new “enterprise features” described as a further maturation of the protocol. The specification provides important benefits, such as improved quality of service (QoS), faster performance, improvements for high-availability deployments, and scalability optimizations for data centers.
+
+Among the new features:
+
+ * Rebuild Assist simplifies data recovery and migration scenarios.
+ * Persistent Event Log enables robust drive history for issue triage and debug at scale.
+ * NVM Sets and IO Determinism allow for better performance, isolation, and QoS.
+ * Multipathing enhancements for Asymmetric Namespace Access (ANA) enable optimal and redundant paths to namespaces for high availability and full multi-controller scalability.
+ * Host Memory Buffer feature reduces latency and SSD design complexity, benefiting client SSDs.
+
+
+
+The upgraded NVMe 1.4 base specification and the pending over-fabric spec will be demonstrated at the Flash Memory Summit August 6-8, 2019 in Santa Clara, California.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3411958/nvme-over-fabrics-enterprise-storage-spec-enters-final-review-process.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/big_data_storage_businessman_walks_through_futuristic_data_center_by_gremlin_gettyimages-1098116540_2400x1600-100788347-large.jpg
+[2]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
+[3]: https://www.networkworld.com/article/3394296/nvme-over-fabrics-creates-data-center-storage-disruption.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190729 Do you prefer a live demo to be perfect or broken.md b/sources/talk/20190729 Do you prefer a live demo to be perfect or broken.md
new file mode 100644
index 0000000000..b4b76aadd6
--- /dev/null
+++ b/sources/talk/20190729 Do you prefer a live demo to be perfect or broken.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Do you prefer a live demo to be perfect or broken?)
+[#]: via: (https://opensource.com/article/19/7/live-demo-perfect-or-broken)
+[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
+
+Do you prefer a live demo to be perfect or broken?
+======
+Do you learn more from flawless demos or ones the presenter de-bugs in
+real-time? Let us know by answering our poll.
+![video editing dashboard][1]
+
+At [DevFest DC][2] in June, [Sara Robinson][3], developer advocate at Google Cloud, gave the most seamless live demo I've ever witnessed.
+
+Sara live-coded a machine model from scratch using TensorFlow and Keras. Then she trained the model live, deployed it to Google's Cloud AI platform, and used the deployed model to make predictions.
+
+With the exception of perhaps one small hiccup, the whole thing went smoothly, and I learned a lot as an audience member.
+
+At that evening's reception, I congratulated Sara on the live demo's success and told her I've never seen a live demo go so well. It turns out that this subject was already on her mind; Sara asked this question on Twitter less than two hours before her live demo:
+
+> Do you prefer watching a live demo where everything works perfectly or one that breaks and the presenter has to de-bug?
+>
+> — Sara Robinson (@SRobTweets) [June 14, 2019][4]
+
+Contrary to my preference for flawless demos, two-thirds of Sara's followers prefer to watch de-bugging. The replies to her poll were equally enlightening:
+
+> I prefer ones that break once or twice, just so you know it's real. "Break" can be something small like a typo or skipping a step.
+>
+> — Seth Vargo (@sethvargo) [June 14, 2019][5]
+
+> Broken demos which are fixed in real time seem to get a better reaction from the audience. This was our experience with the All-Demo Super Session at NEXT SF. Audible gasps followed by applause from the audience when the broken demo was fixed in real-time 🤓
+>
+> — Jamie Kinney (@jamiekinney) [June 14, 2019][6]
+
+This made me reconsider my preference for perfection. When I attend live demos at events, I'm looking for tools that I'm unfamiliar with. I want to learn the basics of those tools, then see real-world applications. I don't expect magic, but I do want to see how the tools intend to work so I can gain and retain some knowledge.
+
+I've gone to several live demos that break. In my experience, this has caught most presenters off-guard; they seemed unfamiliar with the bugs at hand and, in one case, the error derailed the rest of the presentation. In short, it was like this:
+
+> Hmm, at least when the live demo fails you know it's not a video 😁
+> But I don't like when the presenter start to struggle, when everything becomes silent, it becomes so awkward (especially when I'm the one presenting)
+>
+> — Sylvain Nouts Ⓥ (@SylvainNouts) [June 14, 2019][7]
+
+Reading the replies to Sara's thread made me wonder what I'm really after when attending live demos. Is "perfection" what I seek? Or is it presenters who are more skilled at de-bugging in real-time? Upon reflection, I suspect that it's the latter.
+
+After all, "perfect" code is a lofty (if impossible) concept. Mistakes will happen, and I don't expect them not to. But I _do_ expect conference presenters to know their tools well enough that when things go sideways during live demos, they won't get so flustered that they can't keep going.
+
+Overall, this reply to Sara resonates with me the most. I attend live demos as a new coder with the goal to learn, and those that veer too far off-course aren't as effective for me:
+
+> I don’t necessarily prefer a broken demo, but I do think they show a more realistic view.
+> That said, when you are newer to coding if the error takes things too far off the rails it can make it challenging to understand the original concept.
+>
+> — April Bowler (@A_Bowler2) [June 14, 2019][8]
+
+I don't expect everyone to attend live demos with the same goals and perspective as me. That's why we want to learn what the open source community thinks.
+
+_Do you prefer for live demos to be perfect? Or do you gain more from watching presenters de-bug in real-time? Do you attend live demos primarily to learn or for other reasons? Let us know by taking our poll or leaving a comment below._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/7/live-demo-perfect-or-broken
+
+作者:[Lauren Maffeo][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lmaffeo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
+[2]: https://www.devfestdc.org/
+[3]: https://twitter.com/SRobTweets
+[4]: https://twitter.com/SRobTweets/status/1139619990687162368?ref_src=twsrc%5Etfw
+[5]: https://twitter.com/sethvargo/status/1139620990546145281?ref_src=twsrc%5Etfw
+[6]: https://twitter.com/jamiekinney/status/1139636109585989632?ref_src=twsrc%5Etfw
+[7]: https://twitter.com/SylvainNouts/status/1139637154731237376?ref_src=twsrc%5Etfw
+[8]: https://twitter.com/A_Bowler2/status/1139648492953976832?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20190729 I Used The Web For A Day On A 50 MB Budget - Smashing Magazine.md b/sources/talk/20190729 I Used The Web For A Day On A 50 MB Budget - Smashing Magazine.md
new file mode 100644
index 0000000000..b0a7f4a7e0
--- /dev/null
+++ b/sources/talk/20190729 I Used The Web For A Day On A 50 MB Budget - Smashing Magazine.md
@@ -0,0 +1,538 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (I Used The Web For A Day On A 50 MB Budget — Smashing Magazine)
+[#]: via: (https://www.smashingmagazine.com/2019/07/web-on-50mb-budget/)
+[#]: author: (Chris Ashton https://www.smashingmagazine.com/author/chrisbashton)
+
+I Used The Web For A Day On A 50 MB Budget
+======
+
+Data can be prohibitively expensive, especially in developing countries. Chris Ashton puts himself in the shoes of someone on a tight data budget and offers practical tips for reducing our websites’ data footprint.
+
+This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs.
+
+Last time, I [navigated the web for a day using Internet Explorer 8][7]. This time, I browsed the web for a day on a 50 MB budget.
+
+### Why 50 MB?
+
+Many of us are lucky enough to be on mobile plans which allow several gigabytes of data transfer per month. Failing that, we are usually able to connect to home or public WiFi networks that are on fast broadband connections and have effectively unlimited data.
+
+But there are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.
+
+> People often buy data packages of just tens of megabytes at a time, making a gigabyte a relatively large and therefore expensive amount of data to buy.
+> — Dan Howdle, consumer telecoms analyst at Cable.co.uk
+
+Just how expensive are we talking?
+
+#### The Cost Of Mobile Data
+
+A 2018 [study by cable.co.uk][8] found that Zimbabwe was the most expensive country in the world for mobile data, where 1 GB cost an average of $75.20, ranging from $12.50 to $138.46. The enormous range in price is due to smaller amounts of data being very expensive, getting proportionally cheaper the bigger the data plan you commit to. You can read the [study methodology][9] for more information.
+
+Zimbabwe is by no means a one-off. Equatorial Guinea, Saint Helena and the Falkland Islands are next in line, with 1 GB of data costing $65.83, $55.47 and $47.39 respectively. These countries generally have a combination of poor technical infrastructure and low adoption, meaning data is both costly to deliver and doesn’t have the economy of scale to drive costs down.
+
+Data is expensive in parts of Europe too. A gigabyte of data in Greece will set you back $32.71; in Switzerland, $20.22. For comparison, the same amount of data costs $6.66 in the UK, or $12.37 in the USA. On the other end of the scale, India is the cheapest place in the world for data, at an average cost of $0.26. Kyrgyzstan, Kazakhstan and Ukraine follow at $0.27, $0.49 and $0.51 per GB respectively.
+
+The speed of mobile networks, too, varies considerably between countries. Perhaps surprisingly, [users experience faster speeds over a mobile network than WiFi][10] in at least 30 countries worldwide, including Australia and France. South Korea has the [fastest mobile download speed][11], averaging 52.4 Mbps, but Iraq has the slowest, averaging 1.6 Mbps download and 0.7 Mbps upload. The USA ranks 40th in the world for mobile download speeds, at around 34 Mbps, and is [at risk of falling further behind][12] as the world moves towards 5G.
+
+As for mobile network connection type, 84.7% of user connections in the UK are on 4G, compared to 93% in the USA, and 97.5% in South Korea. This compares with less than 50% in Uzbekistan and less than 60% in Algeria, Ecuador, Nepal and Iraq.
+
+#### The Cost Of Broadband Data
+
+Meanwhile, a [study of the cost of broadband in 2018][13] shows that a broadband connection in Niger costs $263 ‘per megabit per month’. This metric is a little difficult to comprehend, so here’s an example: if the average cost of broadband packages in a country is $22, and the average download speed offered by the packages is 10 Mbps, then the cost ‘per megabit per month’ would be $2.20.
+
+It’s an interesting metric, and one that acknowledges that broadband speed is as important a factor as the data cap. A cost of $263 suggests a combination of extremely slow and extremely expensive broadband. For reference, the metric is $1.19 in the UK and $1.26 in the USA.
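To make the metric concrete, here is the same arithmetic as a small Python sketch (the function name is my own; the $22 / 10 Mbps figures are the worked example above):

```python
def cost_per_megabit_per_month(avg_package_cost_usd, avg_speed_mbps):
    """Average monthly package price divided by the average
    download speed (in Mbps) that the price buys."""
    return avg_package_cost_usd / avg_speed_mbps

# The worked example: $22/month packages averaging 10 Mbps.
print(cost_per_megabit_per_month(22, 10))  # → 2.2
```

Note that a slower connection at the same package price drives the metric up, which is exactly what the Niger figure reflects.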
+
+What’s perhaps easier to comprehend is the average cost of a broadband package. Note that this study was looking for the cheapest broadband packages on offer, ignoring whether or not these packages had a data cap, so provides a useful ballpark figure rather than the cost of data per se.
+
+On package cost alone, Mauritania has the most expensive broadband in the world, at an average of $768.16 (a range of $307.26 to $1,368.72). This enormous cost includes building physical lines to the property, since few already exist in Mauritania. At 0.7 Mbps, Mauritania also has one of the slowest broadband networks in the world.
+
+[Taiwan has the fastest broadband in the world][14], at a mean speed of 85 Mbps. Yemen has the slowest, at 0.38 Mbps. But even countries with good established broadband infrastructure have so-called ‘not-spots’. The United Kingdom is ranked 34th out of 207 countries for broadband speed, but in July 2019 there was [still a school in the UK without broadband][15].
+
+The average cost of a broadband package in the UK is $39.58, and in the USA is $67.69. The cheapest average in the world is Ukraine’s, at just $5, although the cheapest broadband deal of them all was found in Kyrgyzstan ($1.27 — against the country average of $108.22).
+
+Zimbabwe was the most costly country for mobile data, and the statistics aren’t much better for its broadband, with an average cost of $128.71 and a ‘per megabit per month’ cost of $6.89.
+
+#### Absolute Cost vs Cost In Real Terms
+
+All of the costs outlined so far are the absolute costs in USD, based on the exchange rates at the time of the study. These costs have [not been adjusted for cost of living][16], meaning that for many countries the cost is actually far higher in real terms.
+
+I’m going to limit my browsing today to 50 MB, which in Zimbabwe would cost around $3.67 on a mobile data tariff. That may not sound like much, but teachers in Zimbabwe were striking this year because their [salaries had fallen to just $2.50 a day][17].
+
+For comparison, $3.67 is around half the [$7.25 minimum wage in the USA][18]. As a Zimbabwean, I’d have to work for around a day and a half to earn the money to buy this 50MB data, compared to just half an hour in the USA. It’s not easy to compare cost of living between countries, but on wages alone the $3.67 cost of 50 MB of data in Zimbabwe would feel like $52 to an American on minimum wage.
+
+### Setting Up The Experiment
+
+I launched Chrome and opened the dev tools, where I throttled the network to a slow 3G connection. I wanted to simulate a slow connection like those experienced by users in Uzbekistan, to see what kind of experience websites would give me. I also throttled my CPU to simulate being on a lower end device.
+
+[![][19]][20]I opted to throttle my network to Slow 3G and my CPU to 6x slowdown. ([Large preview][20])
+
+I installed [ModHeader][21] and set the [‘Save-Data’ header][22] to let websites know I want to minimise my data usage. This is also the header set by Chrome for Android’s ‘Lite mode’, which I’ll cover in more detail later.
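On the server side, a site can branch on this header to send lighter assets. A minimal sketch of the idea (the helper and file names here are hypothetical, purely for illustration):

```python
def choose_image_variant(request_headers):
    """Serve a low-resolution image when the client asks to save data.

    Hypothetical helper: a real application would wire this into its
    framework's request object rather than a plain dict.
    """
    save_data = request_headers.get("Save-Data", "").strip().lower() == "on"
    return "photo-small.webp" if save_data else "photo-full.webp"

print(choose_image_variant({"Save-Data": "on"}))  # → photo-small.webp
print(choose_image_variant({}))                   # → photo-full.webp
```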
+
+I downloaded [TripMode][23], an application for Mac which gives you control over which apps on your Mac can access the internet. Any other application’s internet access is automatically blocked.
+
+You can enable/disable individual apps from connecting to the internet with TripMode. I enabled Chrome. ([Large preview][24])
+
+How far do I predict my 50 MB budget will take me? With the [average weight of a web page being almost 1.7 MB][25], that suggests I’ve got around 29 pages in my budget, although probably a few more than that if I’m able to stay on the same sites and leverage browser caching.
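That estimate is just the budget divided by the average page weight; a quick back-of-the-envelope check:

```python
BUDGET_MB = 50
AVG_PAGE_MB = 1.7  # average web page weight cited above

pages = BUDGET_MB / AVG_PAGE_MB
print(round(pages))  # → 29 first-time page loads, before caching helps
```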
+
+Throughout the experiment I will suggest performance tips to speed up the [first contentful paint][26] and perceived loading time of the page. Some of these tips may not affect the amount of data transferred directly, but do generally involve deferring the download of less important resources, which on slow connections may mean the resources are never downloaded and data is saved.
+
+### The Experiment
+
+Without any further ado, I loaded google.com, using 402 KB of my budget and spending $0.03 (around 1% of my Zimbabwe budget).
+
+[![402 KB transferred, 1.1 MB resources, 24 requests][27]][28]402 KB transferred, 1.1 MB resources, 24 requests. ([Large preview][28])
+
+All in all, not a bad page size, but I wondered where those 24 network requests were coming from and whether or not the page could be made any lighter.
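The "around 1% of my Zimbabwe budget" figure follows directly from the page weight; a sketch of the arithmetic, reusing the $3.67 cost of 50 MB from earlier:

```python
BUDGET_MB = 50
BUDGET_COST_USD = 3.67   # price of 50 MB of mobile data in Zimbabwe, per above
PAGE_KB = 402            # transferred size of the Google homepage

fraction = PAGE_KB / (BUDGET_MB * 1024)
print(f"{fraction:.1%} of budget, costing ${fraction * BUDGET_COST_USD:.2f}")
# → 0.8% of budget, costing $0.03
```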
+
+#### Google Homepage — DOM
+
+[![][29]][30]Chrome devtools screenshot of the DOM, where I’ve expanded one inline `style` tag. ([Large preview][30])
+
+Looking at the page markup, there are no external stylesheets — all of the CSS is inline.
+
+##### Performance Tip #1: Inline Critical CSS
+
+This is good for performance as it saves the browser having to make an additional network request in order to fetch an external stylesheet, so the styles can be parsed and applied immediately for the first contentful paint. There’s a trade-off to be made here, as external stylesheets can be cached but inline ones cannot (unless you [get clever with JavaScript][31]).
+
+The general advice is for your [critical styles][32] (anything [above-the-fold][33]) to be inline, and for the rest of your styling to be external and loaded asynchronously. Asynchronous loading of CSS can be achieved in [one remarkably clever line of HTML][34]:
+
+```
+<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">
+```
+
+The devtools show a prettified version of the DOM. If you want to see what was actually downloaded to the browser, switch to the Sources tab and find the document.
+
+[![A wall of minified code.][35]][36]Switching to Sources and finding the index shows the ‘raw’ HTML that was delivered to the browser. What a mess! ([Large preview][36])
+
+You can see there is a LOT of inline JavaScript here. It’s worth noting that it has been uglified rather than merely minified.
+
+##### Performance Tip #2: Minify And Uglify Your Assets
+
+Minification removes unnecessary spaces and characters, but uglification actually ‘mangles’ the code to be shorter. The tell-tale sign is that the code contains short, machine-generated variable names rather than untouched source code. This is good as it means the script is smaller and quicker to download.
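A toy illustration of the difference (real tools such as Terser mangle names safely via an AST; the naive string replacements below exist only to show the size effect):

```python
import re

source = """
function calculateTotalPrice(itemPrice, quantity) {
    var totalPrice = itemPrice * quantity;
    return totalPrice;
}
"""

# Minification: collapse the unnecessary whitespace.
minified = re.sub(r"\s+", " ", source).strip()

# 'Uglification' (toy version): additionally mangle the long names
# into machine-generated short ones.
uglified = (minified
            .replace("calculateTotalPrice", "c")
            .replace("itemPrice", "a")
            .replace("quantity", "b")
            .replace("totalPrice", "t"))

print(len(source), len(minified), len(uglified))  # each step is smaller
```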
+
+Even so, inline scripts look to be roughly 120 KB of the 210 KB page resource (about half the 60 KB gzipped size). In addition, there are five external JavaScript files amounting to 291 KB of the 402 KB downloaded:
+
+[![Network tab of DevTools showing the external javascript files][37]][38]Five external JavaScript files in the Network tab of the devtools. ([Large preview][38])
+
+This means that JavaScript accounts for about 80 percent of the overall page weight.
+
+This isn’t useless JavaScript; Google has to have some in order to display suggestions as you type. But I suspect a lot of it is tracking code and advertising setup.
+
+For comparison, I disabled JavaScript and reloaded the page:
+
+[![DevTools showing only 5 network requests][39]][40]The disabled JS version of Google search was only 102 KB and had just 5 network requests. ([Large preview][40])
+
+The JS-disabled version of Google search is just 102 KB, as opposed to 402 KB. Although Google can’t provide autosuggestions under these conditions, the site is still functional, and I’ve just cut my data usage down to a quarter of what it was. If I really did have to limit my data usage in the long term, one of the first things I’d do is disable JavaScript. [It’s not as bad as it sounds][41].
+
+##### Performance Tip #3: Less Is More
+
+Inlining, uglifying and minifying assets is all well and good, but the best performance comes from not sending down the assets in the first place.
+
+ * Before adding any new features, do you have a [performance budget][42] in place?
+ * Before adding JavaScript to your site, can your feature be accomplished using plain HTML? (For example, [HTML5 form validation][43]).
+ * Before pulling a large JavaScript or CSS library into your application, use something like [bundlephobia.com][44] to measure how big it is. Is the convenience worth the weight? Can you accomplish the same thing using vanilla code at a much smaller data size?
+
+
+
+#### Analysing The Resource Info
+
+There’s a lot to unpack here, so let’s get cracking. I’ve only got 50 MB to play with, so I’m going to milk every bit of this page load. Settle in for a short Chrome Devtools tutorial.
+
+402 KB transferred, but 1.1 MB of resources: what does that actually mean?
+
+It means 402 KB of content was actually downloaded, but in its compressed form (using a compression algorithm such as [gzip or brotli][45]). The browser then had to do some work to unpack it into something meaningful. The total size of the unpacked data is 1.1 MB.
+
+This unpacking isn’t free — [there are a few milliseconds of overhead in decompressing the resources][46]. But that’s a negligible overhead compared to sending 1.1MB down the wire.
+
+##### Performance Tip #4: Compress Text-based Assets
+
+As a general rule, always compress your assets, using something like gzip. But don’t use compression on your images and other binary files — you should optimize these in advance at source. Compression could actually end up [making them bigger][47].
+
+And, if you can, [avoid compressing files that are 1500 bytes or smaller][47]. The smallest TCP packet size is 1500 bytes, so by compressing to, say, 800 bytes, you save nothing, as it’s still transmitted in the same byte packet. Again, the cost is negligible, but wastes some compression CPU time on the server and decompression CPU time on the client.
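Both effects are easy to demonstrate with Python's built-in `gzip` module:

```python
import gzip

# Large, repetitive text compresses very well...
big = b"<div class='row'><span>cell</span></div>" * 200
print(len(big), len(gzip.compress(big)))

# ...but a tiny payload comes out *larger*, because gzip adds
# its own header and checksum overhead (roughly 20 bytes).
tiny = b"ok"
print(len(tiny), len(gzip.compress(tiny)))
```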
+
+Now back to the Network tab in Chrome: let’s dig into those priorities. Notice that resources have priority “Highest” to “Lowest” — these are the browser’s best guess as to what are the more important resources to download. The higher the priority, the sooner the browser will try to download the asset.
+
+##### Performance Tip #5: Give Resource Hints To The Browser
+
+The browser will guess at what the highest priority assets are, but you can [provide a resource hint][48] using the `<link rel="preload">` tag, instructing the browser to download the asset as soon as possible. It’s a good idea to preload fonts, logos and anything else that appears above the fold.
+
+Let’s talk about caching. I’m going to hold ALT and right-click to change my column headers to unlock some more juicy information. We’re going to check out Cache-Control.
+
+There are lots of interesting fields tucked away behind ALT. ([Large preview][49])
+
+Cache-Control denotes whether or not a resource can be cached, how long it can be cached for, and what rules it should follow around [revalidating][50]. Setting proper cache values is crucial to keeping the data cost of repeat visits down.
+
+##### Performance Tip #6: Set cache-control Headers On All Cacheable Assets
+
+Note that the cache-control value begins with a directive of `public` or `private`, followed by an expiration value (e.g. `max-age=31536000`). What does the directive mean, and why the oddly specific `max-age` value?
+
+[![Screenshot of Google network tab with cache-control column visible][51]][52]A mixture of max-age values and public/private. ([Large preview][52])
+
+The value `31536000` is the number of seconds there are in a year, and is the theoretical maximum value allowed by the cache-control specification. It is common to see this value applied to all static assets and effectively means “this resource isn’t going to change”. In practice, [no browser is going to cache for an entire year][53], but it will cache the asset for as long as makes sense.
+
+To explain the public/private directive, we must explain the two main caches that exist off the server. First, there is the traditional browser cache, where the resource is stored on the user’s machine (the ‘client’). And then there is the CDN cache, which sits between the client and the server; resources are cached at the CDN level to prevent the CDN from requesting the resource from the origin server over and over again.
+
+A `Cache-Control` directive of `public` allows the resource to be cached in both the client and the CDN. A value of `private` means only the client can cache it; the CDN is not supposed to. This latter value is typically used for pages or assets that exist behind authentication, where it is fine to be cached on the client but we wouldn’t want to leak private information by caching it in the CDN and delivering it to other users.
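+
+In response-header terms, the two cases look something like this (the values are illustrative, not taken from any particular site):
+
+```
+# A static asset, safe for both the browser and the CDN to cache:
+Cache-Control: public, max-age=31536000
+
+# A page behind authentication: only the browser may cache it, shared caches must not:
+Cache-Control: private, max-age=0
+```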
+
+[![Screenshot of Google logo cache-control setting: private, max-age=31536000][54]][55]The Google logo is served with `private, max-age=31536000`. ([Large preview][55])
+
+One thing that got my attention was that the Google logo has a cache control of “private”. Other images on the page do have a public cache, and I don’t know why the logo would be treated any differently. If you have any ideas, let me know in the comments!
+
+I refreshed the page and most of the resources were served from cache, apart from the page itself, which as you’ve seen already is `private, max-age=0`, meaning it cannot be cached. This is normal for dynamic web pages where it is important that the user always gets the very latest page when they refresh.
+
+It was at this point I accidentally clicked on an ‘Explanation’ URL in the devtools, which took me to the [network analysis reference][56], costing me about 5 MB of my budget. Oops.
+
+### Google Dev Docs
+
+4.2 MB of this new 5 MB page was down to images; specifically SVGs. The weightiest of these was 186 KB, which isn’t particularly big — there were just so many of them, and they all downloaded at once.
+
+This is a loooong page. All the images downloaded on page load. ([Large preview][57])
+
+That 5 MB page load was 10% of my budget for today. So far I’ve used 5.5 MB, including the no-JavaScript reload of the Google homepage, and spent $0.40. I didn’t even mean to open this page.
+
+What would have been a better user experience here?
+
+##### Performance Tip #7: Lazy-load Your Images
+
+Ordinarily, if I accidentally clicked on a link, I would hit the back button in my browser. I’d have received no benefit whatsoever from downloading those images — what a waste of 4.2 MB!
+
+Apart from video, where you generally know what you’re getting yourself into, images are by far the biggest contributor to data usage on the web. A [study of the world’s top 500 websites][58] found that images account for up to 53% of the average page weight. “This means they have a big impact on page-loading times and subsequently overall performance”.
+
+Instead of downloading all of the images on page load, it is good practice to lazy-load the images so that only users who are engaged with the page pay the cost of downloading them. Users who choose not to scroll below the fold therefore don’t waste any unnecessary bandwidth downloading images they’ll never see.
+
+There’s a great [css-tricks.com guide to rolling out lazy-loading for images][59] which offers a good balance between those on good connections, those on poor connections, and those with JavaScript disabled.
+
+If this page had implemented lazy loading as per the guide above, each of the 38 SVGs would have been represented by a 1 KB placeholder image by default, and only loaded into view on scroll.
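+
+Beyond the JavaScript approach in that guide, modern browsers also support a native `loading` attribute that defers an image’s download until it nears the viewport; a minimal sketch (the path, alt text and dimensions are placeholders):
+
+```
+<!-- The browser only fetches this image when it is about to scroll into view -->
+<img src="/img/diagram-01.svg" alt="Architecture diagram" loading="lazy" width="600" height="400">
+```
+
+Specifying `width` and `height` matters here: it lets the browser reserve the space before the image arrives, avoiding layout shifts as lazy images load in.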
+
+##### Performance Tip #8: Use The Right Format For Your Images
+
+I thought that Google had missed a trick by not using [WebP][60], which is an image format that is 26% smaller in size compared to PNGs (with no loss in quality) and 25-34% smaller in size compared to JPEGs (and of a comparable quality). I thought I’d have a go at converting SVG to WebP.
+
+Converting to WebP did bring one of the SVGs down from 186 KB to just 65 KB, but actually, looking at the images side by side, the WebP came out grainy:
+
+[![Comparison of the two images][61]][62]The SVG (left) is noticeably crisper than the WebP (right). ([Large preview][62])
+
+I then tried converting one of the PNGs to WebP, which is supposed to be lossless and should come out smaller. However, the WebP output was *heavier* (127 KB, from 109 KB)!
+
+[![Comparison of the two images][63]][64]The PNG (left) is a similar quality to the WebP (right) but is smaller at 109 KB compared to 127 KB. ([Large preview][64])
+
+This surprised me. WebP isn’t necessarily the silver bullet we think it is, and even Google have neglected to use it on this page.
+
+So my advice would be: where possible, experiment with different image formats on a per-image basis. The format that keeps the best quality for the smallest size may not be the one you expect.
+
+Now back to the DOM. I came across this:
+
+Notice the `async` keyword on the Google analytics script?
+
+[![Screenshot of performance analysis output of devtools][65]][66]Google analytics has ‘low’ priority. ([Large preview][66])
+
+Despite being one of the first things in the head of the document, this was given a low priority, as we’ve explicitly opted out of being a blocking request by using the `async` keyword.
+
+A blocking request is one that stops the rendering of the page. A `<script>` tag without `async` or `defer` is blocking by default: the browser must download, parse and execute it before it can carry on rendering the rest of the document.
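+
+To make the distinction concrete, the three script-loading behaviours compare like this (the local script paths are hypothetical):
+
+```
+<!-- Blocking: HTML parsing pauses until this script downloads and executes -->
+<script src="/js/critical.js"></script>
+
+<!-- async: downloads in parallel, executes as soon as it arrives (order not guaranteed) -->
+<script async src="https://www.google-analytics.com/analytics.js"></script>
+
+<!-- defer: downloads in parallel, executes in document order after parsing finishes -->
+<script defer src="/js/enhancements.js"></script>
+```
+
+`async` suits independent scripts like analytics, where execution order doesn’t matter; `defer` suits scripts that depend on the DOM or on each other.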
+