diff --git a/published/20180503 How the four components of a distributed tracing system work together.md b/published/20180503 How the four components of a distributed tracing system work together.md
new file mode 100644
index 0000000000..1f073d897e
--- /dev/null
+++ b/published/20180503 How the four components of a distributed tracing system work together.md
@@ -0,0 +1,149 @@
+分布式跟踪系统的四大功能模块如何协同工作
+======
+
+> 了解分布式跟踪中的主要体系结构决策,以及各部分如何组合在一起。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
+
+早在十年前,认真研究过分布式跟踪的基本上只有学者和一小部分大型互联网公司中的人。而对于如今任何采用微服务的组织来说,它已经是必备手段。理由很充分:微服务通常会以让人意想不到的方式出错,而分布式跟踪则是描述和诊断这些错误的最好方法。
+
+也就是说,一旦你准备将分布式跟踪集成到你自己的应用程序中,你将很快意识到对于不同的人来说“分布式跟踪”一词意味着不同的事物。此外,跟踪生态系统里挤满了具有相似内容的重叠项目。本文介绍了分布式跟踪系统中四个(可能)独立的功能模块,并描述了它们间将如何协同工作。
+
+### 分布式跟踪:一种思维模型
+
+大多数用于跟踪的思维模型来源于 [Google 的 Dapper 论文][1]。[OpenTracing][2] 使用相似的术语,因此,我们从该项目借用了以下术语:
+
+![Tracing][3]
+
+* 跟踪:对一个事务在分布式系统中流转全过程的描述。
+* 跨度:一种命名的、计时的操作,表示工作流中的一个片段。跨度可以接受键值对形式的标签,以及附加到特定跨度实例上的细粒度的、带时间戳的结构化日志。
+* 跨度上下文:随分布式事务一起携带的跟踪信息,包括它通过网络或消息总线从一个服务传递到另一个服务时所传递的内容。跨度上下文包含跟踪标识符、跨度标识符,以及跟踪系统需要传播到下游服务的任何其他数据。
+
+如果你想深入研究这种思维模型的细节,请仔细阅读 [OpenTracing 技术规范][4]。
+
+### 四大功能模块
+
+从应用层分布式跟踪系统的观点来看,现代软件系统架构如下图所示:
+
+![Tracing][5]
+
+现代软件系统的组件可分为三类:
+
+* **应用程序和业务逻辑**:你的代码。
+* **广泛共享库**:他人的代码。
+* **广泛共享服务**:他人的基础架构。
+
+这三类组件有着不同的需求,驱动着监控应用程序的分布式跟踪系统的设计。最终的设计得到了四个重要的部分:
+
+* 跟踪检测 API:修饰应用程序代码
+* 线路协议:在 RPC 请求中与应用程序数据一同发送的规定
+* 数据协议:将异步信息(带外)发送到你的分析系统的规定
+* 分析系统:用于处理跟踪数据的数据库和交互式用户界面
+
+为了更深入地解释这个概念,我们将深入研究驱动该设计的细节。如果你只想听我的建议,请直接跳到下面的“四个方面的解决方案”一节。
+
+### 需求、细节和解释
+
+应用程序代码、共享库以及共享式服务在操作上有显著的差别,这种差别严重影响了对它们进行检测的需求。
+
+#### 检测应用程序代码和业务逻辑
+
+在任何特定的微服务中,由微服务开发者编写的大部分代码是应用程序或者商业逻辑。这部分代码规定了特定区域的操作。通常,它包含任何特殊、独一无二的逻辑判断,这些逻辑判断首先证明了创建新型微服务的合理性。基本上按照定义,**该代码通常不会在多个服务中共享或者以其他方式出现。**
+
+也就是说,你仍然需要了解它,这也意味着需要以某种方式对它进行检测。一些监控和跟踪分析系统使用黑盒代理自动检测代码,另一些系统则倾向于显式的白盒检测工具。对于后者,抽象的跟踪 API 能为微服务的应用程序代码带来许多实用的优势:
+
+* 抽象 API 允许你在不重写检测代码的前提下换用新的监控工具。你可能想要更换云服务提供商、供应商或监控技术,而一大堆不可移植的检测代码会给这一过程带来相当大的开销和麻烦。
+* 事实证明,除了生产监控之外,这种检测还有其他有趣的用途。现有的项目使用相同的跟踪检测来驱动测试工具、分布式调试器、“混沌工程”故障注入器和其他元应用程序。
+* 但更重要的是:如果将应用程序组件提取到共享库中,又该怎么办呢?这就引出了下面的内容:
+
+#### 检测共享库
+
+在大多数应用程序中出现的实用程序代码(处理网络请求、数据库调用、磁盘写操作、线程、并发管理等)通常情况下是通用的,而非特别应用于某个特定应用程序。这些代码会被打包成库和框架,而后就可以被装载到许多的微服务上并且被部署到多种不同的环境中。
+
+其真正的不同之处在于:对于共享代码,其他人成为了它的使用者。大多数用户有着不同的依赖关系和操作风格。如果尝试去检测这些共享代码,你会注意到几个常见的问题:
+
+* 你需要一个 API 来编写检测代码。然而,你的库并不知道使用者采用的是哪个分析系统。选择可能有很多种,而运行在同一个应用中的所有库不能做出互不兼容的选择。
+* 由于这些包封装了所有网络处理代码,因此从请求报头注入和提取跨度上下文的任务往往落在 RPC 库身上。然而,共享库必须有办法知道每个应用程序正在使用哪种跟踪协议。
+* 最后,你不想强迫用户引入相互冲突的依赖项。大多数用户有着不同的依赖关系和操作风格。即使他们都使用 gRPC,绑定的 gRPC 版本是否相同?因此,你的库为跟踪而附带的任何检测 API 都必须是零依赖的。
+
+**因此,一个(a)没有依赖关系、(b)与线路协议无关、(c)使用流行的供应商和分析系统的抽象 API 应该是对检测共享库代码的要求。**
+
+#### 检测共享式服务
+
+最后,有时整个服务(或微服务集合体)的通用性足以使许多独立的应用程序使用它们。这种共享式服务通常由第三方托管和管理,例如缓存服务器、消息队列以及数据库。
+
+从应用程序开发者的角度来看,重要的是把共享式服务理解为本质上的黑盒。你无法把自己的检测注入到共享式服务中;恰恰相反,托管服务通常会运行它自己的监控方案。
+
+### 四个方面的解决方案
+
+因此,抽象的跟踪应用程序接口将会帮助库发出数据并且注入/抽取跨度上下文。标准的线路协议将会帮助黑盒服务相互连接,而标准的数据格式将会帮助分离的分析系统合并其中的数据。让我们来看一下部分有希望解决这些问题的方案。
+
+#### 跟踪 API:OpenTracing 项目
+
+如你所见,我们需要一个跟踪 API 来检测应用程序代码。为了将这种工具扩展到大多数进行跨度上下文注入和提取的共享库中,则必须以某种关键方式对 API 进行抽象。
+
+[OpenTracing][2] 项目正是为了解决库开发者的这个问题。OpenTracing 是一个与供应商无关的跟踪 API,它没有任何依赖,并且迅速获得了许多监控系统的支持。这意味着,如果库内置了原生的 OpenTracing 检测,那么当监控系统在应用程序启动时接入,跟踪就会自动开始。
+
+就个人而言,作为一个已经编写、发布和运维开源软件十多年的人,能参与 OpenTracing 项目并最终解决这个可观测性方面的难题,让我感到十分满足。
+
+除了 API 之外,OpenTracing 项目还维护着一个不断增长的检测插件列表,其中一部分可以在[这里][6]找到。如果你想参与进来,无论是贡献一个检测插件、为你自己的开源库添加原生检测,还是仅仅想问个问题,都欢迎通过 [Gitter][7] 和我们打个招呼。
+
+#### 线路协议:HTTP 报头 trace-context
+
+为了监控系统能进行互操作,以及减轻从一个监控系统切换为另外一个时带来的迁移问题,需要标准的线路协议来传播跨度上下文。
+
+[w3c 分布式跟踪上下文社区小组][8]在努力制定此标准。目前的重点是制定一系列标准的 HTTP 报头。该规范的最新草案可以在[此处][9]找到。如果你对此小组有任何的疑问,[邮件列表][10]和[Gitter 聊天室][11]是很好的解惑地点。
+
+(LCTT 译注:本文原文发表于 2018 年 5 月,可能现在社区已有不同进展)
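+
+作为示意,下面用一个假想的 `curl` 请求演示这类报头在服务间传播时大概的样子。报头格式以后来正式发布的 W3C 规范为参考(使用 `traceparent` 报头,与本文成文时讨论的草案可能有所不同),其中的标识符和域名均为虚构:
+
+```
+# traceparent 报头的格式为:<版本>-<跟踪标识符>-<父跨度标识符>-<标志位>
+# 下游服务收到请求后沿用跟踪标识符,再以自己新生成的跨度标识符继续向下传播
+curl -H 'traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01' \
+     https://downstream.example.com/api
+```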
+
+#### 数据协议 (还未出现!!)
+
+对于黑盒服务,在无法安装跟踪程序或无法与程序进行交互的情况下,需要使用数据协议从系统中导出数据。
+
+目前,这种数据格式和协议的开发工作尚处于初级阶段,并且大多在 W3C 分布式跟踪上下文社区小组的范围内进行。需要特别关注的是,在标准数据模式中定义更高级别的概念,例如 RPC 调用、数据库语句等,这将允许跟踪系统对可用的数据类型做出假设。OpenTracing 项目也通过定义一套[标准标签集][12]来解决这一问题,计划是让这两项工作的成果相互配合。
+
+注意,当前还有一个中间地带。对于那些由应用程序开发者运维、但他们又不想为之编译或修改代码的“网络设备”,动态链接可以派上用场。主要的例子就是服务网格和代理,例如 Envoy 或者 NGINX。针对这种情况,可以将兼容 OpenTracing 的跟踪器编译为共享对象,然后在运行时动态链接到可执行文件中。目前 [C++ OpenTracing API][13] 提供了该选项,而 Java 的 OpenTracing [跟踪器解析][14]也在开发中。
+
+这些解决方案适用于支持动态链接、并由应用程序开发者部署的服务。但从长远来看,标准的数据协议可以更广泛地解决该问题。
+
+#### 分析系统:从跟踪数据中提取有见解的服务
+
+最后值得一提的是,如今已经有相当多的跟踪监控解决方案。可以在[此处][15]找到已知与 OpenTracing 兼容的监控系统列表,但除此之外还有更多选择。我鼓励你自己研究各种方案,并希望你在比较它们时,发现本文提供的框架能派上用场。除了根据监控系统的操作特性对其进行评估(更不用说你是否喜欢它的 UI 和功能)之外,请确保你考虑了上述三个重要方面、它们对你的相对重要性,以及你感兴趣的跟踪系统是如何解决它们的。
+
+### 结论
+
+最后,每个部分的重要性在很大程度上取决于你是谁以及正在建立什么样的系统。举个例子,开源库的作者对 OpenTracing API 非常感兴趣,而服务开发者对 trace-context 规范更感兴趣。当有人说一部分比另一部分重要时,他们的意思通常是“一部分对我来说比另一部分重要”。
+
+然而,事实是:分布式跟踪已经成为监控现代系统必不可少的手段。在为这些系统设计构建模块时,“尽可能解耦”的老方法仍然适用。在构建像分布式监控系统这样跨越多个系统的系统时,干净地解耦各个组件是保持灵活性和前向兼容性的最佳方式。
+
+感谢你的阅读!现在,当你准备好在自己的应用程序中实现跟踪时,你已经有了一份指南,可以弄清楚人们谈论的是哪个部分,以及它们之间如何相互协作。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/5/distributed-tracing
+
+作者:[Ted Young][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[chenmu-kk](https://github.com/chenmu-kk)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/tedsuo
+[1]:https://research.google.com/pubs/pub36356.html
+[2]:http://opentracing.io/
+[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
+[4]:https://github.com/opentracing/specification/blob/master/specification.md
+[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
+[6]:https://github.com/opentracing-contrib/
+[7]:https://gitter.im/opentracing/public
+[8]:https://www.w3.org/community/trace-context/
+[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
+[10]:http://lists.w3.org/Archives/Public/public-trace-context/
+[11]:https://gitter.im/TraceContext/Lobby
+[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
+[13]:https://github.com/opentracing/opentracing-cpp
+[14]:https://github.com/opentracing-contrib/java-tracerresolver
+[15]:http://opentracing.io/documentation/pages/supported-tracers
+[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
+[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/
diff --git a/published/20180511 MidnightBSD Could Be Your Gateway to FreeBSD.md b/published/20180511 MidnightBSD Could Be Your Gateway to FreeBSD.md
new file mode 100644
index 0000000000..48ebc210e1
--- /dev/null
+++ b/published/20180511 MidnightBSD Could Be Your Gateway to FreeBSD.md
@@ -0,0 +1,159 @@
+MidnightBSD:或许是你通往 FreeBSD 的大门
+======
+
+![](https://www.linux.com/wp-content/uploads/2019/08/midnight_4_0.jpg)
+
+[FreeBSD][1] 是一个开源操作系统,衍生自著名的 [伯克利软件套件][2](BSD)。FreeBSD 的第一个版本发布于 1993 年,并且至今仍在继续发展。2007 年左右,Lucas Holt 想要利用 [GNUstep][3] 来创建一个 FreeBSD 的分支,GNUstep 是 OpenStep(现在的 Cocoa)的 Objective-C 框架、widget 工具包和应用程序开发工具的实现。为此,他开始开发 MidnightBSD 桌面发行版。
+
+MidnightBSD(以 Lucas 的猫 Midnight 命名)仍然在积极地(尽管缓慢)开发。从 2017 年 8 月开始,可以获得最新的稳定发布版本(0.8.6)(LCTT 译注:截止至本译文发布时,当前是 2019/10/31 发布的 1.2 版)。尽管 BSD 发行版算不上用户友好的发行版,但动手安装它是熟悉文本(ncurses)安装过程、并学会从命令行完成安装的好方法。
+
+这样,你最终会得到一个非常可靠的 FreeBSD 分支的桌面发行版。这需要花费一点精力,但是如果你是一名正在寻找扩展你的技能的 Linux 用户……这是一个很好的起点。
+
+我将带你走过安装 MidnightBSD 的流程,如何添加一个图形桌面环境,然后如何安装应用程序。
+
+### 安装
+
+正如我所提到的,这是一个文本(ncurses)安装过程,因此在这里找不到可以用鼠标点击的地方。相反,你将使用你键盘的 `Tab` 键和箭头键。在你下载[最新的发布版本][4]后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装程序将打开并给你三个选项(图 1)。使用你的键盘的箭头键选择 “Install”,并敲击回车键。
+
+![MidnightBSD installer][6]
+
+*图 1: 启动 MidnightBSD 安装程序。*
+
+在这里要经历相当多的屏幕。其中很多屏幕是一目了然的:
+
+1. 设置非默认键盘映射(是/否)
+2. 设置主机名称
+3. 添加可选系统组件(文档、游戏、32 位兼容性、系统源代码)
+4. 对硬盘分区
+5. 管理员密码
+6. 配置网络接口
+7. 选择地区(时区)
+8. 启用服务(例如 ssh)
+9. 添加用户(图 2)
+
+![Adding a user][7]
+
+*图 2: 向系统添加一个用户。*
+
+在你向系统添加用户后,你将进入一个窗口(图 3),在这里你可以处理任何可能忘记配置或者想重新配置的东西。如果不需要作出任何更改,选择 “Exit”,你的配置就会被应用。
+
+![Applying your configurations][8]
+
+*图 3: 应用你的配置。*
+
+在接下来的窗口中,当出现提示时,选择 “No”,接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
+
+### 后安装阶段
+
+当你最新安装的 MidnightBSD 启动时,你将发现你自己处于命令提示符当中。此刻,还没有图形界面。要安装应用程序,MidnightBSD 依赖于 `mport` 工具。比如说你想安装 Xfce 桌面环境。为此,登录到 MidnightBSD 中,并发出下面的命令:
+
+```
+sudo mport index
+sudo mport install xorg
+```
+
+你现在已经安装好 Xorg 窗口服务器了,它允许你安装桌面环境。使用命令来安装 Xfce :
+
+```
+sudo mport install xfce
+```
+
+现在 Xfce 已经安装好。不过,我们必须让它同命令 `startx` 一起启用。为此,让我们先安装 nano 编辑器。发出命令:
+
+```
+sudo mport install nano
+```
+
+随着 nano 安装好,发出命令:
+
+```
+nano ~/.xinitrc
+```
+
+这个文件仅包含一行内容:
+
+```
+exec startxfce4
+```
+
+保存并关闭这个文件。如果你现在发出命令 `startx`, Xfce 桌面环境将会启动。你应该会感到有点熟悉了吧(图 4)。
+
+![ Xfce][9]
+
+*图 4: Xfce 桌面界面已准备好服务。*
+
+由于你不会想每次都必须发出 `startx` 命令,因此你会希望启用登录守护进程。然而它并没有被安装,要安装这个子系统,发出命令:
+
+```
+sudo mport install mlogind
+```
+
+当完成安装后,通过在 `/etc/rc.conf` 文件中添加一个项目来在启动时启用 mlogind。在 `rc.conf` 文件的底部,添加以下内容:
+
+```
+mlogind_enable="YES"
+```
+
+保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。在写这篇文章的时候,在登录后我最后得到一个空白屏幕和讨厌的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必须使用 `startx` 命令。
+
+### 安装应用
+
+默认情况下,你找不到多少可用的应用程序。如果你尝试使用 `mport` 安装应用程序,你很快就会感到沮丧,因为能找到的应用程序寥寥无几。为解决这个问题,我们需要使用 `svnlite` 命令检出可用的 mport 软件列表。回到终端窗口,并发出命令:
+
+```
+svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
+```
+
+在你完成这些后,你应该会看到一个名为 `~/mports` 的新目录。使用命令 `cd ~/mports` 进入这个目录。发出 `ls` 命令,你应该会看到许多分类目录(图 5)。
+
+![applications][10]
+
+*图 5: mport 现在可用的应用程序类别。*
+
+你想安装 Firefox 吗?如果你查看 `www` 目录,你将看到一个 `linux-firefox` 列表。发出命令:
+
+```
+sudo mport install linux-firefox
+```
+
+现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项。翻找所有的类别,并使用 `mport` 命令来安装你需要的所有软件。
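+
+如果想按名称查找某个软件,一个简单的办法是直接在检出的 mports 树中搜索目录名(下面只是通用的 shell 用法,具体目录结构以你检出的内容为准):
+
+```
+# 在 mports 树中查找名字里带有 “office” 的软件目录
+find ~/mports -maxdepth 2 -type d -iname '*office*'
+```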
+
+### 一个悲哀的警告
+
+一个悲哀的小警告是:`mport`(通过 `svnlite`)能找到的唯一一个办公套件是 OpenOffice 3,而它已经非常过时了。尽管在 `~/mports/editors` 目录中能找到 Abiword,但它看起来无法安装。甚至在安装好 OpenOffice 3 之后,它也会报出一个可执行文件格式错误。换句话说,你没法用 MidnightBSD 在办公生产力方面做太多事情。但是,嘿,如果你手头正好有一台旧的 Palm Pilot,你可以安装 pilot-link。也就是说,可用的软件还不足以构成一个对普通用户特别有用的桌面发行版。但是,如果你想在 MidnightBSD 上做开发,你会发现有很多可用的工具可以安装(查看 `~/mports/devel` 目录)。你甚至可以使用下面的命令安装 Drupal:
+
+```
+sudo mport install drupal7
+```
+
+当然,在此之后,你将需要创建一个数据库(MySQL 已经安装)、安装 Apache(`sudo mport install apache24`),并进行必要的 Apache 配置(见下面的示意)。
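+
+作为参考,下面是一个借鉴 FreeBSD 习惯的粗略示意(MidnightBSD 派生自 FreeBSD;具体的服务名和配置路径请以你实际安装的版本为准):
+
+```
+# 让 Apache 随系统启动(示意,服务名以实际安装为准)
+echo 'apache24_enable="YES"' | sudo tee -a /etc/rc.conf
+# 立即启动服务
+sudo service apache24 start
+```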
+
+显然地,已安装的和可以安装的是一个应用程序、系统和服务的大杂烩。但是随着足够多的工作,你最终可以得到一个能够服务于特殊目的的发行版。
+
+### 享受 \*BSD 的优良之处
+
+以上就是安装 MidnightBSD 并把它变成某种有用的桌面发行版的方法。它不像很多其它 Linux 发行版那样快速简便,但是如果你想要一个促使你思考的发行版,这可能正是你要找的。尽管大多数竞争对手都预备了大量可安装的应用软件,但 MidnightBSD 无疑是 Linux 爱好者或系统管理员值得一试的有趣挑战。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
+
+作者:[Jack Wallen][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.freebsd.org/
+[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
+[3]:https://en.wikipedia.org/wiki/GNUstep
+[4]:http://www.midnightbsd.org/download/
+[5]:https://www.virtualbox.org/
+[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
+[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
+[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
+[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
+[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
+[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/published/20190407 Manage multimedia files with Git.md b/published/20190407 Manage multimedia files with Git.md
new file mode 100644
index 0000000000..befd819d4f
--- /dev/null
+++ b/published/20190407 Manage multimedia files with Git.md
@@ -0,0 +1,236 @@
+[#]: collector: (lujun9972)
+[#]: translator: (svtter)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11889-1.html)
+[#]: subject: (Manage multimedia files with Git)
+[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+
+通过 Git 来管理多媒体文件
+======
+
+> 在我们有关 Git 鲜为人知的用法系列的最后一篇文章中,了解如何使用 Git 跟踪项目中的大型多媒体文件。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/13/235436mhub12qhxzmbw11p.png)
+
+Git 是专用于源代码版本控制的工具。因此,Git 很少被用于非纯文本的项目以及行业。然而,异步工作流的优点是十分诱人的,尤其是在一些日益增长的行业中,这种类型的行业把重要的计算和重要的艺术创作结合起来,这包括网页设计、视觉效果、视频游戏、出版、货币设计(是的,这是一个真实的行业)、教育……等等。还有许多行业属于这个类型。
+
+在这个 Git 系列文章中,我们分享了六种鲜为人知的 Git 使用方法。在最后一篇文章中,我们将介绍将 Git 的优点带到管理多媒体文件的软件。
+
+### Git 管理多媒体文件的问题
+
+众所周知,Git 用于处理非文本文件不是很好,但是这并不妨碍我们进行尝试。下面是一个使用 Git 来复制照片文件的例子:
+
+```
+$ du -hs
+108K .
+$ cp ~/photos/dandelion.tif .
+$ git add dandelion.tif
+$ git commit -m 'added a photo'
+[master (root-commit) fa6caa7] two photos
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 dandelion.tif
+$ du -hs
+1.8M .
+```
+
+目前为止没有什么异常。增加一个 1.8MB 的照片到一个目录下,使得目录变成了 1.8 MB 的大小。所以下一步,我们尝试删除文件。
+
+```
+$ git rm dandelion.tif
+$ git commit -m 'deleted a photo'
+$ du -hs
+828K .
+```
+
+在这里我们可以看到一些问题:删除一个已经提交过的文件,仍然会使存储库的大小扩大到原来的约 8 倍(从 108K 到 828K)。我们可以多测试几次来得到一个更好的平均值,但这个简单的演示与我的经验一致。提交非文本文件在一开始花费的空间比较少,但是一个工程活跃的时间越长,人们对静态内容的修改可能就越多,更多的碎片就会累积起来。当一个 Git 存储库变得越来越大,主要的代价往往体现在速度上:拉取和推送所需的时间,从原本只够抿一口咖啡,变成长到让你怀疑自己是不是断网了。
+
+静态内容导致 Git 存储库体积不断膨胀的原因是什么呢?对于那些由文本构成的文件,Git 可以只拉取发生修改的部分;而光栅图像和音乐文件对 Git 来说与文本不同,你可以看一下 .png 和 .wav 文件里的二进制数据。所以,Git 只能获取全部数据并创建一个新的副本,哪怕一张图仅仅修改了一个像素。
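+
+如果想亲自验证这一点,可以做一个小实验:反复对一个二进制文件做微小改动并提交,然后观察 `.git` 目录的增长(以下只是示意,`photo.bin` 是假设已存在的示例文件,具体数字因文件而异):
+
+```
+# 在一个已初始化的 Git 仓库中,模拟“每次只改一个像素”的提交
+for i in 1 2 3 4 5; do
+    printf '\0' >> photo.bin          # 往二进制文件里追加一个字节
+    git add photo.bin
+    git commit -q -m "tweak $i"
+done
+du -hs .git                           # 观察 .git 体积随提交增长
+```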
+
+### Git-portal
+
+在实践中,许多多媒体项目不需要或者不想追踪媒体的历史记录。相对于文本或者代码部分,项目的媒体部分一般有着不同的生命周期。媒体资源一般朝一个方向演进:一张图片从铅笔草稿开始,以数字绘画的形式抵达它的目的地。而且,尽管文本能够回滚到早期的版本,但艺术制品只会一直向前发展。项目中的媒体很少被绑定到某个特定的版本。例外情况通常是反映数据集的图形,例如往往可以用基于文本的格式(如 SVG)完成的表格、图形或图表。
+
+所以,在许多同时包含文本(无论是叙事散文还是代码)和媒体的工程中,Git 是一个用于文件管理的,可接受的解决方案,只要有一个在版本控制循环之外的游乐场来给艺术家游玩就行。
+
+![Graphic showing relationship between art assets and Git][2]
+
+一个实现这种工作方式的简单办法是 [Git-portal][3],它是一个附带 Git 钩子的 Bash 脚本,可以把静态文件移出 Git 的管辖范围,并用符号链接取而代之。Git 提交的是这些符号链接文件(有时候称作别名或快捷方式),它们很小,所以提交的都是文本文件和那些指向媒体文件的链接。由于替身文件是符号链接,项目仍会像预期那样工作,因为本地机器会顺着链接找到“真实的”副本。Git-portal 在用符号链接替换文件时保持了项目的目录结构,因此,如果你觉得 Git-portal 不适合你的项目,或者需要构建一个不含符号链接的项目版本(比如用于分发),都可以轻松地逆转该过程。
+
+Git-portal 还允许通过 `rsync` 远程同步静态资源,因此用户可以设置一个远程存储位置,作为集中的权威来源。
+
+Git-portal 对于多媒体的工程是一个理想的解决方案。类似的多媒体工程包括视频游戏、桌面游戏、需要进行大型 3D 模型渲染和纹理的虚拟现实工程、[带图][4]以及 .odt 输出的书籍、协作型的[博客站点][5]、音乐项目,等等。艺术家在应用程序中以图层(在图形世界中)和曲目(在音乐世界中)的形式执行版本控制并不少见——因此,Git 不会向多媒体项目文件本身添加任何内容。Git 的功能可用于艺术项目的其他部分(例如散文和叙述、项目管理、字幕文件、致谢、营销副本、文档等),而结构化远程备份的功能则由艺术家使用。
+
+#### 安装 Git-portal
+
+Git-portal 提供了 RPM 安装包,可供下载和安装。
+
+此外,用户可以从 Git-portal 的 Gitlab 主页手动安装。这仅仅是一个 Bash 脚本以及一些 Git 钩子(也是 Bash 脚本),但是需要一个快速的构建过程来让它知道安装的位置。
+
+```
+$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
+$ cd git-portal.clone
+$ ./configure
+$ make
+$ sudo make install
+```
+
+#### 使用 Git-portal
+
+Git-portal 与 Git 一起使用。这意味着,如同 Git 的所有大型文件扩展一样,都需要记住一些额外的步骤。但是,你仅仅需要在处理你的媒体资源的时候使用 Git-portal,所以很容易记住,除非你把大文件都当做文本文件来进行处理(对于 Git 用户很少见)。使用 Git-portal 必须做的一个安装步骤是:
+
+```
+$ mkdir bigproject.git
+$ cd !$
+$ git init
+$ git-portal init
+```
+
+Git-portal 的 `init` 函数会在 Git 存储库中创建一个 `_portal` 文件夹,并把它添加到 `.gitignore` 文件中。
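+
+可以用下面的命令简单确认一下初始化的结果(`.gitignore` 中具体的条目内容以你所用的 Git-portal 版本为准):
+
+```
+ls -d _portal        # 确认 _portal 文件夹已创建
+cat .gitignore       # 应该能看到与 _portal 相关的忽略条目
+```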
+
+在平日里使用 Git-portal 和 Git 协同十分平滑。一个较好的例子是基于 MIDI 的音乐项目:音乐工作站产生的项目文件是基于文本的,但是 MIDI 文件是二进制数据:
+
+```
+$ ls -1
+_portal
+song.1.qtr
+song.qtr
+song-Track_1-1.mid
+song-Track_1-3.mid
+song-Track_2-1.mid
+$ git add song*qtr
+$ git-portal song-Track*mid
+$ git add song-Track*mid
+```
+
+如果你查看一下 `_portal` 文件夹,你会发现那里有最初的 MIDI 文件。这些文件在原本的位置被替换成了指向 `_portal` 的链接文件,使得音乐工作站像预期一样运行。
+
+```
+$ ls -lG
+[...] _portal/
+[...] song.1.qtr
+[...] song.qtr
+[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
+[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
+[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
+```
+
+与 Git 相同,你也可以添加一个目录下的文件。
+
+```
+$ cp -r ~/synth-presets/yoshimi .
+$ git-portal add yoshimi
+Directories cannot go through the portal. Sending files instead.
+$ ls -lG _portal/yoshimi
+[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
+```
+
+删除功能也像预期一样工作,但是当从 `_portal` 中删除一些东西时,你应该使用 `git-portal rm` 而不是 `git rm`。使用 Git-portal 可以确保文件从 `_portal` 中删除:
+
+```
+$ ls
+_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
+song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
+$ git-portal rm song-Track_1-3.mid
+rm 'song-Track_1-3.mid'
+$ ls _portal/
+song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
+```
+
+如果你忘记使用 Git-portal,那么你需要手动删除 `_portal` 下的文件:
+
+```
+$ git rm song-Track_1-1.mid
+rm 'song-Track_1-1.mid'
+$ ls _portal/
+song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
+$ trash _portal/song-Track_1-1.mid
+```
+
+Git-portal 仅有的另一个功能,是列出当前所有的符号链接,并找出其中可能已经损坏的链接。有时这种情况会因为项目文件夹中的文件被移动而发生:
+
+```
+$ mkdir foo
+$ mv yoshimi foo
+$ git-portal status
+bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
+bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
+```
+
+如果你只是把 Git-portal 用于个人项目并自己维护备份,那么以上就是你需要了解的全部技术细节了。如果你想要添加一个协作者,或者希望 Git-portal 像 Git 一样帮你管理备份,你可以创建一个远程位置。
+
+#### 增加 Git-portal 远程位置
+
+为 Git-portal 增加一个远程位置,是通过 Git 已有的远程功能来实现的。Git-portal 实现了一些 Git 钩子(隐藏在存储库 `.git` 文件夹中的脚本),用来寻找名称以 `_portal` 开头的远程位置。如果找到了,它就会尝试使用 `rsync` 与该远程位置同步文件。Git-portal 在用户进行 Git 推送以及 Git 合并时(或者在进行 Git 拉取时,这实际上是一次获取加上自动合并)都会执行此操作。
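+
+为了帮助理解这套机制,下面给出钩子所做工作的一个大致的手动等价操作(路径和主机名仅为示意,对应下文示例中的远程门户位置):
+
+```
+# 把本地 _portal 中的静态资源同步到远程门户位置(示意)
+rsync -av _portal/ seth@example.com:/home/seth/git/bigproject_portal/
+```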
+
+如果你仅克隆了 Git 存储库,那么你可能永远不会自己添加一个远程位置。这是一个标准的 Git 过程:
+
+```
+$ git remote add origin git@gitdawg.com:seth/bigproject.git
+$ git remote -v
+origin git@gitdawg.com:seth/bigproject.git (fetch)
+origin git@gitdawg.com:seth/bigproject.git (push)
+```
+
+对你的主要 Git 存储库来说,`origin` 这个名字是一个流行的惯例,将其用于 Git 数据是有意义的。然而,你的 Git-portal 数据是分开存储的,所以你必须创建第二个远程位置来让 Git-portal 了解向哪里推送和从哪里拉取。取决于你的 Git 主机,你可能需要一个单独的服务器,因为空间有限的 Git 主机不太可能接受 GB 级的媒体资产。或者,可能你的服务器仅允许你访问你的 Git 存储库而不允许访问外部的存储文件夹:
+
+```
+$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
+$ git remote -v
+origin git@gitdawg.com:seth/bigproject.git (fetch)
+origin git@gitdawg.com:seth/bigproject.git (push)
+_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
+_portal seth@example.com:/home/seth/git/bigproject_portal (push)
+```
+
+你可能不想为所有用户提供服务器上的个人帐户,也不必这样做。为了提供对托管大文件资产的服务器的访问权限,你可以运行一个 Git 前端,比如 [Gitolite][8],或者使用 `rrsync`(受限的 rsync)。
+
+现在,你可以把你的 Git 数据推送到远程 Git 存储库,并把你的 Git-portal 数据推送到远程的门户位置:
+
+```
+$ git push origin HEAD
+master destination detected
+Syncing _portal content...
+sending incremental file list
+sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
+total size is 60,358,015 speedup is 6,474.10
+Syncing _portal content to example.com:/home/seth/git/bigproject_portal
+```
+
+如果你已经安装并配置了 Git-portal,以及名为 `_portal` 的远程位置,那么你的 `_portal` 文件夹就会被同步:每次推送时向服务器发送新内容,并从服务器获取新内容。尽管你不需要专门进行 Git 提交或推送来和服务器同步(用户可以直接使用 `rsync`),但我发现对艺术性内容的改动进行提交是有用的。这可以把艺术家及其数字资产纳入到工作流的其余部分中,并提供有关项目进度和速度的有用元数据。
+
+### 其他选择
+
+如果 Git-portal 对你而言太过简单,还有一些用于 Git 管理大型文件的其他选择。[Git 大文件存储][9](LFS)是一个名为 git-media 的停工项目的分支,这个分支由 GitHub 维护和支持。它需要特殊的命令(例如 `git lfs track` 来保护大型文件不被 Git 追踪)并且需要用户维护一个 `.gitattributes` 文件来更新哪些存储库中的文件被 LFS 追踪。对于大文件而言,它**仅**支持 HTTP 和 HTTPS 远程主机。所以你必须配置 LFS 服务器,才能使得用户可以通过 HTTP 而不是 SSH 或 `rsync` 来进行鉴权。
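+
+作为对比,下面是 Git LFS 的典型用法示意(需要事先安装 git-lfs,跟踪的文件模式只是举例):
+
+```
+git lfs install                    # 在仓库中启用 LFS 的钩子
+git lfs track "*.mid"              # 让 LFS 接管匹配该模式的文件
+git add .gitattributes             # 跟踪规则记录在 .gitattributes 中
+git add song.mid
+git commit -m "add audio via LFS"
+```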
+
+另一个相对 LFS 更灵活的选择是 [git-annex][10]。你可以在我的文章 [管理 Git 中大二进制 blob][11] 中了解更多(忽略其中 git-media 这个已经废弃项目的章节,因为其灵活性没有被它的继任者 Git LFS 延续下来)。Git-annex 是一个灵活且优雅的解决方案。它拥有一个细腻的系统来用于添加、删除、移动存储库中的大型文件。因为它灵活且强大,有很多新的命令和规则需要进行学习,所以建议看一下它的[文档][12]。
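+
+同样作为参考,git-annex 的基本用法大致如下(命令细节以其文档为准,文件名仅为示例):
+
+```
+git annex init "laptop"            # 在现有 Git 仓库中启用 git-annex
+git annex add bigfile.wav          # 将大文件交给 annex 管理(以符号链接替代)
+git commit -m "add audio via git-annex"
+git annex sync --content           # 与配置好的远程仓库同步内容
+```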
+
+然而,如果你的需求很简单,而且你更喜欢用整合现有技术的方案来完成简单明了的任务,那么 Git-portal 可能是更适合这项工作的工具。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/manage-multimedia-files-git
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[svtter](https://github.com/svtter)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
+[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
+[3]: http://gitlab.com/slackermedia/git-portal.git
+[4]: https://www.apress.com/gp/book/9781484241691
+[5]: http://mixedsignals.ml
+[6]: mailto:git@gitdawg.com
+[7]: mailto:seth@example.com
+[8]: https://opensource.com/article/19/4/file-sharing-git
+[9]: https://git-lfs.github.com/
+[10]: https://git-annex.branchable.com/
+[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
+[12]: https://git-annex.branchable.com/walkthrough/
diff --git a/published/20191022 How to Go About Linux Boot Time Optimisation.md b/published/20191022 How to Go About Linux Boot Time Optimisation.md
new file mode 100644
index 0000000000..b6db365389
--- /dev/null
+++ b/published/20191022 How to Go About Linux Boot Time Optimisation.md
@@ -0,0 +1,208 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11881-1.html)
+[#]: subject: (How to Go About Linux Boot Time Optimisation)
+[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
+[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
+
+如何进行 Linux 启动时间优化
+======
+
+![][2]
+
+> 快速启动嵌入式设备或电信设备,对于时间要求紧迫的应用程序来说至关重要,并且在改善用户体验方面也起着非常重要的作用。本文给出了一些关于如何缩短任意设备启动时间的重要技巧。
+
+快速启动或快速重启在各种情况下起着至关重要的作用。为了保持所有服务的高可用性和更好的性能,嵌入式设备的快速启动至关重要。设想有一台运行着没有启用快速启动的 Linux 操作系统的电信设备,所有依赖于这个特殊嵌入式设备的系统、服务和用户可能会受到影响。这些设备维持其服务的高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
+
+一台电信设备的一次小故障或关机,即使只是几秒钟,都可能会对无数互联网上的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的设备中加入快速启动的功能以帮助它们快速恢复工作是非常重要的。让我们从图 1 中理解 Linux 启动过程。
+
+![图 1:启动过程][3]
+
+### 监视工具和启动过程
+
+在对机器做出更改之前,用户应注意许多因素。其中包括计算机的当前启动速度,以及占用资源并增加启动时间的服务、进程或应用程序。
+
+#### 启动图
+
+为监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装:
+
+```
+sudo apt-get install pybootchartgui
+```
+
+你每次启动时,启动图会在日志中保存一个 png 文件,使用户能够查看该 png 文件来理解系统的启动过程和服务。为此,使用下面的命令:
+
+```
+cd /var/log/bootchart
+```
+
+用户可能需要一个应用程序来查看 png 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器,它没有一个精致的图形用户界面,但它只用来显示图片。Feh 可以用于查看 png 文件。你可以使用下面的命令来安装它:
+
+```
+sudo apt-get install feh
+```
+
+你可以使用 `feh xxxx.png` 来查看 png 文件。
+
+
+![图 2:启动图][4]
+
+图 2 显示了一个正在查看的引导图 png 文件。
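+
+例如,要快速打开最近一次启动生成的启动图,可以这样做(假设日志目录与上文一致):
+
+```
+cd /var/log/bootchart
+feh "$(ls -t *.png | head -n 1)"    # 打开最新生成的那张启动图
+```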
+
+#### systemd-analyze
+
+但是,对于 Ubuntu 15.10 以后的版本不再需要引导图。为获取关于启动速度的简短信息,使用下面的命令:
+
+```
+systemd-analyze
+```
+
+![图 3:systemd-analyze 的输出][5]
+
+图表 3 显示命令 `systemd-analyze` 的输出。
+
+命令 `systemd-analyze blame` 用于根据初始化所用的时间打印所有正在运行的单元的列表。这个信息是非常有用的,可用于优化启动时间。`systemd-analyze blame` 不会显示服务类型为简单(`Type=simple`)的服务,因为 systemd 认为这些服务应是立即启动的;因此,无法测量初始化的延迟。
+
+![图 4:systemd-analyze blame 的输出][6]
+
+图 4 显示 `systemd-analyze blame` 的输出。
+
+下面的命令打印时间关键的服务单元的树形链条:
+
+```
+systemd-analyze critical-chain
+```
+
+图 5 显示命令 `systemd-analyze critical-chain` 的输出。
+
+![图 5:systemd-analyze critical-chain 的输出][7]
+
+### 减少启动时间的步骤
+
+下面显示的是一些可以减少启动时间的各种步骤。
+
+#### BUM(启动管理器)
+
+BUM 是一个运行级配置编辑器,允许在系统启动或重启时配置初始化服务。它显示了可以在启动时启动的每个服务的列表。用户可以打开和关闭各个服务。BUM 有一个非常清晰的图形用户界面,并且非常容易使用。
+
+在 Ubuntu 14.04 中,BUM 可以使用下面的命令安装:
+
+```
+sudo apt-get install bum
+```
+
+为在 15.10 以后的版本中安装它,从链接 http://apt.ubuntu.com/p/bum 下载软件包。
+
+以基本的服务开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不想要的设备和服务,你也可以禁用它们中一些。我强烈建议你在禁用相关的服务前学习服务的基础知识,因为这可能会影响计算机或操作系统。图 6 显示 BUM 的图形用户界面。
+
+![图 6:BUM][8]
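+
+在没有 BUM 的系统上(尤其是较新的、基于 systemd 的发行版),也可以直接用 `systemctl` 达到类似的效果。下面以禁用打印和蓝牙服务为例(服务名因发行版而异,禁用前请确认确实不需要它们):
+
+```
+sudo systemctl disable --now cups.service bluetooth.service
+systemd-analyze blame | head        # 再次检查各服务的启动耗时变化
+```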
+
+#### 编辑 rc 文件
+
+要编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
+
+```
+cd /etc/init.d
+```
+
+然而,访问 `init.d` 需要 root 用户权限,该目录主要包含的是启动/停止脚本,这些脚本用于在系统运行时或启动期间控制(启动、停止、重新加载、重启)守护进程。
+
+在 `init.d` 目录中的 `rc` 文件被称为运行控制脚本。在启动期间,`init` 执行 `rc` 脚本并发挥它的作用。为改善启动速度,我们可以更改 `rc` 文件。使用任意的文件编辑器打开 `rc` 文件(当你在 `init.d` 目录中时)。
+
+例如,通过输入 `vim rc` ,你可以更改 `CONCURRENCY=none` 为 `CONCURRENCY=shell`。后者允许某些启动脚本同时执行,而不是依序执行。
+
+在最新版本的内核中,该值应该被更改为 `CONCURRENCY=makefile`。
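+
+如果不想手动编辑,也可以用 `sed` 一次性完成替换(假设该文件中当前的值为 `none`;修改前会保留一份 `.bak` 备份):
+
+```
+cd /etc/init.d
+sudo sed -i.bak 's/^CONCURRENCY=none/CONCURRENCY=makefile/' rc
+grep '^CONCURRENCY' rc               # 确认修改结果
+```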
+
+图 7 和图 8 显示编辑 `rc` 文件前后的启动时间比较。可以注意到启动速度有所提高。在编辑 `rc` 文件前的启动时间是 50.98 秒,然而在对 `rc` 文件进行更改后的启动时间是 23.85 秒。
+
+但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不工作,因为使用最新内核的操作系统使用 systemd 文件,而不再是 `init.d` 文件。
+
+![图 7:对 rc 文件进行更改之前的启动速度][9]
+
+![图 8:对 rc 文件进行更改之后的启动速度][10]
+
+#### E4rat
+
+E4rat 代表 e4 减少访问时间(仅在 ext4 文件系统的情况下)。它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目。E4rat 是一个通过碎片整理来帮助快速启动的应用程序。它还会加速应用程序的启动。E4rat 使用物理文件的重新分配来消除寻道时间和旋转延迟,因而达到较高的磁盘传输速度。
+
+E4rat 可以 .deb 软件包形式获得,你可以从它的官方网站 http://e4rat.sourceforge.net/ 下载。
+
+Ubuntu 默认安装的 ureadahead 软件包与 e4rat 冲突,因此必须使用下面的命令先清除这几个软件包:
+
+```
+sudo dpkg --purge ureadahead ubuntu-minimal
+```
+
+现在使用下面的命令来安装 e4rat 的依赖关系:
+
+```
+sudo apt-get install libblkid1 e2fslibs
+```
+
+打开下载的 .deb 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
+
+遵循下面所给的步骤来使 e4rat 正确地运行并提高启动速度。
+
+* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 `shift` 按键来完成。
+* 选择通常用于启动的选项(内核版本),并按 `e`。
+* 查找以 `linux /boot/vmlinuz` 开头的行,并在该行末尾添加下面的参数(先在行尾最后一个字母后面加一个空格):`init=/sbin/e4rat-collect`,或者尝试 `quiet splash vt.handoff=7 init=/sbin/e4rat-collect`。
+* 现在,按 `Ctrl+x` 来继续启动。这可以让 e4rat 在启动后收集数据。在这台机器上工作,并在接下来的两分钟时间内打开并关闭应用程序。
+* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:`cd /var/log/e4rat`。
+* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件就绪,再次访问 Grub 菜单,并对你的选项按 `e`。
+* 在你之前已经编辑过的同一行的末尾输入 `single`。这可以让你访问命令行。如果出现其它菜单,选择恢复正常启动(Resume normal boot)。如果你不知为何不能进入命令提示符,按 `Ctrl+Alt+F1` 组合键。
+* 在你看到登录提示后,输入你的登录信息。
+* 现在输入下面的命令:`sudo e4rat-realloc /var/lib/e4rat/startup.log`。此过程需要一段时间,具体取决于机器的磁盘速度。
+* 现在使用下面的命令来重启你的机器:`sudo shutdown -r now`。
+* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat。
+* 使用任意的编辑器访问 grub 文件。例如,`gksu gedit /etc/default/grub`。
+* 查找以 `GRUB_CMDLINE_LINUX_DEFAULT=` 开头的一行,并在引号之间、任何已有选项之前添加:`init=/sbin/e4rat-preload 18`。
+* 它看起来应该像这样:`GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload 18 quiet splash"`(完整的示意见列表之后)。
+* 保存并关闭该文件,然后使用 `sudo update-grub` 更新 Grub。
+* 重启系统,你将发现启动速度有明显变化。
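+
+把上面与 Grub 相关的步骤汇总起来,大致如下(仅为示意;`quiet splash` 等参数请以你系统中原有的配置为准):
+
+```
+# /etc/default/grub 中的相关行(示意)
+GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload 18 quiet splash"
+
+# 保存后更新 Grub 并重启
+# sudo update-grub
+# sudo reboot
+```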
+
+图 9 和图 10 显示在安装 e4rat 前后的启动时间之间的差异。可注意到启动速度的提高。在使用 e4rat 前启动所用时间是 22.32 秒,然而在使用 e4rat 后启动所用时间是 9.065 秒。
+
+![图 9:使用 e4rat 之前的启动速度][11]
+
+![图 10:使用 e4rat 之后的启动速度][12]
+
+### 一些易做的调整
+
+使用很小的调整也可以达到良好的启动速度,下面列出其中两个。
+
+#### SSD
+
+使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也有助于加快文件传输和运行应用程序方面的速度。
+
+#### 禁用图形用户界面
+
+图形用户界面、桌面图形和窗口动画占用大量的资源。禁用图形用户界面是获得良好的启动速度的另一个好方法。
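+
+在基于 systemd 的系统上,一种常见做法是让机器默认启动到纯文本的多用户模式(需要图形界面时可以再改回来):
+
+```
+sudo systemctl set-default multi-user.target    # 默认不启动图形界面
+# 需要恢复图形界面时:
+# sudo systemctl set-default graphical.target
+```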
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
+
+作者:[B Thangaraju][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/b-thangaraju/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?&ssl=1 (Screenshot from 2019-10-07 13-16-32)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
+[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?ssl=1
+[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?ssl=1
+[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?ssl=1
+[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?ssl=1
+[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?ssl=1
+[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?ssl=1
+[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?ssl=1
+[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?ssl=1
+[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?ssl=1
+[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?ssl=1
diff --git a/sources/tech/20191225 8 Commands to Check Memory Usage on Linux.md b/published/20191225 8 Commands to Check Memory Usage on Linux.md
similarity index 57%
rename from sources/tech/20191225 8 Commands to Check Memory Usage on Linux.md
rename to published/20191225 8 Commands to Check Memory Usage on Linux.md
index 89108b5ded..70e07c8181 100644
--- a/sources/tech/20191225 8 Commands to Check Memory Usage on Linux.md
+++ b/published/20191225 8 Commands to Check Memory Usage on Linux.md
@@ -1,49 +1,45 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: translator: (mengxinayan)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11870-1.html)
[#]: subject: (8 Commands to Check Memory Usage on Linux)
[#]: via: (https://www.2daygeek.com/linux-commands-check-memory-usage/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-8 Commands to Check Memory Usage on Linux
+检查 Linux 中内存使用情况的 8 条命令
======
-Linux is not like Windows and you will not get a GUI always, especially in a server environment.
+![](https://img.linux.net.cn/data/attachment/album/202002/09/121112mg0jigxtcc5xr8or.jpg)
-As a Linux administrator, it is important to know how to check your available and used resources, such as memory, CPU, disk space, etc.
+Linux 并不像 Windows,你经常不会有图形界面可供使用,特别是在服务器环境中。
-If there are any applications that use too much resources on the system to run your system at the optimum level you need to find and fix.
+作为一名 Linux 管理员,知道如何获取当前可用的和已经使用的资源情况,比如内存、CPU、磁盘等,是相当重要的。如果某一应用在你的系统上占用了太多的资源,导致你的系统无法达到最优状态,那么你需要找到并修正它。
-If you want to **[find out the top 10 memory (RAM) consumption processes in Linux][1]**, go to the following article.
+如果你想找到消耗内存前十名的进程,你需要去阅读这篇文章:[如何在 Linux 中找出内存消耗最大的进程][1]。
-In Linux, there are commands for everything, so use the corresponding commands.
+在 Linux 中,几乎任何事都有对应的命令,所以就使用相关命令吧。在这篇教程中,我们将展示 8 个有用的命令,用来查看 Linux 系统中内存的使用情况,包括 RAM 和交换分区。
-In this tutorial, we will show you eight powerful commands to check memory usage on a Linux system, including RAM and swap.
+创建交换分区在 Linux 系统中是非常重要的,如果你想了解如何创建,可以去阅读这篇文章:[在 Linux 系统上创建交换分区][2]。
-**[Creating swap space on a Linux system][2]** is very important.
+下面的命令可以帮助你以不同的方式查看 Linux 内存使用情况。
-The following commands can help you check memory usage in Linux in different ways.
+ * `free` 命令
+ * `/proc/meminfo` 文件
+ * `vmstat` 命令
+ * `ps_mem` 命令
+ * `smem` 命令
+ * `top` 命令
+ * `htop` 命令
+ * `glances` 命令
- * free Command
- * /proc/meminfo File
- * vmstat Command
- * ps_mem Command
- * smem Command
- * top Command
- * htop Command
- * glances Command
+### 1)如何使用 free 命令查看 Linux 内存使用情况
+[free 命令][3] 是被 Linux 管理员广泛使用的主要命令。但是它提供的信息比 `/proc/meminfo` 文件少。
+`free` 命令会分别展示物理内存和交换分区内存中已使用的和未使用的数量,以及内核使用的缓冲区和缓存。
-### 1) How to Check Memory Usage on Linux Using the free Command
-
-**[Free command][3]** is the most powerful command widely used by the Linux administrator. But it provides very little information compared to the “/proc/meminfo” file.
-
-Free command displays the total amount of free and used physical and swap memory on the system, as well as buffers and caches used by the kernel.
-
-These information is gathered from the “/proc/meminfo” file.
+这些信息都是从 `/proc/meminfo` 文件中获取的。
```
# free -m
@@ -52,24 +48,18 @@ Mem: 15867 9199 1702 3315 4965 3039
Swap: 17454 666 16788
```
- * **total:** Total installed memory
- * **used:** Memory is currently in use by running processes (used= total – free – buff/cache)
- * **free:** Unused memory (free= total – used – buff/cache)
- * **shared:** Memory shared between two or more processes (multiple processes)
- * **buffers:** Memory reserved by the kernel to hold a process queue request.
- * **cache:** Size of the page cache that holds recently used files in RAM
- * **buff/cache:** Buffers + Cache
- * **available:** Estimation of how much memory is available for starting new applications, without swapping.
+ * `total`:总的内存量
+ * `used`:被当前运行中的进程使用的内存量(`used` = `total` – `free` – `buff/cache`)
+ * `free`: 未被使用的内存量(`free` = `total` – `used` – `buff/cache`)
+ * `shared`: 在两个或多个进程之间共享的内存量
+ * `buffers`: 内存中保留用于内核记录进程队列请求的内存量
+ * `cache`: 在 RAM 中存储最近使用过的文件的页缓冲大小
+ * `buff/cache`: 缓冲区和缓存总的使用内存量
+ * `available`: 可用于启动新应用的可用内存量(不含交换分区;在脚本中读取该值的示例见下)
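+
+例如,若想在脚本中直接取出“可用内存”的数值,可以这样做(假设你的 procps 版本中 `free -m` 输出的第 7 列就是 `available`,列的位置可能因版本而异):
+
+```
+free -m | awk '/^Mem:/ {print $7 " MiB available"}'
+```
+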
+### 2) 如何使用 /proc/meminfo 文件查看 Linux 内存使用情况
-
-### 2) How to Check Memory Usage on Linux Using the /proc/meminfo File
-
-The “/proc/meminfo” file is a virtual file that contains various real-time information about memory usage.
-
-It shows memory stats in kilobytes, most of which are somewhat difficult to understand.
-
-However it contains useful information about memory usage.
+`/proc/meminfo` 文件是一个包含了多种内存使用的实时信息的虚拟文件。它展示内存状态单位使用的是 kB,其中大部分属性都难以理解。然而它也包含了内存使用情况的有用信息。
```
# cat /proc/meminfo
@@ -124,13 +114,11 @@ DirectMap2M: 14493696 kB
DirectMap1G: 2097152 kB
```
-### 3) How to Check Memory Usage on Linux Using the vmstat Command
+### 3) 如何使用 vmstat 命令查看 Linux 内存使用情况
-The **[vmstat command][4]** is another useful tool for reporting virtual memory statistics.
+[vmstat 命令][4] 是另一个报告虚拟内存统计信息的有用工具。
-vmstat reports information about processes, memory, paging, block IO, traps, disks, and cpu functionality.
-
-vmstat does not require special permissions, and it can help identify system bottlenecks.
+`vmstat` 报告的信息包括:进程、内存、页面映射、块 I/O、陷阱、磁盘和 CPU 特性信息。`vmstat` 不需要特殊的权限,并且它可以帮助诊断系统瓶颈。
```
# vmstat
@@ -140,58 +128,35 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
1 0 682060 1769324 234188 4853500 0 3 25 91 31 16 34 13 52 0 0
```
-If you want to understand this in detail, read the field description below.
+如果你想详细了解每一项的含义,请阅读下面的描述。字段列表之后还给出了一个按固定间隔连续采样的示例。
-**Procs**
+* `procs`:进程
+ * `r`: 可以运行的进程数目(正在运行或等待运行)
+ * `b`: 处于不可中断睡眠中的进程数目
+* `memory`:内存
+ * `swpd`: 使用的虚拟内存数量
+ * `free`: 空闲的内存数量
+ * `buff`: 用作缓冲区内存的数量
+ * `cache`: 用作缓存内存的数量
+ * `inact`: 不活动的内存数量(使用 `-a` 选项)
+ * `active`: 活动的内存数量(使用 `-a` 选项)
+* `Swap`:交换分区
+ * `si`: 每秒从磁盘交换的内存数量
+ * `so`: 每秒交换到磁盘的内存数量
+* `IO`:输入输出
+ * `bi`: 从一个块设备中收到的块(块/秒)
+ * `bo`: 发送到一个块设备的块(块/秒)
+* `System`:系统
+ * `in`: 每秒的中断次数,包括时钟。
+ * `cs`: 每秒的上下文切换次数。
+* `CPU`:下面这些是占总 CPU 时间的百分比
+ * `us`: 花费在非内核代码上的时间占比(用户时间,包括 nice 时间)
+ * `sy`: 花费在内核代码上的时间占比 (系统时间)
+ * `id`: 花费在闲置的时间占比。在 Linux 2.5.41 之前,包括 I/O 等待时间
+ * `wa`: 花费在 I/O 等待上的时间占比。在 Linux 2.5.41 之前,包括在空闲时间中
+ * `st`: 被虚拟机偷走的时间占比。在 Linux 2.6.11 之前,这部分称为 unknown
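+
+例如,按固定间隔连续采样往往比单次快照更能反映趋势,`-a` 选项则对应上面提到的 `inact`/`active` 两列:
+
+```
+vmstat -a 1 5    # 每隔 1 秒采样一次,共 5 次,并显示活动/不活动内存
+```
+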
- * **r:** The number of runnable processes (running or waiting for run time).
- * **b:** The number of processes in uninterruptible sleep.
-
-
-
-**Memory**
-
- * **swpd:** the amount of virtual memory used.
- * **free:** the amount of idle memory.
- * **buff:** the amount of memory used as buffers.
- * **cache:** the amount of memory used as cache.
- * **inact:** the amount of inactive memory. (-a option)
- * **active:** the amount of active memory. (-a option)
-
-
-
-**Swap**
-
- * **si:** Amount of memory swapped in from disk (/s).
- * **so:** Amount of memory swapped to disk (/s).
-
-
-
-**IO**
-
- * **bi:** Blocks received from a block device (blocks/s).
- * **bo:** Blocks sent to a block device (blocks/s).
-
-
-
-**System**
-
- * **in:** The number of interrupts per second, including the clock.
- * **cs:** The number of context switches per second.
-
-
-
-**CPU : These are percentages of total CPU time.**
-
- * **us:** Time spent running non-kernel code. (user time, including nice time)
- * **sy:** Time spent running kernel code. (system time)
- * **id:** Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
- * **wa:** Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
- * **st:** Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
-
-
-
-Run the following command for detailed information.
+运行下面的命令查看详细的信息。
```
# vmstat -s
@@ -223,16 +188,13 @@ Run the following command for detailed information.
1577163147 boot time
3318 forks
```
+### 4) 如何使用 ps_mem 命令查看 Linux 内存使用情况
-### 4) How to Check Memory Usage on Linux Using the ps_mem Command
+[ps_mem][5] 是一个用来查看当前内存使用情况的简单的 Python 脚本。该工具可以确定每个程序使用了多少内存(不是每个进程)。
-**[ps_mem][5]** is a simple Python script that allows you to get core memory usage accurately for a program in Linux.
+该工具采用如下的方法计算每个程序使用内存:总的使用 = 程序进程私有的内存 + 程序进程共享的内存。
-This can determine how much RAM is used per program (not per process).
-
-It calculates the total amount of memory used per program, total = sum (private RAM for program processes) + sum (shared RAM for program processes).
-
-The shared RAM is problematic to calculate, and the tool automatically selects the most accurate method available for the running kernel.
+计算共享内存是存在不足之处的,该工具可以为运行中的内核自动选择最准确的方法。
```
# ps_mem
@@ -285,15 +247,13 @@ The shared RAM is problematic to calculate, and the tool automatically selects t
==================================
```
-### 5) How to Check Memory Usage on Linux Using the smem Command
+### 5)如何使用 smem 命令查看 Linux 内存使用情况
-**[smem][6]** is a tool that can provide numerous reports of memory usage on Linux systems. Unlike existing tools, smem can report Proportional Set Size (PSS), Unique Set Size (USS) and Resident Set Size (RSS).
+[smem][6] 是一个可以为 Linux 系统提供多种内存使用情况报告的工具。不同于现有的工具,`smem` 可以报告比例集大小(PSS)、唯一集大小(USS)和驻留集大小(RSS)。
-Proportional Set Size (PSS): refers to the amount of memory used by libraries and applications in the virtual memory system.
-
-Unique Set Size (USS) : Unshared memory is reported as USS (Unique Set Size).
-
-Resident Set Size (RSS) : The standard measure of physical memory (it typically shared among multiple applications) usage known as resident set size (RSS) will significantly overestimate memory usage.
+- 比例集大小(PSS):库和应用在虚拟内存系统中的使用量。
+- 唯一集大小(USS):其报告的是非共享内存。
+- 驻留集大小(RSS):标准的物理内存使用量度量(这部分内存通常在多个应用之间共享),它往往会明显高估实际的内存使用。
```
# smem -tk
@@ -336,13 +296,11 @@ Resident Set Size (RSS) : The standard measure of physical memory (it typically
90 1 0 4.8G 5.2G 8.0G
```
-### 6) How to Check Memory Usage on Linux Using the top Command
+### 6) 如何使用 top 命令查看 Linux 内存使用情况
-**[top command][7]** is one of the most frequently used commands by Linux administrators to understand and view the resource usage for a process on a Linux system.
+[top 命令][7] 是一个 Linux 系统的管理员最常使用的用于查看进程的资源使用情况的命令。
-It displays the total memory of the system, current memory usage, free memory and total memory used by the buffers.
-
-In addition, it displays total swap memory, current swap usage, free swap memory, and total cached memory by the system.
+该命令会展示了系统总的内存量、当前内存使用量、空闲内存量和缓冲区使用的内存总量。此外,该命令还会展示总的交换空间内存量、当前交换空间的内存使用量、空闲的交换空间内存量和缓存使用的内存总量。
```
# top -b | head -10
@@ -368,27 +326,25 @@ KiB Swap: 17873388 total, 17873388 free, 0 used. 9179772 avail Mem
2174 daygeek 20 2466680 122196 78604 S 0.8 0.8 0:17.75 WebExtensi+
```
-### 7) How to Check Memory Usage on Linux Using the htop Command
+### 7) 如何使用 htop 命令查看 Linux 内存使用情况
-The **[htop command][8]** is an interactive process viewer for Linux/Unix systems. It is a text-mode application and requires the ncurses library, it was developed by Hisham.
+[htop 命令][8] 是一个可交互的 Linux/Unix 系统进程查看器。它是一个文本模式应用,依赖 ncurses 库,由 Hisham 开发。
-It is designed as an alternative to the top command.
+该命令的设计目的是用来替代 `top` 命令。它与 `top` 命令很相似,但允许你垂直或水平滚动,以便查看系统中的所有进程。
-This is similar to the top command, but allows you to scroll vertically and horizontally to see all the processes running the system.
+`htop` 命令支持彩色显示,这个额外的优点在你追踪系统性能时十分有用。
-htop comes with Visual Colors, which have added benefits and are very evident when it comes to tracking system performance.
+此外,你可以自由地执行与进程相关的任务,比如杀死进程或者改变进程的优先级,而不需要输入其进程号(PID)。
-You are free to carry out any tasks related to processes, such as process killing and renicing without entering their PIDs.
+![][10]
-[![][9]][10]
+### 8)如何使用 glances 命令查看 Linux 内存使用情况
-### 8) How to Check Memory Usage on Linux Using the glances Command
+[Glances][11] 是一个 Python 编写的跨平台的系统监视工具。
-**[Glances][11]** is a cross-platform system monitoring tool written in Python.
+你可以在一个地方查看所有信息,比如:CPU 使用情况、内存使用情况、正在运行的进程、网络接口、磁盘 I/O、RAID、传感器、文件系统信息、Docker、系统信息、运行时间等等。
-You can see all information in one place such as CPU usage, Memory usage, running process, Network interface, Disk I/O, Raid, Sensors, Filesystem info, Docker, System info, Uptime, etc,.
-
-![][9]
+![][12]
--------------------------------------------------------------------------------
@@ -396,21 +352,22 @@ via: https://www.2daygeek.com/linux-commands-check-memory-usage/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[萌新阿岩](https://github.com/mengxinayan)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/how-to-find-high-cpu-consumption-processes-in-linux/
-[2]: https://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/
-[3]: https://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/
-[4]: https://www.2daygeek.com/linux-vmstat-command-examples-tool-report-virtual-memory-statistics/
-[5]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/
-[6]: https://www.2daygeek.com/smem-linux-memory-usage-statistics-reporting-tool/
+[1]: https://linux.cn/article-11542-1.html
+[2]: https://linux.cn/article-9579-1.html
+[3]: https://linux.cn/article-8314-1.html
+[4]: https://linux.cn/article-8157-1.html
+[5]: https://linux.cn/article-8639-1.html
+[6]: https://linux.cn/article-7681-1.html
[7]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[8]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[10]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-2.jpg
[11]: https://www.2daygeek.com/linux-glances-advanced-real-time-linux-system-performance-monitoring-tool/
+[12]: https://www.2daygeek.com/wp-content/uploads/2019/12/linux-commands-check-memory-usage-3.jpg
diff --git a/published/20191227 Top CI-CD resources to set you up for success.md b/published/20191227 Top CI-CD resources to set you up for success.md
new file mode 100644
index 0000000000..a19cea5720
--- /dev/null
+++ b/published/20191227 Top CI-CD resources to set you up for success.md
@@ -0,0 +1,57 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11875-1.html)
+[#]: subject: (Top CI/CD resources to set you up for success)
+[#]: via: (https://opensource.com/article/19/12/cicd-resources)
+[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
+
+顶级 CI / CD 资源,助你成功
+======
+
+> 随着企业期望实现无缝、灵活和可扩展的部署,持续集成和持续部署成为 2019 年的关键主题。
+
+![Plumbing tubes in many directions][1]
+
+对于 CI/CD 和 DevOps 来说,2019 年是非常棒的一年。Opensource.com 的作者分享了他们专注于无缝、灵活和可扩展部署时是如何朝着敏捷和 scrum 方向发展的。以下是我们 2019 年发布的 CI/CD 文章中的一些重要文章。
+
+### 学习和提高你的 CI/CD 技能
+
+我们最喜欢的一些文章集中在 CI/CD 的实操经验上,并涵盖了许多方面。通常以 [Jenkins][2] 管道开始,Bryant Son 的文章《[用 Jenkins 构建 CI/CD 管道][3]》将为你提供足够的经验,以开始构建你的第一个管道。Daniel Oh 在《[用 DevOps 管道进行自动验收测试][4]》一文中,提供了有关验收测试的重要信息,包括可用于自行测试的各种 CI/CD 应用程序。我写的《[安全扫描 DevOps 管道][5]》非常简短,其中简要介绍了如何使用 Jenkins 平台在管道中设置安全性。
+
+### 交付工作流程
+
+正如 Jithin Emmanuel 在《[Screwdriver:一个用于持续交付的可扩展构建平台][6]》中分享的,在学习如何使用和提高你的 CI/CD 技能方面,工作流程很重要,特别是当涉及到管道时。Emily Burns 在《[为什么 Spinnaker 对 CI/CD 很重要][7]》中解释了灵活地使用 CI/CD 工作流程准确构建所需内容的原因。Willy-Peter Schaub 还盛赞了为所有产品创建统一管道的想法,以便《[在一个 CI/CD 管道中一致地构建每个产品][8]》。这些文章将让你很好地了解在团队成员加入工作流程后会发生什么情况。
+
+### CI/CD 如何影响企业
+
+2019 年也是认识到 CI/CD 的业务影响以及它是如何影响日常运营的一年。Agnieszka Gancarczyk 分享了 Red Hat 《[小型 Scrum vs. 大型 Scrum][9]》的调查结果, 包括受访者对 Scrum、敏捷运动及对团队的影响的不同看法。Will Kelly 的《[持续部署如何影响整个组织][10]》,也提及了开放式沟通的重要性。Daniel Oh 也在《[DevOps 团队必备的 3 种指标仪表板][11]》中强调了指标和可观测性的重要性。最后是 Ann Marie Fred 的精彩文章《[不在生产环境中测试?要在生产环境中测试!][12]》详细说明了在验收测试前在生产环境中测试的重要性。
+
+感谢许多贡献者在 2019 年与 Opensource 的读者分享他们的见解,我期望在 2020 年里从他们那里了解更多有关 CI/CD 发展的信息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/12/cicd-resources
+
+作者:[Jessica Cherry][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jrepka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
+[2]: https://jenkins.io/
+[3]: https://linux.cn/article-11546-1.html
+[4]: https://opensource.com/article/19/4/devops-pipeline-acceptance-testing
+[5]: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
+[6]: https://opensource.com/article/19/3/screwdriver-cicd
+[7]: https://opensource.com/article/19/8/why-spinnaker-matters-cicd
+[8]: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
+[9]: https://opensource.com/article/19/3/small-scale-scrum-vs-large-scale-scrum
+[10]: https://opensource.com/article/19/7/organizational-impact-continuous-deployment
+[11]: https://linux.cn/article-11183-1.html
+[12]: https://opensource.com/article/19/5/dont-test-production
diff --git a/published/20171018 How to create an e-book chapter template in LibreOffice Writer.md b/published/202001/20171018 How to create an e-book chapter template in LibreOffice Writer.md
similarity index 100%
rename from published/20171018 How to create an e-book chapter template in LibreOffice Writer.md
rename to published/202001/20171018 How to create an e-book chapter template in LibreOffice Writer.md
diff --git a/published/20190405 File sharing with Git.md b/published/202001/20190405 File sharing with Git.md
similarity index 100%
rename from published/20190405 File sharing with Git.md
rename to published/202001/20190405 File sharing with Git.md
diff --git a/translated/tech/20190406 Run a server with Git.md b/published/202001/20190406 Run a server with Git.md
similarity index 60%
rename from translated/tech/20190406 Run a server with Git.md
rename to published/202001/20190406 Run a server with Git.md
index c52295591e..ee4497eb51 100644
--- a/translated/tech/20190406 Run a server with Git.md
+++ b/published/202001/20190406 Run a server with Git.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11795-1.html)
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth)
@@ -10,37 +10,37 @@
使用 Git 来管理 Git 服务器
======
-> 借助 Gitolite,你可以使用 Git 来管理 Git 服务器。在我们的系列中了解这些鲜为人知的 Git 用途。
+> 借助 Gitolite,你可以使用 Git 来管理 Git 服务器。在我们的系列文章中了解这些鲜为人知的 Git 用途。
-![computer servers processing data][1]
+![](https://img.linux.net.cn/data/attachment/album/202001/18/132045yrr1pb9n497tfbiy.png)
-正如我在系列文章中演示的那样,[Git][2] 除了跟踪源代码外,还可以做很多事情。信不信由你,Git 甚至可以管理你的 Git 服务器,因此你可以或多或少地使用 Git 本身运行 Git 服务器。
+正如我在系列文章中演示的那样,[Git][2] 除了跟踪源代码外,还可以做很多事情。信不信由你,Git 甚至可以管理你的 Git 服务器,因此你可以或多或少地使用 Git 本身来运行 Git 服务器。
-当然,这涉及除日常使用 Git 之外的许多组件,其中最重要的是 [Gitolite][3],该后端应用程序可以管理你使用 Git 的每个细小的配置。Gitolite 的优点在于,由于它使用 Git 作为其前端接口,因此很容易将 Git 服务器管理集成到其他基于 Git 的工作流中。Gitolite 可以精确控制谁可以访问你服务器上的特定存储库以及他们具有哪些权限。你可以使用常规的 Linux 系统工具自行管理此类事务,但是如果在六个用户中只有一个或两个以上的仓库,则需要大量的工作。
+当然,这涉及除日常使用 Git 之外的许多组件,其中最重要的是 [Gitolite][3],该后端应用程序可以管理你使用 Git 的每个细微的配置。Gitolite 的优点在于,由于它使用 Git 作为其前端接口,因此很容易将 Git 服务器管理集成到其他基于 Git 的工作流中。Gitolite 可以精确控制谁可以访问你服务器上的特定存储库以及他们具有哪些权限。你可以使用常规的 Linux 系统工具自行管理此类事务,但是如果有好几个用户和不止一两个仓库,则需要大量的工作。
Gitolite 的开发人员做了艰苦的工作,使你可以轻松地为许多用户提供对你的 Git 服务器的访问权,而又不让他们访问你的整个环境 —— 而这一切,你可以使用 Git 来完成全部工作。
-Gitolite 并`不是` 图形化的管理员和用户面板。优秀的 [Gitea][4] 项目可提供这种经验,但是本文重点介绍 Gitolite 的简单优雅和令人舒适的熟悉感。
+Gitolite 并**不是**图形化的管理员和用户面板。优秀的 [Gitea][4] 项目可提供这种体验,但是本文重点介绍 Gitolite 的简单优雅和令人舒适的熟悉感。
### 安装 Gitolite
-假设你的 Git 服务器运行 Linux,则可以使用包管理器安装 Gitolite(在 CentOS 和 RHEL 上为 `yum`,在 Debian 和 Ubuntu 上为 `apt`,在 OpenSUSE 上为 `zypper` 等)。例如,在 RHEL 上:
+假设你的 Git 服务器运行在 Linux 上,则可以使用包管理器安装 Gitolite(在 CentOS 和 RHEL 上为 `yum`,在 Debian 和 Ubuntu 上为 `apt`,在 OpenSUSE 上为 `zypper` 等)。例如,在 RHEL 上:
```
$ sudo yum install gitolite3
```
-许多发行版的存储库仍提供的是旧版本的 Gitolite,但当前版本为版本 3。
+许多发行版的存储库提供的仍是旧版本的 Gitolite,但最新版本为版本 3。
你必须具有对服务器的无密码 SSH 访问权限。如果愿意,你可以使用密码登录服务器,但是 Gitolite 依赖于 SSH 密钥,因此必须配置使用密钥登录的选项。如果你不知道如何配置服务器以进行无密码 SSH 访问,请首先学习如何进行操作(Steve Ovens 的 Ansible 文章的[设置 SSH 密钥身份验证][5]部分对此进行了很好的说明)。这是加强服务器管理的安全以及运行 Gitolite 的重要组成部分。
### 配置 Git 用户
-如果没有 Gitolite,则如果某人请求访问你在服务器上托管的 Git 存储库,则必须向该人提供用户帐户。Git 提供了一个特殊的外壳,即 `git-shell`,这是一个仅执行 Git 任务的特别特定的 shell。这可以让你有个只能通过非常受限的 Shell 环境的过滤器来访问服务器的用户。
+如果没有 Gitolite,则如果某人请求访问你在服务器上托管的 Git 存储库时,则必须向该人提供用户帐户。Git 提供了一个特殊的外壳,即 `git-shell`,这是一个仅执行 Git 任务的特别的特定 shell。这可以让你有个只能通过非常受限的 Shell 环境来过滤访问你的服务器的用户。
-该解决方案可行,但通常意味着用户可以访问服务器上的所有存储库,除非你具有用于组权限的良好模式,并在创建新存储库时严格保持这些权限。这种方式还需要在系统级别进行大量手动配置,这通常是为特定级别的系统管理员保留的区域,而不一定是通常负责 Git 存储库的人员。
+这个解决方案是一个办法,但通常意味着用户可以访问服务器上的所有存储库,除非你具有用于组权限的良好模式,并在创建新存储库时严格遵循这些权限。这种方式还需要在系统级别进行大量手动配置,这通常是只有特定级别的系统管理员才能做的工作,而不一定是通常负责 Git 存储库的人员。
-Gitolite 通过为需要访问任何存储库的每个人指定一个用户名来完全回避此问题。 默认情况下,用户名是 `git`,并且由于 Gitolite 的文档假定使用的是它,因此在学习该工具时保留它是一个很好的默认设置。对于曾经使用过 GitLab 或 GitHub 或任何其他 Git 托管服务的人来说,这也是一个众所周知的约定。
+Gitolite 通过为需要访问任何存储库的每个人指定一个用户名来完全回避此问题。默认情况下,该用户名是 `git`,并且由于 Gitolite 的文档中假定使用的是它,因此在学习该工具时保留它是一个很好的默认设置。对于曾经使用过 GitLab 或 GitHub 或任何其他 Git 托管服务的人来说,这也是一个众所周知的约定。
Gitolite 将此用户称为**托管用户**。在服务器上创建一个帐户以充当托管用户(我习惯使用 `git`,因为这是惯例):
@@ -48,7 +48,7 @@ Gitolite 将此用户称为**托管用户**。在服务器上创建一个帐户
$ sudo adduser --create-home git
```
-为了控制该 `git` 用户帐户,该帐户必须具有属于你的有效 SSH 公钥。你应该已经进行了设置,因此复制你的公钥(**不是你的私钥**)添加到 `git` 用户的家目录中:
+为了控制该 `git` 用户帐户,该帐户必须具有属于你的有效 SSH 公钥。你应该已经进行了设置,因此复制你的公钥(**而不是你的私钥**)添加到 `git` 用户的家目录中:
```
$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
@@ -62,11 +62,11 @@ $ sudo su - git
$ gitolite setup --pubkey id_ed25519.pub
```
-安装脚本运行后,`git` 的家用户目录将有一个 `repository` 目录,该目录(目前)包含文件 `git-admin.git` 和 `testing.git`。这就是该服务器所需的全部设置,现在请登出 `git` 用户。
+安装脚本运行后,`git` 的家用户目录将有一个 `repository` 目录,该目录(目前)包含存储库 `git-admin.git` 和 `testing.git`。这就是该服务器所需的全部设置,现在请登出 `git` 用户。
### 使用 Gitolite
-管理 Gitolite 就是编辑 Git 存储库中的文本文件,尤其是 `gitolite-admin.git`。你不会通过 SSH 进入服务器来进行 Git 管理,并且 Gitolite 也建议你不要这样尝试。你和你的用户存储在 Gitolite 服务器上的存储库是个**裸**存储库,因此最好不要使用它们。
+管理 Gitolite 就是编辑 Git 存储库中的文本文件,尤其是 `gitolite-admin.git` 中的。你不会通过 SSH 进入服务器来进行 Git 管理,并且 Gitolite 也建议你不要这样尝试。在 Gitolite 服务器上存储你和你的用户的存储库是个**裸**存储库,因此最好不要使用它们。
```
$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
@@ -76,7 +76,7 @@ conf
keydir
```
-该存储库中的 `conf` 目录包含一个名为 `gitolite.conf` 的文件。在文本编辑器中打开它,或使用`cat`查看其内容:
+该存储库中的 `conf` 目录包含一个名为 `gitolite.conf` 的文件。在文本编辑器中打开它,或使用 `cat` 查看其内容:
```
repo gitolite-admin
@@ -86,15 +86,15 @@ repo testing
RW+ = @all
```
-你可能对该配置文件的功能有所了解:`gitolite-admin` 代表此存储库,并且 `id_ed25519` 密钥的所有者具有读取、写入和 Git 管理权限。换句话说,不是将用户映射到普通的本地 Unix 用户(因为所有用户都使用 `git` 用户托管用户身份),而是将用户映射到 `keydir` 目录中列出的 SSH 密钥。
+你可能对该配置文件的功能有所了解:`gitolite-admin` 代表此存储库,并且 `id_ed25519` 密钥的所有者具有读取、写入和管理 Git 的权限。换句话说,不是将用户映射到普通的本地 Unix 用户(因为所有用户都使用 `git` 用户托管用户身份),而是将用户映射到 `keydir` 目录中列出的 SSH 密钥。
`testing.git` 存储库使用特殊组符号为访问服务器的每个人提供了全部权限。
#### 添加用户
-如果要向 Git 服务器添加一个名为 `alice` 的用户,Alice 必须向你发送她的 SSH 公钥。Gitolite 使用 `.pub` 扩展名左边的任何内容作为该 Git 用户的标识符。不要使用默认的密钥名称值,而是给密钥指定一个指示密钥所有者的名称。如果用户有多个密钥(例如,一个用于笔记本电脑,一个用于台式机),则可以使用子目录来避免文件名冲突。例如,Alice 在笔记本电脑上使用的密钥可能是默认的 `id_rsa.pub`,因此将其重命名为`alice.pub` 或类似名称(或让用户根据其计算机上的本地用户帐户来命名密钥),然后将其放入 `gitolite-admin.git/keydir/work/laptop/` 目录中。如果她从她的桌面发送了另一个密钥,命名为 `alice.pub`(与上一个相同),然后将其添加到 `keydir/home/desktop/` 中。另一个密钥可能放到 `keydir/home/desktop/` 中,依此类推。Gitolite 递归地在 `keydir` 中搜索与存储库“用户”匹配的 `.pub` 文件,并将所有匹配项视为相同的身份。
+如果要向 Git 服务器添加一个名为 `alice` 的用户,Alice 必须向你发送她的 SSH 公钥。Gitolite 使用文件名的 `.pub` 扩展名左边的任何内容作为该 Git 用户的标识符。不要使用默认的密钥名称值,而是给密钥指定一个指示密钥所有者的名称。如果用户有多个密钥(例如,一个用于笔记本电脑,一个用于台式机),则可以使用子目录来避免文件名冲突。例如,Alice 在笔记本电脑上使用的密钥可能是默认的 `id_rsa.pub`,因此将其重命名为`alice.pub` 或类似名称(或让用户根据其计算机上的本地用户帐户来命名密钥),然后将其放入 `gitolite-admin.git/keydir/work/laptop/` 目录中。如果她从她的桌面计算机发送了另一个密钥,命名为 `alice.pub`(与上一个相同),然后将其添加到 `keydir/home/desktop/` 中。另一个密钥可能放到 `keydir/home/desktop/` 中,依此类推。Gitolite 递归地在 `keydir` 中搜索与存储库“用户”相匹配的 `.pub` 文件,并将所有匹配项视为相同的身份。
-当你将密钥添加到 `keydir` 目录时,必须将它们提交回服务器。这是一件很容易忘记的事情,这里有一个使用自动化的 Git 应用程序(例如 [Sparkleshare] [7])的真正的理由,因此任何更改都将立即提交给你的 Gitolite 管理员。第一次忘记提交和推送,在浪费了三个小时的时间以及用户的故障排除时间之后,你会发现 Gitolite 是使用 Sparkleshare 的完美理由。
+当你将密钥添加到 `keydir` 目录时,必须将它们提交回服务器。这是一件很容易忘记的事情,这里有一个使用自动化的 Git 应用程序(例如 [Sparkleshare][7])的真正的理由,因此任何更改都将立即提交给你的 Gitolite 管理员。第一次忘记提交和推送,在浪费了三个小时的你和你的用户的故障排除时间之后,你会发现 Gitolite 是使用 Sparkleshare 的完美理由。
```
$ git add keydir
@@ -106,10 +106,10 @@ $ git push origin HEAD
#### 设置权限
-与用户一样,目录权限和组也是从你可能习惯的的常规 Unix 工具中抽象出来的(或可从在线信息查找)。在`gitolite-admin.git/conf` 目录中的 `gitolite.conf` 文件中授予对项目的权限。权限分为四个级别:
+与用户一样,目录权限和组也是从你可能习惯的的常规 Unix 工具中抽象出来的(或可从在线信息查找)。在 `gitolite-admin.git/conf` 目录中的 `gitolite.conf` 文件中授予对项目的权限。权限分为四个级别:
* `R` 允许只读。在存储库上具有 `R` 权限的用户可以克隆它,仅此而已。
-* `RW` 允许用户执行分支的快进推送、创建新分支和创建新标签。对于大多数用户来说,这个或多或少感觉就像一个“普通”的 Git 存储库。
+* `RW` 允许用户执行分支的快进推送、创建新分支和创建新标签。对于大多数用户来说,这个基本上就像是一个“普通”的 Git 存储库。
* `RW+` 允许可能具有破坏性的 Git 动作。用户可以执行常规的快进推送、回滚推送、变基以及删除分支和标签。你可能想要或不希望将其授予项目中的所有贡献者。
* `-` 明确拒绝访问存储库。这与未在存储库的配置中列出的用户相同。
@@ -126,7 +126,7 @@ repo widgets
RW+ = alice
```
-现在,Alice(也仅 Alice 一个人)就可以克隆该存储库:
+现在,Alice(也仅有 Alice 一个人)可以克隆该存储库:
```
[alice]$ git clone git@example.com:widgets.git
@@ -188,7 +188,7 @@ repo foo/CREATOR/[a-z]..*
R = READERS
```
-第一行定义了一组用户:该组称为 `@managers`,其中包含用户 `alice` 和 `bob`。下一行设置了通配符允许创建尚不存在的存储库,放在名为 `foo` 的目录下的创建存储库的用户名的子目录中。例如:
+第一行定义了一组用户:该组称为 `@managers`,其中包含用户 `alice` 和 `bob`。下一行设置了通配符允许创建尚不存在的存储库,放在名为 `foo` 的目录下的创建该存储库的用户名的子目录中。例如:
```
[alice]$ git clone git@example.com:foo/alice/cool-app.git
@@ -197,11 +197,11 @@ Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.gi
warning: You appear to have cloned an empty repository.
```
-野生仓库的创建者可以使用一些机制来定义谁可以读取和写入其存储库,但是他们是被限定范围的。在大多数情况下,Gitolite 假定由一组特定的用户来管理项目权限。一种解决方案是使用 Git 挂钩授予所有用户对 `gitolite-admin` 的访问权限,以要求管理者批准将更改合并到 master 分支中。
+野生仓库的创建者可以使用一些机制来定义谁可以读取和写入其存储库,但是他们是有范围限定的。在大多数情况下,Gitolite 假定由一组特定的用户来管理项目权限。一种解决方案是使用 Git 挂钩来授予所有用户对 `gitolite-admin` 的访问权限,以要求管理者批准将更改合并到 master 分支中。
### 了解更多
-Gitolite 具有比此介绍性文章涵盖的更多功能,因此请尝试一下。其[文档][8]非常出色,一旦你通读了它,就可以自定义 Gitolite 服务器,以向用户提供你喜欢的任何级别的控制。Gitolite 是一种维护成本低、简单的系统,你可以安装、设置它,然后基本上就可以将其忘却。
+Gitolite 具有比此介绍性文章所涵盖的更多功能,因此请尝试一下。其[文档][8]非常出色,一旦你通读了它,就可以自定义 Gitolite 服务器,以向用户提供你喜欢的任何级别的控制。Gitolite 是一种维护成本低、简单的系统,你可以安装、设置它,然后基本上就可以将其忘却。
--------------------------------------------------------------------------------
@@ -210,7 +210,7 @@ via: https://opensource.com/article/19/4/server-administration-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/202001/20190619 Getting started with OpenSSL- Cryptography basics.md b/published/202001/20190619 Getting started with OpenSSL- Cryptography basics.md
new file mode 100644
index 0000000000..d7cd09934e
--- /dev/null
+++ b/published/202001/20190619 Getting started with OpenSSL- Cryptography basics.md
@@ -0,0 +1,333 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11810-1.html)
+[#]: subject: (Getting started with OpenSSL: Cryptography basics)
+[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)
+
+OpenSSL 入门:密码学基础知识
+======
+
+> 想要入门密码学的基础知识,尤其是有关 OpenSSL 的入门知识吗?继续阅读。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/23/142249fpnhyqz9y2cz1exe.jpg)
+
+本文是使用 [OpenSSL][2] 的密码学基础知识的两篇文章中的第一篇,OpenSSL 是在 Linux 和其他系统上流行的生产级库和工具包。(要安装 OpenSSL 的最新版本,请参阅[这里][3]。)OpenSSL 实用程序可在命令行使用,程序也可以调用 OpenSSL 库中的函数。本文的示例程序使用的是 C 语言,即 OpenSSL 库的源语言。
+
+本系列的两篇文章涵盖了加密哈希、数字签名、加密和解密以及数字证书。你可以从[我的网站][4]的 ZIP 文件中找到这些代码和命令行示例。
+
+让我们首先回顾一下 OpenSSL 名称中的 SSL。
+
+### OpenSSL 简史
+
+[安全套接字层][5](SSL)是 Netscape 在 1995 年发布的一种加密协议。该协议层可以位于 HTTP 之上,从而为 HTTPS 提供了 S:安全。SSL 协议提供了各种安全服务,其中包括两项在 HTTPS 中至关重要的服务:
+
+* 对等身份验证(也称为相互质询):连接的每一边都对另一边的身份进行身份验证。如果 Alice 和 Bob 要通过 SSL 交换消息,则每个人首先验证彼此的身份。
+* 机密性:发送者在通过通道发送消息之前先对其进行加密。然后,接收者解密每个接收到的消息。此过程可保护网络对话。即使窃听者 Eve 截获了从 Alice 到 Bob 的加密消息(即*中间人*攻击),Eve 会发现他无法在计算上解密此消息。
+
+反过来,这两个关键 SSL 服务与其他不太受关注的服务相关联。例如,SSL 支持消息完整性,从而确保接收到的消息与发送的消息相同。此功能是通过哈希函数实现的,哈希函数也随 OpenSSL 工具箱一起提供。
+
+SSL 有多个版本(例如 SSLv2 和 SSLv3),并且在 1999 年出现了一个基于 SSLv3 的类似协议传输层安全性(TLS)。TLSv1 和 SSLv3 相似,但不足以相互配合工作。不过,通常将 SSL/TLS 称为同一协议。例如,即使正在使用的是 TLS(而非 SSL),OpenSSL 函数也经常在名称中包含 SSL。此外,调用 OpenSSL 命令行实用程序以 `openssl` 开始。
+
+除了 man 页面之外,OpenSSL 的文档是零零散散的,鉴于 OpenSSL 工具包很大,这些页面很难以查找使用。命令行和代码示例可以将主要主题集中起来。让我们从一个熟悉的示例开始(使用 HTTPS 访问网站),然后使用该示例来选出我们感兴趣的加密部分进行讲述。
+
+### 一个 HTTPS 客户端
+
+此处显示的 `client` 程序通过 HTTPS 连接到 Google:
+
+```
+/* compilation: gcc -o client client.c -lssl -lcrypto */
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>  /* memset */
+#include <openssl/bio.h> /* BasicInput/Output streams */
+#include <openssl/err.h> /* errors */
+#include <openssl/ssl.h> /* core library */
+#define BuffSize 1024
+
+void report_and_exit(const char* msg) {
+ perror(msg);
+ ERR_print_errors_fp(stderr);
+ exit(-1);
+}
+
+void init_ssl() {
+ SSL_load_error_strings();
+ SSL_library_init();
+}
+
+void cleanup(SSL_CTX* ctx, BIO* bio) {
+ SSL_CTX_free(ctx);
+ BIO_free_all(bio);
+}
+
+void secure_connect(const char* hostname) {
+ char name[BuffSize];
+ char request[BuffSize];
+ char response[BuffSize];
+
+ const SSL_METHOD* method = TLSv1_2_client_method();
+ if (NULL == method) report_and_exit("TLSv1_2_client_method...");
+
+ SSL_CTX* ctx = SSL_CTX_new(method);
+ if (NULL == ctx) report_and_exit("SSL_CTX_new...");
+
+ BIO* bio = BIO_new_ssl_connect(ctx);
+ if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");
+
+ SSL* ssl = NULL;
+
+ /* 链路 bio 通道,SSL 会话和服务器端点 */
+
+ sprintf(name, "%s:%s", hostname, "https");
+ BIO_get_ssl(bio, &ssl); /* 会话 */
+ SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
+ BIO_set_conn_hostname(bio, name); /* 准备连接 */
+
+ /* 尝试连接 */
+ if (BIO_do_connect(bio) <= 0) {
+ cleanup(ctx, bio);
+ report_and_exit("BIO_do_connect...");
+ }
+
+ /* 验证信任库,检查证书 */
+ if (!SSL_CTX_load_verify_locations(ctx,
+ "/etc/ssl/certs/ca-certificates.crt", /* 信任库 */
+ "/etc/ssl/certs/")) /* 其它信任库 */
+ report_and_exit("SSL_CTX_load_verify_locations...");
+
+ long verify_flag = SSL_get_verify_result(ssl);
+ if (verify_flag != X509_V_OK)
+ fprintf(stderr,
+ "##### Certificate verification error (%i) but continuing...\n",
+ (int) verify_flag);
+
+ /* 获取主页作为示例数据 */
+ sprintf(request,
+ "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
+ hostname);
+ BIO_puts(bio, request);
+
+ /* 从服务器读取 HTTP 响应并打印到输出 */
+ while (1) {
+ memset(response, '\0', sizeof(response));
+ int n = BIO_read(bio, response, BuffSize);
+ if (n <= 0) break; /* 0 代表流结束,< 0 代表有错误 */
+ puts(response);
+ }
+
+ cleanup(ctx, bio);
+}
+
+int main() {
+ init_ssl();
+
+ const char* hostname = "www.google.com:443";
+ fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
+ secure_connect(hostname);
+
+  return 0;
+}
+```
+
+可以从命令行编译和执行该程序(请注意 `-lssl` 和 `-lcrypto` 中的小写字母 `L`):
+
+```
+gcc -o client client.c -lssl -lcrypto
+```
+
+该程序尝试打开与网站 [www.google.com][13] 的安全连接。在与 Google Web 服务器的 TLS 握手过程中,`client` 程序会收到一个或多个数字证书,该程序会尝试对其进行验证(但在我的系统上失败了)。尽管如此,`client` 程序仍继续通过安全通道获取 Google 主页。该程序依赖于前面提到的多种安全工件,尽管上述代码中只着重突出了数字证书,但其它工件仍在幕后发挥作用,稍后将对它们进行详细说明。
+
+通常,打开 HTTP(非安全)通道的 C 或 C++ 的客户端程序将使用诸如*文件描述符*或*网络套接字*之类的结构,它们是两个进程(例如,这个 `client` 程序和 Google Web 服务器)之间连接的端点。顺便说一下,文件描述符是一个非负整数值,用于在程序中标识该程序打开的任何类似文件的结构。这样的程序还将使用一种结构来指定有关 Web 服务器地址的详细信息。
+
+这些相对较低级别的结构不会出现在客户端程序中,因为 OpenSSL 库会将套接字基础设施和地址规范等封装在更高层面的安全结构中。其结果是一个简单的 API。下面首先看一下 `client` 程序示例中的安全性详细信息。
+
+* 该程序首先加载相关的 OpenSSL 库,我的函数 `init_ssl` 中对 OpenSSL 进行了两次调用:
+
+ ```
+SSL_load_error_strings();
+SSL_library_init();
+```
+* 下一个初始化步骤尝试获取安全*上下文*,这是建立和维护通往 Web 服务器的安全通道所需的信息框架。如对 OpenSSL 库函数的调用所示,在示例中使用了 TLS 1.2:
+
+ ```
+const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */
+```
+
+ 如果调用成功,则将 `method` 指针传递给库函数,该函数会创建一个 `SSL_CTX` 类型的上下文:
+
+ ```
+SSL_CTX* ctx = SSL_CTX_new(method);
+```
+
+ `client` 程序会检查每个关键的库调用的错误,如果其中一个调用失败,则程序终止。
+* 现在还有另外两个 OpenSSL 工件也在发挥作用:SSL 类型的安全会话,从头到尾管理安全连接;以及类型为 BIO(基本输入/输出)的安全流,用于与 Web 服务器进行通信。BIO 流是通过以下调用生成的:
+
+ ```
+BIO* bio = BIO_new_ssl_connect(ctx);
+```
+
+ 请注意,至关重要的上下文正是这里的参数。`BIO` 类型是 C 语言中 `FILE` 类型的 OpenSSL 封装器。此封装器可保护 `client` 程序与 Google 的 Web 服务器之间输入和输出流的安全。
+* 有了 `SSL_CTX` 和 `BIO`,然后程序在 SSL 会话中将它们组合在一起。三个库调用可以完成工作:
+
+ ```
+BIO_get_ssl(bio, &ssl); /* 会话 */
+SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* 鲁棒性 */
+BIO_set_conn_hostname(bio, name); /* 准备连接 */
+```
+
+ 安全连接本身是通过以下调用建立的:
+
+ ```
+BIO_do_connect(bio);
+```
+
+ 如果最后一个调用不成功,则 `client` 程序终止;否则,该连接已准备就绪,可以支持 `client` 程序与 Google Web 服务器之间的机密对话。
+
+在与 Web 服务器握手期间,`client` 程序会接收一个或多个数字证书,以认证服务器的身份。但是,`client` 程序不会发送自己的证书,这意味着这个身份验证是单向的。(Web 服务器通常配置为**不**需要客户端证书。)尽管对 Web 服务器证书的验证失败,但 `client` 程序仍会通过连接到 Web 服务器的安全通道继续获取 Google 主页。
+
+为什么验证 Google 证书的尝试会失败?典型的 OpenSSL 安装目录为 `/etc/ssl/certs`,其中包含 `ca-certificates.crt` 文件。该目录和文件包含着 OpenSSL 自带的数字证书,以此构成信任库。可以根据需要更新信任库,尤其是可以包括新信任的证书,并删除不再受信任的证书。
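+
+顺便一提,如果你把某个服务器证书保存成了本地的 PEM 文件(下面的 `server.pem` 只是一个假设的文件名),可以用 `openssl verify` 对照这个信任库手动验证它:
+
+```
+openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt server.pem
+```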
+
+`client` 程序从 Google Web 服务器收到了三个证书,但是我的计算机上的 OpenSSL 信任库并不包含完全匹配的证书。就目前的代码而言,`client` 程序并不会进一步处理这个问题,例如去验证 Google 证书上的数字签名(即为该证书作保的签名)。如果该签名是可信的,那么包含该签名的证书也应该是可信的。尽管如此,`client` 程序仍继续获取页面,然后打印出 Google 的主页。下一节将更详细地介绍这些。
+
+### 客户端程序中隐藏的安全性
+
+让我们从客户端示例中可见的安全工件(数字证书)开始,然后考虑其他安全工件如何与之相关。数字证书的主要格式标准是 X509,生产级的证书由诸如 [Verisign][14] 的证书颁发机构(CA)颁发。
+
+数字证书中包含各种信息(例如,激活日期和失效日期以及所有者的域名),也包括发行者的身份和*数字签名*(这是加密过的*加密哈希*值)。证书还具有未加密的哈希值,用作其标识*指纹*。
+
+哈希值来自将任意数量的二进制位映射到固定长度的摘要。这些位代表什么(会计报告、小说或数字电影)无关紧要。例如,消息摘要版本 5(MD5)哈希算法将任意长度的输入位映射到 128 位哈希值,而 SHA1(安全哈希算法版本 1)算法将输入位映射到 160 位哈希值。不同的输入位会导致不同的(实际上在统计学上是唯一的)哈希值。下一篇文章将会进行更详细的介绍,并着重介绍什么使哈希函数具有加密功能。
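+
+在命令行上很容易直观地感受这一点。下面的示例(其中 `sample.txt` 是一个假设的示例文件,可以换成任何文件)用 `openssl dgst` 子命令分别计算同一文件的 MD5 和 SHA1 哈希值:
+
+```
+openssl dgst -md5 sample.txt    ## 输出 128 位(32 个十六进制字符)的摘要
+openssl dgst -sha1 sample.txt   ## 输出 160 位(40 个十六进制字符)的摘要
+```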
+
+数字证书的类型有所不同(例如根证书、中间证书和最终实体证书),并形成了反映这些证书类型的层次结构。顾名思义,*根*证书位于层次结构的顶部,其下的证书继承了根证书所具有的信任。OpenSSL 库和大多数现代编程语言都具有 X509 数据类型以及处理此类证书的函数。来自 Google 的证书具有 X509 格式,`client` 程序会检查该证书是否为 `X509_V_OK`。
+
+X509 证书基于公共密钥基础结构(PKI),其中包括的算法(RSA 是占主导地位的算法)用于生成*密钥对*:公共密钥及其配对的私有密钥。公钥是一种身份:[Amazon][15] 的公钥对其进行标识,而我的公钥对我进行标识。私钥应由其所有者负责保密。
+
+成对出现的密钥具有标准用途。可以使用公钥对消息进行加密,然后可以使用同一个密钥对中的私钥对消息进行解密。私钥也可以用于对文档或其他电子工件(例如程序或电子邮件)进行签名,然后可以使用该对密钥中的公钥来验证签名。以下两个示例补充了一些细节。
+
+在第一个示例中,Alice 将她的公钥分发给全世界,包括 Bob。然后,Bob 用 Alice 的公钥加密邮件,并将加密后的邮件发送给 Alice。用 Alice 的公钥加密的邮件可以用她的私钥解密(而且应该只有她的私钥能解密),如下所示:
+
+```
+ +------------------+ encrypted msg +-------------------+
+Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg
+ +------------------+ +-------------------+
+```
+
+理论上可以在没有 Alice 的私钥的情况下解密消息,但在实际情况中,如果使用像 RSA 这样的加密密钥对系统,则在计算上做不到。
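+
+如果想在命令行上体验这个过程,可以像下面这样用 `openssl` 生成一个密钥对,并加密、解密一条短消息。这只是一个示意性的草图,其中的文件名(`private.pem`、`public.pem`、`msg.txt` 等)都是假设的:
+
+```
+## 生成 2048 位的 RSA 私钥,并从中导出公钥
+openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
+openssl pkey -in private.pem -pubout -out public.pem
+
+## Bob 用 Alice 的公钥加密一条短消息(消息长度需小于密钥长度)
+openssl pkeyutl -encrypt -pubin -inkey public.pem -in msg.txt -out msg.enc
+
+## Alice 用她的私钥解密
+openssl pkeyutl -decrypt -inkey private.pem -in msg.enc -out msg.dec
+```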
+
+现在来看第二个示例:对文档进行签名以证明其真实性。签名算法使用密钥对中的私钥来处理待签名文档的加密哈希值:
+
+```
+ +-------------------+
+Hash of document--->|Alice's private key|--->Alice's digital signature of the document
+ +-------------------+
+```
+
+假设 Alice 以数字方式签署了发送给 Bob 的合同。然后,Bob 可以使用 Alice 密钥对中的公钥来验证签名:
+
+```
+ +------------------+
+Alice's digital signature of the document--->|Alice's public key|--->verified or not
+ +------------------+
+```
+
+假若没有 Alice 的私钥,就无法轻松伪造 Alice 的签名:因此,Alice 有必要保密她的私钥。
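+
+签名和验证的过程同样可以在命令行上演示。下面的草图沿用上面假设的 `private.pem` 和 `public.pem`,其中 `contract.txt` 也是一个假设的文件名:
+
+```
+## Alice 用私钥对文档的 SHA256 哈希值进行签名
+openssl dgst -sha256 -sign private.pem -out contract.sig contract.txt
+
+## Bob 用 Alice 的公钥验证签名,成功时输出 Verified OK
+openssl dgst -sha256 -verify public.pem -signature contract.sig contract.txt
+```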
+
+在 `client` 程序中,除了数字证书以外,这些安全机制都没有明确展示出来。下一篇文章将使用 OpenSSL 实用程序和库函数的示例来补充更多细节。
+
+### 命令行的 OpenSSL
+
+同时,让我们看一下 OpenSSL 命令行实用程序:特别是在 TLS 握手期间检查来自 Web 服务器的证书的实用程序。调用 OpenSSL 实用程序可以使用 `openssl` 命令,然后添加参数和标志的组合以指定所需的操作。
+
+看看以下命令:
+
+```
+openssl list-cipher-algorithms
+```
+
+该输出是组成加密算法套件的相关算法的列表。下面是列表的开头部分,并附上了解释首字母缩写词的注释:
+
+```
+AES-128-CBC ## Advanced Encryption Standard, Cipher Block Chaining
+AES-128-CBC-HMAC-SHA1 ## Hash-based Message Authentication Code with SHA1 hashes
+AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
+...
+```
+
+下一条命令使用 `s_client` 参数打开一个到 [www.google.com][13] 的安全连接,并在屏幕上显示有关此连接的所有信息:
+
+```
+openssl s_client -connect www.google.com:443 -showcerts
+```
+
+端口号 443 是 Web 服务器用于接收 HTTPS(而不是 HTTP)连接的标准端口号。(对于 HTTP,标准端口为 80。)Web 地址 www.google.com:443 也出现在 `client` 程序的代码中。如果尝试连接成功,则将显示来自 Google 的三个数字证书,以及有关安全会话、正在使用的加密算法套件等相关项目的信息。例如,这是开头的部分输出,它声明*证书链*即将到来,证书的编码为 base64:
+
+```
+Certificate chain
+ 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
+ i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
+-----BEGIN CERTIFICATE-----
+MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
+MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
+...
+```
+
+诸如 Google 之类的主要网站通常会发送多个证书进行身份验证。
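+
+如果只想查看服务器证书中的关键字段,也可以把 `s_client` 的输出通过管道交给 `openssl x509` 来解析(`openssl x509` 只会解析输出中的第一张证书)。下面只是一种常见的组合用法,其中的 `echo` 用来在握手结束后立即关闭连接:
+
+```
+echo | openssl s_client -connect www.google.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates
+```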
+
+输出以有关 TLS 会话的摘要信息结尾,包括加密算法套件的详细信息:
+
+```
+SSL-Session:
+ Protocol : TLSv1.2
+ Cipher : ECDHE-RSA-AES128-GCM-SHA256
+ Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
+...
+```
+
+`client` 程序中使用了协议 TLS 1.2,`Session-ID` 唯一地标识了 `openssl` 实用程序和 Google Web 服务器之间的连接。`Cipher` 条目可以按以下方式进行解析:
+
+* `ECDHE`(椭圆曲线 Diffie-Hellman(临时)Elliptic Curve Diffie Hellman Ephemeral)是一种用于管理 TLS 握手的高效而有效的算法。尤其是,ECDHE 通过确保连接双方(例如,`client` 程序和 Google Web 服务器)使用相同的加密/解密密钥(称为*会话密钥*)来解决“密钥分发问题”。后续文章会深入探讨该细节。
+* `RSA`(Rivest Shamir Adleman)是主要的公共密钥密码系统,并以 1970 年代末首次描述了该系统的三位学者的名字命名。这个正在使用的密钥对是使用 RSA 算法生成的。
+* `AES128`(高级加密标准Advanced Encryption Standard)是一种块式加密算法block cipher,用于加密和解密位块blocks of bits。(另一种算法是流式加密算法stream cipher,它一次加密和解密一个位。)这个加密算法是对称加密算法,因为加密和解密使用的是同一个密钥,而这正是引发密钥分发问题的根源。AES 支持 128(此处使用)、192 和 256 位的密钥大小:密钥越大,安全性越好。
+
+ 通常,像 AES 这样的对称加密系统的密钥大小要小于像 RSA 这样的非对称(基于密钥对)系统的密钥大小。例如,1024 位 RSA 密钥相对较小,而 256 位密钥则当前是 AES 最大的密钥。
+* `GCM`(伽罗瓦计数器模式Galois Counter Mode)处理在安全对话期间重复应用的加密算法(在这种情况下为 AES128)。AES128 块的大小仅为 128 位,安全对话很可能包含从一侧到另一侧的多个 AES128 块。GCM 非常有效,通常与 AES128 搭配使用。
+* `SHA256`(256 位安全哈希算法Secure Hash Algorithm 256 bits)是我们正在使用的加密哈希算法。生成的哈希值的大小为 256 位,尽管 SHA 系列也有生成更大哈希值的变体。
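+
+顺带一提,如果想查看你机器上的 OpenSSL 如何描述这样一个加密算法套件的组成(密钥交换、认证、加密和消息认证算法),可以使用 `openssl ciphers` 命令,例如:
+
+```
+openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256'
+```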
+
+加密算法套件正在不断发展中。例如,不久之前,Google 还在使用 RC4 流加密算法(Ron's Cipher 版本 4,出自 RSA 算法发明人之一 Ron Rivest 之手)。RC4 现在有已知的漏洞,这大概是 Google 转向 AES128 的部分原因。
+
+### 总结
+
+我们通过一个安全的 C 语言 Web 客户端和各种命令行示例对 OpenSSL 有了初步了解,也引出了一些需要进一步阐明的主题。[下一篇文章会详细介绍][17]:从加密哈希开始,并以对数字证书如何应对密钥分发挑战的更全面讨论作为结束。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
+[2]: https://www.openssl.org/
+[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
+[4]: http://condor.depaul.edu/mkalin
+[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
+[6]: https://en.wikipedia.org/wiki/Netscape
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
+[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
+[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
+[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
+[13]: http://www.google.com
+[14]: https://www.verisign.com
+[15]: https://www.amazon.com
+[16]: http://www.google.com:443
+[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2
diff --git a/published/20190724 How to make an old computer useful again.md b/published/202001/20190724 How to make an old computer useful again.md
similarity index 100%
rename from published/20190724 How to make an old computer useful again.md
rename to published/202001/20190724 How to make an old computer useful again.md
diff --git a/published/20190924 An advanced look at Python interfaces using zope.interface.md b/published/202001/20190924 An advanced look at Python interfaces using zope.interface.md
similarity index 100%
rename from published/20190924 An advanced look at Python interfaces using zope.interface.md
rename to published/202001/20190924 An advanced look at Python interfaces using zope.interface.md
diff --git a/published/202001/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md b/published/202001/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md
new file mode 100644
index 0000000000..f52dbd55f6
--- /dev/null
+++ b/published/202001/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11822-1.html)
+[#]: subject: (How the Linux screen tool can save your tasks – and your sanity – if SSH is interrupted)
+[#]: via: (https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如果 SSH 被中断,Linux screen 工具如何拯救你的任务以及理智
+======
+
+> 当你需要确保长时间运行的任务不会在 SSH 会话中断时被杀死时,Linux screen 命令可以成为救生员。以下是使用方法。
+
+![](https://images.idgesg.net/images/article/2019/09/working_w_screen-shs-100812448-large.jpg)
+
+如果因 SSH 会话断开而不得不重启一个耗时的进程,那么你可能会很高兴了解一个有趣的工具,可以用来避免此问题:`screen` 工具。
+
+`screen` 是一个终端多路复用器,它使你可以在单个 SSH 会话中运行多个终端会话,并随时从它们之中脱离或重新接驳。做到这一点的过程非常简单,仅涉及少数命令。
+
+要启动 `screen` 会话,只需在 SSH 会话中键入 `screen`。然后,你可以开始启动需要长时间运行的进程,并在适当的时候键入 `Ctrl + A Ctrl + D` 从会话中脱离,之后再键入 `screen -r` 重新接驳。
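+
+下面是一个典型的工作流程示意,其中的 `make all` 只是一个假设的耗时命令,你可以换成自己的任务:
+
+```
+$ screen            # 启动一个新的 screen 会话
+$ make all          # 在会话中启动耗时的任务
+                    # 键入 Ctrl + A Ctrl + D 从会话中脱离
+$ screen -r         # 稍后重新接驳,任务仍在继续运行
+```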
+
+如果你要运行多个 `screen` 会话,更好的选择是为每个会话指定一个有意义的名称,以帮助你记住正在处理的任务。使用这种方法,你可以在启动每个会话时使用如下命令命名:
+
+```
+$ screen -S slow-build
+```
+
+一旦运行了多个会话,要重新接驳到一个会话,需要从列表中选择它。在以下命令中,我们列出了当前正在运行的会话,然后再重新接驳其中一个。请注意,一开始这两个会话都被标记为已脱离。
+
+```
+$ screen -ls
+There are screens on:
+ 6617.check-backups (09/26/2019 04:35:30 PM) (Detached)
+ 1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
+2 Sockets in /run/screen/S-shs
+```
+
+然后,要重新接驳到某个会话,你需要提供分配给该会话的名称。例如:
+
+```
+$ screen -r slow-build
+```
+
+在脱离的会话中,保持运行状态的进程会继续进行处理,而你可以执行其他工作。如果你使用这些 `screen` 会话之一来查询 `screen` 会话情况,可以看到当前重新接驳的会话再次显示为 `Attached`。
+
+```
+$ screen -ls
+There are screens on:
+ 6617.check-backups (09/26/2019 04:35:30 PM) (Attached)
+ 1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
+2 Sockets in /run/screen/S-shs.
+```
+
+你可以使用 `-version` 选项查询正在运行的 `screen` 版本。
+
+```
+$ screen -version
+Screen version 4.06.02 (GNU) 23-Oct-17
+```
+
+### 安装 screen
+
+如果 `which screen` 未在屏幕上提供信息,则可能你的系统上未安装该工具。
+
+```
+$ which screen
+/usr/bin/screen
+```
+
+如果你需要安装它,则以下命令之一可能适合你的系统:
+
+```
+sudo apt install screen
+sudo yum install screen
+```
+
+当你需要运行耗时的进程、而 SSH 会话又可能由于某种原因断开连接时,`screen` 工具就会派上用场,它可以避免这个耗时的进程被中断。而且,如你所见,它非常易于使用和管理。
+
+以下是上面使用的命令的摘要:
+
+```
+screen -S 开始会话
+Ctrl+A Ctrl+D 从会话中脱离
+screen -ls 列出会话
+screen -r 重新接驳会话
+```
+
+尽管还有更多关于 `screen` 的知识,包括可以在 `screen` 会话之间进行操作的其他方式,但这已经足够帮助你开始使用这个便捷的工具了。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/202001/20191015 How GNOME uses Git.md b/published/202001/20191015 How GNOME uses Git.md
new file mode 100644
index 0000000000..d39600f1ef
--- /dev/null
+++ b/published/202001/20191015 How GNOME uses Git.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11806-1.html)
+[#]: subject: (How GNOME uses Git)
+[#]: via: (https://opensource.com/article/19/10/how-gnome-uses-git)
+[#]: author: (Molly de Blanc https://opensource.com/users/mollydb)
+
+一个非技术人员对 GNOME 项目使用 GitLab 的感受
+======
+
+> 将 GNOME 项目集中在 GitLab 上的决定为整个社区(不只是开发人员)带来了好处。
+
+![red panda][1]
+
+“您的 GitLab 是什么?”这是我在 [GNOME 基金会][2]工作的第一天被问到的第一个问题之一,该基金会是支持 GNOME 项目(包括[桌面环境][3]、[GTK][4] 和 [GStreamer][5])的非盈利组织。此人问的是我在 [GNOME 的 GitLab 实例][6]上的用户名。我在 GNOME 期间,经常有人要求我提供我的 GitLab。
+
+我们使用 GitLab 进行几乎所有操作。通常情况下,我会收到一些提案issue和参考错误报告,有时还需要修改文件。我不是以开发人员或系统管理员的身份进行此操作的。我参与了“参与度、包容性和多样性(I&D)”团队。我为 GNOME 朋友们撰写新闻通讯,并采访该项目的贡献者。我为 GNOME 活动提供赞助。我不写代码,但我每天都使用 GitLab。
+
+在过去的二十年中,GNOME 项目的管理采用了各种方式。该项目的不同部分使用不同的系统来跟踪代码更改、进行协作,以及作为项目和社交空间共享信息。但是,该项目决定它需要更加一体化,这个转变从构思到完成大约花费了一年的时间。
+
+GNOME 希望切换到单个工具供整个社区使用的原因很多。外部项目与 GNOME 息息相关,并为它们提供更简单的与资源交互的方式对于项目至关重要,无论是支持社区还是发展生态系统。我们还希望更好地跟踪 GNOME 的指标,即贡献者的数量、贡献的类型和数量以及项目不同部分的开发进度。
+
+当需要选择一种协作工具时,我们考虑了我们需要的东西。最重要的要求之一是它必须由 GNOME 社区托管。由第三方托管并不是一种选择,因此像 GitHub 和 Atlassian 这样的服务就不在考虑之中。而且,当然了,它必须是自由软件。很快,唯一真正的竞争者出现了,它就是 GitLab。我们希望确保进行贡献很容易。GitLab 具有诸如单点登录的功能,该功能允许人们使用 GitHub、Google、GitLab.com 和 GNOME 帐户登录。
+
+我们认为 GitLab 是一条出路,我们开始从许多工具迁移到单个工具。GNOME 董事会成员 [Carlos Soriano][7] 领导这项改变。在 GitLab 和 GNOME 社区的大力支持下,我们于 2018 年 5 月完成了该过程。
+
+人们非常希望迁移到 GitLab 有助于社区的发展,并使贡献更加容易。由于 GNOME 以前使用了许多不同的工具,包括 Bugzilla 和 CGit,因此很难定量地评估这次切换对贡献量的影响。但是,我们可以更清楚地跟踪一些统计数据,例如在 2018 年 6 月至 2018 年 11 月之间关闭了近 10,000 个提案,合并了 7,085 个合并请求。人们感到社区在发展壮大,越来越受欢迎,而且贡献实际上也更加容易。
+
+人们因不同的原因而开始使用自由软件,重要的是,可以通过为需要软件的人提供更好的资源和更多的支持来公平竞争。Git 作为一种工具已被广泛使用,并且越来越多的人使用这些技能来参与到自由软件当中。自托管的 GitLab 提供了将 Git 的熟悉度与 GitLab 提供的功能丰富、用户友好的环境相结合的绝佳机会。
+
+切换到 GitLab 已经一年多了,变化确实很明显。持续集成(CI)为开发带来了巨大的好处,并且已经完全集成到 GNOME 的几乎每个部分当中。不进行代码开发的团队也转而使用 GitLab 生态系统进行工作。无论是使用问题跟踪来管理分配的任务,还是使用版本控制来共享和管理资产,就连“参与度、包容性和多样性(I&D)”这样的团队都已经使用了 GitLab。
+
+一个社区,即使是一个开发自由软件的社区,也很难适应新技术或新工具。在类似 GNOME 的情况下,这尤其困难,该项目[最近已经 22 岁了][8]。像 GNOME 这样经过了 20 多年建设的项目,有太多的人和组织在使用它的太多组成部分,但迁移工作之所以能实现,要归功于 GNOME 社区的辛勤工作和 GitLab 的慷慨帮助。
+
+在为使用 Git 进行版本控制的项目工作时,我发现很方便。这是一个令人感觉舒适和熟悉的系统,是一个在工作场所和爱好项目之间保持一致的工具。作为 GNOME 社区的新成员,能够参与并使用 GitLab 真是太好了。作为社区建设者,看到这样的结果是令人鼓舞的:越来越多的相关项目加入并进入生态系统;新的贡献者和社区成员对该项目做出了首次贡献;以及我们衡量正在做的工作、了解其成效的能力得到了增强。
+
+如此多的做着完全不同的事情(例如他们正在从事的不同工作以及所使用的不同技能)的团队同意汇集在一个工具上(尤其是被认为是跨开源的标准工具),这一点很棒。作为 GNOME 的贡献者,我真的非常感谢我们使用了 GitLab。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/how-gnome-uses-git
+
+作者:[Molly de Blanc][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mollydb
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda)
+[2]: https://www.gnome.org/foundation/
+[3]: https://gnome.org/
+[4]: https://www.gtk.org/
+[5]: https://gstreamer.freedesktop.org/
+[6]: https://gitlab.gnome.org/
+[7]: https://twitter.com/csoriano1618?lang=en
+[8]: https://opensource.com/article/19/8/poll-favorite-gnome-version
diff --git a/published/20191016 Open source interior design with Sweet Home 3D.md b/published/202001/20191016 Open source interior design with Sweet Home 3D.md
similarity index 100%
rename from published/20191016 Open source interior design with Sweet Home 3D.md
rename to published/202001/20191016 Open source interior design with Sweet Home 3D.md
diff --git a/published/20191017 Intro to the Linux useradd command.md b/published/202001/20191017 Intro to the Linux useradd command.md
similarity index 100%
rename from published/20191017 Intro to the Linux useradd command.md
rename to published/202001/20191017 Intro to the Linux useradd command.md
diff --git a/published/202001/20191108 My Linux story- Learning Linux in the 90s.md b/published/202001/20191108 My Linux story- Learning Linux in the 90s.md
new file mode 100644
index 0000000000..f31ae62e4f
--- /dev/null
+++ b/published/202001/20191108 My Linux story- Learning Linux in the 90s.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: (alim0x)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11831-1.html)
+[#]: subject: (My Linux story: Learning Linux in the 90s)
+[#]: via: (https://opensource.com/article/19/11/learning-linux-90s)
+[#]: author: (Mike Harris https://opensource.com/users/mharris)
+
+我的 Linux 故事:在 90 年代学习 Linux
+======
+
+> 这是一个关于我如何在 WiFi 时代之前学习 Linux 的故事,那时的发行版还以 CD 的形式出现。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/29/213829t00wmwu2w0z502zg.jpg)
+
+大部分人可能不记得 1996 年时计算产业或日常生活世界的样子。但我很清楚地记得那一年。我那时候是堪萨斯中部一所高中的二年级学生,那是我的自由与开源软件(FOSS)旅程的开端。
+
+我从这里开始进步。我在 1996 年之前就开始对计算机感兴趣。我在我家的第一台 Apple ][e 上启蒙成长,然后多年之后是 IBM Personal System/2。(是的,在这过程中有一些代际的跨越。)IBM PS/2 有一个非常激动人心的特性:一个 1200 波特的 Hayes 调制解调器。
+
+我不记得是怎样了,但在那不久之前,我得到了一个本地 [BBS][2] 的电话号码。一旦我拨号进去,我可以得到本地的一些其他 BBS 的列表,我的网络探险就此开始了。
+
+在 1995 年,[足够幸运][3]的人拥有了家庭互联网连接,每月可以使用不到 30 分钟。那时的互联网不像我们现代的服务那样,通过卫星、光纤、有线电视同轴电缆或任何版本的铜线提供。大多数家庭通过一个调制解调器拨号,它连接到他们的电话线上。(这时离移动电话无处不在的时代还早得很,大多数人只有一部家庭电话。)尽管这还要取决你所在的位置,但我不认为那时有很多独立的互联网服务提供商(ISP),所以大多数人从仅有的几家大公司获得服务,包括 America Online,CompuServe 以及 Prodigy。
+
+你能获取到的服务速率非常低,即使在拨号上网革命性地达到 56K 的顶峰时,你通常也只能期望得到最高 3.5Kbps 左右的速率。如果你想要尝试 Linux,下载一个 200MB 到 800MB 的 ISO 镜像,或者(更切合实际的)一套软盘镜像,需要投入时间、下定决心,还得减少电话的使用。
+
+我走了一条简单一点的路:在 1996 年,我从一家主要的 Linux 发行商订购了一套 “tri-Linux” CD 集。这些光盘提供了三个发行版,我的这套包含了 Debian 1.1(Debian 的第一个稳定版本)、Red Hat Linux 3.0.3 以及 Slackware 3.1(代号 Slackware '96)。据我回忆,这些光盘是从一家叫做 [Linux Systems Labs][4] 的在线商店购买的。这家在线商店如今已经不存在了,但在 90 年代和 00 年代早期,这样的发行商很常见。这些是多光盘 Linux 套件。这是 1998 年的一套光盘,你可以了解到他们都包含了什么:
+
+![A tri-linux CD set][5]
+
+![A tri-linux CD set][6]
+
+在 1996 年夏天一个命中注定般的日子,那时我住在堪萨斯一个新的并且相对较为乡村的城市,我做出了安装并使用 Linux 的第一次尝试。在 1996 年的整个夏天,我尝试了那套三张 Linux CD 套件里的全部三个发行版。他们都在我母亲的老 Pentium 75MHz 电脑上完美运行。
+
+我最终选择了 [Slackware][7] 3.1 作为我的首选发行版,相比其它发行版可能更多的是因为它的终端的外观,这是决定选择一个发行版前需要考虑的重要因素。
+
+我将系统设置完毕并运行了起来。我连接到一家 “不太知名的” ISP(一家这个区域的本地服务商),通过我家的第二条电话线拨号(为了满足我的所有互联网使用而订购)。那就像在天堂一样。我有一台完美运行的双系统(Microsoft Windows 95 和 Slackware 3.1)电脑。我依然拨号进入我所知道和喜爱的 BBS,游玩在线 BBS 游戏,比如 Trade Wars、Usurper 以及 Legend of the Red Dragon。
+
+我能够记得在 EFNet(IRC)上 #Linux 频道上渡过的日子,帮助其他用户,回答他们的 Linux 问题以及和版主们互动。
+
+在我第一次在家尝试使用 Linux 系统的 20 多年后,已经是我进入作为 Red Hat 顾问的第五年,我仍然在使用 Linux(现在是 Fedora)作为我的日常系统,并且依然在 IRC 上帮助想要使用 Linux 的人们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/11/learning-linux-90s
+
+作者:[Mike Harris][a]
+选题:[lujun9972][b]
+译者:[alim0x](https://github.com/alim0x)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mharris
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
+[2]: https://en.wikipedia.org/wiki/Bulletin_board_system
+[3]: https://en.wikipedia.org/wiki/Global_Internet_usage#Internet_users
+[4]: https://web.archive.org/web/19961221003003/http://lsl.com/
+[5]: https://opensource.com/sites/default/files/20191026_142009.jpg (A tri-linux CD set)
+[6]: https://opensource.com/sites/default/files/20191026_142020.jpg (A tri-linux CD set)
+[7]: http://slackware.com
diff --git a/published/20191113 How to cohost GitHub and GitLab with Ansible.md b/published/202001/20191113 How to cohost GitHub and GitLab with Ansible.md
similarity index 100%
rename from published/20191113 How to cohost GitHub and GitLab with Ansible.md
rename to published/202001/20191113 How to cohost GitHub and GitLab with Ansible.md
diff --git a/published/20191121 Simulate gravity in your Python game.md b/published/202001/20191121 Simulate gravity in your Python game.md
similarity index 100%
rename from published/20191121 Simulate gravity in your Python game.md
rename to published/202001/20191121 Simulate gravity in your Python game.md
diff --git a/published/20191129 How to write a Python web API with Django.md b/published/202001/20191129 How to write a Python web API with Django.md
similarity index 100%
rename from published/20191129 How to write a Python web API with Django.md
rename to published/202001/20191129 How to write a Python web API with Django.md
diff --git a/published/20191130 7 maker gifts for kids and teens.md b/published/202001/20191130 7 maker gifts for kids and teens.md
similarity index 100%
rename from published/20191130 7 maker gifts for kids and teens.md
rename to published/202001/20191130 7 maker gifts for kids and teens.md
diff --git a/translated/tech/20191205 Add jumping to your Python platformer game.md b/published/202001/20191205 Add jumping to your Python platformer game.md
similarity index 65%
rename from translated/tech/20191205 Add jumping to your Python platformer game.md
rename to published/202001/20191205 Add jumping to your Python platformer game.md
index 0487cc1491..d7195a008a 100644
--- a/translated/tech/20191205 Add jumping to your Python platformer game.md
+++ b/published/202001/20191205 Add jumping to your Python platformer game.md
@@ -1,34 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11790-1.html)
[#]: subject: (Add jumping to your Python platformer game)
[#]: via: (https://opensource.com/article/19/12/jumping-python-platformer-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
为你的 Python 平台类游戏添加跳跃功能
======
-在本期使用 Python Pygame 模块编写视频游戏中,学会如何使用跳跃来对抗重力。
-![游戏厅中的游戏][1]
+
+> 在本期使用 Python Pygame 模块编写视频游戏中,学会如何使用跳跃来对抗重力。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/16/214917c8mxn82fot82fx88.jpg)
在本系列的 [前一篇文章][2] 中,你已经模拟了重力。但现在,你需要赋予你的角色跳跃的能力来对抗重力。
-跳跃是对重力作用的暂时延缓。在这一小段时间里,你是向_上_跳,而不是被重力拉着向下落。但你一旦到达了跳跃的最高点,重力就会重新发挥作用,将你拉回地面。
+跳跃是对重力作用的暂时延缓。在这一小段时间里,你是向*上*跳,而不是被重力拉着向下落。但你一旦到达了跳跃的最高点,重力就会重新发挥作用,将你拉回地面。
-在代码中,跳跃被表示为变量。首先,你需要为玩家对象建立一个变量,使得 Python 能够跟踪对象是否正在跳跃中。一旦玩家对象开始跳跃,他就会再次受到重力的作用,并被拉回最近的物体。
+在代码中,这种变化被表示为变量。首先,你需要为玩家精灵建立一个变量,使得 Python 能够跟踪该精灵是否正在跳跃中。一旦玩家精灵开始跳跃,他就会再次受到重力的作用,并被拉回最近的物体。
### 设置跳跃状态变量
-你需要为你的 Player 类添加两个新变量:
+你需要为你的 `Player` 类添加两个新变量:
- * 一个是为了跟踪你的角色是否正在跳跃中,可通过你的玩家对象是否站在坚实的地面来确定
+ * 一个是为了跟踪你的角色是否正在跳跃中,可通过你的玩家精灵是否站在坚实的地面来确定
* 一个是为了将玩家带回地面
-
-
-将如下两个变量添加到你的 **Player** 类中。在下方的代码中,注释前的部分用于提示上下文,因此只需要添加最后两行:
-
+将如下两个变量添加到你的 `Player` 类中。在下方的代码中,注释前的部分用于提示上下文,因此只需要添加最后两行:
```
self.movex = 0
@@ -40,16 +39,15 @@
self.jump_delta = 6
```
-第一个变量 **collide_delta** 被设为 0 是因为在正常状态下,玩家对象没有处在跳跃中的状态。另一个变量 **jump_delta** 被设为 6,是为了防止对象在第一次进入游戏世界时就发生反弹(实际上就是跳跃)。当你完成了本篇文章的示例,尝试把该变量设为 0 看看会发生什么。
+第一个变量 `collide_delta` 被设为 0 是因为在正常状态下,玩家精灵没有处在跳跃中的状态。另一个变量 `jump_delta` 被设为 6,是为了防止精灵在第一次进入游戏世界时就发生反弹(实际上就是跳跃)。当你完成了本篇文章的示例,尝试把该变量设为 0 看看会发生什么。
### 跳跃中的碰撞
-如果你是跳到一个蹦床上,那你的跳跃一定非常优美。但是如果你是跳向一面墙会发生什么呢?(千万不要去尝试!)不管你的起跳多么令人印象深刻,当你撞到比你更大更硬的物体时,你都会立马停下。(译注:原理参考动量守恒定律)
+如果你是跳到一个蹦床上,那你的跳跃一定非常优美。但是如果你是跳向一面墙会发生什么呢?(千万不要去尝试!)不管你的起跳多么令人印象深刻,当你撞到比你更大更硬的物体时,你都会立马停下。(LCTT 译注:原理参考动量守恒定律)
-为了在你的视频游戏中模拟这一点,你需要在你的玩家对象与地面等东西发生碰撞时,将 **self.collide_delta** 变量设为 0。如果你的 **self.collide_delta** 不是 0 而是其它的什么值,那么你的玩家就会发生跳跃,并且当你的玩家与墙或者地面发生碰撞时无法跳跃。
-
-在你的 **Player** 类的 **update** 方法中,将地面碰撞相关代码块修改为如下所示:
+为了在你的视频游戏中模拟这一点,你需要在你的玩家精灵与地面等东西发生碰撞时,将 `self.collide_delta` 变量设为 0。如果你的 `self.collide_delta` 不是 0 而是其它的什么值,那么你的玩家就会发生跳跃,并且当你的玩家与墙或者地面发生碰撞时无法跳跃。
+在你的 `Player` 类的 `update` 方法中,将地面碰撞相关代码块修改为如下所示:
```
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
@@ -57,42 +55,40 @@
self.movey = 0
self.rect.y = worldy-ty-ty
self.collide_delta = 0 # 停止跳跃
- if self.rect.y > g.rect.y:
- self.health -=1
- print(self.health)
+ if self.rect.y > g.rect.y:
+ self.health -=1
+ print(self.health)
```
-这段代码块检查了地面对象和玩家对象之间发生的碰撞。当发生碰撞时,它会将玩家 Y 方向的坐标值设置为游戏窗口的高度减去一个瓷贴的高度再减去另一个瓷贴的高度。以此保证了玩家对象是站在地面**上**,而不是嵌在地面里。同时它也将 **self.collide_delta** 设为 0,使得程序能够知道玩家未处在跳跃中。除此之外,它将 **self.movey** 设为 0,使得程序能够知道玩家当前未受到重力的牵引作用(这是游戏物理引擎的奇怪之处,一旦玩家落地,也就没有必要继续将玩家拉向地面)。
+这段代码块检查了地面精灵和玩家精灵之间发生的碰撞。当发生碰撞时,它会将玩家 Y 方向的坐标值设置为游戏窗口的高度减去一个瓷砖的高度再减去另一个瓷砖的高度。以此保证了玩家精灵是站在地面*上*,而不是嵌在地面里。同时它也将 `self.collide_delta` 设为 0,使得程序能够知道玩家未处在跳跃中。除此之外,它将 `self.movey` 设为 0,使得程序能够知道玩家当前未受到重力的牵引作用(这是游戏物理引擎的奇怪之处,一旦玩家落地,也就没有必要继续将玩家拉向地面)。
-此处 **if** 语句用来检测玩家是否已经落到地面之_下_,如果是,那就扣除一点生命值作为惩罚。此处假定了你希望当你的玩家落到地图之外时失去生命值。这个设定不是必需的,它只是平台类游戏的一种惯例。更有可能的是,你希望这个事件能够触发另一些事件,或者说是一种能够让你的现实世界玩家沉迷于让对象掉到屏幕之外的东西。一种简单的恢复方式是在玩家对象掉落到地图之外时,将 **self.rect.y** 重新设置为 0,这样它就会在地图上方重新生成,并落到坚实的地面上。
+此处 `if` 语句用来检测玩家是否已经落到地面之*下*,如果是,那就扣除一点生命值作为惩罚。此处假定了你希望当你的玩家落到地图之外时失去生命值。这个设定不是必需的,它只是平台类游戏的一种惯例。更有可能的是,你希望这个事件能够触发另一些事件,或者说是一种能够让你的现实世界玩家沉迷于让精灵掉到屏幕之外的东西。一种简单的恢复方式是在玩家精灵掉落到地图之外时,将 `self.rect.y` 重新设置为 0,这样它就会在地图上方重新生成,并落到坚实的地面上。
### 撞向地面
-模拟的重力使你玩家的 Y 坐标不断增大(译注:此处原文中为 0,但在 Pygame 中越靠下方 Y 坐标应越大)。要实现跳跃,完成如下代码使你的玩家对象离开地面,飞向空中。
-
-在你的 **Player** 类的 **update** 方法中,添加如下代码来暂时延缓重力的作用:
+模拟的重力使你玩家的 Y 坐标不断增大(LCTT 译注:此处原文中为 0,但在 Pygame 中越靠下方 Y 坐标应越大)。要实现跳跃,完成如下代码使你的玩家精灵离开地面,飞向空中。
+在你的 `Player` 类的 `update` 方法中,添加如下代码来暂时延缓重力的作用:
```
- if self.collide_delta < 6 and self.jump_delta < 6:
+ if self.collide_delta < 6 and self.jump_delta < 6:
self.jump_delta = 6*2
self.movey -= 33 # 跳跃的高度
self.collide_delta += 6
self.jump_delta += 6
```
-根据此代码所示,跳跃使玩家对象向空中移动了 33 个像素。此处是_负_ 33 是因为在 Pygame 中,越小的数代表距离屏幕顶端越近。
+根据此代码所示,跳跃使玩家精灵向空中移动了 33 个像素。此处是*负* 33 是因为在 Pygame 中,越小的数代表距离屏幕顶端越近。
-不过此事件视条件而定,只有当 **self.collide_delta** 小于 6(缺省值定义在你 **Player** 类的 **init** 方法中)并且 **self.jump_delta** 也于 6 的时候才会发生。此条件能够保证直到玩家碰到一个平台,才能触发另一次跳跃。换言之,它能够阻止空中二段跳。
+不过此事件视条件而定,只有当 `self.collide_delta` 小于 6(缺省值定义在你 `Player` 类的 `init` 方法中)并且 `self.jump_delta` 也小于 6 的时候才会发生。此条件能够保证直到玩家碰到一个平台,才能触发另一次跳跃。换言之,它能够阻止空中二段跳。
在某些特殊条件下,你可能不想阻止空中二段跳,或者说你允许玩家进行空中二段跳。举个栗子,如果玩家获得了某个战利品,那么在他被敌人攻击到之前,都能够拥有空中二段跳的能力。
-当你完成本篇文章中的示例,尝试将 **self.collide_delta** 和 **self.jump_delta** 设置为 0,从而获得百分之百的几率触发空中二段跳。
+当你完成本篇文章中的示例,尝试将 `self.collide_delta` 和 `self.jump_delta` 设置为 0,从而获得百分之百的几率触发空中二段跳。
### 在平台上着陆
-目前你已经定义了再玩家对象摔落地面时的抵抗重力条件,但此时你的游戏代码仍保持平台与地面置于不同的列表中(就像本文中做的很多其他选择一样,这个设定并不是必需的,你可以尝试将地面作为另一种平台)。为了允许玩家对象站在平台之上,你必须像检测地面碰撞一样,检测玩家对象与平台对象之间的碰撞。将如下代码放于你的 **update** 方法中:
-
+目前你已经定义了在玩家精灵摔落地面时的抵抗重力条件,但此时你的游戏代码仍保持平台与地面置于不同的列表中(就像本文中做的很多其他选择一样,这个设定并不是必需的,你可以尝试将地面作为另一种平台)。为了允许玩家精灵站在平台之上,你必须像检测地面碰撞一样,检测玩家精灵与平台精灵之间的碰撞。将如下代码放于你的 `update` 方法中:
```
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
@@ -103,27 +99,26 @@
但此处还有一点需要考虑:平台悬在空中,也就意味着玩家可以通过从上面或者从下面接触平台来与之互动。
-确定平台如何与玩家互动取决于你,阻止玩家从下方到达平台也并不稀奇。将如下代码加到上方的代码块中,使得平台表现得像天花板或者说是藤架。只有在玩家对象跳得比平台上沿更高时才能跳到平台上,但会阻止玩家从平台下方跳上来:
-
+确定平台如何与玩家互动取决于你,阻止玩家从下方到达平台也并不稀奇。将如下代码加到上方的代码块中,使得平台表现得像天花板或者说是藤架。只有在玩家精灵跳得比平台上沿更高时才能跳到平台上,但会阻止玩家从平台下方跳上来:
```
- if self.rect.y > p.rect.y:
+ if self.rect.y > p.rect.y:
self.rect.y = p.rect.y+ty
else:
self.rect.y = p.rect.y-ty
```
-此处 **if** 语句代码块的第一个子句阻止玩家对象从平台正下方跳到平台上。如果它检测到玩家对象的坐标比平台更大(在 Pygame 中,坐标更大意味着在屏幕的更下方),那么将玩家对象新的 Y 坐标设置为当前平台的 Y 坐标加上一个瓷贴的高度。实际效果就是保证玩家对象距离平台一个瓷贴的高度,防止其从下方穿过平台。
+此处 `if` 语句代码块的第一个子句阻止玩家精灵从平台正下方跳到平台上。如果它检测到玩家精灵的坐标比平台更大(在 Pygame 中,坐标更大意味着在屏幕的更下方),那么将玩家精灵新的 Y 坐标设置为当前平台的 Y 坐标加上一个瓷砖的高度。实际效果就是保证玩家精灵距离平台一个瓷砖的高度,防止其从下方穿过平台。
-**else** 子句做了相反的事情。当程序运行到此处时,如果玩家对象的 Y 坐标_不_比平台的更大,意味着玩家对象是从空中落下(不论是由于玩家刚刚从此处生成,或者是玩家执行了跳跃)。在这种情况下,玩家对象的 Y 坐标被设为平台的 Y 坐标减去一个瓷贴的高度(切记,在 Pygame 中更小的 Y 坐标代表在屏幕上的更高处)。这样就能保证玩家在平台_上_,除非他从平台上跳下来或者走下来。
+`else` 子句做了相反的事情。当程序运行到此处时,如果玩家精灵的 Y 坐标*不*比平台的更大,意味着玩家精灵是从空中落下(不论是由于玩家刚刚从此处生成,或者是玩家执行了跳跃)。在这种情况下,玩家精灵的 Y 坐标被设为平台的 Y 坐标减去一个瓷砖的高度(切记,在 Pygame 中更小的 Y 坐标代表在屏幕上的更高处)。这样就能保证玩家在平台*上*,除非他从平台上跳下来或者走下来。
-你也可以尝试其他的方式来处理玩家与平台之间的互动。举个栗子,也许玩家对象被设定为处在平台的“前面”,他能够无障碍地跳跃穿过平台并站在上面。或者你可以设计一种平台会减缓而又不完全阻止玩家的跳跃过程。甚至你可以通过将不同平台分到不同列表中来混合搭配使用。
+你也可以尝试其他的方式来处理玩家与平台之间的互动。举个栗子,也许玩家精灵被设定为处在平台的“前面”,他能够无障碍地跳跃穿过平台并站在上面。或者你可以设计一种平台会减缓而又不完全阻止玩家的跳跃过程。甚至你可以通过将不同平台分到不同列表中来混合搭配使用。
### 触发一次跳跃
-目前为此,你的代码已经模拟了所有必需的跳跃条件,但仍缺少一个跳跃触发器。你的玩家对象的 **self.jump_delta** 初始值被设置为 6,只有当它比 6 小时才会触发更新跳跃的代码。
+目前为止,你的代码已经模拟了所有必需的跳跃条件,但仍缺少一个跳跃触发器。你的玩家精灵的 `self.jump_delta` 初始值被设置为 6,只有当它比 6 小的时候才会触发更新跳跃的代码。
-为跳跃变量设置一个新的设置方法,在你的 **Player** 类中创建一个 **jump** 方法,并将 **self.jump_delta** 设为小于 6 的值。通过使玩家对象向空中移动 33 个像素,来暂时减缓重力的作用。
+为跳跃变量设置一个新的设置方法,在你的 `Player` 类中创建一个 `jump` 方法,并将 `self.jump_delta` 设为小于 6 的值。通过使玩家精灵向空中移动 33 个像素,来暂时减缓重力的作用。
```
@@ -131,25 +126,24 @@
self.jump_delta = 0
```
-不管你相信与否,这就是 **jump** 方法的全部。剩余的部分在 **update** 方法中,你已经在前面实现了相关代码。
+不管你相信与否,这就是 `jump` 方法的全部。剩余的部分在 `update` 方法中,你已经在前面实现了相关代码。
要使你游戏中的跳跃功能生效,还有最后一件事情要做。如果你想不起来是什么,运行游戏并观察跳跃是如何生效的。
-问题就在于你的主循环中没有调用 **jump** 方法。先前你已经为该方法创建了一个按键占位符,现在,跳跃键所做的就是将 **jump** 打印到终端。
+问题就在于你的主循环中没有调用 `jump` 方法。先前你已经为该方法创建了一个按键占位符,现在,跳跃键所做的就是将 `jump` 打印到终端。
### 调用 jump 方法
-在你的主循环中,将_上_方向键的效果从打印一条调试语句,改为调用 **jump** 方法。
-
-注意此处,与 **update** 方法类似,**jump** 方法也需要检测碰撞,因此你需要告诉它使用哪个 **plat_list**。
+在你的主循环中,将*上*方向键的效果从打印一条调试语句,改为调用 `jump` 方法。
+注意此处,与 `update` 方法类似,`jump` 方法也需要检测碰撞,因此你需要告诉它使用哪个 `plat_list`。
```
if event.key == pygame.K_UP or event.key == ord('w'):
player.jump(plat_list)
```
-如果你倾向于使用空格键作为跳跃键,使用 **pygame.K_SPACE** 替代 **pygame.K_UP** 作为按键。另一种选择,你可以同时使用两种方式(使用单独的 **if** 语句),给玩家多一种选择。
+如果你倾向于使用空格键作为跳跃键,使用 `pygame.K_SPACE` 替代 `pygame.K_UP` 作为按键。另一种选择,你可以同时使用两种方式(使用单独的 `if` 语句),给玩家多一种选择。
现在来尝试你的游戏吧!在下一篇文章中,你将让你的游戏卷动起来。
@@ -157,7 +151,6 @@
以下是目前为止的所有代码:
-
```
#!/usr/bin/env python3
# draw a world
@@ -220,7 +213,7 @@ class Player(pygame.sprite.Sprite):
def gravity(self):
self.movey += 3.2 # how fast player falls
- if self.rect.y > worldy and self.movey >= 0:
+ if self.rect.y > worldy and self.movey >= 0:
self.movey = 0
self.rect.y = worldy-ty
@@ -233,23 +226,23 @@ class Player(pygame.sprite.Sprite):
def update(self):
'''
- 更新对象位置
+ 更新精灵位置
'''
self.rect.x = self.rect.x + self.movex
self.rect.y = self.rect.y + self.movey
# 向左移动
- if self.movex < 0:
+ if self.movex < 0:
self.frame += 1
- if self.frame > ani*3:
+ if self.frame > ani*3:
self.frame = 0
self.image = self.images[self.frame//ani]
# 向右移动
- if self.movex > 0:
+ if self.movex > 0:
self.frame += 1
- if self.frame > ani*3:
+ if self.frame > ani*3:
self.frame = 0
self.image = self.images[(self.frame//ani)+4]
@@ -263,7 +256,7 @@ class Player(pygame.sprite.Sprite):
for p in plat_hit_list:
self.collide_delta = 0 # stop jumping
self.movey = 0
- if self.rect.y > p.rect.y:
+ if self.rect.y > p.rect.y:
self.rect.y = p.rect.y+ty
else:
self.rect.y = p.rect.y-ty
@@ -273,11 +266,11 @@ class Player(pygame.sprite.Sprite):
self.movey = 0
self.rect.y = worldy-ty-ty
self.collide_delta = 0 # stop jumping
- if self.rect.y > g.rect.y:
+ if self.rect.y > g.rect.y:
self.health -=1
print(self.health)
- if self.collide_delta < 6 and self.jump_delta < 6:
+ if self.collide_delta < 6 and self.jump_delta < 6:
self.jump_delta = 6*2
self.movey -= 33 # how high to jump
self.collide_delta += 6
@@ -308,22 +301,22 @@ class Enemy(pygame.sprite.Sprite):
self.movey += 3.2
- if self.counter >= 0 and self.counter <= distance:
+ if self.counter >= 0 and self.counter <= distance:
self.rect.x += speed
- elif self.counter >= distance and self.counter <= distance*2:
+ elif self.counter >= distance and self.counter <= distance*2:
self.rect.x -= speed
else:
self.counter = 0
self.counter += 1
- if not self.rect.y >= worldy-ty-ty:
+ if not self.rect.y >= worldy-ty-ty:
self.rect.y += self.movey
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
for p in plat_hit_list:
self.movey = 0
- if self.rect.y > p.rect.y:
+ if self.rect.y > p.rect.y:
self.rect.y = p.rect.y+ty
else:
self.rect.y = p.rect.y-ty
@@ -352,7 +345,7 @@ class Level():
ground_list = pygame.sprite.Group()
i=0
if lvl == 1:
- while i < len(gloc):
+ while i < len(gloc):
ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
ground_list.add(ground)
i=i+1
@@ -371,9 +364,9 @@ class Level():
ploc.append((300,worldy-ty-256,3))
ploc.append((500,worldy-ty-128,4))
- while i < len(ploc):
+ while i < len(ploc):
j=0
- while j <= ploc[i][2]:
+ while j <= ploc[i][2]:
plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
plat_list.add(plat)
j=j+1
@@ -417,11 +410,11 @@ eloc = []
eloc = [200,20]
gloc = []
#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
-tx = 64 # 瓷贴尺寸
-ty = 64 # 瓷贴尺寸
+tx = 64 # 瓷砖尺寸
+ty = 64 # 瓷砖尺寸
i=0
-while i <= (worldx/tx)+tx:
+while i <= (worldx/tx)+tx:
gloc.append(i*tx)
i=i+1
@@ -482,9 +475,8 @@ while main == True:
* [如何在你的 Python 游戏中添加一个玩家][8]
* [用 Pygame 使你的游戏角色移动起来][9]
* [如何向你的 Python 游戏中添加一个敌人][10]
- * [在你的 Python 游戏中模拟重力][2]
-
-
+ * [在 Pygame 游戏中放置平台][11]
+ * [在你的 Python 游戏中模拟引力][2]
--------------------------------------------------------------------------------
@@ -493,19 +485,20 @@ via: https://opensource.com/article/19/12/jumping-python-platformer-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[cycoe](https://github.com/cycoe)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32 (Arcade games)
-[2]: https://opensource.com/article/19/11/simulate-gravity-python
+[2]: https://linux.cn/article-11780-1.html
[3]: https://opensource.com/sites/default/files/uploads/pygame-jump.jpg (Pygame platformer)
[4]: https://www.python.org/
[5]: https://www.pygame.org/
-[6]: https://opensource.com/article/17/10/python-101
-[7]: https://opensource.com/article/17/12/game-framework-python
-[8]: https://opensource.com/article/17/12/game-python-add-a-player
-[9]: https://opensource.com/article/17/12/game-python-moving-player
-[10]: https://opensource.com/article/18/5/pygame-enemy
+[6]: https://linux.cn/article-9071-1.html
+[7]: https://linux.cn/article-10850-1.html
+[8]: https://linux.cn/article-10858-1.html
+[9]: https://linux.cn/article-10874-1.html
+[10]: https://linux.cn/article-10883-1.html
+[11]: https://linux.cn/article-10902-1.html
diff --git a/published/202001/20191208 What-s your favorite terminal emulator.md b/published/202001/20191208 What-s your favorite terminal emulator.md
new file mode 100644
index 0000000000..3979bb4a53
--- /dev/null
+++ b/published/202001/20191208 What-s your favorite terminal emulator.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11814-1.html)
+[#]: subject: (What's your favorite terminal emulator?)
+[#]: via: (https://opensource.com/article/19/12/favorite-terminal-emulator)
+[#]: author: (Opensource.com https://opensource.com/users/admin)
+
+你最喜欢的终端模拟器是什么?
+======
+
+> 我们让社区讲述他们在终端仿真器方面的经验。以下是我们收到的一些回复。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/24/000846qsmpz7s7spig77qg.jpg)
+
+终端仿真器的偏好可以说明一个人的工作流程。无鼠标操作能力是否必须具备?你想要标签页还是窗口?你选择某个终端仿真器还有什么其他理由?是否有酷的因素?欢迎参加调查或给我们留下评论,告诉我们你最喜欢的终端模拟器。你尝试过多少种终端仿真器呢?
+
+我们让社区讲述他们在终端仿真器方面的经验。以下是我们收到的一些回复。
+
+“我最喜欢的终端仿真器是用 Powerline 定制的 Tilix。我喜欢它支持在一个窗口中打开多个终端。” —Dan Arel
+
+“[urxvt][2]。它可以通过文件简单配置,轻巧,并且在大多数程序包管理器存储库中都很容易找到。” —Brian Tomlinson
+
+“即使我不再使用 GNOME,gnome-terminal 仍然是我的首选。:)” —Justin W. Flory
+
+“现在 FC31 上的 Terminator。我刚刚开始使用它,我喜欢它的分屏功能,对我来说感觉很轻巧。我正在研究它的插件。” —Marc Maxwell
+
+“不久前,我切换到了 Tilix,它完成了我需要终端执行的所有工作。:) 多个窗格、通知,很精简,用来运行我的 tmux 会话很棒。” —Kevin Fenzi
+
+“alacritty。它针对速度进行了优化,是用 Rust 实现的,并且具有很多常规功能,但是老实说,我只关心一个功能:可配置的字形间距,使我可以进一步压缩字体。” —Alexander Sosedkin
+
+“我是个老古板:KDE Konsole。如果是远程会话,请使用 tmux。” —Marcin Juszkiewicz
+
+“在 macOS 上用 iTerm2。是的,它是开源的。:-) 在 Linux 上是 Terminator。” —Patrick Mullins
+
+“我现在已经使用 alacritty 一两年了,但是最近我在全屏模式下使用 cool-retro-term,因为我必须运行一个输出内容有很多的脚本,而它看起来很酷,让我感觉很酷。这对我很重要。” —Nick Childers
+
+“我喜欢 Tilix,部分是因为它擅长免打扰(我通常全屏运行它,里面是 tmux),而且还提供自定义热链接支持:在我的终端中,像 ‘rhbz#1234’ 之类的文本是将我带到 Bugzilla 的热链接。类似的还有 LaunchPad 提案,OpenStack 的 Gerrit 更改 ID 等。” —Lars Kellogg-Stedman
+
+“Eterm,在使用 Vintage 配置文件的 cool-retro-term 中,演示效果也最好。” —Ivan Horvath
+
+“Tilix +1。这是 GNOME 用户最好的选择,我是这么觉得的!” —Eric Rich
+
+“urxvt。快速、小型、可配置、可通过 Perl 插件扩展,这使其可以无鼠标操作。” —Roman Dobosz
+
+“Konsole 是最好的,也是 KDE 项目中我唯一使用的应用程序。所有搜索结果都高亮显示是一个杀手级功能,据我所知没有任何其它 Linux 终端有这个功能(如果能证明我错了,那我也很高兴)。最适合搜索编译错误和输出日志。” —Jan Horak
+
+“我过去经常使用 Terminator。现在我在 Tilix 中克隆了它的主题(深色主题),而感受一样好。它可以在选项卡之间轻松移动。就是这样。” —Alberto Fanjul Alonso
+
+“我开始使用的是 Terminator,自从差不多过去这三年,我已经完全切换到 Tilix。” —Mike Harris
+
+“我使用下拉式终端 X。这是 GNOME 3 的一个非常简单的扩展,使我始终可以通过一个按键(对于我来说是 `F12`)拉出一个终端。它还支持标签页,这正是我所需要的。” —Germán Pulido
+
+“xfce4-terminal:支持 Wayland、缩放、无边框、无标题栏、无滚动条 —— 这就是我在 tmux 之外全部想要的终端仿真器的功能。我希望我的终端仿真器可以尽可能多地使用屏幕空间,我通常在 tmux 窗格中并排放着编辑器(Vim)和 repl。” —Martin Kourim
+
+“别问,问就是 Fish ! ;-)” —Eric Schabell
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/12/favorite-terminal-emulator
+
+作者:[Opensource.com][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/admin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals_0.png?itok=XwIRERsn (Terminal window with green text)
+[2]: https://opensource.com/article/19/10/why-use-rxvt-terminal
diff --git a/published/20191210 Lessons learned from programming in Go.md b/published/202001/20191210 Lessons learned from programming in Go.md
similarity index 100%
rename from published/20191210 Lessons learned from programming in Go.md
rename to published/202001/20191210 Lessons learned from programming in Go.md
diff --git a/published/202001/20191211 Enable your Python game player to run forward and backward.md b/published/202001/20191211 Enable your Python game player to run forward and backward.md
new file mode 100644
index 0000000000..38d25cc1ab
--- /dev/null
+++ b/published/202001/20191211 Enable your Python game player to run forward and backward.md
@@ -0,0 +1,467 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11819-1.html)
+[#]: subject: (Enable your Python game player to run forward and backward)
+[#]: via: (https://opensource.com/article/19/12/python-platformer-game-run)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+使你的 Python 游戏玩家能够向前和向后跑
+======
+> 使用 Pygame 模块为你的 Python 平台类游戏开启侧滚效果,让你的玩家自由奔跑。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/25/220636x5mabbl47xvtsk55.jpg)
+
+这是仍在进行中的、关于使用 Pygame 模块在 Python 3 中创建电脑游戏的系列文章的第九部分。先前的文章是:
+
+ * [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][2]
+ * [使用 Python 和 Pygame 模块构建一个游戏框架][3]
+ * [如何在你的 Python 游戏中添加一个玩家][4]
+ * [用 Pygame 使你的游戏角色移动起来][5]
+ * [如何向你的 Python 游戏中添加一个敌人][6]
+ * [在 Pygame 游戏中放置平台][12]
+ * [在你的 Python 游戏中模拟引力][7]
+ * [为你的 Python 平台类游戏添加跳跃功能][8]
+
+在这一系列关于使用 [Pygame][10] 模块在 [Python 3][9] 中创建电脑游戏的先前文章中,你已经设计了你的关卡布局,但是关卡的一些部分可能已经超出了屏幕的可视区域。在平台类游戏中,这个问题的普遍解决方案就是滚动,正如术语“侧滚side-scroller”所表明的那样。
+
+滚动的关键是:当玩家精灵接近屏幕边缘时,移动玩家精灵周围的平台。这样就给人一种错觉,仿佛屏幕是一台在游戏世界中穿梭追拍的“摄像机”。
+
+这个滚动技巧需要在屏幕两侧边缘附近设置两个触发点:当你的化身到达触发点时,化身保持静止,而世界在其周围滚动。
+
+### 在侧滚游戏中设置滚动点
+
+如果你希望你的玩家既能前进也能后退,你就需要两个触发点:一个向前、一个向后。这两个点仅仅是两个变量,把它们分别设置在距屏幕两侧边缘大约 100 或 200 像素处,并在你的设置部分中创建这些变量。在下面的代码中,前两行用于上下文说明,所以仅需要添加最后两行代码:
+
+```
+player_list.add(player)
+steps = 10
+forwardx = 600
+backwardx = 230
+```
+
+在主循环中,检查你的玩家精灵是否到达了 `forwardx` 或 `backwardx` 滚动点。如果是这样,就根据世界是向前还是向后移动,把平台向左或向右移动。在下面的代码中,最后三行代码仅用于展示上下文,供你参考:
+
+```
+ # scroll the world forward
+ if player.rect.x >= forwardx:
+ scroll = player.rect.x - forwardx
+ player.rect.x = forwardx
+ for p in plat_list:
+ p.rect.x -= scroll
+
+ # scroll the world backward
+ if player.rect.x <= backwardx:
+ scroll = backwardx - player.rect.x
+ player.rect.x = backwardx
+ for p in plat_list:
+ p.rect.x += scroll
+
+ ## scrolling code above
+ world.blit(backdrop, backdropbox)
+ player.gravity() # check gravity
+ player.update()
+```
+
+启动你的游戏,并尝试它。
+
+![Scrolling the world in Pygame][11]
+
+滚动像预期的一样工作,但你可能会注意到一个小问题:当你滚动玩家和非玩家精灵周围的世界时,敌人精灵并不会随同世界滚动。除非你想让敌人精灵无休止地追逐你的玩家,否则你需要修改敌人代码,使敌人在你的玩家快速撤退时被留在后面。
+
+### 敌人的滚动
+
+在你的主循环中,你必须把滚动平台时所用的规则同样应用到敌人的位置上。因为你的游戏世界中(很可能)有不止一个敌人,该规则应该应用于你的敌人列表,而不是单个敌人精灵。这也是把类似元素分组到列表中的优点之一。
+
+前两行用于上下文注释,所以只需添加这两行后面的代码到你的主循环中:
+
+```
+ # scroll the world forward
+ if player.rect.x >= forwardx:
+ scroll = player.rect.x - forwardx
+ player.rect.x = forwardx
+ for p in plat_list:
+ p.rect.x -= scroll
+ for e in enemy_list:
+ e.rect.x -= scroll
+```
+
+然后是向另一个方向滚动:
+
+```
+ # scroll the world backward
+ if player.rect.x <= backwardx:
+ scroll = backwardx - player.rect.x
+ player.rect.x = backwardx
+ for p in plat_list:
+ p.rect.x += scroll
+ for e in enemy_list:
+ e.rect.x += scroll
+```
+
+再次启动游戏,看看发生什么。
+
+这里是到目前为止你已经为这个 Python 平台类游戏所写的所有代码:
+
+```
+#!/usr/bin/env python3
+# draw a world
+# add a player and player control
+# add player movement
+# add enemy and basic collision
+# add platform
+# add gravity
+# add jumping
+# add scrolling
+
+# GNU All-Permissive License
+# Copying and distribution of this file, with or without modification,
+# are permitted in any medium without royalty provided the copyright
+# notice and this notice are preserved. This file is offered as-is,
+# without any warranty.
+
+import pygame
+import sys
+import os
+
+'''
+Objects
+'''
+
+class Platform(pygame.sprite.Sprite):
+ # x location, y location, img width, img height, img file
+ def __init__(self,xloc,yloc,imgw,imgh,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img)).convert()
+ self.image.convert_alpha()
+ self.rect = self.image.get_rect()
+ self.rect.y = yloc
+ self.rect.x = xloc
+
+class Player(pygame.sprite.Sprite):
+ '''
+ Spawn a player
+ '''
+ def __init__(self):
+ pygame.sprite.Sprite.__init__(self)
+ self.movex = 0
+ self.movey = 0
+ self.frame = 0
+ self.health = 10
+ self.collide_delta = 0
+ self.jump_delta = 6
+ self.score = 1
+ self.images = []
+ for i in range(1,9):
+ img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
+ img.convert_alpha()
+ img.set_colorkey(ALPHA)
+ self.images.append(img)
+ self.image = self.images[0]
+ self.rect = self.image.get_rect()
+
+ def jump(self,platform_list):
+ self.jump_delta = 0
+
+ def gravity(self):
+ self.movey += 3.2 # how fast player falls
+
+ if self.rect.y > worldy and self.movey >= 0:
+ self.movey = 0
+ self.rect.y = worldy-ty
+
+ def control(self,x,y):
+ '''
+ control player movement
+ '''
+ self.movex += x
+ self.movey += y
+
+ def update(self):
+ '''
+ Update sprite position
+ '''
+
+ self.rect.x = self.rect.x + self.movex
+ self.rect.y = self.rect.y + self.movey
+
+ # moving left
+ if self.movex < 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[self.frame//ani]
+
+ # moving right
+ if self.movex > 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[(self.frame//ani)+4]
+
+ # collisions
+ enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
+ for enemy in enemy_hit_list:
+ self.health -= 1
+ #print(self.health)
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.collide_delta = 0 # stop jumping
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.movey = 0
+ self.rect.y = worldy-ty-ty
+ self.collide_delta = 0 # stop jumping
+ if self.rect.y > g.rect.y:
+ self.health -=1
+ print(self.health)
+
+ if self.collide_delta < 6 and self.jump_delta < 6:
+ self.jump_delta = 6*2
+ self.movey -= 33 # how high to jump
+ self.collide_delta += 6
+ self.jump_delta += 6
+
+class Enemy(pygame.sprite.Sprite):
+ '''
+ Spawn an enemy
+ '''
+ def __init__(self,x,y,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img))
+ self.movey = 0
+ #self.image.convert_alpha()
+ #self.image.set_colorkey(ALPHA)
+ self.rect = self.image.get_rect()
+ self.rect.x = x
+ self.rect.y = y
+ self.counter = 0
+
+
+ def move(self):
+ '''
+ enemy movement
+ '''
+ distance = 80
+ speed = 8
+
+ self.movey += 3.2
+
+ if self.counter >= 0 and self.counter <= distance:
+ self.rect.x += speed
+ elif self.counter >= distance and self.counter <= distance*2:
+ self.rect.x -= speed
+ else:
+ self.counter = 0
+
+ self.counter += 1
+
+ if not self.rect.y >= worldy-ty-ty:
+ self.rect.y += self.movey
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.rect.y = worldy-ty-ty
+
+
+class Level():
+ def bad(lvl,eloc):
+ if lvl == 1:
+ enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
+ enemy_list = pygame.sprite.Group() # create enemy group
+ enemy_list.add(enemy) # add enemy to group
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return enemy_list
+
+ def loot(lvl,lloc):
+ print(lvl)
+
+ def ground(lvl,gloc,tx,ty):
+ ground_list = pygame.sprite.Group()
+ i=0
+ if lvl == 1:
+ while i < len(gloc):
+ ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
+ ground_list.add(ground)
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return ground_list
+
+ def platform(lvl,tx,ty):
+ plat_list = pygame.sprite.Group()
+ ploc = []
+ i=0
+ if lvl == 1:
+ ploc.append((0,worldy-ty-128,3))
+ ploc.append((300,worldy-ty-256,3))
+ ploc.append((500,worldy-ty-128,4))
+
+ while i < len(ploc):
+ j=0
+ while j <= ploc[i][2]:
+ plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
+ plat_list.add(plat)
+ j=j+1
+ print('run' + str(i) + str(ploc[i]))
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return plat_list
+
+'''
+Setup
+'''
+worldx = 960
+worldy = 720
+
+fps = 40 # frame rate
+ani = 4 # animation cycles
+clock = pygame.time.Clock()
+pygame.init()
+main = True
+
+BLUE = (25,25,200)
+BLACK = (23,23,23 )
+WHITE = (254,254,254)
+ALPHA = (0,255,0)
+
+world = pygame.display.set_mode([worldx,worldy])
+backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
+backdropbox = world.get_rect()
+player = Player() # spawn player
+player.rect.x = 0
+player.rect.y = 0
+player_list = pygame.sprite.Group()
+player_list.add(player)
+steps = 10
+forwardx = 600
+backwardx = 230
+
+eloc = []
+eloc = [200,20]
+gloc = []
+#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
+tx = 64 #tile size
+ty = 64 #tile size
+
+i=0
+while i <= (worldx/tx)+tx:
+ gloc.append(i*tx)
+ i=i+1
+
+enemy_list = Level.bad( 1, eloc )
+ground_list = Level.ground( 1,gloc,tx,ty )
+plat_list = Level.platform( 1,tx,ty )
+
+'''
+Main loop
+'''
+while main == True:
+ for event in pygame.event.get():
+ if event.type == pygame.QUIT:
+ pygame.quit(); sys.exit()
+ main = False
+
+ if event.type == pygame.KEYDOWN:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ print("LEFT")
+ player.control(-steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ print("RIGHT")
+ player.control(steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ print('jump')
+
+ if event.type == pygame.KEYUP:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ player.control(steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ player.control(-steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ player.jump(plat_list)
+
+ if event.key == ord('q'):
+ pygame.quit()
+ sys.exit()
+ main = False
+
+ # scroll the world forward
+ if player.rect.x >= forwardx:
+ scroll = player.rect.x - forwardx
+ player.rect.x = forwardx
+ for p in plat_list:
+ p.rect.x -= scroll
+ for e in enemy_list:
+ e.rect.x -= scroll
+
+ # scroll the world backward
+ if player.rect.x <= backwardx:
+ scroll = backwardx - player.rect.x
+ player.rect.x = backwardx
+ for p in plat_list:
+ p.rect.x += scroll
+ for e in enemy_list:
+ e.rect.x += scroll
+
+ world.blit(backdrop, backdropbox)
+ player.gravity() # check gravity
+ player.update()
+ player_list.draw(world) #refresh player position
+ enemy_list.draw(world) # refresh enemies
+ ground_list.draw(world) # refresh enemies
+ plat_list.draw(world) # refresh platforms
+ for e in enemy_list:
+ e.move()
+ pygame.display.flip()
+ clock.tick(fps)
+```
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/12/python-platformer-game-run
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f (Gaming artifacts with joystick, GameBoy, paddle)
+[2]: https://linux.cn/article-9071-1.html
+[3]: https://linux.cn/article-10850-1.html
+[4]: https://linux.cn/article-10858-1.html
+[5]: https://linux.cn/article-10874-1.html
+[6]: https://linux.cn/article-10883-1.html
+[7]: https://linux.cn/article-11780-1.html
+[8]: https://linux.cn/article-11790-1.html
+[9]: https://www.python.org/
+[10]: https://www.pygame.org/news
+[11]: https://opensource.com/sites/default/files/uploads/pygame-scroll.jpg (Scrolling the world in Pygame)
+[12]:https://linux.cn/article-10902-1.html
diff --git a/published/20191214 Make VLC More Awesome With These Simple Tips.md b/published/202001/20191214 Make VLC More Awesome With These Simple Tips.md
similarity index 100%
rename from published/20191214 Make VLC More Awesome With These Simple Tips.md
rename to published/202001/20191214 Make VLC More Awesome With These Simple Tips.md
diff --git a/published/202001/20191215 How to Add Border Around Text in GIMP.md b/published/202001/20191215 How to Add Border Around Text in GIMP.md
new file mode 100644
index 0000000000..6a59c6ac66
--- /dev/null
+++ b/published/202001/20191215 How to Add Border Around Text in GIMP.md
@@ -0,0 +1,139 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11799-1.html)
+[#]: subject: (How to Add Border Around Text in GIMP)
+[#]: via: (https://itsfoss.com/gimp-text-outline/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+在 GIMP 中如何在文本周围添加边框
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202001/19/230506fzkyktqglfcyzkuh.jpg)
+
+这个简单的教程介绍了在 [GIMP][1] 中为文本添加轮廓的步骤。文本轮廓可以帮助你用另一种颜色来突出显示该文本。
+
+![Outlined Text created in GIMP][2]
+
+让我们看看如何在你的文本周围添加一个边框。
+
+### 在 GIMP 中添加文本轮廓
+
+整个过程可以用这些简单的步骤描述:
+
+* 创建文本,并复制它的轮廓路径
+* 添加一层新的透明层,并添加轮廓路径到透明层中
+* 更改轮廓的大小,给它添加一种不同的颜色
+
+这就是全部步骤。不用担心,我将使用适当的截图详细地展示每个步骤。按照这个教程,即使你在此之前从未使用过 GIMP,也应该能够为文本添加轮廓。
+
+只需要确保你已经[在 Linux 上安装了 GIMP][3],或者安装在你所使用的其它任何操作系统上。
+
+这篇教程在 GIMP 2.10 版本下演示。
+
+#### 步骤 1: 创建你的主要文本,并复制它的轮廓
+
+打开 GIMP ,并通过转到 “菜单 -> 文件 -> 新建” 来创建一个新的文件。你应该可以使用 `Ctrl+N` 键盘快捷键。
+
+![Create New File][4]
+
+你可以在这里选择画布的大小。你也可以选择白色背景或透明背景,该选项位于“高级选项 -> 颜色”配置下。
+
+我选择了默认的白色背景,以后也可以更改。
+
+现在从左边栏的工具箱中选择文本工具。
+
+![Adding text in GIMP][5]
+
+写你想要的文本。你可以根据你的选择以更改文本的字体、大小和对齐方式。我保持这篇文章的文本的默认左对齐。
+
+我故意为文本选择一种浅色,以便难于阅读。在这篇教程中我将添加一个深色轮廓到这个浅色的文本。
+
+![Text added in GIMP][6]
+
+当你写完文本后,右键文本框并选择 “文本的路径” 。
+
+![Right click on the text box and select ‘Path from Text’][7]
+
+#### 步骤 2: 添加一个带有文本轮廓的透明层
+
+现在,转到顶部菜单,转到“层”,并添加一个新层。
+
+![Use Shift+Ctrl+N to add a new layer][8]
+
+确保添加的新图层是透明的。你可以给它一个合适的名称,比如“文本轮廓”。单击“确定”来添加这个透明图层。
+
+![Add a transparent layer][9]
+
+再次转到菜单,这次转到 “选择” ,并单击 “来自路径” 。你将看到你的文本应该被高亮显示。
+
+![Go to Select and choose From Path][10]
+
+总的来说,你只是创建了一个透明图层,上面有和你的原文本相同的文本(但是透明的)。现在你需要做的是在这个图层上增大文本选区的大小。
+
+#### 步骤 3: 通过增加它的大小和更改它的颜色来添加文本轮廓
+
+为此,再次在菜单中转到 “选择” ,这次选择 “增加”。这将允许增大透明层上的文本的大小。
+
+![Grow the selection on the additional layer][11]
+
+以 5 或 10 像素增加,或者你喜欢的任意像素。
+
+![Grow it by 5 or 10 pixel][12]
+
+你现在需要做的是用你选择的颜色来填充这个扩大后的选择区。因为我的原文本是浅色,在这里我将为轮廓使用深色。
+
+如果尚未选择的话,先选择你的主图像层。这些层在右侧栏中可视。然后转到工具箱并选择油漆桶工具。为你的轮廓选择想要的颜色。
+
+使用该工具将黑色填充到你的选择区。记住,你填充的是文本外围的轮廓,而不是文本本身。
+
+![Fill the outline of the text with a different color][13]
+
+到这里,大部分工作已经完成了。使用 `Ctrl+Shift+A` 来取消你当前的选择区。
+
+![Outline added to the text][14]
+
+这样,你现在已经在 GIMP 中成功地为你的文本添加了轮廓。目前它是在白色背景上,如果你想要透明背景,只需在右侧栏的图层菜单中删除背景图层。
+
+![Remove the white background layer if you want a transparent background][15]
+
+如果你对结果感到满意,将文件保存为 PNG 文件(以保留透明背景),或你喜欢的任何其它文件格式。
+
+### 你使它工作了吗?
+
+就这样。这就是你在 GIMP 中为添加一个文本轮廓而需要做的全部工作。
+
+我希望你发现这个 GIMP 教程有帮助。你可能想查看另一个 [关于在 GIMP 中添加一个水印的简单教程][16]。
+
+如果你有问题或建议,请在下面自由留言。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gimp-text-outline/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.gimp.org/
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outlined_text_GIMP.png?ssl=1
+[3]: https://itsfoss.com/gimp-2-10-release/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/create_outline_text_gimp_1.jpeg?ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_2.jpg?ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp-3.jpg?ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_4.jpg?ssl=1
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_5.jpg?ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_6.jpg?ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_7.jpg?ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_8.jpg?ssl=1
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_9.jpg?ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_10.jpg?ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_11.jpg?ssl=1
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/outline_text_gimp_12.jpg?ssl=1
+[16]: https://itsfoss.com/add-watermark-gimp-linux/
diff --git a/published/20191217 App Highlight- Open Source Disk Partitioning Tool GParted.md b/published/202001/20191217 App Highlight- Open Source Disk Partitioning Tool GParted.md
similarity index 100%
rename from published/20191217 App Highlight- Open Source Disk Partitioning Tool GParted.md
rename to published/202001/20191217 App Highlight- Open Source Disk Partitioning Tool GParted.md
diff --git a/published/20191219 Kubernetes namespaces for beginners.md b/published/202001/20191219 Kubernetes namespaces for beginners.md
similarity index 100%
rename from published/20191219 Kubernetes namespaces for beginners.md
rename to published/202001/20191219 Kubernetes namespaces for beginners.md
diff --git a/published/202001/20191220 4 ways to volunteer this holiday season.md b/published/202001/20191220 4 ways to volunteer this holiday season.md
new file mode 100644
index 0000000000..ca2c8fb8e0
--- /dev/null
+++ b/published/202001/20191220 4 ways to volunteer this holiday season.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11803-1.html)
+[#]: subject: (4 ways to volunteer this holiday season)
+[#]: via: (https://opensource.com/article/19/12/ways-volunteer)
+[#]: author: (John Jones https://opensource.com/users/johnjones4)
+
+假期志愿服务的 4 种方式
+======
+
+> 想要洒播些节日的快乐吗?为开源组织做贡献,帮助有需要的社区。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/20/223730f7983z8atxp1tf4l.jpg)
+
+当领导者们配置人员和资源以做出积极改变时,就会产生社会影响。但是,许多社会努力都缺乏能够为这些改变者提供服务的技术资源。然而,有些组织通过将想要做出改变的开发人员与迫切需要更好技术的社区和非营利组织联系起来,来促进技术进步。这些组织通常为特定的受众提供服务,并招募特定种类的技术人员,它们有一个共同点:开源。
+
+作为开发人员,我们出于各种原因试图加入开源社区。有些是为了专业发展,有些是为了能够与广阔的网络上令人印象深刻的技术人员合作,还有其他人则是因为他们清楚自己的贡献对于项目的成功的必要性。为什么不将你作为开发人员的才华投入到需要它的地方,而同时又为开源组织做贡献呢?以下组织是实现此目标的一些主要事例。
+
+### Code for America
+
+“Code for America” 是在数字时代,政府如何依靠人民为人民服务的一个例子。通过其 Brigade Network,该组织在美国各个城市中组织了一个由志愿程序员、数据科学家、热心公民和设计师组成的全国联盟。这些本地分支机构定期举行聚会,向社区开放。这样既可以向小组推出新项目,又可以协调正在进行的工作。为了使志愿者与项目相匹配,该网站经常列出项目所需的特定技能,例如数据分析、内容创建和 JavaScript。同时,Brigade 网站也会关注当地问题,分享自然灾害等共同经验,这些都可以促进成员之间的合作。例如,新奥尔良、休斯敦和坦帕湾团队合作开发了一个飓风响应网站,当灾难发生时,该网站可以快速响应不同的城市灾难情况。
+
+想要加入该组织,请访问 [该网站][2] 获取 70 多个 Brigade 的清单,以及个人加入组织的指南。
+
+### Code for Change
+
+“Code for Change” 显示了即使在高中时期,也可以为社会做贡献。印第安纳波利斯的一群高中开发爱好者成立了自己的俱乐部,他们通过创建针对社区问题的开源软件解决方案来回馈当地组织。“Code for Change” 鼓励当地组织提出项目构想,学生团体加入并开发完全自由和开源的解决方案。该小组已经开发了诸如“蓝宝石”之类的项目,该项目优化了当地难民组织的志愿者管理系统,并建立了民权委员会的投诉表格,方便公民就他们所关心的问题在网上发表意见。
+
+有关如何在你自己的社区中创建 “Code for Change”,[访问他们的网站][3]。
+
+### Python for Good/Ruby for Good
+
+“Python for Good” 和 “Ruby for Good” 是在俄勒冈州波特兰市和弗吉尼亚州费尔法克斯市举办的双年展活动,该活动将人们聚集在一起,为各自的社区开发和制定解决方案。
+
+在周末,人们聚在一起聆听当地非营利组织的建议,并通过构建开源解决方案来解决他们的问题。2017 年,“Ruby For Good” 参与者创建了 “Justice for Juniors”,该计划指导当前和以前被监禁的年轻人,帮助他们重新融入社区。参与者还创建了 “Diaperbase”,这是一种库存管理系统,为美国各地的尿布库(diaper bank)所使用。这些活动的主要目标之一是将看似不同的行业和思维方式的组织和个人聚集在一起,以谋求共同利益。公司可以赞助活动,非营利组织可以提交项目构想,各种技能的人都可以注册参加活动并做出贡献。通过两岸(美国大西洋和太平洋东西海岸)的努力,“Ruby for Good” 和 “Python for Good” 一直恪守“使世界变得更好”的座右铭。
+
+“[Ruby for Good][4]” 在夏天举行,举办地点在弗吉尼亚州费尔法克斯的乔治•梅森大学。
+
+### Social Coder
+
+英国的 Ed Guiness 创建了 “Social Coder”,将志愿者和慈善机构召集在一起,为六大洲的非营利组织创建和使用开源项目。“Social Coder” 积极招募来自世界各地的熟练 IT 志愿者,并将其与通过 Social Coder 注册的慈善机构和非营利组织进行匹配。项目范围从简单的网站更新到整个移动应用程序的开发。
+
+例如,PHASE Worldwide 是一个在尼泊尔支持工作的小型非政府组织,因为 “Social Coder”,它获得了利用开源技术的关键支持和专业知识。
+
+有许多慈善机构已经与英国的 “Social Coder”进行了合作,也欢迎其它国家的组织加入。通过他们的网站,个人可以注册为社会软件项目工作,找到寻求帮助的组织和慈善机构。
+
+对 “Social Coder” 的志愿服务感兴趣的个人可以[在此][5]注册。
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/12/ways-volunteer
+
+作者:[John Jones][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/johnjones4
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out)
+[2]: https://brigade.codeforamerica.org/
+[3]: http://codeforchange.herokuapp.com/
+[4]: https://rubyforgood.org/
+[5]: https://socialcoder.org/Home/Programmer
diff --git a/published/20191220 Why Vim fans love the Herbstluftwm Linux window manager.md b/published/202001/20191220 Why Vim fans love the Herbstluftwm Linux window manager.md
similarity index 100%
rename from published/20191220 Why Vim fans love the Herbstluftwm Linux window manager.md
rename to published/202001/20191220 Why Vim fans love the Herbstluftwm Linux window manager.md
diff --git a/published/20191221 Pop-_OS vs Ubuntu- Which One is Better.md b/published/202001/20191221 Pop-_OS vs Ubuntu- Which One is Better.md
similarity index 100%
rename from published/20191221 Pop-_OS vs Ubuntu- Which One is Better.md
rename to published/202001/20191221 Pop-_OS vs Ubuntu- Which One is Better.md
diff --git a/published/20191224 Chill out with the Linux Equinox Desktop Environment.md b/published/202001/20191224 Chill out with the Linux Equinox Desktop Environment.md
similarity index 100%
rename from published/20191224 Chill out with the Linux Equinox Desktop Environment.md
rename to published/202001/20191224 Chill out with the Linux Equinox Desktop Environment.md
diff --git a/published/20191226 Darktable 3 Released With GUI Rework and New Features.md b/published/202001/20191226 Darktable 3 Released With GUI Rework and New Features.md
similarity index 100%
rename from published/20191226 Darktable 3 Released With GUI Rework and New Features.md
rename to published/202001/20191226 Darktable 3 Released With GUI Rework and New Features.md
diff --git a/published/20191227 10 resources to boost your Git skills.md b/published/202001/20191227 10 resources to boost your Git skills.md
similarity index 100%
rename from published/20191227 10 resources to boost your Git skills.md
rename to published/202001/20191227 10 resources to boost your Git skills.md
diff --git a/translated/tech/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md b/published/202001/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md
similarity index 77%
rename from translated/tech/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md
rename to published/202001/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md
index b8af543814..4075e017a8 100644
--- a/translated/tech/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md
+++ b/published/202001/20191227 Explained- Why Your Distribution Still Using an ‘Outdated- Linux Kernel.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11791-1.html)
[#]: subject: (Explained! Why Your Distribution Still Using an ‘Outdated’ Linux Kernel?)
[#]: via: (https://itsfoss.com/why-distros-use-old-kernel/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@@ -10,7 +10,9 @@
为什么你的发行版仍然在使用“过时的”Linux 内核?
======
-[检查一下你的系统所使用的 Linux 内核版本][1],你十有八九会发现,按照 Linux 内核官网提供的信息,该内核版本已经达到使用寿命终期了。
+![](https://img.linux.net.cn/data/attachment/album/202001/16/225806jbqyacu3loolobae.png)
+
+[检查一下你的系统所使用的 Linux 内核版本][1],你十有八九会发现,按照 Linux 内核官网提供的信息,该内核版本已经达到使用寿命终期(EOL)了。
一个软件一旦达到了使用寿命终期,那么就意味着它再也不会得到 bug 修复和维护了。
@@ -18,11 +20,11 @@
下面将逐一解答这些问题。
-总结
-
-上游内核维护与你的发行版的内核维护是两个不同的概念。
-
-例如,根据 Linux 内核官网,Linux 内核 4.15 版本可能已经达到使用寿命终期了,但是在 2023 年 4 月之前,Ubuntu 18.04 长期维护版本将会继续使用这个版本,并通过向后移植安全补丁和修复 bug 来提供维护。
+> **总结**
+>
+> 上游内核维护与你的发行版的内核维护是两个不同的概念。
+>
+> 例如,根据 Linux 内核官网,Linux 内核 4.15 版本可能已经达到使用寿命终期了,但是在 2023 年 4 月之前,Ubuntu 18.04 长期维护版本将会继续使用这个版本,并通过向后移植安全补丁和修复 bug 来提供维护。
### 检查 Linux 内核版本,以及是否达到使用寿命终期
@@ -35,13 +37,11 @@ uname -r
我使用的是 Ubuntu 18.04,输出的 Linux 内核版本如下:
```
-[email protected]:~$ uname -r
+abhishek@itsfoss:~$ uname -r
5.0.0-37-generic
```
-接下来,可以到 Linux 内核官网上看看哪些 Linux 内核版本仍然在维护状态。在网站主页上就可以看到相关信息。
-
-[Linux 内核官网][2]
+接下来,可以到 [Linux 内核官网][2]上看看哪些 Linux 内核版本仍然在维护状态。在网站主页上就可以看到相关信息。
你看到的内核版本状态应该类似于下图:
@@ -53,7 +53,7 @@ uname -r
不幸的是,Linux 内核的生命周期没有任何规律可循。不是说常规的内核稳定发布版可以得到 X 月的维护、长期维护版本(LTS)可以得到 Y 年的维护。没有这回事。
-根据实际需求,可能会存在内核的多个 LTS 版本,其使用寿命终期各不相同。在[这个页面][5]上可以查到这些 LTS 版本的相关信息,包括推定的使用寿命终期。
+根据实际需求,可能会存在内核的多个 LTS 版本,其使用寿命终期各不相同。在[这个页面][5]上可以查到这些 LTS 版本的相关信息,包括计划的使用寿命终期。
那么问题来了:既然 Linux 内核官网上明确表示 5.0 版本的内核已经达到了使用寿命终期,Ubuntu 为什么还在提供这个内核版本呢?
@@ -63,11 +63,11 @@ uname -r
你是否想过,为什么 Ubuntu/Debian/Fedora 等发行版被称为 Linux “发行版”?这是因为,它们“发行” Linux 内核。
-这些发行版会对 Linux 内核进行不同的修改,并添加各种 GUI 元素(包括桌面环境,显示服务器等)以及软件,然后再呈现给用户。
+这些发行版会对 Linux 内核进行不同的修改,并添加各种 GUI 元素(包括桌面环境、显示服务器等)以及软件,然后再呈现给用户。
按照通常的工作流,Linux 发行版会选择一个内核,提供给其用户,然后在接下来的几个月、几年中,甚至是达到内核的使用寿命终期之后,仍然会继续使用该内核。
-这样能够保障安全吗?其实是可以的,因为 _**发行版会通过向后移植全部的重要修补来维护内核**_。
+这样能够保障安全吗?其实是可以的,因为 **发行版会通过向后移植全部的重要修补来维护内核**。
换句话说,你的 Linux 发行版会确保 Linux 内核没有漏洞和 bug,并且已经通过向后移植获得了重要的新特性。在“过时的旧版本 Linux 内核”上,其实有着数以千计的改动。
@@ -83,13 +83,13 @@ uname -r
新的 Linux 内核稳定版本每隔 2 到 3 个月发布一次,有不少用户跃跃欲试。
-实话说,除非有十分充分的理由,否则不应该使用最新版本的稳定内核。你使用的发行版并不会提供这个选项,你也不能指望通过在键盘上敲出“_sudo apt give-me-the-latest-stable-kernel_”解决问题。
+实话说,除非有十分充分的理由,否则不应该使用最新版本的稳定内核。你使用的发行版并不会提供这个选项,你也不能指望通过在键盘上敲出 `sudo apt give-me-the-latest-stable-kernel` 解决问题。
-此外,手动[安装主流 Linux 内核版本][8]本身就是一个挑战。即使安装成功,之后每次发布 bug 修复的时候,负责更新内核的就会是你了。此外,当新内核达到使用寿命终期之后,你就有责任将它升级到更新的内核版本了。和常规的[Ubuntu 更新][9]不同,内核升级无法通过 apt upgrade 完成。
+此外,手动[安装主流 Linux 内核版本][8]本身就是一个挑战。即使安装成功,之后每次发布 bug 修复的时候,负责更新内核的就会是你了。此外,当新内核达到使用寿命终期之后,你就有责任将它升级到更新的内核版本了。和常规的 [Ubuntu 更新][9]不同,内核升级无法通过 `apt upgrade` 完成。
同样需要记住的是,切换到主流内核之后,可能就无法使用你的发行版提供的一些驱动程序和补丁了。
-正如 [Greg Kroah-Hartman][10]所言,“_**你能使用的最好的内核,就是别人在维护的内核。**_”除了你的 Linux 发行版之外,又有谁更胜任这份工作呢!
+正如 [Greg Kroah-Hartman][10]所言,“**你能使用的最好的内核,就是别人在维护的内核。**”除了你的 Linux 发行版之外,又有谁更胜任这份工作呢!
希望你对这个主题已经有了更好的理解。下回发现你的系统正在使用的内核版本已经达到使用寿命终期的时候,希望你不会感到惊慌失措。
@@ -102,7 +102,7 @@ via: https://itsfoss.com/why-distros-use-old-kernel/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[chen-ni](https://github.com/chen-ni)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/202001/20191229 The best resources for agile software development.md b/published/202001/20191229 The best resources for agile software development.md
new file mode 100644
index 0000000000..7ffc566009
--- /dev/null
+++ b/published/202001/20191229 The best resources for agile software development.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: (algzjh)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11816-1.html)
+[#]: subject: (The best resources for agile software development)
+[#]: via: (https://opensource.com/article/19/12/agile-resources)
+[#]: author: (Leigh Griffin https://opensource.com/users/lgriffin)
+
+敏捷软件开发的最佳资源
+======
+
+> 请阅读我们的热门文章,这些文章着重讨论了敏捷的过去、现在和未来。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/25/121308jrs4speu2y09u09e.jpg)
+
+对于 Opensource.com 上的敏捷主题来说,2019 年是非常棒的一年。随着 2020 年的到来,我们回顾了我们读者所读的与敏捷相关的热门文章。
+
+### 小规模 Scrum 指南
+
+Opensource.com 关于[小规模 Scrum][2] 的指南(我曾参与合著)由六部分组成,为小型团队提供了关于如何将敏捷引入到他们的工作中的建议。在官方的 [Scrum 指南][3]的概述中,传统的 Scrum 框架推荐至少三个人来实现,以充分发挥其潜力。但是,它并没有为一两个人的团队如何成功遵循 Scrum 提供指导。我们的六部分系列旨在规范化小规模的 Scrum,并检验我们在现实世界中使用它的经验。该系列受到了读者的热烈欢迎,以至于这六篇文章占据了前 10 名文章的 60%。因此,如果你还没有阅读的话,一定要从我们的[小规模 Scrum 介绍页面][2]下载。
+
+### 全面的敏捷项目管理指南
+
+遵循传统项目管理方法的团队最初对敏捷持怀疑态度,现在已经热衷于敏捷的工作方式。目前,敏捷已被接受,并且一种更加灵活的混合风格已经找到了归宿。Matt Shealy 撰写的[有关敏捷项目管理的综合指南][4]涵盖了敏捷项目管理的 12 条指导原则,对于希望为其项目带来敏捷性的传统项目经理而言,它是完美的选择。
+
+### 成为出色的敏捷开发人员的 4 个步骤
+
+DevOps 文化已经出现在许多现代软件团队中,这些团队采用了敏捷软件开发原则,利用了最先进的工具和自动化技术。但是,这种机械的敏捷方法并不能保证开发人员在日常工作中遵循敏捷实践。Daniel Oh 在[成为出色的敏捷开发人员的 4 个步骤][5]中给出了一些很棒的技巧,通过关注设计思维,使用可预测的方法,以质量为中心并不断学习和探索来提高你的敏捷性。用你的敏捷工具补充这些方法将形成非常灵活和强大的敏捷开发人员。
+
+### Scrum 和 kanban:哪种敏捷框架更好?
+
+对于以敏捷方式运行的团队来说,Scrum 和 kanban 是两种最流行的方法。在 “[Scrum 与 kanban:哪种敏捷框架更好?][6]” 中,Taz Brown 探索了两者的历史和目的。在阅读本文时,我想起一句名言:“如果你的工具箱里只有锤子,那么所有问题看起来都像钉子。”知道何时使用 kanban 以及何时使用 Scrum 非常重要,本文有助于说明两者都有一席之地,这取决于你的团队、挑战和目标。
+
+### 开发人员对敏捷发表意见的 4 种方式
+
+当采用敏捷的话题出现时,开发人员常常会担心自己会被强加上一种工作风格。在“[开发人员对敏捷发表意见的 4 种方式][7]”中,[Clément Verna][8] 着眼于开发人员通过帮助确定敏捷在其团队中的表现形式来颠覆这种说法的方法。检查敏捷的起源和基础是一个很好的起点,但是真正的价值在于拥有可帮助指导你的过程的指标。知道你将面临什么样的挑战会给你的前进提供坚实的基础。根据经验进行决策不仅可以增强团队的能力,还可以使他们对整个过程有一种主人翁意识。Verna 的文章还探讨了将人置于过程之上并作为一个团队来实现目标的重要性。
+
+### 敏捷的现在和未来
+
+今年,Opensource.com 的作者围绕敏捷的过去、现在以及未来可能会是什么样子进行了大量的讨论。感谢他们所有人,请一定于 2020 年在这里分享[你自己的敏捷故事][9]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/12/agile-resources
+
+作者:[Leigh Griffin][a]
+选题:[lujun9972][b]
+译者:[algzjh](https://github.com/algzjh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lgriffin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "Women programming"
+[2]: https://opensource.com/downloads/small-scale-scrum
+[3]: https://scrumguides.org/scrum-guide.html
+[4]: https://opensource.com/article/19/8/guide-agile-project-management
+[5]: https://opensource.com/article/19/2/steps-agile-developer
+[6]: https://opensource.com/article/19/8/scrum-vs-kanban
+[7]: https://opensource.com/article/19/10/ways-developers-what-agile
+[8]: https://twitter.com/clemsverna
+[9]: https://opensource.com/how-submit-article
diff --git a/published/20191230 10 articles to enhance your security aptitude.md b/published/202001/20191230 10 articles to enhance your security aptitude.md
similarity index 100%
rename from published/20191230 10 articles to enhance your security aptitude.md
rename to published/202001/20191230 10 articles to enhance your security aptitude.md
diff --git a/published/20191230 Fixing -VLC is Unable to Open the MRL- Error -Quick Tip.md b/published/202001/20191230 Fixing -VLC is Unable to Open the MRL- Error -Quick Tip.md
similarity index 100%
rename from published/20191230 Fixing -VLC is Unable to Open the MRL- Error -Quick Tip.md
rename to published/202001/20191230 Fixing -VLC is Unable to Open the MRL- Error -Quick Tip.md
diff --git a/published/20191231 10 Ansible resources to accelerate your automation skills.md b/published/202001/20191231 10 Ansible resources to accelerate your automation skills.md
similarity index 100%
rename from published/20191231 10 Ansible resources to accelerate your automation skills.md
rename to published/202001/20191231 10 Ansible resources to accelerate your automation skills.md
diff --git a/published/20191231 12 programming resources for coders of all levels.md b/published/202001/20191231 12 programming resources for coders of all levels.md
similarity index 100%
rename from published/20191231 12 programming resources for coders of all levels.md
rename to published/202001/20191231 12 programming resources for coders of all levels.md
diff --git a/published/20200101 5 predictions for Kubernetes in 2020.md b/published/202001/20200101 5 predictions for Kubernetes in 2020.md
similarity index 100%
rename from published/20200101 5 predictions for Kubernetes in 2020.md
rename to published/202001/20200101 5 predictions for Kubernetes in 2020.md
diff --git a/published/20200101 9 cheat sheets and guides to enhance your tech skills.md b/published/202001/20200101 9 cheat sheets and guides to enhance your tech skills.md
similarity index 100%
rename from published/20200101 9 cheat sheets and guides to enhance your tech skills.md
rename to published/202001/20200101 9 cheat sheets and guides to enhance your tech skills.md
diff --git a/published/20200101 Signal- A Secure, Open Source Messaging App.md b/published/202001/20200101 Signal- A Secure, Open Source Messaging App.md
similarity index 100%
rename from published/20200101 Signal- A Secure, Open Source Messaging App.md
rename to published/202001/20200101 Signal- A Secure, Open Source Messaging App.md
diff --git a/published/202001/20200102 Put some loot in your Python platformer game.md b/published/202001/20200102 Put some loot in your Python platformer game.md
new file mode 100644
index 0000000000..e379cc1307
--- /dev/null
+++ b/published/202001/20200102 Put some loot in your Python platformer game.md
@@ -0,0 +1,525 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11828-1.html)
+[#]: subject: (Put some loot in your Python platformer game)
+[#]: via: (https://opensource.com/article/20/1/loot-python-platformer-game)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+在你的 Python 平台类游戏中放一些奖励
+======
+
+> 这部分内容介绍如何在使用 Python 的 Pygame 模块开发的视频游戏中,为你的玩家提供可收集的宝物并增加经验值。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/29/131158jkwnhgd1nnawzn86.jpg)
+
+这是正在进行的关于使用 [Python 3][2] 的 [Pygame][3] 模块创建视频游戏的系列文章的第十部分。以前的文章有:
+
+ * [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
+ * [使用 Python 和 Pygame 模块构建一个游戏框架][5]
+ * [如何在你的 Python 游戏中添加一个玩家][6]
+ * [用 Pygame 使你的游戏角色移动起来][7]
+ * [如何向你的 Python 游戏中添加一个敌人][8]
+ * [在 Pygame 游戏中放置平台][13]
+ * [在你的 Python 游戏中模拟引力][9]
+ * [为你的 Python 平台类游戏添加跳跃功能][10]
+ * [使你的 Python 游戏玩家能够向前和向后跑][11]
+
+如果你已经阅读了本系列的前几篇文章,那么你已经了解了编写游戏的所有基础知识。现在你可以在这些基础上,创造一个全功能的游戏。当你第一次学习时,遵循本系列代码示例,这样的“用例”是有帮助的,但是,用例也会约束你。现在是时候运用你学到的知识,以新的方式应用它们了。
+
+虽然说起来容易做起来难,但这篇文章展示了一个例子,说明如何把你已经了解的内容用于新的目的。具体来说,它涵盖了如何利用你在之前课程中已经学到的平台知识来实现一个奖励系统。
+
+在大多数电子游戏中,你有机会在游戏世界中获得“奖励”或收集到宝物和其他物品。奖励通常会增加你的分数或者你的生命值,或者为你的下一次任务提供信息。
+
+在游戏中加入奖励物品与对平台的编程很类似。像平台一样,奖励物品不受用户控制,会随着游戏世界一起滚动,并且必须检查它与玩家的碰撞。
+
+### 创建奖励函数
+
+奖励和平台非常相似,你甚至不需要一个奖励的类。你可以重用 `Platform` 类,并将结果称为“奖励”。
+
+由于奖励的类型和位置可能因关卡而异,如果你还没有这样做,请在你的 `Level` 类中创建一个名为 `loot` 的新函数。因为奖励物品不是平台,你还必须创建一个新的 `loot_list` 组,然后把奖励物品添加进去。与平台、地面和敌人一样,该组用于检查与玩家的碰撞:
+
+```
+    def loot(lvl,tx,ty):
+ if lvl == 1:
+ loot_list = pygame.sprite.Group()
+ loot = Platform(300,ty*7,tx,ty, 'loot_1.png')
+ loot_list.add(loot)
+
+ if lvl == 2:
+ print(lvl)
+
+ return loot_list
+```
+
+你可以随意添加任意数量的奖励对象,记住要把每一个都添加到你的奖励列表(`loot_list`)中。`Platform` 类的参数是奖励图标的 X 位置、Y 位置、宽度和高度(通常让你的奖励精灵保持和其他所有方块一样的大小最为简单),以及你想要用作奖励的图片。奖励的摆放可以和平台贴图一样复杂,所以请参考你创建关卡时使用的关卡设计文档。
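+
+比如,如果你的第 1 关需要不止一个奖励物品,下面是一个在正文 `loot` 函数基础上的粗略示意(其中新增的坐标和 `loot_2.png` 这个贴图文件名只是假设的示例,并非本系列提供的素材,请换成你自己的关卡设计和图片):
+
+```
+    def loot(lvl,tx,ty):
+        # 先创建空的奖励物品组,这样其它关卡也能返回一个(空)组
+        loot_list = pygame.sprite.Group()
+
+        if lvl == 1:
+            # 正文中的第一个奖励物品
+            loot = Platform(300,ty*7,tx,ty, 'loot_1.png')
+            loot_list.add(loot)
+            # 假设的第二个奖励物品,使用假设的贴图文件 loot_2.png
+            loot = Platform(tx*9,ty*5,tx,ty, 'loot_2.png')
+            loot_list.add(loot)
+
+        if lvl == 2:
+            print(lvl)
+
+        return loot_list
+```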
+
+在脚本的设置部分调用新的奖励函数。在下面的代码中,前三行是上下文,所以只需添加第四行:
+
+```
+enemy_list = Level.bad( 1, eloc )
+ground_list = Level.ground( 1,gloc,tx,ty )
+plat_list = Level.platform( 1,tx,ty )
+loot_list = Level.loot(1,tx,ty)
+```
+
+正如你现在所知道的,除非你把它包含在你的主循环中,否则奖励不会被显示到屏幕上。将下面代码示例的最后一行添加到循环中:
+
+```
+ enemy_list.draw(world)
+ ground_list.draw(world)
+ plat_list.draw(world)
+ loot_list.draw(world)
+```
+
+启动你的游戏看看会发生什么。
+
+![Loot in Python platformer][12]
+
+你的奖励将会显示出来,但是当你的玩家碰到它们时,它们不会做任何事情,当你的玩家经过它们时,它们也不会滚动。接下来解决这些问题。
+
+### 滚动奖励
+
+像平台一样,当玩家在游戏世界中移动时,奖励必须滚动。逻辑与平台滚动相同。要向前滚动奖励物品,添加最后两行:
+
+```
+ for e in enemy_list:
+ e.rect.x -= scroll
+ for l in loot_list:
+ l.rect.x -= scroll
+```
+
+要向后滚动,请添加最后两行:
+
+```
+ for e in enemy_list:
+ e.rect.x += scroll
+ for l in loot_list:
+ l.rect.x += scroll
+```
+
+再次启动你的游戏,看看你的奖励物品现在表现得像在游戏世界里一样了,而不是仅仅画在上面。
+
+### 检测碰撞
+
+就像平台和敌人一样,你可以检查奖励物品和玩家之间的碰撞。逻辑与其他碰撞相同,只是撞击不会(必然)影响重力或生命值。取而代之的是,命中会使奖励物品消失,并增加玩家的分数。
+
+当你的玩家触摸到一个奖励对象时,你可以从 `loot_list` 中移除该对象。这意味着当你的主循环在 `loot_list` 中重绘所有奖励物品时,它不会重绘那个特定的对象,所以看起来玩家已经获得了奖励物品。
+
+在 `Player` 类的 `update` 函数中的平台碰撞检测之上添加以下代码(最后一行仅用于上下文):
+
+```
+ loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
+ for loot in loot_hit_list:
+ loot_list.remove(loot)
+ self.score += 1
+ print(self.score)
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+```
+
+当碰撞发生时,你不仅要把奖励从它的组中移除,还要给你的玩家一个分数提升。你还没有创建分数变量,所以请将它添加到你的玩家属性中,该属性是在 `Player` 类的 `__init__` 函数中创建的。在下面的代码中,前两行是上下文,所以只需添加分数变量:
+
+```
+ self.frame = 0
+ self.health = 10
+ self.score = 0
+```
+
+当在主循环中调用 `update` 函数时,需要包括 `loot_list`:
+
+```
+ player.gravity()
+ player.update()
+```
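+
+顺便一提,目前的碰撞代码只是用 `print(self.score)` 把分数打印到终端。如果你想把分数直接画在游戏画面上,下面是一个基于 Pygame 字体模块的粗略示意(这不是本系列原文的做法,只是一个可选的、假设性的扩展思路),可以放在主循环里各个 `draw()` 调用之后、`pygame.display.flip()` 之前:
+
+```
+    # 示意:用 Pygame 自带的默认字体把分数画在屏幕左上角
+    # pygame.init() 已经在本游戏的设置部分调用过,它会一并初始化字体模块
+    # 为了效率,实际使用时可以把 Font 对象移到设置部分,只创建一次
+    score_font = pygame.font.Font(None, 36)
+    score_text = score_font.render('Score: ' + str(player.score), True, WHITE)
+    world.blit(score_text, (10, 10))
+```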
+
+如你所见,你已经掌握了所有的基本知识。你现在要做的就是用新的方式使用你所知道的。
+
+在下一篇文章中还会有一些提示,但在此之前,先用你学到的知识来制作一些简单的单关卡游戏。限制你想要创造的东西的范围很重要,这样你才不会让自己不堪重负,也会让最终的成品在观感上更有完成度。
+
+以下是迄今为止你为这个 Python 平台编写的所有代码:
+
+```
+#!/usr/bin/env python3
+# draw a world
+# add a player and player control
+# add player movement
+# add enemy and basic collision
+# add platform
+# add gravity
+# add jumping
+# add scrolling
+
+# GNU All-Permissive License
+# Copying and distribution of this file, with or without modification,
+# are permitted in any medium without royalty provided the copyright
+# notice and this notice are preserved. This file is offered as-is,
+# without any warranty.
+
+import pygame
+import sys
+import os
+
+'''
+Objects
+'''
+
+class Platform(pygame.sprite.Sprite):
+ # x location, y location, img width, img height, img file
+ def __init__(self,xloc,yloc,imgw,imgh,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img)).convert()
+ self.image.convert_alpha()
+ self.rect = self.image.get_rect()
+ self.rect.y = yloc
+ self.rect.x = xloc
+
+class Player(pygame.sprite.Sprite):
+ '''
+ Spawn a player
+ '''
+ def __init__(self):
+ pygame.sprite.Sprite.__init__(self)
+ self.movex = 0
+ self.movey = 0
+ self.frame = 0
+ self.health = 10
+ self.collide_delta = 0
+ self.jump_delta = 6
+        self.score = 0
+ self.images = []
+ for i in range(1,9):
+ img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
+ img.convert_alpha()
+ img.set_colorkey(ALPHA)
+ self.images.append(img)
+ self.image = self.images[0]
+ self.rect = self.image.get_rect()
+
+ def jump(self,platform_list):
+ self.jump_delta = 0
+
+ def gravity(self):
+ self.movey += 3.2 # how fast player falls
+
+ if self.rect.y > worldy and self.movey >= 0:
+ self.movey = 0
+ self.rect.y = worldy-ty
+
+ def control(self,x,y):
+ '''
+ control player movement
+ '''
+ self.movex += x
+ self.movey += y
+
+ def update(self):
+ '''
+ Update sprite position
+ '''
+
+ self.rect.x = self.rect.x + self.movex
+ self.rect.y = self.rect.y + self.movey
+
+ # moving left
+ if self.movex < 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[self.frame//ani]
+
+ # moving right
+ if self.movex > 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[(self.frame//ani)+4]
+
+ # collisions
+ enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
+ for enemy in enemy_hit_list:
+ self.health -= 1
+ #print(self.health)
+
+ loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
+ for loot in loot_hit_list:
+ loot_list.remove(loot)
+ self.score += 1
+ print(self.score)
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.collide_delta = 0 # stop jumping
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.movey = 0
+ self.rect.y = worldy-ty-ty
+ self.collide_delta = 0 # stop jumping
+ if self.rect.y > g.rect.y:
+ self.health -=1
+ print(self.health)
+
+ if self.collide_delta < 6 and self.jump_delta < 6:
+ self.jump_delta = 6*2
+ self.movey -= 33 # how high to jump
+ self.collide_delta += 6
+ self.jump_delta += 6
+
+class Enemy(pygame.sprite.Sprite):
+ '''
+ Spawn an enemy
+ '''
+ def __init__(self,x,y,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img))
+ self.movey = 0
+ #self.image.convert_alpha()
+ #self.image.set_colorkey(ALPHA)
+ self.rect = self.image.get_rect()
+ self.rect.x = x
+ self.rect.y = y
+ self.counter = 0
+
+
+ def move(self):
+ '''
+ enemy movement
+ '''
+ distance = 80
+ speed = 8
+
+ self.movey += 3.2
+
+ if self.counter >= 0 and self.counter <= distance:
+ self.rect.x += speed
+ elif self.counter >= distance and self.counter <= distance*2:
+ self.rect.x -= speed
+ else:
+ self.counter = 0
+
+ self.counter += 1
+
+ if not self.rect.y >= worldy-ty-ty:
+ self.rect.y += self.movey
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.rect.y = worldy-ty-ty
+
+
+class Level():
+ def bad(lvl,eloc):
+ if lvl == 1:
+ enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
+ enemy_list = pygame.sprite.Group() # create enemy group
+ enemy_list.add(enemy) # add enemy to group
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return enemy_list
+
+ def loot(lvl,tx,ty):
+ if lvl == 1:
+ loot_list = pygame.sprite.Group()
+ loot = Platform(200,ty*7,tx,ty, 'loot_1.png')
+ loot_list.add(loot)
+
+ if lvl == 2:
+ print(lvl)
+
+ return loot_list
+
+ def ground(lvl,gloc,tx,ty):
+ ground_list = pygame.sprite.Group()
+ i=0
+ if lvl == 1:
+ while i < len(gloc):
+ ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
+ ground_list.add(ground)
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return ground_list
+
+ def platform(lvl,tx,ty):
+ plat_list = pygame.sprite.Group()
+ ploc = []
+ i=0
+ if lvl == 1:
+ ploc.append((20,worldy-ty-128,3))
+ ploc.append((300,worldy-ty-256,3))
+ ploc.append((500,worldy-ty-128,4))
+
+ while i < len(ploc):
+ j=0
+ while j <= ploc[i][2]:
+ plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
+ plat_list.add(plat)
+ j=j+1
+ print('run' + str(i) + str(ploc[i]))
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return plat_list
+
+'''
+Setup
+'''
+worldx = 960
+worldy = 720
+
+fps = 40 # frame rate
+ani = 4 # animation cycles
+clock = pygame.time.Clock()
+pygame.init()
+main = True
+
+BLUE = (25,25,200)
+BLACK = (23,23,23 )
+WHITE = (254,254,254)
+ALPHA = (0,255,0)
+
+world = pygame.display.set_mode([worldx,worldy])
+backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
+backdropbox = world.get_rect()
+player = Player() # spawn player
+player.rect.x = 0
+player.rect.y = 0
+player_list = pygame.sprite.Group()
+player_list.add(player)
+steps = 10
+forwardx = 600
+backwardx = 230
+
+eloc = []
+eloc = [200,20]
+gloc = []
+#gloc = [0,630,64,630,128,630,192,630,256,630,320,630,384,630]
+tx = 64 #tile size
+ty = 64 #tile size
+
+i=0
+while i <= (worldx/tx)+tx:
+ gloc.append(i*tx)
+ i=i+1
+
+enemy_list = Level.bad( 1, eloc )
+ground_list = Level.ground( 1,gloc,tx,ty )
+plat_list = Level.platform( 1,tx,ty )
+loot_list = Level.loot(1,tx,ty)
+
+'''
+Main loop
+'''
+while main == True:
+ for event in pygame.event.get():
+ if event.type == pygame.QUIT:
+ pygame.quit(); sys.exit()
+ main = False
+
+ if event.type == pygame.KEYDOWN:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ print("LEFT")
+ player.control(-steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ print("RIGHT")
+ player.control(steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ print('jump')
+
+ if event.type == pygame.KEYUP:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ player.control(steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ player.control(-steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ player.jump(plat_list)
+
+ if event.key == ord('q'):
+ pygame.quit()
+ sys.exit()
+ main = False
+
+ # scroll the world forward
+ if player.rect.x >= forwardx:
+ scroll = player.rect.x - forwardx
+ player.rect.x = forwardx
+ for p in plat_list:
+ p.rect.x -= scroll
+ for e in enemy_list:
+ e.rect.x -= scroll
+ for l in loot_list:
+ l.rect.x -= scroll
+
+ # scroll the world backward
+ if player.rect.x <= backwardx:
+ scroll = backwardx - player.rect.x
+ player.rect.x = backwardx
+ for p in plat_list:
+ p.rect.x += scroll
+ for e in enemy_list:
+ e.rect.x += scroll
+ for l in loot_list:
+ l.rect.x += scroll
+
+ world.blit(backdrop, backdropbox)
+ player.gravity() # check gravity
+ player.update()
+ player_list.draw(world) #refresh player position
+ enemy_list.draw(world) # refresh enemies
+ ground_list.draw(world) # refresh enemies
+ plat_list.draw(world) # refresh platforms
+ loot_list.draw(world) # refresh loot
+
+ for e in enemy_list:
+ e.move()
+ pygame.display.flip()
+ clock.tick(fps)
+```
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/loot-python-platformer-game
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp (Hearts, stars, and dollar signs)
+[2]: https://www.python.org/
+[3]: https://www.pygame.org/news
+[4]: https://linux.cn/article-9071-1.html
+[5]: https://linux.cn/article-10850-1.html
+[6]: https://linux.cn/article-10858-1.html
+[7]: https://linux.cn/article-10874-1.html
+[8]: https://linux.cn/article-10883-1.html
+[9]: https://linux.cn/article-11780-1.html
+[10]: https://linux.cn/article-11790-1.html
+[11]: https://linux.cn/article-11819-1.html
+[12]: https://opensource.com/sites/default/files/uploads/pygame-loot.jpg (Loot in Python platformer)
+[13]: https://linux.cn/article-10902-1.html
diff --git a/published/20200103 GNOME has a Secret- Screen Recorder. Here-s How to Use it.md b/published/202001/20200103 GNOME has a Secret- Screen Recorder. Here-s How to Use it.md
similarity index 100%
rename from published/20200103 GNOME has a Secret- Screen Recorder. Here-s How to Use it.md
rename to published/202001/20200103 GNOME has a Secret- Screen Recorder. Here-s How to Use it.md
diff --git a/published/202001/20200103 Introducing the guide to inter-process communication in Linux.md b/published/202001/20200103 Introducing the guide to inter-process communication in Linux.md
new file mode 100644
index 0000000000..91a1f7821b
--- /dev/null
+++ b/published/202001/20200103 Introducing the guide to inter-process communication in Linux.md
@@ -0,0 +1,168 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11832-1.html)
+[#]: subject: (Introducing the guide to inter-process communication in Linux)
+[#]: via: (https://opensource.com/article/20/1/inter-process-communication-linux)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+免费电子书《Linux 进程间通信指南》介绍
+======
+
+> 这本免费的电子书使经验丰富的程序员更深入了解 Linux 中进程间通信(IPC)的核心概念和机制。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/30/115631jthl0h61zhhmwpv1.jpeg)
+
+让一个软件进程与另一个软件进程进行对话,是一种微妙的平衡行为。但是,它对于应用程序而言可能是至关重要的功能,因此这是任何从事复杂项目的程序员都必须解决的问题。无论你的应用程序是需要启动由其它软件处理的工作、监视外设或网络上正在执行的操作,还是检测来自其它来源的信号,当你的软件需要依赖其自身代码之外的东西来知道下一步做什么或什么时候做时,你就需要考虑进程间通信inter-process communication(IPC)。
+
+这在 Unix 操作系统上已经由来已久了,这可能是因为人们早期预期软件会来自各种来源。按照相同的传统,Linux 提供了一些同样的 IPC 接口和一些新接口。Linux 内核具有多种 IPC 方法,[util-linux 包][2]包含了 `ipcmk`、`ipcrm`、`ipcs` 和 `lsipc` 命令,用于监视和管理 IPC 消息。
+
+### 显示进程间通信信息
+
+在尝试 IPC 之前,你应该知道系统上已经有哪些 IPC 设施。`lsipc` 命令提供了该信息。
+
+```
+RESOURCE DESCRIPTION LIMIT USED USE%
+MSGMNI Number of message queues 32000 0 0.00%
+MSGMAX Max size of message (byt.. 8192 - -
+MSGMNB Default max size of queue 16384 - -
+SHMMNI Shared memory segments 4096 79 1.93%
+SHMALL Shared memory pages 184[...] 25452 0.00%
+SHMMAX Max size of shared memory 18446744073692774399
+SHMMIN Min size of shared memory 1 - -
+SEMMNI Number of semaphore ident 32000 0 0.00%
+SEMMNS Total number of semaphore 1024000.. 0 0.00%
+SEMMSL Max semaphores per semap 32000 - -
+SEMOPM Max number of operations p 500 - -
+SEMVMX Semaphore max value 32767 - -
+```
+
+你可能注意到,这个示例清单包含三种不同类型的 IPC 机制,每种机制在 Linux 内核中都是可用的:消息(MSG)、共享内存(SHM)和信号量(SEM)。你可以用 `ipcs` 命令查看每个子系统的当前活动:
+
+```
+$ ipcs
+
+------ Message Queues Creators/Owners ---
+msqid perms cuid cgid [...]
+
+------ Shared Memory Segment Creators/Owners
+shmid perms cuid cgid [...]
+557056 700 seth users [...]
+3571713 700 seth users [...]
+2654210 600 seth users [...]
+2457603 700 seth users [...]
+
+------ Semaphore Arrays Creators/Owners ---
+semid perms cuid cgid [...]
+```
+
+这表明当前没有消息或信号量阵列,但是使用了一些共享内存段。
+
+你可以在自己的系统上运行一个简单的示例,这样就可以看到其中一种机制的实际工作方式。它涉及到一些 C 代码,所以你的系统上必须有构建工具。要从源代码构建软件,必须先安装相应的软件包,这些软件包的名称取决于发行版,因此请参考文档以获取详细信息。例如,在基于 Debian 的发行版上,你可以在 wiki 的[构建教程][3]部分了解构建需求;而在基于 Fedora 的发行版上,你可以参考该文档的[从源代码安装软件][4]部分。
+
+### 创建一个消息队列
+
+你的系统已经有一个默认的消息队列,但是你可以使用 `ipcmk` 命令创建你自己的消息队列:
+
+```
+$ ipcmk --queue
+Message queue id: 32764
+```
+
+编写一个简单的 IPC 消息发送器,为了简单起见,把队列 ID 硬编码在程序里:
+
+```
+#include <stdio.h>
+#include <string.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+struct msgbuffer {
+ char text[24];
+} message;
+
+int main() {
+ int msqid = 32764;
+ strcpy(message.text,"opensource.com");
+ msgsnd(msqid, &message, sizeof(message), 0);
+ printf("Message: %s\n",message.text);
+ printf("Queue: %d\n",msqid);
+ return 0;
+ }
+```
+
+编译该应用程序并运行:
+
+```
+$ gcc msgsend.c -o msg.bin
+$ ./msg.bin
+Message: opensource.com
+Queue: 32769
+```
+
+你刚刚向你的消息队列发送了一条消息。你可以使用 `ipcs` 命令验证这一点,并可以使用 `--queue` 选项将输出限制到该消息队列:
+
+```
+$ ipcs -q
+
+------ Message Queues --------
+key msqid owner perms used-bytes messages
+0x7b341ab9 0 seth 666 0 0
+0x72bd8410 32764 seth 644 24 1
+```
+
+你也可以检索这些消息:
+
+```
+#include <stdio.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+struct msgbuffer {
+ char text[24];
+} message;
+
+int main() {
+ int msqid = 32764;
+ msgrcv(msqid, &message, sizeof(message),0,0);
+ printf("\nQueue: %d\n",msqid);
+ printf("Got this message: %s\n", message.text);
+ msgctl(msqid,IPC_RMID,NULL);
+    return 0;
+}
+```
+
+编译并运行:
+
+```
+$ gcc get.c -o get.bin
+$ ./get.bin
+
+Queue: 32764
+Got this message: opensource.com
+```
+
+### 下载这本电子书
+
+这只是 Marty Kalin 的《[Linux 进程间通信指南][5]》中课程的一个例子,可从 Opensource.com 下载的这本最新免费(且 CC 授权)的电子书。在短短的几节课中,你将从消息队列、共享内存和信号量、套接字、信号等中了解 IPC 的 POSIX 方法。认真阅读 Marty 的书,你将成为一个博识的程序员。而这不仅适用于经验丰富的编码人员,如果你编写的只是 shell 脚本,那么你将拥有有关管道(命名和未命名)和共享文件的大量实践知识,以及使用共享文件或外部消息队列时需要了解的重要概念。
+
+如果你对制作具有动态和具有系统感知的优秀软件感兴趣,那么你需要了解 IPC。让[这本书][5]做你的向导。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/inter-process-communication-linux
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_inter-process_communication_linux_520x292.png?itok=hPoen7oI (Inter-process Communication in Linux)
+[2]: https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/
+[3]: https://wiki.debian.org/BuildingTutorial
+[4]: https://docs.pagure.org/docs-fedora/installing-software-from-source.html
+[5]: https://opensource.com/downloads/guide-inter-process-communication-linux
diff --git a/published/20200103 My Raspberry Pi retrospective- 6 projects and more.md b/published/202001/20200103 My Raspberry Pi retrospective- 6 projects and more.md
similarity index 100%
rename from published/20200103 My Raspberry Pi retrospective- 6 projects and more.md
rename to published/202001/20200103 My Raspberry Pi retrospective- 6 projects and more.md
diff --git a/published/20200105 PaperWM- tiled window management for GNOME.md b/published/202001/20200105 PaperWM- tiled window management for GNOME.md
similarity index 100%
rename from published/20200105 PaperWM- tiled window management for GNOME.md
rename to published/202001/20200105 PaperWM- tiled window management for GNOME.md
diff --git a/translated/tech/20200106 How to write a Python web API with Pyramid and Cornice.md b/published/202001/20200106 How to write a Python web API with Pyramid and Cornice.md
similarity index 68%
rename from translated/tech/20200106 How to write a Python web API with Pyramid and Cornice.md
rename to published/202001/20200106 How to write a Python web API with Pyramid and Cornice.md
index ae3afc22af..1ebcc920e9 100644
--- a/translated/tech/20200106 How to write a Python web API with Pyramid and Cornice.md
+++ b/published/202001/20200106 How to write a Python web API with Pyramid and Cornice.md
@@ -1,20 +1,22 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11788-1.html)
[#]: subject: (How to write a Python web API with Pyramid and Cornice)
[#]: via: (https://opensource.com/article/20/1/python-web-api-pyramid-cornice)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
如何使用 Pyramid 和 Cornice 编写 Python Web API
======
-使用 Pyramid 和 Cornice 构建可扩展的 RESTful Web 服务。
-![Searching for code][1]
-[Python][2] 是一种高级的,面向对象的编程语言,它以其简单的语法而闻名。它一直是构建 RESTful API 的顶级编程语言之一。
+> 使用 Pyramid 和 Cornice 构建和描述可扩展的 RESTful Web 服务。
-[Pyramid][3] 是一个 Python Web 框架,旨在随着应用的扩展而扩展:这对于简单的应用来说很简单,对于大型、复杂的应用也可以做到。Pyramid 为 PyPI (Python 软件包索引)提供了强大的支持。[Cornice][4] 提供了使用 Pyramid 构建 RESTful Web 服务的助手。
+![](https://img.linux.net.cn/data/attachment/album/202001/16/120352fcgeeccvfgt8sfvc.jpg)
+
+[Python][2] 是一种高级的、面向对象的编程语言,它以其简单的语法而闻名。它一直是构建 RESTful API 的顶级编程语言之一。
+
+[Pyramid][3] 是一个 Python Web 框架,旨在随着应用的扩展而扩展:这可以让简单的应用很简单,也可以增长为大型、复杂的应用。此外,Pyramid 为 PyPI (Python 软件包索引)提供了强大的支持。[Cornice][4] 为使用 Pyramid 构建和描述 RESTful Web 服务提供了助力。
本文将使用 Web 服务的例子来获取名人名言,来展示如何使用这些工具。
@@ -22,7 +24,6 @@
首先为你的应用创建一个虚拟环境,并创建一个文件来保存代码:
-
```
$ mkdir tutorial
$ cd tutorial
@@ -36,7 +37,6 @@ $ source env/bin/activate
使用以下命令导入这些模块:
-
```
from pyramid.config import Configurator
from cornice import Service
@@ -44,8 +44,7 @@ from cornice import Service
### 定义服务
-将引用服务定义为 **Service** 对象:
-
+将引用服务定义为 `Service` 对象:
```
QUOTES = Service(name='quotes',
@@ -55,8 +54,7 @@ QUOTES = Service(name='quotes',
### 编写引用逻辑
-到目前为止,这仅支持 **GET** 获取名言。用 **QUOTES.get** 装饰函数。这是将逻辑绑定到 REST 服务的方法:
-
+到目前为止,这仅支持获取名言。用 `QUOTES.get` 装饰函数。这是将逻辑绑定到 REST 服务的方法:
```
@QUOTES.get()
@@ -72,14 +70,13 @@ def get_quote(request):
}
```
-请注意,与其他框架不同,装饰器_不能_更改 **get_quote** 函数。如果导入此模块,你仍然可以定期调用该函数并检查结果。
+请注意,与其他框架不同,装饰器*不会*更改 `get_quote` 函数。如果导入此模块,你仍然可以定期调用该函数并检查结果。
在为 Pyramid RESTful 服务编写单元测试时,这很有用。
### 定义应用对象
-最后,使用 **scan** 查找所有修饰的函数并将其添加到配置中:
-
+最后,使用 `scan` 查找所有修饰的函数并将其添加到配置中:
```
with Configurator() as config:
@@ -94,14 +91,12 @@ with Configurator() as config:
我使用 Twisted 的 WSGI 服务器运行该应用,但是如果需要,你可以使用任何其他 [WSGI][5] 服务器,例如 Gunicorn 或 uWSGI。
-
```
-`(env)$ python -m twisted web --wsgi=main.application`
+(env)$ python -m twisted web --wsgi=main.application
```
默认情况下,Twisted 的 WSGI 服务器运行在端口 8080 上。你可以使用 [HTTPie][6] 测试该服务:
-
```
(env) $ pip install httpie
...
@@ -130,7 +125,7 @@ X-Content-Type-Options: nosniff
### 为什么要使用 Pyramid?
-Pyramid 不是最受欢迎的框架,但它已在 [PyPI][7] 等一些引人注目的项目中使用。我喜欢 Pyramid,因为它是认真对待单元测试的框架之一:因为装饰器不会修改函数并且没有线程局部变量,所以可以直接从单元测试中调用函数。例如,需要访问数据库的函数将从通过 **request.config** 传递的 **request.config** 对象中获取它。这允许单元测试人员将模拟(或真实)数据库对象放入请求中,而不用仔细设置全局变量,线程局部变量或其他特定于框架的东西。
+Pyramid 并不是最受欢迎的框架,但它已在 [PyPI][7] 等一些引人注目的项目中使用。我喜欢 Pyramid,因为它是认真对待单元测试的框架之一:因为装饰器不会修改函数并且没有线程局部变量,所以可以直接从单元测试中调用函数。例如,需要访问数据库的函数将从通过 `request.config` 传递的 `request.config` 对象中获取它。这允许单元测试人员将模拟(或真实)数据库对象放入请求中,而不用仔细设置全局变量、线程局部变量或其他特定于框架的东西。
如果你正在寻找一个经过测试的库来构建你接下来的 API,请尝试使用 Pyramid。你不会失望的。
@@ -140,8 +135,8 @@ via: https://opensource.com/article/20/1/python-web-api-pyramid-cornice
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/20200107 Generating numeric sequences with the Linux seq command.md b/published/202001/20200107 Generating numeric sequences with the Linux seq command.md
similarity index 58%
rename from translated/tech/20200107 Generating numeric sequences with the Linux seq command.md
rename to published/202001/20200107 Generating numeric sequences with the Linux seq command.md
index 0b12dfb508..c04fc6ff3a 100644
--- a/translated/tech/20200107 Generating numeric sequences with the Linux seq command.md
+++ b/published/202001/20200107 Generating numeric sequences with the Linux seq command.md
@@ -1,18 +1,20 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11784-1.html)
[#]: subject: (Generating numeric sequences with the Linux seq command)
[#]: via: (https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
使用 Linux seq 命令生成数字序列
======
-Linux seq 命令可以以闪电般的速度生成数字列表。它易于使用而且灵活。
-[Jamie][1] [(CC BY 2.0)][2]
-在 Linux 中生成数字列表的最简单方法之一是使用 **seq**(sequence)命令。最简单的形式是,**seq** 接收一个数字,并输出从 1 到该数字的列表。例如:
+![](https://img.linux.net.cn/data/attachment/album/202001/15/112717drpb9nuwss84xebu.jpg)
+
+> Linux 的 seq 命令可以以闪电般的速度生成数字列表,而且它也易于使用而且灵活。
+
+在 Linux 中生成数字列表的最简单方法之一是使用 `seq`(系列sequence)命令。其最简单的形式是,`seq` 接收一个数字参数,并输出从 1 到该数字的列表。例如:
```
$ seq 5
@@ -23,7 +25,7 @@ $ seq 5
5
```
-除非另有指定,否则 **seq** 始终以 1 开头。你可以在最终数字前面插上不同数字开始。
+除非另有指定,否则 `seq` 始终以 1 开头。你可以在最终数字前面插上不同数字开始一个序列。
```
$ seq 3 5
@@ -34,7 +36,7 @@ $ seq 3 5
### 指定增量
-你还可以指定增量。假设你要列出 3 的倍数。指定起点(在此示例中为第一个 3 ),增量(第二个 3)和终点(18)。
+你还可以指定增量步幅。假设你要列出 3 的倍数。指定起点(在此示例中为第一个 3 ),增量(第二个 3)和终点(18)。
```
$ seq 3 3 18
@@ -58,9 +60,7 @@ $ seq 18 -3 3
3
```
-**seq** 命令也非常快。你或许可以在 10 秒内生成一百万个数字的列表。
-
-Advertisement
+`seq` 命令也非常快。你或许可以在 10 秒内生成一百万个数字的列表。
```
$ time seq 1000000
@@ -78,9 +78,9 @@ user 0m0.020s
sys 0m0.899s
```
-## 使用分隔符
+### 使用分隔符
-另一个非常有用的选项是使用分隔符。你可以插入逗号,冒号或其他一些字符,而不是在每行上列出单个数字。-s 选项后跟要使用的字符。
+另一个非常有用的选项是使用分隔符。你可以插入逗号、冒号或其他一些字符,而不是在每行上列出单个数字。`-s` 选项后跟要使用的字符。
```
$ seq -s: 3 3 18
@@ -96,21 +96,21 @@ $ seq -s' ' 3 3 18
### 开始数学运算
-从生成数字序列到进行数学运算似乎是一个巨大的飞跃,但是有了正确的分隔符,**seq** 可以轻松地传递给 **bc** 进行计算。例如:
+从生成数字序列到进行数学运算似乎是一个巨大的飞跃,但是有了正确的分隔符,`seq` 可以轻松地传递给 `bc` 进行计算。例如:
```
$ seq -s* 5 | bc
120
```
-该命令中发生了什么?让我们来看看。首先,**seq** 生成一个数字列表,并使用 \* 作为分隔符。
+该命令中发生了什么?让我们来看看。首先,`seq` 生成一个数字列表,并使用 `*` 作为分隔符。
```
$ seq -s* 5
1*2*3*4*5
```
-然后,它将字符串传递给计算器 (**bc**),计算器立即将数字相乘。你可以在不到一秒的时间内进行相当广泛的计算。
+然后,它将字符串传递给计算器(`bc`),计算器立即将数字相乘。你可以在不到一秒的时间内进行相当庞大的计算。
```
$ time seq -s* 117 | bc
@@ -125,15 +125,13 @@ sys 0m0.000s
### 局限性
-你只能选择一个分隔符,因此计算将非常有限。单独使用 **bc** 可进行更复杂的数学运算。此外,**seq** 仅适用于数字。要生成单个字母序列,请改用如下命令:
+你只能选择一个分隔符,因此计算将非常有限。而单独使用 `bc` 可进行更复杂的数学运算。此外,`seq` 仅适用于数字。要生成单个字母的序列,请改用如下命令:
```
$ echo {a..g}
a b c d e f g
```
-加入 [Facebook][5] 和 [LinkedIn][6] 上的 Network World 社区,评论热门主题。
-
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html
@@ -141,7 +139,7 @@ via: https://www.networkworld.com/article/3511954/generating-numeric-sequences-w
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/202001/20200107 How piwheels will save Raspberry Pi users time in 2020.md b/published/202001/20200107 How piwheels will save Raspberry Pi users time in 2020.md
new file mode 100644
index 0000000000..2fc0aafef9
--- /dev/null
+++ b/published/202001/20200107 How piwheels will save Raspberry Pi users time in 2020.md
@@ -0,0 +1,126 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11786-1.html)
+[#]: subject: (How piwheels will save Raspberry Pi users time in 2020)
+[#]: via: (https://opensource.com/article/20/1/piwheels)
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
+
+piwheels 是如何为树莓派用户节省时间的
+======
+
+> 通过为树莓派提供预编译的 Python 包,piwheels 项目为用户节省了大量的时间和精力。
+
+![rainbow colors on pinwheels in the sun][1]
+
+piwheels 自动为 Python 包索引 [PyPI][2] 上的所有项目构建 Python wheels(预编译的 Python 包),并且是在树莓派硬件上构建,以确保其兼容性。这意味着,当树莓派用户想要使用 `pip` 安装一个 Python 库时,他们会得到一个现成的、编译好的版本,并保证可以在树莓派上良好地工作。这使得树莓派用户更容易入门并开始他们的项目。
+
+![Piwheels logo][3]
+
+当我在 2018 年 10 月写 [piwheels:为树莓派提供快速 Python 包安装][4]时,piwheels 项目已经运行一年了,并且已经证明它为树莓派用户节省了大量的时间和精力。但在这个项目进入第二年后,它在为树莓派提供预编译的 Python 包方面做了更多的工作。
+
+![Raspberry Pi 4][5]
+
+### 它是怎么工作的
+
+树莓派的主要操作系统 [Raspbian][6] 已经预先配置为使用 piwheels,所以用户不需要做任何特殊的事情就可以使用 piwheels。
+
+配置文件(位于 `/etc/pip.conf`)告诉 `pip` 使用 [piwheels.org][7] 作为*附加索引*,因此 `pip` 会首先查找 PyPI,然后再查找 piwheels。piwheels 的网站托管在一个树莓派 3 上,该项目构建的所有 wheels 也都托管在这个树莓派上。它每月提供 100 多万个软件包——这对于一台 35 美元的电脑来说还真不赖!
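+
+在 Raspbian 上,这个配置文件的内容大致如下(这里按照 piwheels 官方说明的写法给出示意,具体请以你系统中的 `/etc/pip.conf` 为准):
+
+```
+[global]
+extra-index-url=https://www.piwheels.org/simple
+```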
+
+除了提供网站服务的主树莓派以外,piwheels 项目还使用其他七个树莓派来构建软件包。其中一些运行 Raspbian Jessie,为 Python 3.4 构建 wheels;另外一些运行 Raspbian Stretch 为 Python 3.5 构建;还有一些运行 Raspbian Buster 为 Python 3.7 构建。该项目通常不支持其他 Python 版本。还有一个“合适的服务器”——一台运行 Postgres 数据库的虚拟机。由于树莓派 3 只有 1GB 的内存,所以(非常大的)数据库不能在其上很好地运行,所以我们把它移到了虚拟机上。带 4GB 内存的树莓派 4 可能是合用的,所以我们将来可能会用到它。
+
+这些树莓派都在“派云”中的 IPv6 网络上——这是一项由总部位于剑桥的托管公司 [Mythic Beasts][8] 提供的卓越服务。
+
+![Mythic Beasts hosting service][9]
+
+### 下载和统计趋势
+
+每次下载 piwheels 文件时,它都会记录在数据库中。这提供了对什么包最受欢迎以及人们使用什么 Python 版本和操作系统的统计。我们没有太多来自用户代理的信息,但是因为树莓派 1/Zero 的架构显示为 “armv6”,树莓派 2/3/4 显示为 “armv7”,所以我们可以将它们区分开来。
+
+截至 2019 年 12 月中旬,从 piwheels 下载的软件包超过 1400 万个,仅 2019 年就有近 900 万个。
+
+自项目开始以来最受欢迎的 10 个软件包是:
+
+ 1. [pycparser][10](821,060 个下载)
+ 2. [PyYAML][11](366,979 个下载)
+ 3. [numpy][12](354,531 个下载)
+ 4. [cffi][13](336,982 个下载)
+ 5. [MarkupSafe][14](318,878 个下载)
+ 6. [future][15](282,349 个下载)
+ 7. [aiohttp][16](277,046 个下载)
+ 8. [cryptography][17](276,167 个下载)
+ 9. [home-assistant-frontend][18](266,667 个下载)
+ 10. [multidict][19](256,185 个下载)
+
+请注意,许多纯 Python 包,如 [urllib3][20],都是作为 PyPI 上的 wheels 提供的;因为这些是跨平台兼容的,所以通常不会从 piwheels 下载,因为 PyPI 优先。
+
+随着时间的推移,我们也看到了使用哪些 Python 版本的趋势。这里显示了 Raspbian Buster 发布时从 3.5 版快速升级到了 Python 3.7:
+
+![Data from piwheels on Python versions used over time][21]
+
+你可以在我们的这篇 [统计博文][22] 看到更多的统计趋势。
+
+### 节省的时间
+
+每个包构建都被记录在数据库中,并且每个下载也被存储。交叉引用下载数和构建时间显示了节省了多少时间。一个例子是 numpy —— 最新版本大约需要 11 分钟来构建。
+
+迄今为止,piwheels 项目已经为用户节省了总计超过 165 年的构建时间。按照目前的使用率,piwheels 项目每天可以为用户节省 200 多天的构建时间。
+
+除了节省构建时间,拥有预编译的 wheels 也意味着人们不必安装各种开发工具来构建包。有些包还需要其他 apt 包来访问共享库,弄清楚你到底需要哪些可能会很痛苦,所以我们也让这一步变得容易了:首先,我们摸索出了这个过程,并[在博客上记录了下来][23];然后,我们把这个逻辑添加到了构建过程中,这样当构建一个 wheel 时,它的依赖关系就会被自动计算出来,并添加到该包的项目页面中:
+
+![numpy dependencies][24]
+
+### piwheels 的下一步是什么?
+
+今年,我们推出了项目页面(例如,[numpy][25]),这是一种非常有用的方式,可以让人们以人类可读的方式查找项目信息。它们还使人们更容易报告问题,例如 piwheels 中缺少一个项目,或者他们下载的包有问题。
+
+2020 年初,我们计划对 piwheels 项目进行一些升级,以启用新的 JSON 应用编程接口,这样你就可以自动检查哪些版本可用,查找项目的依赖关系,等等。
+
+下一次 Debian/Raspbian 升级要到 2021 年年中才会发生,所以在那之前我们不会开始为任何新的 Python 版本构建 wheels。
+
+你可以在这个项目的[博客][26]上读到更多关于 piwheels 的信息,我将在 2020 年初在那里发表一篇 2019 年的综述。你也可以在推特上关注 [@piwheels][27],在那里你可以看到每日和每月的统计数据以及任何达到的里程碑。
+
+当然,piwheels 是一个开源项目,你可以在 [GitHub][28] 上看到整个项目源代码。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/piwheels
+
+作者:[Ben Nuttall][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bennuttall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V (rainbow colors on pinwheels in the sun)
+[2]: https://pypi.org/
+[3]: https://opensource.com/sites/default/files/uploads/piwheels.png (Piwheels logo)
+[4]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
+[5]: https://opensource.com/sites/default/files/uploads/raspberry-pi-4_0.jpg (Raspberry Pi 4)
+[6]: https://www.raspberrypi.org/downloads/raspbian/
+[7]: http://piwheels.org
+[8]: https://www.mythic-beasts.com/order/rpi
+[9]: https://opensource.com/sites/default/files/uploads/pi-cloud.png (Mythic Beasts hosting service)
+[10]: https://www.piwheels.org/project/pycparser
+[11]: https://www.piwheels.org/project/PyYAML
+[12]: https://www.piwheels.org/project/numpy
+[13]: https://www.piwheels.org/project/cffi
+[14]: https://www.piwheels.org/project/MarkupSafe
+[15]: https://www.piwheels.org/project/future
+[16]: https://www.piwheels.org/project/aiohttp
+[17]: https://www.piwheels.org/project/cryptography
+[18]: https://www.piwheels.org/project/home-assistant-frontend
+[19]: https://www.piwheels.org/project/multidict
+[20]: https://piwheels.org/project/urllib3/
+[21]: https://opensource.com/sites/default/files/uploads/pyvers2019.png (Data from piwheels on Python versions used over time)
+[22]: https://blog.piwheels.org/piwheels-stats-for-2019/
+[23]: https://blog.piwheels.org/how-to-work-out-the-missing-dependencies-for-a-python-package/
+[24]: https://opensource.com/sites/default/files/uploads/numpy-deps.png (numpy dependencies)
+[25]: https://www.piwheels.org/project/numpy/
+[26]: https://blog.piwheels.org/
+[27]: https://twitter.com/piwheels
+[28]: https://github.com/piwheels/
diff --git a/published/202001/20200108 How to setup multiple monitors in sway.md b/published/202001/20200108 How to setup multiple monitors in sway.md
new file mode 100644
index 0000000000..153933d6be
--- /dev/null
+++ b/published/202001/20200108 How to setup multiple monitors in sway.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11809-1.html)
+[#]: subject: (How to setup multiple monitors in sway)
+[#]: via: (https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/)
+[#]: author: (arte219 https://fedoramagazine.org/author/arte219/)
+
+如何在 Sway 中设置多个显示器
+======
+
+![][1]
+
+Sway 是一种平铺式 Wayland 合成器,具有与 [i3 X11 窗口管理器][2]相同的功能、外观和工作流程。由于 Sway 使用的是 Wayland 而不是 X11,因此用来设置 X11 的那些工具并不总是能用,其中就包括 `xrandr` 之类在 X11 窗口管理器或桌面中用于设置显示器的工具。这就是为什么必须通过编辑 Sway 配置文件来设置显示器的原因,这也是本文的主题。
+
+### 获取你的显示器 ID
+
+首先,你必须获得 Sway 用来指代显示器的名称。你可以通过运行以下命令进行操作:
+
+```
+$ swaymsg -t get_outputs
+```
+
+你将获得所有显示器的相关信息,每个显示器都用空行分隔。
+
+你必须查看每个部分的第一行,以及 `Output` 之后的内容。例如,当你看到 `Output DVI-D-1 'Philips Consumer Electronics Company'` 之类的行时,该输出的 ID 就是 `DVI-D-1`。请记下这些 ID 以及它们对应的物理显示器。
+
+### 编辑配置文件
+
+如果你之前没有编辑过 Sway 配置文件,则必须通过运行以下命令将其复制到主目录中:
+
+```
+cp -r /etc/sway/config ~/.config/sway/config
+```
+
+现在,默认配置文件位于 `~/.config/sway` 中,名为 `config`。你可以使用任何文本编辑器进行编辑。
+
+现在你需要做一点数学。想象有一个网格,其原点在左上角。X 和 Y 坐标的单位是像素。Y 轴是反向的(向下为正)。这意味着,例如,如果你从原点开始,向右移动 100 像素,向下移动 80 像素,则坐标将为 `(100, 80)`。
+
+你必须计算出每台显示器在这个网格上的最终位置。显示器的位置由其左上角的像素指定。例如,如果我们要使用名称为 “HDMI1” 且分辨率为 1920×1080 的显示器,并在其右侧使用名称为 “eDP1” 且分辨率为 1600×900 的笔记本电脑显示器,则必须在配置文件中键入:
+
+```
+output HDMI1 pos 0 0
+output eDP1 pos 1920 0
+```
+
+你还可以使用 `res` 选项手动指定分辨率:
+
+```
+output HDMI1 pos 0 0 res 1920x1080
+output eDP1 pos 1920 0 res 1600x900
+```
+
+### 将工作空间绑定到显示器上
+
+在多台显示器上使用 Sway 时,工作区的管理可能会有些棘手。幸运的是,你可以将工作区绑定到特定的显示器上,因此你可以轻松地切换到该显示器并更有效地使用它。这只需通过配置文件中的 `workspace` 命令即可完成。例如,如果要把工作区 1 和 2 绑定到显示器 “DVI-D-1”,把工作区 8 和 9 绑定到显示器 “HDMI-A-1”,则可以使用以下方法:
+
+```
+workspace 1 output DVI-D-1
+workspace 2 output DVI-D-1
+```
+
+```
+workspace 8 output HDMI-A-1
+workspace 9 output HDMI-A-1
+```
+
+就是这样,这就是在 Sway 中设置多显示器的基础知识。更详细的指南可以在 Sway 的文档中找到。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/
+
+作者:[arte219][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/arte219/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/sway-multiple-monitors-816x345.png
+[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
diff --git a/translated/news/20200109 Huawei-s Linux Distribution openEuler is Available Now.md b/published/202001/20200109 Huawei-s Linux Distribution openEuler is Available Now.md
similarity index 63%
rename from translated/news/20200109 Huawei-s Linux Distribution openEuler is Available Now.md
rename to published/202001/20200109 Huawei-s Linux Distribution openEuler is Available Now.md
index ea6b5d24a5..891902da7e 100644
--- a/translated/news/20200109 Huawei-s Linux Distribution openEuler is Available Now.md
+++ b/published/202001/20200109 Huawei-s Linux Distribution openEuler is Available Now.md
@@ -1,61 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11787-1.html)
[#]: subject: (Huawei’s Linux Distribution openEuler is Available Now!)
[#]: via: (https://itsfoss.com/openeuler/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-华为的linux发行版 openEuler 可以使用了!
+外媒:华为的 Linux 发行版 openEuler 可以使用了!
======
-华为提供了一个基于 CentOS 的企业 Linux 发行版 EulerOS 。最近,华为发布了一个名为 [openEuler][1] 的 EulerOS 社区版。
+> 华为提供了一个基于 CentOS 的企业级 Linux 发行版 EulerOS。最近,华为发布了一个名为 [openEuler][1] 的 EulerOS 社区版。
-openEuler 的源代码也被发布了。你在微软旗下的 GitHub 上找不到它——源代码可以在 [Gitee][2] 找到,这是一个中文的 [GitHub 的替代品][3] 。
+openEuler 的源代码也一同发布了。你在微软旗下的 GitHub 上找不到它——源代码可以在 [Gitee][2] 找到,这是一个中文的 [GitHub 的替代品][3]。
-它有两个独立的存储库,一个用于存储[源代码][2],另一个作为[包源][4] 存储有助于构建操作系统的软件包。
+它有两个独立的存储库,一个用于存储[源代码][2];另一个作为[软件包的源代码][4],存储有助于构建该操作系统的软件包。
-![][5]
+![][5]
-openuler 基础架构团队分享了他们使源代码可用的经验:
+openEuler 基础架构团队分享了他们使源代码可用的经验:
->我们现在很兴奋。很难想象我们会管理成千上万的仓库。为了确保它们能被成功地编译,我们要感谢所有参与贡献的人。
+> 我们现在很兴奋。很难想象我们会管理成千上万的仓库。为了确保它们能被成功地编译,我们要感谢所有参与贡献的人。
### openEuler 是基于 CentOS 的 Linux 发行版
与 EulerOS 一样,openEuler OS 也是基于 [CentOS][6],但华为技术有限公司为企业应用进一步开发了该操作系统。
-它是为 ARM64 架构的服务器量身定做的,同时华为声称已经做了一些改变来提高其性能。你可以在[华为发展博客][7]上了解更多。
+它是为 ARM64 架构的服务器量身定做的,同时华为声称已经做了一些改变来提高其性能。你可以在[华为开发博客][7]上了解更多。
![][8]
-
目前,根据 openEuler 的官方声明,有 50 多名贡献者为 openEuler 贡献了近 600 个提交。
-贡献者使源代码对社区可用成为可能。
+贡献者们使源代码对社区可用成为可能。
-值得注意的是,存储库还包括两个与之相关的新项目(或子项目),[iSulad][9] 和 **A-Tune**。
+值得注意的是,存储库还包括两个与之相关的新项目(或子项目),[iSulad][9] 和 A-Tune。
-A-Tune 是一个基于 AI 的操作系统调优软件, iSulad 是一个轻量级的容器运行时守护进程,如[Gitee][2]中提到的那样,它是为物联网和云基础设施设计的。
+A-Tune 是一个基于 AI 的操作系统调优软件,iSulad 是一个轻量级的容器运行时守护进程,如在 [Gitee][2] 中提到的那样,它是为物联网和云基础设施设计的。
-另外,官方的[公告][10]提到,这些系统是在华为云上通过脚本自动化构建的。这确实十分有趣。
+另外,官方的[公告][10]提到,这些系统是在华为云上通过脚本自动构建的。这确实十分有趣。
### 下载 openEuler
![][11]
-到目前为止,你找不到它的英文文档,所以你必须等待或选择通过[文档][12]帮助他们。
+到目前为止,你找不到它的英文文档,所以你必须等待或选择通过(贡献)[文档][12]来帮助他们。
你可以直接从它的[官方网站][13]下载 ISO 来测试它:
-[下载 openEuler ][13]
+- [下载 openEuler ][13]
### 你认为华为的 openEuler 怎么样?
据 cnTechPost 报道,华为曾宣布 EulerOS 将以新名字 openEuler 成为开源软件。
-目前还不清楚 openEuler 是否会取代 EulerOS ,或者两者会像 CentOS (社区版)和 Red Hat (商业版)一样同时存在。
+目前还不清楚 openEuler 是否会取代 EulerOS ,或者两者会像 CentOS(社区版)和 Red Hat(商业版)一样同时存在。
我还没有测试过它,所以我不能说 openEuler 是否适合英文用户。
@@ -68,7 +67,7 @@ via: https://itsfoss.com/openeuler/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20200110 Bash Script to Send eMail With a List of User Accounts Expiring in -X- Days.md b/published/202001/20200110 Bash Script to Send eMail With a List of User Accounts Expiring in -X- Days.md
similarity index 100%
rename from published/20200110 Bash Script to Send eMail With a List of User Accounts Expiring in -X- Days.md
rename to published/202001/20200110 Bash Script to Send eMail With a List of User Accounts Expiring in -X- Days.md
diff --git a/published/202001/20200111 Sync files across multiple devices with Syncthing.md b/published/202001/20200111 Sync files across multiple devices with Syncthing.md
new file mode 100644
index 0000000000..65959e3d59
--- /dev/null
+++ b/published/202001/20200111 Sync files across multiple devices with Syncthing.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11793-1.html)
+[#]: subject: (Sync files across multiple devices with Syncthing)
+[#]: via: (https://opensource.com/article/20/1/sync-files-syncthing)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 Syncthing 在多个设备间同步文件
+======
+
+> 2020 年,在我们的 20 个使用开源提升生产力的系列文章中,首先了解如何使用 Syncthing 同步文件。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/18/123416rebvs7sjwm6c889y.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 Synthing 同步文件
+
+设置新机器很麻烦。我们都有在机器之间复制的“标准设置”。多年来,我使用了很多方法来使它们在计算机之间同步。在过去(这会告诉你我年纪有多大了),曾经是软盘、然后是 Zip 磁盘、U 盘、SCP、Rsync、Dropbox、ownCloud,你想到的都试过。但这些似乎对我都不够好。
+
+然后我偶然发现了 [Syncthing][2]。
+
+![syncthing console][3]
+
+Syncthing 是一个轻量级的点对点文件同步系统。你不需要为服务付费,也不需要第三方服务器,而且速度很快。以我的经验,比文件同步中的许多“大牌”要快得多。
+
+Syncthing 可在 Linux、MacOS、Windows 和多种 BSD 中使用。还有一个 Android 应用(但尚无官方 iOS 版本)。以上所有平台都有方便的图形化前端(尽管我不会在这里介绍)。在 Linux 上,大多数发行版都有可用的软件包,因此安装非常简单。
+
+![Installing Syncthing on Ubuntu][4]
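+
+比如在基于 Debian/Ubuntu 的系统上(这里仅以它为例,包名在其它发行版上可能略有不同),大致只需:
+
+```
+sudo apt install syncthing   # 从发行版软件仓库安装
+syncthing                    # 启动它,首次运行会自动打开 Web 配置界面
+```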
+
+首次启动 Syncthing 时,它将启动 Web 浏览器以配置守护程序。第一台计算机上没有太多要做,但是这是一个很好的机会来介绍一下用户界面 (UI)。最重要的是在右上方的 “Actions” 菜单下的 “System ID”。
+
+![Machine ID][5]
+
+设置第一台计算机后,请在第二台计算机上重复安装。在 UI 中,右下方将显示一个按钮,名为 “Add Remote Device”。单击该按钮,你将会看到一个要求输入 “Device ID and a Name” 的框。从第一台计算机上复制并粘贴 “Device ID”,然后单击 “Save”。
+
+你应该会在第一台上看到一个请求添加第二台的弹出窗口。接受后,新机器将显示在第一台机器的右下角。与第二台计算机共享默认目录。单击 “Default Folder”,然后单击 “Edit” 按钮。弹出窗口的顶部有四个链接。单击 “Sharing”,然后选择第二台计算机。单击 “Save”,然后查看第二台计算机。你会看到一个接受共享目录的提示。接受后,它将开始在两台计算机之间同步文件。
+
+![Sharing a directory in Syncthing][6]
+
+测试从一台计算机上复制文件到默认目录(“/你的家目录/Share”)。它应该很快会在另一台上出现。
+
+你可以根据需要添加任意数量的目录,这非常方便。如你在第一张图中所看到的,我有一个用于保存配置的 `myconfigs` 文件夹。当我买了一台新机器时,我只需安装 Syncthing,如果我在一台机器上调整了配置,我不必更新所有,它会自动更新。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/sync-files-syncthing
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
+[2]: https://syncthing.net/
+[3]: https://opensource.com/sites/default/files/uploads/productivity_1-1.png (syncthing console)
+[4]: https://opensource.com/sites/default/files/uploads/productivity_1-2.png (Installing Syncthing on Ubuntu)
+[5]: https://opensource.com/sites/default/files/uploads/productivity_1-3.png (Machine ID)
+[6]: https://opensource.com/sites/default/files/uploads/productivity_1-4.png (Sharing a directory in Syncthing)
diff --git a/published/202001/20200112 Use Stow for configuration management of multiple machines.md b/published/202001/20200112 Use Stow for configuration management of multiple machines.md
new file mode 100644
index 0000000000..cf673d3e6b
--- /dev/null
+++ b/published/202001/20200112 Use Stow for configuration management of multiple machines.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11796-1.html)
+[#]: subject: (Use Stow for configuration management of multiple machines)
+[#]: via: (https://opensource.com/article/20/1/configuration-management-stow)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 Stow 管理多台机器配置
+======
+> 2020 年,在我们的 20 个使用开源提升生产力的系列文章中,让我们了解如何使用 Stow 跨机器管理配置。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/18/141330jdcjalqzjal84a03.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 Stow 管理符号链接
+
+昨天,我解释了如何使用 [Syncthing][2] 在多台计算机上保持文件同步。但是,这只是我用来保持配置一致性的工具之一。还有另一个表面上看起来更简单的工具:[Stow][3]。
+
+![Stow help screen][4]
+
+Stow 管理符号链接。默认情况下,它会把某个目录中的文件以符号链接的形式链接到其上一级目录中。它也有用来设置源目录和目标目录的选项,但我通常不使用它们。
+
+正如我在 Syncthing 的[文章][5] 中提到的,我使用 Syncthing 来保持 `myconfigs` 目录在我所有的计算机上一致。`myconfigs` 目录下面有多个子目录。每个子目录包含我经常使用的应用之一的配置文件。
+
+![myconfigs directory][6]
+
+在每台计算机上,我进入 `myconfigs` 目录,并运行 `stow -S <目录名称>` 以将目录中的文件符号链接到我的家目录。例如,在 `vim` 目录下,我有 `.vimrc` 和 `.vim` 目录。在每台机器上,我运行 `stow -S vim` 来创建符号链接 `~/.vimrc` 和 `~/.vim`。当我在一台计算机上更改 Vim 配置时,它会应用到我的所有机器上。
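+
+以 `vim` 目录为例,大致是这样的(这里假设 `myconfigs` 就位于家目录下,正因如此,Stow 默认会把链接建到上一级目录,也就是家目录):
+
+```
+cd ~/myconfigs
+ls vim            # 假设其中包含 .vimrc 和 .vim/ 目录
+stow -S vim       # 在家目录中创建 ~/.vimrc 和 ~/.vim 符号链接
+ls -l ~/.vimrc    # 链接指向 myconfigs/vim/.vimrc
+```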
+
+然而,有时候,我需要一些特定于机器的配置,这就是为什么我有如 `msmtp-personal` 和 `msmtp-elastic`(我的雇主)这样的目录。由于我的 `msmtp` SMTP 客户端需要知道用哪台服务器来中继电子邮件,而每台服务器都有不同的设置和凭据,所以我会使用 `-D` 标志取消链接其中一个目录,接着再链接另外一个,如下面的示例所示。
+
+![Unstow one, stow the other][7]
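+
+切换的过程大致如下(目录名来自上文,具体以你自己的配置为准):
+
+```
+cd ~/myconfigs
+stow -D msmtp-elastic     # 取消当前 msmtp 配置的符号链接
+stow -S msmtp-personal    # 链接另一套 msmtp 配置
+```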
+
+有时我需要给某个配置目录添加文件,为此可以用 `-R` 选项来“重新链接”。例如,我喜欢在图形化 Vim 中使用一种与控制台不同的特定字体。除了标准的 `.vimrc` 文件,`.gvimrc` 文件能让我设置只针对图形化版本的选项。当我第一次设置它时,我把 `~/.gvimrc` 移动到 `~/myconfigs/vim` 中,然后运行 `stow -R vim`,它会取消链接并重新链接该目录中的所有内容。
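+
+按照上面的描述,这一步大致就是:
+
+```
+mv ~/.gvimrc ~/myconfigs/vim/     # 把新文件移入配置目录
+cd ~/myconfigs
+stow -R vim                       # 取消并重新链接 vim 目录中的所有文件
+```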
+
+Stow 让我使用一个简单的命令行在多种配置之间切换,并且,结合 Syncthing,我可以确保无论我身在何处或在哪里进行更改,我都有我喜欢的工具的设置。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/configuration-management-stow
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb (A person programming)
+[2]: https://syncthing.net/
+[3]: https://www.gnu.org/software/stow/
+[4]: https://opensource.com/sites/default/files/uploads/productivity_2-1.png (Stow help screen)
+[5]: https://linux.cn/article-11793-1.html
+[6]: https://opensource.com/sites/default/files/uploads/productivity_2-2.png (myconfigs directory)
+[7]: https://opensource.com/sites/default/files/uploads/productivity_2-3.png (Unstow one, stow the other)
diff --git a/published/202001/20200113 Keep your email in sync with OfflineIMAP.md b/published/202001/20200113 Keep your email in sync with OfflineIMAP.md
new file mode 100644
index 0000000000..64ecd19ab6
--- /dev/null
+++ b/published/202001/20200113 Keep your email in sync with OfflineIMAP.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11804-1.html)
+[#]: subject: (Keep your email in sync with OfflineIMAP)
+[#]: via: (https://opensource.com/article/20/1/sync-email-offlineimap)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 OfflineIMAP 同步邮件
+======
+
+> 将邮件镜像保存到本地是整理消息的第一步。在我们的 20 个使用开源提升生产力的系列的第三篇文章中了解该如何做。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/20/235324nbgfyuwl98syowta.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 OfflineIMAP 在本地同步你的邮件
+
+我与邮件之间存在爱恨交织的关系。我喜欢它让我与世界各地的人交流的方式。但是,像你们中的许多人一样,我收到过很多邮件,许多是来自邮件列表的,但也有很多垃圾邮件、广告等。这些积累了很多。
+
+![The OfflineIMAP "blinkenlights" UI][2]
+
+我尝试过的大多数工具(除了大型邮件服务商外)都可以很好地处理大量邮件,它们都有一个共同点:它们都依赖于以 [Maildir][3] 格式存储的本地邮件副本。这其中最有用的是 [OfflineIMAP][4]。OfflineIMAP 是将 IMAP 邮箱镜像到本地 Maildir 文件夹树的 Python 脚本。我用它来创建邮件的本地副本并使其保持同步。大多数 Linux 发行版都包含它,并且可以通过 Python 的 pip 包管理器获得。
+
+示例的最小配置文件是一个很好的模板。首先将其复制到 `~/.offlineimaprc`。我的看起来像这样:
+
+```
+[general]
+accounts = LocalSync
+ui=Quiet
+autorefresh=30
+
+[Account LocalSync]
+localrepository = LocalMail
+remoterepository = MirrorIMAP
+
+[Repository MirrorIMAP]
+type = IMAP
+remotehost = my.mail.server
+remoteuser = myusername
+remotepass = mypassword
+auth_mechanisms = LOGIN
+createfolder = true
+ssl = yes
+sslcacertfile = OS-DEFAULT
+
+[Repository LocalMail]
+type = Maildir
+localfolders = ~/Maildir
+sep = .
+createfolder = true
+```
+
+我的配置要做的是定义两个仓库:远程 IMAP 服务器和本地 Maildir 文件夹。还有一个**帐户**,告诉 OfflineIMAP 运行时要同步什么。你可以定义链接到不同仓库的多个帐户。除了本地复制外,这还允许你从一台 IMAP 服务器复制到另一台作为备份。
+
+如果你有很多邮件,那么首次运行 OfflineIMAP 将花费一些时间。但是完成后,后续运行会花*少得多*的时间。你也可以将 OfflineIMAP 作为 cron 任务(我的偏好)或作为守护程序运行,在仓库之间不断进行同步。其文档涵盖了所有这些内容以及 Gmail 等高级配置选项。
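+
+如果像作者偏好的那样用 cron 任务来运行,一个示意性的 crontab 条目大致如下(同步间隔和程序路径均为假设,请按你的环境调整):
+
+```
+# 每 15 分钟同步一次上面配置的 LocalSync 账户,-o 表示只运行一轮
+*/15 * * * * /usr/bin/offlineimap -o -a LocalSync -u quiet
+```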
+
+现在,我的邮件已在本地复制,并有多种工具用来加快搜索、归档和管理邮件的速度。这些我明天再说。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/sync-email-offlineimap
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/newsletter_email_mail_web_browser.jpg?itok=Lo91H9UH (email or newsletters via inbox and browser)
+[2]: https://opensource.com/sites/default/files/uploads/productivity_3-1.png (The OfflineIMAP "blinkenlights" UI)
+[3]: https://en.wikipedia.org/wiki/Maildir
+[4]: http://www.offlineimap.org/
diff --git a/published/202001/20200113 setV- A Bash function to maintain Python virtual environments.md b/published/202001/20200113 setV- A Bash function to maintain Python virtual environments.md
new file mode 100644
index 0000000000..45e4beaf52
--- /dev/null
+++ b/published/202001/20200113 setV- A Bash function to maintain Python virtual environments.md
@@ -0,0 +1,296 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11800-1.html)
+[#]: subject: (setV: A Bash function to maintain Python virtual environments)
+[#]: via: (https://opensource.com/article/20/1/setv-bash-function)
+[#]: author: (Sachin Patil https://opensource.com/users/psachin)
+
+setV:一个管理 Python 虚拟环境的 Bash 函数
+======
+
+> 了解一下 setV,它是一个轻量级的 Python 虚拟环境管理器,是 virtualenvwrapper 的替代产品。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/19/234306tvvg5ffwakrzr5vv.jpg)
+
+这一年多来,我的 [bash_scripts][3] 项目中悄悄隐藏着 [setV][2],但现在是时候公开它了。setV 是一个 Bash 函数,我用它来代替 [virtualenvwrapper][4]。它提供的基本功能使你能够执行以下操作:
+
+* 默认使用 Python 3
+* 创建一个新的虚拟环境
+* 使用带有 `-p`(或 `--python`)的自定义 Python 路径来创建新的虚拟环境
+* 删除现有的虚拟环境
+* 列出所有现有的虚拟环境
+* 使用制表符补全(以防你忘记虚拟环境名称)
+
+### 安装
+
+要安装 setV,请下载该脚本:
+
+```
+curl -o install.sh https://gitlab.com/psachin/setV/raw/master/install.sh
+```
+
+审核一下脚本,然后运行它:
+
+```
+sh ./install.sh
+```
+
+当安装 setV 时,安装脚本会要求你引入(`source`)一下 `~/.bashrc` 或 `~/.bash_profile` 的配置,根据你的喜好选择一个。
+
+### 用法
+
+基本的命令格式是 `setv`。
+
+#### 创建虚拟环境
+
+```
+setv --new rango # setv -n rango
+
+# 或使用定制的 Python 路径
+setv --new --python /opt/python/python3 rango # setv -n -p /opt/python/python3 rango
+```
+
+#### 激活已有的虚拟环境
+
+```
+setv VIRTUAL_ENVIRONMENT_NAME
+```
+
+```
+# 示例
+setv rango
+```
+
+#### 列出所有的虚拟环境
+
+```
+setv --list
+# 或
+setv [TAB] [TAB]
+```
+
+#### 删除虚拟环境
+
+```
+setv --delete rango
+```
+
+#### 切换到另外一个虚拟环境
+
+```
+# 假设你现在在 'rango',切换到 'tango'
+setv tango
+```
+
+#### 制表符补完
+
+如果你不完全记得虚拟环境的名称,则 Bash 式的制表符补全也可以适用于虚拟环境名称。
+
+### 参与其中
+
+setV 以 GNU [GPLv3][5] 许可证开源,欢迎贡献。要了解更多信息,请访问其 GitLab 存储库中 [README][6] 的贡献部分。
+
+### setV 脚本
+
+```
+#!/usr/bin/env bash
+# setV - A Lightweight Python virtual environment manager.
+# Author: Sachin (psachin)
+# Author's URL: https://psachin.gitlab.io/about
+#
+# License: GNU GPL v3, See LICENSE file
+#
+# Configure(Optional):
+# Set `SETV_VIRTUAL_DIR_PATH` value to your virtual environments
+# directory-path. By default it is set to '~/virtualenvs/'
+#
+# Usage:
+# Manual install: Added below line to your .bashrc or any local rc script():
+# ---
+# source /path/to/virtual.sh
+# ---
+#
+# Now you can 'activate' the virtual environment by typing
+# $ setv
+#
+# For example:
+# $ setv rango
+#
+# or type:
+# setv [TAB] [TAB] (to list all virtual envs)
+#
+# To list all your virtual environments:
+# $ setv --list
+#
+# To create new virtual environment:
+# $ setv --new new_virtualenv_name
+#
+# To delete existing virtual environment:
+# $ setv --delete existing_virtualenv_name
+#
+# To deactivate, type:
+# $ deactivate
+
+# Path to virtual environment directory
+SETV_VIRTUAL_DIR_PATH="$HOME/virtualenvs/"
+# Default python version to use. This decides whether to use `virtualenv` or `python3 -m venv`
+SETV_PYTHON_VERSION=3 # Defaults to Python3
+SETV_PY_PATH=$(which python${SETV_PYTHON_VERSION})
+
+function _setvcomplete_()
+{
+ # Bash-autocompletion.
+ # This ensures Tab-auto-completions work for virtual environment names.
+ local cmd="${1##*/}" # to handle command(s).
+ # Not necessary as such. 'setv' is the only command
+
+ local word=${COMP_WORDS[COMP_CWORD]} # Words thats being completed
+ local xpat='${word}' # Filter pattern. Include
+ # only words in variable '$names'
+ local names=$(ls -l "${SETV_VIRTUAL_DIR_PATH}" | egrep '^d' | awk -F " " '{print $NF}') # Virtual environment names
+
+ COMPREPLY=($(compgen -W "$names" -X "$xpat" -- "$word")) # compgen generates the results
+}
+
+function _setv_help_() {
+ # Echo help/usage message
+ echo "Usage: setv [OPTIONS] [NAME]"
+ echo Positional argument:
+ echo -e "NAME Activate virtual env."
+ echo Optional arguments:
+ echo -e "-l, --list List all Virtual Envs."
+ echo -e "-n, --new NAME Create a new Python Virtual Env."
+ echo -e "-d, --delete NAME Delete existing Python Virtual Env."
+ echo -e "-p, --python PATH Python binary path."
+}
+
+function _setv_custom_python_path()
+{
+ if [ -f "${1}" ];
+ then
+ if [ "`expr $1 : '.*python\([2,3]\)'`" = "3" ];
+ then
+ SETV_PYTHON_VERSION=3
+ else
+ SETV_PYTHON_VERSION=2
+ fi
+ SETV_PY_PATH=${1}
+ _setv_create $2
+ else
+ echo "Error: Path ${1} does not exist!"
+ fi
+}
+
+function _setv_create()
+{
+ # Creates new virtual environment if ran with -n|--new flag
+ if [ -z ${1} ];
+ then
+ echo "You need to pass virtual environment name"
+ _setv_help_
+ else
+ echo "Creating new virtual environment with the name: $1"
+
+ if [ ${SETV_PYTHON_VERSION} -eq 3 ];
+ then
+ ${SETV_PY_PATH} -m venv ${SETV_VIRTUAL_DIR_PATH}${1}
+ else
+ virtualenv -p ${SETV_PY_PATH} ${SETV_VIRTUAL_DIR_PATH}${1}
+ fi
+
+ echo "You can now activate the Python virtual environment by typing: setv ${1}"
+ fi
+}
+
+function _setv_delete()
+{
+ # Deletes virtual environment if ran with -d|--delete flag
+ # TODO: Refactor
+ if [ -z ${1} ];
+ then
+ echo "You need to pass virtual environment name"
+ _setv_help_
+ else
+ if [ -d ${SETV_VIRTUAL_DIR_PATH}${1} ];
+ then
+ read -p "Really delete this virtual environment(Y/N)? " yes_no
+ case $yes_no in
+ Y|y) rm -rvf ${SETV_VIRTUAL_DIR_PATH}${1};;
+ N|n) echo "Leaving the virtual environment as it is.";;
+ *) echo "You need to enter either Y/y or N/n"
+ esac
+ else
+ echo "Error: No virtual environment found by the name: ${1}"
+ fi
+ fi
+}
+
+function _setv_list()
+{
+ # Lists all virtual environments if ran with -l|--list flag
+ echo -e "List of virtual environments you have under ${SETV_VIRTUAL_DIR_PATH}:\n"
+ for virt in $(ls -l "${SETV_VIRTUAL_DIR_PATH}" | egrep '^d' | awk -F " " '{print $NF}')
+ do
+ echo ${virt}
+ done
+}
+
+function setv() {
+ # Main function
+ if [ $# -eq 0 ];
+ then
+ _setv_help_
+ elif [ $# -le 3 ];
+ then
+ case "${1}" in
+ -n|--new) _setv_create ${2};;
+ -d|--delete) _setv_delete ${2};;
+ -l|--list) _setv_list;;
+ *) if [ -d ${SETV_VIRTUAL_DIR_PATH}${1} ];
+ then
+ # Activate the virtual environment
+ source ${SETV_VIRTUAL_DIR_PATH}${1}/bin/activate
+ else
+ # Else throw an error message
+ echo "Sorry, you don't have any virtual environment with the name: ${1}"
+ _setv_help_
+ fi
+ ;;
+ esac
+ elif [ $# -le 5 ];
+ then
+ case "${2}" in
+ -p|--python) _setv_custom_python_path ${3} ${4};;
+ *) _setv_help_;;
+ esac
+ fi
+}
+
+# Calls bash-complete. The compgen command accepts most of the same
+# options that complete does but it generates results rather than just
+# storing the rules for future use.
+complete -F _setvcomplete_ setv
+```
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/setv-bash-function
+
+作者:[Sachin Patil][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/psachin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
+[2]: https://gitlab.com/psachin/setV
+[3]: https://github.com/psachin/bash_scripts
+[4]: https://virtualenvwrapper.readthedocs.org/
+[5]: https://gitlab.com/psachin/setV/blob/master/LICENSE
+[6]: https://gitlab.com/psachin/setV/blob/master/ReadMe.org
diff --git a/published/202001/20200114 Organize your email with Notmuch.md b/published/202001/20200114 Organize your email with Notmuch.md
new file mode 100644
index 0000000000..dc9a67f7d6
--- /dev/null
+++ b/published/202001/20200114 Organize your email with Notmuch.md
@@ -0,0 +1,109 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11807-1.html)
+[#]: subject: (Organize your email with Notmuch)
+[#]: via: (https://opensource.com/article/20/1/organize-email-notmuch)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 Notmuch 组织你的邮件
+======
+
+> Notmuch 可以索引、标记和排序电子邮件。在我们的 20 个使用开源提升生产力的系列的第四篇文章中了解该如何使用它。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/22/112231xg5dgv6f6g5a1iv1.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 用 Notmuch 为你的邮件建立索引
+
+昨天,我谈到了如何使用 OfflineIMAP [将我的邮件同步][2]到本地计算机。今天,我将讨论如何在阅读之前预处理所有邮件。
+
+![Notmuch][3]
+
+[Maildir][4] 可能是最有用的邮件存储格式之一。有很多工具可以帮助你管理邮件。我经常使用一个名为 [Notmuch][5] 的小程序,它能索引、标记和搜索邮件。Notmuch 配合其他几个程序一起使用可以使处理大量邮件更加容易。
+
+大多数 Linux 发行版都包含 Notmuch,你也可以在 MacOS 上获得它。Windows 用户可以通过 Linux 的 Windows 子系统([WSL][6])访问它,但可能需要进行一些其他调整。
+
+![Notmuch's first run][7]
+
+Notmuch 首次运行时,它将询问你一些问题,并在家目录中创建 `.notmuch-config` 文件。接下来,运行 `notmuch new` 来索引并标记所有邮件。你可以使用 `notmuch search tag:new` 进行验证,它会找到所有带有 `new` 标签的消息。这可能会有很多邮件,因为 Notmuch 使用 `new` 标签来指示新邮件,因此你需要对其进行清理。
+
+运行 `notmuch search tag:unread` 来查找未读消息,这会减少很多邮件。要从你已阅读的消息中删除 `new` 标签,请运行 `notmuch tag -new not tag:unread`,它将搜索所有没有 `unread` 标签的消息,并从其中删除 `new` 标签。现在,当你运行 `notmuch search tag:new` 时,它将仅显示未读邮件。
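+
+把上面这几步串起来,初次整理大致是这样的命令顺序:
+
+```
+notmuch new                        # 索引新邮件,并打上 new 标签
+notmuch search tag:unread          # 查看未读邮件
+notmuch tag -new not tag:unread    # 从已读邮件上去掉 new 标签
+notmuch search tag:new             # 现在只会显示未读的新邮件
+```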
+
+但是,批量标记消息可能更有用,因为在每次运行时手动更新标记可能非常繁琐。`--batch` 命令行选项告诉 Notmuch 读取多行命令并执行它们。还有一个 `--input=filename` 选项,该选项从文件中读取命令并应用它们。我有一个名为 `tagmail.notmuch` 的文件,用于给“新”邮件添加标签;它看起来像这样:
+
+```
+# Manage sent, spam, and trash folders
+-unread -new folder:Trash
+-unread -new folder:Spam
+-unread -new folder:Sent
+
+# Note mail sent specifically to me (excluding bug mail)
++to-me to:kevin at sonney.com and tag:new and not tag:to-me
+
+# And note all mail sent from me
++sent from:kevin at sonney.com and tag:new and not tag:sent
+
+# Remove the new tag from messages
+-new tag:new
+```
+
+我可以在运行 `notmuch new` 后运行 `notmuch tag --input=tagmail.notmuch` 批量处理我的邮件,之后我也可以搜索这些标签。
+
+Notmuch 还支持 `pre-new` 和 `post-new` 钩子。这些脚本存放在 `Maildir/.notmuch/hooks` 中,它们定义了在使用 `notmuch new` 索引新邮件之前(`pre-new`)和之后(`post-new`)要做的操作。在昨天的文章中,我谈到了使用 [OfflineIMAP][8] 同步来自 IMAP 服务器的邮件。从 `pre-new` 钩子运行它非常容易:
+
+
+```
+#!/bin/bash
+# Remove the new tag from messages that are still tagged as new
+notmuch tag -new tag:new
+
+# Sync mail messages
+offlineimap -a LocalSync -u quiet
+```
+
+你还可以使用可以操作 Notmuch 数据库的 Python 应用 [afew][9],来为你标记*邮件列表*和*垃圾邮件*。你可以用类似的方法在 `post-new` 钩子中使用 `afew`:
+
+```
+#!/bin/bash
+# tag with my custom tags
+notmuch tag --input=~/tagmail.notmuch
+
+# Run afew to tag new mail
+afew -t -n
+```
+
+我建议你在使用 `afew` 标记邮件时,不要使用 `[ListMailsFilter]`,因为某些邮件处理程序会在邮件中添加模糊或者彻头彻尾是垃圾的列表标头(我说的就是你 Google)。
+
+![alot email client][10]
+
+此时,任何支持 Notmuch 或 Maildir 的邮件阅读器都可以读取我的邮件。有时,我会使用 [alot][11](一个 Notmuch 特定的客户端)在控制台中阅读邮件,但是它不像其他邮件阅读器那么美观。
+
+在接下来的几天,我将向你展示其他一些邮件客户端,它们可能会与你在使用的工具集成在一起。同时,请查看可与 Maildir 邮箱一起使用的其他工具。你可能会发现我没发现的好东西。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/organize-email-notmuch
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
+[2]: https://linux.cn/article-11804-1.html
+[3]: https://opensource.com/sites/default/files/uploads/productivity_4-1.png (Notmuch)
+[4]: https://en.wikipedia.org/wiki/Maildir
+[5]: https://notmuchmail.org/
+[6]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
+[7]: https://opensource.com/sites/default/files/uploads/productivity_4-2.png (Notmuch's first run)
+[8]: http://www.offlineimap.org/
+[9]: https://afew.readthedocs.io/en/latest/index.html
+[10]: https://opensource.com/sites/default/files/uploads/productivity_4-3.png (alot email client)
+[11]: https://github.com/pazz/alot
diff --git a/published/202001/20200115 6 handy Bash scripts for Git.md b/published/202001/20200115 6 handy Bash scripts for Git.md
new file mode 100644
index 0000000000..8a43bb3220
--- /dev/null
+++ b/published/202001/20200115 6 handy Bash scripts for Git.md
@@ -0,0 +1,510 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11797-1.html)
+[#]: subject: (6 handy Bash scripts for Git)
+[#]: via: (https://opensource.com/article/20/1/bash-scripts-git)
+[#]: author: (Bob Peterson https://opensource.com/users/bobpeterson)
+
+6 个方便的 Git 脚本
+======
+
+> 当使用 Git 存储库时,这六个 Bash 脚本将使你的生活更轻松。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/18/231713jegbk8fyek798gxb.jpg)
+
+我编写了许多 Bash 脚本,这些脚本使我在使用 Git 存储库时工作更加轻松。我的许多同事说没有必要:我所做的一切都可以用 Git 命令完成。虽然这可能是正确的,但我发现脚本远比尝试找出适当的 Git 命令来执行我想要的操作更加方便。
+
+### 1、gitlog
+
+`gitlog` 打印针对 master 分支的当前补丁的简短列表。它从最旧到最新打印它们,并显示作者和描述,其中 `H` 代表 `HEAD`,`^` 代表 `HEAD^`,`2` 代表 `HEAD~2`,依此类推。例如:
+
+```
+$ gitlog
+-----------------------[ recovery25 ]-----------------------
+(snip)
+11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
+10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
+ 9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
+ 8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
+ 7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
+ 6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
+ 5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
+ 4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
+ 3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
+ 2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
+ ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
+ H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
+```
+
+如果我想查看其他分支上有哪些补丁,可以指定一个替代分支:
+
+```
+$ gitlog recovery24
+```
+
+### 2、gitlog.id
+
+`gitlog.id` 只是打印出补丁的 SHA1 ID:
+
+```
+$ gitlog.id
+-----------------------[ recovery25 ]-----------------------
+56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9
+```
+
+同样,它假定是当前分支,但是如果需要,我可以指定其他分支。
+
+### 3、gitlog.id2
+
+`gitlog.id2` 与 `gitlog.id` 相同,但顶部没有显示分支的行。这对于从一个分支挑选所有补丁到当前分支很方便:
+
+```
+$ # 创建一个新分支
+$ git branch --track origin/master
+$ # 检出刚刚创建的新分支
+$ git checkout recovery26
+$ # 从旧的分支挑选所有补丁到新分支
+$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ;done
+```
+
+### 4、gitlog.grep
+
+`gitlog.grep` 会在该补丁集合中寻找一个字符串。例如,如果我发现一个错误并想修复引用了函数 `inode_go_sync` 的补丁,我可以简单地执行以下操作:
+
+```
+$ gitlog.grep inode_go_sync
+-----------------------[ recovery25 - 50 patches ]-----------------------
+(snip)
+11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
+10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
+ 9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
+152:-static void inode_go_sync(struct gfs2_glock *gl)
+153:+static int inode_go_sync(struct gfs2_glock *gl)
+163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
+ 8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
+ 7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
+ 6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
+ 5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
+ 4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
+ 3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
+ 2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
+ ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
+ H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
+```
+
+因此,现在我知道补丁 `HEAD~9` 是需要修复的补丁。我使用 `git rebase -i HEAD~10` 编辑补丁 9,`git commit -a --amend`,然后 `git rebase --continue` 以进行必要的调整。
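+
+这个修补流程大致如下(以上文的 `HEAD~9` 为例):
+
+```
+git rebase -i HEAD~10     # 在交互式列表中把要修复的那个补丁标记为 edit
+# ……修改代码……
+git commit -a --amend     # 把修改合并进该补丁
+git rebase --continue     # 完成变基
+```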
+
+### 5、gitbranchcmp3
+
+`gitbranchcmp3` 使我可以将当前分支与另一个分支进行比较,因此我可以将较旧版本的补丁与我的较新版本进行比较,并快速查看已更改和未更改的内容。它生成一个比较脚本(使用了 KDE 工具 [Kompare][2],该工具也可在 GNOME3 上使用)以比较不太相同的补丁。如果除行号外没有其他差异,则打印 `[SAME]`。如果仅存在注释差异,则打印 `[same]`(小写)。例如:
+
+```
+$ gitbranchcmp3 recovery24
+Branch recovery24 has 47 patches
+Branch recovery25 has 50 patches
+
+(snip)
+38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
+39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
+40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
+41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
+42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
+43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
+44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
+45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
+46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
+47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
+48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
+49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time
+
+Missing from recovery25:
+The missing:
+Compare script generated at: /tmp/compare_mismatches.sh
+```
+
+### 6、gitlog.find
+
+最后,我有一个 `gitlog.find` 脚本,可以帮助我识别补丁程序的上游版本在哪里以及每个补丁的当前状态。它通过匹配补丁说明来实现。它还会生成一个比较脚本(再次使用了 Kompare),以将当前补丁与上游对应补丁进行比较:
+
+```
+$ gitlog.find
+-----------------------[ recovery25 - 50 patches ]-----------------------
+(snip)
+11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
+lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
+10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
+fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
+ 9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
+lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
+ 8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
+ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
+ 7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
+ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
+ 6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
+fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
+ 5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
+fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
+ 4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
+fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
+ 3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
+ Not found upstream
+ 2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
+ Not found upstream
+ ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
+ Not found upstream
+ H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
+ Not found upstream
+Compare script generated: /tmp/compare_upstream.sh
+```
+
+补丁显示为两行,第一行是你当前的修补程序,然后是相应的上游补丁,以及 2 个字符的缩写,以指示其上游状态:
+
+* `lo` 表示补丁仅在本地(`local`)上游 Git 存储库中(即尚未推送到上游)。
+* `ms` 表示补丁位于 Linus Torvalds 的主(`master`)分支中。
+* `fn` 意味着补丁被推送到我的 “for-next” 开发分支,用于下一个上游合并窗口。
+
+我的一些脚本是根据我平常使用 Git 的方式做出假设的。例如,在搜索上游补丁时,它使用的是我熟知的那几个 Git 树的位置。因此,你需要调整或改进它们以适合你的环境。`gitlog.find` 脚本旨在仅定位 [GFS2][3] 和 [DLM][4] 补丁,因此,除非你是 GFS2 开发人员,否则你需要针对你感兴趣的组件对其进行自定义。
+
+### 源代码
+
+以下是这些脚本的源代码。
+
+#### 1、gitlog
+
+```
+#!/bin/bash
+branch=$1
+
+if test "x$branch" = x; then
+ branch=`git branch -a | grep "*" | cut -d ' ' -f2`
+fi
+
+patches=0
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
+for i in $LIST; do patches=$(echo $patches + 1 | bc);done
+
+if [[ $branch =~ .*for-next.* ]]
+then
+ start=HEAD
+# start=origin/for-next
+else
+ start=origin/master
+fi
+
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+/usr/bin/echo "-----------------------[" $branch "]-----------------------"
+patches=$(echo $patches - 1 | bc);
+for i in $LIST; do
+ if [ $patches -eq 1 ]; then
+ cnt=" ^"
+ elif [ $patches -eq 0 ]; then
+ cnt=" H"
+ else
+ if [ $patches -lt 10 ]; then
+ cnt=" $patches"
+ else
+ cnt="$patches"
+ fi
+ fi
+ /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
+ patches=$(echo $patches - 1 | bc)
+done
+#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
+#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch
+```
+
+#### 2、gitlog.id
+
+```
+#!/bin/bash
+branch=$1
+
+if test "x$branch" = x; then
+ branch=`git branch -a | grep "*" | cut -d ' ' -f2`
+fi
+
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+/usr/bin/echo "-----------------------[" $branch "]-----------------------"
+git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
+```
+
+#### 3、gitlog.id2
+
+```
+#!/bin/bash
+branch=$1
+
+if test "x$branch" = x; then
+ branch=`git branch -a | grep "*" | cut -d ' ' -f2`
+fi
+
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
+```
+
+#### 4、gitlog.grep
+
+```
+#!/bin/bash
+param1=$1
+param2=$2
+
+if test "x$param2" = x; then
+ branch=`git branch -a | grep "*" | cut -d ' ' -f2`
+ string=$param1
+else
+ branch=$param1
+ string=$param2
+fi
+
+patches=0
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
+for i in $LIST; do patches=$(echo $patches + 1 | bc);done
+/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
+patches=$(echo $patches - 1 | bc);
+for i in $LIST; do
+ if [ $patches -eq 1 ]; then
+ cnt=" ^"
+ elif [ $patches -eq 0 ]; then
+ cnt=" H"
+ else
+ if [ $patches -lt 10 ]; then
+ cnt=" $patches"
+ else
+ cnt="$patches"
+ fi
+ fi
+ /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
+ /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
+ patches=$(echo $patches - 1 | bc)
+done
+```
+
+#### 5、gitbranchcmp3
+
+```
+#!/bin/bash
+#
+# gitbranchcmp3 <oldbranch> [<newbranch>]
+#
+oldbranch=$1
+newbranch=$2
+script=/tmp/compare_mismatches.sh
+
+/usr/bin/rm -f $script
+echo "#!/bin/bash" > $script
+/usr/bin/chmod 755 $script
+echo "# Generated by gitbranchcmp3.sh" >> $script
+echo "# Run this script to compare the mismatched patches" >> $script
+echo " " >> $script
+echo "function compare_them()" >> $script
+echo "{" >> $script
+echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
+echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
+echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
+echo "}" >> $script
+echo " " >> $script
+
+if test "x$newbranch" = x; then
+ newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
+fi
+
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 |paste -s -d ' '`)
+declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 |paste -s -d ' '`)
+
+#echo "old: " $oldsha1s
+oldcount=${#oldsha1s[@]}
+echo "Branch $oldbranch has $oldcount patches"
+oldcount=$(echo $oldcount - 1 | bc)
+#for o in `seq 0 ${#oldsha1s[@]}`; do
+# echo -n ${oldsha1s[$o]} " "
+# desc=`git show $i | head -5 | tail -1|cut -b5-`
+#done
+
+#echo "new: " $newsha1s
+newcount=${#newsha1s[@]}
+echo "Branch $newbranch has $newcount patches"
+newcount=$(echo $newcount - 1 | bc)
+#for o in `seq 0 ${#newsha1s[@]}`; do
+# echo -n ${newsha1s[$o]} " "
+# desc=`git show $i | head -5 | tail -1|cut -b5-`
+#done
+echo
+
+for new in `seq 0 $newcount`; do
+ newsha=${newsha1s[$new]}
+ newdesc=`git show $newsha | head -5 | tail -1|cut -b5-`
+ oldsha=" "
+ same="[ ]"
+ for old in `seq 0 $oldcount`; do
+ if test "${oldsha1s[$old]}" = "match"; then
+ continue;
+ fi
+ olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
+ if test "$olddesc" = "$newdesc" ; then
+ oldsha=${oldsha1s[$old]}
+ #echo $oldsha
+ git show $oldsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
+ git show $newsha |tail -n +2 |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
+ diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
+ if [ $? -eq 0 ] ;then
+# No differences
+ same="[SAME]"
+ oldsha1s[$old]="match"
+ break
+ fi
+ git show $oldsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk1
+ git show $newsha |sed -n '/diff/,$p' |grep -v "index.*\.\." |grep -v "@@" > /tmp/gronk2
+ diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
+ if [ $? -eq 0 ] ;then
+# Differences in comments only
+ same="[same]"
+ oldsha1s[$old]="match"
+ break
+ fi
+ oldsha1s[$old]="match"
+ echo "compare_them $oldsha $newsha" >> $script
+ fi
+ done
+ echo "$new $oldsha $newsha $same $newdesc"
+done
+
+echo
+echo "Missing from $newbranch:"
+the_missing=""
+# Now run through the olds we haven't matched up
+for old in `seq 0 $oldcount`; do
+ if test ${oldsha1s[$old]} != "match"; then
+ olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1|cut -b5-`
+ echo "${oldsha1s[$old]} $olddesc"
+ the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
+ fi
+done
+
+echo "The missing: " $the_missing
+echo "Compare script generated at: $script"
+#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '
+```
+
+#### 6、gitlog.find
+
+```
+#!/bin/bash
+#
+# Find the upstream equivalent patch
+#
+# gitlog.find
+#
+cwd=$PWD
+param1=$1
+ubranch=$2
+patches=0
+script=/tmp/compare_upstream.sh
+echo "#!/bin/bash" > $script
+/usr/bin/chmod 755 $script
+echo "# Generated by gitbranchcmp3.sh" >> $script
+echo "# Run this script to compare the mismatched patches" >> $script
+echo " " >> $script
+echo "function compare_them()" >> $script
+echo "{" >> $script
+echo " cwd=$PWD" >> $script
+echo " git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
+echo " cd ~/linux.git/fs/gfs2" >> $script
+echo " git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
+echo " cd $cwd" >> $script
+echo " kompare /tmp/gronk1 /tmp/gronk2" >> $script
+echo "}" >> $script
+echo " " >> $script
+
+#echo "Gathering upstream patch info. Please wait."
+branch=`git branch -a | grep "*" | cut -d ' ' -f2`
+tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+
+cd ~/linux.git
+if test "X${ubranch}" = "X"; then
+ ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
+fi
+utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
+#
+# gather a list of gfs2 patches from master just in case we can't find it
+#
+#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
+git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
+# ms = in Linus's master
+git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm
+
+cd $cwd
+LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '`
+for i in $LIST; do patches=$(echo $patches + 1 | bc);done
+/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
+patches=$(echo $patches - 1 | bc);
+for i in $LIST; do
+ if [ $patches -eq 1 ]; then
+ cnt=" ^"
+ elif [ $patches -eq 0 ]; then
+ cnt=" H"
+ else
+ if [ $patches -lt 10 ]; then
+ cnt=" $patches"
+ else
+ cnt="$patches"
+ fi
+ fi
+ /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
+ desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
+ cd ~/linux.git
+ cmp=1
+ up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
+# lo = in local for-next
+ if test "X$up_eq" = "X"; then
+ up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
+# fn = in for-next for next merge window
+ if test "X$up_eq" = "X"; then
+ up_eq=`grep "$desc" /tmp/gronk.gfs2`
+ if test "X$up_eq" = "X"; then
+ up_eq=`grep "$desc" /tmp/gronk.dlm`
+ if test "X$up_eq" = "X"; then
+ up_eq=" Not found upstream"
+ cmp=0
+ fi
+ fi
+ fi
+ fi
+ echo "$up_eq"
+ if [ $cmp -eq 1 ] ; then
+ UP_SHA1=`echo $up_eq|cut -d' ' -f2`
+ echo "compare_them $UP_SHA1 $i" >> $script
+ fi
+ cd $cwd
+ patches=$(echo $patches - 1 | bc)
+done
+echo "Compare script generated: $script"
+```
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/bash-scripts-git
+
+作者:[Bob Peterson][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bobpeterson
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
+[2]: https://kde.org/applications/development/org.kde.kompare
+[3]: https://en.wikipedia.org/wiki/GFS2
+[4]: https://en.wikipedia.org/wiki/Distributed_lock_manager
diff --git a/published/202001/20200115 Organize and sync your calendar with khal and vdirsyncer.md b/published/202001/20200115 Organize and sync your calendar with khal and vdirsyncer.md
new file mode 100644
index 0000000000..5ddb647095
--- /dev/null
+++ b/published/202001/20200115 Organize and sync your calendar with khal and vdirsyncer.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11812-1.html)
+[#]: subject: (Organize and sync your calendar with khal and vdirsyncer)
+[#]: via: (https://opensource.com/article/20/1/open-source-calendar)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 khal 和 vdirsyncer 组织和同步你的日历
+======
+
+> 保存和共享日历可能会有点麻烦。在我们的 20 个使用开源提升生产力的系列的第五篇文章中了解如何让它更简单。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/23/150009wsr3d5ovg4g1vzws.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 khal 和 vdirsyncer 跟踪你的日程
+
+处理日历很*麻烦*,要找到好的工具总是很困难的。但是自从我去年将日历列为[我的“失败”之一][2]以来,我已经取得了一些进步。
+
+目前使用日历最困难的是一直需要以某种方式在线共享。两种最受欢迎的在线日历是 Google Calendar 和 Microsoft Outlook/Exchange。两者都在公司环境中大量使用,这意味着我的日历必须支持其中之一或者两个。
+
+![khal calendar][3]
+
+[Khal][4] 是基于控制台的日历,可以读取和写入 VCalendar 文件。它配置相当容易,但是不支持与其他应用同步。
+
+幸运的是,khal 能与 [vdirsyncer][5] 一起使用,它是一个漂亮的命令行程序,可以将在线日历(和联系人,我将在另一篇文章中讨论)同步到本地磁盘。是的,它还可以上传新事件。
+
+![vdirsyncer][6]
+
+Vdirsyncer 是个 Python 3 程序,可以通过软件包管理器或 `pip` 安装。它可以同步 CalDAV、VCalendar/iCalendar、Google Calendar 以及目录中的本地文件。由于我使用的是 Google Calendar,所以我将以它为例,尽管它并不是最容易设置的。
+
+在 vdirsyncer 中设置 Google Calendar 是[有文档参考的][7],所以这里我不再赘述。重要的是确保设置你的同步对,将 Google Calendar 设置为冲突解决的“赢家”。也就是说,如果同一事件有两个更新,那么需要知道哪个更新优先。类似这样做:
+
+```
+[general]
+status_path = "~/.calendars/status"
+
+[pair personal_sync]
+a = "personal"
+b = "personallocal"
+collections = ["from a", "from b"]
+conflict_resolution = "a wins"
+metadata = ["color"]
+
+[storage personal]
+type = "google_calendar"
+token_file = "~/.vdirsyncer/google_calendar_token"
+client_id = "google_client_id"
+client_secret = "google_client_secret"
+
+[storage personallocal]
+type = "filesystem"
+path = "~/.calendars/Personal"
+fileext = ".ics"
+```
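+
+配置好之后,大致可以这样运行(与本系列后面同步联系人时的用法相同):
+
+```
+vdirsyncer discover    # 发现远程与本地的日历集合
+vdirsyncer sync        # 把 Google Calendar 镜像到本地目录
+```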
+
+在第一次 vdirsyncer 同步之后,你将在存储路径中看到一系列目录。每个文件夹都将包含多个文件,日历中的每个事件都是一个文件。下一步是导入 khal。首先运行 `khal configure` 进行初始设置。
+
+![Configuring khal][8]
+
+现在,运行 `khal interactive` 将显示本文开头的界面。输入 `n` 将打开“新事件”对话框。这里要注意的一件事:日历的名称与 vdirsyncer 创建的目录一致,但是你可以更改 khal 配置文件来指定更清晰的名称。为不同日历中的条目设置不同的颜色,也可以帮助你分辨条目属于哪个日历:
+
+```
+[calendars]
+[[personal]]
+path = ~/.calendars/Personal/kevin@sonney.com/
+color = light magenta
+[[holidays]]
+path = ~/.calendars/Personal/cln2stbjc4hmgrrcd5i62ua0ctp6utbg5pr2sor1dhimsp31e8n6errfctm6abj3dtmg@virtual/
+color = light blue
+[[birthdays]]
+path = ~/.calendars/Personal/c5i68sj5edpm4rrfdchm6rreehgm6t3j81jn4rrle0n7cbj3c5m6arj4c5p2sprfdtjmop9ecdnmq@virtual/
+color = brown
+```
+
+现在,当你运行 `khal interactive` 时,每个日历将被着色以区别于其他日历,并且当你添加新条目时,它将有更具描述性的名称。
+
+![Adding a new calendar entry][9]
+
+设置有些麻烦,但是完成后,khal 和 vdirsyncer 可以一起为你提供一种简便的方法来管理日历事件并使它们与你的在线服务保持同步。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-calendar
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
+[2]: https://opensource.com/article/19/1/productivity-tool-wish-list
+[3]: https://opensource.com/sites/default/files/uploads/productivity_5-1.png (khal calendar)
+[4]: https://khal.readthedocs.io/en/v0.9.2/index.html
+[5]: https://github.com/pimutils/vdirsyncer
+[6]: https://opensource.com/sites/default/files/uploads/productivity_5-2.png (vdirsyncer)
+[7]: https://vdirsyncer.pimutils.org/en/stable/config.html#google
+[8]: https://opensource.com/sites/default/files/uploads/productivity_5-3.png (Configuring khal)
+[9]: https://opensource.com/sites/default/files/uploads/productivity_5-4.png
diff --git a/published/202001/20200115 Root User in Ubuntu- Important Things You Should Know.md b/published/202001/20200115 Root User in Ubuntu- Important Things You Should Know.md
new file mode 100644
index 0000000000..0abc566f4b
--- /dev/null
+++ b/published/202001/20200115 Root User in Ubuntu- Important Things You Should Know.md
@@ -0,0 +1,169 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11837-1.html)
+[#]: subject: (Root User in Ubuntu: Important Things You Should Know)
+[#]: via: (https://itsfoss.com/root-user-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Ubuntu 中的 root 用户:你应该知道的重要事情
+======
+
+![][5]
+
+当你刚开始使用 Linux 时,你将发现与 Windows 的很多不同。其中一个“不同的东西”是 root 用户的概念。
+
+在这个初学者系列中,我将解释几个关于 Ubuntu 的 root 用户的重要的东西。
+
+**请记住,尽管我正在从 Ubuntu 用户的角度编写这篇文章,它应该对大多数的 Linux 发行版也是有效的。**
+
+你将在这篇文章中学到下面的内容:
+
+* 为什么在 Ubuntu 中禁用 root 用户
+* 像 root 用户一样运行命令
+* 切换为 root 用户
+* 解锁 root 用户
+
+### 什么是 root 用户?为什么它在 Ubuntu 中被锁定?
+
+在 Linux 中,有一个称为 [root][6] 的超级用户。这是超级管理员账号,它可以做任何事以及使用系统的一切东西。它可以在你的 Linux 系统上访问任何文件和运行任何命令。
+
+能力越大,责任越大。root 用户给予你完全控制系统的能力,因此,它应该被谨慎地使用。root 用户可以访问系统文件,运行更改系统配置的命令。因此,一个错误的命令可能会破坏系统。
+
+这就是为什么 [Ubuntu][7] 和其它基于 Ubuntu 的发行版默认锁定 root 用户,以从意外的灾难中挽救你的原因。
+
+对于你的日常任务,像移动你家目录中的文件,从互联网下载文件,创建文档等等,你不需要拥有 root 权限。
+
+**打个比方来更好地理解它。假设你想要切一个水果,你可以使用一把厨房用刀。假设你想要砍一棵树,你就得使用一把锯子。现在,你可以使用锯子来切水果,但是那不明智,不是吗?**
+
+这是否意味着,你在 Ubuntu 中不能成为 root 用户,或者不能以 root 权限来使用系统呢?不,你仍然可以借助 `sudo` 获得 root 权限(在下一节中解释)。
+
+> **要点:** 对于常规任务来说,root 用户的权限太过强大了。这就是为什么不建议一直使用 root 用户。你仍然可以在需要时使用 root 权限来运行特定的命令。
+
+### 如何在 Ubuntu 中像 root 用户一样运行命令?
+
+![Image Credit: xkcd][8]
+
+对于一些特殊的系统任务来说,你将需要 root 权限。例如,如果你想[通过命令行更新 Ubuntu][9],你不能作为一个常规用户运行该命令,它将给出权限被拒绝的错误。
+
+```
+apt update
+Reading package lists... Done
+E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
+E: Unable to lock directory /var/lib/apt/lists/
+W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
+W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
+```
+
+那么,你如何像 root 用户一样运行命令?简单的答案是,在命令前添加 `sudo`,来像 root 用户一样运行。
+
+```
+sudo apt update
+```
+
+Ubuntu 和很多其它的 Linux 发行版使用一种被称为 `sudo` 的特殊程序机制。`sudo` 是一个用来控制以 root 用户(或其它用户)身份运行命令的访问权限的程序。
+
+实际上,`sudo` 是一个用途非常多的工具。它可以配置为允许某个用户像 root 用户一样运行所有命令,或者只运行其中的一些命令;也可以配置为无需密码即可使用 `sudo` 运行命令(下面有一个简单的示意)。这个主题内容比较丰富,也许我将在另一篇文章中详细讨论它。
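+
+作为示意,/etc/sudoers 中的规则大致长这样(其中 `deploy` 用户和放行的命令只是假设的例子,并非本文给出的配置;实际修改时请务必使用 `visudo`):
+
+```
+# 允许 abhishek 以任何用户身份运行任何命令(需要输入自己的密码)
+abhishek  ALL=(ALL:ALL) ALL
+# 只允许假设中的 deploy 用户免密以 root 身份运行 apt
+deploy    ALL=(root) NOPASSWD: /usr/bin/apt
+```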
+
+就目前而言,你应该知道[当你安装 Ubuntu 时][10],你必须创建一个用户账号。这个用户账号在你系统上以管理员身份来工作,并且按照 Ubuntu 中的默认 sudo 策略,它可以在你的系统上使用 root 用户权限来运行任何命令。
+
+`sudo` 的要点在于,运行 **`sudo` 并不需要 root 用户的密码,而是需要用户自己的密码**。
+
+并且这就是为什么当你使用 `sudo` 运行一个命令,会要求输入正在运行 `sudo` 命令的用户的密码的原因:
+
+```
+abhishek@itsfoss:~$ sudo apt update
+[sudo] password for abhishek:
+```
+
+正如你在上面的示例中所见,`abhishek` 在尝试使用 `sudo` 来运行 `apt update` 命令时,系统要求输入 `abhishek` 的密码。
+
+**如果你对 Linux 完全不熟悉,当你在终端中开始输入密码时,你可能会惊讶,在屏幕上什么都没有发生。这是十分正常的,因为作为默认的安全功能,在屏幕上什么都不会显示。甚至星号(`*`)都没有。输入你的密码并按回车键。**
+
+> **要点:**为在 Ubuntu 中像 root 用户一样运行命令,在命令前添加 `sudo`。 当被要求输入密码时,输入你的账户的密码。当你在屏幕上输入密码时,什么都看不到。请继续输入密码,并按回车键。
+
+### 如何在 Ubuntu 中成为 root 用户?
+
+你可以使用 `sudo` 来像 root 用户一样运行命令。但是,在某些情况下,你必须以 root 用户身份来运行一些命令,而你总是忘了在命令前添加 `sudo`,那么你可以临时切换为 root 用户。
+
+`sudo` 命令允许你来模拟一个 root 用户登录的 shell ,使用这个命令:
+
+```
+sudo -i
+```
+
+```
+abhishek@itsfoss:~$ sudo -i
+[sudo] password for abhishek:
+root@itsfoss:~# whoami
+root
+root@itsfoss:~#
+```
+
+你将注意到,当你切换为 root 用户时,shell 命令提示符从 `$`(美元符号)更改为 `#`(英镑符号)。我开个(拙劣的)玩笑,英镑比美元强大。
+
+**虽然我已经向你显示如何成为 root 用户,但是我必须警告你,你应该避免作为 root 用户使用系统。毕竟它有阻拦你使用 root 用户的原因。**
+
+另外一种临时切换为 root 用户的方法是使用 `su` 命令:
+
+```
+sudo su
+```
+
+如果你尝试使用不带 `sudo` 的 `su` 命令,你将遇到 “su: Authentication failure” 错误。
+
+你可以使用 `exit` 命令来恢复为正常用户。
+
+```
+exit
+```
+
+### 如何在 Ubuntu 中启用 root 用户?
+
+现在你知道,root 用户在基于 Ubuntu 发行版中是默认锁定的。
+
+Linux 给予你在系统上想做什么就做什么的自由。解锁 root 用户就是这些自由之一。
+
+如果出于某些原因,你决定启用 root 用户,你可以通过为其设置一个密码来做到:
+
+```
+sudo passwd root
+```
+
+再强调一次,不建议使用 root 用户,并且我也不鼓励你在桌面上这样做。如果你忘记了密码,你将不能再次[在 Ubuntu 中更改 root 用户密码][11]。(LCTT 译注:可以通过单用户模式修改。)
+
+你可以通过移除密码来再次锁定 root 用户:
+
+```
+sudo passwd -dl root
+```
+
+### 最后…
+
+我希望你现在对 root 概念理解得更好一点。如果你仍然有些关于它的困惑和问题,请在评论中让我知道。我将尝试回答你的问题,并且也可能更新这篇文章。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/root-user-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/root_user_ubuntu.png?ssl=1
+[6]: http://www.linfo.org/root.html
+[7]: https://ubuntu.com/
+[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/sudo_sandwich.png?ssl=1
+[9]: https://itsfoss.com/update-ubuntu/
+[10]: https://itsfoss.com/install-ubuntu/
+[11]: https://itsfoss.com/how-to-hack-ubuntu-password/
diff --git a/published/202001/20200115 Why everyone is talking about WebAssembly.md b/published/202001/20200115 Why everyone is talking about WebAssembly.md
new file mode 100644
index 0000000000..e8fdb04ce0
--- /dev/null
+++ b/published/202001/20200115 Why everyone is talking about WebAssembly.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11823-1.html)
+[#]: subject: (Why everyone is talking about WebAssembly)
+[#]: via: (https://opensource.com/article/20/1/webassembly)
+[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
+
+为什么每个人都在谈论 WebAssembly
+======
+
+> 了解有关在 Web 浏览器中运行任何代码的最新方法的更多信息。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/27/125343ch0hxdfbzibrihfn.jpg)
+
+如果你还没有听说过 [WebAssembly][2],那么你很快就会知道。这是业界最保密的秘密之一,但它无处不在。所有主流的浏览器都支持它,并且它也将在服务器端使用。它很快,它能用于游戏编程。这是主要的国际网络标准组织万维网联盟(W3C)的一个开放标准。
+
+你可能会说:“哇,这听起来像是我应该学习编程的东西!”你可能是对的,但也可能是错的。你不需要用 WebAssembly 编程。让我们花一些时间来学习这种通常被缩写为 “Wasm” 的技术。
+
+### 它从哪里来?
+
+大约十年前,人们越来越认识到,广泛使用的 JavaScript 不够快速,无法满足许多目的。JavaScript 无疑是成功和方便的。它可以在任何浏览器中运行,并启用了今天我们认为理所当然的动态网页类型。但这是一种高级语言,在设计时并没有考虑到计算密集型工作负载。
+
+然而,尽管负责主流 web 浏览器的工程师们对性能问题的看法大体一致,但他们对如何解决这个问题却意见不一。出现了两个阵营:谷歌开始了它的原生客户端(Native Client)项目,后来又推出了可移植原生客户端(Portable Native Client)变体,着重于允许用 C/C++ 编写的游戏和其它软件在 Chrome 的一个安全隔间中运行。与此同时,Mozilla 赢得了微软对 asm.js 的支持。该方法更新了浏览器,因此它可以非常快速地运行 JavaScript 指令的低级子集(有另一个项目可以将 C/C++ 代码转换为这些指令)。
+
+由于这两个阵营都没有得到广泛采用,各方在 2015 年同意围绕一种称为 WebAssembly 的新标准,以 asm.js 所采用的基本方法为基础,联合起来。[如 CNET 的 Stephen Shankland 当时所写][3],“在当今的 Web 上,浏览器的 JavaScript 将这些指令转换为机器代码。但是,通过 WebAssembly,程序员可以在此过程的早期阶段完成很多工作,从而生成介于两种状态之间的程序。这使浏览器摆脱了创建机器代码的繁琐工作,但也实现了 Web 的承诺 —— 该软件将在具有浏览器的任何设备上运行,而无需考虑基础硬件的细节。”
+
+在 2017 年,Mozilla 宣布了它的最小可行的产品(MVP),并使其脱离预览版阶段。到该年年底,所有主流的浏览器都采用了它。[2019 年 12 月][4],WebAssembly 工作组发布了三个 W3C 推荐的 WebAssembly 规范。
+
+WebAssembly 定义了一种可执行程序的可移植二进制代码格式、相应的文本汇编语言以及用于促进此类程序与其宿主环境之间的交互接口。WebAssembly 代码在低级虚拟机中运行,这个可运行于许多微处理器之上的虚拟机可模仿这些处理器的功能。通过即时(JIT)编译或解释,WebAssembly 引擎可以以近乎原生平台编译代码的速度执行。
+
+### 为什么现在感兴趣?
+
+当然,最近对 WebAssembly 感兴趣的部分原因是最初希望在浏览器中运行更多计算密集型代码。尤其是笔记本电脑用户,越来越多的时间都花在浏览器上(或者,对于 Chromebook 用户来说,基本上是所有时间)。这种趋势已经迫切需要消除在浏览器中运行各种应用程序的障碍。这些障碍之一通常是性能的某些方面,这正是 WebAssembly 及其前身最初旨在解决的问题。
+
+但是,WebAssembly 并不仅仅适用于浏览器。在 2019 年,[Mozilla 宣布了一个名为 WASI][5](WebAssembly 系统接口,WebAssembly System Interface)的项目,以标准化 WebAssembly 代码如何与浏览器上下文之外的操作系统进行交互。通过将浏览器对 WebAssembly 和 WASI 的支持结合在一起,编译后的二进制文件将能够以接近原生的速度,跨不同的设备和操作系统在浏览器内外运行。
+
+WebAssembly 的低开销使它立即就能在浏览器之外使用,但可以说这只是入场券;显然,还有其它不会引入性能瓶颈的运行应用程序的方法。为什么要专门使用 WebAssembly 呢?
+
+一个重要的原因是它的可移植性。如今,像 C++ 和 Rust 这样的广泛使用的编译语言可能是与 WebAssembly 关联最紧密的语言。但是,[各种各样的其他语言][6]可以编译为 WebAssembly 或拥有它们的 WebAssembly 虚拟机。此外,尽管 WebAssembly 为其执行环境[假定了某些先决条件][7],但它被设计为在各种操作系统和指令集体系结构上有效执行。因此,WebAssembly 代码可以使用多种语言编写,并可以在多种操作系统和处理器类型上运行。
+
+另一个 WebAssembly 优势源于这样一个事实:代码在虚拟机中运行。因此,每个 WebAssembly 模块都在沙盒环境中执行,并使用故障隔离技术将其与宿主机运行时环境分开。这意味着,对于其它部分而言,应用程序独立于其宿主机环境的其余部分执行,如果不调用适当的 API,就无法摆脱沙箱。
+
+### WebAssembly 现状
+
+这一切在实践中意味着什么?
+
+如今在运作中的 WebAssembly 的一个例子是 [Enarx][8]。
+
+Enarx 是一个提供硬件独立性的项目,可使用受信任的执行环境(Trusted Execution Environments,TEE)保护应用程序的安全。Enarx 使你可以安全地将编译为 WebAssembly 的应用程序始终交付到云服务商,并远程执行它。正如 Red Hat 安全工程师 [Nathaniel McCallum 指出的那样][9]:“我们这样做的方式是,我们将你的应用程序作为输入,并使用远程硬件执行认证过程。我们使用加密技术验证了远程硬件实际上是它声称的硬件。最终的结果不仅是我们对硬件的信任度提高了;它也是一个会话密钥,我们可以使用它将加密的代码和数据传递到我们刚刚要求加密验证的环境中。”
+
+另一个例子是 OPA,即开放策略代理(Open Policy Agent),它在 2019 年 11 月[宣布][10],你可以将其策略定义语言 Rego [编译][11]为 WebAssembly。Rego 允许你编写逻辑来搜索和组合来自不同来源的 JSON/YAML 数据,以询问诸如“是否允许使用此 API?”之类的问题。
+
+OPA 已被用于支持策略的软件,包括但不限于 Kubernetes。使用 OPA 之类的工具来简化策略[被认为是在各种不同环境中正确保护 Kubernetes 部署的重要步骤][12]。WebAssembly 的可移植性和内置的安全功能非常适合这些工具。
+
+我们的最后一个例子是 [Unity][13]。还记得我们在文章开头提到过 WebAssembly 可用于游戏吗?好吧,跨平台游戏引擎 Unity 是 WebAssembly 的较早采用者,它提供了 Wasm 在浏览器中运行的首批演示,并且自 2018 年 8 月以来,[已将 WebAssembly][14] 用作 Unity WebGL 构建目标的输出。
+
+这些只是 WebAssembly 已经开始产生影响的几种方式。你可以在 上查找更多信息并了解 Wasm 的所有最新信息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/webassembly
+
+作者:[Mike Bursell][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
+[2]: https://opensource.com/article/19/8/webassembly-speed-code-reuse
+[3]: https://www.cnet.com/news/the-secret-alliance-that-could-give-the-web-a-massive-speed-boost/
+[4]: https://www.w3.org/blog/news/archives/8123
+[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
+[6]: https://github.com/appcypher/awesome-wasm-langs
+[7]: https://webassembly.org/docs/portability/
+[8]: https://enarx.io
+[9]: https://enterprisersproject.com/article/2019/9/application-security-4-facts-confidential-computing-consortium
+[10]: https://blog.openpolicyagent.org/tagged/webassembly
+[11]: https://github.com/open-policy-agent/opa/tree/master/wasm
+[12]: https://enterprisersproject.com/article/2019/11/kubernetes-reality-check-3-takeaways-kubecon
+[13]: https://opensource.com/article/20/1/www.unity.com
+[14]: https://blogs.unity3d.com/2018/08/15/webassembly-is-here/
diff --git a/published/202001/20200116 3 open source tools to manage your contacts.md b/published/202001/20200116 3 open source tools to manage your contacts.md
new file mode 100644
index 0000000000..a1862377d8
--- /dev/null
+++ b/published/202001/20200116 3 open source tools to manage your contacts.md
@@ -0,0 +1,141 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11834-1.html)
+[#]: subject: (3 open source tools to manage your contacts)
+[#]: via: (https://opensource.com/article/20/1/sync-contacts-locally)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+用于联系人管理的三个开源工具
+======
+
+> 通过将联系人同步到本地从而更快访问它。在我们的 20 个使用开源提升生产力的系列的第六篇文章中了解该如何做。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/30/194811bbtt449zfr9zppb3.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 用于联系人管理的开源工具
+
+在本系列之前的文章中,我解释了如何在本地同步你的[邮件][2]和[日历][3]。希望这些加速了你访问邮件和日历。现在,我将讨论联系人同步,你可以给他们发送邮件和日历邀请。
+
+![abook][4]
+
+我目前收集了很多邮件地址。管理这些数据可能有点麻烦。有基于 Web 的服务,但它们不如本地副本快。
+
+几天前,我谈到了用于管理日历的 [vdirsyncer][5]。vdirsyncer 还使用 CardDAV 协议处理联系人。vdirsyncer 除了可以使用**文件系统**存储日历外,还支持通过 **google_contacts** 和 **carddav** 进行联系人同步,但 `fileext` 设置会被更改,因此你无法在日历文件中存储联系人。
+
+我在配置文件中添加了一段配置,用来从 Google 镜像我的联系人。设置它需要一些额外的步骤;从 Google 镜像完成后,配置非常简单:
+
+```
+[pair address_sync]
+a = "googlecard"
+b = "localcard"
+collections = ["from a", "from b"]
+conflict_resolution = "a wins"
+
+[storage googlecard]
+type = "google_contacts"
+token_file = "~/.vdirsyncer/google_token"
+client_id = "my_client_id"
+client_secret = "my_client_secret"
+
+[storage localcard]
+type = "filesystem"
+path = "~/.calendars/Addresses/"
+fileext = ".vcf"
+```
+
+现在,当我运行 `vdirsyncer discover` 时,它会找到我的 Google 联系人,并且 `vdirsyncer sync` 将它们复制到我的本地计算机。但同样,这只进行到一半。现在我想查看和使用联系人。需要 [khard][6] 和 [abook][7]。
+
+![khard search][8]
+
+为什么选择两个应用?因为每个都有它自己的使用场景,在这里,越多越好。khard 用于管理地址,类似于 [khal][9] 用于管理日历条目。如果你的发行版附带了旧版本,你可能需要通过 `pip` 安装最新版本。安装 khard 后,你需要创建 `~/.config/khard/khard.conf`,因为 khard 没有与 khal 那样漂亮的配置向导。我的看起来像这样:
+
+```
+[addressbooks]
+[[addresses]]
+path = ~/.calendars/Addresses/default/
+
+[general]
+debug = no
+default_action = list
+editor = vim, -i, NONE
+merge_editor = vimdiff
+
+[contact table]
+display = first_name
+group_by_addressbook = no
+reverse = no
+show_nicknames = yes
+show_uids = no
+sort = last_name
+localize_dates = yes
+
+[vcard]
+preferred_version = 3.0
+search_in_source_files = yes
+skip_unparsable = no
+```
+
+这会定义源通讯簿(并给它一个友好的名称)、要显示的内容和联系人编辑程序。运行 `khard list` 将列出所有条目,`khard list <搜索词>` 可以搜索特定条目。如果要添加或编辑条目,`add` 和 `edit` 命令将使用相同的基本模板打开配置的编辑器,唯一的区别是 `add` 命令的模板将为空。
+
+![editing in khard][11]
+
+abook 需要你导入和导出 VCF 文件,但它为查找提供了一些不错的功能。要将文件转换为 abook 格式,请先安装 abook 并创建 `~/.abook` 默认目录。然后让 abook 解析所有文件,并将它们放入 `~/.abook/addresses` 文件中:
+
+```
+apt install abook
+ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
+```
+
+现在运行 `abook`,你将有一个非常漂亮的 UI 来浏览、搜索和编辑条目。将它们导出到单个文件有点痛苦,所以我用 khard 进行大部分编辑,并有一个 cron 任务将它们导入到 abook 中。
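+
+下面是一个示意性的 crontab 条目(路径沿用上文的假设,仅作演示),它每小时把 vCard 文件重新转换一次,生成 abook 的通讯录文件:
+
+```
+# 每小时重建一次 ~/.abook/addresses(示例,假设路径与上文一致)
+0 * * * * ls ~/.calendars/Addresses/default/* | xargs cat | abook --convert --informat vcard --outformat abook > ~/.abook/addresses
+```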
+
+abook 还可在命令行中搜索,并有大量有关将其与邮件客户端集成的文档。例如,你可以在 `.config/alot/config` 文件中添加一些信息,从而在 [Notmuch][12] 的邮件客户端 [alot][13] 中使用 abook 查询联系人:
+
+```
+[accounts]
+ [[Personal]]
+ realname = Kevin Sonney
+ address = kevin@sonney.com
+ alias_regexp = kevin\+.+@sonney.com
+ gpg_key = 7BB612C9
+ sendmail_command = msmtp --account=Personal -t
+ # ~ expansion works
+ sent_box = maildir://~/Maildir/Sent
+ draft_box = maildir://~/Maildir/Drafts
+ [[[abook]]]
+ type = abook
+```
+
+这样你就可以在邮件和日历中快速查找联系人了!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/sync-contacts-locally
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
+[2]: https://linux.cn/article-11804-1.html
+[3]: https://linux.cn/article-11812-1.html
+[4]: https://opensource.com/sites/default/files/uploads/productivity_6-1.png (abook)
+[5]: https://github.com/pimutils/vdirsyncer
+[6]: https://github.com/scheibler/khard
+[7]: http://abook.sourceforge.net/
+[8]: https://opensource.com/sites/default/files/uploads/productivity_6-2.png (khard search)
+[9]: https://khal.readthedocs.io/en/v0.9.2/index.html
+[10]: mailto:some@email.adr
+[11]: https://opensource.com/sites/default/files/uploads/productivity_6-3.png (editing in khard)
+[12]: https://opensource.com/article/20/1/organize-email-notmuch
+[13]: https://github.com/pazz/alot
+[14]: mailto:kevin@sonney.com
+[15]: mailto:+.+@sonney.com
diff --git a/published/202001/20200117 C vs. Rust- Which to choose for programming hardware abstractions.md b/published/202001/20200117 C vs. Rust- Which to choose for programming hardware abstractions.md
new file mode 100644
index 0000000000..dcdbc98b5a
--- /dev/null
+++ b/published/202001/20200117 C vs. Rust- Which to choose for programming hardware abstractions.md
@@ -0,0 +1,477 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11825-1.html)
+[#]: subject: (C vs. Rust: Which to choose for programming hardware abstractions)
+[#]: via: (https://opensource.com/article/20/1/c-vs-rust-abstractions)
+[#]: author: (Dan Pittman https://opensource.com/users/dan-pittman)
+
+C 还是 Rust:选择哪个用于硬件抽象编程
+======
+
+> 在 Rust 中使用类型级编程可以使硬件抽象更加安全。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/28/123350k2w4mr3tp7crd4m2.jpg)
+
+Rust 是一种日益流行的编程语言,被视为硬件接口的最佳选择。通常会将其与 C 的抽象级别相比较。本文介绍了 Rust 如何通过多种方式处理按位运算,并提供了既安全又易于使用的解决方案。
+
+语言 | 诞生于 | 官方描述 | 总览
+---|---|---|---
+C | 1972 年 | C 是一种通用编程语言,具有表达式简约、现代的控制流和数据结构,以及丰富的运算符集等特点。(来源:[CS 基础知识] [2])| C 是(一种)命令式语言,旨在以相对简单的方式进行编译,从而提供对内存的低级访问。(来源:[W3schools.in] [3])
+Rust | 2010 年 | 一种赋予所有人构建可靠、高效的软件的能力的语言(来源:[Rust 网站] [4])| Rust 是一种专注于安全性(尤其是安全并发性)的多范式系统编程语言。(来源:[维基百科] [5])
+
+### 在 C 语言中对寄存器值进行按位运算
+
+在系统编程领域,你可能经常需要编写硬件驱动程序或直接与内存映射设备进行交互,而这些交互几乎总是通过硬件提供的内存映射寄存器来完成的。通常,你通过对某些固定宽度的数字类型进行按位运算来与这些寄存器进行交互。
+
+例如,假设一个 8 位寄存器具有三个字段:
+
+```
++----------+------+-----------+---------+
+| (unused) | Kind | Interrupt | Enabled |
++----------+------+-----------+---------+
+ 5-7 2-4 1 0
+```
+
+字段名称下方的数字规定了该字段在寄存器中使用的位。要启用该寄存器,你将写入值 `1`(以二进制表示为 `0000_0001`)来设置 `Enabled` 字段的位。但是,通常情况下,你也不想干扰寄存器中的现有配置。假设你要在设备上启用中断功能,但也要确保设备保持启用状态。为此,必须将 `Interrupt` 字段的值与 `Enabled` 字段的值结合起来。你可以通过按位操作来做到这一点:
+
+```
+1 | (1 << 1)
+```
+
+通过将 1 和 2(`1` 左移一位得到)进行“或”(`|`)运算得到二进制值 `0000_0011` 。你可以将其写入寄存器,使其保持启用状态,但也启用中断功能。
+
+你的头脑中要记住很多事情,特别是当你要在一个完整的系统上和可能有数百个之多的寄存器打交道时。在实践上,你可以使用助记符来执行此操作,助记符可跟踪字段在寄存器中的位置以及字段的宽度(即它的上边界是什么)
+
+下面是这些助记符之一的示例。它们是 C 语言的宏,用右侧的代码替换它们的出现的地方。这是上面列出的寄存器的简写。`&` 的左侧是该字段的起始位置,而右侧则限制该字段所占的位:
+
+```
+#define REG_ENABLED_FIELD(x) (x << 0) & 1
+#define REG_INTERRUPT_FIELD(x) (x << 1) & 2
+#define REG_KIND_FIELD(x) (x << 2) & (7 << 2)
+```
+
+然后,你可以使用这些来抽象化寄存器值的操作,如下所示:
+
+```
+void set_reg_val(reg* u8, val u8);
+
+fn enable_reg_with_interrupt(reg* u8) {
+ set_reg_val(reg, REG_ENABLED_FIELD(1) | REG_INTERRUPT_FIELD(1));
+}
+```
+
+这就是现在的做法。实际上,这就是大多数驱动程序在 Linux 内核中的使用方式。
+
+有没有更好的办法?如果能够基于对现代编程语言研究得出新的类型系统,就可能能够获得安全性和可表达性的好处。也就是说,如何使用更丰富、更具表现力的类型系统来使此过程更安全、更持久?
+
+### 在 Rust 语言中对寄存器值进行按位运算
+
+继续用上面的寄存器作为例子:
+
+```
++----------+------+-----------+---------+
+| (unused) | Kind | Interrupt | Enabled |
++----------+------+-----------+---------+
+ 5-7 2-4 1 0
+```
+
+你想如何用 Rust 类型来表示它呢?
+
+你将以类似的方式开始,为每个字段的*偏移*定义常量(即,距最低有效位有多远)及其掩码。*掩码*是一个值,其二进制表示形式可用于更新或读取寄存器内部的字段:
+
+```
+const ENABLED_MASK: u8 = 1;
+const ENABLED_OFFSET: u8 = 0;
+
+const INTERRUPT_MASK: u8 = 2;
+const INTERRUPT_OFFSET: u8 = 1;
+
+const KIND_MASK: u8 = 7 << 2;
+const KIND_OFFSET: u8 = 2;
+```
+
+接下来,你将声明一个 `Field` 类型并进行操作,将给定值转换为与其位置相关的值,以供在寄存器内使用:
+
+```
+struct Field {
+ value: u8,
+}
+
+impl Field {
+ fn new(mask: u8, offset: u8, val: u8) -> Self {
+ Field {
+ value: (val << offset) & mask,
+ }
+ }
+}
+```
+
+最后,你将使用一个 `Register` 类型,该类型会封装一个与你的寄存器宽度匹配的数字类型。 `Register` 具有 `update` 函数,可使用给定字段来更新寄存器:
+
+```
+struct Register(u8);
+
+impl Register {
+    fn update(&mut self, val: Field) {
+        self.0 = self.0 | val.value;
+ }
+}
+
+fn enable_register(reg: &mut Register) {
+ reg.update(Field::new(ENABLED_MASK, ENABLED_OFFSET, 1));
+}
+```
+
+使用 Rust,你可以使用数据结构来表示字段,将它们与特定的寄存器联系起来,并在与硬件交互时提供简洁明了的工效。这个例子使用了 Rust 提供的最基本的功能。无论如何,添加的结构都会减轻上述 C 示例中的某些晦涩的地方。现在,字段是个带有名字的事物,而不是从模糊的按位运算符派生而来的数字,并且寄存器是具有状态的类型 —— 这在硬件上多了一层抽象。
+
+### 一个易用的 Rust 实现
+
+用 Rust 重写的第一个版本很好,但是并不理想。你必须记住要带上掩码和偏移量,并且要手工进行临时计算,这容易出错。人类不擅长精确且重复的任务 —— 我们往往会感到疲劳或失去专注力,这会导致错误。一次一个寄存器地手动记录掩码和偏移量几乎可以肯定会以糟糕的结局而告终。这是最好留给机器的任务。
+
+其次,从结构上进行思考:如果有一种方法可以让字段的类型携带掩码和偏移信息呢?如果可以在编译时就发现硬件寄存器的访问和交互的实现代码中存在错误,而不是在运行时才发现,该怎么办?也许你可以依靠一种在编译时解决问题的常用策略,例如类型。
+
+你可以使用 [typenum][6] 来修改前面的示例,该库在类型级别提供数字和算术。在这里,你将使用掩码和偏移量对 `Field` 类型进行参数化,使其可用于任何 `Field` 实例,而无需将其包括在调用处:
+
+```
+#[macro_use]
+extern crate typenum;
+
+use core::marker::PhantomData;
+
+use typenum::*;
+
+// Now we'll add Mask and Offset to Field's type
+struct Field<Mask: Unsigned, Offset: Unsigned> {
+    value: u8,
+    _mask: PhantomData<Mask>,
+    _offset: PhantomData<Offset>,
+}
+
+// We can use type aliases to give meaningful names to
+// our fields (and not have to remember their offsets and masks).
+type RegEnabled = Field<U1, U0>;
+type RegInterrupt = Field<U2, U1>;
+type RegKind = Field<op!(U7 << U2), U2>;
+```
+
+现在,当重新访问 `Field` 的构造函数时,你可以忽略掩码和偏移量参数,因为类型中包含该信息:
+
+```
+impl<Mask: Unsigned, Offset: Unsigned> Field<Mask, Offset> {
+ fn new(val: u8) -> Self {
+ Field {
+ value: (val << Offset::U8) & Mask::U8,
+ _mask: PhantomData,
+ _offset: PhantomData,
+ }
+ }
+}
+
+// And to enable our register...
+fn enable_register(reg: &mut Register) {
+ reg.update(RegEnabled::new(1));
+}
+```
+
+看起来不错,但是……如果你在给定的值是否*适合*该字段方面犯了错误,会发生什么?考虑一个简单的输入错误,你在其中放置了 `10` 而不是 `1`:
+
+```
+fn enable_register(reg: &mut Register) {
+ reg.update(RegEnabled::new(10));
+}
+```
+
+在上面的代码中,预期结果是什么?好吧,代码会将启用位设置为 0,因为 `10 & 1 = 0`。那真不幸;最好在尝试写入之前就知道你要写入的值是否适合该字段。事实上,我认为静默地截掉错误字段值的高位简直就是一种*未定义的行为*(哈)。
+
+### 出于安全考虑使用 Rust
+
+如何以一般方式检查字段的值是否适合其规定的位置?需要更多类型级别的数字!
+
+你可以在 `Field` 中添加 `Width` 参数,并使用它来验证给定的值是否适合该字段:
+
+```
+struct Field<Mask: Unsigned, Offset: Unsigned, Width: Unsigned> {
+    value: u8,
+    _mask: PhantomData<Mask>,
+    _offset: PhantomData<Offset>,
+    _width: PhantomData<Width>,
+}
+
+type RegEnabled = Field<U1, U0, U1>;
+type RegInterrupt = Field<U2, U1, U1>;
+type RegKind = Field<op!(U7 << U2), U2, U3>;
+
+impl<Mask: Unsigned, Offset: Unsigned, Width: Unsigned> Field<Mask, Offset, Width> {
+    fn new(val: u8) -> Option<Self> {
+ if val <= (1 << Width::U8) - 1 {
+ Some(Field {
+ value: (val << Offset::U8) & Mask::U8,
+ _mask: PhantomData,
+ _offset: PhantomData,
+ _width: PhantomData,
+ })
+ } else {
+ None
+ }
+ }
+}
+```
+
+现在,只有给定值适合时,你才能构造一个 `Field` !否则,你将得到 `None` 信号,该信号指示发生了错误,而不是截掉该值的高位并静默写入意外的值。
+
+但是请注意,这将在运行时环境中引发错误。但是,我们事先知道我们想写入的值,还记得吗?鉴于此,我们可以教编译器完全拒绝具有无效字段值的程序 —— 我们不必等到运行它!
+
+这次,你将向 `new` 的新实现 `new_checked` 中添加一个特征绑定(`where` 子句),该函数要求输入值小于或等于给定字段用 `Width` 所能容纳的最大可能值:
+
+```
+struct Field<Mask: Unsigned, Offset: Unsigned, Width: Unsigned> {
+    value: u8,
+    _mask: PhantomData<Mask>,
+    _offset: PhantomData<Offset>,
+    _width: PhantomData<Width>,
+}
+
+type RegEnabled = Field<U1, U0, U1>;
+type RegInterrupt = Field<U2, U1, U1>;
+type RegKind = Field<op!(U7 << U2), U2, U3>;
+
+impl<Mask: Unsigned, Offset: Unsigned, Width: Unsigned> Field<Mask, Offset, Width> {
+    const fn new_checked<V: Unsigned>() -> Self
+    where
+        V: IsLessOrEqual<op!((U1 << Width) - U1), Output = True>,
+ {
+ Field {
+ value: (V::U8 << Offset::U8) & Mask::U8,
+ _mask: PhantomData,
+ _offset: PhantomData,
+ _width: PhantomData,
+ }
+ }
+}
+```
+
+只有拥有此属性的数字才实现此特征,因此,如果使用不适合的数字,它将无法编译。让我们看一看!
+
+```
+fn enable_register(reg: &mut Register) {
+    reg.update(RegEnabled::new_checked::<U10>());
+}
+12 |     reg.update(RegEnabled::new_checked::<U10>());
+ | ^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
+ |
+ = note: expected type `typenum::B0`
+ found type `typenum::B1`
+```
+
+`new_checked` 将无法生成一个程序,因为该字段的值有错误的高位。你的输入错误不会在运行时环境中才爆炸,因为你永远无法获得一个可以运行的工件。
+
+就与内存映射硬件交互的安全性而言,你已经接近 Rust 所能达到的极致了。但是,与最终得到的这一锅粥似的类型参数相比,你在最初的 C 示例中所写的代码反而更简洁。当你面对的寄存器可能有数百甚至数千个时,这种做法还容易维护吗?
+
+### 让 Rust 恰到好处:既安全又方便使用
+
+早些时候,我认为手工计算掩码有问题,但我又做了同样有问题的事情 —— 尽管是在类型级别。虽然使用这种方法很不错,但要达到编写任何代码的地步,则需要大量样板和手动转录(我在这里谈论的是类型的同义词)。
+
+我们的团队想要像 [TockOS mmio 寄存器][7]之类的东西,而以最少的手动转录生成类型安全的实现。我们得出的结果是一个宏,该宏生成必要的样板以获得类似 Tock 的 API 以及基于类型的边界检查。要使用它,请写下一些有关寄存器的信息,其字段、宽度和偏移量以及可选的[枚举][8]类的值(你应该为字段可能具有的值赋予“含义”):
+
+```
+register! {
+ // The register's name
+ Status,
+ // The type which represents the whole register.
+ u8,
+ // The register's mode, ReadOnly, ReadWrite, or WriteOnly.
+ RW,
+ // And the fields in this register.
+ Fields [
+ On WIDTH(U1) OFFSET(U0),
+ Dead WIDTH(U1) OFFSET(U1),
+ Color WIDTH(U3) OFFSET(U2) [
+ Red = U1,
+ Blue = U2,
+ Green = U3,
+ Yellow = U4
+ ]
+ ]
+}
+```
+
+由此,你可以生成寄存器和字段类型,如上例所示,其中索引:`Width`、`Mask` 和 `Offset` 是从一个字段定义的 `WIDTH` 和 `OFFSET` 部分的输入值派生的。另外,请注意,所有这些数字都是 “类型数字”;它们将直接进入你的 `Field` 定义!
+
+生成的代码通过为寄存器及字段指定名称来为寄存器及其相关字段提供名称空间。这很绕口,看起来是这样的:
+
+```
+mod Status {
+ struct Register(u8);
+ mod On {
+ struct Field; // There is of course more to this definition
+ }
+ mod Dead {
+ struct Field;
+ }
+ mod Color {
+ struct Field;
+ pub const Red: Field = Field::new();
+ // &c.
+ }
+}
+```
+
+生成的 API 包含名义上期望的读取和写入的原语,以获取原始寄存器的值,但它也有办法获取单个字段的值、执行集合操作以及确定是否设置了任何(或全部)位集合的方法。你可以阅读[完整生成的 API][9]上的文档。
+
+### 粗略检查
+
+将这些定义用于实际设备会是什么样?代码中是否会充斥着类型参数,从而掩盖了视图中的实际逻辑?
+
+不会!通过使用类型同义词和类型推断,你实际上根本不必考虑程序的类型层面部分。你可以直接与硬件交互,并自动获得与边界相关的保证。
+
+这是一个 [UART][10] 寄存器块的示例。我会跳过寄存器本身的声明,因为包括在这里就太多了。而是从寄存器“块”开始,然后帮助编译器知道如何从指向该块开头的指针中查找寄存器。我们通过实现 `Deref` 和 `DerefMut` 来做到这一点:
+
+```
+#[repr(C)]
+pub struct UartBlock {
+ rx: UartRX::Register,
+ _padding1: [u32; 15],
+ tx: UartTX::Register,
+ _padding2: [u32; 15],
+ control1: UartControl1::Register,
+}
+
+pub struct Regs {
+ addr: usize,
+}
+
+impl Deref for Regs {
+ type Target = UartBlock;
+
+ fn deref(&self) -> &UartBlock {
+ unsafe { &*(self.addr as *const UartBlock) }
+ }
+}
+
+impl DerefMut for Regs {
+ fn deref_mut(&mut self) -> &mut UartBlock {
+ unsafe { &mut *(self.addr as *mut UartBlock) }
+ }
+}
+```
+
+一旦到位,使用这些寄存器就像 `read()` 和 `modify()` 一样简单:
+
+```
+fn main() {
+ // A pretend register block.
+ let mut x = [0_u32; 33];
+
+ let mut regs = Regs {
+ // Some shenanigans to get at `x` as though it were a
+ // pointer. Normally you'd be given some address like
+ // `0xDEADBEEF` over which you'd instantiate a `Regs`.
+ addr: &mut x as *mut [u32; 33] as usize,
+ };
+
+ assert_eq!(regs.rx.read(), 0);
+
+ regs.control1
+ .modify(UartControl1::Enable::Set + UartControl1::RecvReadyInterrupt::Set);
+
+ // The first bit and the 10th bit should be set.
+ assert_eq!(regs.control1.read(), 0b_10_0000_0001);
+}
+```
+
+当我们使用运行时的值时,我们会像前面说的那样使用 `Option`。这里我使用的是 `unwrap`,但是在一个输入未知的真实程序中,你可能想先检查一下 `new` 调用返回的是不是 `Some`:[^1] [^2]
+
+```
+fn main() {
+ // A pretend register block.
+ let mut x = [0_u32; 33];
+
+ let mut regs = Regs {
+ // Some shenanigans to get at `x` as though it were a
+ // pointer. Normally you'd be given some address like
+ // `0xDEADBEEF` over which you'd instantiate a `Regs`.
+ addr: &mut x as *mut [u32; 33] as usize,
+ };
+
+ let input = regs.rx.get_field(UartRX::Data::Field::Read).unwrap();
+ regs.tx.modify(UartTX::Data::Field::new(input).unwrap());
+}
+```
+
+### 解码失败条件
+
+根据你的个人痛苦忍耐程度,你可能已经注意到这些错误几乎是无法理解的。看一下我所说的不那么微妙的提醒:
+
+```
+error[E0271]: type mismatch resolving `, typenum::B0>, typenum::B1>, typenum::B0>, typenum::B0> as typenum::IsLessOrEqual, typenum::B0>, typenum::B1>, typenum::B0>>>::Output == typenum::B1`
+ --> src/main.rs:12:5
+ |
+12 | less_than_ten::();
+ | ^^^^^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
+ |
+ = note: expected type `typenum::B0`
+ found type `typenum::B1`
+```
+
+`expected struct typenum::B0, found struct typenum::B1` 部分是有意义的,但 `typenum::UInt<...>` 这样一长串又是什么呢?原来,typenum 库是用嵌套的二进制 [cons][13] 单元在类型层面表示数字的,这让这类错误信息几乎没法读。于是我们变得《疯了,地狱了,不要再忍受了Mad As Hell And Wasn't Going To Take It Anymore》,并做了一个小工具 `tnfilt`,从这种命名空间化的二进制 cons 单元的痛苦中解脱出来。`tnfilt` 将 cons 单元格式的表示法替换为可让人看懂的十进制数字。我们认为其他人也会遇到类似的困难,所以我们分享了 [tnfilt][14]。你可以像这样使用它:
+
+```
+$ cargo build 2>&1 | tnfilt
+```
+
+它将上面的输出转换为如下所示:
+
+```
+error[E0271]: type mismatch resolving `>::Output == typenum::B1`
+```
+
+现在*这*才有意义!
+
+### 结论
+
+当在软件与硬件进行交互时,普遍使用内存映射寄存器,并且有无数种方法来描述这些交互,每种方法在易用性和安全性上都有不同的权衡。我们发现使用类型级编程来取得内存映射寄存器交互的编译时检查可以为我们提供制作更安全软件的必要信息。该代码可在 [bounded-registers][15] crate(Rust 包)中找到。
+
+我们的团队从安全性较高的一端开始,然后尝试找出如何把易用性这个“滑块”推向易用的一端。正是出于这些雄心,“边界寄存器”诞生了,如今在 Auxon 公司的冒险中,只要遇到内存映射设备,我们就会用上它。
+
+* * *
+
+[^1]: 从技术上讲,从定义上看,从寄存器字段读取的值只能在规定的范围内,但是我们当中没有一个人生活在一个纯净的世界中,而且你永远都不知道外部系统发挥作用时会发生什么。你是在这里接受硬件之神的命令,因此与其强迫你进入“可能的恐慌”状态,还不如给你提供处理“这将永远不会发生”的机会。
+[^2]: `get_field` 看起来有点奇怪。我正在专门查看 `Field::Read` 部分。`Field` 是一种类型,你需要该类型的实例才能传递给 `get_field`。更干净的 API 可能类似于:`regs.rx.get_field::();` 但是请记住,`Field` 是一种具有固定的宽度、偏移量等索引的类型的同义词。要像这样对 `get_field` 进行参数化,你需要使用更高级的类型。
+
+* * *
+
+此内容最初发布在 [Auxon Engineering 博客][16]上,并经许可进行编辑和重新发布。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/c-vs-rust-abstractions
+
+作者:[Dan Pittman][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dan-pittman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
+[2]: https://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
+[3]: https://www.w3schools.in/c-tutorial/history-of-c/
+[4]: https://www.rust-lang.org/
+[5]: https://en.wikipedia.org/wiki/Rust_(programming_language)
+[6]: https://docs.rs/crate/typenum
+[7]: https://docs.rs/tock-registers/0.3.0/tock_registers/
+[8]: https://en.wikipedia.org/wiki/Enumerated_type
+[9]: https://github.com/auxoncorp/bounded-registers#the-register-api
+[10]: https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
+[13]: https://en.wikipedia.org/wiki/Cons
+[14]: https://github.com/auxoncorp/tnfilt
+[15]: https://crates.io/crates/bounded-registers
+[16]: https://blog.auxon.io/2019/10/25/type-level-registers/
diff --git a/published/202001/20200117 Get started with this open source to-do list manager.md b/published/202001/20200117 Get started with this open source to-do list manager.md
new file mode 100644
index 0000000000..deaa6da72a
--- /dev/null
+++ b/published/202001/20200117 Get started with this open source to-do list manager.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11835-1.html)
+[#]: subject: (Get started with this open source to-do list manager)
+[#]: via: (https://opensource.com/article/20/1/open-source-to-do-list)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+开始使用开源待办事项清单管理器
+======
+
+> 待办事项清单是跟踪任务列表的强大方法。在我们的 20 个使用开源提升生产力的系列的第七篇文章中了解如何使用它。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/31/111103kmv55ploshuso4ot.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 todo 跟踪任务
+
+任务管理和待办事项清单是我非常喜欢的东西。我是一位生产效率的狂热粉丝(以至于我为此做了一个[播客][2]),我尝试了各种不同的应用。我甚至为此[做了演讲][3]并[写了些文章][4]。因此,当我谈到提高工作效率时,肯定会出现任务管理和待办事项清单工具。
+
+![Getting fancy with Todo.txt][5]
+
+说实话,由于简单、跨平台且易于同步,用 [todo.txt][6] 肯定不会错。它是我不断反复提到的两个待办事项清单以及任务管理应用之一(另一个是 [Org 模式][7])。让我反复使用它的原因是它简单、可移植、易于理解,并且有许多很好的附加组件,并且当一台机器有附加组件,而另一台没有,也不会破坏它。由于它是一个 Bash shell 脚本,我还没发现一个无法支持它的系统。
+
+#### 设置 todo.txt
+
+首先,你需要安装基本 shell 脚本并将默认配置文件复制到 `~/.todo` 目录:
+
+```
+git clone https://github.com/todotxt/todo.txt-cli.git
+cd todo.txt-cli
+make
+sudo make install
+mkdir ~/.todo
+cp todo.cfg ~/.todo/config
+```
+
+接下来,设置配置文件。一般,我想取消对颜色设置的注释,但必须马上设置的是 `TODO_DIR` 变量:
+
+```
+export TODO_DIR="$HOME/.todo"
+```
+
+#### 添加待办事件
+
+要添加第一个待办事项,只需输入 `todo.sh add <任务内容>` 即可。这还将在 `$HOME/.todo/` 中创建三个文件:`todo.txt`、`done.txt` 和 `reports.txt`。
+
+添加几个项目后,运行 `todo.sh ls` 查看你的待办事项。
+
+![Basic todo.txt list][8]
+
+#### 管理任务
+
+你可以通过给项目设置优先级来稍微改善它。要向某个项目添加优先级,运行 `todo.sh pri # A`。其中 `#` 是该任务在列表中的编号,而字母 `A` 是优先级。优先级可以设置为从 A 到 Z,列表就是按这个顺序排序的。
+
+要完成任务,运行 `todo.sh do #` 来标记项目已完成并将它移动到 `done.txt`。运行 `todo.sh report` 会向 `report.txt` 写入已完成和未完成项的数量。
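+
+下面是一个简短的示意性会话(任务编号仅作举例),把上面几个命令串起来:
+
+```
+todo.sh pri 1 A     # 把第 1 个任务的优先级设为 A
+todo.sh do 2        # 把第 2 个任务标记为已完成,移动到 done.txt
+todo.sh report      # 统计已完成/未完成任务的数量并写入报告文件
+```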
+
+所有这三个文件的格式都有详细的说明,因此你可以使用你的文本编辑器修改。`todo.txt` 的基本格式是:
+
+```
+(Priority) YYYY-MM-DD Task
+```
+
+该日期表示任务的到期日期(如果已设置)。手动编辑文件时,只需在任务前面加一个 `x` 来标记为已完成。运行 `todo.sh archive` 会将这些项目移动到 `done.txt`,你可以编辑该文本文件,并在有时间时将已完成的项目归档。
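+
+下面是一个符合上述格式的 `todo.txt` 示例文件(内容纯属虚构,仅用于演示):
+
+```
+(A) 2020-02-01 给编辑回复邮件
+(B) 整理本周的播客笔记
+x 预约牙医
+```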
+
+#### 设置重复任务
+
+我有很多重复的任务,我需要以每天/周/月来计划。
+
+![Recurring tasks with the ice_recur add-on][9]
+
+这就是 `todo.txt` 的灵活性所在。通过在 `~/.todo.actions.d/` 中使用[附加组件][10],你可以添加命令并扩展基本 `todo.sh` 的功能。附加组件基本上是实现特定命令的脚本。对于重复执行的任务,插件 [ice_recur][11] 应该符合要求。按照其页面上的说明操作,你可以设置任务以非常灵活的方式重复执行。
+
+![Todour on MacOS][12]
+
+在该[附加组件目录][10]中有很多附加组件,包括同步到某些云服务,也有链接到桌面或移动端应用的组件,这样你可以随时看到待办列表。
+
+我只是简单介绍了这个待办事项清单工具的功能,请花点时间深入了解它的强大之处!它确实可以帮助我每天完成任务。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-to-do-list
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
+[2]: https://productivityalchemy.com/
+[3]: https://www.slideshare.net/AllThingsOpen/getting-to-done-on-the-command-line
+[4]: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
+[5]: https://opensource.com/sites/default/files/uploads/productivity_7-1.png
+[6]: http://todotxt.org/
+[7]: https://orgmode.org/
+[8]: https://opensource.com/sites/default/files/uploads/productivity_7-2.png (Basic todo.txt list)
+[9]: https://opensource.com/sites/default/files/uploads/productivity_7-3.png (Recurring tasks with the ice_recur add-on)
+[10]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
+[11]: https://github.com/rlpowell/todo-text-stuff
+[12]: https://opensource.com/sites/default/files/uploads/productivity_7-4.png (Todour on MacOS)
diff --git a/published/202001/20200117 Locking and unlocking accounts on Linux systems.md b/published/202001/20200117 Locking and unlocking accounts on Linux systems.md
new file mode 100644
index 0000000000..64a19e1010
--- /dev/null
+++ b/published/202001/20200117 Locking and unlocking accounts on Linux systems.md
@@ -0,0 +1,112 @@
+[#]: collector: (lujun9972)
+[#]: translator: (FSSlc)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11813-1.html)
+[#]: subject: (Locking and unlocking accounts on Linux systems)
+[#]: via: (https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+在 Linux 系统中禁用与解禁用户的账号
+======
+
+> 总有这样的时候:有时你需要禁用某位 Linux 用户的账号,有时你还需要反过来解禁用户的账号。
+> 本文将介绍一些管理用户访问的命令,并解释它们背后的原理。
+
+![](https://images.idgesg.net/images/article/2019/10/cso_cybersecurity_mysterious_padlock_complex_circuits_gold_by_sqback_gettyimages-1177918748_2400x1600-100813830-large.jpg)
+
+假如你正管理着一台 [Linux][1] 系统,那么很有可能将遇到需要禁用一个账号的情况。可能是某人已经换了职位,他们是否还需要该账号仍是个问题;或许有理由相信再次使用该账号并没有大碍。不管上述哪种情况,知晓如何禁用账号并解禁账号都是你需要知道的知识。
+
+需要你记住的一件重要的事是尽管有多种方法来禁用账号,但它们并不都达到相同的效果。假如用户使用公钥/私钥来使用该账号而不是使用密码来访问,那么你使用的某些命令来阻止用户获取该账号或许将不会生效。
+
+### 使用 passwd 来禁用一个账号
+
+最为简单的用来禁用一个账号的方法是使用 `passwd -l` 命令。例如:
+
+```
+$ sudo passwd -l tadpole
+```
+
+上面这个命令的效果是在加密后的密码文件 `/etc/shadow` 中,用户对应的那一行的最前面加上一个 `!` 符号。这样就足够阻止用户使用密码来访问账号了。
+
+在没有使用上述命令前,加密后的密码行如下所示(请注意第一个字符):
+
+```
+$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
+```
+
+而禁用该账号后,这一行将变为:
+
+```
+!$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
+```
+
+在 tadpole 下一次尝试登录时,他可能会使用他原有的密码来尝试多次登录,但就是无法再登录成功了。另一方面,你则可以使用下面的命令来查看他这个账号的状态(`-S` = status):
+
+```
+$ sudo passwd -S tadpole
+tadpole L 10/15/2019 0 99999 7 -1
+```
+
+第二项的 `L` 告诉你这个账号已经被禁用了。在该账号被禁用前,这一项应该是 `P`。如果显示的是 `NP` 则意味着该账号还没有设置密码。
+
+命令 `usermod -L` 也具有相同的效果(添加 `!` 来禁用账号的使用)。
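+
+例如(示意用法,与上面的 `passwd -l`/`passwd -u` 等效):
+
+```
+$ sudo usermod -L tadpole     # 锁定账号(在密码哈希前添加 !)
+$ sudo usermod -U tadpole     # 解锁账号(移除 !)
+```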
+
+使用这种方法来禁用某个账号的一个好处是当需要解禁某个账号时非常容易。只需要使用一个文本编辑器或者使用 `passwd -u` 命令来执行相反的操作,即将添加的 `!` 移除即可。
+
+```
+$ sudo passwd -u tadpole
+passwd: password expiry information changed.
+```
+
+但使用这种方式的问题是如果用户使用公钥/私钥对的方式来访问他/她的账号,这种方式将不能阻止他们使用该账号。
+
+### 使用 chage 命令来禁用账号
+
+另一种禁用用户账号的方法是使用 `chage` 命令,它可以帮助管理用户账号的过期日期。
+
+```
+$ sudo chage -E0 tadpole
+$ sudo passwd -S tadpole
+tadpole P 10/15/2019 0 99999 7 -1
+```
+
+`chage` 命令将会稍微修改 `/etc/shadow` 文件。在这个使用 `:` 来分隔的文件(下面将进行展示)中,某行的第 8 项将被设置为 `0`(先前为空),这就意味着这个账号已经过期了。`chage` 命令会追踪密码更改期间的天数,通过选项也可以提供账号过期信息。第 8 项如果是 0 则意味着这个账号在 1970 年 1 月 1 日后的一天过期,当使用上面显示的那个命令时可以用来禁用账号。
+
+```
+$ sudo grep tadpole /etc/shadow | fold
+tadpole:$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPC
+nXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7::0:
+ ^
+ |
+ +--- days until expiration
+```
+
+为了执行相反的操作,你可以简单地使用下面的命令将放置在 `/etc/shadow` 文件中的 `0` 移除掉:
+
+```
+% sudo chage -E-1 tadpole
+```
+
+一旦一个账号使用这种方式被禁用,即便是无密码的 [SSH][4] 登录也不能再访问该账号了。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
+[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[4]: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/published/202001/20200119 What-s your favorite Linux terminal trick.md b/published/202001/20200119 What-s your favorite Linux terminal trick.md
new file mode 100644
index 0000000000..675f36a935
--- /dev/null
+++ b/published/202001/20200119 What-s your favorite Linux terminal trick.md
@@ -0,0 +1,57 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11817-1.html)
+[#]: subject: (What's your favorite Linux terminal trick?)
+[#]: via: (https://opensource.com/article/20/1/linux-terminal-trick)
+[#]: author: (Opensource.com https://opensource.com/users/admin)
+
+你有什么喜欢的 Linux 终端技巧?
+======
+
+> 告诉我们你最喜欢的终端技巧,无论是提高生产率的快捷方式还是有趣的彩蛋。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/25/135858accxc70tfxuifxx1.jpg)
+
+新年伊始始终是评估提高效率的新方法的好时机。许多人尝试使用新的生产力工具,或者想找出如何优化其最常用的流程。终端是一个需要评估的领域,尤其是在开源世界中,有无数种方法可以通过快捷键和命令使终端上的生活更加高效(又有趣!)。
+
+我们向作者们询问了他们最喜欢的终端技巧。他们分享了一些节省时间的技巧,甚至还有一个有趣的终端彩蛋。你会采用这些键盘快捷键或命令行技巧吗?你有喜欢分享的最爱吗?请发表评论来告诉我们。
+
+“我找不出哪个是我最喜欢的;每天我都会使用这三个:
+
+* `Ctrl + L` 来清除屏幕(而不是键入 `clear`)。
+* `sudo !!` 以 `sudo` 特权运行先前的命令。
+* `grep -Ev '^#|^$' <文件名>` 将显示文件的内容,但不包含注释行和空行。” —Mars Toktonaliev
+
+“对我来说,如果我正在使用终端文本编辑器,并且希望将其丢开,以便可以快速执行其他操作,则可以使用 `Ctrl + Z` 将其放到后台,接着执行我需要做的一切,然后用 `fg` 将其带回前台。有时我也会对 `top` 或 `htop` 做同样的事情。我可以将其丢到后台,并在我想检查当前性能时随时将其带回前台。我不会将通常很快能完成的任务在前后台之间切换,它确实可以增强终端上的多任务处理能力。” —Jay LaCroix
+
+“我经常在某一天在终端中做很多相同的事情,有两件事是每天都不变的:
+
+* `Ctrl + R` 反向搜索我的 Bash 历史记录以查找我已经运行并且希望再次执行的命令。
+* 插入号(`^`)替换是最好的,因为我经常做诸如 `sudo dnf search <软件包名>` 之类的事情,然后,如果我以这种方式找到了合适的软件包,就执行 `^search^install` 来重新运行该命令,以 `install` 替换 `search`。
+
+这些东西肯定是很基本的,但是对我来说却节省了时间。” —Steve Morris
+
+“我的炫酷终端技巧不是我在终端上执行的操作,而是我使用的终端。有时候我只是想要使用 Apple II 或旧式琥珀色终端的感觉,那我就启动了 Cool-Retro-Term。它的截屏可以在这个[网站][2]上找到。” —Jim Hall
+
+“可能是用 `ssh -X` 在其他计算机上运行图形程序,以及(在某些终端仿真器上,例如 gnome-terminal)用 `Ctrl+Shift+C` 和 `Ctrl+Shift+V` 复制/粘贴。我不确定这算不算炫酷(有趣的还是通过 ssh 启动图形程序这件事)。最近,我需要登录另一台计算机,同时让我的孩子们能在笔记本电脑的大屏幕上看到操作。这个[链接][3]向我展示了一些我从未见过的内容:通过局域网把另一台计算机屏幕上的活动会话镜像到我的笔记本电脑上(`x11vnc -desktop`),并且能够同时从两台计算机进行控制。” —Kyle R. Conway
+
+“你可以安装 `sl`(`$ sudo apt install sl` 或 `$ sudo dnf install sl`),并且当在 Bash 中输入命令 `sl` 时,一个基于文本的蒸汽机车就会在显示屏上移动。” —Don Watkins
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/linux-terminal-trick
+
+作者:[Opensource.com][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/admin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
+[2]: https://github.com/Swordfish90/cool-retro-term
+[3]: https://elinux.org/Screen_Casting_on_a_Raspberry_Pi
diff --git a/published/202001/20200122 Setting up passwordless Linux logins using public-private keys.md b/published/202001/20200122 Setting up passwordless Linux logins using public-private keys.md
new file mode 100644
index 0000000000..89f54a6b45
--- /dev/null
+++ b/published/202001/20200122 Setting up passwordless Linux logins using public-private keys.md
@@ -0,0 +1,202 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11830-1.html)
+[#]: subject: (Setting up passwordless Linux logins using public/private keys)
+[#]: via: (https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+使用公钥/私钥对设定免密的 Linux 登录方式
+======
+
+> 使用一组公钥/私钥对让你不需要密码登录到远程 Linux 系统或使用 ssh 运行命令,这会非常方便,但是设置过程有点复杂。下面是帮助你的方法和脚本。
+
+![](https://img.linux.net.cn/data/attachment/album/202001/29/141343ldps4muy4kp64k4l.jpg)
+
+在 [Linux][1] 系统上设置一个允许你无需密码即可远程登录或运行命令的帐户并不难,但是要使它正常工作,你还需要掌握一些繁琐的细节。在本文,我们将完成整个过程,然后给出一个可以帮助处理琐碎细节的脚本。
+
+设置好之后,如果希望在脚本中运行 `ssh` 命令,尤其是希望配置自动运行的命令,那么免密访问特别有用。
+
+需要注意的是,你不需要在两个系统上使用相同的用户帐户。实际上,你可以把公用密钥用于系统上的多个帐户或多个系统上的不同帐户。
+
+设置方法如下。
+
+### 在哪个系统上启动?
+
+首先,你需要从要发出命令的系统上着手。那就是你用来创建 `ssh` 密钥的系统。你还需要可以访问远程系统上的帐户并在其上运行这些命令。
+
+为了使角色清晰明了,我们将场景中的第一个系统称为 “boss”,因为它将发出要在另一个系统上运行的命令。
+
+因此,命令提示符如下:
+
+```
+boss$
+```
+
+如果你还没有在 boss 系统上为你的帐户设置公钥/私钥对,请使用如下所示的命令创建一个密钥对。注意,你可以在各种加密算法之间进行选择(一般使用 RSA 或 DSA)。还要注意,要想不输入密码就能访问系统,你需要在下面对话的两个口令提示符处留空(直接按回车)。
+
+如果你已经有一个与此帐户关联的公钥/私钥对,请跳过此步骤。
+
+```
+boss$ ssh-keygen -t rsa
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/myself/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase): <== 按下回车键即可
+Enter same passphrase again: <== 按下回车键即可
+Your identification has been saved in /home/myself/.ssh/id_rsa.
+Your public key has been saved in /home/myself/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:1zz6pZcMjA1av8iyojqo6NVYgTl1+cc+N43kIwGKOUI myself@boss
+The key's randomart image is:
++---[RSA 3072]----+
+| . .. |
+| E+ .. . |
+| .+ .o + o |
+| ..+.. .o* . |
+| ... So+*B o |
+| + ...==B . |
+| . o . ....++. |
+|o o . . o..o+ |
+|=..o.. ..o o. |
++----[SHA256]-----+
+```
+
+上面显示的命令将创建公钥和私钥。其中公钥用于加密,私钥用于解密。因此,这些密钥之间的关系是关键的,私有密钥**绝不**应该被共享。相反,它应该保存在 boss 系统的 `.ssh` 文件夹中。
+
+注意,在创建时,你的公钥和私钥将会保存在 `.ssh` 文件夹中。
+
+下一步是将**公钥**复制到你希望从 boss 系统免密访问的系统。你可以使用 `scp` 命令来完成此操作,但此时你仍然需要输入密码。在本例中,该系统称为 “target”。
+
+```
+boss$ scp .ssh/id_rsa.pub myacct@target:/home/myaccount
+myacct@target's password:
+```
+
+你需要把公钥安装到 target 系统(即将运行命令的系统)上。如果你那里还没有 `.ssh` 目录(例如,你从未在该系统上使用过 `ssh`),运行类似这样的命令会为你创建一个:
+
+```
+target$ ssh localhost date
+target$ ls -la .ssh
+total 12
+drwx------ 2 myacct myacct 4096 Jan 19 11:48 .
+drwxr-xr-x 6 myacct myacct 4096 Jan 19 11:49 ..
+-rw-r--r-- 1 myacct myacct 222 Jan 19 11:48 known_hosts
+```
+
+仍然在目标系统上,你需要将从“boss”系统传输的公钥添加到 `.ssh/authorized_keys` 文件中。如果该文件已经存在,使用下面的命令将把它添加到文件的末尾;如果文件不存在,则创建该文件并添加密钥。
+
+```
+target$ cat id_rsa.pub >> .ssh/authorized_keys
+```
+
+下一步,你需要确保你的 `authorized_keys` 文件权限为 600。如果还不是,执行命令 `chmod 600 .ssh/authorized_keys`。
+
+```
+target$ ls -l authorized_keys
+-rw------- 1 myself myself 569 Jan 19 12:10 authorized_keys
+```
+
+还要检查目标系统上 `.ssh` 目录的权限是否设置为 700。如果需要,执行 `chmod 700 .ssh` 命令修改权限。
+
+```
+target$ ls -ld .ssh
+drwx------ 2 myacct myacct 4096 Jan 14 15:54 .ssh
+```
+
+此时,你应该能够从 boss 系统免密地在 target 系统上远程运行命令了。唯一的例外是:target 系统上的目标帐户里已经存在一个来自同一用户、同一主机的旧公钥,这时你应该删除这个早期的(相互冲突的)条目。
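+
+顺带一提,很多发行版自带的 `ssh-copy-id` 工具可以一步完成上面这些复制、追加公钥和设置权限的手工步骤(这里只是提示一种替代做法,本文的脚本并不依赖它):
+
+```
+boss$ ssh-copy-id myacct@target
+```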
+
+### 使用脚本
+
+使用脚本可以使某些工作变得更加容易。但是,在下面的示例脚本中,你会遇到的一个烦人的问题是,在配置免密访问权限之前,你必须多次输入目标用户的密码。一种选择是将脚本分为两部分——需要在 boss 系统上运行的命令和需要在 target 系统上运行的命令。
+
+这是“一步到位”版本的脚本:
+
+```
+#!/bin/bash
+# NOTE: This script requires that you have the password for the remote acct
+# in order to set up password-free access using your public key
+
+LOC=`hostname` # the local system from which you want to run commands from
+ # wo a password
+
+# get target system and account
+echo -n "target system> "
+read REM
+echo -n "target user> "
+read user
+
+# create a key pair if no public key exists
+if [ ! -f ~/.ssh/id_rsa.pub ]; then
+ ssh-keygen -t rsa
+fi
+
+# ensure a .ssh directory exists in the remote account
+echo checking for .ssh directory on remote system
+ssh $user@$REM "if [ ! -d /home/$user/.ssh ]; then mkdir /home/$user/.ssh; fi"
+
+# share the public key (using local hostname)
+echo copying the public key
+scp ~/.ssh/id_rsa.pub $user@$REM:/home/$user/$user-$LOC.pub
+
+# put the public key into the proper location
+echo adding key to authorized_keys
+ssh $user@$REM "cat /home/$user/$user-$LOC.pub >> /home/$user/.ssh/authorized_keys"
+
+# set permissions on authorized_keys and .ssh (might be OK already)
+echo setting permissions
+ssh $user@$REM "chmod 600 ~/.ssh/authorized_keys"
+ssh $user@$REM "chmod 700 ~/.ssh"
+
+# try it out -- should NOT ask for a password
+echo testing -- if no password is requested, you are all set
+ssh $user@$REM /bin/hostname
+```
+
+脚本已经配置为在你每次必须输入密码时告诉你它正在做什么。交互看起来是这样的:
+
+```
+$ ./rem_login_setup
+target system> fruitfly
+target user> lola
+checking for .ssh directory on remote system
+lola@fruitfly's password:
+copying the public key
+lola@fruitfly's password:
+id_rsa.pub 100% 567 219.1KB/s 00:00
+adding key to authorized_keys
+lola@fruitfly's password:
+setting permissions
+lola@fruitfly's password:
+testing -- if no password is requested, you are all set
+fruitfly
+```
+
+在上面的场景之后,你就可以像这样登录到 lola 的帐户:
+
+```
+$ ssh lola@fruitfly
+[lola@fruitfly ~]$
+```
+
+一旦设置了免密登录,你就可以不需要键入密码从 boss 系统登录到 target 系统,并且运行任意的 `ssh` 命令。以这种免密的方式运行并不意味着你的帐户不安全。然而,根据 target 系统的性质,保护你在 boss 系统上的密码可能变得更加重要。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3514607/setting-up-passwordless-linux-logins-using-publicprivate-keys.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
+[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[3]: https://www.networkworld.com/article/3143050/linux/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html#tk.nww-fsb
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/published/202001/20200123 Wine 5.0 is Released- Here-s How to Install it.md b/published/202001/20200123 Wine 5.0 is Released- Here-s How to Install it.md
new file mode 100644
index 0000000000..683f033104
--- /dev/null
+++ b/published/202001/20200123 Wine 5.0 is Released- Here-s How to Install it.md
@@ -0,0 +1,165 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11827-1.html)
+[#]: subject: (Wine 5.0 is Released! Here’s How to Install it)
+[#]: via: (https://itsfoss.com/wine-5-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Wine 5.0 发布了!
+======
+
+> Wine 的一个新的主要版本发布了。使用 Wine 5.0,在 Linux 上运行 Windows 应用程序和游戏的体验得到进一步改进。
+
+通过一些努力,你可以使用 Wine [在 Linux 上运行 Windows 应用程序][1]。当你必须使用一个仅在 Windows 上可用的软件时,Wine 是一个可以尝试的工具。它支持许多这样的软件。
+
+Wine 的一个新的主要版本 Wine 5.0 已经发布,距 4.0 版本的发布差不多过去了一年。
+
+Wine 5.0 发布版本引进了几个主要特性和很多显著的更改/改进。在这篇文章中,我将重点介绍新的特性是什么,并且也将提到安装说明。
+
+### 在 Wine 5.0 中有什么新的特性?
+
+![][2]
+
+如他们的[官方声明][3]所述,这是 5.0 发布版本中的关键更改:
+
+* PE 格式的内置模块。
+* 支持多显示器。
+* 重新实现了 XAudio2。
+* 支持 Vulkan 1.1。
+* 支持微软安装程序(MSI)补丁文件。
+* 性能提升。
+
+因此,随着 Vulkan 1.1 和对多显示器的支持 —— Wine 5.0 发布版本是一件大事。
+
+除了上面强调的这些关键内容以外,在新的版本中包含成千上万的更改/改进中,你还可以期待对控制器的支持更好。
+
+值得注意的是,此版本特别纪念了 **Józef Kucia**(vkd3d 项目的首席开发人员)。
+
+他们也已经在[发布说明][4]中提到这一点:
+
+> 这个发布版本特别纪念了 Józef Kucia,他于 2019 年 8 月去世,年仅 30 岁。Józef 是 Wine 的 Direct3D 实现的一个主要贡献者,并且是 vkd3d 项目的首席开发人员。我们都非常怀念他的技能和友善。
+
+### 如何在 Ubuntu 和 Linux Mint 上安装 Wine 5.0
+
+> 注意:如果你以前安装过 Wine,你应该将其完全移除,以避免可能的冲突。此外,WineHQ 存储库的密钥最近已被更改,针对你的 Linux 发行版的更详细操作指南,可以参考它的[下载页面][5]。
+
+Wine 5.0 的源码可在它的[官方网站][3]上获得。为了使其工作,你可以阅读更多关于[构建 Wine][6] 的信息。基于 Arch 的用户应该很快就会得到它。
+
+在这里,我将向你展示在 Ubuntu 和其它基于 Ubuntu 的发行版上安装 Wine 5.0 的步骤。请耐心,并按照步骤一步一步安装和使用 Wine。这里涉及几个步骤。
+
+请记住,Wine 安装了太多软件包。你会看到大量的软件包列表,下载大小约为 1.3 GB。
+
+### 在 Ubuntu 上安装 Wine 5.0(不适用于 Linux Mint)
+
+首先,使用这个命令来移除现存的 Wine:
+
+```
+sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
+```
+
+然后确保添加 32 位体系结构支持:
+
+```
+sudo dpkg --add-architecture i386
+```
+
+下载并添加官方 Wine 存储库密钥:
+
+```
+wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
+```
+
+现在,接下来的步骤需要添加存储库。为此,你首先需要[知道你的 Ubuntu 版本][7]。
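+
+如果不确定自己的版本代号,可以用下面的命令快速查看(输出会因系统而异):
+
+```
+lsb_release -a
+```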
+
+对于 **Ubuntu 18.04 和 19.04**,用这个 PPA 添加 FAudio 依赖, **Ubuntu 19.10** 不需要它:
+
+```
+sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
+```
+
+现在使用此命令添加存储库:
+
+```
+sudo apt-add-repository "deb https://dl.winehq.org/wine-builds/ubuntu $(lsb_release -cs) main"
+```
+
+现在你已经添加了正确的存储库,可以使用以下命令安装 Wine 5.0:
+
+```
+sudo apt update && sudo apt install --install-recommends winehq-stable
+```
+
+请注意,尽管[软件包列表中已将 Wine 5 列为稳定版][8],但安装 winehq-stable 时你仍可能会得到 Wine 4.0.3,也许它还没有同步到所有地区的镜像。从今天早上起,我这里已经可以看到 Wine 5.0 了。
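+
+安装完成后,可以用下面的命令确认实际装上的是哪个版本:
+
+```
+wine --version
+```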
+
+### 在 Linux Mint 19.1、19.2 和 19.3 中安装 Wine 5.0
+
+正如一些读者通知我的那样,[apt-add 存储库命令][9]不适用于 Linux Mint 19.x 系列。
+
+这是添加自定义存储库的另一种方法。你必须执行与 Ubuntu 相同的步骤。如删除现存的 Wine 包:
+
+```
+sudo apt remove winehq-stable wine-stable wine1.6 wine-mono wine-geco winetricks
+```
+
+添加 32 位支持:
+
+```
+sudo dpkg --add-architecture i386
+```
+
+然后添加 GPG 密钥:
+
+```
+wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -
+```
+
+添加 FAudio 依赖:
+
+```
+sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport
+```
+
+现在为 Wine 存储库创建一个新条目:
+
+```
+sudo sh -c "echo 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' >> /etc/apt/sources.list.d/winehq.list"
+```
+
+更新软件包列表并安装 Wine:
+
+```
+sudo apt update && sudo apt install --install-recommends winehq-stable
+```
+
+### 总结
+
+你尝试过最新的 Wine 5.0 发布版本吗?如果是的话,在运行中你看到什么改进?
+
+在下面的评论区域,让我知道你对新的发布版本的看法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/wine-5-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/use-windows-applications-linux/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/wine_5.png?ssl=1
+[3]: https://www.winehq.org/news/2020012101
+[4]: https://www.winehq.org/announce/5.0
+[5]: https://wiki.winehq.org/Download
+[6]: https://wiki.winehq.org/Building_Wine
+[7]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
+[8]: https://dl.winehq.org/wine-builds/ubuntu/dists/bionic/main/binary-amd64/
+[9]: https://itsfoss.com/add-apt-repository-command-not-found/
diff --git a/published/20200102 Data streaming and functional programming in Java.md b/published/20200102 Data streaming and functional programming in Java.md
new file mode 100644
index 0000000000..cf3b1bd11d
--- /dev/null
+++ b/published/20200102 Data streaming and functional programming in Java.md
@@ -0,0 +1,456 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11857-1.html)
+[#]: subject: (Data streaming and functional programming in Java)
+[#]: via: (https://opensource.com/article/20/1/javastream)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Java 中的数据流和函数式编程
+======
+
+> 学习如何使用 Java 8 中的流 API 和函数式编程结构。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/06/002505flazlb4cg4aavvb4.jpg)
+
+当 Java SE 8(又名核心 Java 8)在 2014 年被推出时,它引入了一些更改,从根本上影响了用它进行的编程。这些更改中有两个紧密相连的部分:流 API 和函数式编程构造。本文使用代码示例,从基础到高级特性,介绍每个部分并说明它们之间的相互作用。
+
+### 基础特性
+
+流 API 是在数据序列中迭代元素的简洁而高级的方法。包 `java.util.stream` 和 `java.util.function` 包含了用于流 API 和相关函数式编程构造的新库。当然,代码示例胜过千言万语。
+
+下面的代码段用大约 2,000 个随机整数值填充了一个 `List`:
+
+```
+Random rand = new Random();
+List<Integer> list = new ArrayList<>(); // 空 list
+for (int i = 0; i < 2048; i++) list.add(rand.nextInt()); // 填充它
+```
+
+另外用一个 `for` 循环可用于遍历填充列表,以将偶数值收集到另一个列表中。
+
+流 API 提供了一种更简洁的方法来执行此操作:
+
+```
+List<Integer> evens = list
+ .stream() // 流化 list
+ .filter(n -> (n & 0x1) == 0) // 过滤出奇数值
+ .collect(Collectors.toList()); // 收集偶数值
+```
+
+这个例子有三个来自流 API 的函数:
+
+- `stream` 函数可以将**集合**转换为流,而流是一个每次可访问一个值的传送带。流化是惰性的(因此也是高效的),因为值是根据需要产生的,而不是一次性产生的。
+- `filter` 函数确定哪些流的值(如果有的话)通过了处理管道中的下一个阶段,即 `collect` 阶段。`filter` 函数是 高阶的higher-order,因为它的参数是一个函数 —— 在这个例子中是一个 lambda 表达式,它是一个未命名的函数,并且是 Java 新的函数式编程结构的核心。
+
+lambda 语法与传统的 Java 完全不同:
+
+```
+n -> (n & 0x1) == 0
+```
+
+箭头(一个减号后面紧跟着一个大于号)将左边的参数列表与右边的函数体分隔开。参数 `n` 虽未明确类型,但也可以明确。在任何情况下,编译器都会发现 `n` 是个 `Integer`。如果有多个参数,这些参数将被括在括号中,并用逗号分隔。
+
+在本例中,函数体检查一个整数的最低位(最右)是否为零,这用来表示偶数。过滤器应返回一个布尔值。尽管可以,但该函数的主体中没有显式的 `return`。如果主体没有显式的 `return`,则主体的最后一个表达式即是返回值。在这个例子中,主体按照 lambda 编程的思想编写,由一个简单的布尔表达式 `(n & 0x1) == 0` 组成。
+
+- `collect` 函数将偶数值收集到引用为 `evens` 的列表中。如下例所示,`collect` 函数是线程安全的,因此,即使在多个线程之间共享了过滤操作,该函数也可以正常工作。
+
+### 方便的功能和轻松实现多线程
+
+在生产环境中,数据流的源可能是文件或网络连接。为了学习流 API, Java 提供了诸如 `IntStream` 这样的类型,它可以用各种类型的元素生成流。这里有一个 `IntStream` 的例子:
+
+```
+IntStream // 整型流
+ .range(1, 2048) // 生成此范围内的整型流
+ .parallel() // 为多个线程分区数据
+ .filter(i -> ((i & 0x1) > 0)) // 奇偶校验 - 只允许奇数通过
+ .forEach(System.out::println); // 打印每个值
+```
+
+`IntStream` 类型包括一个 `range` 函数,该函数在指定的范围内生成一个整数值流,在本例中,以 1 为增量,从 1 递增到 2048。`parallel` 函数自动划分该工作到多个线程中,在各个线程中进行过滤和打印。(线程数通常与主机系统上的 CPU 数量匹配。)函数 `forEach` 的参数是一个*方法引用*,在本例中是对 `println` 方法的引用,该方法封装在 `System.out` 中,而 `System.out` 的类型是 `PrintStream`。方法引用和构造器引用的语法将在稍后讨论。
+
+由于具有多线程,因此整数值整体上以任意顺序打印,但在给定线程中是按顺序打印的。例如,如果线程 T1 打印 409 和 411,那么 T1 将按照顺序 409-411 打印,但是其它某个线程可能会预先打印 2045。`parallel` 调用后面的线程是并发执行的,因此它们的输出顺序是不确定的。
+
+### map/reduce 模式
+
+*map/reduce* 模式在处理大型数据集方面变得很流行。一个 map/reduce 宏操作由两个微操作构成。首先,将数据分散(映射mapped)到各个工作程序中,然后将单独的结果收集在一起 —— 也可能收集统计起来成为一个值,即归约reduction。归约可以采用不同的形式,如以下示例所示。
+
+下面 `Number` 类的实例用 `EVEN` 或 `ODD` 表示有奇偶校验的整数值:
+
+```
+public class Number {
+ enum Parity { EVEN, ODD }
+ private int value;
+ public Number(int n) { setValue(n); }
+ public void setValue(int value) { this.value = value; }
+ public int getValue() { return this.value; }
+ public Parity getParity() {
+ return ((value & 0x1) == 0) ? Parity.EVEN : Parity.ODD;
+ }
+ public void dump() {
+ System.out.format("Value: %2d (parity: %s)\n", getValue(),
+ (getParity() == Parity.ODD ? "odd" : "even"));
+ }
+}
+```
+
+下面的代码演示了用 `Number` 流进行 map/reduce 的情形,从而表明流 API 不仅可以处理 `int` 和 `float` 等基本类型,还可以处理程序员自定义的类类型。
+
+在下面的代码段中,使用了 `parallelStream` 而不是 `stream` 函数对随机整数值列表进行流化处理。与前面介绍的 `parallel` 函数一样,`parallelStream` 变体也可以自动执行多线程。
+
+```
+final int howMany = 200;
+Random r = new Random();
+Number[] nums = new Number[howMany];
+for (int i = 0; i < howMany; i++) nums[i] = new Number(r.nextInt(100));
+List<Number> listOfNums = Arrays.asList(nums); // 将数组转化为 list
+
+Integer sum4All = listOfNums
+ .parallelStream() // 自动执行多线程
+ .mapToInt(Number::getValue) // 使用方法引用,而不是 lambda
+ .sum(); // 将流值计算出和值
+System.out.println("The sum of the randomly generated values is: " + sum4All);
+```
+
+高阶的 `mapToInt` 函数可以接受一个 lambda 作为参数,但在本例中,它接受一个方法引用,即 `Number::getValue`。`getValue` 方法不需要参数,它返回给定的 `Number` 实例的 `int` 值。语法并不复杂:类名 `Number` 后跟一个双冒号和方法名。回想一下先前的例子 `System.out::println`,它在 `System` 类中的 `static` 属性 `out` 后面有一个双冒号。
+
+方法引用 `Number::getValue` 可以用下面的 lambda 表达式替换。参数 `n` 是流中的 `Number` 实例中的之一:
+
+```
+mapToInt(n -> n.getValue())
+```
+
+通常,lambda 表达式和方法引用是可互换的:如果像 `mapToInt` 这样的高阶函数可以采用一种形式作为参数,那么这个函数也可以采用另一种形式。这两个函数式编程结构具有相同的目的 —— 对作为参数传入的数据执行一些自定义操作。在两者之间进行选择通常是为了方便。例如,lambda 可以在没有封装类的情况下编写,而方法则不能。我的习惯是使用 lambda,除非已经有了适当的封装方法。
+
+当前示例末尾的 `sum` 函数通过结合来自 `parallelStream` 线程的部分和,以线程安全的方式进行归约。但是,程序员有责任确保在 `parallelStream` 调用引发的多线程过程中,程序员自己的函数调用(在本例中为 `getValue`)是线程安全的。
+
+最后一点值得强调。lambda 语法鼓励编写纯函数pure function,即函数的返回值仅取决于传入的参数(如果有);纯函数没有副作用,例如更新一个类中的 `static` 字段。因此,纯函数是线程安全的,并且如果传递给高阶函数的函数参数(例如 `filter` 和 `map` )是纯函数,则流 API 效果最佳。
+
+对于更细粒度的控制,有另一个流 API 函数,名为 `reduce`,可用于对 `Number` 流中的值求和:
+
+```
+Integer sum4AllHarder = listOfNums
+ .parallelStream() // 多线程
+ .map(Number::getValue) // 每个 Number 的值
+ .reduce(0, (sofar, next) -> sofar + next); // 求和
+```
+
+此版本的 `reduce` 函数带有两个参数,第二个参数是一个函数:
+
+- 第一个参数(在这种情况下为零)是*特征*值,该值用作求和操作的初始值,并且在求和过程中流结束时用作默认值。
+- 第二个参数是*累加器*,在本例中,这个 lambda 表达式有两个参数:第一个参数(`sofar`)是正在运行的和,第二个参数(`next`)是来自流的下一个值。运行的和以及下一个值相加,然后更新累加器。请记住,由于开始时调用了 `parallelStream`,因此 `map` 和 `reduce` 函数现在都在多线程上下文中执行。
+
+在到目前为止的示例中,流值被收集,然后被规约,但是,通常情况下,流 API 中的 `Collectors` 可以累积值,而不需要将它们规约到单个值。正如下一个代码段所示,收集活动可以生成任意丰富的数据结构。该示例使用与前面示例相同的 `listOfNums`:
+
+```
+Map<Number.Parity, List<Number>> numMap = listOfNums
+ .parallelStream()
+ .collect(Collectors.groupingBy(Number::getParity));
+
+List<Number> evens = numMap.get(Number.Parity.EVEN);
+List<Number> odds = numMap.get(Number.Parity.ODD);
+```
+
+第一行中的 `numMap` 指的是一个 `Map`,它的键是一个 `Number` 奇偶校验位(`ODD` 或 `EVEN`),其值是一个具有指定奇偶校验位值的 `Number` 实例的 `List`。同样,通过 `parallelStream` 调用进行多线程处理,然后 `collect` 调用(以线程安全的方式)将部分结果组装到 `numMap` 引用的 `Map` 中。然后,在 `numMap` 上调用 `get` 方法两次,一次获取 `evens`,第二次获取 `odds`。
+
+实用函数 `dumpList` 再次使用来自流 API 的高阶 `forEach` 函数:
+
+```
+private void dumpList(String msg, List<Number> list) {
+ System.out.println("\n" + msg);
+ list.stream().forEach(n -> n.dump()); // 或者使用 forEach(Number::dump)
+}
+```
+
+这是示例运行中程序输出的一部分:
+
+```
+The sum of the randomly generated values is: 3322
+The sum again, using a different method: 3322
+
+Evens:
+
+Value: 72 (parity: even)
+Value: 54 (parity: even)
+...
+Value: 92 (parity: even)
+
+Odds:
+
+Value: 35 (parity: odd)
+Value: 37 (parity: odd)
+...
+Value: 41 (parity: odd)
+```
+
+### 用于代码简化的函数式结构
+
+函数式结构(如方法引用和 lambda 表达式)非常适合在流 API 中使用。这些构造代表了 Java 中对高阶函数的主要简化。即使在糟糕的过去,Java 也通过 `Method` 和 `Constructor` 类型在技术上支持高阶函数,这些类型的实例可以作为参数传递给其它函数。由于其复杂性,这些类型在生产级 Java 中很少使用。例如,调用 `Method` 需要对象引用(如果方法是非**静态**的)或至少一个类标识符(如果方法是**静态**的)。然后,被调用的 `Method` 的参数作为**对象**实例传递给它,如果没有发生多态(那会出现另一种复杂性!),则可能需要显式向下转换。相比之下,lambda 和方法引用很容易作为参数传递给其它函数。
+
+但是,新的函数式结构在流 API 之外具有其它用途。考虑一个 Java GUI 程序,该程序带有一个供用户按下的按钮,例如,按下以获取当前时间。按钮按下的事件处理程序可能编写如下:
+
+```
+JButton updateCurrentTime = new JButton("Update current time");
+updateCurrentTime.addActionListener(new ActionListener() {
+ @Override
+ public void actionPerformed(ActionEvent e) {
+ currentTime.setText(new Date().toString());
+ }
+});
+```
+
+这个简短的代码段很难解释。关注第二行,其中方法 `addActionListener` 的参数开始如下:
+
+```
+new ActionListener() {
+```
+
+这似乎是错误的,因为 `ActionListener` 是一个**抽象**接口,而**抽象**类型不能通过调用 `new` 实例化。但是,事实证明,还有其它一些实例被实例化了:一个实现此接口的未命名内部类。如果上面的代码封装在名为 `OldJava` 的类中,则该未命名的内部类将被编译为 `OldJava$1.class`。`actionPerformed` 方法在这个未命名的内部类中被重写。
+
+现在考虑使用新的函数式结构进行这个令人耳目一新的更改:
+
+```
+updateCurrentTime.addActionListener(e -> currentTime.setText(new Date().toString()));
+```
+
+lambda 表达式中的参数 `e` 是一个 `ActionEvent` 实例,而 lambda 的主体是对按钮上的 `setText` 的简单调用。
+
+### 函数式接口和函数组合
+
+到目前为止,使用的 lambda 已经写好了。但是,为了方便起见,我们可以像引用封装方法一样引用 lambda 表达式。以下一系列简短示例说明了这一点。
+
+考虑以下接口定义:
+
+```
+@FunctionalInterface // 可选,通常省略
+interface BinaryIntOp {
+ abstract int compute(int arg1, int arg2); // abstract 声明可以被删除
+}
+```
+
+注释 `@FunctionalInterface` 适用于声明*唯一*抽象方法的任何接口;在本例中,这个抽象接口是 `compute`。一些标准接口,(例如具有唯一声明方法 `run` 的 `Runnable` 接口)同样符合这个要求。在此示例中,`compute` 是已声明的方法。该接口可用作引用声明中的目标类型:
+
+```
+BinaryIntOp div = (arg1, arg2) -> arg1 / arg2;
+div.compute(12, 3); // 4
+```
+
+包 `java.util.function` 提供各种函数式接口。以下是一些示例。
+
+下面的代码段介绍了参数化的 `Predicate` 函数式接口。在此示例中,以 `String` 为类型参数的 `Predicate<String>` 类型可以引用具有 `String` 参数的 lambda 表达式,或诸如 `isEmpty` 之类的 `String` 方法。通常情况下,`Predicate` 是一个返回布尔值的函数。
+
+```
+Predicate<String> pred = String::isEmpty; // String 方法的 predicate 声明
+String[] strings = {"one", "two", "", "three", "four"};
+Arrays.asList(strings)
+ .stream()
+ .filter(pred) // 过滤掉非空字符串
+ .forEach(System.out::println); // 只打印空字符串
+```
+
+在字符串长度为零的情况下,`isEmpty` Predicate 判定结果为 `true`。 因此,只有空字符串才能进入管道的 `forEach` 阶段。
+
+下一段代码将演示如何将简单的 lambda 或方法引用组合成更丰富的 lambda 或方法引用。考虑这一系列对 `IntUnaryOperator` 类型的引用的赋值,它接受一个整型参数并返回一个整型值:
+
+```
+IntUnaryOperator doubled = n -> n * 2;
+IntUnaryOperator tripled = n -> n * 3;
+IntUnaryOperator squared = n -> n * n;
+```
+
+`IntUnaryOperator` 是一个 `FunctionalInterface`,其唯一声明的方法为 `applyAsInt`。现在可以单独使用或以各种组合形式使用这三个引用 `doubled`、`tripled` 和 `squared`:
+
+```
+int arg = 5;
+doubled.applyAsInt(arg); // 10
+tripled.applyAsInt(arg); // 15
+squared.applyAsInt(arg); // 25
+```
+
+以下是一些函数组合的样例:
+
+```
+int arg = 5;
+doubled.compose(squared).applyAsInt(arg); // 5 求 2 次方后乘 2:50
+tripled.compose(doubled).applyAsInt(arg); // 5 乘 2 后再乘 3:30
+doubled.andThen(squared).applyAsInt(arg); // 5 乘 2 后求 2 次方:100
+squared.andThen(tripled).applyAsInt(arg); // 5 求 2 次方后乘 3:75
+```
+
+函数组合可以直接使用 lambda 表达式实现,但是引用使代码更简洁。
+
+### 构造器引用
+
+构造器引用是另一种函数式编程构造,而这些引用在比 lambda 和方法引用更微妙的上下文中非常有用。再一次重申,代码示例似乎是最好的解释方式。
+
+考虑这个 [POJO][13] 类:
+
+```
+public class BedRocker { // 基岩的居民
+ private String name;
+ public BedRocker(String name) { this.name = name; }
+ public String getName() { return this.name; }
+ public void dump() { System.out.println(getName()); }
+}
+```
+
+该类只有一个构造函数,它需要一个 `String` 参数。给定一个名字数组,目标是生成一个 `BedRocker` 元素数组,每个名字代表一个元素。下面是使用了函数式结构的代码段:
+
+```
+String[] names = {"Fred", "Wilma", "Peebles", "Dino", "Baby Puss"};
+
+Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
+BedRocker[] arrayBR = bedrockers.toArray(BedRocker[]::new);
+
+Arrays.asList(arrayBR).stream().forEach(BedRocker::dump);
+```
+
+在较高的层次上,这个代码段将名字转换为 `BedRocker` 数组元素。具体来说,代码如下所示。`Stream` 接口(在包 `java.util.stream` 中)可以被参数化,而在本例中,生成了一个名为 `bedrockers` 的 `BedRocker` 流。
+
+`Arrays.asList` 实用程序再次用于流化一个数组 `names`,然后将流的每一项传递给 `map` 函数,该函数的参数现在是构造器引用 `BedRocker::new`。这个构造器引用通过在每次调用时生成和初始化一个 `BedRocker` 实例来充当一个对象工厂。在第二行执行之后,名为 `bedrockers` 的流由五项 `BedRocker` 组成。
+
+这个例子可以通过关注高阶 `map` 函数来进一步阐明。在通常情况下,一个映射将一个类型的值(例如,一个 `int`)转换为另一个*相同*类型的值(例如,一个整数的后继):
+
+```
+map(n -> n + 1) // 将 n 映射到其后继
+```
+
+然而,在 `BedRocker` 这个例子中,转换更加戏剧化,因为一个类型的值(代表一个名字的 `String`)被映射到一个*不同*类型的值,在这个例子中,就是一个 `BedRocker` 实例,这个字符串就是它的名字。转换是通过一个构造器调用来完成的,它是由构造器引用来实现的:
+
+```
+map(BedRocker::new) // 将 String 映射到 BedRocker
+```
+
+传递给构造器的值是 `names` 数组中的其中一项。
+
+此代码示例的第二行还演示了一个你目前已经非常熟悉的转换:先将数组先转换成 `List`,然后再转换成 `Stream`:
+
+```
+Stream<BedRocker> bedrockers = Arrays.asList(names).stream().map(BedRocker::new);
+```
+
+第三行则是另一种方式 —— 流 `bedrockers` 通过使用*数组*构造器引用 `BedRocker[]::new` 调用 `toArray` 方法:
+
+```
+BedRocker[] arrayBR = bedrockers.toArray(BedRocker[]::new);
+```
+
+该构造器引用不会创建单个 `BedRocker` 实例,而是创建这些实例的整个数组:该构造器引用现在为 `BedRocker[]:new`,而不是 `BedRocker::new`。为了进行确认,将 `arrayBR` 转换为 `List`,再次对其进行流式处理,以便可以使用 `forEach` 来打印 `BedRocker` 的名字。
+
+```
+Fred
+Wilma
+Peebles
+Dino
+Baby Puss
+```
+
+该示例对数据结构的微妙转换仅用几行代码即可完成,从而突出了可以将 lambda,方法引用或构造器引用作为参数的各种高阶函数的功能。
+
+### 柯里化Currying
+
+*柯里化*函数是指减少函数执行任何工作所需的显式参数的数量(通常减少到一个)。(该术语是为了纪念逻辑学家 Haskell Curry。)一般来说,函数的参数越少,调用起来就越容易,也更健壮。(回想一下一些需要半打左右参数的噩梦般的函数!)因此,应将柯里化视为简化函数调用的一种尝试。`java.util.function` 包中的接口类型适合于柯里化,如以下示例所示。
+
+被引用的 `IntBinaryOperator` 接口类型表示一个接受两个整型参数并返回一个整型值的函数:
+
+```
+IntBinaryOperator mult2 = (n1, n2) -> n1 * n2;
+mult2.applyAsInt(10, 20); // 200
+mult2.applyAsInt(10, 30); // 300
+```
+
+引用 `mult2` 强调了需要两个显式参数,在本例中是 10 和 20。
+
+前面介绍的 `IntUnaryOperator` 比 `IntBinaryOperator` 简单,因为前者只需要一个参数,而后者则需要两个参数。两者均返回整数值。因此,目标是将名为 `mult2` 的两个参数 `IntBinraryOperator` 柯里化成一个单一的 `IntUnaryOperator` 版本 `curriedMult2`。
+
+考虑 `IntFunction<R>` 类型。此类型的函数接受一个整型参数,并返回类型为 `R` 的结果,该结果可以是另一个函数 —— 更准确地说,是一个 `IntUnaryOperator`。让一个 lambda 返回另一个 lambda 很简单:
+
+```
+arg1 -> (arg2 -> arg1 * arg2) // 括号可以省略
+```
+
+完整的 lambda 以 `arg1` 开头,而该 lambda 的主体以及返回的值是另一个以 `arg2` 开头的 lambda。返回的 lambda 仅接受一个参数(`arg2`),但返回了两个数字的乘积(`arg1` 和 `arg2`)。下面的概述,再加上代码,应该可以更好地进行说明。
+
+以下是如何柯里化 `mult2` 的概述:
+
+- 编写一个类型为 `IntFunction<IntUnaryOperator>` 的 lambda,并以整型值 10 调用它。返回的 `IntUnaryOperator` 缓存了值 10,因此变成了已柯里化版本的 `mult2`,在本例中为 `curriedMult2`。
+- 然后使用单个显式参数(例如 20)调用 `curriedMult2` 函数,该参数与缓存的参数(在本例中为 10)相乘,生成返回的乘积。
+
+这是代码的详细信息:
+
+```
+// 创建一个接受一个参数 n1 并返回一个单参数 n2 -> n1 * n2 的函数,该函数返回一个(n1 * n2 乘积的)整型数。
+IntFunction<IntUnaryOperator> curriedMult2Maker = n1 -> (n2 -> n1 * n2);
+```
+
+调用 `curriedMult2Maker` 生成所需的 `IntUnaryOperator` 函数:
+
+```
+// 使用 curriedMult2Maker 获取已柯里化版本的 mult2。
+// 参数 10 是上面的 lambda 的 n1。
+IntUnaryOperator curriedMult2 = curriedMult2Maker.apply(10);
+```
+
+值 `10` 现在缓存在 `curriedMult2` 函数中,以便 `curriedMult2` 调用中的显式整型参数乘以 10:
+
+```
+curriedMult2.applyAsInt(20); // 200 = 10 * 20
+curriedMult2.applyAsInt(80); // 800 = 10 * 80
+```
+
+缓存的值可以随意更改:
+
+```
+curriedMult2 = curriedMult2Maker.apply(50); // 缓存 50
+curriedMult2.applyAsInt(101); // 5050 = 101 * 50
+```
+
+当然,可以通过这种方式创建多个已柯里化版本的 `mult2`,每个版本都有一个 `IntUnaryOperator`。
+
+柯里化充分利用了 lambda 的强大功能:可以很容易地编写 lambda 表达式来返回需要的任何类型的值,包括另一个 lambda。
+
+### 总结
+
+Java 仍然是基于类的面向对象的编程语言。但是,借助流 API 及其支持的函数式构造,Java 向函数式语言(例如 Lisp)迈出了决定性的(同时也是受欢迎的)一步。结果是 Java 更适合处理现代编程中常见的海量数据流。在函数式方向上的这一步还使以在前面的代码示例中突出显示的管道的方式编写清晰简洁的 Java 代码更加容易:
+
+```
+dataStream
+ .parallelStream() // 多线程以提高效率
+ .filter(...) // 阶段 1
+ .map(...) // 阶段 2
+ .filter(...) // 阶段 3
+ ...
+ .collect(...); // 或者,也可以进行归约:阶段 N
+```
+
+自动多线程,以 `parallel` 和 `parallelStream` 调用为例,建立在 Java 的 fork/join 框架上,该框架支持 任务窃取task stealing 以提高效率。假设 `parallelStream` 调用后面的线程池由八个线程组成,并且 `dataStream` 被八种方式分区。某个线程(例如,T1)可能比另一个线程(例如,T7)工作更快,这意味着应该将 T7 的某些任务移到 T1 的工作队列中。这会在运行时自动发生。
+
+在这个简单的多线程世界中,程序员的主要职责是编写线程安全函数,这些函数作为参数传递给在流 API 中占主导地位的高阶函数。尤其是 lambda 鼓励编写纯函数(因此是线程安全的)函数。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/javastream
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen )
+[2]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
+[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+list
+[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
+[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+number
+[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
+[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
+[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
+[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+jbutton
+[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionlistener
+[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent
+[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+date
+[13]: https://en.wikipedia.org/wiki/Plain_old_Java_object
diff --git a/published/20200103 Add scorekeeping to your Python game.md b/published/20200103 Add scorekeeping to your Python game.md
new file mode 100644
index 0000000000..62fa3fe600
--- /dev/null
+++ b/published/20200103 Add scorekeeping to your Python game.md
@@ -0,0 +1,626 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11839-1.html)
+[#]: subject: (Add scorekeeping to your Python game)
+[#]: via: (https://opensource.com/article/20/1/add-scorekeeping-your-python-game)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+添加计分到你的 Python 游戏
+======
+
+> 在本系列的第十一篇有关使用 Python Pygame 模块进行编程的文章中,显示玩家获得战利品或受到伤害时的得分。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/01/154838led0y08y2aqetz1q.jpg)
+
+这是仍在进行中的关于使用 [Pygame][3] 模块在 [Python 3][2] 中创建电脑游戏的系列文章的第十一部分。先前的文章是:
+
+ * [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
+ * [使用 Python 和 Pygame 模块构建一个游戏框架][5]
+ * [如何在你的 Python 游戏中添加一个玩家][6]
+ * [用 Pygame 使你的游戏角色移动起来][7]
+ * [如何向你的 Python 游戏中添加一个敌人][8]
+ * [在 Pygame 游戏中放置平台][19]
+ * [在你的 Python 游戏中模拟引力][9]
+ * [为你的 Python 平台类游戏添加跳跃功能][10]
+ * [使你的 Python 游戏玩家能够向前和向后跑][11]
+ * [在你的 Python 平台类游戏中放一些奖励][12]
+
+如果你已经跟随这个系列一段时间,那么你已经学习了使用 Python 创建一个视频游戏所需的所有基本语法和模式。然而,你还缺少一个至关重要的组成部分。它不仅仅对用 Python 编写游戏重要;无论你探究哪个计算机分支,你都必须精通它:作为一个程序员,通过阅读一种语言或库的文档来学习新的技巧。
+
+幸运的是,你正在阅读本文的事实表明你熟悉文档。为了使你的平台类游戏更加美观,在这篇文章中,你将在游戏屏幕上添加得分和生命值显示。不过,教你如何找到一个库的功能以及如何使用这些新的功能的这节课程并没有多神秘。
+
+### 在 Pygame 中显示得分
+
+现在,既然你有了可以被玩家收集的奖励,那就有充分的理由来记录分数,以便你的玩家看到他们收集了多少奖励。你也可以跟踪玩家的生命值,以便当他们被敌人击中时会有相应结果。
+
+你已经有了跟踪分数和生命值的变量,但是这一切都发生在后台。这篇文章教你在游戏期间在游戏屏幕上以你选择的一种字体来显示这些统计数字。
+
+### 阅读文档
+
+大多数 Python 模块都有文档,即使那些没有文档的模块,也能通过 Python 的帮助功能来进行最小的文档化。[Pygame 的主页面][13] 链接了它的文档。不过,Pygame 是一个带有很多文档的大模块,并且它的文档不像在 Opensource.com 上的文章一样,以同样易理解的(和友好的、易解释的、有用的)叙述风格来撰写的。它们是技术文档,并且列出在模块中可用的每个类和函数,各自要求的输入类型等等。如果你不适应参考代码组件描述,这可能会令人不知所措。
+
+在烦恼于库的文档前,第一件要做的事,就是来想想你正在尝试达到的目标。在这种情况下,你想在屏幕上显示玩家的得分和生命值。
+
+在你确定你需要的结果后,想想它需要什么的组件。你可以从变量和函数的方面考虑这一点,或者,如果你还没有自然地想到这一点,你可以进行一般性思考。你可能意识到需要一些文本来显示一个分数,你希望 Pygame 在屏幕上绘制这些文本。如果你仔细思考,你可能会意识到它与在屏幕上渲染一个玩家、奖励或一个平台并多么大的不同。
+
+从技术上讲,你*可以*使用数字图形,并让 Pygame 显示这些数字图形。它不是达到你目标的最容易的方法,但是如果它是你唯一知道的方法,那么它是一个有效的方法。不过,如果你参考 Pygame 的文档,你会看到列出的模块之一是 `font`,它是 Pygame 让在屏幕上打印文本变得像打字一样容易的方法。
+
+### 解密技术文档
+
+`font` 文档页面以 `pygame.font.init()` 开始,它列出了用于初始化字体模块的函数。它由 `pygame.init()` 自动地调用,你已经在代码中调用了它。再强调一次,从技术上讲,你已经到达一个*足够好*的点。虽然你尚不知道*如何做*,你知道你*能够*使用 `pygame.font` 函数来在屏幕上打印文本。
+
+然而,如果你阅读更多一些,你会找到这里还有一种更好的方法来打印字体。`pygame.freetype` 模块在文档中的描述方式如下:
+
+> `pygame.freetype` 模块是 `pygame.font` 模块的一个替代品,用于加载和渲染字体。它有原模块的所有功能,外加很多新的功能。
+
+在 `pygame.freetype` 文档页面的下方,有一些示例代码:
+
+```
+import pygame
+import pygame.freetype
+```
+
+你的代码应该已经导入了 Pygame,不过,请修改你的 `import` 语句以包含 Freetype 模块:
+
+```
+import pygame
+import sys
+import os
+import pygame.freetype
+```
+
+### 在 Pygame 中使用字体
+
+从 `font` 模块的描述中可以看出,显然 Pygame 会使用一种字体(不管是你提供的字体,还是 Pygame 内置的默认字体)在屏幕上渲染文本。滚动浏览 `pygame.freetype` 文档来找到 `pygame.freetype.Font` 函数:
+
+```
+pygame.freetype.Font
+从支持的字体文件中创建一个新的字体实例。
+
+Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
+
+pygame.freetype.Font.name
+ 符合规则的字体名称。
+
+pygame.freetype.Font.path
+ 字体文件路径。
+
+pygame.freetype.Font.size
+ 在渲染中使用的默认点大小
+```
+
+这描述了如何在 Pygame 中构建一个字体“对象”。把屏幕上的一个简单对象视为一些代码属性的组合对你来说可能不太自然,但是这与你构建英雄和敌人精灵的方式非常类似。你需要一个字体文件,而不是一个图像文件。在你有一个字体文件后,你可以在你的代码中使用 `pygame.freetype.Font` 函数来创建一个字体对象,然后使用该对象来在屏幕上渲染文本。
+
+并不是世界上每个人的电脑上都有完全一样的字体,因此,将你选择的字体与你的游戏捆绑在一起是很重要的。要捆绑字体,首先在你的游戏文件夹中创建一个新的目录,放在你为图像而创建的目录旁边,并称其为 `fonts`。
+
+即使你的计算机操作系统随附了几种字体,但是将这些字体给予其他人是非法的。这看起来很奇怪,但法律就是这样运作的。如果想与你的游戏一起随附一种字体,你必需找到一种开源或知识共享的字体,以允许你随游戏一起提供该字体。
+
+专门提供自由和合法字体的网站包括:
+
+ * [Font Library][14]
+ * [Font Squirrel][15]
+ * [League of Moveable Type][16]
+
+当你找到你喜欢的字体后,下载下来。解压缩 ZIP 或 [TAR][17] 文件,并移动 `.ttf` 或 `.otf` 文件到你的项目目录下的 `fonts` 文件夹中。
+
+你没有安装字体到你的计算机上。你只是放置字体到你游戏的 `fonts` 文件夹中,以便 Pygame 可以使用它。如果你想,你*可以*在你的计算机上安装该字体,但是没有必要。重要的是将字体放在你的游戏目录中,这样 Pygame 可以“描绘”字体到屏幕上。
+
+如果字体文件的名称复杂且带有空格或特殊字符,只需要重新命名它即可。文件名称是完全任意的,并且对你来说,文件名称越简单,越容易将其键入你的代码中。
+
+现在告诉 Pygame 你的字体。从文档中你知道,当你至少提供了字体文件路径给 `pygame.freetype.Font` 时(文档明确指出所有其余属性都是可选的),你将在返回中获得一个字体对象:
+
+```
+Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font
+```
+
+创建一个称为 `myfont` 的新变量来充当你在游戏中使用的字体,并将 `Font` 函数的结果放置到这个变量中。这个示例中使用 `amazdoom.ttf` 字体,但是你可以使用任何你想使用的字体。在你的设置部分放置这些代码:
+
+```
+font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
+font_size = tx
+myfont = pygame.freetype.Font(font_path, font_size)
+```
+
+### 在 Pygame 中显示文本
+
+现在你已经创建一个字体对象,你需要一个函数来绘制你想绘制到屏幕上的文本。这和你在你的游戏中绘制背景和平台是相同的原理。
+
+首先,创建一个函数,并使用 `myfont` 对象来创建一些文本,设置颜色为某些 RGB 值。这必须是一个全局函数;它不属于任何具体的类:
+
+```
+def stats(score,health):
+ myfont.render_to(world, (4, 4), "Score:"+str(score), WHITE, None, size=64)
+ myfont.render_to(world, (4, 72), "Health:"+str(health), WHITE, None, size=64)
+```
+
+当然,你此刻已经知道,如果它不在主循环中,你的游戏将不会发生任何事,所以在文件的底部添加一个对你的 `stats` 函数的调用:
+
+```
+ for e in enemy_list:
+ e.move()
+ stats(player.score,player.health) # draw text
+ pygame.display.flip()
+```
+
+尝试你的游戏。
+
+当玩家收集奖励品时,得分会上升。当玩家被敌人击中时,生命值下降。成功!
+
+![Keeping score in Pygame][18]
+
+不过,这里有一个问题。当一个玩家被敌人击中时,健康度会*一路*下降,这是不公平的。你刚刚发现了一个非致命的错误。非致命的错误是应用程序中的一些小问题,(通常)不会阻止应用程序启动,甚至不会导致其停止工作,但是它们要么没有意义,要么会惹恼用户。下面是解决这个问题的方法。
+
+### 修复生命值计数
+
+当前生命值系统的问题是,只要敌人接触着玩家,Pygame 时钟每滴答一次,健康度就会减少一次。这意味着一个缓慢移动的敌人可能在一次遭遇中就把玩家的健康度降低到 -200,这不公平。当然,你可以给你的玩家设置一个 10000 的初始健康度而不用担心这个问题;这可以工作,而且可能没有人会注意到。但是这里有一个更好的方法。
+
+当前,你的代码只检测玩家和敌人何时发生碰撞。生命值问题的修复方法是检测*两个*独立的事件:玩家和敌人什么时候发生碰撞,以及在碰撞之后,它们什么时候*停止*碰撞。
+
+首先,在你的玩家类中,创建一个变量来代表玩家和敌人碰撞在一起:
+
+```
+ self.frame = 0
+ self.health = 10
+ self.damage = 0
+```
+
+在你的 `Player` 类的 `update` 函数中,*移除*这块代码块:
+
+```
+ for enemy in enemy_hit_list:
+ self.health -= 1
+ #print(self.health)
+```
+
+并且在它的位置,只要玩家当前没有被击中,检查碰撞:
+
+```
+ if self.damage == 0:
+ for enemy in enemy_hit_list:
+ if not self.rect.contains(enemy):
+ self.damage = self.rect.colliderect(enemy)
+```
+
+你可能会在你删除的语句块和你刚刚添加的语句块之间看到相似之处。它们都在做相同的工作,但是新的代码更复杂。最重要的是,只有当玩家*当前*没有处于被击中状态时,新的代码才会运行。这意味着,当玩家和敌人碰撞时,这段代码只运行一次,而不是像以前那样在碰撞持续期间不停地运行。
+
+新的代码使用了两个新的 Pygame 函数。`self.rect.contains` 函数检查敌人当前是否完全处于玩家的边界框内;而 `self.rect.colliderect` 会在二者发生碰撞时,把你新增的 `self.damage` 变量设置为 1,无论碰撞条件成立多少次,它都只是 1。
+
+现在,即使被一个敌人击中 3 秒,对 Pygame 来说仍然看作一次击中。
+
+我是通过通读 Pygame 的文档发现这些函数的。你没有必要一次读完全部文档,也没有必要阅读每个函数的每个单词。不过,花时间阅读你正在使用的新库或新模块的文档是很重要的;否则,你极有可能是在重新发明轮子。不要花费一个下午的时间,把一个解决方案硬拼进某个问题里,而这个问题其实早已被你正在使用的框架解决了。阅读文档,知悉函数,并从别人的工作中获益!
+
+最后,添加另一个代码块来检测玩家和敌人什么时候不再互相接触。只有到那时,才从玩家身上减去一点生命值。
+
+```
+ if self.damage == 1:
+ idx = self.rect.collidelist(enemy_hit_list)
+ if idx == -1:
+ self.damage = 0 # set damage back to 0
+ self.health -= 1 # subtract 1 hp
+```
+
+注意,*只有*当玩家处于被击中状态时,这段新的代码才会被触发。这意味着,当你的玩家在游戏世界中探索或收集奖励时,这段代码不会运行;它仅在 `self.damage` 变量被激活时运行。
+
+当这段代码运行时,它使用 `self.rect.collidelist` 来查看玩家是否*仍然*接触着敌人列表中的敌人(当未检测到碰撞时,`collidelist` 返回 -1)。在玩家不再接触敌人时,就是该处理 `self.damage` 的时机了:把 `self.damage` 变量重新设置回 0 使其失效,并减去一点生命值。
+
+现在尝试你的游戏。
+
+### 得分反应
+
+现在,你有办法让你的玩家知道他们的分数和生命值了,当玩家达到某些里程碑时,你还可以确保触发特定的事件。例如,也许有一种特殊的奖励物品可以恢复一些生命值;也许生命值降到 0 的玩家必须从关卡的起始位置重新开始。
+
+你可以在你的代码中检查这些事件,并且相应地操纵你的游戏世界。你已经知道该怎么做,所以请浏览文档来寻找新的技巧,并且独立地尝试这些技巧。
+
+这里是到目前为止所有的代码:
+
+```
+#!/usr/bin/env python3
+# draw a world
+# add a player and player control
+# add player movement
+# add enemy and basic collision
+# add platform
+# add gravity
+# add jumping
+# add scrolling
+# add loot
+# add score
+
+# GNU All-Permissive License
+# Copying and distribution of this file, with or without modification,
+# are permitted in any medium without royalty provided the copyright
+# notice and this notice are preserved. This file is offered as-is,
+# without any warranty.
+
+import pygame
+import sys
+import os
+import pygame.freetype
+
+'''
+Objects
+'''
+
+class Platform(pygame.sprite.Sprite):
+ # x location, y location, img width, img height, img file
+ def __init__(self,xloc,yloc,imgw,imgh,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img)).convert()
+ self.image.convert_alpha()
+ self.rect = self.image.get_rect()
+ self.rect.y = yloc
+ self.rect.x = xloc
+
+class Player(pygame.sprite.Sprite):
+ '''
+ Spawn a player
+ '''
+ def __init__(self):
+ pygame.sprite.Sprite.__init__(self)
+ self.movex = 0
+ self.movey = 0
+ self.frame = 0
+ self.health = 10
+ self.damage = 0
+ self.collide_delta = 0
+ self.jump_delta = 6
+ self.score = 1
+ self.images = []
+ for i in range(1,9):
+ img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
+ img.convert_alpha()
+ img.set_colorkey(ALPHA)
+ self.images.append(img)
+ self.image = self.images[0]
+ self.rect = self.image.get_rect()
+
+ def jump(self,platform_list):
+ self.jump_delta = 0
+
+ def gravity(self):
+ self.movey += 3.2 # how fast player falls
+
+ if self.rect.y > worldy and self.movey >= 0:
+ self.movey = 0
+ self.rect.y = worldy-ty
+
+ def control(self,x,y):
+ '''
+ control player movement
+ '''
+ self.movex += x
+ self.movey += y
+
+ def update(self):
+ '''
+ Update sprite position
+ '''
+
+ self.rect.x = self.rect.x + self.movex
+ self.rect.y = self.rect.y + self.movey
+
+ # moving left
+ if self.movex < 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[self.frame//ani]
+
+ # moving right
+ if self.movex > 0:
+ self.frame += 1
+ if self.frame > ani*3:
+ self.frame = 0
+ self.image = self.images[(self.frame//ani)+4]
+
+ # collisions
+ enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
+ if self.damage == 0:
+ for enemy in enemy_hit_list:
+ if not self.rect.contains(enemy):
+ self.damage = self.rect.colliderect(enemy)
+
+ if self.damage == 1:
+ idx = self.rect.collidelist(enemy_hit_list)
+ if idx == -1:
+ self.damage = 0 # set damage back to 0
+ self.health -= 1 # subtract 1 hp
+
+ loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
+ for loot in loot_hit_list:
+ loot_list.remove(loot)
+ self.score += 1
+ print(self.score)
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.collide_delta = 0 # stop jumping
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.movey = 0
+ self.rect.y = worldy-ty-ty
+ self.collide_delta = 0 # stop jumping
+ if self.rect.y > g.rect.y:
+ self.health -=1
+ print(self.health)
+
+ if self.collide_delta < 6 and self.jump_delta < 6:
+ self.jump_delta = 6*2
+ self.movey -= 33 # how high to jump
+ self.collide_delta += 6
+ self.jump_delta += 6
+
+class Enemy(pygame.sprite.Sprite):
+ '''
+ Spawn an enemy
+ '''
+ def __init__(self,x,y,img):
+ pygame.sprite.Sprite.__init__(self)
+ self.image = pygame.image.load(os.path.join('images',img))
+ self.movey = 0
+ #self.image.convert_alpha()
+ #self.image.set_colorkey(ALPHA)
+ self.rect = self.image.get_rect()
+ self.rect.x = x
+ self.rect.y = y
+ self.counter = 0
+
+
+ def move(self):
+ '''
+ enemy movement
+ '''
+ distance = 80
+ speed = 8
+
+ self.movey += 3.2
+
+ if self.counter >= 0 and self.counter <= distance:
+ self.rect.x += speed
+ elif self.counter >= distance and self.counter <= distance*2:
+ self.rect.x -= speed
+ else:
+ self.counter = 0
+
+ self.counter += 1
+
+ if not self.rect.y >= worldy-ty-ty:
+ self.rect.y += self.movey
+
+ plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
+ for p in plat_hit_list:
+ self.movey = 0
+ if self.rect.y > p.rect.y:
+ self.rect.y = p.rect.y+ty
+ else:
+ self.rect.y = p.rect.y-ty
+
+ ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
+ for g in ground_hit_list:
+ self.rect.y = worldy-ty-ty
+
+
+class Level():
+ def bad(lvl,eloc):
+ if lvl == 1:
+ enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
+ enemy_list = pygame.sprite.Group() # create enemy group
+ enemy_list.add(enemy) # add enemy to group
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return enemy_list
+
+ def loot(lvl,tx,ty):
+ if lvl == 1:
+ loot_list = pygame.sprite.Group()
+ loot = Platform(200,ty*7,tx,ty, 'loot_1.png')
+ loot_list.add(loot)
+
+ if lvl == 2:
+ print(lvl)
+
+ return loot_list
+
+ def ground(lvl,gloc,tx,ty):
+ ground_list = pygame.sprite.Group()
+ i=0
+ if lvl == 1:
+ while i < len(gloc):
+ ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
+ ground_list.add(ground)
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return ground_list
+
+ def platform(lvl,tx,ty):
+ plat_list = pygame.sprite.Group()
+ ploc = []
+ i=0
+ if lvl == 1:
+ ploc.append((20,worldy-ty-128,3))
+ ploc.append((300,worldy-ty-256,3))
+ ploc.append((500,worldy-ty-128,4))
+
+ while i < len(ploc):
+ j=0
+ while j <= ploc[i][2]:
+ plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
+ plat_list.add(plat)
+ j=j+1
+ print('run' + str(i) + str(ploc[i]))
+ i=i+1
+
+ if lvl == 2:
+ print("Level " + str(lvl) )
+
+ return plat_list
+
+def stats(score,health):
+ myfont.render_to(world, (4, 4), "Score:"+str(score), SNOWGRAY, None, size=64)
+ myfont.render_to(world, (4, 72), "Health:"+str(health), SNOWGRAY, None, size=64)
+
+'''
+Setup
+'''
+worldx = 960
+worldy = 720
+
+fps = 40 # frame rate
+ani = 4 # animation cycles
+clock = pygame.time.Clock()
+pygame.init()
+main = True
+
+BLUE = (25,25,200)
+BLACK = (23,23,23 )
+WHITE = (254,254,254)
+SNOWGRAY = (137,164,166)
+ALPHA = (0,255,0)
+
+world = pygame.display.set_mode([worldx,worldy])
+backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
+backdropbox = world.get_rect()
+player = Player() # spawn player
+player.rect.x = 0
+player.rect.y = 0
+player_list = pygame.sprite.Group()
+player_list.add(player)
+steps = 10
+forwardx = 600
+backwardx = 230
+
+eloc = []
+eloc = [200,20]
+gloc = []
+tx = 64 #tile size
+ty = 64 #tile size
+
+font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),"fonts","amazdoom.ttf")
+font_size = tx
+myfont = pygame.freetype.Font(font_path, font_size)
+
+i=0
+while i <= (worldx/tx)+tx:
+ gloc.append(i*tx)
+ i=i+1
+
+enemy_list = Level.bad( 1, eloc )
+ground_list = Level.ground( 1,gloc,tx,ty )
+plat_list = Level.platform( 1,tx,ty )
+loot_list = Level.loot(1,tx,ty)
+
+'''
+Main loop
+'''
+while main == True:
+ for event in pygame.event.get():
+ if event.type == pygame.QUIT:
+ pygame.quit(); sys.exit()
+ main = False
+
+ if event.type == pygame.KEYDOWN:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ print("LEFT")
+ player.control(-steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ print("RIGHT")
+ player.control(steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ print('jump')
+
+ if event.type == pygame.KEYUP:
+ if event.key == pygame.K_LEFT or event.key == ord('a'):
+ player.control(steps,0)
+ if event.key == pygame.K_RIGHT or event.key == ord('d'):
+ player.control(-steps,0)
+ if event.key == pygame.K_UP or event.key == ord('w'):
+ player.jump(plat_list)
+
+ if event.key == ord('q'):
+ pygame.quit()
+ sys.exit()
+ main = False
+
+ # scroll the world forward
+ if player.rect.x >= forwardx:
+ scroll = player.rect.x - forwardx
+ player.rect.x = forwardx
+ for p in plat_list:
+ p.rect.x -= scroll
+ for e in enemy_list:
+ e.rect.x -= scroll
+    for l in loot_list:
+        l.rect.x -= scroll
+
+ # scroll the world backward
+ if player.rect.x <= backwardx:
+ scroll = backwardx - player.rect.x
+ player.rect.x = backwardx
+ for p in plat_list:
+ p.rect.x += scroll
+ for e in enemy_list:
+ e.rect.x += scroll
+ for l in loot_list:
+ l.rect.x += scroll
+
+ world.blit(backdrop, backdropbox)
+ player.gravity() # check gravity
+ player.update()
+ player_list.draw(world) #refresh player position
+ enemy_list.draw(world) # refresh enemies
+    ground_list.draw(world) # refresh ground
+ plat_list.draw(world) # refresh platforms
+ loot_list.draw(world) # refresh loot
+ for e in enemy_list:
+ e.move()
+ stats(player.score,player.health) # draw text
+ pygame.display.flip()
+ clock.tick(fps)
+```
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_maze.png?itok=mZ5LP4-X (connecting yellow dots in a maze)
+[2]: https://www.python.org/
+[3]: https://www.pygame.org/news
+[4]: https://linux.cn/article-9071-1.html
+[5]: https://linux.cn/article-10850-1.html
+[6]: https://linux.cn/article-10858-1.html
+[7]: https://linux.cn/article-10874-1.html
+[8]: https://linux.cn/article-10883-1.html
+[9]: https://linux.cn/article-11780-1.html
+[10]: https://linux.cn/article-11790-1.html
+[11]: https://linux.cn/article-11819-1.html
+[12]: https://linux.cn/article-11828-1.html
+[13]: http://pygame.org/news
+[14]: https://fontlibrary.org/
+[15]: https://www.fontsquirrel.com/
+[16]: https://www.theleagueofmoveabletype.com/
+[17]: https://opensource.com/article/17/7/how-unzip-targz-file
+[18]: https://opensource.com/sites/default/files/uploads/pygame-score.jpg (Keeping score in Pygame)
+[19]: https://linux.cn/article-10902-1.html
diff --git a/published/20200109 My favorite Bash hacks.md b/published/20200109 My favorite Bash hacks.md
new file mode 100644
index 0000000000..da0173b7a2
--- /dev/null
+++ b/published/20200109 My favorite Bash hacks.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11841-1.html)
+[#]: subject: (My favorite Bash hacks)
+[#]: via: (https://opensource.com/article/20/1/bash-scripts-aliases)
+[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
+
+我珍藏的 Bash 秘籍
+======
+
+> 通过别名和其他捷径来提高你经常忘记的那些事情的效率。
+
+![bash logo on green background][1]
+
+要是你整天使用计算机,如果能找到需要重复执行的命令并记下它们以便以后轻松使用那就太棒了。它们全都呆在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
+
+在本文中,我分享了我最喜欢的这些助手命令,对于我经常遗忘的事情,它们很有用,也希望这可以帮助到你,以及为你解决一些经常头疼的问题。
+
+### 完事吱一声
+
+当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后就必须回头去检查该操作是否已完成。然而通过有用的 `say` 命令,现在就不用再这样了(这是在 MacOS 上;请根据你的本地环境更改为等效的方式):
+
+```
+function looooooooong {
+ START=$(date +%s.%N)
+ $*
+ EXIT_CODE=$?
+ END=$(date +%s.%N)
+ DIFF=$(echo "$END - $START" | bc)
+ RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
+ result="$1 completed in $RES, exit code $EXIT_CODE."
+ echo -e "\n⏰ $result"
+ ( say -r 250 $result 2>&1 > /dev/null & )
+}
+```
+
+这个命令会记录命令的开始和结束时间,计算所需的分钟数,并“说”出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
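+
+在 Linux 上通常没有 `say` 命令,不过可以用 `espeak` 或 `spd-say` 这类语音合成工具达到类似的效果。下面是一个最小的替换示意(假设系统里已经安装了 `espeak`,其 `-s` 参数用来控制语速),只需把函数里调用 `say` 的那一行换成:
+
+```
+# 用 espeak 朗读结果;-s 250 表示语速(每分钟单词数)
+( espeak -s 250 "$result" 2>&1 > /dev/null & )
+```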
+
+### 安装小助手
+
+我在小时候就开始使用 Ubuntu,而我需要学习的第一件事就是如何安装软件包。我最早添加的别名之一,就是一个用来安装软件包的助手(以当天的流行梗命名):
+
+```
+alias canhas="sudo apt-get install -y"
+```
+
+### GPG 签名
+
+有时候,我必须在没有 GPG 扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
+
+```
+alias gibson="gpg --encrypt --sign --armor"
+alias ungibson="gpg --decrypt"
+```
+
+### Docker
+
+Docker 的子命令很多,而 Docker Compose 的子命令更多。我以前总是忘记加 `--rm` 标志,但有了这些有用的别名之后就不会忘了:
+
+```
+alias dc="docker-compose"
+alias dcr="docker-compose run --rm"
+alias dcb="docker-compose run --rm --build"
+```
+
+### Google Cloud 的 gcurl 助手
+
+对于我来说,Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。`gcurl` 是一个别名,可确保在用带有身份验证标头的本地 `curl` 命令连接 Google Cloud API 时,可以获得所有正确的标头。
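+
+这样的别名大致可以写成下面这样。这只是一个示意性的写法,假设你已经安装并登录了 `gcloud` 命令行工具,具体请以 Google Cloud 的文档为准:
+
+```
+# 在 curl 的基础上自动附加访问令牌和 JSON 头
+alias gcurl='curl --header "Authorization: Bearer $(gcloud auth print-access-token)" --header "Content-Type: application/json"'
+```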
+
+### Git 和 ~/.gitignore
+
+我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 助手。
+
+我最有用的助手之一是我用来克隆 GitHub 存储库的。你不必运行:
+
+```
+git clone git@github.com:org/repo /Users/glasnt/git/org/repo
+```
+
+我设置了一个克隆函数:
+
+```
+clone(){
+ echo Cloning $1 to ~/git/$1
+ cd ~/git
+ git clone git@github.com:$1 $1
+ cd $1
+}
+```
+
+我还有一个“刷新上游”命令,尽管我每次都会忘记它,而且每次进入 `~/.bashrc` 文件看到它时总会傻笑:
+
+```
+alias yoink="git checkout master && git fetch upstream master && git merge upstream/master"
+```
+
+给 Git 一族的另一个助手是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这些内容是:
+
+```
+.DS_Store # macOS clutter
+venv/ # I never want to commit my virtualenv
+*.egg-info/* # ... nor any locally compiled packages
+__pycache__ # ... or source
+*.swp # ... nor any files open in vim
+```
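+
+创建并指定这个全局忽略文件只需要一条 `git config` 命令,文件路径可以自行选择,例如:
+
+```
+# 让 Git 把 ~/.gitignore_global 作为全局忽略文件
+git config --global core.excludesfile ~/.gitignore_global
+```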
+
+你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
+
+### 轮到你了
+
+你最喜欢的助手命令是什么?请在评论中分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/bash-scripts-aliases
+
+作者:[Katie McLaughlin][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/glasnt
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
+[2]: https://opensource.com/article/19/9/getting-started-zsh
+[3]: https://gnupg.org/
+[4]: https://cloud.google.com/service-infrastructure/docs/service-control/getting-started
+[5]: mailto:git@github.com
+[6]: https://help.github.com/en/github/using-git/ignoring-files#create-a-global-gitignore
+[7]: https://www.gitignore.io/
+[8]: https://github.com/github/gitignore
diff --git a/published/20200109 What-s HTTPS for secure computing.md b/published/20200109 What-s HTTPS for secure computing.md
new file mode 100644
index 0000000000..975b4ebdff
--- /dev/null
+++ b/published/20200109 What-s HTTPS for secure computing.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: (hopefully2333)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11877-1.html)
+[#]: subject: (What's HTTPS for secure computing?)
+[#]: via: (https://opensource.com/article/20/1/confidential-computing)
+[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
+
+用于安全计算的 HTTPS 是什么?
+======
+
+> 在默认的情况下,网站的安全性还不足够。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/11/123552rqncn4c7474j44jq.jpg)
+
+在过去的几年里,寻找一个只以 “http://...” 开头的网站变得越来越难,这是因为业界终于意识到,网络安全“是件事”,同时也是因为客户端和服务端之间建立和使用 https 连接变得更加容易了。类似的转变可能正以不同的方式发生在云计算、边缘计算、物联网、区块链、人工智能、机器学习等领域。长久以来,我们都知道我们应该对存储的静态数据和在网络中传输的数据进行加密,但是在使用和处理数据的时候对它进行加密是困难且昂贵的。可信计算(使用例如受信任的执行环境(Trusted Execution Environments,TEE)这样的硬件功能来提供数据和算法这种类型的保护)可以保护主机系统中的或者易受攻击的环境中的数据。
+
+关于 [TEEs][2],当然,还有我和 Nathaniel McCallum 共同创立的 [Enarx 项目][3],我已经写了几次文章(参见《[给每个人的 Enarx(一个任务)][4]》 和 《[Enarx 迈向多平台][5]》)。Enarx 使用 TEEs 来提供独立于平台和语言的部署平台,以此来让你能够安全地将敏感应用或者敏感组件(例如微服务)部署在你不信任的主机上。当然,Enarx 是完全开源的(顺便提一下,我们使用的是 Apache 2.0 许可证)。能够在你不信任的主机上运行工作负载,这是可信计算的承诺,它扩展了使用静态敏感数据和传输中数据的常规做法:
+
+* **存储**:你要加密你的静态数据,因为你不完全信任你的基础存储架构。
+* **网络**:你要加密你正在传输中的数据,因为你不完全信任你的基础网络架构。
+* **计算**:你要加密你正在使用中的数据,因为你不完全信任你的基础计算架构。
+
+关于信任,我有非常多的话想说,而且,上述说法里的单词“**完全**”是很重要的(在重新读我写的这篇文章的时候,我新加了这个单词)。不论哪种情况,你必须在一定程度上信任你的基础设施,无论是传递你的数据包还是存储你的数据块。例如,对于计算基础架构,你必须要信任 CPU 和与之关联的固件,这是因为如果你不信任它们,你就无法真正地进行计算(现在有一些诸如同态加密(homomorphic encryption)一类的技术,这些技术正在开始提供一些可能性,但是它们依然有限,还不够成熟)。
+
+考虑到已经发现的一些 CPU 安全性问题,人们有时自然会产生疑问:是否应该完全信任 CPU,以及它们在面对针对其所在主机的物理攻击时是否完全安全。
+
+这两个问题的回答都是“不”,但是在考虑到大规模可用性和普遍推广的成本,这已经是我们当前拥有的最好的技术了。为了解决第二个问题,没有人去假装这项技术(或者任何的其他技术)是完全安全的:我们需要做的是思考我们的[威胁模型][6]并确定这个情况下的 TEEs 是否为我们的特殊需求提供了足够的安全防护。关于第一个问题,Enarx 采用的模型是在部署时就对你是否信任一个特定的 CPU 组做出决定。举个例子,如果供应商 Q 的 R 代芯片被发现有漏洞,可以很简单地说“我拒绝将我的工作内容部署到 Q 的 R 代芯片上去,但是仍然可以部署到 Q 的 S 型号、T 型号和 U 型号的芯片以及任何 P、M 和 N 供应商的任何芯片上去。”
+
+我认为这里发生了三处改变,这些改变引起了人们现在对机密计算(confidential computing)的兴趣和采用。
+
+1. **硬件可用**:只是在过去的 6 到 12 个月里,支持 TEEs 的硬件才开始变得广泛可用,这会儿市场上的主要例子是 Intel 的 SGX 和 AMD 的 SEV。我们期望在未来可以看到支持 TEE 的硬件的其他例子。
+2. **行业就绪**:就像上云越来越多地被接受作为应用程序部署的模型,监管机构和立法机构也在提高各类组织保护其管理的数据的要求。组织开始呼吁在不受信任的主机运行敏感程序(或者是处理敏感数据的应用程序)的方法,更确切地说,是在无法完全信任且带有敏感数据的主机上运行的方法。这不足为奇:如果芯片制造商看不到这项技术的市场,他们就不会投太多的钱在这项技术上。Linux 基金会的[机密计算联盟(CCC)][7]的成立就是业界对如何寻找使用加密计算的通用模型并且鼓励开源项目使用这些技术感兴趣的案例。(红帽发起的 Enarx 是一个 CCC 项目。)
+3. **开放源码**:就像区块链一样,机密计算是使用开源绝对明智的技术之一。如果你要运行敏感程序,你需要去信任正在为你运行的程序。不仅仅是 CPU 和固件,同样还有在 TEE 内执行你的工作负载的框架。可以很好地说,“我不信任主机机器和它上面的软件栈,所以我打算使用 TEE,”但是如果你不够了解 TEE 软件环境,那你就是将一种软件不透明换成另外一种。TEEs 的开源支持将允许你或者社区(实际上是你与社区)以一种专有软件不可能实现的方式来检查和审计你所运行的程序。这就是为什么 CCC 位于 Linux 基金会旗下(这个基金会致力于开放式开发模型)并鼓励 TEE 相关的软件项目加入且成为开源项目(如果它们还没有成为开源)。
+
+我认为,在过去的 15 到 20 年里,硬件可用、行业就绪和开放源码已成为推动技术改变的驱动力。区块链、人工智能、云计算、大规模计算(webscale computing)、大数据和互联网商务都是这三个点同时发挥作用的例子,并且在业界带来了巨大的改变。
+
+在一般情况下,安全是我们这数十年来听到的一种承诺,并且其仍然未被实现。老实说,我不确定它未来会不会实现。但是随着新技术的到来,特定用例的安全变得越来越实用和无处不在,并且在业内受到越来越多的期待。这样看起来,机密计算似乎已准备好成为下一个重大变化 —— 而你,我亲爱的读者,可以一起来加入到这场革命(毕竟它是开源的)。
+
+这篇文章最初是发布在 Alice, Eve, and Bob 上的,这是得到了作者许可的重发。
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/confidential-computing
+
+作者:[Mike Bursell][a]
+选题:[lujun9972][b]
+译者:[hopefully2333](https://github.com/hopefully2333)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/secure_https_url_browser.jpg?itok=OaPuqBkG (Secure https browser)
+[2]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/
+[3]: https://enarx.io/
+[4]: https://aliceevebob.com/2019/08/20/enarx-for-everyone-a-quest/
+[5]: https://aliceevebob.com/2019/10/29/enarx-goes-multi-platform/
+[6]: https://aliceevebob.com/2018/02/20/there-are-no-absolutes-in-security/
+[7]: https://confidentialcomputing.io/
+[8]: tmp.VEZpFGxsLv#1
+[9]: https://aliceevebob.com/2019/12/03/confidential-computing-the-new-https/
diff --git a/published/20200112 What I learned going from prison to Python.md b/published/20200112 What I learned going from prison to Python.md
new file mode 100644
index 0000000000..283212f4b4
--- /dev/null
+++ b/published/20200112 What I learned going from prison to Python.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11893-1.html)
+[#]: subject: (What I learned going from prison to Python)
+[#]: via: (https://opensource.com/article/20/1/prison-to-python)
+[#]: author: (Shadeed "Sha" Wallace-Stepter https://opensource.com/users/shastepter)
+
+从监狱到 Python
+======
+
+> 入狱后,开源编程是如何提供机会的。
+
+![书架上的编程书籍][1]
+
+不到一年前,我还在圣昆廷州立监狱服刑,我是无期徒刑。
+
+我高三的时候抢劫了一个人,并向他开了枪。我花了一段时间才意识到并承认自己做错了:在经历了陪审团审判、看到我的行为带来的恶果之后,我知道需要改变自己,我也确实做到了。尽管我对我的行为表示懊悔,但我毕竟开枪打了一个人,并差点杀了他。做这样的事是有后果的,这是理所当然的。所以在我 18 岁的时候,我被判了终身监禁。
+
+监狱是一个非常可怕的地方;我是不推荐你去的。但是我必须去,所以我去了。我不告诉你具体的细节,但你可以放心,这是一个没有太多动机去改变的地方,许多人在这里养成的坏习惯比他们过去在别处养成的更多。
+
+我是幸运儿之一。当我在服刑的时候,发生了一些不同寻常的事情。我开始想象自己出狱后的的未来,虽然在这之前,我还是已经在那里度过了我整个成年生活。
+
+现在你想想:我是黑人,只受过高中教育,没有工作经历,而且即使出狱,我也是一个被定罪的重罪犯。我认为,任何雇主看到我的简历,都不会产生“我需要雇用这个人”的想法,这是可以理解的。
+
+我不知道我的选择是什么,但我已经下定决心了。我需要做些活下去的事情,并且这和我入狱前的生活一点也不像。
+
+### Python 之路
+
+最终,我被关在了圣昆廷州立监狱,当时我还不知道被关在那里对我来说有多幸运。圣昆廷提供了几个自助和教育编程项目。这些[改造机会][2]帮助囚犯掌握在获释后避免再次犯罪的技能。
+
+作为其中一个编程项目的一部分,2017 年我通过圣昆廷媒体项目认识了[杰西卡·麦凯拉][3]。杰西卡是编程语言 [Python][4] 的爱好者,她开始向我推荐 Python 有多棒,以及它是刚起步的人学习的完美语言。这就是故事变得比小说更精彩的地方。
+
+> 感谢 [@northbaypython][5] 让 [@ShaStepter][6] 和我重复 [@pycon][7] 的主题演讲,让他们被录制下来。我很荣幸与大家分享:
+>
+> 从监狱到 Python: https://t.co/rcumoAgZHm
+>
+> 大规模去监禁化:如果我们不雇佣被判重罪的人,谁会呢? https://t.co/fENDUFdxfX
+>
+> [pic.Twitter.com/kpjo8d3ul6][8]
+>
+> —杰西卡·麦凯拉(@jessicamckellar)[2019 年 11 月 5 日][9]
+
+杰西卡向我介绍了一些 Python 视频教程,这些教程是她为一家名叫 [O’Reilly Media][10] 的公司做的,课程是在线的,如果我能接触到它们,那该有多好呀。不幸的是,在监狱里上网是不可能的。但是,我遇到了一个叫 Tim O’Reilly 的人,他最近刚来到圣昆廷。在他访问之后,Tim 从他的公司 O’Reilly Media 公司向监狱的编程班捐赠了大量内容。最终,我拿到了一款平板电脑,上面有杰西卡的 Python 教程,并学会了如何使用这些 Python 教程进行编码。
+
+真是难以置信。背景和生活与我完全不同的陌生人把这些联系在一起,让我学会了编码。
+
+### 对 Python 社区的热爱
+
+在这之后,我开始经常和杰西卡见面,她开始告诉我关于开源社区的情况。我了解到,从根本上说,开源社区就是关于伙伴关系和协作的社区。之所以如此有效,是因为没有人被排除在外。
+
+对我来说,一个努力寻找自己定位的人,我所看到的是一种非常基本的爱——通过合作和接受的爱,通过接触的爱,通过包容的爱。我渴望成为其中的一部分。所以我继续学习 Python,不幸的是,我无法获得更多的教程,但是我能够从开源社区收集的大量书面知识中获益。我读一切提到 Python 的东西,从平装本到晦涩难懂的杂志文章,我使用平板电脑来解决我读到的 Python 问题。
+
+我对 Python 和编程的热情不是我的许多同龄人所共有的。除了监狱编程课上的极少数人之外,我认识的其他人都没有提到过编程;一般囚犯都不知道。我认为这是因为有过监禁经历的人无法接触编程,尤其是如果你是有色人种。
+
+### 监狱外的 Python 生活
+
+然而,在 2018 年 8 月 17 日,我得到了生命中的惊喜。时任州长的杰里·布朗将我 27 年的刑期减刑,在服刑将近 19 年后,我被释放出狱了。
+
+但现实情况是,这也是为什么我认为编程和开源社区如此有价值。我是一名 37 岁的黑人罪犯,没有工作经历,刚刚在监狱服刑 18 年。我有犯罪史,并且现存偏见导致没有多少职业适合我。但是编程是少数例外之一。
+
+现在,监禁后重返社会的人们迫切需要包容,但当谈及工作场所的多样性以及对多样性的需求时,你真的听不到这个群体被提及或包容。
+
+> 还有什么:
+>
+> 1、背景调查:询问他们在你的公司是如何使用的。
+>
+> 2、初级角色:删除虚假的、不必要的先决条件,这些条件将排除有记录的合格人员。
+>
+> 3、积极拓展:与当地再就业项目合作,创建招聘渠道。
+>
+> [pic.twitter.com/WnzdEUTuxr][11]
+>
+> —杰西卡·麦凯拉(@jessicamckellar)[2019 年 5 月 12 日][12]
+
+
+因此,我想谦卑地挑战开源社区的所有程序员和成员,让他们围绕包容和多样性展开思考。今天,我自豪地站在你们面前,代表一个大多数人都没有想到的群体——以前被监禁的人。但是我们存在,我们渴望证明我们的价值,最重要的是,我们期待被接受。当我们重返社会时,许多挑战等待着我们,我请求你们允许我们有机会展示我们的价值。欢迎我们,接受我们,最重要的是,包容我们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/prison-to-python
+
+作者:[Shadeed "Sha" Wallace-Stepter][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/shastepter
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf)
+[2]: https://www.dailycal.org/2019/02/27/san-quentin-rehabilitation-programs-offer-inmates-education-a-voice/
+[3]: https://twitter.com/jessicamckellar?lang=en
+[4]: https://www.python.org/
+[5]: https://twitter.com/northbaypython?ref_src=twsrc%5Etfw
+[6]: https://twitter.com/ShaStepter?ref_src=twsrc%5Etfw
+[7]: https://twitter.com/pycon?ref_src=twsrc%5Etfw
+[8]: https://t.co/Kpjo8d3ul6
+[9]: https://twitter.com/jessicamckellar/status/1191601209917837312?ref_src=twsrc%5Etfw
+[10]: http://shop.oreilly.com/product/110000448.do
+[11]: https://t.co/WnzdEUTuxr
+[12]: https://twitter.com/jessicamckellar/status/1127640222504636416?ref_src=twsrc%5Etfw
diff --git a/published/20200117 Use this Python script to find bugs in your Overcloud.md b/published/20200117 Use this Python script to find bugs in your Overcloud.md
new file mode 100644
index 0000000000..94428426e4
--- /dev/null
+++ b/published/20200117 Use this Python script to find bugs in your Overcloud.md
@@ -0,0 +1,172 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11886-1.html)
+[#]: subject: (Use this Python script to find bugs in your Overcloud)
+[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
+[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)
+
+用 Python 脚本发现 OpenStack Overcloud 中的问题
+======
+
+> LogTool 是一组 Python 脚本,可帮助你找出 Overcloud 节点中问题的根本原因。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/12/211455woy57xx5q19cx175.jpg)
+
+OpenStack 在其 Overcloud 节点和 Undercloud 主机上存储和管理了一堆日志文件。因此,使用 OSP 日志文件来排查遇到的问题并不是一件容易的事,尤其在你甚至都不知道是什么原因导致问题时。
+
+如果你正处于这种情况,那么 [LogTool][2] 可以使你的生活变得更加轻松!它会为你节省人工排查问题所需的时间和精力。LogTool 基于模糊字符串匹配算法,可以提供过去发生的所有唯一的错误和警告信息。你可以根据日志中的时间戳导出特定时间段(例如 10 分钟前、一个小时前、一天前等)的这些信息。
+
+LogTool 是一组 Python 脚本,其主要模块 `PyTool.py` 在 Undercloud 主机上执行。某些操作模式使用直接在 Overcloud 节点上执行的其他脚本,例如从 Overcloud 日志中导出错误和警告信息。
+
+LogTool 支持 Python 2 和 Python 3,你可以根据需要更改工作目录:[LogTool_Python2][3] or [LogTool_Python3][4]。
+
+### 操作方式
+
+#### 1、从 Overcloud 日志中导出错误和警告信息
+
+此模式用于从过去发生的 Overcloud 节点中提取 **错误** 和 **警告** 信息。作为用户,系统将提示你提供“开始时间”和“调试级别”,以用于提取错误或警告消息。例如,如果在过去 10 分钟内出了问题,你则可以只提取该时间段内的错误和警告消息。
+
+此操作模式将为每个 Overcloud 节点生成一个包含结果文件的目录。结果文件是经过压缩的简单文本文件(`*.gz`),以减少从 Overcloud 节点下载所需的时间。将压缩文件转换为常规文本文件,可以使用 `zcat` 或类似工具。此外,Vi 的某些版本和 Emacs 的任何最新版本均支持读取压缩数据。结果文件分为几部分,并在底部包含目录。
+
+LogTool 可以即时检测两种日志文件:标准和非标准。在标准文件中,每条日志行都有一个已知的和已定义的结构:时间戳、调试级别、信息等等。在非标准文件中,日志的结构未知。例如,它可能是第三方的日志。在目录中,你可以找到每个部分的“名称 --> 行号”例如:
+
+ * **原始数据 - 从标准 OSP 日志中提取的错误/警告消息:** 这部分包含所有提取的错误/警告消息,没有任何修改或更改。这些消息是 LogTool 用于模糊匹配分析的原始数据。
+ * **统计信息 - 每个标准 OSP 日志的错误/警告信息数量:** 在此部分,你将找到每个标准日志文件的错误和警告数量。这些信息可以帮助你了解用于排查问题根本原因的潜在组件。
+ * **统计信息 - 每个标准 OSP 日志文件的唯一消息:** 这部分提供指定时间戳内的唯一的错误和警告消息。有关每个唯一错误或警告的更多详细信息,请在“原始数据”部分中查找相同的消息。
+ * **统计信息 - 每个非标准日志文件在任意时间的唯一消息:** 此部分包含非标准日志文件中的唯一消息。遗憾的是,LogTool 无法像标准日志文件那样的处理方式处理这些日志文件。因此,在你提取“特定时间”的日志信息时会被忽略,你会看到过去创建的所有唯一的错误/警告消息。因此,首先,向下滚动到结果文件底部的目录并查看其部分-使用目录中的行索引跳到相关部分,其中第 3、4 和 5 行的信息最重要。
+
+#### 2、从 Overcloud 节点下载所有日志
+
+所有 Overcloud 节点的日志将被压缩并下载到 Undercloud 主机上的本地目录。
+
+#### 3、所有 Overcloud 日志中搜索字符串
+
+该模式会在所有 Overcloud 日志中“grep”(搜索)用户提供的字符串。例如,你可能希望查看某个特定请求的所有日志消息,比如某个失败的“Create VM”请求的请求 ID。
+
+#### 4、检查 Overcloud 上当前的 CPU、RAM 和磁盘使用情况
+
+该模式显示每个 Overcloud 节点上的当前 CPU、RAM 和磁盘信息。
+
+#### 5、执行用户脚本
+
+该模式使用户可以在 Overcloud 节点上运行自己的脚本。例如,假设 Overcloud 部署失败,你就需要在每个控制器节点上执行相同的过程来修复该问题。你可以实现“替代方法”脚本,并使用此模式在控制器上运行它。
+
+#### 6、仅按给定的时间戳下载相关日志
+
+此模式仅下载 Overcloud 上“上次修改时间”晚于给定时间戳的日志。例如,如果 10 分钟前出现了错误,那么更旧的日志文件就与问题无关,因此无需下载。此外,你不能(或不应)在某些错误报告工具中附加大文件,因此此模式可能有助于编写错误报告。
+
+#### 7、从 Undercloud 日志中导出错误和警告信息
+
+这与上面的模式 1 相同。
+
+#### 8、在 Overcloud 上检查不正常的 docker
+
+此模式用于在节点上搜索不正常的 Docker。
+
+#### 9、下载 OSP 日志并在本地运行 LogTool
+
+此模式允许你从 Jenkins 或 Log Storage 下载 OSP 日志(例如,`cougar11.scl.lab.tlv.redhat.com`),并在本地分析。
+
+#### 10、在 Undercloud 上分析部署日志
+
+此模式可以帮助你了解 Overcloud 或 Undercloud 部署过程中出了什么问题。例如,在`overcloud_deploy.sh` 脚本中,使用 `--log` 选项时会生成部署日志;此类日志的问题是“不友好”,你很难理解是什么出了问题,尤其是当详细程度设置为 `vv` 或更高时,使得日志中的数据难以读取。此模式提供有关所有失败任务的详细信息。
+
+#### 11、分析 Gerrit(Zuul)失败的日志
+
+此模式用于分析 Gerrit(Zuul)日志文件。它会自动从远程 Gerrit 门下载所有文件(HTTP 下载)并在本地进行分析。
+
+### 安装
+
+GitHub 上有 LogTool,使用以下命令将其克隆到你的 Undercloud 主机:
+
+```
+git clone https://github.com/zahlabut/LogTool.git
+```
+
+该工具还使用了一些外部 Python 模块:
+
+#### Paramiko
+
+默认情况下,SSH 模块通常会安装在 Undercloud 上。使用以下命令来验证是否已安装:
+
+```
+ls -a /usr/lib/python2.7/site-packages | grep paramiko
+```
+
+如果需要安装模块,请在 Undercloud 上执行以下命令:
+
+```
+sudo easy_install pip
+sudo pip install paramiko==2.1.1
+```
+
+#### BeautifulSoup
+
+此 HTML 解析器模块仅在使用 HTTP 下载日志文件的模式下使用。它用于解析 Artifacts HTML 页面以获取其中的所有链接。安装 BeautifulSoup,请输入以下命令:
+
+```
+pip install beautifulsoup4
+```
+
+你还可以通过执行以下命令使用 [requirements.txt][6] 文件安装所有必需的模块:
+
+```
+pip install -r requirements.txt
+```
+
+### 配置
+
+所有必需的参数都直接在 `PyTool.py` 脚本中设置。默认值为:
+
+```
+overcloud_logs_dir = '/var/log/containers'
+overcloud_ssh_user = 'heat-admin'
+overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
+undercloud_logs_dir ='/var/log/containers'
+source_rc_file_path='/home/stack/'
+```
+
+### 用法
+
+此工具是交互式的,因此要启动它,只需输入:
+
+```
+cd LogTool
+python PyTool.py
+```
+
+### 排除 LogTool 故障
+
+在运行时会创建两个日志文件:`Error.log` 和 `Runtime.log`。请在你要打开的问题的描述中添加两者的内容。
+
+### 局限性
+
+LogTool 进行硬编码以处理最大 500 MB 的文件。
+
+### LogTool_Python3 脚本
+
+在 [github.com/zahlabut/LogTool][2] 获取。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/logtool-root-cause-identification
+
+作者:[Arkady Shtempler][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ashtempl
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
+[2]: https://github.com/zahlabut/LogTool
+[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
+[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
+[5]: https://opensource.com/article/19/2/getting-started-cat-command
+[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt
diff --git a/published/20200118 Keep a journal of your activities with this Python program.md b/published/20200118 Keep a journal of your activities with this Python program.md
new file mode 100644
index 0000000000..6426f86baa
--- /dev/null
+++ b/published/20200118 Keep a journal of your activities with this Python program.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11846-1.html)
+[#]: subject: (Keep a journal of your activities with this Python program)
+[#]: via: (https://opensource.com/article/20/1/python-journal)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用这个 Python 程序记录你的活动
+======
+
+> jrnl 可以创建可搜索、带时间戳、可导出、并且可以按需加密的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/03/105455tx03zo2pu7woyusp.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 jrnl 记录日志
+
+在我的公司,许多人会在下班之前在 Slack 上发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的一个很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
+
+![jrnl][2]
+
+打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助。
+
+jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
+
+由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 `pip3 install jrnl`。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
+
+![jrnl's first run][4]
+
+现在,每当你需要做笔记或记录日志时,只需输入 `jrnl <一些文字>`,它就会把这条记录连同时间戳一起保存到默认文件中。你可以使用 `jrnl -on YYYY-MM-DD` 搜索特定日期的条目,`jrnl -from YYYY-MM-DD` 搜索在那个日期之后的条目,以及用 `jrnl -to YYYY-MM-DD` 搜索到那个日期为止的条目。搜索词可以与 `-and` 参数结合使用,允许像 `jrnl -from 2019-01-01 -and -to 2019-12-31` 这类搜索。
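+
+下面是一个简单的使用示意(条目内容和日期纯属演示,时间的解析依赖 jrnl 对自然语言日期的支持):
+
+```
+# 记录一条带时间戳的条目,冒号之前的部分会被解析为时间
+jrnl yesterday: 完成了部署脚本的重构
+
+# 查看某一天的所有条目
+jrnl -on 2020-01-17
+```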
+
+你还可以使用 `--edit` 标志编辑日志中的条目。开始之前,通过编辑文件 `~/.config/jrnl/jrnl.yaml` 来设置默认编辑器。你还可以指定日志使用什么文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vim,jrnl 的文档中有一些使用其他编辑器如 VSCode 和 Sublime Text 的[有用提示][5]。
+
+![Example jrnl config file][6]
+
+jrnl 还可以加密日志文件。通过设置全局的 `encrypt` 变量,你可以告诉 jrnl 加密你定义的所有日志;也可以在配置文件中针对单个日志设置 `encrypt: true` 来单独加密它。
+
+```
+journals:
+ default: ~/journals/journal.txt
+ work: ~/journals/work.txt
+ private:
+ journal: ~/journals/private.txt
+ encrypt: true
+```
+
+如果日志尚未加密,系统会在你对它执行任何相关操作时提示你输入密码。日志文件将以加密形式保存在磁盘上,以免被窥探。[jrnl 文档][7] 中包含其工作原理、使用哪些加密方式等的更多信息。
+
+![Encrypted jrnl file][8]
+
+日志记录帮助我记住什么时候做了什么事,并在我需要的时候能够找到它。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/python-journal
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_ (Writing in a notebook)
+[2]: https://opensource.com/sites/default/files/uploads/productivity_8-1.png (jrnl)
+[3]: https://jrnl.sh/
+[4]: https://opensource.com/sites/default/files/uploads/productivity_8-2.png (jrnl's first run)
+[5]: https://jrnl.sh/recipes/#external-editors
+[6]: https://opensource.com/sites/default/files/uploads/productivity_8-3.png (Example jrnl config file)
+[7]: https://jrnl.sh/encryption/
+[8]: https://opensource.com/sites/default/files/uploads/productivity_8-4.png (Encrypted jrnl file)
diff --git a/published/20200119 How to Set or Change Timezone in Ubuntu Linux -Beginner-s Tip.md b/published/20200119 How to Set or Change Timezone in Ubuntu Linux -Beginner-s Tip.md
new file mode 100644
index 0000000000..5395a64b6e
--- /dev/null
+++ b/published/20200119 How to Set or Change Timezone in Ubuntu Linux -Beginner-s Tip.md
@@ -0,0 +1,128 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11838-1.html)
+[#]: subject: (How to Set or Change Timezone in Ubuntu Linux [Beginner’s Tip])
+[#]: via: (https://itsfoss.com/change-timezone-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Ubuntu Linux 中设置或更改时区
+======
+
+[你安装 Ubuntu 时][1],它会要求你设置时区。如果你选择了一个错误的时区,或者你搬到了世界上的其它地方,之后也可以很容易地更改它。
+
+### 如何在 Ubuntu 和其它 Linux 发行版中更改时区
+
+这里有两种方法来更改 Ubuntu 中的时区。你可以使用图形化设置或在终端中使用 `timedatectl` 命令。你也可以直接更改 `/etc/timezone` 文件,但是我不建议这样做。
+
+在这篇初学者教程中,我将向你展示图形化和终端两种方法:
+
+ * [通过 GUI 更改 Ubuntu 中的时区][2] (适合桌面用户)
+ * [通过命令行更改 Ubuntu 中的时区][3] (桌面和服务器均适用)
+
+![][4]
+
+#### 方法 1: 通过终端更改 Ubuntu 时区
+
+[Ubuntu][5] 或一些使用 systemd 的其它发行版可以在 Linux 终端中使用 `timedatectl` 命令来设置时区。
+
+你可以使用不带任何参数的 `timedatectl` 命令来查看当前的日期和时区设置:
+
+```
+$ timedatectl
+ Local time: Sat 2020-01-18 17:39:52 IST
+ Universal time: Sat 2020-01-18 12:09:52 UTC
+ RTC time: Sat 2020-01-18 12:09:52
+ Time zone: Asia/Kolkata (IST, +0530)
+ System clock synchronized: yes
+systemd-timesyncd.service active: yes
+ RTC in local TZ: no
+```
+
+正如你在上面的输出中所看到的,我的系统使用的是 Asia/Kolkata 时区。它还告诉我,当前时间比世界时早 5 小时 30 分钟。
+
+要在 Linux 中设置时区,你需要知道准确的时区名称,并且必须使用正确的时区格式(格式为“洲/城市”)。
+
+为获取时区列表,使用 `timedatectl` 命令的 `list-timezones` 参数:
+
+```
+timedatectl list-timezones
+```
+
+它将向你显示大量可用的时区列表。
+
+![Timezones List][6]
+
+你可以使用向上箭头和向下箭头或 `PgUp` 和 `PgDown` 键来在页面之间移动。
+
+你也可以 `grep` 输出,并搜索你的时区。例如,假如你正在寻找欧洲的时区,你可以使用:
+
+```
+timedatectl list-timezones | grep -i europe
+```
+
+比方说,你想把时区设置为巴黎。在这里,要使用的时区值是 Europe/Paris:
+
+```
+timedatectl set-timezone Europe/Paris
+```
+
+它虽然不显示任何成功信息,但是时区会立即更改。你不需要重新启动或注销。
+
+记住,虽然你不需要成为 root 用户,也不需要对命令使用 `sudo`,但是你的账户仍然需要拥有管理员权限才能更改时区。
+
+你可以使用 [date 命令][7] 来验证更改后的时间和时区:
+
+```
+$ date
+Sat Jan 18 13:56:26 CET 2020
+```
+
+#### 方法 2: 通过 GUI 更改 Ubuntu 时区
+
+按下 `super` 键 (Windows 键) ,并搜索设置:
+
+![Applications Menu Settings][8]
+
+在左侧边栏中,向下滚动一点,查看详细信息:
+
+![Go to Settings -> Details][9]
+
+在详细信息中,你将在左侧边栏中找到“日期和时间”。在这里,你应该关闭自动时区选项(如果它已经被启用),然后在时区上单击:
+
+![In Details -> Date & Time, turn off the Automatic Time Zone][10]
+
+当你单击时区时,它将打开一个交互式地图,你可以单击你所在的地理位置,然后关闭窗口即可。
+
+![Select a timezone][11]
+
+选择新的时区后,除了关闭这个地图之外,你不需要做任何其它事情。不需要注销或 [关闭 Ubuntu][12]。
+
+我希望这篇快速教程能帮助你在 Ubuntu 和其它 Linux 发行版中更改时区。如果你有问题或建议,请告诉我。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/change-timezone-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/install-ubuntu/
+[2]: tmp.bHvVztzy6d#change-timezone-gui
+[3]: tmp.bHvVztzy6d#change-timezone-command-line
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/Ubuntu_Change-_Time_Zone.png?ssl=1
+[5]: https://ubuntu.com/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/timezones_in_ubuntu.jpg?ssl=1
+[7]: https://linuxhandbook.com/date-command/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/applications_menu_settings.jpg?ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/settings_detail_ubuntu.jpg?ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu.jpg?ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/change_timezone_in_ubuntu_2.jpg?ssl=1
+[12]: https://itsfoss.com/schedule-shutdown-ubuntu/
diff --git a/published/20200119 One open source chat tool to rule them all.md b/published/20200119 One open source chat tool to rule them all.md
new file mode 100644
index 0000000000..1d2240f8f7
--- /dev/null
+++ b/published/20200119 One open source chat tool to rule them all.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11856-1.html)
+[#]: subject: (One open source chat tool to rule them all)
+[#]: via: (https://opensource.com/article/20/1/open-source-chat-tool)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+一个通过 IRC 管理所有聊天的开源聊天工具
+======
+
+> BitlBee 将多个聊天应用集合到一个界面中。在我们的 20 个使用开源提升生产力的系列的第九篇文章中了解如何设置和使用 BitlBee。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/05/123636dw8uw34mbkqzmw84.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 将所有聊天都放到 BitlBee 中
+
+即时消息和聊天已经成为网络世界的主要内容。如果你像我一样,你可能打开着五六个不同的应用,与你的朋友、同事和其他人交谈。要跟上所有这些聊天真的很痛苦。谢天谢地,你可以使用一个应用(好吧,是两个)将这些聊天整合到一个地方。
+
+![BitlBee on XChat][2]
+
+[BitlBee][3] 是作为服务运行的应用,它可以将标准的 IRC 客户端与大量的消息服务进行桥接。而且,由于它本质上是 IRC 服务器,因此你可以选择很多客户端。
+
+BitlBee 几乎包含在所有 Linux 发行版中。在 Ubuntu(我选择的 Linux 桌面)上,安装命令类似这样:
+
+```
+sudo apt install bitlbee-libpurple
+```
+
+在其他发行版上,包名可能略有不同,但搜索 “bitlbee” 应该就能看到。
+
+你会注意到我用的 libpurple 版的 BitlBee。这个版本能让我使用 [libpurple][4] 即时消息库中提供的所有协议,该库最初是为 [Pidgin][5] 开发的。
+
+安装完成后,服务应会自动启动。现在,使用一个 IRC 客户端(图片中为 [XChat][6]),我可以连接到端口 6667(标准 IRC 端口)上的服务。
+
+![Initial BitlBee connection][7]
+
+你将自动连接到控制频道 &bitlbee。此频道对于你是独一无二的,在多用户系统上每个人都有一个自己的。在这里你可以配置该服务。
+
+在控制频道中输入 `help`,你可以随时获得完整的文档。浏览它,然后使用 `register` 命令在服务器上注册帐户。
+
+```
+register <密码>
+```
+
+现在,你在服务器上所做的任何配置更改(IM 帐户、设置等)都将在输入 `save` 时保存。每当你连接时,使用 `identify <密码>` 登录到你的帐户并加载这些设置。
+
+![purple settings][8]
+
+命令 `help purple` 将显示 libpurple 提供的所有可用协议。例如,我安装了 [telegram-purple][9] 包,它增加了连接到 Telegram 的能力。我可以使用 `account add` 命令将我的电话号码作为帐户添加。
+
+```
+account add telegram +15555555
+```
+
+BitlBee 将显示它已添加帐户。你可以使用 `account list` 列出你的帐户。因为我只有一个帐户,我可以通过 `account 0 on` 登录,它会进行 Telegram 登录,列出我所有的朋友和聊天,接下来就能正常聊天了。
+
+但是,对于 Slack 这个最常见的聊天系统之一呢?你可以安装 [slack-libpurple][10] 插件,并且对 Slack 执行同样的操作。如果你不愿意编译和安装这些,这可能不适合你。
+
+按照插件页面上的说明操作,安装后重新启动 BitlBee 服务。现在,当你运行 `help purple` 时,应该会列出 Slack。像其他协议一样添加一个 Slack 帐户。
+
+```
+account add slack ksonney@myslack.slack.com
+account 1 set password my_legacy_API_token
+account 1 on
+```
+
+你知道么,你已经连接到 Slack 中,你可以通过 `chat add` 命令添加你感兴趣的 Slack 频道。比如:
+
+```
+chat add 1 happyparty
+```
+
+将 Slack 频道 happyparty 添加为本地频道 #happyparty。现在可以使用标准 IRC `/join` 命令访问该频道。这很酷。
+
+BitlBee 和 IRC 客户端帮助我把(大部分)聊天和即时消息集中在一个地方,也减少了我的分心,因为我不再需要查找并切换到某个刚刚有人找我的应用上。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-chat-tool
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
+[2]: https://opensource.com/sites/default/files/uploads/productivity_9-1.png (BitlBee on XChat)
+[3]: https://www.bitlbee.org/
+[4]: https://developer.pidgin.im/wiki/WhatIsLibpurple
+[5]: http://pidgin.im/
+[6]: http://xchat.org/
+[7]: https://opensource.com/sites/default/files/uploads/productivity_9-2.png (Initial BitlBee connection)
+[8]: https://opensource.com/sites/default/files/uploads/productivity_9-3.png (purple settings)
+[9]: https://github.com/majn/telegram-purple
+[10]: https://github.com/dylex/slack-libpurple
+[11]: mailto:ksonney@myslack.slack.com
diff --git a/published/20200120 Use this Twitter client for Linux to tweet from the terminal.md b/published/20200120 Use this Twitter client for Linux to tweet from the terminal.md
new file mode 100644
index 0000000000..6a6bf89b13
--- /dev/null
+++ b/published/20200120 Use this Twitter client for Linux to tweet from the terminal.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11858-1.html)
+[#]: subject: (Use this Twitter client for Linux to tweet from the terminal)
+[#]: via: (https://opensource.com/article/20/1/tweet-terminal-rainbow-stream)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用这个 Twitter 客户端在 Linux 终端中发推特
+======
+
+> 在我们的 20 个使用开源提升生产力的系列的第十篇文章中,使用 Rainbow Stream 跟上你的 Twitter 流而无需离开终端。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/06/113720bwi55j7xcccwwwi0.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 通过 Rainbow Stream 跟上 Twitter
+
+我喜欢社交网络和微博。它快速、简单,而且我可以与世界分享我的想法。当然,缺点是几乎所有非 Windows 的桌面客户端都只是对网站的封装。[Twitter][2] 有很多客户端,但我真正想要的是轻量、易于使用,最重要的是吸引人的客户端。
+
+![Rainbow Stream for Twitter][3]
+
+[Rainbow Stream][4] 是好看的 Twitter 客户端之一。它简单易用,并且可以通过 `pip3 install rainbowstream` 快速安装。第一次运行时,它将打开浏览器窗口,并让你通过 Twitter 授权。完成后,你将回到命令行,你的 Twitter 时间线将开始滚动。
+
+![Rainbow Stream first run][5]
+
+要了解的最重要的命令是 `p` 暂停推流、`r` 继续推流、`h` 获得帮助,以及 `t` 发布新的推文。例如,`h tweets` 将提供发送和回复推文的所有选项。另一个有用的帮助页面是 `h messages`,它提供了处理直接消息的命令,这是我妻子和我经常使用的东西。还有很多其它命令,我经常需要回头查看帮助来了解它们。
+
+随着时间线的滚动,你可以看到它有完整的 UTF-8 支持,并以正确的字体显示推文被转推以及喜欢的次数,图标和 emoji 也能正确显示。
+
+![Kill this love][6]
+
+关于 Rainbow Stream 的*最好*功能之一就是你不必放弃照片和图像。默认情况下,此功能是关闭的,但是你可以使用 `config` 命令尝试它。
+
+```
+config IMAGE_ON_TERM = true
+```
+
+此命令将任何图像渲染为 ASCII 艺术。如果你有大量照片流,它可能会有点多,但是我喜欢。它有非常复古的 1990 年代 BBS 感觉,我也确实喜欢 1990 年代的 BBS 场景。
+
+你还可以使用 Rainbow Stream 管理列表、屏蔽某人、拉黑某人、关注、取消关注以及 Twitter API 的所有其他功能。它还支持主题,因此你可以用喜欢的颜色方案自定义流。
+
+当我正在工作并且不想在浏览器上打开另一个选项卡时,Rainbow Stream 让我可以留在终端中。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/tweet-terminal-rainbow-stream
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
+[2]: https://twitter.com/home
+[3]: https://opensource.com/sites/default/files/uploads/productivity_10-1.png (Rainbow Stream for Twitter)
+[4]: https://rainbowstream.readthedocs.io/en/latest/
+[5]: https://opensource.com/sites/default/files/uploads/productivity_10-2.png (Rainbow Stream first run)
+[6]: https://opensource.com/sites/default/files/uploads/day10-image3_1.png (Kill this love)
diff --git a/published/20200121 Read Reddit from the Linux terminal.md b/published/20200121 Read Reddit from the Linux terminal.md
new file mode 100644
index 0000000000..2910769f83
--- /dev/null
+++ b/published/20200121 Read Reddit from the Linux terminal.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11869-1.html)
+[#]: subject: (Read Reddit from the Linux terminal)
+[#]: via: (https://opensource.com/article/20/1/open-source-reddit-client)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+在 Linux 终端中阅读 Reddit
+======
+
+> 在我们的 20 个使用开源提升生产力的系列的第十一篇文章中使用 Reddit 客户端 Tuir 在工作中短暂休息一下。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/09/104113w1ytjmlv1jly0j1t.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 Tuir 阅读 Reddit
+
+短暂休息对于保持生产力很重要。我休息时喜欢去的地方之一是 [Reddit][2],如果你愿意,这可能是一个很好的资源。我在那里发现了各种有关 DevOps、生产力、Emacs、鸡和 ChromeOS 项目的文章。这些讨论可能很有价值。我还关注了一些只有动物图片的子板,因为我喜欢动物(而不只是鸡)照片,有时经过长时间的工作后,我真正需要的是小猫照片。
+
+![/r/emacs in Tuir][3]
+
+当我阅读 Reddit(不仅仅是看动物宝宝的图片)时,我使用 [Tuir][4](Reddit 终端 UI)。Tuir 是功能齐全的 Reddit 客户端,可以在运行 Python 的任何系统上运行。安装是通过 `pip` 完成的,非常简单。
+
+首次运行时,Tuir 会进入 Reddit 默认文章列表。屏幕的顶部和底部有列出不同命令的栏。顶部栏显示你在 Reddit 上的位置,第二行显示根据 Reddit “Hot/New/Controversial” 等类别筛选的命令。按下筛选器前面的数字触发筛选。
+
+![Filtering by Reddit's "top" category][5]
+
+你可以使用箭头键或 `j`、`k`、`h` 和 `l` 键浏览列表,这与 Vi/Vim 使用的键相同。底部栏有用于应用导航的命令。如果要跳转到另一个子板,只需按 `/` 键打开提示,然后输入你要进入的子板名称。
+
+![Logging in][6]
+
+某些东西除非你登录,否则无法访问。如果你尝试执行需要登录的操作,那么 Tuir 就会提示你,例如发布新文章 (`c`)或赞成/反对 (`a` 和 `z`)。要登录,请按 `u` 键。这将打开浏览器以通过 OAuth2 登录,Tuir 将保存令牌。之后,你的用户名应出现在屏幕的右上方。
+
+Tuir 还可以打开浏览器来查看图像、加载链接等。稍作调整,它甚至可以在终端中显示图像(尽管我没能让它正常工作)。
+
+总的来说,我对 Tuir 在我需要休息时能快速跟上 Reddit 感到很满意。
+
+Tuir 是现已淘汰的 [RTV][7] 的两个分叉之一。另一个是 [TTRV][8],它还无法通过 `pip` 安装,但功能相同。我期待看到它们随着时间的推移脱颖而出。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-reddit-client
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
+[2]: https://www.reddit.com/
+[3]: https://opensource.com/sites/default/files/uploads/productivity_11-1.png (/r/emacs in Tuir)
+[4]: https://gitlab.com/ajak/tuir
+[5]: https://opensource.com/sites/default/files/uploads/productivity_11-2.png (Filtering by Reddit's "top" category)
+[6]: https://opensource.com/sites/default/files/uploads/productivity_11-3.png (Logging in)
+[7]: https://github.com/michael-lazar/rtv
+[8]: https://github.com/tildeclub/ttrv
diff --git a/published/20200122 Get your RSS feeds and podcasts in one place with this open source tool.md b/published/20200122 Get your RSS feeds and podcasts in one place with this open source tool.md
new file mode 100644
index 0000000000..ec53804d3a
--- /dev/null
+++ b/published/20200122 Get your RSS feeds and podcasts in one place with this open source tool.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11876-1.html)
+[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
+[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用此开源工具将你的 RSS 订阅源和播客收取到一处
+======
+
+> 在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 收取你的新闻 RSS 源和播客。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/10/162526wv5jdl0m12sw10md.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 Newsboat 访问你的 RSS 源和播客
+
+RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com,我还会关注 [SysAdvent][2] sysadmin 年度工具,还有一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。
+
+![Newsboat][3]
+
+[Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。
+
+安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew)中。安装后,只需在 `~/.newsboat/urls` 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入:
+
+```
+newsboat -i <你导出的 OPML 文件>
+```
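+
+`~/.newsboat/urls` 本身只是一个普通文本文件,每行一个订阅源地址,因此也可以直接在命令行里往里面追加(下面的地址仅作演示):
+
+```
+# 向订阅列表追加一个 RSS 源
+echo "https://opensource.com/feed" >> ~/.newsboat/urls
+```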
+
+添加订阅源后,Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动,使用 `r` 检查某个源中是否有新项目,使用 `R` 检查所有源中是否有新项目,按回车打开订阅源,并选择要阅读的文章。
+
+![Newsboat article list][6]
+
+但是,你不仅限于本地 URL 列表。Newsboat 还是 [Tiny Tiny RSS][7]、ownCloud 和 Nextcloud News 等新闻阅读服务以及一些 Google Reader 后续产品的客户端。[Newsboat 的文档][8]中涵盖了有关此的详细信息以及其他许多配置选项。
+
+![Reading an article in Newsboat][9]
+
+#### 播客
+
+Newsboat 还通过 Podboat 提供了[播客支持][10],Podboat 是一个附带的应用,它可帮助下载和排队播客节目。在 Newsboat 中查看播客源时,按下 `e` 将节目添加到你的下载队列中。所有信息将保存在 `~/.newsboat` 目录中的队列文件中。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat)执行此操作,也可以使用 `podboat -a` 让 Podboat 下载所有内容。作为播客人和播客听众,我认为这*真的*很方便。
+
+![Podboat][11]
+
+总体而言,Newsboat 有一些非常好的功能,并且是一些基于 Web 或桌面应用的不错的轻量级替代方案。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-rss-feed-reader
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
+[2]: https://sysadvent.blogspot.com/
+[3]: https://opensource.com/sites/default/files/uploads/productivity_12-1.png (Newsboat)
+[4]: https://newsboat.org
+[5]: http://mutt.org/
+[6]: https://opensource.com/sites/default/files/uploads/productivity_12-2.png (Newsboat article list)
+[7]: https://tt-rss.org/
+[8]: https://newsboat.org/releases/2.18/docs/newsboat.html
+[9]: https://opensource.com/sites/default/files/uploads/productivity_12-3.png (Reading an article in Newsboat)
+[10]: https://newsboat.org/releases/2.18/docs/newsboat.html#_podcast_support
+[11]: https://opensource.com/sites/default/files/uploads/productivity_12-4.png (Podboat)
diff --git a/published/20200123 How to stop typosquatting attacks.md b/published/20200123 How to stop typosquatting attacks.md
new file mode 100644
index 0000000000..728bf29f50
--- /dev/null
+++ b/published/20200123 How to stop typosquatting attacks.md
@@ -0,0 +1,105 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11899-1.html)
+[#]: subject: (How to stop typosquatting attacks)
+[#]: via: (https://opensource.com/article/20/1/stop-typosquatting-attacks)
+[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
+
+如何防范误植攻击
+======
+
+> 误植(Typosquatting)是一种引诱用户将敏感数据泄露给不法分子的方式,针对这种攻击方式,我们很有必要了解如何保护我们的组织、我们的开源项目以及我们自己。
+
+![Gears above purple clouds][1]
+
+除了常规手段以外,网络罪犯还会利用社会工程的方式,试图让安全意识较弱的人泄露私人信息或是有价值的证书。很多[网络钓鱼骗局][2]的实质都是攻击者伪装成信誉良好的公司或组织,然后借此大规模传播病毒或恶意软件。
+
+[误植][3](Typosquatting)就是其中一个常用的手法。它是一种社会工程学的攻击方式,通过使用一些合法网站的错误拼写的 URL 来引诱用户访问恶意网站,这样的做法既使真正的原网站遭受声誉上的损害,又诱使用户向这些恶意网站提交个人敏感信息。因此,网站的管理人员和用户双方都应该意识到这个问题带来的风险,并采取措施加以保护。
+
+一些由广大开发者在公共代码库中维护的开源软件通常都被认为具有安全上的优势,但当面临社会工程学攻击或恶意软件植入时,开源软件也需要注意以免受到伤害。
+
+下面就来关注一下误植攻击的发展趋势,以及这种攻击方式在未来可能对开源软件造成的影响。
+
+### 什么是误植?
+
+误植是一种非常特殊的网络犯罪形式,其背后通常是一个更大的网络钓鱼骗局。不法分子首先会购买和注册域名,而他们注册的域名通常是一个常用网站的错误拼写形式,例如在正确拼写的基础上添加一个额外的元音字母,又或者是将字母“i”替换成字母“l”。对于同一个正常域名,不法分子通常会注册数十个拼写错误的变体域名。
+
+用户一旦访问这样的域名,不法分子的目的就已经成功了一半。为此,他们会通过电子邮件的方式,诱导用户访问这样的伪造域名。伪造域名指向的页面中,通常都带有一个简单的登录界面,还会附上熟悉的被模仿网站的徽标,尽可能让用户认为自己访问的是真实的网站。
+
+如果用户没有识破这一个骗局,在页面中提交了诸如银行卡号、用户名、密码等敏感信息,这些数据就会被不法分子所完全掌控。进一步来看,如果这个用户在其它网站也使用了相同的用户名和密码,那就有同样受到波及的风险。受害者最终可能会面临身份被盗、信用记录被破坏等危险。
+
+### 最近的一些案例
+
+从网站的所有方来看,遭到误植攻击可能会带来一场公关危机。尽管网站域名的所有者没有参与到犯罪当中,但这会被认为是一次管理上的失职,因为域名所有者有主动防御误植攻击的责任,以避免这一类欺诈事件的发生。
+
+在几年之前就发生过[一起案件][4],很多健康保险客户收到了一封指向 we11point.com 的钓鱼电子邮件,其中 URL 里正确的字母“l”被换成了数字“1”,从而导致一批用户成为了这一次攻击的受害者。
+
+最初,与特定国家/地区相关的顶级域名是不允许随意注册的。但后来国际域名规则中放开这一限制之后,又兴起了一波新的误植攻击。例如最常见的一种手法就是注册一个与 .com 域名类似的 .om 域名,一旦在输入 URL 时不慎遗漏了字母 c 就会给不法分子带来可乘之机。
+
+### 网站如何防范误植攻击
+
+对于一个公司来说,最好的策略就是永远比误植攻击者早一步采取行动。
+
+也就是说,在注册域名的时候,不仅要注册自己商标名称的域名,最好还要同时注册可能由于拼写错误产生的其它域名。当然,没有太大必要把可能导致错误的所有顶级域名都注册掉,但至少要把可能导致错误的一些一级域名抢注下来。
+
+如果你有让用户跳转到一个第三方网站的需求,务必要让用户从你的官方网站上进行跳转,而不应该通过类似群发邮件的方式向用户告知 URL。因此,必须明确一个策略:在与用户通信交流时,不将用户引导到官方网站以外的地方去。在这样的情况下,如果有不法分子试图以你公司的名义发布虚假消息,用户将会从带有异样的页面或 URL 上有所察觉。
+
+你可以使用类似 [DNS Twist][5] 的开源工具来扫描公司正在使用的域名,它可以确定是否有相似的域名已被注册,从而暴露潜在的误植攻击。DNS Twist 可以在 Linux 系统上通过一系列的 shell 命令来运行。
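+
+DNS Twist 的基本用法很简单,下面是一个示意(域名仅为演示;扫描结果会列出各种形近域名的变体,以及哪些已经可以解析):
+
+```
+# 安装 dnstwist 并扫描与 example.com 形近的域名
+pip install dnstwist
+dnstwist example.com
+```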
+
+还有一些网络提供商(ISP)会将防护误植攻击作为他们网络产品的一部分。这就相当于一层额外的保护,如果用户不慎输入了带有拼写错误的 URL,就会被提示该页面已经被阻止并重定向到正确的域名。
+
+如果你是系统管理员,还可以考虑运行一个自建的 [DNS 服务器][6],以便通过黑名单的机制禁止对某些域名的访问。
+
+你还可以密切监控网站的访问流量,如果来自某个特定地区的用户被集体重定向到了虚假的站点,那么访问量将会发生骤降。这也是一个有效监控误植攻击的角度。
+
+防范误植攻击与防范其它网络攻击一样需要保持警惕。所有用户都希望网站的所有者能够清除那些与正主相似的假冒站点,如果这项工作没有做好,用户对你的信任程度就会每况愈下。
+
+### 误植对开源软件的影响
+
+因为开源项目的源代码是公开的,所以其中大部分项目都会进行安全和渗透测试。但错误是不可能完全避免的,如果你参与了开源项目,还是有需要注意的地方。
+
+当你收到一个不明来源的合并请求Merge Request或补丁时,必须在合并之前仔细检查,尤其是相关代码涉及到网络层面的时候。不要因为构建能通过就草草了事,一定要进行严格的检查和测试,以确保没有恶意代码混入正常的代码当中。
+
+同时,还要严格按照正确的方法使用域名,避免不法分子创建仿冒的下载站点并提供带有恶意代码的软件。可以通过如下所示的方法使用数字签名来确保你的软件没有被篡改:
+
+```
+gpg --armor --detach-sig \
+ --output example-0.0.1.tar.xz.asc \
+ example-0.0.1.tar.xz
+```
+
+同时给出你提供的文件的校验和:
+
+```
+sha256sum example-0.0.1.tar.xz > example-0.0.1.txt
+```
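+
+作为补充(非原文内容),你的用户可以用类似下面的命令验证下载的文件,这里假设签名文件名与上面生成的一致,并且已经导入了发布者的公钥:
+
+```
+# 验证 GPG 分离签名
+gpg --verify example-0.0.1.tar.xz.asc example-0.0.1.tar.xz
+
+# 核对 SHA-256 校验和(读取上面生成的 example-0.0.1.txt)
+sha256sum -c example-0.0.1.txt
+```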
+
+无论你的用户会不会去用上这些安全措施,你也应该提供这些必要的信息。因为只要有那么一个人留意到签名有异样,就能为你敲响警钟。
+
+### 总结
+
+人类犯错在所难免。世界上数百万人输入同一个网址时,总会有人出现拼写的错误。不法分子也正是抓住了这个漏洞才得以实施误植攻击。
+
+用抢注域名的方式去完全根治误植攻击也是不太现实的,我们更应该关注这种攻击的传播方式以减轻它对我们的影响。最好的保护就是和用户之间建立信任,并积极检测误植攻击的潜在风险。作为开源社区,我们更应该团结起来一起应对误植攻击。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/stop-typosquatting-attacks
+
+作者:[Sam Bocetta][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sambocetta
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
+[2]: https://www.cloudberrylab.com/resources/guides/types-of-phishing/
+[3]: https://en.wikipedia.org/wiki/Typosquatting
+[4]: https://www.menlosecurity.com/blog/-a-new-approach-to-end-typosquatting
+[5]: https://github.com/elceef/dnstwist
+[6]: https://opensource.com/article/17/4/build-your-own-name-server
diff --git a/published/20200123 Use this open source tool to get your local weather forecast.md b/published/20200123 Use this open source tool to get your local weather forecast.md
new file mode 100644
index 0000000000..e0022dfdee
--- /dev/null
+++ b/published/20200123 Use this open source tool to get your local weather forecast.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11879-1.html)
+[#]: subject: (Use this open source tool to get your local weather forecast)
+[#]: via: (https://opensource.com/article/20/1/open-source-weather-forecast)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用这个开源工具获取本地天气预报
+======
+
+> 在我们的 20 个使用开源提升生产力的系列的第十三篇文章中,使用 wego 在出门前了解你是否需要外套、雨伞或者防晒霜。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/11/140842a8qwomfeg9mwegg8.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 wego 了解天气
+
+过去十年我对我的职业最满意的地方之一是大多数时候是远程工作。尽管现实情况是我很多时候是在家里办公,但我可以在世界上任何地方工作。缺点是,离家时我会根据天气做出一些决定。在我居住的地方,“晴朗”可以表示从“酷热”、“低于零度”到“一小时内会小雨”。能够了解实际情况和快速预测非常有用。
+
+![Wego][2]
+
+[Wego][3] 是用 Go 编写的程序,可以获取并显示你的当地天气。如果你愿意,它甚至可以用闪亮的 ASCII 艺术效果进行渲染。
+
+要安装 `wego`,你需要确保在系统上安装了[Go][4]。之后,你可以使用 `go get` 命令获取最新版本。你可能还想将 `~/go/bin` 目录添加到路径中:
+
+```
+go get -u github.com/schachmat/wego
+export PATH=~/go/bin:$PATH
+wego
+```
+
+首次运行时,`wego` 会报告缺失 API 密钥。现在你需要决定一个后端。默认后端是 [Forecast.io][5],它是 [Dark Sky][6]的一部分。`wego` 还支持 [OpenWeatherMap][7] 和 [WorldWeatherOnline][8]。我更喜欢 OpenWeatherMap,因此我将在此向你展示如何设置。
+
+你需要在 OpenWeatherMap 中[注册 API 密钥][9]。注册是免费的,尽管免费的 API 密钥限制了一天可以查询的数量,但这对于普通用户来说应该没问题。得到 API 密钥后,将它放到 `~/.wegorc` 文件中。现在可以填写你的位置、语言以及使用公制、英制(英国/美国)还是国际单位制(SI)。OpenWeatherMap 可通过名称、邮政编码、坐标和 ID 确定位置,这是我喜欢它的原因之一。
+
+```
+# wego configuration for OEM
+aat-coords=false
+aat-monochrome=false
+backend=openweathermap
+days=3
+forecast-lang=en
+frontend=ascii-art-table
+jsn-no-indent=false
+location=Pittsboro
+owm-api-key=XXXXXXXXXXXXXXXXXXXXX
+owm-debug=false
+owm-lang=en
+units=imperial
+```
+
+现在,在命令行运行 `wego` 将显示接下来三天的当地天气。
+
+`wego` 还可以输出 JSON 以便程序使用,还可显示 emoji。你可以使用 `-f` 参数或在 `.wegorc` 文件中指定前端。
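+
+例如,可以这样切换输出前端(前端名称以 wego 的文档为准,这里只是演示):
+
+```
+# 以 JSON 格式输出,便于在脚本中进一步处理
+wego -f json
+
+# 使用 emoji 前端显示
+wego -f emoji
+```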
+
+![Wego at login][10]
+
+如果你想在每次打开 shell 或登录主机时查看天气,只需将 wego 添加到 `~/.bashrc`(我这里是 `~/.zshrc`)即可。
+
+[wttr.in][11] 项目是基于 wego 的一个 Web 封装。它提供了一些其他显示选项,并且可以在同名网站上直接使用。关于 wttr.in 的一件很酷的事情是,你可以使用 `curl` 获取一行天气信息。我有一个名为 `get_wttr` 的 shell 函数,用于获取当前简化的预报信息。
+
+```
+get_wttr() {
+ curl -s "wttr.in/Pittsboro?format=3"
+}
+```
+
+![weather tool for productivity][12]
+
+现在,在我离开家之前,我就可以通过命令行快速简单地获取我是否需要外套、雨伞或者防晒霜了。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-weather-forecast
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
+[2]: https://opensource.com/sites/default/files/uploads/productivity_13-1.png (Wego)
+[3]: https://github.com/schachmat/wego
+[4]: https://golang.org/doc/install
+[5]: https://forecast.io
+[6]: https://darksky.net
+[7]: https://openweathermap.org/
+[8]: https://www.worldweatheronline.com/
+[9]: https://openweathermap.org/api
+[10]: https://opensource.com/sites/default/files/uploads/productivity_13-2.png (Wego at login)
+[11]: https://github.com/chubin/wttr.in
+[12]: https://opensource.com/sites/default/files/uploads/day13-image3.png (weather tool for productivity)
diff --git a/published/20200124 3 handy command-line internet speed tests.md b/published/20200124 3 handy command-line internet speed tests.md
new file mode 100644
index 0000000000..58426cffea
--- /dev/null
+++ b/published/20200124 3 handy command-line internet speed tests.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11882-1.html)
+[#]: subject: (3 handy command-line internet speed tests)
+[#]: via: (https://opensource.com/article/20/1/internet-speed-tests)
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
+
+3 个方便的命令行网速测试工具
+======
+
+> 用这三个开源工具检查你的互联网和局域网速度。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/12/115915kk6hkax1vparkuvk.jpg)
+
+能够检验网络连接的速度,可以让你更好地掌控自己的计算机。可以让你在命令行中检查互联网和局域网速度的三个开源工具是 Speedtest、Fast 和 iPerf。
+
+### Speedtest
+
+[Speedtest][2] 是一个老牌工具。它用 Python 实现,并打包在 Apt 中,也可用 `pip` 安装。你可以将它作为命令行工具使用,也可以在 Python 脚本中调用。
+
+使用以下命令安装:
+
+```
+sudo apt install speedtest-cli
+```
+
+或者
+
+```
+sudo pip3 install speedtest-cli
+```
+
+然后使用命令 `speedtest` 运行它:
+
+```
+$ speedtest
+Retrieving speedtest.net configuration...
+Testing from CenturyLink (65.128.194.58)...
+Retrieving speedtest.net server list...
+Selecting best server based on ping...
+Hosted by CenturyLink (Cambridge, UK) [20.49 km]: 31.566 ms
+Testing download speed................................................................................
+Download: 68.62 Mbit/s
+Testing upload speed......................................................................................................
+Upload: 10.93 Mbit/s
+```
+
+它给你提供了互联网上传和下载的网速。它快速而且可脚本调用,因此你可以定期运行它,并将输出保存到文件或数据库中,以记录一段时间内的网络速度。
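+
+例如,可以定期运行下面这样的命令,把结果追加记录到一个 CSV 文件中(`--csv` 是 speedtest-cli 提供的选项,文件路径只是示例):
+
+```
+# 追加一条 CSV 格式的测速结果,便于长期跟踪网速变化
+speedtest --csv >> ~/speedtest-log.csv
+```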
+
+### Fast
+
+[Fast][3] 是 Netflix 提供的服务。它的网址是 [Fast.com][4],同时它有一个可通过 `npm` 安装的命令行工具:
+
+```
+npm install --global fast-cli
+```
+
+网站和命令行程序都提供了相同的基本界面:它是一个尽可能简单的速度测试:
+
+```
+$ fast
+
+ 82 Mbps ↓
+```
+
+该命令返回你的网络下载速度。要获取上传速度,请使用 `-u` 标志:
+
+```
+$ fast -u
+
+ ⠧ 80 Mbps ↓ / 8.2 Mbps ↑
+```
+
+### iPerf
+
+[iPerf][5] 是测试局域网速度(而不是像前两个工具那样测试互联网速度)的好方法。Debian、Raspbian 和 Ubuntu 用户可以使用 apt 安装它:
+
+```
+sudo apt install iperf
+```
+
+它还可用于 Mac 和 Windows。
+
+安装完成后,你需要在同一网络上的两台计算机上使用它(两台都必须安装 iPerf)。指定其中一台作为服务器。
+
+获取服务端计算机的 IP 地址:
+
+```
+ip addr show | grep inet.*brd
+```
+
+你的本地 IP 地址(假设为 IPv4 本地网络)以 `192.168` 或 `10` 开头。记下 IP 地址,以便可以在另一台计算机(指定为客户端的计算机)上使用它。
+
+在服务端启动 `iperf`:
+
+```
+iperf -s
+```
+
+它会等待来自客户端的传入连接。将另一台计算机作为客户端并运行此命令,将示例中的 IP 替换为服务端计算机的 IP:
+
+```
+iperf -c 192.168.1.2
+```
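+
+如果需要,还可以加上一些常用选项来延长测试时间或并行多条连接(下面的参数值仅为示例):
+
+```
+# -t 指定测试时长(秒),-P 指定并行连接数,-i 每隔几秒输出一次中间结果
+iperf -c 192.168.1.2 -t 30 -P 4 -i 5
+```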
+
+![iPerf][6]
+
+只需几秒钟即可完成测试,然后返回传输大小和计算出的带宽。我使用家用服务器作为服务端,在 PC 和笔记本电脑上进行了一些测试。我最近在房屋周围安装了六类线以太网,因此我的有线连接速度达到 1Gbps,但 WiFi 连接速度却低得多。
+
+![iPerf][7]
+
+你可能注意到其中一次测试记录到了 16Gbps。那是我用服务器对自身进行测试,因此它实际测试的只是写入磁盘的速度。该服务器的硬盘只能达到 16Gbps,而我的台式机可以达到 46Gbps,(较新的)笔记本则超过了 60Gbps,因为后两台机器用的都是固态硬盘。
+
+![iPerf][8]
+
+### 总结
+
+通过这些工具来了解你的网络速度是一项非常简单的任务。如果你更喜欢脚本或者在命令行中运行,上面的任何一个都能满足你。如果你要了解点对点的指标,iPerf 能满足你。
+
+你还使用其他哪些工具来衡量家庭网络的速度?请在评论中分享。
+
+本文最初发表在 Ben Nuttall 的 [Tooling blog][9] 上,并获准在此使用。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/internet-speed-tests
+
+作者:[Ben Nuttall][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bennuttall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/train-plane-speed-big-machine.png?itok=f377dXKs (Old train)
+[2]: https://github.com/sivel/speedtest-cli
+[3]: https://github.com/sindresorhus/fast-cli
+[4]: https://fast.com/
+[5]: https://iperf.fr/
+[6]: https://opensource.com/sites/default/files/uploads/iperf.png (iPerf)
+[7]: https://opensource.com/sites/default/files/uploads/iperf2.png (iPerf)
+[8]: https://opensource.com/sites/default/files/uploads/iperf3.png (iPerf)
+[9]: https://tooling.bennuttall.com/command-line-speedtest-tools/
diff --git a/published/20200124 Run multiple consoles at once with this open source window environment.md b/published/20200124 Run multiple consoles at once with this open source window environment.md
new file mode 100644
index 0000000000..33593eb788
--- /dev/null
+++ b/published/20200124 Run multiple consoles at once with this open source window environment.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11892-1.html)
+[#]: subject: (Run multiple consoles at once with this open source window environment)
+[#]: via: (https://opensource.com/article/20/1/multiple-consoles-twin)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用开源窗口环境 twin 一次运行多个控制台
+======
+
+> 在我们的 20 个使用开源提升生产力的系列的第十四篇文章中用 twin 模拟了老式的 DESQview 体验。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/14/193658tlbyft0lbu44f0s3.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 通过 twin 克服“一个屏幕,一个应用程序”的限制
+
+还有人记得 [DESQview][2] 吗?我们在 Windows、Linux 和 MacOS 中理所当然地可以在屏幕上同时运行多个程序,而 DESQview 赋予了 DOS 同样的功能。在我运营拨号 BBS 服务的初期,DESQview 是必需的,它使我能够让 BBS 在后台运行,同时在前台进行其他操作。例如,当有人拨打电话时,我可能正在开发新功能或设置新的外部程序而不会影响他们的体验。后来,在我早期做支持工作的时候,我可以同时运行我的工作电子邮件([MHS 上的 DaVinci 电子邮件][3])、支持单据系统和其他 DOS 程序。这是令人吃惊的!
+
+![twin][4]
+
+从那时起,运行多个控制台应用程序的功能已经发展了很多。但是 [tmux][5] 和 [Screen][6] 等应用仍然遵循“一个屏幕,一个应用”的显示方式。好吧,是的,tmux 具有屏幕拆分和窗格,但是不像 DESQview 那样具有将窗口“浮动”在其他窗口上的功能,就我个人而言,我怀念那个功能。
+
+让我们来看看 [twin][7](文本模式窗口环境)。我认为,这个相对年轻的项目是 DESQview 的精神继任者。它支持控制台和图形环境,并具有与会话脱离和重新接驳的功能。设置起来并不是那么容易,但是它可以在大多数现代操作系统上运行。
+
+Twin 是从源代码安装的(现在是这样)。但是首先,你需要安装所需的开发库。库名称将因操作系统而异。 以下示例显示了在我的 Ubuntu 19.10 系统中的情况。一旦安装了依赖库,请从 Git 中检出 twin 源代码,并运行 `./configure` 和 `make`,它们应自动检测所有内容并构建 twin:
+
+```
+sudo apt install libx11-dev libxpm-dev libncurses-dev zlib1g-dev libgpm-dev
+git clone git@github.com:cosmos72/twin.git
+cd twin
+./configure
+make
+sudo make install
+```
+
+注意:如果要在 MacOS 或 BSD 上进行编译,则需要在运行 `make` 之前在文件 `include/Tw/autoconf.h` 和 `include/twautoconf.h` 中注释掉 `#define socklen_t int`。这个问题应该在 [twin #57][9] 解决了。
+
+![twin text mode][10]
+
+第一次调用 twin 是一个挑战,你需要通过 `--hw` 参数告诉它正在使用哪种显示方式。例如,要启动文本模式的 twin,请输入 `twin --hw=tty,TERM=linux`。这里指定的 `TERM` 变量会覆盖你当前 shell 中的终端类型变量。要启动图形版本,请运行 `twin --hw=X@$DISPLAY`。在 Linux 上,twin 一般都“可以正常工作”;而在 MacOS 上,twin 基本上只能在终端中使用。
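+
+把文中提到的两条启动命令整理如下,方便直接复制使用:
+
+```
+# 文本模式(在控制台或终端中)
+twin --hw=tty,TERM=linux
+
+# 图形模式(在 X 下)
+twin --hw=X@$DISPLAY
+```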
+
+*真正*的乐趣是可以通过 `twattach` 和 `twdisplay` 命令接驳到正在运行的会话的功能。它们使你可以接驳到其他正在运行的 twin 会话。例如,在 Mac 上,我可以运行以下命令以接驳到演示机器上运行的 twin 会话:
+
+```
+twdisplay --twin@20days2020.local:0 --hw=tty,TERM=linux
+```
+
+![remote twin session][11]
+
+通过多做一些工作,你还可以将其用作登录外壳,以代替控制台上的 [getty][12]。这需要 gpm 鼠标守护程序、twdm 应用程序(twin 已自带)和一些额外的配置。在使用 systemd 的系统上,首先安装并启用 gpm(如果尚未安装),然后使用 `systemctl` 为控制台(我使用 tty6)创建一个覆盖配置。这些命令必须以 root 用户身份运行;在 Ubuntu 上,它们看起来像这样:
+
+```
+apt install gpm
+systemctl enable gpm
+systemctl start gpm
+systemctl edit getty@tty6
+```
+
+`systemctl edit getty@tty6` 命令将打开一个名为 `override.conf` 的空文件。它可以定义 systemd 服务设置以覆盖 tty6 的默认设置。将内容更新为:
+
+```
+[Service]
+ExecStart=
+ExecStart=-/usr/local/sbin/twdm --hw=tty@/dev/tty6,TERM=linux
+StandardInput=tty
+StandardOutput=tty
+```
+
+现在,重新加载 systemd 并重新启动 tty6 以获得 twin 登录提示界面:
+
+```
+systemctl daemon-reload
+systemctl restart getty@tty6
+```
+
+![twin][13]
+
+这将为登录的用户启动一个 twin 会话。我不建议在多用户系统中使用此会话,但是对于个人桌面来说,这是很酷的。并且,通过使用 `twattach` 和 `twdisplay`,你可以从本地 GUI 或远程桌面访问该会话。
+
+我认为 twin 真是太酷了。它还有一些细节不够完善,但是基本功能都已经有了,并且有一些非常好的文档。另外,它也使我可以在现代操作系统上稍解对 DESQview 式的体验的渴望。我希望随着时间的推移它会有所改进,希望你和我一样喜欢它。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/multiple-consoles-twin
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
+[2]: https://en.wikipedia.org/wiki/DESQview
+[3]: https://en.wikipedia.org/wiki/Message_Handling_System
+[4]: https://opensource.com/sites/default/files/uploads/productivity_14-1.png (twin)
+[5]: https://github.com/tmux/tmux/wiki
+[6]: https://www.gnu.org/software/screen/
+[7]: https://github.com/cosmos72/twin
+[8]: mailto:git@github.com
+[9]: https://github.com/cosmos72/twin/issues/57
+[10]: https://opensource.com/sites/default/files/uploads/productivity_14-2.png (twin text mode)
+[11]: https://opensource.com/sites/default/files/uploads/productivity_14-3.png (remote twin session)
+[12]: https://en.wikipedia.org/wiki/Getty_(Unix)
+[13]: https://opensource.com/sites/default/files/uploads/productivity_14-4.png (twin)
diff --git a/published/20200125 Use tmux to create the console of your dreams.md b/published/20200125 Use tmux to create the console of your dreams.md
new file mode 100644
index 0000000000..9e1d32dbd3
--- /dev/null
+++ b/published/20200125 Use tmux to create the console of your dreams.md
@@ -0,0 +1,117 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11900-1.html)
+[#]: subject: (Use tmux to create the console of your dreams)
+[#]: via: (https://opensource.com/article/20/1/tmux-console)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 tmux 创建你的梦想主控台
+======
+
+> 使用 tmux 可以做很多事情,尤其是再加上 tmuxinator 之后。在我们这个 2020 年用开源实现更高生产力的二十篇系列文章的第十五篇中来了解一下它们。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/16/220832bd4l1ag4tlqxlpr4.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 使用 tmux 和 tmuxinator 把一切都放到主控台上
+
+到目前为止,在本系列文章中,我已经撰写了有关单个应用程序和工具的文章。从今天开始,我将把它们放在一起进行全面设置以简化操作。让我们从命令行开始。为什么使用命令行?简而言之,在命令行上工作可以使我能够从运行 SSH 的任何位置访问许多这些工具和功能。我可以 SSH 进入我的一台个人计算机,并在工作计算机上运行与我的个人计算机上所使用的相同设置。我要使用的主要工具是 [tmux][2]。
+
+大多数人都只使用了 tmux 非常基础的功能,比如说在远程服务器上打开 tmux,然后启动进程,也许还会打开第二个会话以查看日志文件或调试信息,然后断开连接并在稍后返回。但是其实你可以使用 tmux 做很多工作。
+
+![tmux][3]
+
+首先,如果你有一个已有的 tmux 配置文件,请对其进行备份。tmux 的配置文件是 `~/.tmux.conf`。将其移动到另一个目录,例如 `~/tmp`。现在,用 Git 克隆 [Oh My Tmux][4] 项目。从该克隆目录中将 `.tmux.conf` 符号链接到你的家目录,并复制该克隆目录中的 `.tmux.conf.local` 文件到家目录中以进行调整:
+
+```
+cd ~
+mkdir ~/tmp
+mv ~/.tmux.conf ~/tmp/
+git clone https://github.com/gpakosz/.tmux.git
+ln -s ~/.tmux/.tmux.conf ./
+cp ~/.tmux/.tmux.conf.local ./
+```
+
+`.tmux.conf.local` 文件包含了本地设置和覆盖的设置。例如,我稍微更改了默认颜色,然后启用了 [Powerline][5] 分隔线。下面的代码段仅显示了我更改过的内容:
+
+```
+tmux_conf_theme_24b_colour=true
+tmux_conf_theme_focused_pane_bg='default'
+tmux_conf_theme_pane_border_style=fat
+tmux_conf_theme_left_separator_main='\uE0B0'
+tmux_conf_theme_left_separator_sub='\uE0B1'
+tmux_conf_theme_right_separator_main='\uE0B2'
+tmux_conf_theme_right_separator_sub='\uE0B3'
+#tmux_conf_battery_bar_symbol_full='◼'
+#tmux_conf_battery_bar_symbol_empty='◻'
+tmux_conf_battery_bar_symbol_full='♥'
+tmux_conf_battery_bar_symbol_empty='·'
+tmux_conf_copy_to_os_clipboard=true
+set -g mouse on
+```
+
+请注意,你不需要安装 Powerline,你只需要支持 Powerline 符号的字体即可。我在与控制台相关的所有内容中几乎都使用 [Hack Nerd Font][6],因为它易于阅读并且具有许多有用的额外符号。你还会注意到,我打开了操作系统剪贴板支持和鼠标支持。
+
+现在,当 tmux 启动时,底部的状态栏会以吸引人的颜色提供更多信息。`Ctrl` + `b` 仍然是输入命令的 “引导” 键,但其他一些进行了更改。现在水平拆分(顶部/底部)窗格为 `Ctrl` + `b` + `-`,垂直拆分为 `Ctrl` + `b` + `_`。启用鼠标模式后,你可以单击以在窗格之间切换,并拖动分隔线以调整其大小。打开新窗口仍然是 `Ctrl` + `b` + `n`,你现在可以单击底部栏上的窗口名称在它们之间进行切换。同样,`Ctrl` + `b` + `e` 将打开 `.tmux.conf.local` 文件以进行编辑。退出编辑器时,tmux 将重新加载配置,而不会重新加载其他任何内容。这很有用。
+
+到目前为止,我仅对功能和视觉显示进行了一些简单的更改,并增加了鼠标支持。现在,我将它设置为以一种有意义的方式启动我想要的应用程序,而不必每次都重新定位和调整它们的大小。为此,我将使用 [tmuxinator][7]。tmuxinator 是 tmux 的启动器,它允许你指定和管理布局以及使用 YAML 文件自动启动应用程序。要使用它,请启动 tmux 并创建要在其中运行程序的窗格。然后,使用 `Ctrl` + `b` + `n` 打开一个新窗口,并执行 `tmux list-windows`。你将获得有关布局的详细信息。
+
+![tmux layout information][8]
+
+请注意上面代码中的第一行,我在其中设置了四个窗格,每个窗格中都有一个应用程序。保存运行时的输出以供以后使用。现在,运行 `tmuxinator new 20days` 以创建名为 “20days” 的布局。这将显示一个带有默认布局文件的文本编辑器。它包含很多有用的内容,我建议你阅读所有选项。首先输入上方的布局信息以及所需的应用程序:
+
+```
+# /Users/ksonney/.config/tmuxinator/20days.yml
+name: 20days
+root: ~/
+windows:
+ - mail:
+ layout: d9da,208x60,0,0[208x26,0,0{104x26,0,0,0,103x26,105,0,5},208x33,0,27{104x33,0,27,1,103x33,105,27,4}]
+ panes:
+ - alot
+ - abook
+ - ikhal
+ - todo.sh ls +20days
+```
+
+注意空格缩进!与 Python 代码一样,空格和缩进关系到文件的解释方式。保存该文件,然后运行 `tmuxinator 20days`。你应该会得到四个窗格,分别是 [alot][9] 邮件程序、[abook][10]、ikhal(交互式 [khal][11] 的快捷方式)以及 [todo.txt][12] 中带有 “+20days” 标签的任何内容。
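+
+tmuxinator 还提供了一些常用的子命令来管理这些布局(以其文档为准,下面只是常见用法的示意):
+
+```
+tmuxinator start 20days   # 启动布局(等同于 tmuxinator 20days)
+tmuxinator stop 20days    # 结束对应的 tmux 会话
+tmuxinator edit 20days    # 重新编辑 20days.yml 配置
+```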
+
+![sample layout launched by tmuxinator][13]
+
+你还会注意到,底部栏上的窗口标记为 “Mail”。你可以单击该名称(以及其他命名的窗口)以跳到该视图。漂亮吧?我在同一个文件中还设置了名为 “Social” 的第二个窗口,包括 [Tuir][14]、[Newsboat][15]、连接到 [BitlBee][16] 的 IRC 客户端和 [Rainbow Stream][17]。
+
+tmux 是我跟踪所有事情的生产力动力之源,有了 tmuxinator,我不必在不断调整大小、放置和启动我的应用程序上费心。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/tmux-console
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hat drink at the computer)
+[2]: https://github.com/tmux/tmux
+[3]: https://opensource.com/sites/default/files/uploads/productivity_15-1.png (tumux)
+[4]: https://github.com/gpakosz/.tmux
+[5]: https://github.com/powerline/powerline
+[6]: https://www.nerdfonts.com/
+[7]: https://github.com/tmuxinator/tmuxinator
+[8]: https://opensource.com/sites/default/files/uploads/productivity_15-2.png (tmux layout information)
+[9]: https://opensource.com/article/20/1/organize-email-notmuch
+[10]: https://opensource.com/article/20/1/sync-contacts-locally
+[11]: https://opensource.com/article/20/1/open-source-calendar
+[12]: https://opensource.com/article/20/1/open-source-to-do-list
+[13]: https://opensource.com/sites/default/files/uploads/productivity_15-3.png (sample layout launched by tmuxinator)
+[14]: https://opensource.com/article/20/1/open-source-reddit-client
+[15]: https://opensource.com/article/20/1/open-source-rss-feed-reader
+[16]: https://opensource.com/article/20/1/open-source-chat-tool
+[17]: https://opensource.com/article/20/1/tweet-terminal-rainbow-stream
diff --git a/published/20200126 Use Vim to send email and check your calendar.md b/published/20200126 Use Vim to send email and check your calendar.md
new file mode 100644
index 0000000000..032329e76e
--- /dev/null
+++ b/published/20200126 Use Vim to send email and check your calendar.md
@@ -0,0 +1,113 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11908-1.html)
+[#]: subject: (Use Vim to send email and check your calendar)
+[#]: via: (https://opensource.com/article/20/1/vim-email-calendar)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+使用 Vim 发送邮件和检查日历
+======
+
+> 在 2020 年用开源实现更高生产力的二十种方式的第十六篇文章中,直接通过文本编辑器管理你的电子邮件和日历。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/19/185842eyz2znxx1yc2ctnc.jpg)
+
+去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
+
+### 用 Vim 做(几乎)所有事情,第一部分
+
+我经常使用两个文本编辑器 —— [Vim][2] 和 [Emacs][3]。为什么两者都用呢?它们有不同的使用场景,在本系列的后续几篇文章中,我将讨论其中的一些用例。
+
+![][4]
+
+好吧,为什么要在 Vim 中执行所有操作?因为如果有一个应用程序是我可以访问的每台计算机上都有的,那就是 Vim。如果你像我一样,可能已经在 Vim 中打发了很多时光。那么,为什么不将其用于**所有事情**呢?
+
+但是,在此之前,你需要做一些事情。首先是确保你的 Vim 具有 Ruby 支持。你可以使用 `vim --version | grep ruby`。如果结果不是 `+ruby`,则需要解决这个问题。这可能有点麻烦,你应该查看发行版的文档以获取正确的软件包。在 MacOS 上,用的是官方的 MacVim(不是 Brew 发行的),在大多数 Linux 发行版中,用的是 vim-nox 或 vim-gtk,而不是 vim-gtk3。
+
+我使用 [Pathogen][5] 自动加载插件和捆绑软件。如果你使用 [Vundle][6] 或其他 Vim 软件包管理器,则需要调整以下命令才能使用它。
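+
+如果你还没有设置过 Pathogen,下面是一个最小化的安装示意(基于 Pathogen 项目自述文件中的做法,路径以你的实际环境为准):
+
+```
+mkdir -p ~/.vim/autoload ~/.vim/bundle
+curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
+echo 'execute pathogen#infect()' >> ~/.vimrc
+```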
+
+#### 在 Vim 中管理你的邮件
+
+使 Vim 在你的生产力计划中发挥更大作用的一个很好的起点是使用它通过 [Notmuch][7] 发送和接收电子邮件,和使用 [abook][8] 访问你的联系人列表。你需要为此安装一些东西。下面的所有示例代码都运行在 Ubuntu 上,因此如果你使用其他发行版,则需要对此进行调整。通过以下步骤进行设置:
+
+```
+sudo apt install notmuch-vim ruby-mail
+curl -o ~/.vim/plugin/abook --create-dirs https://raw.githubusercontent.com/dcbaker/vim-abook/master/plugin/abook.vim
+```
+
+到目前为止,一切都很顺利。现在启动 Vim 并执行 `:NotMuch`。由于 `notmuch-vim` 所基于的邮件库版本较旧,可能会出现一些警告,但总的来说,Vim 现在将成为一个功能齐全的 Notmuch 邮件客户端。
+
+![Reading Mail in Vim][9]
+
+如果要搜索特定标签,请输入 `\t`,输入标签名称,然后按回车。这将拉出一个带有该标签的所有消息的列表。`\s` 组合键会弹出 `Search:` 提示符,可以对 Notmuch 数据库进行全面搜索。使用箭头键浏览消息列表,按回车键显示所选项目,然后输入 `\q` 退出当前视图。
+
+要撰写邮件,请使用 `\c` 按键。你将看到一条空白消息。这是 `abook.vim` 插件发挥作用的地方。按下 `Esc` 并输入 `:AbookQuery <查询内容>`,其中 `<查询内容>` 是你要查找的姓名或电子邮件地址的一部分。你将在 abook 数据库中得到与你的搜索匹配的条目列表。通过键入你想要的地址的编号,将其添加到电子邮件的地址行中。完成电子邮件的键入和编辑后,按 `Esc` 退出编辑模式,然后输入 `,s` 发送。
+
+如果要在 `:NotMuch` 启动时更改默认文件夹视图,则可以将变量 `g:notmuch_folders` 添加到你的 `.vimrc` 文件中:
+
+```
+let g:notmuch_folders = [
+ \ [ 'new', 'tag:inbox and tag:unread' ],
+ \ [ 'inbox', 'tag:inbox' ],
+ \ [ 'unread', 'tag:unread' ],
+ \ [ 'News', 'tag:@sanenews' ],
+ \ [ 'Later', 'tag:@sanelater' ],
+ \ [ 'Patreon', 'tag:@patreon' ],
+ \ [ 'LivestockConservancy', 'tag:livestock-conservancy' ],
+ \ ]
+```
+
+Notmuch 插件的文档中涵盖了更多设置,包括设置标签键和使用其它的邮件程序。
+
+#### 在 Vim 中查询日历
+
+![][10]
+
+遗憾的是,似乎没有使用 vCalendar 或 iCalendar 格式的 Vim 日历程序。有个 [Calendar.vim][11],做得很好。设置 Vim 通过以下方式访问你的日历:
+
+```
+cd ~/.vim/bundle
+git clone git@github.com:itchyny/calendar.vim.git
+```
+
+现在,你可以通过输入 `:Calendar` 在 Vim 中查看日历。你可以使用 `<` 和 `>` 键在年、月、周、日和时钟视图之间切换。如果要从一个特定的视图开始,请使用 `-view=` 标志告诉它你希望看到哪个视图。你也可以在任何视图中定位日期。例如,如果我想查看 2020 年 7 月 4 日这一周的情况,可以输入 `:Calendar -view=week 7 4 2020`。它的帮助信息非常好,可以使用 `?` 键查看。
+
+![][13]
+
+Calendar.vim 还支持 Google Calendar(我需要这个功能),但是在 2019 年 12 月,Google 禁用了它的访问权限。作者已在 [GitHub 上的这个议题][14]中发布了一种变通方法。
+
+这样,你的邮件、地址簿和日历就都在 Vim 里了。但这还不是全部,下一篇文章里你还会用 Vim 做更多的事情!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/vim-email-calendar
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
+[2]: https://www.vim.org/
+[3]: https://www.gnu.org/software/emacs/
+[4]: https://opensource.com/sites/default/files/uploads/day16-image1.png
+[5]: https://github.com/tpope/vim-pathogen
+[6]: https://github.com/VundleVim/Vundle.vim
+[7]: https://opensource.com/article/20/1/organize-email-notmuch
+[8]: https://opensource.com/article/20/1/sync-contacts-locally
+[9]: https://opensource.com/sites/default/files/uploads/productivity_16-2.png (Reading Mail in Vim)
+[10]: https://opensource.com/sites/default/files/uploads/day16-image3.png
+[11]: https://github.com/itchyny/calendar.vim
+[12]: mailto:git@github.com
+[13]: https://opensource.com/sites/default/files/uploads/day16-image4.png
+[14]: https://github.com/itchyny/calendar.vim/issues/156
diff --git a/published/20200126 What-s your favorite Linux distribution.md b/published/20200126 What-s your favorite Linux distribution.md
new file mode 100644
index 0000000000..d3d8e99b87
--- /dev/null
+++ b/published/20200126 What-s your favorite Linux distribution.md
@@ -0,0 +1,56 @@
+[#]: collector: (lujun9972)
+[#]: translator: (LazyWolfLin)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11867-1.html)
+[#]: subject: (What's your favorite Linux distribution?)
+[#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution)
+[#]: author: (Opensource.com https://opensource.com/users/admin)
+
+你最喜欢哪个 Linux 发行版?
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202002/08/004438ei1y4pp44pw4xy3w.jpg)
+
+你最喜欢哪个 Linux 发行版?虽然有所变化,但现在仍有数百种 [Linux 发行版][2]保持活跃且运作良好。发行版、包管理器和桌面的组合为 Linux 用户创建了无数客制化系统环境。
+
+我们询问了社区的作者们,哪个是他们的最爱以及原因。尽管回答中存在一些共性(由于各种原因,Fedora 和 Ubuntu 是最受欢迎的选择),但我们也听到一些惊奇的回答。以下是他们的一些回答:
+
+> “我使用 Fedora 发行版!我喜欢这样的社区,成员们共同创建一个令人惊叹的操作系统,展现了开源软件世界最伟大的造物。”——Matthew Miller
+
+> “我在家中使用 Arch。作为一名游戏玩家,我希望可以轻松使用最新版本的 Wine 和 GFX 驱动,同时最大限度地掌控我的系统。所以我选择一个滚动升级并且每个包都保持领先的发行版。”——Aimi Hobson
+
+> “NixOS,在业余爱好者市场中没有比这更合适的。”——Alexander Sosedkin
+
+> “我用过每个 Fedora 版本作为我的工作系统,这意味着我从第一个版本就开始使用了。我曾经问过自己,会不会有一天忘记自己用的是哪一个版本。这一天已经到来并过去了,那是什么时候的事呢?”——Hugh Brock
+
+> “通常,在我的家里和办公室里都有运行 Ubuntu、CentOS 和 Fedora 的机器,我依赖这些发行版来完成各种工作。Fedora 速度很快,而且可以获取最新版本的应用和库;Ubuntu 有大型社区支持,可以轻松使用;而当我们需要稳如磐石的服务器平台时,则用 CentOS。”——Steve Morris
+
+> “我最喜欢?对于社区以及如何为发行版构建软件包(从源码构建而非二进制文件),我选择 Fedora。对于可用包的范围和包的定义和开发,我选择 Debian。对于文档,我选择 Arch。对于新手的提问,我以前会推荐 Ubuntu,而现在会推荐 Fedora。”——Al Stone
+
+* * *
+
+自 2014 年以来,我们一直向社区提出这一问题。除了 2015 年 PCLinuxOS 出乎意料地领先之外,Ubuntu 往往每年都最受粉丝们的青睐。其他受欢迎的竞争者还包括 Fedora、Debian、Mint 和 Arch。在新的十年里,哪个发行版更吸引你?如果我们的投票列表中没有你最喜欢的选择,请在评论中告诉我们。
+
+下面是过去七年来你最喜欢的 Linux 发行版投票的总览。你可以在我们去年的年刊《[Opensource.com 上的十年最佳][3]》中看到它。[点击这里][3]下载完整版电子书!
+
+![Poll results for favorite Linux distribution through the years][4]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/favorite-linux-distribution
+
+作者:[Opensource.com][a]
+选题:[lujun9972][b]
+译者:[LazyWolfLin](https://github.com/LazyWolfLin)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/admin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
+[2]: https://distrowatch.com/
+[3]: https://opensource.com/downloads/2019-yearbook-special-edition
+[4]: https://opensource.com/sites/default/files/pictures/linux-distributions-through-the-years.jpg (favorite Linux distribution through the years)
+[5]: https://opensource.com/article/20/1/favorite-linux-distribution
diff --git a/published/20200129 4 cool new projects to try in COPR for January 2020.md b/published/20200129 4 cool new projects to try in COPR for January 2020.md
new file mode 100644
index 0000000000..d6c46e11ea
--- /dev/null
+++ b/published/20200129 4 cool new projects to try in COPR for January 2020.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11863-1.html)
+[#]: subject: (4 cool new projects to try in COPR for January 2020)
+[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
+[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
+
+COPR 仓库中 4 个很酷的新项目(2020.01)
+======
+
+![][1]
+
+COPR 是个人软件仓库的[集合][2],这些软件没有被收录在 Fedora 中。这或是因为某些软件不符合轻松打包的标准,或是因为它不符合 Fedora 的其他标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有该项目自己的签名背书。但是,这是一种尝试新软件或实验性软件的巧妙方式。
+
+本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR,请参阅 [COPR 用户文档][3]。
+
+### Contrast
+
+[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度并确定其是否满足 [WCAG][5] 中指定的要求。可以使用十六进制 RGB 代码或使用颜色选择器来选择颜色。除了显示对比度之外,Contrast 还会在以选定颜色为背景的区域上显示一段短文本,以便直观比较。
+
+![][6]
+
+#### 安装说明
+
+[仓库][7]当前为 Fedora 31 和 Rawhide 提供了 Contrast。要安装 Contrast,请使用以下命令:
+
+```
+sudo dnf copr enable atim/contrast
+sudo dnf install contrast
+```
+
+### Pamixer
+
+[Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。
+
+#### 安装说明
+
+[仓库][9]当前为 Fedora 31 和 Rawhide 提供了 Pamixer。要安装 Pamixer,请使用以下命令:
+
+```
+sudo dnf copr enable opuk/pamixer
+sudo dnf install pamixer
+```
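+
+安装后可以像下面这样使用它(选项名称以 pamixer 自带的 `--help` 输出为准):
+
+```
+pamixer --get-volume      # 显示默认输出设备的当前音量
+pamixer --increase 5      # 音量增加 5%
+pamixer --decrease 5      # 音量减少 5%
+pamixer --toggle-mute     # 静音/取消静音
+pamixer --list-sinks      # 列出所有输出设备
+```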
+
+### PhotoFlare
+
+[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外,PhotoFlare 可以批量编辑图片,来对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
+
+![][11]
+
+#### 安装说明
+
+[仓库][12]当前为 Fedora 31 提供了 PhotoFlare。要安装 PhotoFlare,请使用以下命令:
+
+```
+sudo dnf copr enable adriend/photoflare
+sudo dnf install photoflare
+```
+
+### Tdiff
+
+[Tdiff][13] 是用于比较两个文件树的命令行工具。除了显示某些文件或目录仅存在于一棵树中之外,tdiff 还显示文件大小、类型和内容,所有者用户和组 ID、权限、修改时间等方面的差异。
+
+#### 安装说明
+
+[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供了 tdiff。要安装 tdiff,请使用以下命令:
+
+```
+sudo dnf copr enable fif/tdiff
+sudo dnf install tdiff
+```
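+
+tdiff 的基本用法是给出两个要比较的路径(下面的目录名只是示例):
+
+```
+# 比较两个目录树在文件内容、大小、权限、属主等方面的差异
+tdiff /path/to/tree1 /path/to/tree2
+```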
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
+
+作者:[Dominik Turecek][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/dturecek/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
+[2]: https://copr.fedorainfracloud.org/
+[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
+[4]: https://gitlab.gnome.org/World/design/contrast
+[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
+[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
+[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
+[8]: https://github.com/cdemoulins/pamixer
+[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
+[10]: https://photoflare.io/
+[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
+[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
+[13]: https://github.com/F-i-f/tdiff
+[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/
diff --git a/published/20200129 Joplin- The True Open Source Evernote Alternative.md b/published/20200129 Joplin- The True Open Source Evernote Alternative.md
new file mode 100644
index 0000000000..106d670f11
--- /dev/null
+++ b/published/20200129 Joplin- The True Open Source Evernote Alternative.md
@@ -0,0 +1,113 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11896-1.html)
+[#]: subject: (Joplin: The True Open Source Evernote Alternative)
+[#]: via: (https://itsfoss.com/joplin/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Joplin:真正的 Evernote 开源替代品
+======
+
+> Joplin 是一个开源笔记记录和待办应用。你可以将笔记组织到笔记本中并标记它们。Joplin 还提供网络剪贴板来保存来自互联网的文章。
+
+### Joplin:开源笔记管理器
+
+![][4]
+
+如果你喜欢 [Evernote][2],那么你对开源软件 [Joplin][3] 应该也不会感到陌生。
+
+Joplin 是一个优秀的开源笔记应用,拥有丰富的功能。你可以用它记笔记、记录待办事项,并且通过连接 Dropbox 和 NextCloud 等云服务来跨设备同步笔记。同步过程由端到端加密保护。
+
+Joplin 还有一个 Web 剪贴板,能让你将网页另存为笔记。这个网络剪贴板可用于 Firefox 和 Chrome/Chromium 浏览器。
+
+Joplin 可以导入 enex 格式的 Evernote 文件,这让从 Evernote 切换变得容易。
+
+因为数据由你自己保存,所以你可以用 Joplin 格式或者原始格式导出所有文件。
+
+### Joplin 的功能
+
+![][1]
+
+以下是 Joplin 的所有功能列表:
+
+* 将笔记保存到笔记本和子笔记本中,以便更好地组织
+* 创建待办事项清单
+* 可以标记和搜索笔记
+* 离线优先,因此即使没有互联网连接,所有数据始终在设备上可用
+* Markdown 笔记支持图片、数学符号和复选框
+* 支持附件
+* 可在桌面、移动设备和终端(CLI)使用
+* 可在 Firefox 和 Chrome 使用[网页剪切板][5]
+* 端到端加密
+* 保留笔记历史
+* 根据名称、时间等对笔记进行排序
+* 可与 [Nextcloud][7]、Dropbox、WebDAV 和 OneDrive 等各种[云服务][6]同步
+* 从 Evernote 导入文件
+* 导出 JEX 文件(Joplin 导出格式)和原始文件
+* 支持笔记、待办事项、标签和笔记本
+* 任意跳转功能
+* 支持移动设备和桌面应用通知
+* 地理位置支持
+* 支持多种语言
+* 外部编辑器支持:在 Joplin 中一键用你最喜欢的编辑器打开笔记
+
+### 在 Linux 和其它平台上安装 Joplin
+
+![][10]
+
+[Joplin][11] 是一个跨平台应用,可用于 Linux、macOS 和 Windows。在移动设备上,你可以[获取 APK 文件][12]将其安装在 Android 和基于 Android 的 ROM 上。你也可以[从谷歌 Play 商店下载][13]。
+
+在 Linux 中,你可以获取 Joplin 的 [AppImage][14] 文件,并作为可执行文件运行。你需要为下载的文件授予执行权限。
+
+- [下载 Joplin][15]
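+
+一个典型的做法如下(文件名以实际下载到的版本为准,这里只是示例):
+
+```
+# 赋予 AppImage 文件可执行权限并运行
+chmod +x Joplin-*.AppImage
+./Joplin-*.AppImage
+```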
+
+### 体验 Joplin
+
+Joplin 中的笔记使用 Markdown,但你不需要了解它。编辑器的顶部面板能让你以图形方式选择项目符号、标题、图像、链接等。
+
+虽然 Joplin 提供了许多有趣的功能,但你需要自己去尝试。例如,默认情况下 Web 剪切板并未启用,我需要自己去找该如何打开它。
+
+你需要从桌面应用启用剪切板。在顶部菜单中,进入 “Tools->Options”。你可以在此处找到 Web 剪切板选项:
+
+![Enable Web Clipper from the desktop application first][16]
+
+它的 Web 剪切板不如 Evernote 的 Web 剪切板聪明,后者可以以图形方式剪辑网页文章的一部分。但是,也足够了。
+
+这是一个在活跃开发中的开源软件,我希望它随着时间的推移得到更多的改进。
+
+### 总结
+
+如果你正在寻找一个拥有 Web 剪切板的不错的笔记应用,你可以试试 Joplin。如果你喜欢它并会继续使用,可以尝试通过捐赠或改进代码和文档来帮助 Joplin 的开发。我以 It's FOSS 的名义[捐赠][17]了 25 欧元。
+
+如果你曾经使用过 Joplin,或者仍在使用它,你对此的体验如何?如果你用的是其他笔记应用,你会切换到 Joplin 么?欢迎分享你的观点。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/joplin/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_logo.png?ssl=1
+[2]: https://evernote.com/
+[3]: https://joplinapp.org/
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_featured.jpg?ssl=1
+[5]: https://joplinapp.org/clipper/
+[6]: https://itsfoss.com/cloud-services-linux/
+[7]: https://nextcloud.com/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_ubuntu.jpg?ssl=1
+[11]: https://github.com/laurent22/joplin
+[12]: https://itsfoss.com/download-apk-ubuntu/
+[13]: https://play.google.com/store/apps/details?id=net.cozic.joplin&hl=en_US
+[14]: https://itsfoss.com/use-appimage-linux/
+[15]: https://github.com/laurent22/joplin/releases
+[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/joplin_web_clipper.jpg?ssl=1
+[17]: https://itsfoss.com/donations-foss/
diff --git a/published/20200129 Showing memory usage in Linux by process and user.md b/published/20200129 Showing memory usage in Linux by process and user.md
new file mode 100644
index 0000000000..e1cd22286b
--- /dev/null
+++ b/published/20200129 Showing memory usage in Linux by process and user.md
@@ -0,0 +1,189 @@
+[#]: collector: (lujun9972)
+[#]: translator: (mengxinayan)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11849-1.html)
+[#]: subject: (Showing memory usage in Linux by process and user)
+[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+查看 Linux 系统中进程和用户的内存使用情况
+======
+
+> 有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令。
+
+![Fancycrave][1]
+
+有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 `free`、`ps`。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 `top`。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
+
+下面是一些按照进程查看内存使用情况的命令:
+
+### 按照进程查看内存使用情况
+
+#### 使用 top
+
+`top` 是最好的查看内存使用情况的命令之一。为了查看哪个进程使用着最多的内存,一个简单的办法就是启动 `top`,然后按下 `shift+m`,这样便可以按内存占用百分比从高到低排列进程。当你按下了 `shift+m`,你的 `top` 应该会得到类似于下面这样的输出结果:
+
+```
+$top
+top - 09:39:34 up 5 days, 3 min, 3 users, load average: 4.77, 4.43, 3.72
+Tasks: 251 total, 3 running, 247 sleeping, 1 stopped, 0 zombie
+%Cpu(s): 50.6 us, 35.9 sy, 0.0 ni, 13.4 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
+MiB Mem : 5944.4 total, 128.9 free, 2509.3 used, 3306.2 buff/cache
+MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem
+
+ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
+ 400 nemo 20 0 3309580 550188 168372 S 0.3 9.0 1:33.27 Web Content
+32469 nemo 20 0 3492840 447372 163296 S 7.3 7.3 3:55.60 firefox
+32542 nemo 20 0 2845732 433388 140984 S 6.0 7.1 4:11.16 Web Content
+ 342 nemo 20 0 2848520 352288 118972 S 10.3 5.8 4:04.89 Web Content
+ 2389 nemo 20 0 1774412 236700 90044 S 39.7 3.9 9:32.64 vlc
+29527 nemo 20 0 2735792 225980 84744 S 9.6 3.7 3:02.35 gnome-shell
+30497 nemo 30 10 1088476 159636 88884 S 0.0 2.6 0:11.99 update-manager
+30058 nemo 20 0 1089464 140952 33128 S 0.0 2.3 0:04.58 gnome-software
+32533 nemo 20 0 2389088 104712 79544 S 0.0 1.7 0:01.43 WebExtensions
+ 2256 nemo 20 0 1217884 103424 31304 T 0.0 1.7 0:00.28 vlc
+ 1713 nemo 20 0 2374396 79588 61452 S 0.0 1.3 0:00.49 Web Content
+29306 nemo 20 0 389668 74376 54340 S 2.3 1.2 0:57.25 Xorg
+32739 nemo 20 0 289528 58900 34480 S 1.0 1.0 1:04.08 RDD Process
+29732 nemo 20 0 789196 57724 42428 S 0.0 0.9 0:00.38 evolution-alarm
+ 2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd
+```
+
+注意 `%MEM` 排序。列表的大小取决于你的窗口大小,但是占据着最多的内存的进程将会显示在列表的顶端。
+
+#### 使用 ps
+
+`ps` 命令中的一列用来展示每个进程的内存使用情况。为了展示和查看哪个进程使用着最多的内存,你可以将 `ps` 命令的结果传递给 `sort` 命令。下面是一个有用的示例:
+
+```
+$ ps aux | sort -rnk 4 | head -5
+nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window
+nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
+```
+
+在上面的例子中(文中已截断),`sort` 命令使用了 `-r` 选项(反转)、`-n` 选项(数字值)、`-k` 选项(关键字),使 `sort` 命令对 `ps` 命令的结果按照第四列(内存使用情况)中的数字逆序进行排列并输出。如果我们首先显示 `ps` 命令的标题,那么将会便于查看。
+
+```
+$ ps aux | head -1; ps aux | sort -rnk 4 | head -5
+USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
+nemo 400 3.4 9.2 3309580 563336 ? Sl 08:59 1:36 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 32469 8.2 7.7 3492840 469516 ? Sl 08:54 4:15 /usr/lib/firefox/firefox -new-window
+nemo 32542 8.9 7.6 2875428 462720 ? Sl 08:55 4:36 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
+```
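+
+作为补充,如果你更关心实际驻留内存(RSS,即第 6 列,单位为 KB)而不是百分比,也可以按第 6 列排序:
+
+```
+# 按 RSS 降序排列,显示占用驻留内存最多的 5 个进程
+ps aux | head -1; ps aux | sort -rnk 6 | head -5
+```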
+
+如果你喜欢这个命令,你可以用下面的命令为他指定一个别名,如果你想一直使用它,不要忘记把该命令添加到你的 `~/.bashrc` 文件中。
+
+```
+$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
+```
+
+下面是一些根据用户查看内存使用情况的命令:
+
+### 按用户查看内存使用情况
+
+#### 使用 top
+
+按照用户检查内存使用情况会更复杂一些,因为你需要找到一种方法把用户所拥有的所有进程统计为单一的内存使用量。
+
+如果你只想查看单个用户的进程情况,`top` 命令可以采用与上文中同样的方法进行使用。只需要添加 `-U` 选项并在其后面指定你要查看的用户名,然后按下 `shift+m` 便可以按照内存使用由多到少进行查看。
+
+```
+$ top -U nemo
+top - 10:16:33 up 5 days, 40 min, 3 users, load average: 1.91, 1.82, 2.15
+Tasks: 253 total, 2 running, 250 sleeping, 1 stopped, 0 zombie
+%Cpu(s): 28.5 us, 36.8 sy, 0.0 ni, 34.4 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st
+MiB Mem : 5944.4 total, 224.1 free, 2752.9 used, 2967.4 buff/cache
+MiB Swap: 2048.0 total, 2042.7 free, 5.2 used. 2812.0 avail Mem
+
+ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
+ 400 nemo 20 0 3315724 623748 165440 S 1.0 10.2 1:48.78 Web Content
+32469 nemo 20 0 3629380 607492 161688 S 2.3 10.0 6:06.89 firefox
+32542 nemo 20 0 2886700 404980 136648 S 5.6 6.7 6:50.01 Web Content
+ 342 nemo 20 0 2922248 375784 116096 S 19.5 6.2 8:16.07 Web Content
+ 2389 nemo 20 0 1762960 234644 87452 S 0.0 3.9 13:57.53 vlc
+29527 nemo 20 0 2736924 227260 86092 S 0.0 3.7 4:09.11 gnome-shell
+30497 nemo 30 10 1088476 156372 85620 S 0.0 2.6 0:11.99 update-manager
+30058 nemo 20 0 1089464 138160 30336 S 0.0 2.3 0:04.62 gnome-software
+32533 nemo 20 0 2389088 102532 76808 S 0.0 1.7 0:01.79 WebExtensions
+```
+
+#### 使用 ps
+
+你依旧可以使用 `ps` 命令通过内存使用情况来排列某个用户的进程。在这个例子中,我们将使用 `grep` 命令来筛选得到某个用户的所有进程。
+
+```
+$ ps aux | head -1; ps aux | grep ^nemo| sort -rnk 4 | more
+USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
+nemo 32469 7.1 11.5 3724364 701388 ? Sl 08:54 7:21 /usr/lib/firefox/firefox -new-window
+nemo 400 2.0 8.9 3308556 543232 ? Sl 08:59 2:01 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9086 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni/usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 32542 7.9 7.1 2903084 436196 ? Sl 08:55 8:07 /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 1 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 342 10.8 7.0 2941056 426484 ? Rl 08:59 10:45 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 8763 -prefMapSize 210653 -parentBuildID 20200107212822 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 32469 true tab
+nemo 2389 16.9 3.8 1762960 234644 pts/1 Sl+ 09:15 13:57 vlc videos/edge_computing.mp4
+nemo 29527 3.9 3.7 2736924 227448 ? Ssl 08:50 4:11 /usr/bin/gnome-shell
+```
+
+### 搭配使用 ps 和其他命令
+
+如果你想比较某个用户与其他用户的内存使用情况,事情就会更复杂一些。在这种情况下,按用户汇总内存使用量并排序是一个不错的方法,但是它需要做更多的工作,并涉及到许多命令。在下面的脚本中,我们使用 `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` 命令得到用户列表,其中也包含了诸如 `syslog` 之类的系统用户。然后对每个用户,我们使用 `awk` 命令汇总其所有进程的内存占用。最后一步,我们按照从大到小的顺序显示每个用户总的内存使用量。
+
+```
+#!/bin/bash
+
+stats=""
+echo "% user"
+echo "============"
+
+# collect the data
+for user in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u`
+do
+ stats="$stats\n`ps aux | egrep ^$user | awk 'BEGIN{total=0}; \
+ {total += $4};END{print total,$1}'`"
+done
+
+# sort data numerically (largest first)
+echo -e $stats | grep -v ^$ | sort -rn | head
+```
+
+这个脚本的输出可能如下:
+
+```
+$ ./show_user_mem_usage
+% user
+============
+69.6 nemo
+5.8 root
+0.5 www-data
+0.3 shs
+0.2 whoopsie
+0.2 systemd+
+0.2 colord
+0.2 clamav
+0 syslog
+0 rtkit
+```
+
+在 Linux 中有许多方法可以报告内存使用情况。借助一些精心设计的工具和命令,你可以方便地查看并找出占用内存最多的进程或用户。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[萌新阿岩](https://github.com/mengxinayan)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/chips_processors_memory_cards_by_fancycrave_cc0_via_unsplash_1200x800-100760955-large.jpg
+[2]: https://creativecommons.org/publicdomain/zero/1.0/
+[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/published/20200131 Intro to the Linux command line.md b/published/20200131 Intro to the Linux command line.md
new file mode 100644
index 0000000000..49550210fe
--- /dev/null
+++ b/published/20200131 Intro to the Linux command line.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: (qianmingtian)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11864-1.html)
+[#]: subject: (Intro to the Linux command line)
+[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Linux 命令行简介
+======
+
+> 下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。
+
+![](https://images.idgesg.net/images/article/2020/01/cmd_linux-control_linux-logo_-100828420-large.jpg)
+
+如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面前兴奋地输入命令来使用大量工具和应用。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
+
+首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。如何做到这一点将取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 “Activities” 选项卡,它将打开一个选项列表和一个用于输入命令的小窗口(类似 “cmd” 为你打开的窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 `Ctrl+Alt+t` 键打开命令窗口。
+
+如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
+
+一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 `$` 或者像 `user@system:~$` 这样的东西,但它意味着系统已经准备好为你运行命令了。
+
+一旦你走到这一步,就应该开始输入命令了。下面是一些要首先尝试的命令,以及这里是一些特别有用的命令的 [PDF][4] 和适合打印和做成卡片的双面命令手册。
+
+
+| 命令 | 用途 |
+|---|---|
+| `pwd` | 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录) |
+| `ls` | 列出我的文件 |
+| `ls -a` | 列出我更多的文件(包括隐藏文件) |
+| `ls -al` | 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限) |
+| `who` | 告诉我谁登录了(如果只有你,不要失望) |
+| `date` | 提醒我今天是星期几(也显示日期和时间) |
+| `ps` | 列出我正在运行的进程(可能只是你的 shell 和 `ps` 命令) |
+
+一旦你从命令行角度习惯了 Linux 主目录之后,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛:
+
+| 命令 | 用途 |
+|---|---|
+| `cd /tmp` | 移动到其他文件夹(本例中,打开 `/tmp` 文件夹) |
+| `ls` | 列出当前位置的文件 |
+| `cd` | 回到主目录(不带参数的 `cd` 总是能将你带回到主目录) |
+| `cat .bashrc` | 显示文件的内容(本例中显示 `.bashrc` 文件的内容) |
+| `history` | 显示最近执行的命令 |
+| `echo hello` | 跟自己说 “hello” |
+| `cal` | 显示当前月份的日历 |
+
+要了解为什么高级 Linux 用户如此喜欢命令行,你将需要尝试其他一些功能,例如重定向和管道。“重定向”是当你获取命令的输出并将其放到文件中而不是在屏幕上显示时。“管道”是指你将一个命令的输出发送给另一条将以某种方式对其进行操作的命令。这是可以尝试的命令:
+
+| 命令 | 用途 |
+|---|---|
+| `echo "echo hello" > tryme` | 创建一个新的文件并将 “echo hello” 写入该文件 |
+| `chmod 700 tryme` | 使新建的文件可执行 |
+| `tryme` | 运行新文件(它应当运行文件中包含的命令并且显示 “hello” )|
+| `ps aux` | 显示所有运行中的程序 |
+| `ps aux \| grep $USER` | 显示所有运行中的程序,但是限制输出的内容包含你的用户名 |
+| `echo $USER` | 使用环境变量显示你的用户名 |
+| `whoami` | 使用命令显示你的用户名 |
+| `who \| wc -l` | 计数所有当前登录的用户数目 |
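+
+作为补充练习,下面把重定向和管道结合起来使用(文件名 `pslog.txt` 只是一个示例):
+
+```
+# 把当前进程列表重定向保存到文件
+ps aux > pslog.txt
+
+# 用管道统计文件中包含 bash 的行数
+cat pslog.txt | grep bash | wc -l
+```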
+
+### 总结
+
+一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。你可能会发现,Linux 比你想象的更强大、也更好用。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[qianmingtian][c]
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[c]: https://github.com/qianmingtian
+[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
+[2]: https://creativecommons.org/publicdomain/zero/1.0/
+[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
+[5]: https://www.networkworld.com/newsletters/signup.html
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/published/20200202 4 Key Changes to Look Out for in Linux Kernel 5.6.md b/published/20200202 4 Key Changes to Look Out for in Linux Kernel 5.6.md
new file mode 100644
index 0000000000..e6bf7ef27a
--- /dev/null
+++ b/published/20200202 4 Key Changes to Look Out for in Linux Kernel 5.6.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: (LazyWolfLin)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11853-1.html)
+[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
+[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+四大亮点带你看 Linux 内核 5.6
+======
+
+当我们还在体验 Linux 5.5 稳定发行版带来更好的硬件支持时,Linux 5.6 已经来了。
+
+说实话,Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将自带 Linux 5.5,你也需要切实了解一下 Linux 5.6 内核为我们提供了什么。
+
+我将在本文中重点介绍 Linux 5.6 发布版中值得期待的关键更改和功能:
+
+### Linux 5.6 功能亮点
+
+![][1]
+
+当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容:
+
+#### 1、支持 WireGuard
+
+WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取代 [OpenVPN][2]。
+
+你可以在官网上进一步了解 [WireGuard][3] 的优点。当然,如果你使用过它,那你可能已经知道它比 OpenVPN 更好的原因。
+
+同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。
+
+#### 2、支持 USB4
+
+Linux 5.6 也将支持 **USB4**。
+
+如果你不了解 USB 4.0 (USB4),你可以阅读这份[文档][5]。
+
+根据文档,“USB4 将使 USB 的最大带宽增大一倍并支持多并发数据和显示协议multiple simultaneous data and display protocols。”
+
+另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3,这将是一个好消息。
+
+#### 3、使用 LZO/LZ4 压缩 F2FS 数据
+
+Linux 5.6 也将支持使用 LZO/LZ4 算法压缩 F2FS 数据。
+
+换句话说,这只是 Linux 文件系统的一种新压缩技术,你可以针对特定的文件扩展名选用它。
+
+#### 4、解决 32 位系统的 2038 年问题
+
+Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。
+
+这意味着对于 32 位系统,时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。
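+
+你可以用 GNU `date` 直观地看到这个临界点(`-d @秒数` 表示自 1970-01-01 起经过的秒数):
+
+```
+# 32 位有符号时间值所能表示的最后一刻
+date -u -d @2147483647
+# 输出:Tue Jan 19 03:14:07 UTC 2038
+```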
+
+Linux kernel 5.6 解决了这个问题,因此 32 位系统也可以运行到 2038 年以后。
+
+#### 5、改进硬件支持
+
+很显然,在下一个发布版中,硬件支持也将继续提升。而支持新式无线外设的计划也同样是优先的。
+
+新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。
+
+除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPU、NVIDIA GPU 和 Intel Tiger Lake 芯片组的支持)。
+
+#### 6、其他更新
+
+此外,Linux 5.6 中除了上述主要的新增功能或支持外,下一个内核版本也将进行其他一些改进:
+
+ * 改进 AMD Zen 的温度/功率报告
+ * 修复华硕飞行堡垒系列笔记本中 AMD CPU 过热
+ * 对 NVIDIA RTX 2000 图灵系列显卡的开源驱动支持
+ * 内建 FSCRYPT 加密
+
+[Phoronix][6] 跟踪了 Linux 5.6 带来的许多技术性更改。因此,如果你好奇 Linux 5.6 所涉及的全部更改,则可以亲自了解一下。
+
+现在你已经了解了 Linux 5.6 发布版带来的新功能,对此有什么看法呢?在下方评论中留下你的看法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-kernel-5-6/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[LazyWolfLin](https://github.com/LazyWolfLin)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
+[2]: https://openvpn.net/
+[3]: https://www.wireguard.com/
+[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
+[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
+[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular
diff --git a/published/20200203 Give an old MacBook new life with Linux.md b/published/20200203 Give an old MacBook new life with Linux.md
new file mode 100644
index 0000000000..4a2ec03b17
--- /dev/null
+++ b/published/20200203 Give an old MacBook new life with Linux.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: (qianmingtian)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11902-1.html)
+[#]: subject: (Give an old MacBook new life with Linux)
+[#]: via: (https://opensource.com/article/20/2/macbook-linux-elementary)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+用 Linux 让旧 MacBook 焕发新生
+======
+
+> Elementary OS 的最新版本 Hera 是一个令人印象深刻的平台,它可以让过时的 MacBook 得以重生。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/18/113614k2jx6ju7uuu0alhk.png)
+
+当我安装苹果的 [MacOS Mojave][2] 时,它使我以前一直很可靠的 MacBook Air 慢得像爬一样。我的计算机发售于 2015 年,具有 4GB 内存、i5 处理器和 Broadcom 4360 无线卡,对于我的日常使用来说,Mojave 对它的负担太重了,而且它无法运行 [GnuCash][3],这激起了我重返 Linux 的欲望。我很高兴能够回归,但也深感遗憾,这台出色的 MacBook 就这样被闲置了。
+
+我在 MacBook Air 上尝试了几种 Linux 发行版,但总会有缺陷。有时是无线网卡;还有一次,它缺少对触摸板的支持。看了一些不错的评论后,我决定尝试 [Elementary OS][4] 5.0(Juno)。我用 USB [制作了启动盘][5],并将其插入 MacBook Air 。我来到了一个现场live桌面,并且操作系统识别出了我的 Broadcom 无线芯片组 —— 我认为这可能行得通!
+
+我喜欢在 Elementary OS 中看到的一切。它的 [Pantheon][6] 桌面真的很棒,其外观和使用感受对 Apple 用户来说也很熟悉:屏幕底部有一个扩展坞,上面放着一些常用应用程序的图标。我对在现场预览中看到的效果很满意,于是决定安装它,然后我的无线网卡就不见了,这真的很令人失望。我很喜欢 Elementary OS,但是没有无线网络是不行的。
+
+时间快进到 2019 年 12 月,当我在 [Linux4Everyone][7] 播客上听到有关 Elementary 最新版本 v.5.1(Hera) 使 MacBook 复活的评论时,我决定用 Hera 再试一次。我下载了 ISO ,创建了可启动驱动器,将其插入电脑,这次操作系统识别了我的无线网卡。我可以在上面工作了。
+
+![运行 Hera 的 MacBook Air][8]
+
+我非常高兴我轻巧又功能强大的 MacBook Air 通过 Linux 焕然一新。我一直在更详细地研究 Elementary OS,我可以告诉你我印象深刻的东西。
+
+### Elementary OS 的功能
+
+根据 [Elementary 的博客][9],“新设计的登录和锁定屏幕问候界面看起来更清晰、效果更好,并修复了之前版本中报告的许多问题,包括输入焦点问题和 HiDPI 问题,本地化也得到了改进。Hera 的新设计是为了回应来自 Juno 用户的反馈,并启用了一些不错的新功能。”
+
+说“不错的新功能”未免太轻描淡写了,Elementary OS 拥有我见过的设计最好的 Linux 用户界面之一。默认情况下,“系统设置”图标就位于扩展坞上。更改设置很容易,很快我就按照自己的喜好配置好了系统。我需要比默认值更大的文字,辅助功能设置很容易使用,让我可以设置大号文字和高对比度。我还可以使用较大的图标和其他选项来调整扩展坞。
+
+![Elementary OS 的设置界面][10]
+
+按下 Mac 的 Command 键将弹出一个键盘快捷键列表,这对新用户非常有帮助。
+
+![Elementary OS 的键盘快捷键][11]
+
+Elementary OS 附带了 [Epiphany][12] Web 浏览器,我发现它非常易于使用。它与 Chrome、Chromium 或 Firefox 略有不同,但已经绰绰有余。
+
+对于注重安全的用户(我们应该都是),Elementary OS 的安全和隐私设置提供了多个选项,包括防火墙、历史记录、锁定,临时和垃圾文件的自动删除以及用于位置服务开/关的开关。
+
+![Elementary OS 的隐私与安全][13]
+
+### 有关 Elementary OS 的更多信息
+
+Elementary OS 最初于 2011 年发布,其最新版本 Hera 于 2019 年 12 月 3 日发布。Elementary 的联合创始人兼 CXO [Cassidy James Blaede][14] 是该操作系统的 UX 架构师。Cassidy 喜欢使用开放技术来设计和构建有用、易用且令人愉悦的数字产品。
+
+Elementary OS 具有出色的用户[文档][15],其代码(在 GPL 3.0 下许可)可在 [GitHub][16] 上获得。Elementary OS 鼓励参与该项目,因此请务必伸出援手并[加入社区][17]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/macbook-linux-elementary
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[qianmingtian][c]
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[c]: https://github.com/qianmingtian
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
+[2]: https://en.wikipedia.org/wiki/MacOS_Mojave
+[3]: https://www.gnucash.org/
+[4]: https://elementary.io/
+[5]: https://opensource.com/life/14/10/test-drive-linux-nothing-flash-drive
+[6]: https://opensource.com/article/19/12/pantheon-linux-desktop
+[7]: https://www.linux4everyone.com/20-macbook-pro-elementary-os
+[8]: https://opensource.com/sites/default/files/uploads/macbookair_hera.png (MacBook Air with Hera)
+[9]: https://blog.elementary.io/introducing-elementary-os-5-1-hera/
+[10]: https://opensource.com/sites/default/files/uploads/elementaryos_settings.png (Elementary OS's Settings screen)
+[11]: https://opensource.com/sites/default/files/uploads/elementaryos_keyboardshortcuts.png (Elementary OS's Keyboard shortcuts)
+[12]: https://en.wikipedia.org/wiki/GNOME_Web
+[13]: https://opensource.com/sites/default/files/uploads/elementaryos_privacy-security.png (Elementary OS's Privacy and Security screen)
+[14]: https://github.com/cassidyjames
+[15]: https://elementary.io/docs/learning-the-basics#learning-the-basics
+[16]: https://github.com/elementary
+[17]: https://elementary.io/get-involved
diff --git a/published/20200204 Ubuntu 19.04 Has Reached End of Life- Existing Users Must Upgrade to Ubuntu 19.10.md b/published/20200204 Ubuntu 19.04 Has Reached End of Life- Existing Users Must Upgrade to Ubuntu 19.10.md
new file mode 100644
index 0000000000..3526c8e6ec
--- /dev/null
+++ b/published/20200204 Ubuntu 19.04 Has Reached End of Life- Existing Users Must Upgrade to Ubuntu 19.10.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11866-1.html)
+[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
+[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Ubuntu 19.04 已经到期!现有用户必须升级到 Ubuntu 19.10
+======
+
+> Ubuntu 19.04 已在 2020 年 1 月 23 日到期,这意味着运行 Ubuntu 19.04 的系统将不再会接收到安全和维护更新,因此将使其容易受到攻击。
+
+![][1]
+
+[Ubuntu 19.04][2] 发布于 2019 年 4 月 18 日。由于它不是长期支持(LTS)版本,所以只有 9 个月的支持期。发行周期结束后,Ubuntu 19.04 已于 2020 年 1 月 23 日到期。
+
+Ubuntu 19.04 带来了一些视觉和性能方面的改进,为时尚而美观的 Ubuntu 外观铺平了道路。与其他常规 Ubuntu 版本一样,它的生命周期为 9 个月,而如今这个周期已经结束。
+
+### Ubuntu 19.04 终止了吗?这是什么意思?
+
+EOL(End of life)是指在某个日期之后操作系统版本将无法获得更新。你可能已经知道 Ubuntu(或其他操作系统)提供了安全性和维护升级,以使你的系统免受网络攻击。当发行版到期后,操作系统将停止接收这些重要更新。
+
+如果在操作系统版本到期后继续使用该系统,那么系统将容易受到网络和恶意软件的攻击。不仅如此,在 Ubuntu 中,你使用 APT 从软件中心下载的应用也不会再更新。实际上,你将无法再[使用 apt-get 命令安装新软件][3](即使不是立即如此,最终也会这样)。
+
+### 所有 Ubuntu 19.04 用户必须升级到 Ubuntu 19.10
+
+从 2020 年 1 月 23 日开始,Ubuntu 19.04 将不再接收更新。你必须升级到 Ubuntu 19.10,它会被支持到 2020 年 7 月。这也适用于其他[官方 Ubuntu 衍生版][4],例如 Lubuntu、Xubuntu、Kubuntu 等。
+
+你可以在“设置 -> 细节” 或使用如下命令来[检查你的 Ubuntu 版本][9]:
+
+```
+lsb_release -a
+```
+
+#### 如何升级到 Ubuntu 19.10?
+
+值得庆幸的是,Ubuntu 提供了简单的方法来将现有系统升级到新版本。实际上,Ubuntu 还会提示你有新的 Ubuntu 版本可用,你应该升级到该版本。
+
+![Existing Ubuntu 19.04 should see a message to upgrade to Ubuntu 19.10][5]
+
+如果你的互联网连接良好,那么可以使用[和更新 Ubuntu 一样的 Software Updater 工具][6]。在上图中,你只需单击 “Upgrade” 按钮并按照说明进行操作。我已经编写了有关使用此方法[升级到 Ubuntu 18.04][7]的文章。
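+
+如果你更习惯命令行,也可以用 Ubuntu 自带的 `do-release-upgrade` 工具完成同样的升级。下面只是一个大致的操作示意,具体提示以你系统上的实际输出为准:
+
+```
+# 先把当前系统的软件包更新到最新
+sudo apt update && sudo apt upgrade
+
+# 然后启动发行版升级程序,按照提示操作即可
+sudo do-release-upgrade
+```
+
+升级过程会下载大量数据,期间请不要中断电源或网络。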
+
+如果你没有良好的互联网连接,那么有一种临时方案。在外部磁盘上备份家目录或重要数据。
+
+然后,制作一个 Ubuntu 19.10 的 Live USB。下载 Ubuntu 19.10 ISO,并使用 Ubuntu 系统上已安装的启动磁盘创建器从该 ISO 创建 Live USB。
+
+从该 Live USB 引导,然后继续“安装” Ubuntu 19.10。在安装过程中,你应该看到一个删除 Ubuntu 19.04 并将其替换为 Ubuntu 19.10 的选项。选择此选项,然后像重新[安装 Ubuntu][8]一样进行下去。
+
+#### 你是否仍在使用 Ubuntu 19.04、18.10、17.10 或其他不受支持的版本?
+
+你应该注意,目前只有 Ubuntu 16.04、18.04 和 19.10(或更高版本)仍受支持。如果你运行的不是这些 Ubuntu 版本,那么你必须升级到较新的版本。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-19-04-end-of-life/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
+[2]: https://itsfoss.com/ubuntu-19-04-release/
+[3]: https://itsfoss.com/apt-get-linux-guide/
+[4]: https://itsfoss.com/which-ubuntu-install/
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
+[6]: https://itsfoss.com/update-ubuntu/
+[7]: https://itsfoss.com/upgrade-ubuntu-version/
+[8]: https://itsfoss.com/install-ubuntu/
+[9]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
diff --git a/published/20200205 Getting started with GnuCash.md b/published/20200205 Getting started with GnuCash.md
new file mode 100644
index 0000000000..00dc0465c5
--- /dev/null
+++ b/published/20200205 Getting started with GnuCash.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11895-1.html)
+[#]: subject: (Getting started with GnuCash)
+[#]: via: (https://opensource.com/article/20/2/gnucash)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+开始使用 GnuCash
+======
+
+> 使用 GnuCash 管理你的个人或小型企业会计。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/15/124236wz5e0z5vq7571qby.jpg)
+
+在过去的四年里,我一直在用 [GnuCash][2] 来管理我的个人财务,我对此非常满意。这个开源(GPL v3)项目自 1998 年首次发布以来一直成长和改进,2019 年 12 月发布的最新版本 3.8 增加了许多改进和 bug 修复。
+
+GnuCash 可在 Windows、MacOS 和 Linux 中使用。它实现了一个复式记账系统,并可以导入各种流行的开放和专有文件格式,包括 QIF、QFX、OFX、CSV 等。这使得从其他财务应用(包括 Quicken)迁移过来很容易 —— GnuCash 本就是为了取代它们而出现的。
+
+借助 GnuCash,你可以跟踪个人财务状况以及小型企业会计和开票。它没有集成的工资系统。根据文档,你可以在 GnuCash 中跟踪工资支出,但你必须在该软件外计算税金和扣减。
+
+### 安装
+
+要在 Linux 上安装 GnuCash:
+
+ * 在 Red Hat、CentOS 或 Fedora 中: `$ sudo dnf install gnucash`
+ * 在 Debian、Ubuntu 或 Pop_OS 中: `$ sudo apt install gnucash`
+
+你也可以从 [Flathub][3] 安装它,我在运行 Elementary OS 的笔记本上使用它。(本文中的所有截图都来自此次安装)。
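+
+如果你的系统已经配置好了 Flatpak 和 Flathub 源,安装命令大致如下(应用 ID 来自上面的 Flathub 页面):
+
+```
+flatpak install flathub org.gnucash.GnuCash
+```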
+
+### 设置
+
+安装并启动程序后,你将看到一个欢迎屏幕,该页面提供了创建新账户集、导入 QIF 文件或打开新用户教程的选项。
+
+![GnuCash Welcome screen][4]
+
+#### 个人账户
+
+如果你选择第一个选项(正如我所做的那样),GnuCash 会打开一个页面帮你起步。它收集初始数据并设置账户首选项,例如账户类型和名称、商业数据(例如,税号)和首选货币。
+
+![GnuCash new account setup][5]
+
+GnuCash 支持个人银行账户、商业账户、汽车贷款、定期存单(CD)和货币市场账户、儿童保育账户等。
+
+例如,首先创建一个简单的支票簿。你可以输入账户的初始余额或以多种格式导入现有账户数据。
+
+![GnuCash import data][6]
+
+#### 开票
+
+GnuCash 还支持小型企业功能,包括客户、供应商和开票。要创建发票,请在 “Business -> Invoice” 中输入数据。
+
+![GnuCash create invoice][7]
+
+然后,你可以将发票打印在纸上,也可以将其导出到 PDF 并通过电子邮件发送给你的客户。
+
+![GnuCash invoice][8]
+
+### 获取帮助
+
+如果你有任何疑问,它提供了出色的帮助功能,你可以在菜单栏的右侧找到指导。
+
+![GnuCash help][9]
+
+该项目的网站包含许多有用的信息的链接,例如 GnuCash [功能][10]的概述。GnuCash 还提供了[详细的文档][11],可供下载和离线阅读,它还有一个 [wiki][12],为用户和开发人员提供了有用的信息。
+
+你可以在项目的 [GitHub][13] 仓库中找到其他文件和文档。GnuCash 项目由志愿者驱动。如果你想参与,请查看项目的 wiki 上的 [Getting involved][14] 部分。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/gnucash
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network)
+[2]: https://www.gnucash.org/
+[3]: https://flathub.org/apps/details/org.gnucash.GnuCash
+[4]: https://opensource.com/sites/default/files/images/gnucash_welcome.png (GnuCash Welcome screen)
+[5]: https://opensource.com/sites/default/files/uploads/gnucash_newaccountsetup.png (GnuCash new account setup)
+[6]: https://opensource.com/sites/default/files/uploads/gnucash_importdata.png (GnuCash import data)
+[7]: https://opensource.com/sites/default/files/uploads/gnucash_enter-invoice.png (GnuCash create invoice)
+[8]: https://opensource.com/sites/default/files/uploads/gnucash_invoice.png (GnuCash invoice)
+[9]: https://opensource.com/sites/default/files/uploads/gnucash_help.png (GnuCash help)
+[10]: https://www.gnucash.org/features.phtml
+[11]: https://www.gnucash.org/docs/v3/C/gnucash-help.pdf
+[12]: https://wiki.gnucash.org/wiki/GnuCash
+[13]: https://github.com/Gnucash
+[14]: https://wiki.gnucash.org/wiki/GnuCash#Getting_involved_in_the_GnuCash_project
diff --git a/published/20200206 3 ways to use PostgreSQL commands.md b/published/20200206 3 ways to use PostgreSQL commands.md
new file mode 100644
index 0000000000..edb92e9144
--- /dev/null
+++ b/published/20200206 3 ways to use PostgreSQL commands.md
@@ -0,0 +1,218 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Morisun029)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11904-1.html)
+[#]: subject: (3 ways to use PostgreSQL commands)
+[#]: via: (https://opensource.com/article/20/2/postgresql-commands)
+[#]: author: (Greg Pittman https://opensource.com/users/greg-p)
+
+3 种使用 PostgreSQL 命令的方式
+======
+
+> 无论你需要的东西是简单(如一个购物清单)还是复杂(如色卡生成器),PostgreSQL 命令都能使它变得容易。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/18/124003twk7fryz2krw2r39.jpg)
+
+在 [PostgreSQL 入门][2]一文中,我解释了如何安装、设置和开始使用这个开源数据库软件。不过,使用 [PostgreSQL][3] 中的命令可以做更多事情。
+
+例如,我使用 Postgres 来跟踪我的杂货店购物清单。我的大多数杂货店购物是在家里进行的,而且每周进行一次大批量的采购。我去几个不同的地方购买清单上的东西,因为每家商店都提供特定的选品或质量,亦或更好的价格。最初,我制作了一个 HTML 表单页面来管理我的购物清单,但这样无法保存我的输入内容。因此,在想到要购买的物品时我必须马上列出全部清单,然后到采购时我常常会忘记一些我需要或想要的东西。
+
+相反,使用 PostgreSQL,当我想到需要的物品时,我可以随时输入,并在购物前打印出来。你也可以这样做。
+
+### 创建一个简单的购物清单
+
+首先,输入 `psql` 命令进入数据库,然后用下面的命令创建一个表:
+
+```
+Create table groc (item varchar(20), comment varchar(10));
+```
+
+输入如下命令在清单中加入商品:
+
+```
+insert into groc values ('milk', 'K');
+insert into groc values ('bananas', 'KW');
+```
+
+括号中有两个信息(逗号隔开):前面是你需要买的东西,后面字母代表你要购买的地点以及哪些东西是你每周通常都要买的(`W`)。
+
+因为 `psql` 有历史记录,你可以按向上键在括号内编辑信息,而无需输入商品的整行信息。
+
+在输入一小部分商品后,输入下面命令来检查前面的输入内容。
+
+```
+Select * from groc order by comment;
+
+ item | comment
+----------------+---------
+ ground coffee | H
+ butter | K
+ chips | K
+ steak | K
+ milk | K
+ bananas | KW
+ raisin bran | KW
+ raclette | L
+ goat cheese | L
+ onion | P
+ oranges | P
+ potatoes | P
+ spinach | PW
+ broccoli | PW
+ asparagus | PW
+ cucumber | PW
+ sugarsnap peas | PW
+ salmon | S
+(18 rows)
+```
+
+此命令按 `comment` 列对结果进行排序,以便按购买地点对商品进行分组,从而使你的购物更加方便。
+
+使用 `W` 来指明你每周要买的东西,当你要清除表单为下周的列表做准备时,你可以将每周的商品保留在购物清单上。输入:
+
+```
+delete from groc where comment not like '%W';
+```
+
+注意,在 PostgreSQL 中表示通配符的是 `%`(而非星号)。所以,为了少打几个字,你可以输入:
+
+```
+delete from groc where item like 'goat%';
+```
+
+不能使用 `item = 'goat%'`,因为 `=` 只做精确比较,不会把 `%` 当作通配符。
+
+在购物时,用以下命令输出清单并打印或发送到你的手机:
+
+```
+\o groclist.txt
+select * from groc order by comment;
+\o
+```
+
+最后一个命令 `\o` 后面没有任何内容,将重置输出到命令行。否则,所有的输出会继续输出到你创建的杂货店购物文件 `groclist.txt` 中。
+
+### 分析复杂的表
+
+这个逐项列表对于数据量小的表来说没有问题,但是对于数据量大的表呢?几年前,我帮 [FreieFarbe.de][4] 的团队从 HLC 调色板中创建一个自由色的色样册。事实上,任何能想象到的打印色都可按色调、亮度、浓度(饱和度)来规定。最终结果是 [HLC Color Atlas][5],下面是我们如何实现的。
+
+该团队向我发送了具有颜色规范的文件,因此我可以编写可与 Scribus 配合使用的 Python 脚本,以轻松生成色样册。一个例子像这样开始:
+
+```
+HLC, C, M, Y, K
+H010_L15_C010, 0.5, 49.1, 0.1, 84.5
+H010_L15_C020, 0.0, 79.7, 15.1, 78.9
+H010_L25_C010, 6.1, 38.3, 0.0, 72.5
+H010_L25_C020, 0.0, 61.8, 10.6, 67.9
+H010_L25_C030, 0.0, 79.5, 18.5, 62.7
+H010_L25_C040, 0.4, 94.2, 17.3, 56.5
+H010_L25_C050, 0.0, 100.0, 15.1, 50.6
+H010_L35_C010, 6.1, 32.1, 0.0, 61.8
+H010_L35_C020, 0.0, 51.7, 8.4, 57.5
+H010_L35_C030, 0.0, 68.5, 17.1, 52.5
+H010_L35_C040, 0.0, 81.2, 22.0, 46.2
+H010_L35_C050, 0.0, 91.9, 20.4, 39.3
+H010_L35_C060, 0.1, 100.0, 17.3, 31.5
+H010_L45_C010, 4.3, 27.4, 0.1, 51.3
+```
+
+这与原始数据相比稍有修改,原始数据是用制表符分隔的。我把它转换成了 CSV(逗号分隔值)格式,我更喜欢在 Python 中使用这种格式(CSV 文件也很有用,因为它可以轻松导入到电子表格程序中)。
+
+在每一行中,第一项是颜色名称,其后是其 C、M、Y 和 K 颜色值。 该文件包含 1,793 种颜色,我想要一种分析信息的方法,以了解这些值的范围。这就是 PostgreSQL 发挥作用的地方。我不想手动输入所有数据 —— 我认为输入过程中我不可能不出错,而且令人头痛。幸运的是,PostgreSQL 为此提供了一个命令。
+
+首先用以下命令创建数据表:
+
+```
+Create table hlc_cmyk (color varchar(40), c decimal, m decimal, y decimal, k decimal);
+```
+
+然后通过以下命令引入数据:
+
+```
+\copy hlc_cmyk from '/home/gregp/HLC_Atlas_CMYK_SampleData.csv' with (header, format CSV);
+```
+
+开头有反斜杠,是因为纯 `copy` 命令的使用权限仅限于 root 用户和 Postgres 超级用户。括号中,`header` 表示第一行是标题行,应当忽略;`CSV` 表示文件格式为 CSV。请注意,用这种方法导入时,颜色名称不需要用引号括起来。
+
+如果操作成功,会看到 `COPY NNNN`,其中 N 表示插入到表中的行数。
+
+最后,可以用下列命令查询:
+
+```
+select * from hlc_cmyk;
+
+ color | c | m | y | k
+---------------+-------+-------+-------+------
+ H010_L15_C010 | 0.5 | 49.1 | 0.1 | 84.5
+ H010_L15_C020 | 0.0 | 79.7 | 15.1 | 78.9
+ H010_L25_C010 | 6.1 | 38.3 | 0.0 | 72.5
+ H010_L25_C020 | 0.0 | 61.8 | 10.6 | 67.9
+ H010_L25_C030 | 0.0 | 79.5 | 18.5 | 62.7
+ H010_L25_C040 | 0.4 | 94.2 | 17.3 | 56.5
+ H010_L25_C050 | 0.0 | 100.0 | 15.1 | 50.6
+ H010_L35_C010 | 6.1 | 32.1 | 0.0 | 61.8
+ H010_L35_C020 | 0.0 | 51.7 | 8.4 | 57.5
+ H010_L35_C030 | 0.0 | 68.5 | 17.1 | 52.5
+```
+
+
+所有的 1,793 行数据都是这样的。回想起来,我不能说此查询对于 HLC 和 Scribus 任务是绝对必要的,但是它减轻了我对该项目的一些担忧。
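+
+如果你也想像前面说的那样了解各列取值的范围,可以在 `psql` 中用聚合函数做一个简单的检查。下面只是一个思路示例,并非原文作者使用的查询:
+
+```
+select min(c), max(c), min(m), max(m),
+       min(y), max(y), min(k), max(k)
+from hlc_cmyk;
+```
+
+这样就能一眼看出每个通道的最小值和最大值是否都落在 0 到 100 之间。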
+
+为了生成 HLC 色谱,我使用 Scribus 为色板页面中的 13,000 多种颜色自动创建了颜色图表。
+
+我可以使用 `copy` 命令输出数据:
+
+```
+\copy hlc_cmyk to '/home/gregp/hlc_cmyk_backup.csv' with (header, format CSV);
+```
+
+我还可以使用 `where` 子句根据某些值来限制输出。
+
+例如,以下命令将仅发送以 `H10` 开头的色调值。
+
+```
+\copy hlc_cmyk to '/home/gregp/hlc_cmyk_backup.csv' with (header, format CSV) where color like 'H10%';
+```
+
+### 备份或传输数据库或表
+
+我在此要提到的最后一个命令是 `pg_dump`,它用于备份 PostgreSQL 数据库,并在 `psql` 控制台之外运行。 例如:
+
+```
+pg_dump gregp -t hlc_cmyk > hlc.out
+pg_dump gregp > dball.out
+```
+
+第一行是导出 `hlc_cmyk` 表及其结构。第二行将转储 `gregp` 数据库中的所有表。这对于备份或传输数据库或表非常有用。
+
+要将数据库或表传输到另一台电脑(查看 [PostgreSQL 入门][2]那篇文章获取详细信息),首先在要转入的电脑上创建一个数据库,然后执行相反的操作。
+
+```
+psql -d gregp -f dball.out
+```
+
+这将一步完成所有表的创建和数据导入。
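+
+前面提到的“先创建一个数据库”这一步,可以用 PostgreSQL 自带的 `createdb` 工具来完成,例如(这里沿用示例中的数据库名 `gregp`):
+
+```
+createdb gregp
+```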
+
+### 总结
+
+在本文中,我们了解了如何使用 `WHERE` 子句限制操作,以及如何使用 PostgreSQL 的通配符 `%`。我们还了解了如何将大批量数据加载到表中,然后将部分或全部表数据输出到文件,甚至将整个数据库及其所有表导出。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/postgresql-commands
+
+作者:[Greg Pittman][a]
+选题:[lujun9972][b]
+译者:[Morisun029](https://github.com/Morisun029)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/greg-p
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
+[2]: https://linux.cn/article-11593-1.html
+[3]: https://www.postgresql.org/
+[4]: http://freiefarbe.de
+[5]: https://www.freiefarbe.de/en/thema-farbe/hlc-colour-atlas/
diff --git a/published/20200207 Best Open Source eCommerce Platforms to Build Online Shopping Websites.md b/published/20200207 Best Open Source eCommerce Platforms to Build Online Shopping Websites.md
new file mode 100644
index 0000000000..ae38ffcddd
--- /dev/null
+++ b/published/20200207 Best Open Source eCommerce Platforms to Build Online Shopping Websites.md
@@ -0,0 +1,186 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11884-1.html)
+[#]: subject: (Best Open Source eCommerce Platforms to Build Online Shopping Websites)
+[#]: via: (https://itsfoss.com/open-source-ecommerce/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+8 个可用于自建电商站点的开源解决方案
+======
+
+在[之前的文章][1]中,我介绍过一些开源内容管理系统Content Management System(CMS),顾名思义,这些 CMS 平台更适用于以内容为主的站点。
+
+那如果想要建立自己的线上购物站点呢?我们正好还有一些优秀的开源电商解决方案,可以自行部署在自己的 Linux 服务器上。
+
+这些电商解决方案是专为搭建线上购物站点设计的,因此都集成了库存管理、商品列表、购物车、下单、愿望清单以及支付这些必需的基础功能。
+
+但请注意,这篇文章并不会进行深入介绍。因此,我建议最好广泛试用其中的多个产品,以便进一步的了解和比较。
+
+### 优秀的开源电商解决方案
+
+![][2]
+
+开源电商解决方案种类繁多,一些缺乏维护的都会被我们忽略掉,以免搭建出来的站点因维护不及时而受到影响。
+
+另外,以下的列表排名不分先后。
+
+#### 1、nopCommerce
+
+![][3]
+
+nopCommerce 是基于 [ASP.NET Core][4] 的自由开源的电商解决方案。如果你要找的是基于 PHP 的解决方案,可以跳过这一节了。
+
+nopCommerce 的管理面板界面具有简洁易用的特点,如果你还使用过 OpenCart,就可能会感到似曾相识(我不是在抱怨)。在默认情况下,它就已经自带了很多基本的功能,同时还为移动端用户提供了响应式的设计。
+
+你可以在其[官方商店][5]中获取到一些兼容的界面主题和应用扩展,还可以选择付费的支持服务。
+
+在开始使用前,你可以从 nopCommerce 的[官方网站][6]下载源代码包,然后进行自定义配置和部署;也可以直接下载完整的软件包快速安装到 web 服务器上。详细信息可以查阅 nopCommerce 的 [GitHub 页面][7]或官方网站。
+
+- [nopCommerce][8]
+
+#### 2、OpenCart
+
+![][9]
+
+OpenCart 是一个非常流行的基于 PHP 的电商解决方案。就我个人而言,我曾在一个项目中用过它,体验相当不错,即便算不上最好。
+
+或许你会觉得它维护得不是很频繁,但实际上使用 OpenCart 的开发者并不在少数。你可以获得许多受支持的扩展并将它们的功能加入到 OpenCart 中。
+
+OpenCart 不一定是适合所有人的“现代”电商解决方案,但如果你需要的只是一个基于 PHP 的开源解决方案,OpenCart 是个值得一试的选择。在大多数具有一键式应用程序安装支持的网络托管平台中,应该可以安装 OpenCart。想要了解更多,可以查阅 OpenCart 的官方网站或 [GitHub 页面][10]。
+
+- [OpenCart][11]
+
+#### 3、PrestaShop
+
+![][12]
+
+PrestaShop 也是一个可以尝试的开源电商解决方案。
+
+PrestaShop 是一个积极维护的开源解决方案,它的官方商店中还额外提供了[主题][13]和[扩展][14]。与 OpenCart 不同,在托管服务平台上你可能找不到 PrestaShop 的一键安装。但不必担心,从官方网站下载之后,它的部署过程并不复杂。如果你需要帮助,可以参考 PrestaShop 的[安装指南][15]。
+
+PrestaShop 的特点就是配置丰富和易于使用,我发现很多其它用户也在用它,你也不妨试用一下。
+
+你也可以在 PrestaShop 的 [GitHub 页面][16]查阅到更多相关内容。
+
+- [PrestaShop][17]
+
+#### 4、WooCommerce
+
+![][18]
+
+如果你想用 [WordPress][19] 来搭建电商站点,不妨使用 WooCommerce。
+
+从技术上来说,这种方式其实是搭建一个 WordPress 应用,然后把 WooCommerce 作为一个插件或扩展以实现电商站点所需要的功能。很多 web 开发者都知道如何使用 WordPress,因此 WooCommerce 的学习成本不会很高。
+
+WordPress 作为目前最好的开源站点项目之一,对大部分人来说都不会有太高的门槛。它具有易用、稳定的特点,同时还支持大量的扩展插件。
+
+WooCommerce 的灵活性也是一大亮点,在它的线上商店提供了许多设计和扩展可供选择。你也可以到它的 [GitHub 页面][20]查看相关介绍。
+
+- [WooCommerce][21]
+
+#### 5、Zen Cart
+
+![][22]
+
+这或许是一个稍显古老的电商解决方案,但同时也是最好的开源解决方案之一。如果你喜欢老式风格的模板(主要基于 HTML),而且只需要一些基础性的扩展,那你也可以尝试使用 Zen Cart。
+
+就我个人而言,我不建议把 Zen Cart 用在一个新项目当中。但考虑到它仍然是一个活跃更新中的解决方案,如果你喜欢的话,也不妨用它来进行试验。
+
+你也可以在 [SourceForge][23] 找到 Zen Cart 这个项目。
+
+- [Zen Cart][24]
+
+#### 6、Magento
+
+![Image Credits: Magestore][25]
+
+Magento 是 Adobe 旗下的开源电商解决方案,从某种角度来说,可能比 WordPress 表现得更为优秀。
+
+Magento 完全是作为电商应用程序而生的,因此你会发现它的很多基础功能都非常好用,甚至还提供了高级的定制。
+
+但如果你使用的是 Magento 的开源版,可能会接触不到托管版的一些高级功能,两个版本的差异,可以在[官方文档][26]中查看到。如果你使用托管版,还可以选择相关的托管支持服务。
+
+想要了解更多,可以查看 Magento 的 [GitHub 页面][27]。
+
+- [Magento][28]
+
+#### 7、Drupal
+
+![Drupal][29]
+
+Drupal 是一个适用于创建电商站点的开源 CMS 解决方案。
+
+我没有使用过 Drupal,因此我不太确定它用起来是否足够灵活。但从它的官方网站上来看,它提供的扩展模块和主题列表,足以让你轻松完成一个电商站点需要做的任何事情。
+
+跟 WordPress 类似,Drupal 在服务器上的部署并不复杂,不妨看看它的使用效果。在它的[下载页面][30]可以查看这个项目以及下载最新的版本。
+
+- [Drupal][31]
+
+#### 8、Odoo eCommerce
+
+![Odoo Ecommerce Platform][32]
+
+如果你还不知道,Odoo 提供了一套开源商务应用程序。他们还提供了[开源会计软件][33]和 CRM 解决方案,我们将会在单独的列表中进行介绍。
+
+对于电子商务门户,你可以根据需要使用其在线拖放生成器自定义网站。你也可以推广该网站。除了简单的主题安装和自定义选项之外,你还可以利用 HTML/CSS 在一定程度上手动自定义外观。
+
+你也可以查看其 [GitHub][34] 页面以进一步了解它。
+
+- [Odoo eCommerce][35]
+
+### 总结
+
+我敢肯定还有更多的开源电子商务平台,但是,我现在还没有遇到比我上面列出的更好的东西。
+
+如果你还有其它值得一提的产品,可以在评论区发表。也欢迎在评论区分享你对开源电商解决方案的经验和想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/open-source-ecommerce/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/open-source-cms/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/open-source-eCommerce.png?ssl=1
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/nopCommerce.png?ssl=1
+[4]: https://en.wikipedia.org/wiki/ASP.NET_Core
+[5]: https://www.nopcommerce.com/marketplace
+[6]: https://www.nopcommerce.com/download-nopcommerce
+[7]: https://github.com/nopSolutions/nopCommerce
+[8]: https://www.nopcommerce.com/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/opencart.jpg?ssl=1
+[10]: https://github.com/opencart/opencart
+[11]: https://www.opencart.com/
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/prestashop.jpg?ssl=1
+[13]: https://addons.prestashop.com/en/3-templates-prestashop
+[14]: https://addons.prestashop.com/en/
+[15]: http://doc.prestashop.com/display/PS17/Installing+PrestaShop
+[16]: https://github.com/PrestaShop/PrestaShop
+[17]: https://www.prestashop.com/en
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/woocommerce.jpg?ssl=1
+[19]: https://wordpress.org/
+[20]: https://github.com/woocommerce/woocommerce
+[21]: https://woocommerce.com/
+[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/Zen-cart.jpg?ssl=1
+[23]: https://sourceforge.net/projects/zencart/
+[24]: https://www.zen-cart.com/
+[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/magento.jpg?ssl=1
+[26]: https://magento.com/compare-open-source-and-magento-commerce
+[27]: https://github.com/magento
+[28]: https://magento.com/
+[29]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/drupal.png?ssl=1
+[30]: https://www.drupal.org/project/drupal
+[31]: https://www.drupal.org/industries/ecommerce
+[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/odoo-ecommerce-platform.jpg?w=800&ssl=1
+[33]: https://itsfoss.com/open-source-accounting-software/
+[34]: https://github.com/odoo/odoo
+[35]: https://www.odoo.com/page/open-source-ecommerce
diff --git a/published/20200207 Connect Fedora to your Android phone with GSConnect.md b/published/20200207 Connect Fedora to your Android phone with GSConnect.md
new file mode 100644
index 0000000000..80a2884a6f
--- /dev/null
+++ b/published/20200207 Connect Fedora to your Android phone with GSConnect.md
@@ -0,0 +1,112 @@
+[#]: collector: (lujun9972)
+[#]: translator: (chai-yuan)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11897-1.html)
+[#]: subject: (Connect Fedora to your Android phone with GSConnect)
+[#]: via: (https://fedoramagazine.org/connect-fedora-to-your-android-phone-with-gsconnect/)
+[#]: author: (Lokesh Krishna https://fedoramagazine.org/author/lowkeyskywalker/)
+
+使用 GSConnect 将 Android 手机连接到 Fedora 系统
+======
+
+![][1]
+
+苹果和微软都在不同程度上提供了桌面产品与移动设备的集成。Fedora 也提供了类似甚至集成度更高的工具 —— GSConnect。它可以让你将安卓手机与 Fedora 桌面配对使用。请继续阅读,了解它是什么以及它如何工作。
+
+### GSConnect 是什么?
+
+GSConnect 是针对 GNOME 桌面定制的 KDE Connect 程序。KDE Connect 可以使你的设备能够互相通信。但是,在 Fedora 默认的 GNOME 桌面上安装它需要安装大量的 KDE 依赖。
+
+GSConnect 是 KDE Connect 的一个完整实现,它以 GNOME Shell 扩展的形式出现。安装后,GSConnect 允许你执行以下操作,甚至更多:
+
+* 在计算机上接收电话通知并回复信息
+* 用手机操纵你的桌面
+* 在不同设备之间分享文件与链接
+* 在计算机上查看手机电量
+* 让手机响铃以便你能找到它
+
+### 设置 GSConnect 扩展
+
+设置 GSConnect 需要安装两个组件:计算机上的 GSConnect 扩展和 Android 设备上的 KDE Connect 应用。
+
+首先,从 GNOME Shell 扩展网站上安装 [GSConnect][2] 扩展。(Fedora Magazine 有一篇关于[如何安装 GNOME Shell 扩展][3]的文章,可以帮助你完成这一步。)
+
+KDE Connect 应用程序可以在 Google 的 [Play 商店][4]上找到。它也可以在 FOSS Android 应用程序库 [F-Droid][5] 上找到。
+
+一旦安装了这两个组件,就可以配对两个设备。安装扩展后它在你的系统菜单中显示为“移动设备Mobile Devices”。单击它会出现一个下拉菜单,你可以从中访问“移动设置Mobile Settings”。
+
+![][6]
+
+你可以在这里用 GSConnect 查看并管理已配对的设备。进入此界面后,需要在 Android 设备上启动应用程序。
+
+你可以在任意一台设备上进行配对初始化,在这里我们从 Android 设备连接到计算机。点击应用程序上的“刷新”,只要两个设备都在同一个无线网络环境中,你的 Android 设备便可以搜索到你的计算机。现在可以向桌面发送配对请求,并在桌面上接受配对请求以完成配对。
+
+![][7]
+
+### 使用 GSConnect
+
+配对后,你需要在 Android 设备上授予权限,才能使用 GSConnect 提供的许多功能。单击设备列表中已配对的设备,便可以查看所有可用功能,并根据你的偏好和需要启用或禁用它们。
+
+![][8]
+
+请记住,你还需要在这个 Android 应用程序中授予相应的权限才能使用这些功能。启用权限后,你现在可以访问桌面上的移动联系人,获得消息通知并回复消息,甚至同步桌面和 Android 设备的剪贴板。
+
+### 将你的浏览器与“文件”应用集成
+
+GSConnect 允许你直接从计算机上的文件资源管理器的关联菜单向 Android 设备发送文件。
+
+在 Fedora 的默认 GNOME 桌面上,你需要安装 `nautilus-python` 依赖包,以便在关联菜单中显示已配对的设备。安装它非常简单,只需要在你常用的终端中运行以下命令:
+
+```
+$ sudo dnf install nautilus-python
+```
+
+完成后,将在“文件Files”应用的关联菜单中显示“发送到移动设备Send to Mobile Device”选项。
+
+![][9]
+
+同样,为你的浏览器安装相应的 WebExtension,无论是 [Firefox][10] 还是 [Chrome][11] 浏览器,都可以将链接发送到你的 Android 设备。你可以选择直接发送链接以在浏览器中直接打开,或将其作为短信息发送。
+
+### 运行命令
+
+GSConnect 允许你定义命令,然后可以从远程设备在计算机上运行这些命令。这使得你可以远程截屏,或者从你的 Android 设备锁定和解锁你的桌面。
+
+![][12]
+
+要使用此功能,可以使用标准的 shell 命令和 GSConnect 提供的 CLI。该项目的 GitHub 存储库(CLI Scripting)中提供了有关此操作的文档。
+
+[KDE UserBase Wiki][13] 有一个命令示例列表。这些例子包括控制桌面的亮度和音量、锁定鼠标和键盘,甚至更改桌面主题。其中一些命令是针对 KDE Plasma 设计的,需要进行修改才能在 GNOME 桌面上运行。
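+
+作为参考,下面是两个在 GNOME 桌面上通常可用的简单命令,可以把它们添加为 GSConnect 的自定义命令。这只是示例,具体命令是否可用取决于你安装的软件:
+
+```
+# 锁定当前会话(依赖 systemd-logind,Fedora 默认提供)
+loginctl lock-session
+
+# 截取全屏并保存到指定文件(需要安装 gnome-screenshot)
+gnome-screenshot -f ~/Pictures/remote-screenshot.png
+```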
+
+### 探索并享受乐趣
+
+GSConnect 带来了极大的便利和舒适。深入研究一下首选项,看看你能做的所有事情,灵活地使用命令功能发挥创意,并欢迎在下面的评论中分享你解锁的新玩法。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/connect-fedora-to-your-android-phone-with-gsconnect/
+
+作者:[Lokesh Krishna][a]
+选题:[lujun9972][b]
+译者:[chai-yuan](https://github.com/chai-yuan)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/lowkeyskywalker/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/12/gsconnect-816x345.jpg
+[2]: https://extensions.gnome.org/extension/1319/gsconnect/
+[3]: https://fedoramagazine.org/install-gnome-shell-extension/
+[4]: https://play.google.com/store/apps/details?id=org.kde.kdeconnect_tp
+[5]: https://f-droid.org/en/packages/org.kde.kdeconnect_tp/
+[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/within-the-menu-1024x576.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2020/01/pair-request-1024x576.png
+[8]: https://fedoramagazine.org/wp-content/uploads/2020/01/permissions-1024x576.png
+[9]: https://fedoramagazine.org/wp-content/uploads/2020/01/send-to-mobile-2-1024x576.png
+[10]: https://addons.mozilla.org/en-US/firefox/addon/gsconnect/
+[11]: https://chrome.google.com/webstore/detail/gsconnect/jfnifeihccihocjbfcfhicmmgpjicaec
+[12]: https://fedoramagazine.org/wp-content/uploads/2020/01/commands-1024x576.png
+[13]: https://userbase.kde.org/KDE_Connect/Tutorials/Useful_commands
+[14]: https://unsplash.com/@pathum_danthanarayana?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[15]: https://unsplash.com/s/photos/android?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/published/20200207 Customize your internet with an open source search engine.md b/published/20200207 Customize your internet with an open source search engine.md
new file mode 100644
index 0000000000..ed102d2f0e
--- /dev/null
+++ b/published/20200207 Customize your internet with an open source search engine.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11905-1.html)
+[#]: subject: (Customize your internet with an open source search engine)
+[#]: via: (https://opensource.com/article/20/2/open-source-search-engine)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+用开源搜索引擎定制你的互联网
+======
+
+> 上手开源的对等 Web 索引器 YaCy。
+
+![](https://img.linux.net.cn/data/attachment/album/202002/19/103541la7erglz7oloa4ye.jpg)
+
+很久以前,互联网很小,小到几个人就可以为它建立索引:他们收集所有网站的名称和链接,并按主题把它们列在网页上或印成书。随着万维网的发展,出现了“网站环”的形式,内容、主题或风格相近的站点捆绑在一起,形成一条通往每个成员站点的循环路径。环中任何站点的访问者都可以单击按钮转到环中的下一个或上一个站点,以发现与自己兴趣相关的新站点。
+
+又过了一段时间,互联网似乎变得臃肿不堪了。每个人都在网络上,有很多冗余信息和垃圾邮件,多到让你无法找到任何东西。Yahoo 和 AOL、CompuServe 以及类似的服务各自采用了不同的方法来解决这个问题,但是直到谷歌出现后,现代的搜索模型才得以普及。按谷歌的做法,互联网应该通过搜索引擎进行索引、排序和排名。
+
+### 为什么选择开源替代品?
+
+像谷歌和 DuckDuckGo 这样的搜索引擎显然是卓有成效的,你可能就是通过搜索引擎访问到本站的。尽管对于内容因站点没有遵循搜索引擎优化(SEO)最佳实践而被埋没这件事仍有争论,但要管理互联网上丰富的文化、知识乃至琐碎的信息,现代的解决方案就是冷冰冰的索引。
+
+但是,也许出于隐私方面的考虑,或者你希望为让互联网更加独立出一份力,你可能不愿意使用谷歌或 DuckDuckGo。如果是这样,不妨试试 [YaCy][2],它是一个对等的互联网索引器和搜索引擎。
+
+### 安装 YaCy
+
+要安装并尝试 YaCy,请首先确保已安装 Java。如果你使用的是 Linux,则可以按照我的《[如何在 Linux 上安装 Java][3]》中的说明进行操作。如果你使用 Windows 或 MacOS,请从 [AdoptOpenJDK.net][4] 获取安装程序。
+
+安装 Java 后,请根据你的平台[下载安装程序][5]。
+
+如果你使用的是 Linux,请解压缩 tarball 并将其移至 `/opt` 目录:
+
+```
+$ sudo tar --extract --file yacy_*z --directory /opt
+```
+
+根据下载的安装程序的说明启动 YaCy。
+
+在 Linux 上,用以下命令在后台启动 YaCy:
+
+```
+$ /opt/startYACY.sh &
+```
+
+在 Web 浏览器中,导航到 `localhost:8090` 并进行搜索。
+
+![YaCy start page][6]
+
+### 将 YaCy 添加到你的地址栏
+
+如果你使用的是 Firefox Web 浏览器,则只需单击几下,即可在 Awesome Bar(Mozilla 给 URL 栏起的名称)中将 YaCy 设置为默认搜索引擎。
+
+首先,如果 Firefox 工具栏中尚未显示专用搜索栏,请先让它显示出来(不必让搜索栏一直可见,只需在添加自定义搜索引擎期间启用它即可)。搜索栏可以在 Firefox 右上角“汉堡”菜单的“自定义”界面中添加。当工具栏上的搜索栏可见后,导航至 `localhost:8090`,然后单击搜索栏中的放大镜图标,再单击相应的选项,把 YaCy 添加到 Firefox 的搜索引擎中。
+
+![Adding YaCy to Firefox][7]
+
+完成此操作后,你可以在 Firefox 首选项中将其标记为默认值,或者仅在 Firefox 搜索栏中执行的搜索中选择性地使用它。如果将其设置为默认搜索引擎,则可能不需要专用搜索栏,因为 Awesome Bar 也使用默认引擎,因此可以将其从工具栏中删除。
+
+### 对等搜索引擎如何工作
+
+YaCy 是一个开源的分布式搜索引擎。它用 [Java][8] 编写,因此可以在任何平台上运行,并且可以执行 Web 爬取、索引和搜索。这是一个对等(P2P)网络,因此每个运行 YaCy 的用户都在为持续跟踪互联网的变化出一份力。当然,没有哪个用户能拥有整个互联网的完整索引,因为那需要一个数据中心才装得下,但该索引分布在所有 YaCy 用户之间,并且是冗余的。它与 BitTorrent 非常相似(因为它使用分布式哈希表 DHT 来引用索引条目),只不过你共享的数据是单词与 URL 的关联矩阵。由于哈希表返回的结果是混合在一起的,没人能知道是谁搜索了哪些单词,因此所有搜索实际上都是匿名的。对于无偏见、无广告、不跟踪且匿名的搜索来说,这是一个有效的系统,而你只要使用它,就已经参与其中了。
+
+### 搜索引擎和算法
+
+索引互联网的行为是指将网页分成单个单词,然后将页面的 URL 与每个单词相关联。在搜索引擎中搜索一个或多个单词将获取与该查询关联的所有 URL。YaCy 客户端在运行时也是如此。
+
+客户端要做的另一件事是为你的浏览器提供搜索界面。你可以将 Web 浏览器指向 `localhost:8090` 来搜索 YaCy,而不是在要搜索时导航到谷歌。你甚至可以将其添加到浏览器的搜索栏中(取决于浏览器的可扩展性),因此可以从 URL 栏中进行搜索。
+
+### YaCy 的防火墙设置
+
+首次开始使用 YaCy 时,它可能运行在“初级”模式下。这意味着你的客户端爬网的站点仅对你可用,因为其他 YaCy 客户端无法访问你的索引条目。要加入对等环境,必须在路由器的防火墙(或者你正在运行的软件防火墙)中打开端口 8090,这称为“高级”模式。
+
+如果你使用的是 Linux,则可以在《[使用防火墙让你的 Linux 更加强大][9]》中找到有关计算机防火墙的更多信息。在其他平台上,请参考操作系统的文档。
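+
+举个例子,如果你的 Linux 系统用的是 firewalld(Fedora 等发行版的默认防火墙),可以用下面的命令放行 8090 端口;这只是一个示例,其他防火墙工具的写法会不一样:
+
+```
+sudo firewall-cmd --add-port=8090/tcp --permanent
+sudo firewall-cmd --reload
+```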
+
+互联网服务提供商(ISP)提供的路由器上几乎总是启用了防火墙,并且有太多种类的防火墙无法准确说明。大多数路由器都提供了在防火墙上“打洞”的选项,因为许多流行的联网游戏都需要双向流量。
+
+如果你知道如何登录路由器(通常为 192.168.0.1 或 10.1.0.1,但可能因制造商的设置而异),则登录并查找配置面板来控制“防火墙”或“端口转发”或“应用”。
+
+找到路由器防火墙的首选项后,将端口 8090 添加到白名单。例如:
+
+![Adding YaCy to an ISP router][10]
+
+如果路由器正在进行端口转发,则必须使用相同的端口将传入的流量转发到计算机的 IP 地址。例如:
+
+![Adding YaCy to an ISP router][11]
+
+如果由于某种原因无法调整防火墙设置,那也没事。YaCy 将继续以初级模式运行并作为对等搜索网络的客户端运行。
+
+### 你的互联网
+
+使用 YaCy 搜索引擎可以做的不仅仅是被动搜索。你可以强制抓取不太显眼的网站,可以请求对某个网站进行爬取,也可以选择用 YaCy 做本地搜索,等等。你可以更好地控制*你的*互联网所呈现的一切。处于高级模式的用户越多,被索引的网站就越多;被索引的网站越多,所有用户的体验就越好。加入吧!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/open-source-search-engine
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
+[2]: https://yacy.net/
+[3]: https://linux.cn/article-11614-1.html
+[4]: https://adoptopenjdk.net/releases.html
+[5]: https://yacy.net/download_installation/
+[6]: https://opensource.com/sites/default/files/uploads/yacy-startpage.jpg (YaCy start page)
+[7]: https://opensource.com/sites/default/files/uploads/yacy-add-firefox.jpg (Adding YaCy to Firefox)
+[8]: https://opensource.com/resources/java
+[9]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
+[10]: https://opensource.com/sites/default/files/uploads/router-add-app.jpg (Adding YaCy to an ISP router)
+[11]: https://opensource.com/sites/default/files/uploads/router-add-app1.jpg (Adding YaCy to an ISP router)
diff --git a/published/20200207 NVIDIA-s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux.md b/published/20200207 NVIDIA-s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux.md
new file mode 100644
index 0000000000..4a6e9f660f
--- /dev/null
+++ b/published/20200207 NVIDIA-s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11888-1.html)
+[#]: subject: (NVIDIA’s Cloud Gaming Service GeForce NOW Shamelessly Ignores Linux)
+[#]: via: (https://itsfoss.com/geforce-now-linux/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+NVIDIA 的云游戏服务 GeForce NOW 无耻地忽略了 Linux
+======
+
+对于那些手头没有顶级硬件、却想在最新最好的游戏中获得尽可能好的体验的玩家来说,NVIDIA 的 [GeForce NOW][1] 云游戏服务(游戏在线串流,可以在任何设备上玩)是很有前景的。
+
+该服务仅限于一些用户(以等待列表的形式)使用。然而,他们最近宣布 [GeForce NOW 面向所有人开放][2]。但实际上并不是。
+
+有趣的是,它**并不是面向全球所有区域**。而且,更糟的是 **GeForce NOW 不支持 Linux**。
+
+![][3]
+
+### GeForce NOW 并不是向“所有人开放”
+
+制作一个基于订阅的云服务来玩游戏的目的是消除平台依赖性。
+
+就像你通常可以用浏览器访问任意网站一样,你也应该能够在任何平台上玩到游戏。这就是它的理念,对吧?
+
+![][4]
+
+好吧,这显然算不上什么高深的技术,但 NVIDIA 仍然不支持 Linux(和 iOS)?
+
+### 是因为没有人使用 Linux 吗?
+
+即便这就是它不支持 Linux 的原因,我也非常不认同。如果真是这样,我就不会把 Linux 作为主要桌面操作系统来为 “It’s FOSS” 写文章了。
+
+不仅如此,如果 Linux 不值一提,你认为为何一个 Twitter 用户会提到缺少 Linux 支持?
+
+![][5]
+
+是的,也许用户群不够大,但是在考虑将其作为基于云的服务时,**不支持 Linux** 显得没有意义。
+
+从技术上讲,如果 Linux 上没有游戏,那么 **Valve** 就不会在 Linux 上改进 [Steam Play][6] 来帮助更多用户在 Linux 上玩纯 Windows 的游戏。
+
+我不想发表任何不准确的言论,但 Linux 桌面游戏的发展比以往任何时候都要快(即使在统计数字上仍低于 Mac 和 Windows)。
+
+### 云游戏不应该像这样
+
+![][7]
+
+如上所述,找到使用 Steam Play 的 Linux 玩家不难。只是你会发现 Linux 上游戏玩家的整体“市场份额”低于其他平台。
+
+即使这是事实,云游戏也不应该依赖于特定平台。而且,考虑到 GeForce NOW 本质上是一种基于浏览器的可以玩游戏的流媒体服务,所以对于像 NVIDIA 这样的大公司来说,支持 Linux 并不困难。
+
+来吧,Nvidia,*你想要我们相信在技术上支持 Linux 有困难?或者,你只是想说不值得支持 Linux 平台?*
+
+### 结语
+
+不管我为 GeForce NOW 服务发布而感到多么兴奋,当看到它根本不支持 Linux,我感到非常失望。
+
+如果像 GeForce NOW 这样的云游戏服务在不久的将来开始支持 Linux,**你可能没有理由使用 Windows 了**(*咳嗽*)。
+
+你怎么看待这件事?在下面的评论中让我知道你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/geforce-now-linux/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.nvidia.com/en-us/geforce-now/
+[2]: https://blogs.nvidia.com/blog/2020/02/04/geforce-now-pc-gaming/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now-linux.jpg?ssl=1
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/nvidia-geforce-now.png?ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/geforce-now-twitter-1.jpg?ssl=1
+[6]: https://itsfoss.com/steam-play/
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/ge-force-now.jpg?ssl=1
diff --git a/published/20200210 Install All Essential Media Codecs in Ubuntu With This Single Command -Beginner-s Tip.md b/published/20200210 Install All Essential Media Codecs in Ubuntu With This Single Command -Beginner-s Tip.md
new file mode 100644
index 0000000000..5c6d5e90ea
--- /dev/null
+++ b/published/20200210 Install All Essential Media Codecs in Ubuntu With This Single Command -Beginner-s Tip.md
@@ -0,0 +1,114 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11906-1.html)
+[#]: subject: (Install All Essential Media Codecs in Ubuntu With This Single Command [Beginner’s Tip])
+[#]: via: (https://itsfoss.com/install-media-codecs-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+一条命令在 Ubuntu 中安装所有基本的媒体编解码器
+======
+
+如果你刚刚安装了 Ubuntu 或其他 [Ubuntu 特色版本][1] 如 Kubuntu、Lubuntu 等,你会注意到系统无法播放某些音频或视频文件。
+
+对于视频文件,你可以[在 Ubuntu 上安装 VLC][2]。[VLC][3] 是 [Linux 上的最佳视频播放器][4]之一,它几乎可以播放任何视频文件格式。但你仍然会遇到无法播放音频和 flash 的麻烦。
+
+好消息是 [Ubuntu][5] 提供了一个软件包来安装所有基本的媒体编解码器:ubuntu-restricted-extras。
+
+![][6]
+
+### 什么是 Ubuntu Restricted Extras?
+
+ubuntu-restricted-extras 是一个软件包,其中汇集了各种基本软件,例如 Flash 插件、[unrar][7]、[gstreamer][8]、mp4 支持、适用于 [Ubuntu 中的 Chromium 浏览器][9]的编解码器等。
+
+由于这些软件不是开源软件,并且其中一些涉及软件专利,因此 Ubuntu 默认情况下不会安装它们。你必须使用 multiverse 仓库,它是 Ubuntu 专门为用户提供非开源软件而创建的仓库。
+
+请阅读本文以[了解有关各种 Ubuntu 仓库的更多信息][10]。
+
+### 如何安装 Ubuntu Restricted Extras?
+
+令我惊讶的是,我发现软件中心未列出 Ubuntu Restricted Extras。不管怎样,你都可以使用命令行安装该软件包,这非常简单。
+
+在菜单中搜索或使用[终端键盘快捷键 Ctrl+Alt+T][11] 打开终端。
+
+由于 ubuntu-restricted-extras 软件包位于 multiverse 仓库中,因此你应先确认系统上已启用 multiverse 仓库:
+
+```
+sudo add-apt-repository multiverse
+```
+
+然后你可以使用以下命令安装:
+
+```
+sudo apt install ubuntu-restricted-extras
+```
+
+输入回车后,你会被要求输入密码,**当你输入密码时,屏幕不会有显示**。这是正常的。输入你的密码并回车。
+
+它将显示大量要安装的包。按回车确认选择。
+
+你会看到 [EULA][12](最终用户许可协议),如下所示:
+
+![Press Tab key to select OK and press Enter key][13]
+
+浏览此页面可能会有点别扭,但请放心,只需按 Tab 键即可在选项之间移动高亮。当正确的选项被高亮时,按下回车确认你的选择。
+
+![Press Tab key to highlight Yes and press Enter key][14]
+
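+如果你需要以非交互方式完成安装(例如在脚本中批量部署),可以提前用 `debconf-set-selections` 接受这份 EULA。下面是一个常见写法,仅供参考,具体的 debconf 键名请以你系统上的实际情况为准:
+
+```
+echo "ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true" | sudo debconf-set-selections
+sudo apt install -y ubuntu-restricted-extras
+```
+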
+安装完成后,由于新安装的媒体编解码器,你应该可以播放 MP3 和其他媒体格式了。
+
+##### 在 Kubuntu、Lubuntu、Xubuntu 上安装受限制的额外软件包
+
+请记住,Kubuntu、Lubuntu 和 Xubuntu 也都有这个软件包,只是名字各不相同。它们本可以使用相同的名字,但不幸的是并没有。
+
+在 Kubuntu 上,使用以下命令:
+
+```
+sudo apt install kubuntu-restricted-extras
+```
+
+在 Lubuntu 上,使用:
+
+```
+sudo apt install lubuntu-restricted-extras
+```
+
+在 Xubuntu 上,你应该使用:
+
+```
+sudo apt install xubuntu-restricted-extras
+```
+
+我一直建议把安装 ubuntu-restricted-extras 作为[安装 Ubuntu 后要做的基本事情][15]之一,只需一条命令即可在 Ubuntu 中装好多种编解码器。
+
+希望你喜欢 Ubuntu 初学者系列中这一技巧。以后,我将分享更多此类技巧。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-media-codecs-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/which-ubuntu-install/
+[2]: https://itsfoss.com/install-latest-vlc/
+[3]: https://www.videolan.org/index.html
+[4]: https://itsfoss.com/video-players-linux/
+[5]: https://ubuntu.com/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/Media_Codecs_in_Ubuntu.png?ssl=1
+[7]: https://itsfoss.com/use-rar-ubuntu-linux/
+[8]: https://gstreamer.freedesktop.org/
+[9]: https://itsfoss.com/install-chromium-ubuntu/
+[10]: https://itsfoss.com/ubuntu-repositories/
+[11]: https://itsfoss.com/ubuntu-shortcuts/
+[12]: https://en.wikipedia.org/wiki/End-user_license_agreement
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras.jpg?ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras_1.jpg?ssl=1
+[15]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
diff --git a/published/20200210 Playing Music on your Fedora Terminal with MPD and ncmpcpp.md b/published/20200210 Playing Music on your Fedora Terminal with MPD and ncmpcpp.md
new file mode 100644
index 0000000000..b78452578b
--- /dev/null
+++ b/published/20200210 Playing Music on your Fedora Terminal with MPD and ncmpcpp.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: (chai-yuan)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11909-1.html)
+[#]: subject: (Playing Music on your Fedora Terminal with MPD and ncmpcpp)
+[#]: via: (https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/)
+[#]: author: (Carmine Zaccagnino https://fedoramagazine.org/author/carzacc/)
+
+在你的 Fedora 终端上播放音乐
+======
+
+![][1]
+
+MPD(Music Playing Daemon),顾名思义,是一个音乐(Music)播放(Playing)守护进程(Daemon)。它可以播放音乐,并且作为一个守护进程,任何软件都可以与之交互并播放声音,包括一些 CLI 客户端。
+
+其中一个被称为 `ncmpcpp`,它是对之前 `ncmpc` 工具的改进。名字的变化与编写它们的语言没有太大关系(两者都是 C++),之所以叫 `ncmpcpp`,是因为它是 “NCurses Music Playing Client Plus Plus”。
+
+### 安装 MPD 和 ncmpcpp
+
+`ncmpcpp` 客户端可以用 `dnf` 命令直接从 Fedora 官方仓库安装:
+
+```
+$ sudo dnf install ncmpcpp
+```
+
+另一方面,MPD 必须从 RPM Fusion 的 free 仓库安装,你可以运行以下命令来启用该仓库:
+
+```
+$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
+```
+
+然后你可以运行下面的命令安装它:
+
+```
+$ sudo dnf install mpd
+```
+
+### 配置并启用 MPD
+
+设置 MPD 最简单的方法是以普通用户的身份运行它。默认情况是以专用 `mpd` 用户的身份运行它,但这会导致各种权限问题。
+
+在运行它之前,我们需要创建一个本地配置文件,允许我们作为普通用户运行。
+
+首先在 `~/.config` 里创建一个名叫 `mpd` 的目录:
+
+```
+$ mkdir ~/.config/mpd
+```
+
+将配置文件拷贝到此目录下:
+
+```
+$ cp /etc/mpd.conf ~/.config/mpd
+```
+
+然后用 `vim`、`nano` 或 `gedit` 之类的软件编辑它:
+
+```
+$ nano ~/.config/mpd/mpd.conf
+```
+
+我建议你通读所有内容,检查是否有任何需要做的事情,但对于大多数设置你都可以删除,只需保留以下内容:
+
+```
+db_file "~/.config/mpd/mpd.db"
+log_file "syslog"
+```
+
+现在你可以运行它了:
+
+```
+$ mpd
+```
+
+如果没有报错,它就会在后台启动 MPD 守护进程。
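+
+如果你希望 MPD 在每次登录后自动启动,而不必手动运行 `mpd`,可以试试 systemd 用户服务。前提是你的发行版打包的 mpd 附带了用户级的 `mpd.service` 单元(RPM Fusion 的包通常如此):
+
+```
+systemctl --user enable --now mpd.service
+```
+
+这与手动运行 `mpd` 的效果相同,只是改由 systemd 在登录时负责拉起并管理它。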
+
+### 使用 ncmpcpp
+
+只需运行:
+
+```
+$ ncmpcpp
+```
+
+你将在终端中看到一个由 ncurses 所支持的图形用户界面。
+
+按下 `4` 键,然后就可以看到本地的音乐目录,用方向键进行选择并按下回车进行播放。
+
+多播放几个歌曲就会创建一个*播放列表*,让你可以使用 `>` 键(不是右箭头, 是右尖括号)移动到下一首,并使用 `<` 返回上一首。`+` 和 `–` 键可以调节音量。`Q` 键可以让你退出 `ncmpcpp` 但不停止播放音乐。你可以按下 `P` 来控制暂停和播放。
+
+你可以按下 `1` 键来查看当前播放列表(这是默认的视图)。从这个视图中,你可以按 `i` 查看有关当前歌曲的信息(标签)。按 `6` 可更改当前歌曲的标签。
+
+按 `\` 按钮将在视图顶部添加(或删除)信息面板。在左上角,你可以看到如下的内容:
+
+```
+[------]
+```
+
+按下 `r`、`z`、`y`、`R`、`x` 将会分别切换到 `repeat`、`random`、`single`、`consume` 和 `crossfade` 等播放模式,并将这个小指示器中的 `–` 字符替换为选定模式。
+
+按下 `F1` 键将会显示一些帮助文档,包含一系列的键绑定列表,因此无需在此处列出完整列表。所以继续吧!做一个极客,在你的终端上播放音乐!
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/playing-music-on-your-fedora-terminal-with-mpd-and-ncmpcpp/
+
+作者:[Carmine Zaccagnino][a]
+选题:[lujun9972][b]
+译者:[chai-yuan](https://github.com/chai-yuan)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/carzacc/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/play_music_mpd-816x346.png
+[2]: https://rpmfusion.org/Configuration
diff --git a/published/20200212 How to Change the Default Terminal in Ubuntu.md b/published/20200212 How to Change the Default Terminal in Ubuntu.md
new file mode 100644
index 0000000000..b90a79887f
--- /dev/null
+++ b/published/20200212 How to Change the Default Terminal in Ubuntu.md
@@ -0,0 +1,84 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11903-1.html)
+[#]: subject: (How to Change the Default Terminal in Ubuntu)
+[#]: via: (https://itsfoss.com/change-default-terminal-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Ubuntu 中更改默认终端
+======
+
+终端Terminal是 Linux 系统的关键部分。它能让你通过 shell 访问 Linux 系统。Linux 上有多个终端应用(技术上称为终端仿真器)。
+
+大多数[桌面环境][1]都有自己的终端实现。它们的外观可能有所不同,并且可能有不同的快捷键。例如,[Guake 终端][2]对高级用户非常有用,它提供了一些可能无法在发行版默认终端中使用的功能。
+
+你可以在系统上安装其他终端,并将其设为默认,这样就可以通过[快捷键 Ctrl+Alt+T][3] 打开它。
+
+现在问题来了,如何在 Ubuntu 中更改默认终端。它没有遵循[更改 Ubuntu 中的默认应用][4]的标准方式,要怎么做?
+
+### 更改 Ubuntu 中的默认终端
+
+![][5]
+
+在基于 Debian 的发行版中,有一个方便的命令行程序,称为 [update-alternatives][6],可用于处理默认应用。
+
+你可以使用它来更改默认的命令行文本编辑器、终端等。为此,请运行以下命令:
+
+```
+sudo update-alternatives --config x-terminal-emulator
+```
+
+它将显示系统上存在的所有可作为默认值的终端仿真器。当前的默认终端标有星号。
+
+```
+abhishek@nuc:~$ sudo update-alternatives --config x-terminal-emulator
+There are 2 choices for the alternative x-terminal-emulator (providing /usr/bin/x-terminal-emulator).
+
+ Selection Path Priority Status
+------------------------------------------------------------
+ 0 /usr/bin/gnome-terminal.wrapper 40 auto mode
+ 1 /usr/bin/gnome-terminal.wrapper 40 manual mode
+* 2 /usr/bin/st 15 manual mode
+
+Press <enter> to keep the current choice[*], or type selection number:
+```
+
+你要做的就是输入选择编号。对我而言,我想使用 GNOME 终端,而不是来自 [Regolith 桌面][7]的终端。
+
+```
+Press <enter> to keep the current choice[*], or type selection number: 1
+update-alternatives: using /usr/bin/gnome-terminal.wrapper to provide /usr/bin/x-terminal-emulator (x-terminal-emulator) in manual mode
+```
+
+> **自动模式 vs 手动模式**
+>
+> 你可能已经在 `update-alternatives` 命令的输出中注意到了自动模式和手动模式。
+>
+> 如果选择自动模式,那么在安装或删除软件包时,系统可能会自动决定默认应用。该决定受优先级数字的影响(如上一节中的命令输出所示)。
+>
+> 假设你的系统上安装了 5 个终端仿真器,并删除了默认的仿真器。现在,你的系统将检查哪些仿真器处于自动模式。如果有多个,它将选择优先级最高的一个作为默认仿真器。
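+
+顺带一提,如果你已经知道想用的终端所对应的路径,也可以不走交互式菜单,直接用 `--set` 指定。下面的路径只是示例,请以 `--config` 输出中列出的实际路径为准:
+
+```
+sudo update-alternatives --set x-terminal-emulator /usr/bin/gnome-terminal.wrapper
+```
+
+这会把该终端设为手动模式下的默认值,效果与在交互式菜单中输入对应编号相同。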
+
+我希望你觉得这个小技巧有用。随时欢迎提出问题和建议。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/change-default-terminal-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-desktop-environments/
+[2]: http://guake-project.org/
+[3]: https://itsfoss.com/ubuntu-shortcuts/
+[4]: https://itsfoss.com/change-default-applications-ubuntu/
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/switch_default_terminal_ubuntu.png?ssl=1
+[6]: https://manpages.ubuntu.com/manpages/trusty/man8/update-alternatives.8.html
+[7]: https://itsfoss.com/regolith-linux-desktop/
diff --git a/sources/README.md b/sources/README.md
new file mode 100644
index 0000000000..5615087474
--- /dev/null
+++ b/sources/README.md
@@ -0,0 +1 @@
+这里放待翻译的文件。
diff --git a/sources/news/20191207 New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news.md b/sources/news/20191207 New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news.md
deleted file mode 100644
index 93297c4cad..0000000000
--- a/sources/news/20191207 New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news.md
+++ /dev/null
@@ -1,93 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news)
-[#]: via: (https://opensource.com/article/19/12/news-december-7)
-[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
-
-New machine learning from Alibaba and Netflix, mimicking animal vision, and more open source news
-======
-Catch up on the biggest open source headlines from the past two weeks.
-![Weekly news roundup with TV][1]
-
-In this edition of our open source news roundup, we take a look an open source election auditing tool, new open source from Alibaba and Netflix, mimicking animal vision, and more!
-
-### Alibaba and Netflix share machine learning and data science software
-
-Two companies at the forefront of machine learning and data science have just released some of their tools under open source licenses.
-
-Chinese ecommerce giant Alibaba just [open sourced the algorithm libraries][2] for its Alink platform. The algorithms "are essential to support machine learning tasks such as online product recommendations and smart customer services." According to Jia Yangqing, president of Alibaba Cloud, Alink is a good fit for "developers seeking big data and machine-learning tools." You can find the source code for Alink (which is under an Apache 2.0 license) [on GitHub][3], with documentation in both Chinese and English.
-
-Not to be outdone, streaming service Netflix just released its [Metaflow Python library][4] under an Apache 2.0 license. Metaflow enables data scientists to "see early on whether a prototyped model would fail in production, allowing them to fix whatever the issue was". It also works with a number of Python data science libraries, like SciKit Learn, Pytorch, and Tensorflow. You can grab Metaflow's code from [its GitHub repository][5] or learn more about it at the [Metaflow website][6].
-
-### Open source software to mimic animal vision
-
-Have you ever wondered how your dog or cat sees the world? Thanks to work by researchers at the University of Exeter in the UK and Australia's University of Queensland, you can find out. The team just released [software that allows humans to see the world as animals do][7].
-
-Called micaToolbox, the software can interpret digital photos and process images of various environments by mimicking the limitations of animal vision. Anyone with a camera, a computer, or smartphone can use the software without knowing how to code. But micaToolbox isn't just a novelty. It's a serious scientific tool that can help "help biologists better understand a variety of animal behaviors, including mating systems, distance-dependent signalling and mimicry." And, according to researcher Jolyon Troscianko, the software can help identify "how an animal's camouflage works so that we can manage our land to protect certain species."
-
-You can [download micaBox][8] or [browse its source code][9] on GitHub.
-
-### New tool for post-election auditing
-
-More and more aspects of our lives and institutions are being automated. With that comes an increased danger of systems breaking down or malicious someones tampering with those systems. Open source gives us an opportunity to look at exactly how the automation works.
-
-Elections, in particular, are increasingly vulnerable. To combat election tampering, the US Cybersecurity and Infrastructure Security Agency (CISA) has joined forces with the non-profit organization VotingWorks to create a [web-based application for auditing ballots][10].
-
-Called Arlo, the application is designed to ensure that "elections are secure, resilient, and transparent," said CISA's director Chris Krebs. Arlo works with a range of automated voting systems to help "officials compare audited votes to tabulated votes, and providing monitoring & reporting capabilities." Arlo was used to verify the results of recent state and local elections and is being further field-tested in the states of Georgia, Michigan, Missouri, Ohio, Pennsylvania, and Virginia.
-
-Arlo's source code, released under an AGPL-3.0 license, is [available on GitHub][11].
-
-### Royal Navy debuts open source application development kit
-
-Consistency across user interfaces is key to a successful set of applications and services. The UK's Royal Navy understands the importance of this and has released the [open source NELSON standards toolkit][12] to help its developers and suppliers "save time and give users a consistent experience."
-
-Named after the legendary British admiral, NELSON is intended to "maintain high visual consistency and user-experience quality across the different applications developed or subcontracted by the Royal Navy." The toolkit consists of a set of components including visual styles, typographic elements, forms, elements like buttons and checkboxes, and notifications.
-
-NELSON has its own [GitHub repository][13], from which the Royal Navy encourages developers to make pull requests.
-
-#### In other news
-
- * [Council group plans for open source revenues and benefits platform][14]
- * [Introducing Nebula, the open source global overlay network from Slack][15]
- * [webOS Open Source Edition 2.0 keeps Palm's spirit alive in cars and IoT][16]
- * [Duke University Introduces an Open Source Tool as an Alternative to a Monolithic LMS][17]
- * [Open Source Technology Could Be a Boon to Farmers][18]
-
-
-
-_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/12/news-december-7
-
-作者:[Scott Nesbitt][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/scottnesbitt
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
-[2]: https://www.zdnet.com/article/alibaba-cloud-publishes-machine-learning-algorithm-on-github/
-[3]: https://github.com/alibaba/alink
-[4]: https://www.zdnet.com/article/netflix-our-metaflow-python-library-for-faster-data-science-is-now-open-source/
-[5]: https://github.com/Netflix/metaflow
-[6]: https://metaflow.org/
-[7]: https://www.upi.com/Science_News/2019/12/03/Novel-software-helps-scientists-see-what-animals-see/5961575389734/
-[8]: http://www.empiricalimaging.com/download/micatoolbox/
-[9]: https://github.com/troscianko/micaToolbox
-[10]: https://www.zdnet.com/article/cisa-and-votingworks-release-open-source-post-election-auditing-tool/
-[11]: https://github.com/votingworks/arlo
-[12]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/open-source-royal-navy
-[13]: https://github.com/Royal-Navy/standards-toolkit
-[14]: https://www.ukauthority.com/articles/council-group-plans-for-open-source-revenues-and-benefits/
-[15]: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579
-[16]: https://www.slashgear.com/webos-open-source-edition-2-0-keeps-palms-spirit-alive-in-cars-and-iot-25601309/
-[17]: https://iblnews.org/duke-university-introduces-an-open-source-tool-as-an-alternative-to-a-monolithic-lms/
-[18]: https://civileats.com/2019/12/02/open-source-technology-could-be-a-boon-to-farmers/
diff --git a/sources/news/20191209 First Ever Release of Ubuntu Cinnamon Distribution is Finally Here.md b/sources/news/20191209 First Ever Release of Ubuntu Cinnamon Distribution is Finally Here.md
deleted file mode 100644
index 09af65568d..0000000000
--- a/sources/news/20191209 First Ever Release of Ubuntu Cinnamon Distribution is Finally Here.md
+++ /dev/null
@@ -1,85 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (First Ever Release of Ubuntu Cinnamon Distribution is Finally Here!)
-[#]: via: (https://itsfoss.com/ubuntu-cinnamon/)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-First Ever Release of Ubuntu Cinnamon Distribution is Finally Here!
-======
-
-_**Brief: Ubuntu Cinnamon is a new distribution that utilizes Linux Mint’s Cinnamon desktop environment on top of Ubuntu code base. It’s first stable release is based on Ubuntu 19.10 Eoan Ermine.**_
-
-[Cinnamon][1] is Linux Mint’s flagship desktop environment. Like [MATE desktop][2], Cinnamon is also a product of dissatisfaction with GNOME 3. With the GNOME Classic like user interface and relatively lower hardware requirements, Cinnamon soon gathered a dedicated userbase.
-
-Like any other desktop environment out there, you can [install Cinnamon on Ubuntu][3] and other distributions.
-
-Installing multiple [desktop environments][4] (DE) is not a difficult task but it often leads to conflicts (with other DE’s elements) and may not always provide the best experience. This is why major Linux distributions separate spins/flavors with various popular desktop environments.
-
-[Ubuntu also has various official flavors][5] featuring [KDE][6] (Kubuntu), [LXQt][7] (Lubuntu), Xfce (Xubuntu), Budgie ([Ubuntu Budgie][8]) etc. Cinnamon was not in this list but Ubuntu Cinnamon Remix project is trying to change that.
-
-### Ubuntu Cinnamon distribution
-
-![Ubuntu Cinnamon Desktop Screenshot][9]
-
-[Ubuntu Cinnamon][10] (website under construction) is a new Linux distribution that brings Cinnamon desktop to Ubuntu distribution. Joshua Peisach is the lead developer for the project and he is being helped by other volunteer contributors. The ex-developer of the now discontinued Ubuntu GNOME project and some members from Ubuntu team are also advising the team to help with the development.
-
-![Ubuntu Cinnamon Remix Screeenshot 1][11]
-
-Do note that Ubuntu Cinnamon is not an official flavor of Ubuntu. They are trying to get the flavorship but I think that will take a few more releases.
-
-The first stable release of Ubuntu Cinnamon is based on [Ubuntu 19.10 Eoan Ermine][12]. It uses Calamares installer from Lubuntu and features Cinnamon desktop version 4.0.10. Naturally, it uses Nemo file manager and LightDM.
-
-It supports EFI and UEFI and only comes with 64-bit support.
-
-You’ll get your regular goodies like LibreOffice, Firefox and some GNOME software and games. You can of course install more applications as per your need.
-
-### Download and install Ubuntu Cinnamon
-
-Do note that this is the first ever release of Ubuntu Cinnamon and the developers are not that experienced at this moment.
-
-If you don’t like troubleshooting, don’t use it on your main system. I expect this release to have a few bugs and issues which will be fixed eventually as more users test it out.
-
-You can download Ubuntu Cinnamon ISO from Sourceforge website:
-
-[Download Ubuntu Cinnamon][13]
-
-### What next from here?
-
-The dev team has a few improvements planned for the 20.04 release. The changes are mostly on the cosmetics though. There will be new GRUB and Plymouth theme, layout application and welcome screen.
-
-I downloaded it and tried it in a live session. Here’s what this distribution looks like:
-
-[Subscribe to our YouTube channel for more Linux videos][14]
-
-Meanwhile, if you manage to try it on your own, why not share your experience in the comments? If you use Linux Mint, will you switch to Ubuntu Cinnamon in near future? What are your overall opinion about this new project? Do share it in the comment section.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/ubuntu-cinnamon/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Cinnamon_(desktop_environment)
-[2]: https://mate-desktop.org/
-[3]: https://itsfoss.com/install-cinnamon-on-ubuntu/
-[4]: https://itsfoss.com/best-linux-desktop-environments/
-[5]: https://itsfoss.com/which-ubuntu-install/
-[6]: https://kde.org/
-[7]: https://lxqt.org/
-[8]: https://itsfoss.com/ubuntu-budgie-18-review/
-[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/ubuntu_cinnamon_distribution_screenshot.jpg?ssl=1
-[10]: https://ubuntucinnamon.org/
-[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/ubuntu_cinnamon_remix_screeenshot_1.jpg?ssl=1
-[12]: https://itsfoss.com/ubuntu-19-10-released/
-[13]: https://sourceforge.net/projects/ubuntu-cinnamon-remix/
-[14]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
diff --git a/sources/news/20191209 KubeCon gets bigger, the kernel gets better, and more industry trends.md b/sources/news/20191209 KubeCon gets bigger, the kernel gets better, and more industry trends.md
deleted file mode 100644
index 81b00a0ec9..0000000000
--- a/sources/news/20191209 KubeCon gets bigger, the kernel gets better, and more industry trends.md
+++ /dev/null
@@ -1,65 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (KubeCon gets bigger, the kernel gets better, and more industry trends)
-[#]: via: (https://opensource.com/article/19/12/kubecon-bigger-kernel-better-more-industry-trends)
-[#]: author: (Tim Hildred https://opensource.com/users/thildred)
-
-KubeCon gets bigger, the kernel gets better, and more industry trends
-======
-A weekly look at open source community, market, and industry trends.
-![Person standing in front of a giant computer screen with numbers, data][1]
-
-As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
-
-## [KubeCon showed Kubernetes is big, but is it a Unicorn?][2]
-
-> It’s hard to remember now but there was a time when Kubernetes was a distant No. 3 in terms of container orchestrators being used in the market. It’s also eye opening to now realize that [the firms][3] that hatched the two platforms that [towered over][4] Kubernetes have had to completely re-jigger their business models under the Kubernetes onslaught.
->
-> And full credit to the CNCF for attempting to diffuse some of that attention from Kubernetes by spending the vast majority of the KubeCon opening keynote address touting some of the nearly two dozen graduated, incubating, and sandbox projects it also hosts. But, it was really the Big K that stole the show.
-
-**The impact:** Open source is way more than the source code; governance is a big deal and can be the difference between longevity and irrelevance. Gathering, organizing, and maintaining humans is an entirely different skill set than doing the same for bits, but can have just as big an influence on the success of a project.
-
-## [Report: Kubernetes use on the rise][5]
-
-> At the same time, the Datadog report notes that container churn rates are approximately 10 times higher in orchestrated environments. Churn rates in container environments that lack an orchestration platform such as Kubernetes have increased in the last year as well. The average container lifespan at a typical company running infrastructure without orchestration is about two days, down from about six days in mid-2018. In 19% of those environments not running orchestration, the average container lifetime exceeded 30 days. That compares to only 3% of organizations running containers longer than 30 days in Kubernetes environments, according to the report’s findings.
-
-**The impact**: If your containers aren't churning, you're probably not getting the full benefit of the technology you've adopted.
-
-## [Upcoming Linux 5.5 kernel improves live patching, scheduling][6]
-
-> A new WFX Wi-Fi driver for the Silicon Labs WF200 ASIC transceiver is coming to Linux kernel 5.5. This particular wireless transceiver is geared toward low-power IoT devices and uses a 2.4 GHz 802.11b/g/n radio optimized for low power RF performance in crowded RF environments. This new driver can interface via both Serial Peripheral Interface (SPI) and Secure Digital Input Output (SDIO).
-
-**The impact**: The kernel's continued relevance is a direct result of the never-ending grind to keep being where people need it to be (i.e. basically everywhere).
-
-## [DigitalOcean Currents: December 2019][7]
-
-> In that spirit, this fall’s installment of our seasonal Currents report is dedicated to open source for the second year running. We surveyed more than 5800 developers around the world on the overall health and direction of the open source community. When we last checked in with the community in [2018][8], more than half of developers reported contributing to open source projects, and most felt the community was healthy and growing.
-
-**The impact**: While the good news outweighs the bad, there are a couple of things to keep an eye on: namely, making open source more inclusive and mitigating potential negative impact of big money.
-
-_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/12/kubecon-bigger-kernel-better-more-industry-trends
-
-作者:[Tim Hildred][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/thildred
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
-[2]: https://www.sdxcentral.com/articles/opinion-editorial/kubecon-showed-kubernetes-is-big-but-is-it-a-unicorn/2019/11/
-[3]: https://www.sdxcentral.com/articles/news/docker-unloads-enterprise-biz-to-mirantis/2019/11/
-[4]: https://www.sdxcentral.com/articles/news/mesosphere-is-now-d2iq-and-kubernetes-is-its-game/2019/08/
-[5]: https://containerjournal.com/topics/container-ecosystems/report-kubernetes-use-on-the-rise/
-[6]: https://thenewstack.io/upcoming-linux-5-5-kernel-improves-live-patching-scheduling/
-[7]: https://blog.digitalocean.com/digitalocean-currents-december-2019/
-[8]: https://www.digitalocean.com/currents/october-2018/
diff --git a/sources/news/20191214 Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates.md b/sources/news/20191214 Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates.md
deleted file mode 100644
index 85aef63325..0000000000
--- a/sources/news/20191214 Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates.md
+++ /dev/null
@@ -1,75 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates)
-[#]: via: (https://opensource.com/article/19/12/python-news-december)
-[#]: author: (Christian Heimes https://opensource.com/users/christian-heimes)
-
-Annual release cycle for Python, new Python Software Foundation fellows from Africa, and more updates
-======
-Find out what's going on in the Python community in December.
-![Python in a coffee cup.][1]
-
-The Python Software Foundation (PSF) is a nonprofit organization behind the Python programming language. I am fortunate to be a PSF Fellow (honorable member for life), a Python core developer, and the liaison between my company, Red Hat, and the PSF. Part of that liaison work is providing updates on what’s happening in the Python community. Here’s a look at what we have going on in December.
-
-### Upcoming events
-
-A significant part of the Python community is its in-person events. These events are where users and contributors intermingle and learn together. Here are the big announcements of upcoming opportunities to connect.
-
-#### PyCon US 2020
-
-[PyCon US][2] is by far the largest annual Python event. The next PyCon is April 15-23, 2020, in Pittsburgh. The call for proposals is open to all until December 20, 2019. I’m planning to attend PyCon for the conference and its [famous post-con sprints][3].
-
-#### EuroPython
-
-EuroPython is the largest Python conference in Europe, with about 1,000 attendees in recent years. [EP20][4] will be held in Dublin, Ireland, July 20-26, 2020. As a liaison for Red Hat, I’m proud to say that Red Hat sponsored EP18 in Edinburgh and donated the sponsoring tickets to Women Who Code Scotland.
-
-#### PyData
-
-[PyData][5] is a separate nonprofit related to the Python community through a focus on data science. They host many international events throughout the year, with upcoming events in [Austin, Texas][6], and [Warsaw, Poland][7] before the end of the year.
-
-### New PSF fellows from Africa
-
-The PSF promotes a few members to fellow every quarter. Yesterday, twelve new PSF fellows were [announced][8].
-
-I’d like to highlight the four new fellows from Ghana, who are also the organizers of the first pan-African [PyCon Africa][9], which took place in August 2019 in Accra, Ghana. The Python community in Africa is growing at an amazing speed. PyCon Africa 2020 will be in Accra again, and I’m planning to spend my summer vacation there.
-
-### Annual release cycle for Python
-
-Python used to release a new major version about every 18 months. This timeline will change with the Python 3.9 release. With [PEP 602,][10] a new major version of Python will be released annually in October. The new cadence means fewer changes between releases and more predictable release dates. October was chosen to align with Linux distribution releases such as Fedora. Miro Hrončok from the Python maintenance team joined the discussion and has helped to find a convenient release date for us; for more details, please see .
-
-### Steering council election
-
-The Python Steering Council governs the development of Python. It was established after [Guido van Rossum stepped down][11] as benevolent dictator for life. Python core developers elect a new steering council for every major Python release. For the upcoming term, nine candidates were nominated for five seats on the council (with Guido being nominated, but [withdrawing][12]). See for all the details. Election results are expected to be announced mid-December.
-
-That covers what’s new in the Python community for December. Stay tuned for more updates in the future and mark your calendars for the conferences mentioned above.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/12/python-news-december
-
-作者:[Christian Heimes][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/christian-heimes
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_python.jpg?itok=G04cSvp_ (Python in a coffee cup.)
-[2]: https://us.pycon.org/2020/
-[3]: https://opensource.com/article/19/5/pycon-developer-sprints
-[4]: https://www.europython-society.org/post/188741002380/europython-2020-venue-and-location-selected
-[5]: https://pydata.org/
-[6]: https://pydata.org/austin2019/
-[7]: https://pydata.org/warsaw2019/
-[8]: https://pyfound.blogspot.com/2019/11/python-software-foundation-fellow.html
-[9]: https://africa.pycon.org/
-[10]: https://www.python.org/dev/peps/pep-0602/
-[11]: https://opensource.com/article/19/6/command-line-heroes-python
-[12]: https://discuss.python.org/t/steering-council-nomination-guido-van-rossum-2020-term/2657/11
diff --git a/sources/news/20191219 2020 technology must haves, a guide to Kubernetes etcd, and more industry trends.md b/sources/news/20191219 2020 technology must haves, a guide to Kubernetes etcd, and more industry trends.md
deleted file mode 100644
index 198d4f8292..0000000000
--- a/sources/news/20191219 2020 technology must haves, a guide to Kubernetes etcd, and more industry trends.md
+++ /dev/null
@@ -1,63 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (2020 technology must haves, a guide to Kubernetes etcd, and more industry trends)
-[#]: via: (https://opensource.com/article/19/12/gartner-ectd-and-more-industry-trends)
-[#]: author: (Tim Hildred https://opensource.com/users/thildred)
-
-2020 technology must haves, a guide to Kubernetes etcd, and more industry trends
-======
-A weekly look at open source community, market, and industry trends.
-![Person standing in front of a giant computer screen with numbers, data][1]
-
-As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
-
-## [Gartner's top 10 infrastructure and operations trends for 2020][2]
-
-> “The vast majority of organisations that do not adopt a shared self-service platform approach will find that their DevOps initiatives simply do not scale,” said Winser. "Adopting a shared platform approach enables product teams to draw from an I&O digital toolbox of possibilities, while benefiting from high standards of governance and efficiency needed for scale."
-
-**The impact**: The breakneck change of technology development and adoption will not slow down next year, as the things you've been reading about for the last two years become things you have to figure out to deal with every day.
-
-## [A guide to Kubernetes etcd: All you need to know to set up etcd clusters][3]
-
-> Etcd is a distributed reliable key-value store which is simple, fast and secure. It acts like a backend service discovery and database, runs on different servers in Kubernetes clusters at the same time to monitor changes in clusters and to store state/configuration data that should be accessed by a Kubernetes master or clusters. Additionally, etcd allows Kubernetes master to support discovery service so that deployed application can declare their availability for inclusion in service.
-
-**The impact**: This is actually way more than I needed to know about setting up etcd clusters, but now I have a mental model of what that could look like, and you can too.
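-
-To make the key-value model from the quote above a little more concrete, here is a minimal `etcdctl` session sketch (assuming the v3 API, an `etcdctl` client on your path, and a reachable etcd endpoint; the key name is made up purely for illustration):
-
-```
-# write a value under a key, then read it back
-ETCDCTL_API=3 etcdctl put /example/config "hello"
-ETCDCTL_API=3 etcdctl get /example/config
-```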
-
-## [How the open source model could fuel the future of digital marketing][4]
-
-> In other words, the broad adoption of open source culture has the power to completely invert the traditional marketing funnel. In the future, prospective customers could be first introduced to “late funnel” materials and then buy into the broader narrative — a complete reversal of how traditional marketing approaches decision-makers today.
-
-**The impact**: The SEO on this cuts two ways: It can introduce uninitiated marketing people to open source and uninitiated technical people to the ways that technology actually gets adopted. Neat!
-
-## [Kubernetes integrates interoperability, storage, waits on sidecars][5]
-
-> In a [recent interview][6], Lachlan Evenson, who was also a lead on the Kubernetes 1.16 release, said sidecar containers were one of the features the team was a “little disappointed” it could not include in their release.
->
-> Guinevere Saenger, software engineer at GitHub and lead for the 1.17 release team, explained that sidecar containers gained increased focus “about a month ago,” and that its implementation “changes the pod spec, so this is a change that affects a lot of areas and needs to be handled with care.” She noted that it did move closer to completion and “will again be prioritized for 1.18.”
-
-**The impact**: You can read between the lines to understand a lot more about the Kubernetes sausage-making process. It's got governance, tradeoffs, themes, and timeframes; all the stuff that is often invisible to consumers of a project.
-
-_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/12/gartner-ectd-and-more-industry-trends
-
-作者:[Tim Hildred][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/thildred
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
-[2]: https://www.information-age.com/gartner-top-10-infrastructure-and-operations-trends-2020-123486509/
-[3]: https://superuser.openstack.org/articles/a-guide-to-kubernetes-etcd-all-you-need-to-know-to-set-up-etcd-clusters/
-[4]: https://www.forbes.com/sites/forbescommunicationscouncil/2019/11/19/how-the-open-source-model-could-fuel-the-future-of-digital-marketing/#71b602fb20a5
-[5]: https://www.sdxcentral.com/articles/news/kubernetes-integrates-interoperability-storage-waits-on-sidecars/2019/12/
-[6]: https://kubernetes.io/blog/2019/12/06/when-youre-in-the-release-team-youre-family-the-kubernetes-1.16-release-interview/
diff --git a/sources/news/20191219 Linux Mint 19.3 -Tricia- Released- Here-s What-s New and How to Get it.md b/sources/news/20191219 Linux Mint 19.3 -Tricia- Released- Here-s What-s New and How to Get it.md
deleted file mode 100644
index 507915626b..0000000000
--- a/sources/news/20191219 Linux Mint 19.3 -Tricia- Released- Here-s What-s New and How to Get it.md
+++ /dev/null
@@ -1,165 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Linux Mint 19.3 “Tricia” Released: Here’s What’s New and How to Get it)
-[#]: via: (https://itsfoss.com/linux-mint-19-3/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Linux Mint 19.3 “Tricia” Released: Here’s What’s New and How to Get it
-======
-
-_**Linux Mint 19.3 “Tricia” has been released. See what’s new in it and learn how to upgrade to Linux Mint 19.3.**_
-
-The Linux Mint team finally announced the release of Linux Mint 19.3, codenamed ‘Tricia’, with useful feature additions along with a ton of improvements under the hood.
-
-This is a point release based on the latest **Ubuntu 18.04.3** and it comes packed with the **Linux kernel 5.0**.
-
-I downloaded and quickly tested the edition featuring the [Cinnamon 4.4][1] desktop environment. You may also try the Xfce or MATE edition of Linux Mint 19.3.
-
-### Linux Mint 19.3: What’s New?
-
-![Linux Mint 19 3 Desktop][2]
-
-While it is an LTS release that will be supported until 2023, it brings in a couple of useful features and improvements. Let me highlight some of them for you.
-
-#### System Reports
-
-![][3]
-
-Right after installing Linux Mint 19.3 (or upgrading it), you will notice a warning icon on the right side of the panel (taskbar).
-
-When you click on it, you will see a list of potential issues that you can take care of to get the best out of your Linux Mint experience.
-
-For starters, it will suggest (in the form of a warning) that you create a root password, install a language pack, or update software packages. This is particularly useful to make sure that you perform important actions even after following the first set of steps on the welcome screen.
-
-#### Improved Language Settings
-
-Along with the ability to install/set a language, you will also get the ability to change the time format.
-
-So, the language settings are now more useful than ever before.
-
-#### HiDPI Support
-
-As a result of [HiDPI][4] support, the system tray icons will look crisp and, overall, you should get a pleasant user experience on a high-resolution display.
-
-#### New Applications
-
-![Linux Mint Drawing App][5]
-
-With the new release, you will no longer find “**GIMP**” pre-installed.
-
-Even though GIMP is a powerful utility, they decided to add a simpler “**Drawing**” app to let users easily crop/resize images and tweak them a little.
-
-Also, **Gnote** replaces **Tomboy** as the default note-taking application on Linux Mint 19.3.
-
-In addition to these two replacements, the Celluloid video player has been added in place of Xplayer. In case you did not know, Celluloid happens to be one of the [best open source video players][6] for Linux.
-
-#### Cinnamon 4.4 Desktop
-
-![Cinnamon 4 4 Desktop][7]
-
-From what I noticed, the new Cinnamon 4.4 desktop introduces a couple of new abilities, like adjusting/tweaking the panel zones individually, as you can see in the screenshot above.
-
-#### Other Improvements
-
-There are several other improvements, including more customization options in the file manager.
-
-You can read more about the detailed changes in the [official release notes][8].
-
-[Subscribe to our YouTube channel for more Linux videos][9]
-
-### Linux Mint 19 vs 19.1 vs 19.2 vs 19.3: What’s the difference?
-
-You probably already know that Linux Mint releases are based on Ubuntu long term support releases. Linux Mint 19 series is based on Ubuntu 18.04 LTS.
-
-Ubuntu LTS releases get ‘point releases’ at an interval of a few months. A point release basically consists of the bug fixes and security updates that have been pushed since the last release of the LTS version. This is similar to the Service Pack concept in Windows XP, if you remember it.
-
-If you download Ubuntu 18.04 (which was released in April 2018) in 2019, you’ll get Ubuntu 18.04.2. The ISO image of 18.04.2 consists of 18.04 plus the bug fixes and security updates applied up to 18.04.2. Imagine if there were no point releases: right after [installing Ubuntu 18.04][10], you would have to install a few gigabytes of system updates. Not very convenient, right?
-
-But Linux Mint does it slightly differently. Linux Mint has a major release based on an Ubuntu LTS release and then three minor releases based on the Ubuntu LTS point releases.
-
-Mint 19 was based on Ubuntu 18.04, 19.1 on 18.04.1, and Mint 19.2 on Ubuntu 18.04.2. Similarly, Mint 19.3 is based on Ubuntu 18.04.3. It is worth noting that all Mint 19.x releases are long term support releases and will get security updates till 2023.
-
-Now, if you are using Ubuntu 18.04 and keep your system updated, you’ll automatically get updated to 18.04.1, 18.04.2 etc. That’s not the case in Linux Mint.
-
-Linux Mint minor releases also consist of _feature changes_ along with bug fixes and security updates and this is the reason why updating Linux Mint 19 won’t automatically put you on 19.1.
-
-Linux Mint gives you the choice of whether you want the new features or not. For example, Mint 19.3 has Cinnamon 4.4 and several other visual changes. If you are happy with the existing features, you can stay on Mint 19.2. You’ll still get the necessary security and maintenance updates on Mint 19.2 till 2023.
-
-Now that you understand the concept of minor releases and want the latest minor release, let’s see how to upgrade to Mint 19.3.
-
-### Linux Mint 19.3: How to Upgrade?
-
-Whether you have Linux Mint 19.1 or 19, you can follow these steps to [upgrade your Linux Mint version][11].
-
-**Note**: _You should consider making a system snapshot (just in case) for backup. In addition, the Linux Mint team advises you to disable the screensaver and upgrade Cinnamon spices (if installed) from the System settings._
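-
-Linux Mint ships with Timeshift for snapshots. If you use it, a snapshot can also be created from the terminal with something like the command below (the comment text is only an example; adjust it to your liking):
-
-```
-# create a Timeshift snapshot before upgrading
-sudo timeshift --create --comments "Before Mint 19.3 upgrade"
-```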
-
-![][12]
-
- 1. Launch the Update Manager.
- 2. Now, refresh it to load up the latest available updates (or you can change the mirror if you want).
- 3. Once done, simply click on the Edit button to find the “**Upgrade to Linux Mint 19.3 Tricia**” option, similar to the image above.
- 4. Finally, just follow the on-screen instructions to easily update it.
-
-
-
-Depending on your internet connection, it should take anything from a couple of minutes to 30 minutes.
-
-### Don’t see Mint 19.3 update yet? Here’s what you can do
-
-If you don’t see the option to upgrade to Linux Mint 19.3 Tricia, don’t lose hope. Here are a couple of things you can do.
-
-#### **Step 1: Make sure to use mint-upgrade-info version 1.1.3**
-
-Make sure that mint-upgrade-info is updated to version 1.1.3. You can run the install command below, which will update it to a newer version (if one is available).
-
-```
-sudo apt install mint-upgrade-info
-```
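-
-If you want to confirm which version is actually installed before retrying the upgrade, a quick check along these lines should also work:
-
-```
-# show the installed and candidate versions of the package
-apt policy mint-upgrade-info
-```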
-
-#### **Step 2: Switch to default software sources**
-
-Chances are that you are using a mirror closer to you to get faster software downloads. But this could cause a problem as the mirrors might not have the new upgrade info yet.
-
-Go to Software Sources and change the sources back to the default ones. Now run the Update Manager again and see if the Mint 19.3 upgrade is available.
-
-### Download Linux Mint 19.3 ‘Tricia’
-
-If you want to perform a fresh install, you can easily download the latest available version from the official download page (depending on what edition you want).
-
-You will also find multiple mirrors available to download the ISOs – feel free to try the nearest mirror for potentially faster download.
-
-[Linux Mint 19.3][13]
-
-**Wrapping Up**
-
-Have you tried Linux Mint 19.3 yet? Let me know your thoughts in the comments down below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/linux-mint-19-3/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://github.com/linuxmint/cinnamon/releases/tag/4.4.0
-[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-19-3-desktop.jpg?ssl=1
-[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-system-report.jpg?ssl=1
-[4]: https://wiki.archlinux.org/index.php/HiDPI
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/linux-mint-drawing-app.jpg?ssl=1
-[6]: https://itsfoss.com/video-players-linux/
-[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/cinnamon-4-4-desktop.jpg?ssl=1
-[8]: https://linuxmint.com/rel_tricia_cinnamon.php
-[9]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
-[10]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
-[11]: https://itsfoss.com/upgrade-linux-mint-version/
-[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/mintupgrade.png?ssl=1
-[13]: https://linuxmint.com/download.php
diff --git a/sources/news/20191221 Eliminating gender bias in open source software development, a database of microbes, and more open source news.md b/sources/news/20191221 Eliminating gender bias in open source software development, a database of microbes, and more open source news.md
deleted file mode 100644
index 3cf9d05097..0000000000
--- a/sources/news/20191221 Eliminating gender bias in open source software development, a database of microbes, and more open source news.md
+++ /dev/null
@@ -1,76 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Eliminating gender bias in open source software development, a database of microbes, and more open source news)
-[#]: via: (https://opensource.com/article/19/12/news-december-21)
-[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
-
-Eliminating gender bias in open source software development, a database of microbes, and more open source news
-======
-Catch up on the biggest open source headlines from the past two weeks.
-![Weekly news roundup with TV][1]
-
-In this edition of our open source news roundup, we take a look at eliminating gender bias in open source software development, an open source database of microbes, an open source index for cooperatives, and more!
-
-### Eliminating gender bias from open source development
-
-It's a sad fact that certain groups, among them women, are woefully underrepresented in open source projects. It's like a bug in the open source development process. Fortunately, there are initiatives to make that underrepresentation a thing of the past. A study out of Oregon State University (OSU) intends to resolve the issue of the lack of women in open source development by "[finding these bugs and proposing redesigns around them][2], leading to more gender-inclusive tools used by software developers."
-
-The study will look at tools commonly used in open source development — including Eclipse, GitHub, and Hudson — to determine if they "significantly discourage newcomers, especially women, from joining OSS projects." According to Igor Steinmacher, one of the principal investigators of the study, the study will examine "how people use tools because the 'bugs' may be embedded in how the tool was designed, which may place people with different cognitive styles at a disadvantage."
-
-The developers of the tools being studied will walk through their software and answer questions based on specific personas. The researchers at OSU will suggest ways to redesign the software to eliminate gender bias and will "create a list of best practices for fixing gender-bias bugs in both products and processes."
-
-### Canadian university compiles open source microbial database
-
-What do you do when you have a vast amount of data but no way to effectively search and build upon it? You turn it into a database, of course. That's what researchers at Simon Fraser University in British Columbia, along with collaborators from around the globe, did with [information about chemical compounds created by bacteria and fungi][3]. Called the Natural Products Atlas, the database "holds information on nearly 25,000 natural compounds and serves as a knowledge base and repository for the global scientific community."
-
-The Natural Products Atlas is licensed under a Creative Commons Attribution 4.0 International License. The [website for the Natural Products Atlas][4], which hosts the database, also includes a number of visualization tools and is fully searchable.
-
-Roger Linington, an associate professor at SFU who spearheaded the creation of the database, said that having "all the available data in one place and in a standardized format means we can now index natural compounds for anyone to freely access and learn more about."
-
-### Open source index for cooperatives
-
-Europe has long been a hotbed of both open source development and open source adoption. While European governments strongly advocate open source, nonprofits have been following suit. One of those is Cooperatives Europe, which is developing "[open source software to allow users to index co-op information and resources in a standardised way][5]."
-
-The idea behind the software, called Coop Starter, reinforces the [essential freedoms of free software][6]: it's intended to provide "education, training and information. The software may be used and repurposed by the public for their own needs and on their own infrastructure." Anyone can use it "to reference existing material on co-operative entrepreneurship" and can contribute "by sharing resources and information."
-
-The [code for Coop Starter][7], along with a related WordPress plugin, is available from Cooperative Europe's GitLab repository.
-
-#### In other news
-
- * [Nancy recognised as France’s top digital free and collaborative public service][8]
- * [Open Source and AI: Ready for primetime in government?][9]
- * [Open Software Means Kinder Science][10]
- * [New Open-Source CoE to be launched by Wipro and Oman’s Ministry of Tech & Communication][11]
-
-
-
-_Thanks, as always, to Opensource.com staff members and [Correspondents][12] for their help this week._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/12/news-december-21
-
-作者:[Scott Nesbitt][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/scottnesbitt
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
-[2]: https://techxplore.com/news/2019-12-professors-gender-biased-bugs-open-source-software.html
-[3]: https://www.sfu.ca/sfunews/stories/2019/12/sfu-global-collaboration-creates-world-s-first-open-source-datab.html
-[4]: https://www.npatlas.org/joomla/
-[5]: https://www.thenews.coop/144412/sector/regional-organisations/cooperatives-europe-builds-open-source-index-for-the-co-op-movement/
-[6]: https://www.gnu.org/philosophy/free-sw.en.html
-[7]: https://git.happy-dev.fr/startinblox/applications/coop-starter
-[8]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/territoire-numerique-libre
-[9]: https://federalnewsnetwork.com/commentary/2019/12/open-source-and-ai-ready-for-primetime-in-government/
-[10]: https://blogs.scientificamerican.com/observations/open-software-means-kinder-science/
-[11]: https://www.indianweb2.com/2019/12/11/new-open-source-coe-to-be-launched-by-wipro-and-omans-ministry-of-tech-communication/
-[12]: https://opensource.com/correspondent-program
diff --git a/sources/news/20200104 Shocking- EA is Permanently Banning Linux Gamers on Battlefield V.md b/sources/news/20200104 Shocking- EA is Permanently Banning Linux Gamers on Battlefield V.md
deleted file mode 100644
index b0818e5fe8..0000000000
--- a/sources/news/20200104 Shocking- EA is Permanently Banning Linux Gamers on Battlefield V.md
+++ /dev/null
@@ -1,86 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (fuzheng1998)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Shocking! EA is Permanently Banning Linux Gamers on Battlefield V)
-[#]: via: (https://itsfoss.com/ea-banning-linux-gamers/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Shocking! EA is Permanently Banning Linux Gamers on Battlefield V
-======
-
-Just when I thought that [EA][1] as a game company might be getting better after [its decision to make its games available on Steam][2] – it looks like that isn’t the case.
-
-In a [Reddit thread][3], a lot of Linux players seem to complain about getting banned by FairFight (which is the server-side anti-cheat engine used for BF V) just because they chose to play Battlefield V (BF V) on [Linux using Wine][4].
-
-![][5]
-
-### Is this a widespread issue?
-
-Unfortunately, it seems to be the case with a number of Linux players using Wine to play Battlefield V on Linux.
-
-You can also find users on [Lutris Gaming forums][6] and [Battlefield forums][7] talking about it.
-
-Of course, the userbase on Linux playing Battlefield V isn’t huge – but it still matters, right?
-
-### What’s exactly the issue here?
-
-It looks like EA’s anti-cheat tech considers [DXVK][8] (a Vulkan-based implementation of DirectX that tries to solve compatibility issues) as cheating.
-
-So, basically, the compatibility layer that is being utilized to make it possible to run Battlefield V is being detected as a modified file through which you’re “**potentially**” cheating.
-
-![Battlefield V on Lutris][9]
-
-Even though this could be an innocent problem with the anti-cheat engine, EA does not seem to acknowledge that at all.
-
-Here’s how they responded when one of the players wrote an email to EA asking to lift the ban:
-
-> After thoroughly investigating your account and concern, we found that your account was actioned correctly and will not remove this sanction from your account.
-
-Also, with all this going on, [Lutris Gaming][10] seems to be quite furious about EA’s behavior and the permanent bans:
-
-> It has come to our attention that several Battlefield 5 players have recently been banned for playing on Linux, and that EA has chosen not to revert these wrongful punishments. Due to this, we advise to refrain from playing any multiplayer games published by [@EA][11] in the future.
->
-> — Lutris Gaming (@LutrisGaming) [January 2, 2020][12]
-
-### Not just Battlefield V, it’s the same with Destiny 2
-
-As pointed out by a Redditor in the same thread, Bungie also happens to consider Wine an emulator (which is against their policy) and banned players on Linux a while back.
-
-### EA needs to address the issue
-
-_We have reached out to EA for a comment on the issue_. _And, we’re still waiting for a response._
-
-I shall update the article if we get an official response from EA. However, taking Blizzard as an example, EA should actually work on fixing the issue and [reverse the bans on players using Linux][13].
-
-I know that BF V does not offer native Linux support – but supporting the compatibility layer and not considering it as cheating would allow Linux users to experience the game that they rightfully own (or are considering purchasing).
-
-What are your thoughts on this? Let me know your thoughts in the comments below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/ea-banning-linux-gamers/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://www.ea.com/
-[2]: https://thenextweb.com/gaming/2019/10/29/ea-games-are-coming-back-to-steam-but-you-still-need-origin/
-[3]: https://www.reddit.com/r/linux/comments/ej3q2p/ea_is_permanently_banning_linux_players_on/
-[4]: https://itsfoss.com/install-latest-wine/
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/reddit-thread-ea.jpg?ssl=1
-[6]: https://forums.lutris.net/t/ea-banning-dxvk-on-battlefield-v/7810
-[7]: https://forums.battlefield.com/en-us/discussion/197938/ea-banning-dxvk-on-battlefield-v-play-linux
-[8]: https://github.com/doitsujin/dxvk
-[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/battlefield-v-lutris-gaming.png?ssl=1
-[10]: https://lutris.net/
-[11]: https://twitter.com/EA?ref_src=twsrc%5Etfw
-[12]: https://twitter.com/LutrisGaming/status/1212827248430059520?ref_src=twsrc%5Etfw
-[13]: https://www.altchar.com/game-news/blizzard-unbans-overwatch-players-who-used-linux-os-agF9y0G2gWjn
diff --git a/sources/news/20200130 NSA cloud advice, Facebook open source year in review, and more industry trends.md b/sources/news/20200130 NSA cloud advice, Facebook open source year in review, and more industry trends.md
new file mode 100644
index 0000000000..7a3a40d845
--- /dev/null
+++ b/sources/news/20200130 NSA cloud advice, Facebook open source year in review, and more industry trends.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (NSA cloud advice, Facebook open source year in review, and more industry trends)
+[#]: via: (https://opensource.com/article/20/1/nsa-facebook-more-industry-trends)
+[#]: author: (Tim Hildred https://opensource.com/users/thildred)
+
+NSA cloud advice, Facebook open source year in review, and more industry trends
+======
+A weekly look at open source community and industry trends.
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
+
+## [Facebook open source year in review][2]
+
+> Last year was a busy one for our [open source][3] engineers. In 2019 we released 170 new open source projects, bringing our portfolio to a total of 579 [active repositories][3]. While it’s important for our internal engineers to contribute to these projects (and they certainly do — with more than 82,000 commits this year), we are also incredibly grateful for the massive support from external contributors. Approximately 2,500 external contributors committed more than 32,000 changes. In addition to these contributions, nearly 93,000 new people starred our projects this year, growing the most important component of any open source project — the community! Facebook Open Source would not be here without your contributions, so we want to thank you for your participation in 2019.
+
+**The impact**: Facebook got ~33% more changes than they would have had they decided to develop these as closed projects. Organizations addressing similar challenges got an 82,000-commit boost in exchange. What a clear illustration of the business impact of open source development.
+
+## [Cloud advice from the NSA][4]
+
+> This document divides cloud vulnerabilities into four classes (misconfiguration, poor access control, shared tenancy vulnerabilities, and supply chain vulnerabilities) that encompass the vast majority of known vulnerabilities. Cloud customers have a critical role in mitigating misconfiguration and poor access control, but can also take actions to protect cloud resources from the exploitation of shared tenancy and supply chain vulnerabilities. Descriptions of each vulnerability class along with the most effective mitigations are provided to help organizations lock down their cloud resources. By taking a risk-based approach to cloud adoption, organizations can securely benefit from the cloud’s extensive capabilities.
+
+**The impact**: The Fear, Uncertainty, and Doubt (FUD) that has been associated with cloud adoption is being debunked more and more all the time. None other than the US Department of Defense has done a lot of the thinking so you don't have to, and there is a good chance that their concerns are at least as dire as yours are.
+
+## [With Kubernetes, China Minsheng Bank transformed its legacy applications][5]
+
+> But all of CMBC’s legacy applications—for example, the core banking system, payment systems, and channel systems—were written in C and Java, using traditional architecture. “We wanted to do distributed applications because in the past we used VMs in our own data center, and that was quite expensive and with low resource utilization rate,” says Zhang. “Our biggest challenge is how to make our traditional legacy applications adaptable to the cloud native environment.” So far, around 20 applications are running in production on the Kubernetes platform, and 30 new applications are in active development to adopt the Kubernetes platform.
+
+**The impact**: This illustrates nicely the challenges and opportunities facing businesses in a competitive environment, and suggests a common adoption pattern. Do new stuff the new way, and move the old stuff as it makes sense.
+
+## [The '5 Rs' of the move to cloud native: Re-platform, re-host, re-factor, replace, retire][6]
+
+> The bottom line is that telcos and service providers will go cloud native when it is cheaper for them to migrate to the cloud and pay cloud costs than it is to remain in the data centre. That time is now and by adhering to the "5 Rs" of the move to cloud native, Re-platform, Re-host, Re-factor, Replace and/or Retire, the path is open, clearly marked and the goal eminently achievable.
+
+**The impact**: Cloud-native is basically used as a synonym for open source in this interview; there is no other type of technology that will deliver the same lift.
+
+## [Fedora CoreOS out of preview][7]
+
+> Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both [Fedora Atomic Host][8] and [CoreOS Container Linux][9] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][10].
+
+**The impact**: Collapsing these two branches of the Linux family tree into one another moves the state of the art forward for everyone (once you get through the migration).
+
+_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/nsa-facebook-more-industry-trends
+
+作者:[Tim Hildred][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/thildred
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://opensource.com/article/20/1/hybrid-developer-future-industry-trends
+[3]: https://opensource.facebook.com/
+[4]: https://media.defense.gov/2020/Jan/22/2002237484/-1/-1/0/CSI-MITIGATING-CLOUD-VULNERABILITIES_20200121.PDF
+[5]: https://www.cncf.io/blog/2020/01/23/with-kubernetes-china-minsheng-bank-transformed-its-legacy-applications-and-moved-into-ai-blockchain-and-big-data/
+[6]: https://www.telecomtv.com/content/cloud-native/the-5-rs-of-the-move-to-cloud-native-re-platform-re-host-re-factor-replace-retire-37473/
+[7]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
+[8]: https://www.projectatomic.io/
+[9]: https://coreos.com/os/docs/latest/
+[10]: https://fedoramagazine.org/introducing-fedora-coreos/
diff --git a/sources/news/20200205 The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news.md b/sources/news/20200205 The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news.md
new file mode 100644
index 0000000000..2e814fdfc0
--- /dev/null
+++ b/sources/news/20200205 The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news.md
@@ -0,0 +1,71 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news)
+[#]: via: (https://opensource.com/article/20/2/linux-java-and-other-industry-news)
+[#]: author: (Tim Hildred https://opensource.com/users/thildred)
+
+The Y2038 problem in the Linux kernel, 25 years of Java, and other industry news
+======
+A weekly look at open source community and industry trends.
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
+
+## [Need 32-bit Linux to run past 2038? When version 5.6 of the kernel pops, you're in for a treat][2]
+
+> Arnd Bergmann, an engineer working on the thorny Y2038 problem in the Linux kernel, posted to the [mailing list][3] that, yup, Linux 5.6 "should be the first release that can serve as a base for a 32-bit system designed to run beyond year 2038."
+
+**The impact:** Y2K didn't get fixed; it just got bigger and delayed. There is no magic in software or computers; just people trying to solve complicated problems as best they can, and sometimes introducing more complicated problems for different people to solve at some point in the future.
+
+## [What the dev? Celebrating Java's 25th anniversary][4]
+
+> Java is coming up on a big milestone: Its 25th anniversary! To celebrate, we take a look back over the last 25 years to see how Java has evolved over time. In this episode, Social Media and Online Editor Jenna Sargent talks to Rich Sharples, senior director of product management for middleware at Red Hat, to learn more.
+
+**The impact:** There is something comforting about immersing yourself in a deep well of lived experience. Rich clearly lived through what he is talking about and shares insider knowledge with you (and his dog).
+
+## [Do I need an API Gateway if I use a service mesh?][5]
+
+> This post may not be able to break through the noise around API Gateways and Service Mesh. However, it’s 2020 and there is still abundant confusion around these topics. I have chosen to write this to help bring real concrete explanation to help clarify differences, overlap, and when to use which. Feel free to [@ me on twitter (@christianposta)][6] if you feel I’m adding to the confusion, disagree, or wish to buy me a beer (and these are not mutually exclusive reasons).
+
+**The impact:** Yes; though they use similar terms and concepts, they have different concerns and scopes.
+
+## [What Australia's AGL Energy learned about Cloud Native compliance][7]
+
+> This is really at the heart of what open source is, enabling everybody to contribute equally. Within large enterprises, there are controls that are needed, but if we can automate the management of the majority of these controls, we can enable an amazing culture and development experience.
+
+**The impact:** They say "software is eating the world" and "developers are the new kingmakers." The fact that compliance in an energy utility is subject to developer experience improvement basically proves both statements.
+
+## [Monoliths are the future][8]
+
+> And then what they end up doing is creating 50 deployables, but it’s really a _distributed_ monolith. So it’s actually the same thing, but instead of function calls and class instantiation, they’re initiating things and throwing it over a network and hoping that it comes back. And since they can’t reliably _make it_ come back, they introduce things like [Prometheus][9], [OpenTracing][10], all of this stuff. I’m like, **“What are you doing?!”**
+
+**The impact:** Do things for real reasons with a clear-eyed understanding of what those reasons are and how they'll make your business or your organization better.
+
+_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/linux-java-and-other-industry-news
+
+作者:[Tim Hildred][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/thildred
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://www.theregister.co.uk/2020/01/30/linux_5_6_2038/
+[3]: https://lkml.org/lkml/2020/1/29/355
+[4]: https://whatthedev.buzzsprout.com/673192/2543290-celebrating-java-s-25th-anniversary-episode-16
+[5]: https://blog.christianposta.com/microservices/do-i-need-an-api-gateway-if-i-have-a-service-mesh/ (Do I Need an API Gateway if I Use a Service Mesh?)
+[6]: http://twitter.com/christianposta?lang=en
+[7]: https://thenewstack.io/what-australias-agl-energy-learned-about-cloud-native-compliance/
+[8]: https://changelog.com/posts/monoliths-are-the-future
+[9]: https://prometheus.io/
+[10]: https://opentracing.io
diff --git a/sources/news/20200211 Building a Linux desktop, CERN powered by Ceph, and more industry trends.md b/sources/news/20200211 Building a Linux desktop, CERN powered by Ceph, and more industry trends.md
new file mode 100644
index 0000000000..424f69bcc2
--- /dev/null
+++ b/sources/news/20200211 Building a Linux desktop, CERN powered by Ceph, and more industry trends.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a Linux desktop, CERN powered by Ceph, and more industry trends)
+[#]: via: (https://opensource.com/article/20/2/linux-desktop-cern-more-industry-trends)
+[#]: author: (Tim Hildred https://opensource.com/users/thildred)
+
+Building a Linux desktop, CERN powered by Ceph, and more industry trends
+======
+A weekly look at open source community and industry trends.
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
+
+## [Building a Linux desktop for cloud-native development][2]
+
+> This post covers the building of my Linux Desktop PC for Cloud Native Development. I'll be covering everything from parts, to peripherals, to CLIs, to SaaS software with as many links and snippets as I can manage. I hope that you enjoy reading about my experience, learn something, and possibly go on to build your own Linux Desktop.
+
+**The impact**: I hope the irony is not lost on anyone that step 1, when doing cloud-native software development, is to install Linux on a physical computer.
+
+## [Enabling CERN’s particle physics research with open source][3]
+
+> Ceph is an open-source software-defined storage platform. While it’s not often in the spotlight, it’s working hard behind the scenes, playing a crucial role in enabling ambitious, world-renowned projects such as CERN’s particle physics research, Immunity Bio’s cancer research, The Human Brain Project, MeerKat radio telescope, and more. These ventures are propelling the collective understanding of our planet and the human race beyond imaginable realms, and the outcomes will forever change how we perceive our existence and potential.
+
+**The impact**: It is not often that you get to see a straight line drawn between storage and the perception of human existence. Thanks for that, CERN!
+
+## [2020 cloud predictions][4]
+
+> "Serverless" as a concept provides a simplified developer experience that will become a platform feature. More platform-as-a-service providers will incorporate serverless traits into the daily activities developers perform when building cloud-native applications, becoming the default computing paradigm for the cloud.
+
+**The impact:** All of the trends in the predictions in this post are basically about maturation as ideas like serverless, edge computing, DevOps, and other cloud-adjacent buzz words move from the early adopters into the early majority phase of the adoption curve.
+
+## [End-of-life announcement for CoreOS Container Linux][5]
+
+> As we've [previously announced][6], [Fedora CoreOS][7] is the official successor to CoreOS Container Linux. Fedora CoreOS is a [new Fedora Edition][8] built specifically for running containerized workloads securely and at scale. It combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host. For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][9] and the [Fedora CoreOS documentation][10].
+
+**The impact**: Milestones like this are often bittersweet for both creators and users. The CoreOS team built something that their community loved to use, which is something to be celebrated. Hopefully, that community can find a [new home][11] in the wider [Fedora ecosystem][8].
+
+_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/2/linux-desktop-cern-more-industry-trends
+
+作者:[Tim Hildred][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/thildred
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://blog.alexellis.io/building-a-linux-desktop-for-cloud-native-development/
+[3]: https://insidehpc.com/2020/02/how-ceph-powers-exciting-research-with-open-source/
+[4]: https://www.devopsdigest.com/2020-cloud-predictions-2
+[5]: https://coreos.com/os/eol/
+[6]: https://groups.google.com/d/msg/coreos-user/zgqkG88DS3U/PFP9yrKbAgAJ
+[7]: https://getfedora.org/coreos/
+[8]: https://fedoramagazine.org/fedora-coreos-out-of-preview/
+[9]: https://fedoramagazine.org/introducing-fedora-coreos/
+[10]: https://docs.fedoraproject.org/en-US/fedora-coreos/
+[11]: https://getfedora.org/en/coreos/
diff --git a/sources/news/20200212 OpenShot Video Editor Gets a Major Update With Version 2.5 Release.md b/sources/news/20200212 OpenShot Video Editor Gets a Major Update With Version 2.5 Release.md
new file mode 100644
index 0000000000..bb3b2dc59a
--- /dev/null
+++ b/sources/news/20200212 OpenShot Video Editor Gets a Major Update With Version 2.5 Release.md
@@ -0,0 +1,115 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (OpenShot Video Editor Gets a Major Update With Version 2.5 Release)
+[#]: via: (https://itsfoss.com/openshot-2-5-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+OpenShot Video Editor Gets a Major Update With Version 2.5 Release
+======
+
+[OpenShot][1] is one of the [best open-source video editors][2] out there. With all the features it offers, it was already a solid video editor on Linux.
+
+Now, with the major **v2.5.0** update, OpenShot adds a lot of new improvements and features. And, trust me, it’s not just another routine release – it is a huge one, packed with features you have probably wanted for a long time.
+
+In this article, I will briefly mention the key changes involved in the latest release.
+
+![][3]
+
+### OpenShot 2.5.0 Key Features
+
+Here are some of the major new features and improvements in OpenShot 2.5:
+
+#### Hardware Acceleration Support
+
+The hardware acceleration support is still an experimental addition – however, it is a useful feature to have.
+
+Instead of relying on your CPU to do all the hard work, you can utilize your GPU to encode/decode video data when working with MP4/H.264 video files.
+
+This should improve OpenShot’s performance in a meaningful way.
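+
+You normally enable this from OpenShot’s preferences, but since OpenShot relies on FFmpeg under the hood, the command-line sketch below gives a feel for what a GPU-assisted H.264 encode via VAAPI looks like. This is only an illustration, not OpenShot’s exact invocation; the device path and file names are assumptions:
+
+```
+# Hypothetical example: hardware-accelerated H.264 encoding through VAAPI.
+# /dev/dri/renderD128 is the usual render node on Intel/AMD GPUs; adjust for your system.
+ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
+       -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4
+```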
+
+#### Support Importing/Exporting Files From Final Cut Pro & Premiere
+
+![][4]
+
+[Final Cut Pro][5] and [Adobe Premiere][6] are the two popular video editors for professional content creators. OpenShot 2.5 now allows you to work on projects created on these platforms. It can import (or export) the files from Final Cut Pro & Premiere in EDL & XML formats.
+
+#### Thumbnail Generation Improved
+
+This isn’t a big feature – but a necessary improvement for any video editor. You don’t want broken thumbnail images in your timeline or library. With this update, OpenShot generates thumbnails using a local HTTP server, can check multiple folder locations, and regenerates any that are missing.
+
+#### Blender 2.8+ Support
+
+The new OpenShot release also supports the latest [Blender][7] (.blend) format – so it should come in handy if you’re using Blender as well.
+
+#### Easily Recover Previous Saves & Improved Auto-backup
+
+![][8]
+
+It was always a horror to accidentally delete your timeline work and then have auto-save overwrite your saved project with that mistake.
+
+Now, the auto-backup feature has been improved, and you can easily recover a previously saved version of your project.
+
+Even though you can now recover previous saves, only a limited number of versions are kept, so you still need to be careful.
+
+#### Other Improvements
+
+In addition to all the key highlights mentioned above, you will also notice a performance improvement when using the keyframe system.
+
+Several other issues – such as SVG compatibility, exporting and modifying keyframe data, and the resizable preview window – have been fixed in this major update. For privacy-conscious users, OpenShot no longer sends usage data unless you opt in to share it.
+
+For more information, you can take a look at [OpenShot’s official blog post][9] to get the release notes.
+
+### Installing OpenShot 2.5 on Linux
+
+You can simply download the .AppImage file from its [official download page][10] to [install the latest OpenShot version][11]. If you’re new to AppImage, you should also check out [how to use AppImage][12] on Linux to easily launch OpenShot.
+
+[Download Latest OpenShot Release][10]
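+
+Once downloaded, an AppImage just needs to be marked as executable before you can launch it. A minimal sketch (the file name is only an example and will differ depending on the release you grab):
+
+```
+# Make the downloaded AppImage executable, then launch it.
+chmod +x OpenShot-v2.5.0-x86_64.AppImage
+./OpenShot-v2.5.0-x86_64.AppImage
+```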
+
+Some distributions like Arch Linux may also provide the latest OpenShot release with regular system updates.
+
+#### PPA available for Ubuntu-based distributions
+
+On Ubuntu-based distributions, if you don’t want to use AppImage, you can [use the official PPA][13] from OpenShot:
+
+```
+sudo add-apt-repository ppa:openshot.developers/ppa
+sudo apt update
+sudo apt install openshot-qt
+```
+
+You may also want to know how to remove the PPA if you decide to uninstall OpenShot later.
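+
+In that case, the usual steps look like this (a quick sketch using the package and PPA names from above):
+
+```
+# Remove OpenShot, then drop the PPA it was installed from.
+sudo apt remove openshot-qt
+sudo add-apt-repository --remove ppa:openshot.developers/ppa
+```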
+
+**Wrapping Up**
+
+With all the latest changes/improvements considered, do you see [OpenShot][11] as your primary [video editor on Linux][14]? If not, what more do you expect to see in OpenShot? Feel free to share your thoughts in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/openshot-2-5-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.openshot.org/
+[2]: https://itsfoss.com/open-source-video-editors/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-2-5-0.png?ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-xml-edl.png?ssl=1
+[5]: https://www.apple.com/in/final-cut-pro/
+[6]: https://www.adobe.com/in/products/premiere.html
+[7]: https://www.blender.org/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/openshot-recovery.jpg?ssl=1
+[9]: https://www.openshot.org/blog/2020/02/08/openshot-250-released-video-editing-hardware-acceleration/
+[10]: https://www.openshot.org/download/
+[11]: https://itsfoss.com/openshot-video-editor-release/
+[12]: https://itsfoss.com/use-appimage-linux/
+[13]: https://itsfoss.com/ppa-guide/
+[14]: https://itsfoss.com/best-video-editing-software-linux/
diff --git a/sources/news/20200213 KDE Plasma 5.18 LTS Released With New Features.md b/sources/news/20200213 KDE Plasma 5.18 LTS Released With New Features.md
new file mode 100644
index 0000000000..8100290280
--- /dev/null
+++ b/sources/news/20200213 KDE Plasma 5.18 LTS Released With New Features.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (KDE Plasma 5.18 LTS Released With New Features)
+[#]: via: (https://itsfoss.com/kde-plasma-5-18-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+KDE Plasma 5.18 LTS Released With New Features
+======
+
+The [KDE Plasma][1] desktop is undoubtedly one of the most impressive [Linux desktop environments][2] out there right now.
+
+Now, with the latest release, the KDE Plasma desktop just got more awesome!
+
+KDE Plasma 5.18 is an LTS (Long Term Support) release, which means it will be maintained by KDE contributors for the next two years, while regular versions are maintained for just four months.
+
+![KDE Plasma 5.18 on KDE Neon][3]
+
+So, if you want more stability on your KDE-powered Linux system, it would be a good idea to upgrade to KDE’s Plasma 5.18 LTS release.
+
+### KDE Plasma 5.18 LTS Features
+
+Here are the main new features added in this release:
+
+#### Emoji Selector
+
+![Emoji Selector in KDE][4]
+
+Normally, you would Google an emoji to copy it to your clipboard, or simply use the good old emoticons to express yourself.
+
+Now, with the latest update, you get an emoji selector in the Plasma desktop. You can find it by searching for it in the application launcher, or by pressing the Meta (Windows/Super) key + **.** (period/dot).
+
+The shortcut should come in handy when you need an emoji while writing an email or any other kind of message.
+
+#### Global Edit Mode
+
+![Global Edit Mode][5]
+
+You have probably used the old desktop toolbox in the top-right corner of the Plasma desktop. The new release gets rid of it and instead provides a global edit mode, which you reach by right-clicking on the desktop and clicking “**Customize Layout**”.
+
+#### Night Color Control
+
+![Night Color Control][6]
+
+Now, you can easily toggle the night color mode right from the system tray. In addition, you can set keyboard shortcuts for both night color and do-not-disturb mode.
+
+#### Privacy Improvements For User Feedback
+
+![Improved Privacy][7]
+
+It is worth noting that KDE Plasma lets you control the user feedback information you share with the project.
+
+You can either disable sharing any information at all or control the level of detail you share (basic, intermediate, or detailed).
+
+#### Global Themes
+
+![Themes][8]
+
+You can either choose from the default global themes available or download community-crafted themes to set up on your system.
+
+#### UI Improvements
+
+There are several subtle improvements and changes. For instance, the look and feel of the notifications has improved.
+
+You will also notice a few changes in the software center (Discover) that make installing apps easier.
+
+On top of that, you can now mute a window’s audio right from the taskbar (much like muting a tab in your browser). Similarly, there are a couple of changes here and there to improve the overall KDE Plasma experience.
+
+#### Other Changes
+
+In addition to the visual changes and customization options, KDE Plasma’s performance has improved, especially when paired with graphics hardware.
+
+To know more about the changes, you can refer to the [official announcement post][9] for KDE Plasma 5.18 LTS.
+
+[Subscribe to our YouTube channel for more Linux videos][10]
+
+### How To Get KDE Plasma 5.18 LTS?
+
+If you are using a rolling release distribution like Arch Linux, you may have already received it with your system updates. If you haven’t performed an update yet, simply check for updates from the system settings.
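+
+On Arch-based systems, a full system upgrade from the terminal does the same job (a generic sketch; whether 5.18 shows up depends on your mirrors having synced the new Plasma packages):
+
+```
+# Refresh the package databases and upgrade everything, including Plasma.
+sudo pacman -Syu
+```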
+
+If you are using Kubuntu, you can add the Kubuntu backports PPA to update the Plasma desktop with the following commands:
+
+```
+sudo add-apt-repository ppa:kubuntu-ppa/backports
+sudo apt update && sudo apt full-upgrade
+```
+
+If you do not have KDE as your desktop environment, you can refer to our article on [how to install KDE on Ubuntu][11] to get started.
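+
+For reference, on Ubuntu that usually boils down to installing a Plasma meta-package (a sketch; `kde-plasma-desktop` is the slimmer option, while `kubuntu-desktop` pulls in the full Kubuntu experience):
+
+```
+# Install a minimal KDE Plasma desktop, then choose "Plasma" at the login screen.
+sudo apt update
+sudo apt install kde-plasma-desktop
+```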
+
+**Wrapping Up**
+
+KDE Plasma 5.18 may not involve a whole lot of changes – but being an LTS release, the key new features seem helpful and should come in handy to improve the Plasma desktop experience for everyone.
+
+What do you think about the latest Plasma desktop release? Feel free to let me know your thoughts in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/kde-plasma-5-18-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://kde.org/plasma-desktop/
+[2]: https://itsfoss.com/best-linux-desktop-environments/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-5-18-info.jpg?ssl=1
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-emoji-pick.jpg?ssl=1
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-global-editor.jpg?ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-night-color.jpg?ssl=1
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/user-feedback-kde-plasma.png?ssl=1
+[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/kde-plasma-global-themes.jpg?ssl=1
+[9]: https://kde.org/announcements/plasma-5.18.0.php
+[10]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[11]: https://itsfoss.com/install-kde-on-ubuntu/
diff --git a/sources/news/README.md b/sources/news/README.md
new file mode 100644
index 0000000000..98d53847b1
--- /dev/null
+++ b/sources/news/README.md
@@ -0,0 +1 @@
+这里放新闻类文章,要求时效性
diff --git a/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md b/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md
deleted file mode 100644
index 923d4618a9..0000000000
--- a/sources/talk/20170717 The Ultimate Guide to JavaScript Fatigue- Realities of our industry.md
+++ /dev/null
@@ -1,221 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry)
-[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html)
-[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com)
-
-The Ultimate Guide to JavaScript Fatigue: Realities of our industry
-======
-
-**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have** , from email to airplanes and spaceships.
-
-Last week I gave a talk about this very same subject at the NebraskaJS 2017 Conference and I got so much positive feedback that I thought this talk should also become a blog post in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**.
-
-One of the things that has inspired me to write this blog post and that totally changed my life is [this great post by Patrick McKenzie, called “Don’t Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick has written in that post applied to the JavaScript ecosystem and with a few more thoughts I’ve developed during these last years working in the tech industry.
-
-This first section is gonna be a bit philosophical, but I swear it will be worth reading.
-
-### Realities of Our Industry 101
-
-Just like Patrick has done in [his post][1], let’s start with the most basic and essential truth about our industry:
-
-Software solves business problems
-
-This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Nor does it exist to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours**, and this is why basic income will become much more important in the next few years, but that’s a whole other subject.
-
-I’m sorry to say that, but the reason things are that way is that there are only two things that matter in software engineering (and any other industry):
-
-**Cost versus Revenue**
-
-**The more you decrease cost and increase revenue, the more valuable you are** , and one of the most common ways of decreasing cost and increasing revenue is replacing human beings by machines, which are more effective and usually cost less in the long run.
-
-You are not paid to write code
-
-**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are and nobody cares about how good is your code. **The only thing that somebody cares about is how much does your software cost and how much revenue it generates**.
-
-Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run and this decreases cost and increases revenue.
-
-The whole reason why we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug becoming a feature you know what I’m talking about. That bug exists but it should not be fixed. That happens because our goal is not to fix bugs, our goal is to generate revenue. If our bugs make clients happy then they increase revenue and therefore we are accomplishing our goals.
-
-Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And I’m not saying the people behind them just want money, I’m sure they think that stuff is also cool, but the truth is that if they were not economically viable or had any potential to become so, they would not exist.
-
-Probably I should not even call this section “Realities of Our Industry 101”, maybe I should just call it “Realities of Capitalism 101”.
-
-And given that our only goal is to increase revenue and decrease cost, I think we as programmers should be paying more attention to requirements and design and start thinking with our minds and participating more actively in business decisions, which is why it is extremely important to know the problem domain we are working on. How many times before have you found yourself trying to think about what should happen in certain edge cases that have not been thought before by your managers or business people?
-
-In 1975, Boehm conducted research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% of all errors were coding errors. Another study called [“Higher Order Software—A Methodology for Defining Software”][2] also states that **in the NASA Apollo project, about 73% of all errors were design errors**.
-
-The whole reason why Design and Requirements exist is that they define what problems we’re going to solve and solving problems is what generates revenue.
-
-> Without requirements or design, programming is the art of adding bugs to an empty text file.
->
-> * Louis Srygley
->
-
-
-This same principle also applies to the tools we’ve got available in the JavaScript ecosystem. Babel, webpack, react, Redux, Mocha, Chai, Typescript, all of them exist to solve a problem and we gotta understand which problem they are trying to solve, we need to think carefully about when most of them are needed, otherwise, we will end up having JS Fatigue because:
-
-JS Fatigue happens when people use tools they don't need to solve problems they don't have.
-
-As Donald Knuth once said: “Premature optimization is the root of all evil”. Remember that software only exists to solve business problems and most software out there is just boring, it does not have any high scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue because this is all that matters. Optimize when you need, otherwise you will probably be adding unnecessary complexity to your software, which increases cost, and not generating enough revenue to justify that.
-
-This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this I’m not just talking about testing. **I’m talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear” because it guides your steps and allows you to take small steps towards solving your problems. One problem at a time. If we do the same thing when it comes to deciding when to adopt new technologies, we will also reduce fear.
-
-Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours concerned about making the optimal choice instead of actually watching something. By solving one problem at a time we reduce the scope of our decisions and by reducing the scope of our decisions we have fewer choices to make and by having fewer choices to make we decrease Analysis Paralysis.
-
-Have you ever thought about how easier it was to decide what you were going to watch when there were only a few TV channels available? Or how easier it was to decide which game you were going to play when you had only a few cartridges at home?
-
-### But what about JavaScript?
-
-By the time I’m writing this post NPM has 489,989 packages and tomorrow approximately 515 new ones are going to be published.
-
-And the packages we use and complain about have a history behind them we must comprehend in order to understand why we need them. **They are all trying to solve problems.**
-
-Babel, Dart, CoffeeScript and other transpilers come from our necessity of writing code other than JavaScript but making it runnable in our browsers. Babel even lets us write new generation JavaScript and make sure it will work even on older browsers, which has always been a great problem given the inconsistencies and different amount of compliance to the ECMA Specification between browsers. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babel’s history I highly recommend that you read [this excellent post by Henry Zhu][5].
-
-Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember well, not so long ago we used to suffer a lot with lots of `script` tags and making them work together. They used to pollute the global namespace and it was reasonably hard to make them work together when one depended on the other. In order to solve this [`Require.js`][6] was created, but it still had its problems, it was not that straightforward and its syntax also made it prone to other problems, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work on our browsers and this is why we needed Webpack and Browserify.
-
-And Webpack itself actually solves more problems than that by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies.
-
-Front-end frameworks are a bit more complicated, but the reason why they exist is to reduce the cognitive load when we write code so that we don’t need to worry about manipulating the DOM ourselves or even dealing with messy browser APIs (another problem JQuery came to solve), which is not only error prone but also not productive.
-
-This is what we have been doing this whole time in computer science. We use low-level abstractions and build even more abstractions on top of it. The more we worry about describing how our software should work instead of making it work, the more productive we are.
-
-But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays we’re using web technology everywhere: in web browsers, in desktop applications, in phone applications or even in watch applications.
-
-This evolution also creates problems we need to solve. PWAs, for example, do not exist only because they’re cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**.
-
-And usually standards are not fast enough to be created and therefore we need to create our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. We’re solving problems all the time and **we are allowing natural selection to do its job**.
-
-The tools that suit us better thrive, get more contributors and develop themselves more quickly and sometimes other tools end up incorporating the good ideas from the ones that thrive and becoming even more popular than them. This is how we evolve.
-
-By having more tools we also have more choices. If you remember the UNIX philosophy well, it states that we should aim at creating programs that do one thing and do it well.
-
-We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java JUnit tries to do all these things. This means that if we have a problem with one of them or if we find another one that suits us better, we can simply replace that small part and still have the advantages of the other ones.
-
-The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack and React, for example. They work very well together but we still do not need one to use the other. In the testing environment, for example, if we’re using Mocha and Chai all of a sudden we can just install Karma and run those same tests in multiple environments.
-
-### How to Deal With It
-
-My first advice for anyone suffering from JS Fatigue would definitely be to stay aware that **you don’t need to know everything**. Trying to learn it all at once, even when we don’t have to do so, only increases the feeling of fatigue. Go deep in areas that you love and for which you feel an inner motivation to study and adopt a lazy approach when it comes to the other ones. I’m not saying that you should be lazy, I’m just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn.
-
-Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will, otherwise, whenever you face an error you have never seen before you won’t know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5, JavaScript and also computer science fundamentals or even how the HTTP protocol works will help you master any other technologies a lot more quickly.
-
-But please, don’t get too attached to that. Sometimes you gotta risk yourself and start doing things on your own. As Sacha Greif has written in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just gotta jump into the pool and try to swim by yourself.
-
-And please, don’t get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. Of course, they have different features and a brand new name, but, in their essence, they are all the same.
-
-If you look at NPM, it is nothing new, we already had Maven Central and Ruby Gems quite a long time ago.
-
-In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as the GCC.
-
-Even JSX is not a new idea. E4X (ECMAScript for XML) already existed more than 10 years ago.
-
-Now you might ask: “what about Gulp, Grunt and NPM Scripts?” Well, I’m sorry, but we could already solve all those problems with GNU Make back in 1976. And actually, there are a reasonable number of JavaScript projects that still use it, such as Chai.js, for example. But we do not do that because we are hipsters that like vintage stuff. We use `make` because it solves our problems, and this is what you should aim at doing, as we’ve talked about before.
-
-If you really want to understand a certain technology and be able to solve any problems you might face, please, dig deep. One of the most decisive factors to success is curiosity, so **dig deep into the technologies you like**. Try to understand them from bottom-up and whenever you think something is just “magic”, debunk that myth by exploring the codebase by yourself.
-
-In my opinion, there is no better quote than this one by Richard Feynman, when it comes to really learning something:
-
-> What I cannot create, I do not understand
-
-And just below this phrase, [in the same blackboard, Richard also wrote][9]:
-
-> Know how to solve every problem that has been solved
-
-Isn’t this just amazing?
-
-When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented, they already exist, so we should be able to get to them all by ourselves.
-
-This is the very reason I love [some of the videos available in Egghead.io][10] in which Dan Abramov explains how to implement certain features that exist in Redux from scratch or [blog posts that teach you how to build your own JSX renderer][11].
-
-So why not trying to implement these things by yourself or going to GitHub and reading their codebase in order to understand how they work? I’m sure you will find a lot of useful knowledge out there. Comments and tutorials might lie and be incorrect sometimes, the code cannot.
-
-Another thing that we have been talking a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost and you do this by solving problems, this is the reason why software exists.
-
-And since we love comparing our role to the ones related to civil engineering, let’s do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book called “Building Microservices”][12].
-
-We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has existed for about two thousand years.
-
-When was the last time you’ve seen a bridge falling and when was the last time your telephone or your browser crashed?
-
-In order to explain this, I’ll use an example I love.
-
-This is the beautiful and awesome city of Barcelona:
-
-![The City of Barcelona][13]
-
-When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks:
-
-![Barcelona from above][14]
-
-As you can see, every block has the same size and all of them are very organized. If you’ve ever been to Barcelona you will also know how good it is to move through the city and how well it works.
-
-But the people that planned Barcelona could not predict what it was going to look like in the next two or three hundred years. In cities, people come in and people move through it all the time so what they had to do was make it grow organically and adapt as the time goes by. They had to be prepared for changes.
-
-This very same thing happens to our software. It evolves quickly, refactors are often needed and requirements change more frequently than we would like them to.
-
-So, instead of acting like a Software Engineer, act as a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come by but make sure everything still has its place.
-
-Doing this when it comes to software is even easier than doing this in cities due to the fact that **software is flexible, civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to give space to new ones, in Software we can do that a lot easier. We can break things all the time, we can make experiments because we can build as many times as we want and it usually takes seconds and we spend a lot more time thinking than building. Our job is purely intellectual.
-
-So **act like a town planner, let your software grow and adapt as needed**.
-
-By doing this you will also have better abstractions and know when it’s the right time to adopt them.
-
-As Sam Koblenski says:
-
-> Abstractions only work well in the right context, and the right context develops as the system develops.
-
-Nowadays something I see very often is people looking for boilerplates when they’re trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you’re starting out**. Of course boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands and therefore you won’t learn how to set up a project and you won’t understand exactly where each piece of the software you are using fits.
-
-When you feel like you are struggling more than necessary to get something simple done, it might be the right time for you to look for an easier way to do this. In our role **you should strive to be lazy** , you should work to not work. By doing that you have more free time to do other things and this decreases cost and increases revenue, so that’s another way of accomplishing your goal. You should not only work harder, you should work smarter.
-
-Probably someone has already had the same problem as you’re having right now, but if nobody did it might be your time to shine and build your own solution and help other people.
-
-But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**.
-
-By talking to people you share experiences that help each other’s careers and we discover new tools to improve our workflow and, even more important than that, learn how they solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems.
-
-Especially in our area we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I’m sure you have already had a problem you could not find a solution for because you didn’t know exactly what was happening and therefore didn’t know what was the right question to ask.
-
-But if I needed to sum this whole post in a single advice, it would be:
-
-Solve problems.
-
-Software is not a magic box, software is not poetry (unfortunately). It exists to solve problems and improves peoples’ lives. Software exists to push the world forward.
-
-**Now it’s your time to go out there and solve problems**.
-
-
---------------------------------------------------------------------------------
-
-via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html
-
-作者:[Lucas Fernandes Da Costa][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://lucasfcosta.com
-[b]: https://github.com/lujun9972
-[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/
-[2]: http://ieeexplore.ieee.org/document/1702333/
-[3]: https://en.wikipedia.org/wiki/Test_Driven_Development
-[4]: https://en.wikipedia.org/wiki/Analysis_paralysis
-[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel
-[6]: http://requirejs.org
-[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/
-[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1
-[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand
-[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch
-[11]: https://jasonformat.com/wtf-is-jsx/
-[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760
-[13]: /assets/barcelona-city.jpeg
-[14]: /assets/barcelona-above.jpeg
-[15]: https://twitter.com/thewizardlucas
diff --git a/sources/talk/20170911 What every software engineer should know about search.md b/sources/talk/20170911 What every software engineer should know about search.md
deleted file mode 100644
index 474a63201f..0000000000
--- a/sources/talk/20170911 What every software engineer should know about search.md
+++ /dev/null
@@ -1,513 +0,0 @@
-What every software engineer should know about search
-============================================================
-
-![](https://cdn-images-1.medium.com/max/2000/1*5AlsVRQrewLw74uHYTZ36w.jpeg)
-
-
-### Want to build or improve a search experience? Start here.
-
-Ask a software engineer: “[How would you add search functionality to your product?][78]” or “[How do I build a search engine?][79]” You’ll probably immediately hear back something like: “Oh, we’d just launch an ElasticSearch cluster. Search is easy these days.”
-
-But is it? Numerous current products [still][80] [have][81] [suboptimal][82] [search][83] [experiences][84]. Any true search expert will tell you that few engineers have a very deep understanding of how search engines work, knowledge that’s often needed to improve search quality.
-
-Even though many open source software packages exist, and the research is vast, the knowledge around building solid search experiences is limited to a select few. Ironically, [searching online][85] for search-related expertise doesn’t yield any recent, thoughtful overviews.
-
-#### Emoji Legend
-
-```
-❗ “Serious” gotcha: consequences of ignorance can be deadly
-🔷 Especially notable idea or piece of technology
-☁️ ️Cloud/SaaS
-🍺 Open source / free software
-🦏 JavaScript
-🐍 Python
-☕ Java
-🇨 C/C++
-```
-
-### Why read this?
-
-Think of this post as a collection of insights and resources that could help you to build search experiences. It can’t be a complete reference, of course, but hopefully we can improve it based on feedback (please comment or reach out!).
-
-I’ll point at some of the most popular approaches, algorithms, techniques, and tools, based on my work on general purpose and niche search experiences of varying sizes at Google, Airbnb and several startups.
-
-❗️Not appreciating or understanding the scope and complexity of search problems can lead to bad user experiences, wasted engineering effort, and product failure.
-
-If you’re impatient or already know a lot of this, you might find it useful to jump ahead to the tools and services sections.
-
-### Some philosophy
-
-This is a long read. But most of what we cover has four underlying principles:
-
-#### 🔷 Search is an inherently messy problem:
-
-* Queries are highly variable, and search problems themselves vary widely based on product needs.
-
-* Think about how different Facebook search is (searching a graph of people).
-
-* Or YouTube search (searching individual videos).
-
-* Or how different both of those are from Kayak ([air travel planning is a really hairy problem][2]).
-
-* Or Google Maps (making sense of geo-spatial data).
-
-* Or Pinterest (pictures of a brunch you might cook one day).
-
-#### Quality, metrics, and processes matter a lot:
-
-* There is no magic bullet (like PageRank) nor a magic ranking formula that makes for a good approach. A good approach is an always-evolving collection of techniques and processes that solve aspects of the problem and improve the overall experience, usually gradually and continuously.
-
-* ❗️In other words, search is not just about building software that does ranking or retrieval (which we will discuss below) for a specific domain. Search systems are usually an evolving pipeline of components that are tuned and evolve over time and that build up to a cohesive experience.
-
-* In particular, the key to success in search is building processes for evaluation and tuning into the product and development cycles. A search system architect should think about processes and metrics, not just technologies.
-
-#### Use existing technologies first:
-
-* As in most engineering problems, don’t reinvent the wheel yourself. When possible, use existing services or open source tools. If an existing SaaS (such as [Algolia][3] or managed Elasticsearch) fits your constraints and you can afford to pay for it, use it. This solution will likely be the best choice for your product at first, even if down the road you need to customize, enhance, or replace it.
-
-#### ❗️Even if you buy, know the details:
-
-* Even if you are using an existing open source or commercial solution, you should have some sense of the complexity of the search problem and where there are likely to be pitfalls.
-
-### Theory: the search problem
-
-Search is different for every product, and choices depend on many technical details of the requirements. It helps to identify the key parameters of your search problem:
-
-1. Size: How big is the corpus (a complete set of documents that need to be searched)? Is it thousands or billions of documents?
-
-2. Media: Are you searching through text, images, graphical relationships, or geospatial data?
-
-3. 🔷 Corpus control and quality: Are the sources for the documents under your control, or coming from a (potentially adversarial) third party? Are all the documents ready to be indexed or need to be cleaned up and selected?
-
-4. Indexing speed: Do you need real-time indexing, or is building indices in batch fine?
-
-5. Query language: Are the queries structured, or do you need to support unstructured ones?
-
-6. Query structure: Are your queries textual, images, sounds? Street addresses, record ids, people’s faces?
-
-7. Context-dependence: Do the results depend on who the user is, what is their history with the product, their geographical location, time of the day etc?
-
-8. Suggest support: Do you need to support incomplete queries?
-
-9. Latency: What are the serving latency requirements? 100 milliseconds or 100 seconds?
-
-10. Access control: Is it entirely public or should users only see a restricted subset of the documents?
-
-11. Compliance: Are there compliance or organizational limitations?
-
-12. Internationalization: Do you need to support documents with multilingual character sets or Unicode? (Hint: Always use UTF-8 unless you really know what you’re doing.) Do you need to support a multilingual corpus? Multilingual queries?
-
-Thinking through these points up front can help you make significant choices designing and building individual search system components.
-
-
-![](https://cdn-images-1.medium.com/max/1600/1*qTK1iCtyJUr4zOyw4IFD7A.jpeg)
-A production indexing pipeline.
-
-### Theory: the search pipeline
-
-Now let’s go through a list of search sub-problems. These are usually solved by separate subsystems that form a pipeline. What that means is that a given subsystem consumes the output of previous subsystems, and produces input for the following subsystems.
-
-This leads to an important property of the ecosystem: once you change how an upstream subsystem works, you need to evaluate the effect of the change and possibly change the behavior downstream.
-
-Here are the most important problems you need to solve:
-
-#### Index selection:
-
-given a set of documents (e.g. the entirety of the Internet, all the Twitter posts, all the pictures on Instagram), select a potentially smaller subset of documents that may be worthy for consideration as search results and only include those in the index, discarding the rest. This is done to keep your indexes compact, and is almost orthogonal to selecting the documents to show to the user. Examples of particular classes of documents that don’t make the cut may include:
-
-#### Spam:
-
-oh, all the different shapes and sizes of search spam! A giant topic in itself, worthy of a separate guide. [A good web spam taxonomy overview][86].
-
-#### Undesirable documents:
-
-domain constraints might require filtering: [porn][87], illegal content, etc. The techniques are similar to spam filtering, probably with extra heuristics.
-
-#### Duplicates:
-
-Or near-duplicates and redundant documents. Can be done with [Locality-sensitive hashing][88], [similarity measures][89], clustering techniques or even [clickthrough data][90]. A [good overview][91] of techniques.
-
-#### Low-utility documents:
-
-The definition of utility depends highly on the problem domain, so it’s hard to recommend approaches here. Some ideas: it might be possible to build a utility function for your documents; heuristics might work (for example, an image that contains only black pixels is not a useful document); utility might be learned from user behavior.
-
-#### Index construction:
-
-For most search systems, document retrieval is performed using an [inverted index][92] — often just called the index.
-
-* The index is a mapping of search terms to documents. A search term could be a word, an image feature or any other document derivative useful for query-to-document matching. The list of the documents for a given term is called a [posting list][1]. It can be sorted by some metric, like document quality.
-
-* Figure out whether you need to index the data in real time.❗️Many companies with large corpora of documents use a batch-oriented indexing approach, but then find this is unsuited to a product where users expect results to be current.
-
-* With text documents, term extraction usually involves using NLP techniques, such as stop lists, [stemming][4] and [entity extraction][5]; for images or videos computer vision methods are used etc.
-
-* In addition, documents are mined for statistical and meta information, such as references to other documents (used in the famous [PageRank][6] ranking signal), [topics][7], counts of term occurrences, document size, entities mentioned, etc. That information can later be used in ranking signal construction or document clustering. Some larger systems might contain several indexes, e.g. for documents of different types.
-
-* Index formats. The actual structure and layout of the index is a complex topic, since it can be optimized in many ways. For instance, there are [posting list compression methods][8], one could target an [mmap()able data representation][9] or use an [LSM-tree][10] for a continuously updated index.
-
-#### Query analysis and document retrieval:
-
-Most popular search systems allow non-structured queries. That means the system has to extract structure out of the query itself. In the case of an inverted index, you need to extract search terms using [NLP][93] techniques.
-
-The extracted terms can be used to retrieve relevant documents. Unfortunately, most queries are not very well formulated, so it pays to do additional query expansion and rewriting, like:
-
-* [Term re-weighting][11].
-
-* [Spell checking][12]. Historical query logs are very useful as a dictionary.
-
-* [Synonym matching][13]. [Another survey][14].
-
-* [Named entity recognition][15]. A good approach is to use [HMM-based language modeling][16].
-
-* Query classification. Detect queries of particular type. For example, Google Search detects queries that contain a geographical entity, a porny query, or a query about something in the news. The retrieval algorithm can then make a decision about which corpora or indexes to look at.
-
-* Expansion through [personalization][17] or [local context][18]. Useful for queries like “gas stations around me”.
-
-#### Ranking:
-
-Given a list of documents (retrieved in the previous step), their signals, and a processed query, create an optimal ordering (ranking) for those documents.
-
-Originally, most ranking models in use were hand-tuned weighted combinations of all the document signals. Signal sets might include PageRank, clickthrough data, topicality information and [others][94].
-
-To further complicate things, many of those signals, such as PageRank, or ones generated by [statistical language models][95] contain parameters that greatly affect the performance of a signal. Those have to be hand-tuned too.
-
-Lately, 🔷 [learning to rank][96], signal-based discriminative supervised approaches are becoming more and more popular. Some popular examples of LtR are [McRank][97] and [LambdaRank][98] from Microsoft, and [MatrixNet][99] from Yandex.
-
-A new, [vector space based approach][100] for semantic retrieval and ranking is gaining popularity lately. The idea is to learn individual low-dimensional vector document representations, then build a model which maps queries into the same vector space.
-
-Then, retrieval is just finding several documents that are closest by some metric (e.g. Euclidean distance) to the query vector. Ranking is the distance itself. If the mapping of both the documents and queries is built well, the documents are chosen not by the presence of some simple pattern (like a word), but by how close they are to the query in _meaning_.
-
-### Indexing pipeline operation
-
-Usually, each of the above pieces of the pipeline must be operated on a regular basis to keep the search index and search experience current.
-
-❗️Operating a search pipeline can be complex and involve a lot of moving pieces. Not only is the data moving through the pipeline, but the code for each module and the formats and assumptions embedded in the data will change over time.
-
-A pipeline can be run in “batch” or based on a regular or occasional basis (if indexing speed does not need to be real time) or in a streamed way (if real-time indexing is needed) or based on certain triggers.
-
-Some complex search engines (like Google) have several layers of pipelines operating on different time scales — for example, a page that changes often (like [cnn.com][101]) is indexed with a higher frequency than a static page that hasn’t changed in years.
-
-### Serving systems
-
-Ultimately, the goal of a search system is to accept queries, and use the index to return appropriately ranked results. While this subject can be incredibly complex and technical, we mention a few of the key aspects to this part of the system.
-
-* Performance: users notice when the system they interact with is laggy. ❗️Google has done [extensive research][19], and they have noticed that the number of searches falls by 0.6% when serving is slowed by 300 ms. They recommend serving results in under 200 ms for most of your queries. A good article [on the topic][20]. This is the hard part: the system needs to collect documents from, possibly, many computers, then merge them into what may be a very long list, and then sort that list in ranking order. To complicate things further, ranking might be query-dependent, so, while sorting, the system is not just comparing 2 numbers, but performing computation.
-
-* 🔷 Caching results: is often necessary to achieve decent performance. ❗️ But caches are just one large gotcha. They might show stale results when indices are updated or some results are blacklisted. Purging caches is a can of worms in itself: a search system might not have the capacity to serve the entire query stream with an empty (cold) cache, so the [cache needs to be pre-warmed][21] before the queries start arriving. Overall, caches complicate a system’s performance profile. Choosing a cache size and a replacement algorithm is also a [challenge][22].
-
-* Availability: is often defined by an uptime/(uptime + downtime) metric. When the index is distributed, in order to serve any search results, the system often needs to query all the shards for their share of results. ❗️That means that if one shard is unavailable, the entire search system is compromised. The more machines are involved in serving the index — the higher the probability of one of them becoming defunct and bringing the whole system down.
-
-* Managing multiple indices: Indices for large systems may be separated into shards (pieces) or divided by media type or indexing cadence (fresh versus long-term indices). Results can then be merged.
-
-* Merging results of different kinds: e.g. Google showing results from Maps, News etc.
-
-
-![](https://cdn-images-1.medium.com/max/1600/1*M8WQu17E7SDziV0rVwUKbw.jpeg)
-A human rater. Yeah, you should still have those.
-
-### Quality, evaluation, and improvement
-
-So you’ve launched your indexing pipeline and search servers, and it’s all running nicely. Unfortunately the road to a solid search experience only begins with running infrastructure.
-
-Next, you’ll need to build a set of processes around continuous search quality evaluation and improvement. In fact, this is actually most of the work and the hardest problem you’ll have to solve.
-
-🔷 What is quality? First, you’ll need to determine (and get your boss or the product lead to agree), what quality means in your case:
-
-* Self-reported user satisfaction (includes UX)
-
-* Perceived relevance of the returned results (not including UX)
-
-* Satisfaction relative to competitors
-
-* Satisfaction relative to the performance of the previous version of the search engine (e.g. last week)
-
-* [User engagement][23]
-
-Metrics: Some of these concepts can be quite hard to quantify. On the other hand, it’s incredibly useful to be able to express how well a search engine is performing in a single number, a quality metric.
-
-Continuously computing such a metric for your (and your competitors’) system you can both track your progress and explain how well you are doing to your boss. Here are some classical ways to quantify quality, that can help you construct your magic quality metric formula:
-
-* [Precision][24] and [recall][25] measure how well the retrieved set of documents corresponds to the set you expected to see.
-
-* [F score][26] (specifically F1 score) is a single number, that represents both precision and recall well.
-
-* [Mean Average Precision][27] (MAP) allows to quantify the relevance of the top returned results.
-
-* 🔷 [Normalized Discounted Cumulative Gain][28] (nDCG) is like MAP, but weights the relevance of the result by its position.
-
-* [Long and short clicks][29] — Allow to quantify how useful the results are to the real users.
-
-* [A good detailed overview][30].
-
-🔷 Human evaluations: Quality metrics might seem like statistical calculations, but they can’t all be done by automated calculations. Ultimately, metrics need to represent subjective human evaluation, and this is where a “human in the loop” comes into play.
-
-❗️Skipping human evaluation is probably the most widespread cause of sub-par search experiences.
-
-Usually, at the early stages the developers themselves evaluate the results manually. At a later point [human raters][102] (or assessors) may get involved. Raters typically use custom tools to look at returned search results and provide feedback on the quality of the results.
-
-Subsequently, you can use the feedback signals to guide development, help make launch decisions or even feed them back into the index selection, retrieval or ranking systems.
-
-Here is the list of some other types of human-driven evaluation, that can be done on a search system:
-
-* Basic user evaluation: The user ranks their satisfaction with the whole experience
-
-* Comparative evaluation: Compare with other search results (compare with search results from earlier versions of the system or competitors)
-
-* Retrieval evaluation: The query analysis and retrieval quality is often evaluated using manually constructed query-document sets. A user is shown a query and the list of the retrieved documents. She can then mark all the documents that are relevant to the query, and the ones that are not. The resulting pairs of (query, [relevant docs]) are called a “golden set”. Golden sets are remarkably useful. For one, an engineer can set up automatic retrieval regression tests using those sets. The selection signal from golden sets can also be fed back as ground truth to term re-weighting and other query re-writing models.
-
-* Ranking evaluation: Raters are presented with a query and two documents side-by-side. The rater must choose the document that fits the query better. This creates a partial ordering on the documents for a given query. That ordering can be later be compared to the output of the ranking system. The usual ranking quality measures used are MAP and nDCG.
-
-#### Evaluation datasets:
-
-One should start thinking about the datasets used for evaluation (like the “golden sets” mentioned above) early in the search experience design process. How do you collect and update them? How do you push them to the production eval pipeline? Is there a built-in bias?
-
-Live experiments:
-
-After your search engine catches on and gains enough users, you might want to start conducting [live search experiments][103] on a portion of your traffic. The basic idea is to turn some optimization on for a group of people, and then compare the outcome with that of a “control” group — a similar sample of your users that did not have the experiment feature on for them. How you would measure the outcome is, once again, very product specific: it could be clicks on results, clicks on ads etc.
-
-Evaluation cycle time: How fast you improve your search quality is directly related to how fast you can complete the above cycle of measurement and improvement. It is essential from the beginning to ask yourself, “how fast can we measure and improve our performance?”
-
-Will it take days, hours, minutes or seconds to make changes and see if they improve quality? ❗️Running evaluation should also be as easy as possible for the engineers and should not take too much hands-on time.
-
-### 🔷 So… How do I PRACTICALLY build it?
-
-This blogpost is not meant as a tutorial, but here is a brief outline of how I’d approach building a search experience right now:
-
-1. As was said above, if you can afford it — just buy an existing SaaS (some good ones are listed below). An existing service fits if:
-
-* Your experience is a “connected” one (your service or app has an internet connection).
-
-* Does it support all the functionality you need out of the box? This post gives a pretty good idea of what functions you would want. To name a few, I’d at least consider: support for the media you are searching; real-time indexing support; query flexibility, including context-dependent queries.
-
-* Given the size of the corpus and the expected [QpS][31], can you afford to pay for it for the next 12 months?
-
-* Can the service support your expected traffic within the required latency limits? If you are querying the service from an app, make sure the service is accessible quickly enough from where your users are.
-
-2\. If a hosted solution does not fit your needs or resources, you probably want to use one of the open source libraries or tools. In the case of connected apps or websites, I’d choose ElasticSearch right now. For embedded experiences, there are multiple tools below.
-
-3\. You most likely want to do index selection and clean up your documents (say extract relevant text from HTML pages) before uploading them to the search index. This will decrease the index size and make getting to good results easier. If your corpus fits on a single machine, just write a script (or several) to do that. If not, I’d use [Spark][104].
-
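-For the "corpus does not fit on one machine" case, here is a minimal PySpark sketch of that clean-up step. The HDFS paths are hypothetical, and the tag-stripping regex is deliberately crude (for production you would use a real HTML parser), but it shows the shape of the job: read raw pages, extract text, drop near-empty documents, and write out a clean corpus for indexing.
-
-```python
-import re
-from pyspark.sql import SparkSession
-
-def html_to_text(html: str) -> str:
-    """Very rough clean-up: drop scripts/styles and tags, collapse whitespace."""
-    html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
-    text = re.sub(r"(?s)<[^>]+>", " ", html)
-    return re.sub(r"\s+", " ", text).strip()
-
-spark = SparkSession.builder.appName("index-prep").getOrCreate()
-
-# (path, raw_html) pairs for every page in the crawl dump; the paths are made up.
-pages = spark.sparkContext.wholeTextFiles("hdfs:///crawl/raw/*.html")
-
-cleaned = (pages
-           .mapValues(html_to_text)
-           .filter(lambda kv: len(kv[1]) > 200))  # crude index selection: skip near-empty pages
-
-spark.createDataFrame(cleaned, ["url", "text"]).write.mode("overwrite").parquet("hdfs:///crawl/clean/")
-```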
-
-![](https://cdn-images-1.medium.com/max/1600/1*lGw4kVVQyj8E5by2GWVoQg.jpeg)
-You can never have too many tools.
-
-### ☁️ SaaS
-
-* ☁️ 🔷[Algolia][105] — a proprietary SaaS that indexes a client’s website and provides an API to search the website’s pages. They also have an API to submit your own documents, support context-dependent searches and serve results really fast. If I were building a web search experience right now and could afford it, I’d probably use Algolia first — and buy myself time to build a comparable search experience.
-
-* Various ElasticSearch providers: AWS (☁️ [ElasticSearch Cloud][32]), ☁️ [elastic.co][33] and ☁️ [Qbox][34].
-
-* ☁️ [Azure Search][35] — a SaaS solution from Microsoft. Accessible through a REST API, it can scale to billions of documents. Has a Lucene query interface to simplify migrations from Lucene-based solutions.
-
-* ☁️ [Swiftype][36] — an enterprise SaaS that indexes your company’s internal services, like Salesforce, G Suite, Dropbox and the intranet site.
-
-### Tools and libraries
-
-* 🍺☕🔷 [Lucene][106] is the most popular IR library. It implements query analysis, index retrieval and ranking. Any of the components can be replaced by an alternative implementation. There is also a C port — 🍺[Lucy][107].
-
-* 🍺☕🔷 [Solr][37] is a complete search server, based on Lucene. It’s a part of the [Hadoop][38] ecosystem of tools.
-
-* 🍺☕🔷 [Hadoop][39] is the most widely used open source MapReduce system, originally designed as an indexing pipeline framework for Solr. It has been gradually losing ground to 🍺[Spark][40] as the batch data processing framework used for indexing. ☁️[EMR][41] is a proprietary implementation of MapReduce on AWS.
-
-* 🍺☕🔷 [ElasticSearch][42] is also based on Lucene ([feature comparison with Solr][43]). It has been getting more attention lately, so much that a lot of people think of ES when they hear “search”, and for good reasons: it’s well supported, has an [extensive API][44], [integrates with Hadoop][45] and [scales well][46]. There are open source and [Enterprise][47] versions. ES is also available as a SaaS (see the providers listed above). It can scale to billions of documents, but scaling to that point can be very challenging, so a typical scenario involves a corpus that is orders of magnitude smaller.
-
-* 🍺🇨 [Xapian][48] — a C++-based IR library. Relatively compact, so good for embedding into desktop or mobile applications.
-
-* 🍺🇨 [Sphinx][49] — a full-text search server. Has a SQL-like query language. Can also act as a [storage engine for MySQL][50] or be used as a library.
-
-* 🍺☕ [Nutch][51] — a web crawler. Can be used in conjunction with Solr. It’s also the tool behind [🍺Common Crawl][52].
-
-* 🍺🦏 [Lunr][53] — a compact embedded search library for web apps on the client-side.
-
-* 🍺🦏 [searchkit][54] — a library of web UI components to use with ElasticSearch.
-
-* 🍺🦏 [Norch][55] — a [LevelDB][56]-based search engine library for Node.js.
-
-* 🍺🐍 [Whoosh][57] — a fast, full-featured search library implemented in pure Python (a minimal indexing and querying sketch appears after this list).
-
-* OpenStreetMap has its own 🍺[deck of search software][58].
-
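-To make the "libraries" option more concrete, here is a minimal sketch of indexing and querying with Whoosh, closely following its quickstart pattern. The schema, index directory and sample documents are all made up for the example:
-
-```python
-import os
-from whoosh.index import create_in
-from whoosh.fields import Schema, TEXT, ID
-from whoosh.qparser import QueryParser
-
-# Define which fields are searchable and which are stored with each hit.
-schema = Schema(title=TEXT(stored=True), path=ID(stored=True), content=TEXT)
-
-if not os.path.exists("indexdir"):
-    os.mkdir("indexdir")
-ix = create_in("indexdir", schema)
-
-writer = ix.writer()
-writer.add_document(title="First document", path="/a", content="A short note about building search.")
-writer.add_document(title="Second document", path="/b", content="Another note, this one about ranking.")
-writer.commit()
-
-with ix.searcher() as searcher:
-    query = QueryParser("content", ix.schema).parse("ranking")
-    for hit in searcher.search(query, limit=10):
-        print(hit["title"], hit["path"])  # only stored fields are available on hits
-```
-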
-### Datasets
-
-A few fun or useful datasets for building a search engine or evaluating search engine quality:
-
-* 🍺🔷 [Commoncrawl][59] — an open, regularly updated web crawl dataset. There is a [mirror on AWS][60], accessible for free within the service.
-
-* 🍺🔷 [Openstreetmap data dump][61] is a very rich source of data for someone building a geospatial search engine.
-
-* 🍺 [Google Books N-grams][62] can be very useful for building language models.
-
-* 🍺 [Wikipedia dumps][63] are a classic source for building, among other things, an entity graph. There is a [wide range of helper tools][64] available.
-
-* [IMDb dumps][65] are a fun dataset to build a small toy search engine for.
-
-### References
-
-* [Modern Information Retrieval][66] by R. Baeza-Yates and B. Ribeiro-Neto is a good, deep academic treatment of the subject, and a solid starting point for someone completely new to the topic.
-
-* [Information Retrieval][67] by S. Büttcher, C. Clarke and G. Cormack is another academic textbook with wide coverage, and it is more up-to-date. It covers learning-to-rank and does a pretty good job of discussing the theory of search system evaluation. It is also a good overview.
-
-* [Learning to Rank][68] by T-Y Liu is the best theoretical treatment of LtR, though it is pretty thin on practical aspects. Someone considering building an LtR system should probably check this out.
-
-* [Managing Gigabytes][69] — published in 1999, is still a definitive reference for anyone embarking on building an efficient index of a significant size.
-
-* [Text Retrieval and Search Engines][70] — a MOOC from Coursera. A decent overview of basics.
-
-* [Indexing the World Wide Web: The Journey So Far][71] ([PDF][72]), an overview of web search from 2012, by Ankit Jain and Abhishek Das of Google.
-
-* [Why Writing Your Own Search Engine is Hard][73] — a classic 2004 article by Anna Patterson.
-
-* [https://github.com/harpribot/awesome-information-retrieval][74] — a curated list of search-related resources.
-
-* A [great blog][75] on everything search by [Daniel Tunkelang][76].
-
-* Some good slides on [search engine evaluation][77].
-
-This concludes my humble attempt to make a somewhat-useful “map” for an aspiring search engine engineer. Did I miss something important? I’m pretty sure I did — you know, [the margin is too narrow][108] to contain this enormous topic. Let me know if you think that something should be here and is not — you can reach [me][109] at [forwidur@gmail.com][110] or at [@forwidur][111].
-
-> P.S. — This post is part of an open, collaborative effort to build an online reference, the Open Guide to Practical AI, which we’ll release in draft form soon. See [this popular guide][112] for an example of what’s coming. If you’d like to get updates on or help with this effort, sign up [here][113].
-
-> Special thanks to [Joshua Levy][114], [Leo Polovets][115] and [Abhishek Das][116] for reading drafts of this and their invaluable feedback!
-
-> Header image courtesy of [Mickaël Forrett][117]. The beautiful toolbox is called [The Studley Tool Chest][118].
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Max Grigorev
-distributed systems, data, AI
-
--------------
-
-
-via: https://medium.com/startup-grind/what-every-software-engineer-should-know-about-search-27d1df99f80d
-
-作者:[Max Grigorev][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://medium.com/@forwidur?source=post_header_lockup
-[1]:https://en.wikipedia.org/wiki/Inverted_index
-[2]:http://www.demarcken.org/carl/papers/ITA-software-travel-complexity/ITA-software-travel-complexity.pdf
-[3]:https://www.algolia.com/
-[4]:https://en.wikipedia.org/wiki/Stemming
-[5]:https://en.wikipedia.org/wiki/Named-entity_recognition
-[6]:http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf
-[7]:https://gofishdigital.com/semantic-topic-modeling/
-[8]:https://nlp.stanford.edu/IR-book/html/htmledition/postings-file-compression-1.html
-[9]:https://deplinenoise.wordpress.com/2013/03/31/fast-mmapable-data-structures/
-[10]:https://en.wikipedia.org/wiki/Log-structured_merge-tree
-[11]:http://orion.lcg.ufrj.br/Dr.Dobbs/books/book5/chap11.htm
-[12]:http://norvig.com/spell-correct.html
-[13]:http://nlp.stanford.edu/IR-book/html/htmledition/query-expansion-1.html
-[14]:https://www.iro.umontreal.ca/~nie/IFT6255/carpineto-Survey-QE.pdf
-[15]:https://en.wikipedia.org/wiki/Named-entity_recognition
-[16]:http://www.aclweb.org/anthology/P02-1060
-[17]:https://en.wikipedia.org/wiki/Personalized_search
-[18]:http://searchengineland.com/future-search-engines-context-217550
-[19]:http://services.google.com/fh/files/blogs/google_delayexp.pdf
-[20]:http://highscalability.com/latency-everywhere-and-it-costs-you-sales-how-crush-it
-[21]:https://stackoverflow.com/questions/22756092/what-does-it-mean-by-cold-cache-and-warm-cache-concept
-[22]:https://en.wikipedia.org/wiki/Cache_performance_measurement_and_metric
-[23]:http://blog.popcornmetrics.com/5-user-engagement-metrics-for-growth/
-[24]:https://en.wikipedia.org/wiki/Information_retrieval#Precision
-[25]:https://en.wikipedia.org/wiki/Information_retrieval#Recall
-[26]:https://en.wikipedia.org/wiki/F1_score
-[27]:http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/
-[28]:https://en.wikipedia.org/wiki/Discounted_cumulative_gain
-[29]:http://www.blindfiveyearold.com/short-clicks-versus-long-clicks
-[30]:https://arxiv.org/pdf/1302.2318.pdf
-[31]:https://en.wikipedia.org/wiki/Queries_per_second
-[32]:https://aws.amazon.com/elasticsearch-service/
-[33]:https://www.elastic.co/
-[34]:https://qbox.io/
-[35]:https://azure.microsoft.com/en-us/services/search/
-[36]:https://swiftype.com/
-[37]:http://lucene.apache.org/solr/
-[38]:http://hadoop.apache.org/
-[39]:http://hadoop.apache.org/
-[40]:http://spark.apache.org/
-[41]:https://aws.amazon.com/emr/
-[42]:https://www.elastic.co/products/elasticsearch
-[43]:http://solr-vs-elasticsearch.com/
-[44]:https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html
-[45]:https://github.com/elastic/elasticsearch-hadoop
-[46]:https://www.elastic.co/guide/en/elasticsearch/guide/current/distributed-cluster.html
-[47]:https://www.elastic.co/cloud/enterprise
-[48]:https://xapian.org/
-[49]:http://sphinxsearch.com/
-[50]:https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/
-[51]:https://nutch.apache.org/
-[52]:http://commoncrawl.org/
-[53]:https://lunrjs.com/
-[54]:https://github.com/searchkit/searchkit
-[55]:https://github.com/fergiemcdowall/norch
-[56]:https://github.com/google/leveldb
-[57]:https://bitbucket.org/mchaput/whoosh/wiki/Home
-[58]:http://wiki.openstreetmap.org/wiki/Search_engines
-[59]:http://commoncrawl.org/
-[60]:https://aws.amazon.com/public-datasets/common-crawl/
-[61]:http://wiki.openstreetmap.org/wiki/Downloading_data
-[62]:http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
-[63]:https://dumps.wikimedia.org/
-[64]:https://www.mediawiki.org/wiki/Alternative_parsers
-[65]:http://www.imdb.com/interfaces
-[66]:https://www.amazon.com/dp/0321416910
-[67]:https://www.amazon.com/dp/0262528878/
-[68]:https://www.amazon.com/dp/3642142664/
-[69]:https://www.amazon.com/dp/1558605703
-[70]:https://www.coursera.org/learn/text-retrieval
-[71]:https://research.google.com/pubs/pub37043.html
-[72]:https://pdfs.semanticscholar.org/28d8/288bff1b1fc693e6d80c238de9fe8b5e8160.pdf
-[73]:http://queue.acm.org/detail.cfm?id=988407
-[74]:https://github.com/harpribot/awesome-information-retrieval
-[75]:https://medium.com/@dtunkelang
-[76]:https://www.cs.cmu.edu/~quixote/
-[77]:https://web.stanford.edu/class/cs276/handouts/lecture8-evaluation_2014-one-per-page.pdf
-[78]:https://stackoverflow.com/questions/34314/how-do-i-implement-search-functionality-in-a-website
-[79]:https://www.quora.com/How-to-build-a-search-engine-from-scratch
-[80]:https://github.com/isaacs/github/issues/908
-[81]:https://www.reddit.com/r/Windows10/comments/4jbxgo/can_we_talk_about_how_bad_windows_10_search_sucks/d365mce/
-[82]:https://www.reddit.com/r/spotify/comments/2apwpd/the_search_function_sucks_let_me_explain/
-[83]:https://medium.com/@RohitPaulK/github-issues-suck-723a5b80a1a3#.yp8ui3g9i
-[84]:https://thenextweb.com/opinion/2016/01/11/netflix-search-sucks-flixed-fixes-it/
-[85]:https://www.google.com/search?q=building+a+search+engine
-[86]:http://airweb.cse.lehigh.edu/2005/gyongyi.pdf
-[87]:https://www.researchgate.net/profile/Gabriel_Sanchez-Perez/publication/262371199_Explicit_image_detection_using_YCbCr_space_color_model_as_skin_detection/links/549839cf0cf2519f5a1dd966.pdf
-[88]:https://en.wikipedia.org/wiki/Locality-sensitive_hashing
-[89]:https://en.wikipedia.org/wiki/Similarity_measure
-[90]:https://www.microsoft.com/en-us/research/wp-content/uploads/2011/02/RadlinskiBennettYilmaz_WSDM2011.pdf
-[91]:http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
-[92]:https://en.wikipedia.org/wiki/Inverted_index
-[93]:https://en.wikipedia.org/wiki/Natural_language_processing
-[94]:http://backlinko.com/google-ranking-factors
-[95]:http://times.cs.uiuc.edu/czhai/pub/slmir-now.pdf
-[96]:https://en.wikipedia.org/wiki/Learning_to_rank
-[97]:https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf
-[98]:https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/lambdarank.pdf
-[99]:https://yandex.com/company/technologies/matrixnet/
-[100]:https://arxiv.org/abs/1708.02702
-[101]:http://cnn.com/
-[102]:http://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
-[103]:https://googleblog.blogspot.co.uk/2008/08/search-experiments-large-and-small.html
-[104]:https://spark.apache.org/
-[105]:https://www.algolia.com/
-[106]:https://lucene.apache.org/
-[107]:https://lucy.apache.org/
-[108]:https://www.brainyquote.com/quotes/quotes/p/pierredefe204944.html
-[109]:https://www.linkedin.com/in/grigorev/
-[110]:mailto:forwidur@gmail.com
-[111]:https://twitter.com/forwidur
-[112]:https://github.com/open-guides/og-aws
-[113]:https://upscri.be/d29cfe/
-[114]:https://twitter.com/ojoshe
-[115]:https://twitter.com/lpolovets
-[116]:https://www.linkedin.com/in/abhishek-das-3280053/
-[117]:https://www.behance.net/gallery/3530289/-HORIZON-
-[118]:https://en.wikipedia.org/wiki/Henry_O._Studley
diff --git a/sources/talk/20171030 Why I love technical debt.md b/sources/talk/20171030 Why I love technical debt.md
deleted file mode 100644
index da071d370a..0000000000
--- a/sources/talk/20171030 Why I love technical debt.md
+++ /dev/null
@@ -1,69 +0,0 @@
-Why I love technical debt
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory1.png?itok=nbSRovsj)
-This is not necessarily the title you'd expect for an article, I guess,* but I'm a fan of [technical debt][1]. There are two reasons for this: a Bad Reason and a Good Reason. I'll be upfront about the Bad Reason first, then explain why even that isn't really a reason to love it. I'll then tackle the Good Reason, and you'll nod along in agreement.
-
-### The Bad Reason I love technical debt
-
-We'll get this out of the way, then, shall we? The Bad Reason is that, well, there's just lots of it, it's interesting, it keeps me in a job, and it always provides a reason, as a security architect, for me to get involved in** projects that might give me something new to look at. I suppose those aren't all bad things. It can also be a bit depressing, because there's always so much of it, it's not always interesting, and sometimes I need to get involved even when I might have better things to do.
-
-And what's worse is that it almost always seems to be security-related, and it's always there. That's the bad part.
-
-Security, we all know, is the piece that so often gets left out, or tacked on at the end, or done in half the time it deserves, or done by people who have half an idea, but don't quite fully grasp it. I should be clear at this point: I'm not saying that this last reason is those people's fault. That people know they need security is fantastic. If we (the security folks) or we (the organization) haven't done a good enough job in making sufficient security resources--whether people, training, or visibility--available to those people who need it, the fact that they're trying is great and something we can work on. Let's call that a positive. Or at least a reason for hope.***
-
-### The Good Reason I love technical debt
-
-Let's get on to the other reason: the legitimate reason. I love technical debt when it's named.
-
-What does that mean?
-
-We all get that technical debt is a bad thing. It's what happens when you make decisions for pragmatic reasons that are likely to come back and bite you later in a project's lifecycle. Here are a few classic examples that relate to security:
-
- * Not getting around to applying authentication or authorization controls on APIs that might, at some point, be public.
- * Lumping capabilities together so it's difficult to separate out appropriate roles later on.
- * Hard-coding roles in ways that don't allow for customisation by people who may use your application in different ways from those you initially considered.
- * Hard-coding cipher suites for cryptographic protocols, rather than putting them in a config file where they can be changed or selected later.
-
-
-
-There are lots more, of course, but those are just a few that jump out at me and that I've seen over the years. Technical debt means making decisions that will mean more work later on to fix them. And that can't be good, can it?
-
-There are two words in the preceding paragraphs that should make us happy: they are "decisions" and "pragmatic." Because, in order for something to be named technical debt, I'd argue, it has to have been subject to conscious decision-making, and trade-offs must have been made--hopefully for rational reasons. Those reasons may be many and various--lack of qualified resources; project deadlines; lack of sufficient requirement definition--but if they've been made consciously, then the technical debt can be named, and if technical debt can be named, it can be documented.
-
-And if it's documented, we're halfway there. As a security guy, I know that I can't force everything that goes out of the door to meet all the requirements I'd like--but the same goes for the high availability gal, the UX team, the performance folks, etc.
-
-What we need--what we all need--is for documentation to exist about why decisions were made, because when we return to the problem we'll know it was thought about. And, what's more, the recording of that information might even make it into product documentation. "This API is designed to be used in a protected environment and should not be exposed on the public Internet" is a great piece of documentation. It may not be what a customer is looking for, but at least they know how to deploy the product, and, crucially, it's an opportunity for them to come back to the product manager and say, "We'd really like to deploy that particular API in this way. Could you please add this as a feature request?" Product managers like that. Very much.****
-
-The best thing, though, is not just that named technical debt is visible technical debt, but that if you encourage your developers to document the decisions in code,***** then there's a decent chance that they'll record some ideas about how this should be done in the future. If you're really lucky, they might even add some hooks in the code to make it easier (an "auth" parameter on the API, which is unused in the current version, but will make API compatibility so much simpler in new releases; or a cipher entry in the config file that currently only accepts one option, but is at least checked by the code). A small sketch of these hooks follows.
-
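-As an illustration only (this is not from the original post; the file name, suite name, and API are made up), here is roughly what those two hooks might look like in Python. The debt is named right where it lives: the unused parameter and the single-value config entry are documented and checked, leaving room to do it properly later.
-
-```python
-import configparser
-
-# Named technical debt: only one cipher suite is supported in this release.
-# Keeping the value in a config file, and validating it here, leaves a hook
-# for accepting a real list of suites in a later version.
-SUPPORTED_CIPHER_SUITES = {"TLS_AES_256_GCM_SHA384"}
-
-def load_cipher_suite(path: str = "config.ini") -> str:
-    config = configparser.ConfigParser()
-    config.read(path)
-    suite = config.get("tls", "cipher_suite", fallback="TLS_AES_256_GCM_SHA384")
-    if suite not in SUPPORTED_CIPHER_SUITES:
-        raise ValueError(f"Unsupported cipher suite: {suite!r}")
-    return suite
-
-def get_widget(widget_id: str, auth=None):
-    # Named technical debt: 'auth' is accepted but ignored in this release,
-    # so adding authentication later won't break the API signature.
-    return {"id": widget_id, "name": "example widget"}
-```
-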
-I've been a bit disingenuous, I know, by defining technical debt as named technical debt. But honestly, if it's not named, then you can't know what it is, and until you know what it is, you can't fix it.******* My advice is this: when you're doing a release close-down (or in your weekly standup--EVERY weekly standup), have an agenda item to record technical debt. Name it, document it, be proud, sleep at night.
-
-* Well, apart from the obvious clickbait reason--for which I'm (a little) sorry.
-
-** I nearly wrote "poke my nose into."
-
-*** Work with me here.
-
-**** If you're a software engineer/coder/hacker, here's a piece of advice: Learn to talk to product managers like real people, and treat them nicely. They (the better ones, at least) are invaluable allies when you need to prioritize features or have tricky trade-offs to make.
-
-***** Do this. Just do it. Documentation that isn't at least mirrored in code isn't real documentation.******
-
-****** Don't believe me? Talk to developers. "Who reads product documentation?" "Oh, the spec? I skimmed it. A few releases back. I think." "I looked in the header file; couldn't see it there."
-
-******* Or decide not to fix it, which may also be an entirely appropriate decision.
-
-This article originally appeared on [Alice, Eve, and Bob - a security blog][2] and is republished with permission.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/10/why-i-love-technical-debt
-
-作者:[Mike Bursell][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/mikecamel
-[1]:https://en.wikipedia.org/wiki/Technical_debt
-[2]:https://aliceevebob.wordpress.com/2017/08/29/why-i-love-technical-debt/
diff --git a/sources/talk/20171107 How to Monetize an Open Source Project.md b/sources/talk/20171107 How to Monetize an Open Source Project.md
deleted file mode 100644
index ab51006101..0000000000
--- a/sources/talk/20171107 How to Monetize an Open Source Project.md
+++ /dev/null
@@ -1,86 +0,0 @@
-How to Monetize an Open Source Project
-======
-
-![](http://www.itprotoday.com/sites/itprotoday.com/files/styles/article_featured_standard/public/ThinkstockPhotos-629994230_0.jpg?itok=5dZ68OTn)
-The problem for any small group of developers putting the finishing touches on a commercial open source application is figuring out how to monetize the software in order to keep the bills paid and food on the table. Often these small pre-startups will start by deciding which of the recognized open source business models they're going to adapt, whether that be following Red Hat's lead and offering professional services, going the SaaS route, releasing as open core or something else.
-
-Steven Grandchamp, general manager for MariaDB's North America operations and CEO for Denver-based startup [Drud Tech][1], thinks that might be putting the cart before the horse. With an open source project, the best first move is to get people downloading and using your product for free.
-
-**Related:** [Demand for Open Source Skills Continues to Grow][2]
-
-"The number one tangent to monetization in any open source product is adoption, because the key to monetizing an open source product is you flip what I would call the sales funnel upside down," he told ITPro at the recent All Things Open conference in Raleigh, North Carolina.
-
-In many ways, he said, selling open source solutions is the opposite of marketing traditional proprietary products, where adoption doesn't happen until after a contract is signed.
-
-**Related:** [Is Raleigh the East Coast's Silicon Valley?][3]
-
-"In a proprietary software company, you advertise, you market, you make claims about what the product can do, and then you have sales people talk to customers. Maybe you have a free trial or whatever. Maybe you have a small version. Maybe it's time bombed or something like that, but you don't really get to realize the benefit of the product until there's a contract and money changes hands."
-
-Selling open source solutions is different because of the challenge of selling software that's freely available as a GitHub download.
-
-"The whole idea is to put the product out there, let people use it, experiment with it, and jump on the chat channels," he said, pointing out that his company Drud has a public chat channel that's open to anybody using their product. "A subset of that group is going to raise their hand and go, 'Hey, we need more help. We'd like a tighter relationship with the company. We'd like to know where your road map's going. We'd like to know about customization. We'd like to know if maybe this thing might be on your road map.'"
-
-Grandchamp knows more than a little about making software pay, from both the proprietary and open source sides of the fence. In the 1980s he served as VP of research and development at Formation Technologies, and became SVP of R&D at John H. Harland after it acquired Formation in the mid-90s. He joined MariaDB in 2016, after serving eight years as CEO at OpenLogic, which was providing commercial support for more than 600 open-source projects at the time it was acquired by Rogue Wave Software. Along the way, there was a two year stint at Microsoft's Redmond campus.
-
-OpenLogic was where he discovered open source, and his experiences there are key to his approach for monetizing open source projects.
-
-"When I got to OpenLogic, I was told that we had 300 customers that were each paying $99 a year for access to our tool," he explained. "But the problem was that nobody was renewing the tool. So I called every single customer that I could find and said 'did you like the tool?'"
-
-It turned out that nearly everyone he talked to was extremely happy with the company's software, which ironically was the reason they weren't renewing. The company's tool solved their problem so well there was no need to renew.
-
-"What could we have offered that would have made you renew the tool?" he asked. "They said, 'If you had supported all of the open source products that your tool assembled for me, then I would have that ongoing relationship with you.'"
-
-Grandchamp immediately grasped the situation, and when the CTO said such support would be impossible, Grandchamp didn't mince words: "Then we don't have a company."
-
-"We figured out a way to support it," he said. "We created something called the Open Logic Expert Community. We developed relationships with committers and contributors to a couple of hundred open source packages, and we acted as sort of the hub of the SLA for our customers. We had some people on staff, too, who knew the big projects."
-
-After that successful launch, Grandchamp and his team began hearing from customers that they were confused over exactly what open source code they were using in their projects. That led to the development of what he says was the first software-as-a-service compliance portal for open source, which could scan an application's code and produce a list of all of the open source code included in the project. When customers then expressed confusion over compliance issues, the SaaS service was expanded to flag potential licensing conflicts.
-
-Although the product lines were completely different, the same approach was used to monetize MariaDB, then called SkySQL, after MySQL co-founders Michael "Monty" Widenius, David Axmark, and Allan Larsson created the project by forking MySQL, which Oracle had acquired from Sun Microsystems in 2010.
-
-Again, users were approached and asked what things they would be willing to purchase.
-
-"They wanted different functionality in the database, and you didn't really understand this if you didn't talk to your customers," Grandchamp explained. "Monty and his team, while they were being acquired at Sun and Oracle, were working on all kinds of new functionality, around cloud deployments, around different ways to do clustering, they were working on lots of different things. That work, Oracle and MySQL didn't really pick up."
-
-Rolling in the new features customers wanted needed to be handled gingerly, because it was important to the folks at MariaDB to not break compatibility with MySQL. This necessitated a strategy around when the code bases would come together and when they would separate. "That road map, knowledge, influence and technical information was worth paying for."
-
-As with OpenLogic, MariaDB customers expressed a willingness to spend money on a variety of fronts. For example, a big driver in the early days was a project called Remote DBA, which helped customers make up for a shortage of qualified database administrators. The project could help with design issues, as well as monitor existing systems to take the workload off of a customer's DBA team. The service also offered access to MariaDB's own DBAs, many of whom had a history with the database going back to the early days of MySQL.
-
-"That was a subscription offering that people were definitely willing to pay for," he said.
-
-The company also learned, again by asking and listening to customers, that there were various types of support subscriptions that customers were willing to purchase, including subscriptions around capability and functionality, and a managed service component of Remote DBA.
-
-These days Grandchamp is putting much of his focus on his latest project, Drud, a startup that offers a suite of integrated, automated, open source development tools for developing and managing multiple websites, which can be running on any combination of content management systems and deployment platforms. It is monetized partially through modules that add features like a centralized dashboard and an "intelligence engine."
-
-As you might imagine, he got it off the ground by talking to customers and giving them what they indicated they'd be willing to purchase.
-
-"Our number one customer target is the agency market," he said. "The enterprise market is a big target, but I believe it's our second target, not our first. And the reason it's number two is they don't make decisions very fast. There are technology refresh cycles that have to come up, there are lots of politics involved and lots of different vendors. It's lucrative once you're in, but in a startup you've got to figure out how to pay your bills. I want to pay my bills today. I don't want to pay them in three years."
-
-Drud's focus on the agency market illustrates another consideration: the importance of understanding something about your customers' business. When talking with agencies, many said they were tired of being offered generic software that really didn't match their needs from proprietary vendors that didn't understand their business. In Drud's case, that understanding is built into the company DNA. The software was developed by an agency to fill its own needs.
-
-"We are a platform designed by an agency for an agency," Grandchamp said. "Right there is a relationship that they're willing to pay for. We know their business."
-
-Grandchamp noted that startups also need to be able to distinguish users from customers. Most of the people downloading and using commercial open source software aren't the people who have authorization to make purchasing decisions. These users, however, can point to the people who control the purse strings.
-
-"It's our job to build a way to communicate with those users, provide them value so that they'll give us value," he explained. "It has to be an equal exchange. I give you value of a tool that works, some advice, really good documentation, access to experts who can sort of guide you along. Along the way I'm asking you for pieces of information. Who do you work for? How are the technology decisions happening in your company? Are there other people in your company that we should refer the product to? We have to create the dialog."
-
-In the end, Grandchamp said, in the open source world the people who go out to find business probably shouldn't see themselves as salespeople, but rather, as problem solvers.
-
-"I believe that you're not really going to need salespeople in this model. I think you're going to need customer success people. I think you're going to need people who can enable your customers to be successful in a business relationship that's more highly transactional."
-
-"People don't like to be sold," he added, "especially in open source. The last person they want to see is the sales person, but they like to ply and try and consume and give you input and give you feedback. They love that."
-
---------------------------------------------------------------------------------
-
-via: http://www.itprotoday.com/software-development/how-monetize-open-source-project
-
-作者:[Christine Hall][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.itprotoday.com/author/christine-hall
-[1]:https://www.drud.com/
-[2]:http://www.itprotoday.com/open-source/demand-open-source-skills-continues-grow
-[3]:http://www.itprotoday.com/software-development/raleigh-east-coasts-silicon-valley
diff --git a/sources/talk/20171114 Why pair writing helps improve documentation.md b/sources/talk/20171114 Why pair writing helps improve documentation.md
deleted file mode 100644
index ff3bbb5888..0000000000
--- a/sources/talk/20171114 Why pair writing helps improve documentation.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Why pair writing helps improve documentation
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
-
-Professional writers, at least in the Red Hat documentation team, nearly always work on docs alone. But have you tried writing as part of a pair? In this article, I'll explain a few benefits of pair writing.
-
-### What is pair writing?
-
-Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from [pair programming][1].
-
-When pair writing, you and your colleague work on the text together, making suggestions and asking questions as needed. Meanwhile, you're observing each other's work. For example, while one is writing, the other writer observes details such as structure or context. Often discussion around the document turns into sharing experiences and opinions, and brainstorming about writing in general.
-
-At all times, the writing is done by only one person. Thus, you need only one computer, unless you want one writer to do online research while the other person does the writing. The text workflow is the same as if you are working alone: a text editor, the documentation source files, git, and so on.
-
-### Pair writing in practice
-
-My colleague Aneta Steflova and I have done more than 50 hours of pair writing working on the Red Hat Enterprise Linux System Administration docs and on the Red Hat Identity Management docs. I've found that, compared to writing alone, pair writing:
-
- * is as productive or more productive;
- * improves document quality;
- * helps writers share technical expertise; and
- * is more fun.
-
-
-
-### Speed
-
-Two writers writing one text? Sounds half as productive, right? Wrong. (Usually.)
-
-Pair writing can help you work faster because two people have solutions to a bigger set of problems, which means getting blocked less often during the process. For example, one time we wrote urgent API docs for identity management. I know at least the basics of web APIs, the REST protocol, and so on, which helped us speed through those parts of the documentation. Working alone, Aneta would have needed to interrupt the writing process frequently to study these topics.
-
-### Quality
-
-Poor wording or sentence structure, inconsistencies in material, and so on have a harder time surviving under the scrutiny of four eyes. For example, one of our pair writing documents was reviewed by an extremely critical developer, who was known for catching technical inaccuracies and bad structure. After this particular review, he said, "Perfect. Thanks a lot."
-
-### Sharing expertise
-
-Each of us lives in our own writing bubble, and we normally don't know how others approach writing. Pair writing can help you improve your own writing process. For example, Aneta showed me how to better handle assignments in which the developer has provided starting text (as opposed to the writer writing from scratch using their own knowledge of the subject), which I didn't have experience with. Also, she structures the docs thoroughly, which I began doing as well.
-
-As another example, I'm good enough at Vim that XML editing (e.g., tags manipulation) is enjoyable instead of torturous. Aneta saw how I was using Vim, asked about it, suffered through the learning curve, and now takes advantage of the Vim features that help me.
-
-Pair writing is especially good for helping and mentoring new writers, and it's a great way to get to know professionally (and have fun with) colleagues.
-
-### When pair writing shines
-
-In addition to benefits I've already listed, pair writing is especially good for:
-
- * **Working with [Bugzilla][2]**: Bugzillas can be cumbersome and cause problems, especially for administration-clumsy people (like me).
- * **Reviewing existing documents**: When documentation needs to be expanded or fixed, it is necessary to first examine the existing document.
- * **Learning new technology**: A fellow writer can be a better teacher than an engineer.
- * **Writing emails/requests for information to developers with well-chosen questions**: The difficulty of this task rises in proportion to the difficulty of the technology you are documenting.
-
-
-
-Also, with pair writing, feedback is in real time, as-needed, and two-way.
-
-On the downside, pair writing can move at a faster pace, giving a writer less time to mull over a topic or wording. On the other hand, generally peer review is not necessary after pair writing.
-
-### Words of caution
-
-To get the most out of pair writing:
-
- * Go into the project well prepared, otherwise you can waste your colleague's time.
- * Talkative types need to stay focused on the task, otherwise they end up talking rather than writing.
- * Be prepared for direct feedback. Pair writing is not for feedback-allergic writers.
- * Beware of session hijackers. Dominant personalities can turn pair writing into writing solo with a spectator. (However, it _can_ be good if one person takes over at times, as long as the less-experienced partner learns from the hijacker, or the more-experienced writer is providing feedback to the hijacker.)
-
-
-
-### Conclusion
-
-Pair writing is a meeting, but one in which you actually get work done. It's an activity that lets writers focus on the one indispensable thing in our vocation--writing.
-
-_This post was written with the help of pair writing with Aneta Steflova._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/11/try-pair-writing
-
-作者:[Maxim Svistunov][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/maxim-svistunov
-[1]:https://developer.atlassian.com/blog/2015/05/try-pair-programming/
-[2]:https://www.bugzilla.org/
diff --git a/sources/talk/20171115 Why and How to Set an Open Source Strategy.md b/sources/talk/20171115 Why and How to Set an Open Source Strategy.md
deleted file mode 100644
index 79ec071b4d..0000000000
--- a/sources/talk/20171115 Why and How to Set an Open Source Strategy.md
+++ /dev/null
@@ -1,120 +0,0 @@
-Why and How to Set an Open Source Strategy
-============================================================
-
-![](https://www.linuxfoundation.org/wp-content/uploads/2017/11/open-source-strategy-1024x576.jpg)
-
-This article explains how to walk through, measure, and define strategies collaboratively in an open source community.
-
- _“If you don’t know where you are going, you’ll end up someplace else.”_ — Yogi Berra
-
-Open source projects are generally started as a way to scratch one’s itch — and frankly that’s one of their greatest attributes. Getting code down provides a tangible method to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis-paralysis, letting the project pragmatically solve the problem at hand.
-
-Next, a project starts to scale up and gets many varied users and contributions, with plenty of opinions along the way. That leads to the next big challenge — how does a project start to build a strategic vision? In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, in a community.
-
-Strategy may seem like a buzzword of the corporate world rather than something that an open source community would embrace, so I suggest stripping away the negative actions that are sometimes associated with this word (e.g., staff reductions, discontinuations, office closures). Strategy done right isn’t a tool to justify unfortunate actions but to help show focus and where each community member can contribute.
-
-A good application of strategy answers the following questions:
-
-* Why does the project exist?
-
-* What does the project look to achieve?
-
-* What is the ideal end state for the project?
-
-The key to success is answering these questions as simply as possible, with consensus from your community. Let’s look at some ways to do this.
-
-### Setting a mission and vision
-
- _“Efforts and courage are not enough without purpose and direction.”_ — John F. Kennedy
-
-All strategic planning starts off with setting a course for where the project wants to go. The two tools used here are _Mission_ and _Vision_. They are complementary terms, describing both the reason a project exists (mission) and the ideal end state for a project (vision).
-
-A great way to start this exercise with the intent of driving consensus is by asking each key community member the following questions:
-
-* What drove you to join and/or contribute to the project?
-
-* How do you define success for your participation?
-
-In a company, you’d usually ask your customers these questions. But in open source projects, the customers are the project participants — and their time investment is what makes the project a success.
-
-Driving consensus means capturing the answers to these questions and looking for themes across them. At R Consortium, for example, I created a shared doc for the board to review each member’s answers to the above questions, and followed up with a meeting to review for specific themes that came from those insights.
-
-Building a mission flows really well from this exercise. The key thing is to keep the wording of your mission short and concise. Open Mainframe Project has done this really well. Here’s their mission:
-
- _Build community and adoption of Open Source on the mainframe by:_
-
-* _Eliminating barriers to Open Source adoption on the mainframe_
-
-* _Demonstrating value of the mainframe on technical and business levels_
-
-* _Strengthening collaboration points and resources for the community to thrive_
-
-At 40 words, it passes the key eye tests of a good mission statement; it’s clear, concise, and demonstrates the useful value the project aims for.
-
-The next stage is to reflect on the mission statement and ask yourself this question: What is the ideal outcome if the project accomplishes its mission? That can be a tough one to tackle. Open Mainframe Project put together its vision really well:
-
- _Linux on the Mainframe as the standard for enterprise class systems and applications._
-
-You could read that as a [BHAG][1], but it’s really more of a vision, because it describes the future state that would be created by the mission being fully accomplished. It also hits the key pieces to an effective vision — it’s only 13 words, inspirational, clear, memorable, and concise.
-
-Mission and vision add clarity on the who, what, why, and how for your project. But, how do you set a course for getting there?
-
-### Goals, Objectives, Actions, and Results
-
- _“I don’t focus on what I’m up against. I focus on my goals and I try to ignore the rest.”_ — Venus Williams
-
-Looking at a mission and vision can get overwhelming, so breaking them down into smaller chunks can help the project determine how to get started. This also helps prioritize actions, either by importance or by opportunity. Most importantly, this step gives you guidance on what things to focus on for a period of time, and which to put off.
-
-There are lots of methods of time-bound planning, but the method I think works the best for projects is what I’ve dubbed the GOAR method. It’s an acronym that stands for:
-
-* Goals define what the project is striving for and would likely align with and support the mission. Examples might be “Grow a diverse contributor base” or “Become the leading project for X.” Goals are aspirational and set direction.
-
-* Objectives show how you measure a goal’s completion, and should be clear and measurable. You might also have multiple objectives to measure the completion of a goal. For example, the goal “Grow a diverse contributor base” might have objectives such as “Have X total contributors monthly” and “Have contributors representing Y different organizations.”
-
-* Actions are what the project plans to do to complete an objective. This is where you get tactical on exactly what needs to be done. For example, the objective “Have contributors representing Y different organizations” would likely have actions such as reaching out to interested organizations using the project, having existing contributors mentor new contributors, and providing incentives for first-time contributors.
-
-* Results come along the way, showing both positive and negative progress from the actions.
-
-You can put these into a table like this:
-
-| Goals | Objectives | Actions | Results |
-|:--|:--|:--|:--|
-| Grow a diverse contributor base | Have X total contributors monthly | Have existing contributors mentor new contributors; provide incentives for first-time contributors | |
-| | Have contributors representing Y different organizations | Reach out to interested organizations using the project | |
-
-
-In large organizations, monthly or quarterly goals and objectives often make sense; however, on open source projects, these time frames are unrealistic. Six- or even 12-month tracking allows the project leadership to focus on driving efforts at a high level by nurturing the community along.
-
-The end result is a rubric that provides clear vision on where the project is going. It also lets community members more easily find ways to contribute. For example, your project may include someone who knows a few organizations using the project — this person could help introduce those developers to the codebase and guide them through their first commit.
-
-### What happens if the project doesn’t hit the goals?
-
- _“I have not failed. I’ve just found 10,000 ways that won’t work.”_ — Thomas A. Edison
-
-Figuring out what is within the capability of an organization — whether a Fortune 500 company or a small open source project — is hard. And, sometimes the expectations or market conditions change along the way. Does that make the strategy planning process a failure? Absolutely not!
-
-Instead, you can use this experience as a way to better understand your project’s velocity, its impact, and its community, and perhaps as a way to prioritize what is important and what’s not.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxfoundation.org/blog/set-open-source-strategy/
-
-作者:[ John Mertic][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxfoundation.org/author/jmertic/
-[1]:https://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal
-[2]:https://www.linuxfoundation.org/author/jmertic/
-[3]:https://www.linuxfoundation.org/category/blog/
-[4]:https://www.linuxfoundation.org/category/audience/c-level/
-[5]:https://www.linuxfoundation.org/category/audience/developer-influencers/
-[6]:https://www.linuxfoundation.org/category/audience/entrepreneurs/
-[7]:https://www.linuxfoundation.org/category/campaigns/membership/how-to/
-[8]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/linux-foundation/
-[9]:https://www.linuxfoundation.org/category/audience/open-source-developers/
-[10]:https://www.linuxfoundation.org/category/audience/open-source-professionals/
-[11]:https://www.linuxfoundation.org/category/audience/open-source-users/
-[12]:https://www.linuxfoundation.org/category/blog/thought-leadership/
diff --git a/sources/talk/20171116 Why is collaboration so difficult.md b/sources/talk/20171116 Why is collaboration so difficult.md
deleted file mode 100644
index 6567b75dca..0000000000
--- a/sources/talk/20171116 Why is collaboration so difficult.md
+++ /dev/null
@@ -1,94 +0,0 @@
-Why is collaboration so difficult?
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_block_collaboration.png?itok=pKbXpr1e)
-
-Many contemporary definitions of "collaboration" define it simply as "working together"--and, in part, it is working together. But too often, we tend to use the term "collaboration" interchangeably with cognate terms like "cooperation" and "coordination." These terms also refer to some manner of "working together," yet there are subtle but important differences between them all.
-
-How does collaboration differ from coordination or cooperation? What is so important about collaboration specifically? Does it have or do something that coordination and cooperation don't? The short answer is a resounding "yes!"
-
-[This unit explores collaboration][1], a problematic term because it has become a simple buzzword for "working together." By the time you've studied the cases and practiced the exercises contained in this section, you will understand that it's so much more than that.
-
-### Not like the others
-
-"Coordination" can be defined as the ordering of a variety of people acting in an effective, unified manner toward an end goal or state
-
-In traditional organizations and businesses, people contributed according to their role definitions, such as in manufacturing, where each employee was responsible for adding specific components to the widget on an assembly line until the widget was complete. In contexts like these, employees weren't expected to contribute beyond their pre-defined roles (they were probably discouraged from doing so), and they didn't necessarily have a voice in the work or in what was being created. Often, a manager oversaw the unification of effort (hence the role "project coordinator"). Coordination is meant to connote a sense of harmony and unity, as if elements are meant to go together, resulting in efficiency among the ordering of the elements.
-
-One common assumption is that coordinated efforts are aimed at the same, single goal. So some end result is "successful" when people and parts work together seamlessly; when one of the parts breaks down and fails, then the whole goal fails. Many traditional businesses (for instance, those with command-and-control hierarchies) manage work through coordination.
-
-Cooperation is another term whose surface meaning is "working together." Rather than the sense of compliance that is part of "coordination," it carries a sense of agreement and helpfulness on the path toward completing a shared activity or goal.
-
-"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
-
-People tend to use the term "cooperation" when joining two semi-related entities where one or more entity could decide not to cooperate. The people and pieces that are part of a cooperative effort make the shared activity easier to perform or the shared goal easier to reach. "Cooperation" implies a shared goal or activity we agree to pursue jointly. One example is how police and witnesses cooperate to solve crimes.
-
-"Collaboration" also means "working together"--but that simple definition obscures the complex and often difficult process of collaborating.
-
-Sometimes collaboration involves two or more groups that do not normally work together; they are disparate groups or not usually connected. For instance, a traitor collaborates with the enemy, or rival businesses collaborate with each other. The subtlety of collaboration is that the two groups may have oppositional initial goals but work together to create a shared goal. Collaboration can be more contentious than coordination or cooperation, but like cooperation, any one of the entities could choose not to collaborate. Despite the contention and conflict, however, there is discourse--whether in the form of multi-way discussion or one-way feedback--because without discourse, there is no way for people to express a point of dissent that is ripe for negotiation.
-
-The success of any collaboration rests on how well the collaborators negotiate their needs to create the shared objective, and then how well they cooperate and coordinate their resources to execute a plan to reach their goals.
-
-### For example
-
-One way to think about these things is through a real-life example--like the writing of [this book][1].
-
-The editor, [Bryan][2], coordinates the authors' work through the call for proposals, setting dates and deadlines, collecting the writing, and meeting editing dates and deadlines for feedback about our work. He coordinates the authors, the writing, the communications. In this example, I'm not coordinating anything except myself (still a challenge most days!).
-
-
-I cooperate with Bryan's dates and deadlines, and with the ways he has decided to coordinate the work. I propose the introduction on GitHub; I wait for approval. I comply with instructions, write some stuff, and send it to him by the deadlines. He cooperates by accepting a variety of document formats. I get his edits, incorporate them, send it back to him, and so forth. If I don't cooperate (or something comes up and I can't cooperate), then maybe someone else writes this introduction instead.
-
-Bryan and I collaborate when either one of us challenges something, including pieces of the work or process that aren't clear, things that we thought we agreed to, or things on which we have differing opinions. These intersections are ripe for negotiation and therefore indicative of collaboration. They are the opening for us to negotiate some creative work.
-
-Once the collaboration is negotiated and settled, writing and editing the book returns to cooperation/coordination; that is why collaboration relies on the other two terms of joint work.
-
-One of the most interesting parts of this example (and of work and shared activity in general) is the moment-by-moment pivot from any of these terms to the other. The writing of this book is not completely collaborative, coordinated, or cooperative. It's a messy mix of all three.
-
-### Why is collaboration important?
-
-Collaboration is an important facet of contemporary organizations--specifically those oriented toward knowledge work--because it allows for productive disagreement between actors. That kind of disagreement then helps increase the level of engagement and provide meaning to the group's work.
-
-In his book, The Age of Discontinuity: Guidelines to our Changing Society, [Peter Drucker discusses][3] the "knowledge worker" and the pivot from work based on experience (e.g. apprenticeships) to work based on knowledge and the application of knowledge. This change in work and workers, he writes:
-
-> ...will make the management of knowledge workers increasingly crucial to the performance and achievement of the knowledge society. We will have to learn to manage the knowledge worker both for productivity and for satisfaction, both for achievement and for status. We will have to learn to give the knowledge worker a job big enough to challenge him, and to permit performance as a "professional."
-
-In other words, knowledge workers aren't satisfied with being subordinate--told what to do by managers as if there is one right way to do a task. And, unlike past workers, they expect more from their work lives, including some level of emotional fulfillment or meaning-making from their work. The knowledge worker, according to Drucker, is educated toward continual learning, "paid for applying his knowledge, exercising his judgment, and taking responsible leadership." So it then follows that knowledge workers expect from work the chance to apply and share their knowledge, develop themselves professionally, and continuously augment their knowledge.
-
-It is interesting to note that Peter Drucker wrote about these concepts in 1969, nearly 50 years ago--virtually predicting the societal and organizational changes that would reveal themselves, in part, through the development of knowledge-sharing tools such as forums, bulletin boards, online communities, and cloud file sharing like Dropbox and Google Drive, as well as the creation of social media tools such as MySpace, Facebook, Twitter, YouTube, and countless others. All of these have some basis in the idea that knowledge is something to liberate and share.
-
-In this light, one might view the open organization as one successful manifestation of a system of management for knowledge workers. In other words, open organizations are a way to manage knowledge workers by meeting the needs of the organization and knowledge workers (whether employees, customers, or the public) simultaneously. The foundational values this book explores are the scaffolding for the management of knowledge, and they apply to ways we can:
-
- * make sure there's a lot of varied knowledge around (inclusivity)
- * help people come together and participate (community)
- * circulate information, knowledge, and decision making (transparency)
- * innovate and not become entrenched in old ways of thinking and being (adaptability)
- * develop a shared goal and work together to use knowledge (collaboration)
-
-
-
-Collaboration is an important process because of the participatory effect it has on knowledge work and how it aids negotiations between people and groups. As we've discovered, collaboration is more than working together with some degree of compliance; in fact, it describes a type of working together that overcomes compliance because people can disagree, question, and express their needs in a negotiation and in collaboration. And, collaboration is more than "working toward a shared goal"; collaboration is a process which defines the shared goals via negotiation and, when successful, leads to cooperation and coordination to focus activity on the negotiated outcome.
-
-Collaboration works best when the other four open organization values are present. For instance, when people are transparent, there is no guessing about what is needed, why, by whom, or when. Also, because collaboration involves negotiation, it also needs diversity (a product of inclusivity); after all, if we aren't negotiating among differing views, needs, or goals, then what are we negotiating? During a negotiation, the parties are often asked to give something up so that all may gain, so we have to be adaptable and flexible to the different outcomes that negotiation can provide. Lastly, collaboration is often an ongoing process rather than one which is quickly done and over, so it's best to enter collaboration as if you are part of the same community, desiring everyone to benefit from the negotiation. In this way, acts of authentic and purposeful collaboration directly necessitate the emergence of the other four values--transparency, inclusivity, adaptability, and community--as they assemble part of the organization's collective purpose spontaneously.
-
-### Collaboration in open organizations
-
-Traditional organizations advance an agreed-upon set of goals that people are welcome to support or not. In these organizations, there is some amount of discourse and negotiation, but often a higher-ranking or more powerful member of the organization intervenes to make a decision, which the membership must accept (and sometimes ignores). In open organizations, however, the focus is for members to perform their activity and to work out their differences; only if necessary would someone get involved (and even then they would try to do it in the most minimal way that supports the shared values of community, transparency, adaptability, collaboration, and inclusivity). This makes the collaborative processes in open organizations "messier" (or "chaotic," to use Jim Whitehurst's term) but more participatory and, hopefully, innovative.
-
-This article is part of the [Open Organization Workbook project][1].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/17/11/what-is-collaboration
-
-作者:[Heidi Hess Von Ludewig][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/heidi-hess-von-ludewig
-[1]:https://opensource.com/open-organization/17/8/workbook-project-announcement
-[2]:http://opensource.com/users/bbehrens
-[3]:https://www.elsevier.com/books/the-age-of-discontinuity/drucker/978-0-434-90395-5
diff --git a/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md b/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md
deleted file mode 100644
index d68bab55bf..0000000000
--- a/sources/talk/20171221 Changing how we use Slack solved our transparency and silo problems.md
+++ /dev/null
@@ -1,95 +0,0 @@
-Changing how we use Slack solved our transparency and silo problems
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_abstract_pieces.jpg?itok=tGR1d2MU)
-
-Collaboration and information silos are a reality in most organizations today. People tend to regard them as huge barriers to innovation and organizational efficiency. They're also a favorite target for solutions from software tool vendors of all types.
-
-Tools by themselves, however, are seldom (if ever) the answer to a problem like organizational silos. The reason for this is simple: Silos are made of people, and human dynamics are key drivers for the existence of silos in the first place.
-
-So what is the answer?
-
-Successful communities are the key to breaking down silos. Tools play an important role in the process, but if you don't build successful communities around those tools, then you'll face an uphill battle with limited chances for success. Tools enable communities; they do not build them. This takes a thoughtful approach--one that looks at culture first, process second, and tools last.
-
-However, this is a challenge because, in most businesses, this is not the way the process works. Too many companies begin their journey to fix silos by thinking about tools first and considering metrics that don't evaluate the right factors for success. Too often, people choose tools for purely cost-based, compliance-based, or effort-based reasons--instead of factoring in the needs and desires of the user base. But subjective measures like "customer/user delight" are a real factor for these internal tools, and can make or break the success of both the tool adoption and the goal of increased collaboration.
-
-It's critical to understand the best technical tool (or what the business may consider the most cost-effective) is not always the solution that drives community, transparency, and collaboration forward. There is a reason that "Shadow IT"--users choosing their own tool solution, building community and critical mass around them--exists and is so effective: People who choose their own tools are more likely to stay engaged and bring others with them, breaking down silos organically.
-
-This is a story of how Autodesk ended up adopting Slack at enterprise scale to help solve our transparency and silo problems. Interestingly, Slack wasn't (and isn't) an IT-supported application at Autodesk. It's an enterprise solution that was adopted, built, and is still run by a group of passionate volunteers who are committed to a "default to open" paradigm.
-
-Utilizing Slack makes transparency happen for us.
-
-### Chat-tastrophe
-
-First, some perspective: My job at Autodesk is running our [Open@ADSK][1] initiative. I was originally hired to drive our open source strategy, but we quickly expanded my role to include driving open source best practices for internal development (inner source), and transforming how we collaborate internally as an organization. This last piece is where we pick up our story of Slack adoption in the company.
-
-But before we even begin to talk about our journey with Slack, let's address why lack of transparency and openness was a challenge for us. What is it that makes transparency such a desirable quality in organizations, and what was I facing when I started at Autodesk?
-
-Every company says they want "better collaboration." In our case, we are a 35-year-old software company that has been immensely successful at selling desktop "shrink-wrapped" software to several industries, including architecture, engineering, construction, manufacturing, and entertainment. But no successful company rests on its laurels, and Autodesk leadership recognized that a move to Cloud-based solutions for our products was key to the future growth of the company, including opening up new markets through product combinations that required Cloud computing and deep product integrations.
-
-The challenge in making this move was far more than just technical or architectural--it was rooted in the DNA of the company, in everything from how we were organized to how we integrated our products. The basic format of integration in our desktop products was file import/export. While this is undoubtedly important, it led to a culture of highly specialized teams working in an environment that's more siloed than we'd like and not sharing information (or code). Prior to the move to a cloud-based approach, this wasn't as much of a problem--but, in an environment that requires organizations to behave more like open source projects do, transparency, openness, and collaboration go from "nice-to-have" to "business critical."
-
-Like many companies our size, Autodesk has had many different collaboration solutions through the years, some of them commercial, and many of them home-grown. However, none of them effectively solved the many-to-many real-time collaboration challenge. Some reasons for this were technical, but many of them were cultural.
-
-When someone first tasked me with trying to find a solution for this, I relied on a philosophy I'd formed through challenging experiences in my career: "Culture first, tools last." This is still a challenge for engineering folks like myself. We want to jump immediately to tools as the solution to any problem. However, it's critical to evaluate a company's ethos (culture), as well as existing processes to determine what kinds of tools might be a good fit. Unfortunately, I've seen too many cases where leaders have dictated a tool choice from above, based on the factors discussed earlier. I needed a different approach that relied more on fitting a tool into the culture we wanted to become, not the other way around.
-
-What I found at Autodesk were several small camps of people using tools like HipChat, IRC, Microsoft Lync, and others, to try to meet their needs. However, the most interesting thing I found was 85 separate instances of Slack in the company!
-
-Eureka! I'd stumbled onto a viral success (one enabled by Slack's ability to easily spin up "free" instances). I'd also landed squarely in what I like to call "silo-land."
-
-All of those instances were not talking to each other--so, effectively, we'd created isolated islands of information that, while useful to those in them, couldn't transform the way we operated as an enterprise. Essentially, our existing organizational culture was recreated in digital format in these separate Slack systems. Our organization housed a mix of these small, free instances, as well as multiple paid instances, which also meant we were not taking advantage of a common billing arrangement.
-
-My first (open source) thought was: "Hey, why aren't we using IRC, or some other open source tool, for this?" I quickly realized that didn't matter, as our open source engineers weren't the only people using Slack. People from all areas of the company--even senior leadership--were adopting Slack in droves, and, in some cases, convincing their management to pay for it!
-
-My second (engineering) thought was: "Oh, this is simple. We just collapse all 85 of those instances into a single cohesive Slack instance." What soon became obvious was that this was the easy part of the solution. Much harder was the work of cajoling, convincing, and moving people to a single, transparent instance. Building in the "guard rails" to enable a closed source tool to provide this transparency was key. These guard rails came in the form of processes, guidelines, and community norms that were the hardest part of this transformation.
-
-### The real work begins
-
-As I began to slowly help users migrate to the common instance (paying for it was also a challenge, but a topic for another day), I discovered a dedicated group of power users who were helping each other in the #adsk-slack-help channel on our new common instance of Slack. These power users were, in effect, building the roots of our transparency and community through their efforts.
-
-The open source community manager in me quickly realized these users were the path to successfully scaling Slack at Autodesk. I enlisted five of them to help me, and, together we set about fabricating the community structure for the tool's rollout.
-
-Here I should note the distinction between a community structure/governance model and traditional IT policies: With the exception of security and data privacy/legal policies, volunteer admins and user community members completely define and govern our Slack instance. One of the keys to our success with Slack (currently approximately 9,100 users and roughly 4,300 public channels) was how we engaged and involved our users in building these governance structures. Things like channel naming conventions and our growing list of frequently asked questions were organic and have continued in that same vein. Our community members feel like their voices are heard (even if some disagree), and that they have been a part of the success of our deployment of Slack.
-
-We did, however, learn an important lesson about transparency and company culture along the way.
-
-### It's not the tool
-
-When we first launched our main Slack instance, we left the ability for anyone to make a channel private turned on. After about three months of usage, we saw a clear trend: More people were creating private channels (and messages) than they were public channels (the ratio was about two to one, private versus public). Since our effort to merge 85 Slack instances was intended to increase participation and transparency, we quickly adjusted our policy and turned off this feature for regular users. We instead implemented a policy of review by the admin team, with clear criteria (finance, legal, personnel discussions among the reasons) defined for private channels.
-
-This was probably the only time in this entire process that I regretted something.
-
-We took an amazing amount of flak for this decision because we were dealing with a corporate culture that was used to working in independent units that had minimal interaction with each other. Our defining moment of clarity (and the tipping point where things started to get better) occurred in an all-hands meeting when one of our senior executives asked me to address a question about Slack. I stood up to answer the question, and said (paraphrased from memory): "It's not about the tool. I could give you all the best, gold-plated collaboration platform in existence, but we aren't going to be successful if we don't change our approach to collaboration and learn to default to open."
-
-I didn't think anything more about that statement--until that senior executive started using the phrase "default to open" in his slide decks, in his staff meetings, and with everyone he met. That one moment has defined what we have been trying to do with Slack: The tool isn't the sole reason we've been successful; it's the approach that we've taken around building a self-sustaining community that not only wants to use this tool, but craves the ability it gives them to work easily across the enterprise.
-
-### What we learned
-
-I say all the time that this could have happened with other, similar tools (HipChat, IRC, etc.), but it works in this case specifically because we chose an approach of supporting a solution that the user community adopted for their needs, not strictly what the company might have chosen had the decision come from the top of the organizational chart. We put a lot of work into making it an acceptable solution (from the perspectives of security, legal, finance, etc.) for the company, but, ultimately, our success has come from the fact that we built this rollout (and continue to run the tool) as a community, not as a traditional corporate IT system.
-
-The most important lesson I learned through all of this is that transparency and community are evolutionary, not revolutionary. You have to understand where your culture is, where you want it to go, and utilize the lever points that the community itself is adopting to make sustained and significant progress. There is a fine balance point between anarchy and a thriving community, and we've tried to model our approach on the successful practices of today's thriving open source communities.
-
-Communities are personal. Tools come and go, but keeping your community at the forefront of your push to transparency is the key to success.
-
-This article is part of the [Open Organization Workbook project][2].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/17/12/chat-platform-default-to-open
-
-作者:[Guy Martin][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/guyma
-[1]:mailto:Open@ADSK
-[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement
diff --git a/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md b/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
deleted file mode 100644
index 9e35e0ede7..0000000000
--- a/sources/talk/20180109 How Mycroft used WordPress and GitHub to improve its documentation.md
+++ /dev/null
@@ -1,116 +0,0 @@
-How Mycroft used WordPress and GitHub to improve its documentation
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
-
-Image credits: Photo by Unsplash; modified by Rikki Endsley. CC BY-SA 4.0
-
-Imagine you've just joined a new technology company, and one of the first tasks you're assigned is to improve and centralize the organization's developer-facing documentation. There's just one catch: That documentation exists in many different places, across several platforms, and differs markedly in accuracy, currency, and style.
-
-So how did we tackle this challenge?
-
-### Understanding the scope
-
-As with any project, we first needed to understand the scope and bounds of the problem we were trying to solve. What documentation was good? What was working? What wasn't? How much documentation was there? What format was it in? We needed to do a **documentation audit**. Luckily, [Aneta Šteflova][1] had recently [published an article on OpenSource.com][2] about this, and it provided excellent guidance.
-
-![mycroft doc audit][4]
-
-Mycroft documentation audit, showing source, topic, medium, currency, quality and audience
-
-Next, every piece of publicly facing documentation was assessed for the topic it covered, the medium it used, currency, and quality. A pattern quickly emerged that different platforms had major deficiencies, allowing us to take a data-driven approach to decommissioning our existing Jekyll-based sites. The audit also highlighted just how fragmented our documentation sources were--we had developer-facing documentation across no fewer than seven sites. Although search engines were finding this content just fine, the fragmentation made it difficult for developers and users of Mycroft--our primary audiences--to navigate the information they needed. Again, this data helped us make the decision to centralize our documentation onto one platform.
-
-### Choosing a central platform
-
-As an organization, we wanted to constrain the number of standalone platforms in use. Over time, maintenance and upkeep of multiple platforms and integration touchpoints becomes cumbersome for any organization, but this is exacerbated for a small startup.
-
-One of the other business drivers in platform choice was that we had two primary but very different audiences. On one hand, we had highly technical developers who we were expecting would push documentation to its limits--and who would want to contribute to technical documentation using their tools of choice--[Git][5], [GitHub][6], and [Markdown][7]. Our second audience--end users--would primarily consume technical documentation and would want to do so in an inviting, welcoming platform that was visually appealing and provided additional features such as the ability to identify reading time and to provide feedback. The ability to capture feedback was also a key requirement from our side as without feedback on the quality of the documentation, we would not have a solid basis to undertake continuous quality improvement.
-
-Would we be able to identify one platform that met all of these competing needs?
-
-We realised that two platforms covered all of our needs:
-
- * [WordPress][8]: Our existing website is built on WordPress, and we have some reasonably robust WordPress skills in-house. The flexibility of WordPress also fulfilled our requirements for functionality like reading time and the ability to capture user feedback.
- * [GitHub][9]: Almost [all of Mycroft.AI's source code is available on GitHub][10], and our development team uses this platform daily.
-
-
-
-But how could we marry the two?
-
-
-![](https://opensource.com/sites/default/files/images/life-uploads/wordpress-github-sync.png)
-
-### Integrating WordPress and GitHub with WordPress GitHub Sync
-
-Luckily, our COO, [Nate Tomasi][11], spotted a WordPress plugin that promised to integrate the two.
-
-This was put through its paces on our test website, and it passed with flying colors. It was easy to install, had a straightforward configuration, which just required an OAuth token and webhook with GitHub, and provided two-way integration between WordPress and GitHub.
-
-It did, however, have a dependency--on Markdown--which proved a little harder to implement. We trialed several Markdown plugins, but each had several quirks that interfered with the rendering of non-Markdown-based content. After several days of frustration, and even an attempt to custom-write a plugin for our needs, we stumbled across [Parsedown Party][12]. There was much partying! With WordPress GitHub Sync and Parsedown Party, we had integrated our two key platforms.
-
-Now it was time to make our content visually appealing and usable for our user audience.
-
-### Reading time and feedback
-
-To implement the reading time and feedback functionality, we built a new [page template for WordPress][13], and leveraged plugins within the page template.
-
-Knowing the estimated reading time of an article in advance has been [proven to increase engagement with content][14] and provides developers and users with the ability to decide whether to read the content now or bookmark it for later. We tested several WordPress plugins for reading time, but settled on [Reading Time WP][15] because it was highly configurable and could be easily embedded into WordPress page templates. Our decision to place Reading Time at the top of the content was designed to give the user the choice of whether to read now or save for later. With Reading Time in place, we then turned our attention to gathering user feedback and ratings for our documentation.
-
-![](https://opensource.com/sites/default/files/images/life-uploads/screenshot-from-2017-12-08-00-55-31.png)
-
-There are several rating and feedback plugins available for WordPress. We needed one that could be easily customized for several use cases, and that could aggregate or summarize ratings. After some experimentation, we settled on [Multi Rating Pro][16] because of its wide feature set, especially the ability to create a Review Ratings page in WordPress--i.e., a central page where staff can review ratings without having to be logged in to the WordPress backend. The only gap we ran into here was the ability to set the display order of rating options--but it will likely be added in a future release.
-
-The WordPress GitHub Integration plugin also gave us the ability to link back to the GitHub repository where the original Markdown content was held, inviting technical developers to contribute to improving our documentation.
-
-### Updating the existing documentation
-
-Now that the "container" for our new documentation had been developed, it was time to update the existing content. Because much of our documentation had grown organically over time, there were no style guidelines to shape how keywords and code were styled. This was tackled first, so that it could be applied to all content. [You can see our content style guidelines on GitHub.][17]
-
-As part of the update, we also ran several checks to ensure that the content was technically accurate, augmenting the existing documentation with several images for better readability.
-
-There were also a couple of additional tools that made creating internal links for documentation pieces easier. First, we installed the [WP Anchor Header][18] plugin. This plugin provided a small but important function: adding `id` attributes to each `<h1>`, `<h2>` (and so on) element. This meant that internal anchors could be automatically generated on the command line from the Markdown content in GitHub using the [markdown-toc][19] library, then simply copied in to the WordPress content, where they would automatically link to the `id` attributes generated by WP Anchor Header.
-
-Next, we imported the updated documentation into WordPress from GitHub, and made sure we had meaningful and easy-to-search-on slugs, descriptions, and keywords--because what good is excellent documentation if no one can find it?! A final activity was implementing redirects so that people hitting the old documentation would be taken to the new version.
-
-### What next?
-
-[Please do take a moment and have a read through our new documentation][20]. We know it isn't perfect--far from it--but we're confident that the mechanisms we've baked into our new documentation infrastructure will make it easier to identify gaps--and resolve them quickly. If you'd like to know more, or have suggestions for our documentation, please reach out to Kathy Reid on [Chat][21] (@kathy-mycroft) or via [email][22].
-
-_Reprinted with permission from[Mycroft.ai][23]._
-
-### About the author
-Kathy Reid - Director of Developer Relations @MycroftAI, President of @linuxaustralia. Kathy Reid has expertise in open source technology management, web development, video conferencing, digital signage, technical communities and documentation. She has worked in a number of technical and leadership roles over the last 20 years, and holds Arts and Science undergraduate degrees... more about Kathy Reid
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/1/rocking-docs-mycroft
-
-作者:[Kathy Reid][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/kathyreid
-[1]:https://opensource.com/users/aneta
-[2]:https://opensource.com/article/17/10/doc-audits
-[3]:/file/382466
-[4]:https://opensource.com/sites/default/files/images/life-uploads/mycroft-documentation-audit.png (mycroft documentation audit)
-[5]:https://git-scm.com/
-[6]:https://github.com/MycroftAI
-[7]:https://en.wikipedia.org/wiki/Markdown
-[8]:https://www.wordpress.org/
-[9]:https://github.com/
-[10]:https://github.com/mycroftai
-[11]:http://mycroft.ai/team/
-[12]:https://wordpress.org/plugins/parsedown-party/
-[13]:https://developer.wordpress.org/themes/template-files-section/page-template-files/
-[14]:https://marketingland.com/estimated-reading-times-increase-engagement-79830
-[15]:https://jasonyingling.me/reading-time-wp/
-[16]:https://multiratingpro.com/
-[17]:https://github.com/MycroftAI/docs-rewrite/blob/master/README.md
-[18]:https://wordpress.org/plugins/wp-anchor-header/
-[19]:https://github.com/jonschlinkert/markdown-toc
-[20]:https://mycroft.ai/documentation
-[21]:https://chat.mycroft.ai/
-[22]:mailto:kathy.reid@mycroft.ai
-[23]:https://mycroft.ai/blog/improving-mycrofts-documentation/
diff --git a/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md b/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md
deleted file mode 100644
index 272c1b03ae..0000000000
--- a/sources/talk/20180111 The open organization and inner sourcing movements can share knowledge.md
+++ /dev/null
@@ -1,121 +0,0 @@
-The open organization and inner sourcing movements can share knowledge
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL)
-Image by : opensource.com
-
-Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible.
-
-Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed.
-
-In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services.
-
-### About inner source
-
-Before I explain how inner source helped our team, let me offer some background on the concept.
-
-Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls.
-
-In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development. While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery.
-
-For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency.
-
-Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration.
-
-### The challenge
-
-Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized.
-
-In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools.
-
-The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS CloudFormation configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said, "It made doing something relatively straightforward really complicated."
-
-Among the issues Template Util exhibited were:
-
- * The underlying CloudFormation technology implied constraints on application stack management that were at odds with how we managed our application systems.
- * The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages making syntax issues hard to debug.
- * The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access.
- * There was no standard process for contributing or accepting changes.
- * The documentation was poor.
-
-
-
-As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed.
-
-The real, core issues here were our inability to build an inclusive community to collaboratively build shared tooling that met everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes.
-
-### Crisis point
-
-By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point.
-
-Many engineers refused to use the tools. That forced all of the related service provisioning work on a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution.
-
-During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review.
-
-While we didn't reach a final and definitive decision, this agreement was the first point where we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear return path to collaboration.
-
-When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on CloudFormation extremely useful, even when our specific Template Util tool did not find success.)
-
-This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services.
-
-So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves.
-
-### Returning to openness
-
-This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we:
-
- 1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"),
- 2. created an open issue log for the project to avoid solving the same problem repeatedly,
- 3. opened our code base so anyone in Red Hat could read or clone it, and
- 4. made it easy for engineers to get trusted committer access
-
-
-
-Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways.
-
-These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular:
-
- * We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality.
- * We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted.
- * We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them.
-
-
-
-With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing.
-
-### Why it matters
-
-Why does inner source matter?
-
-From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements:
-
- * Contributions to our tooling have grown 72% per week (by number of commits).
- * The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development.
- * The contributor list has grown by 15%, primarily from new users of the tool set, rather than core committers, increasing our internal community.
-
-
-
-And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery.
-
-What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities.
-
-This article is part of the [Open Organization Workbook project][2].
-
-### About the author
-Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,...
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it
-
-作者:[Tom Benninger][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/tomben
-[1]:https://opensource.com/open-organization/resources/open-org-maturity-model
-[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement
diff --git a/sources/talk/20180112 in which the cost of structured data is reduced.md b/sources/talk/20180112 in which the cost of structured data is reduced.md
deleted file mode 100644
index 992ad57a39..0000000000
--- a/sources/talk/20180112 in which the cost of structured data is reduced.md
+++ /dev/null
@@ -1,181 +0,0 @@
-in which the cost of structured data is reduced
-======
-Last year I got the wonderful opportunity to attend [RacketCon][1] as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.
-
-![lensmen chronicles][2]
-
-I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)
-
-The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.
-
-### GUIs and XML
-
-I have yet to see a language/framework more accessible and straightforward out of the box for drawing1. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:
-```
-(define (main path)
- (let ([frame (new frame% [label "World color"])]
- [categorizations (box '())]
- [doc (call-with-input-file path read-xml/document)])
- (new (class canvas%
- (define/override (on-char event)
- (handle-key this categorizations (send event get-key-code)))
- (super-new))
- [parent frame]
- [paint-callback (draw doc categorizations)])
- (send frame show #t)))
-
-```
-
-While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of [generic interfaces][3] in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a [`box`][4] which you use in the way you'd use a `ref` in ML or Clojure: a mutable wrapper around an immutable data structure.
-
-The world map I'm using is [an SVG of the Robinson projection][5] from Wikipedia. If you look closely there's a call to bind `doc` that calls [`call-with-input-file`][6] with [`read-xml/document`][7] which loads up the whole map file's SVG; just about as easily as you could ask for.
-
-The data you get back from `read-xml/document` is in fact a [document][8] struct, which contains an `element` struct containing `attribute` structs and lists of more `element` structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.
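-
-To make that structure concrete, here is a hedged sketch of how you might poke at those structs in a REPL, assuming `doc` is the value bound by `call-with-input-file` in `main` above (the accessor names all come from the `xml` library):
-```
-(define root (document-element doc))      ; the top-level svg element struct
-(element-name root)                       ; => 'svg
-(element-attributes root)                 ; a list of attribute structs
-(filter element? (element-content root))  ; child element structs, minus whitespace
-
-```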
-
-Here's how we handle keyboard input; we're displaying a map with one country highlighted, and `key` here tells us what the user pressed to categorize the highlighted country. If that key is in the `categories` hash then we put it into `categorizations`.
-```
-(define categories #hash((select . "eeeeff")
- (#\1 . "993322")
- (#\2 . "229911")
- (#\3 . "ABCD31")
- (#\4 . "91FF55")
- (#\5 . "2439DF")))
-
-(define (handle-key canvas categorizations key)
- (cond [(equal? #\backspace key) (swap! categorizations cdr)]
- [(member key (dict-keys categories)) (swap! categorizations (curry cons key))]
- [(equal? #\space key) (display (unbox categorizations))])
- (send canvas refresh))
-
-```
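-
-One note on the code above: `swap!` is not defined anywhere in the excerpt. A minimal sketch, assuming it simply applies a function to a box's current contents--much like `swap!` on a Clojure atom--would be:
-```
-;; Assumed helper: replace a box's contents with (f contents).
-(define (swap! b f)
-  (set-box! b (f (unbox b))))
-
-```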
-
-### Nested updates: the bad parts
-
-Finally once we have a list of categorizations, we need to apply it to the map document and display. We apply a [`fold`][9] reduction over the XML document struct and the list of country categorizations (plus `'select` for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to [`draw-pict`][10]:
-```
-
-(define (update original-doc categorizations)
- (for/fold ([doc original-doc])
- ([category (cons 'select (unbox categorizations))]
- [n (in-range (length (unbox categorizations)) 0 -1)])
- (set-style doc n (style-for category))))
-
-(define ((draw doc categorizations) _ context)
- (let* ([newdoc (update doc categorizations)]
- [xml (call-with-output-string (curry write-xml newdoc))])
- (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))
-
-```
-
-The problem is in that pesky `set-style` function. All it has to do is reach deep down into the `document` struct to find the `n`th `path` element (the one associated with a given country), and change its `'style` attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:
-```
-
-(define (set-style doc n new-style)
- (let* ([root (document-element doc)]
- [g (list-ref (element-content root) 8)]
- [paths (element-content g)]
- [path (first (drop (filter element? paths) n))]
- [path-num (list-index (curry eq? path) paths)]
- [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
- (element-attributes path))]
- [attr (list-ref (element-attributes path) style-index)]
- [new-attr (make-attribute (source-start attr)
- (source-stop attr)
- (attribute-name attr)
- new-style)]
- [new-path (make-element (source-start path)
- (source-stop path)
- (element-name path)
- (list-set (element-attributes path)
- style-index new-attr)
- (element-content path))]
- [new-g (make-element (source-start g)
- (source-stop g)
- (element-name g)
- (element-attributes g)
- (list-set paths path-num new-path))]
- [root-contents (list-set (element-content root) 8 new-g)])
- (make-document (document-prolog doc)
- (make-element (source-start root)
- (source-stop root)
- (element-name root)
- (element-attributes root)
- root-contents)
- (document-misc doc))))
-
-```
-
-The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field `x` replaced by the value of `(f (lookup x))`". Racket can [do this with dictionaries][11] but not with structs2. If you want a modified version you have to create a fresh one3.
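-
-As a point of comparison, here is a hedged sketch of the dictionary version of that pattern; the hash and the updater are invented purely for illustration:
-```
-(define attrs (hash 'style "eeeeff" 'name "France"))
-
-;; dict-update returns a new dictionary whose value for 'style is the
-;; result of applying the function to the old value; attrs is untouched.
-(dict-update attrs 'style (lambda (s) (string-append "ff" (substring s 2))))
-;; => something like '#hash((name . "France") (style . "ffeeff"))
-
-```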
-
-### Lenses to the rescue?
-
-![first lensman][12]
-
-When I brought this up in the `#racket` channel on Freenode, I was helpfully pointed to the 3rd-party [Lens][13] library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's [a flaw][14] preventing them from working with `xml` structs, so it seemed I was out of luck.
-
-But then I was pointed to [X-expressions][15] as an alternative to structs. The [`xml->xexpr`][16] function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.
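-
-As a rough illustration of that shape (this particular snippet is invented rather than taken from the real map file):
-```
-;; An xexpr is plain nested lists: the tag first, then an attribute
-;; alist, then the element's children (strings or further xexprs).
-'(g ((id "countries"))
-    (path ((d "M10 10 L20 20") (style "fill:#eeeeff"))))
-
-```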
-
-For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the `n`th country and its `style` attribute. The [`lens-compose`][17] function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way [`compose`][18] works for functions). Also note that defining one lens gives us the ability to both get nested values (with [`lens-view`][19]) and update them.
-```
-(define (style-lens n)
- (lens-compose (dict-ref-lens 'style)
- second-lens
- (list-ref-lens (add1 (* n 2)))
- (list-ref-lens 10)))
-```
-
-Our `<path>` XML elements are under the 10th item of the root xexpr (hence the [`list-ref-lens`][20] with 10), and they are interspersed with whitespace, so we have to double `n` to find the `<path>` we want. The [`second-lens`][21] call gets us to that element's attribute alist, and [`dict-ref-lens`][22] lets us zoom in on the `'style` key out of that alist.
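-
-To see both directions in action, here is a hedged usage sketch, assuming `x` is the map converted with `xml->xexpr`. Because xexpr attribute values sit inside one-element lists, the lens views and sets lists rather than bare strings--which is also why the `update` function below passes `(list (style-for category))` to `lens-set`:
-```
-(lens-view (style-lens 3) x)                   ; => e.g. '("fill:#eeeeff")
-(lens-set (style-lens 3) x '("fill:#229911"))  ; a new xexpr with that style replaced
-
-```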
-
-Once we have our lens, it's just a matter of replacing `set-style` with a call to [`lens-set`][23] in our `update` function we had above, and then we're off:
-```
-(define (update doc categorizations)
- (for/fold ([d doc])
- ([category (cons 'select (unbox categorizations))]
- [n (in-range (length (unbox categorizations)) 0 -1)])
- (lens-set (style-lens n) d (list (style-for category)))))
-```
-
-![second stage lensman][24]
-
-Often times the trade-off between freeform maps/hashes vs structured data feels like one of convenience vs long-term maintainability. While it's unfortunate that they can't be used with the `xml` structs4, lenses provide a way to get the best of both worlds, at least in some situations.
-
-The final version of the code clocks in at 51 lines and is available [on GitLab][25].
-
-๛
-
---------------------------------------------------------------------------------
-
-via: https://technomancy.us/185
-
-作者:[Phil Hagelberg][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://technomancy.us/
-[1]:https://con.racket-lang.org/
-[2]:https://technomancy.us/i/chronicles-of-lensmen.jpg
-[3]:https://docs.racket-lang.org/reference/struct-generics.html
-[4]:https://docs.racket-lang.org/reference/boxes.html?q=box#%28def._%28%28quote._~23~25kernel%29._box%29%29
-[5]:https://commons.wikimedia.org/wiki/File:BlankMap-World_gray.svg
-[6]:https://docs.racket-lang.org/reference/port-lib.html#(def._((lib._racket%2Fport..rkt)._call-with-input-string))
-[7]:https://docs.racket-lang.org/xml/index.html?q=read-xml#%28def._%28%28lib._xml%2Fmain..rkt%29._read-xml%2Fdocument%29%29
-[8]:https://docs.racket-lang.org/xml/#%28def._%28%28lib._xml%2Fmain..rkt%29._document%29%29
-[9]:https://docs.racket-lang.org/reference/for.html?q=for%2Ffold#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Ffold%29%29
-[10]:https://docs.racket-lang.org/pict/Rendering.html?q=draw-pict#%28def._%28%28lib._pict%2Fmain..rkt%29._draw-pict%29%29
-[11]:https://docs.racket-lang.org/reference/dicts.html?q=dict-update#%28def._%28%28lib._racket%2Fdict..rkt%29._dict-update%29%29
-[12]:https://technomancy.us/i/first-lensman.jpg
-[13]:https://docs.racket-lang.org/lens/lens-guide.html
-[14]:https://github.com/jackfirth/lens/issues/290
-[15]:https://docs.racket-lang.org/pollen/second-tutorial.html?q=xexpr#%28part._.X-expressions%29
-[16]:https://docs.racket-lang.org/xml/index.html?q=xexpr#%28def._%28%28lib._xml%2Fmain..rkt%29._xml-~3exexpr%29%29
-[17]:https://docs.racket-lang.org/lens/lens-reference.html#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-compose%29%29
-[18]:https://docs.racket-lang.org/reference/procedures.html#%28def._%28%28lib._racket%2Fprivate%2Flist..rkt%29._compose%29%29
-[19]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-view%29%29
-[20]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._list-ref-lens%29%29
-[21]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._second-lens%29%29
-[22]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Fdict..rkt%29._dict-ref-lens%29%29
-[23]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-set%29%29
-[24]:https://technomancy.us/i/second-stage-lensman.jpg
-[25]:https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt
diff --git a/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md b/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md
deleted file mode 100644
index 35c89150c8..0000000000
--- a/sources/talk/20180124 Security Chaos Engineering- A new paradigm for cybersecurity.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Security Chaos Engineering: A new paradigm for cybersecurity
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C)
-
-Security is always changing and failure always exists.
-
-This toxic scenario requires a fresh perspective on how we think about operational security. We must understand that we are often the primary cause of our own security flaws. The industry typically looks at cybersecurity and failure in isolation or as separate matters. We believe that our lack of insight and operational intelligence into our own security control failures is one of the most common causes of security incidents and, subsequently, data breaches.
-
-> "Fall seven times, stand up eight." --Japanese proverb
-
-The simple fact is that "to err is human," and humans derive their success as a direct result of the failures they encounter. Their rate of failure, how they fail, and their ability to understand that they failed in the first place are important building blocks to success. Our ability to learn through failure is inherent in the systems we build, the way we operate them, and the security we use to protect them. Yet there has been a lack of focus when it comes to how we approach preventative security measures, and the spotlight has trended toward the evolving attack landscape and the need to buy or build new solutions.
-
-### Security spending is continually rising and so are security incidents
-
-We spend billions on new information security technologies; however, we rarely take a proactive look at whether those security investments perform as expected. This has resulted in a continual increase in security spending on new solutions to keep up with evolving attacks.
-
-Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries. We have marched so fast down this path of the "get-ahead-of-the-attacker" strategy that we haven't considered that we may be a primary cause of our own demise. How is it that we are building more and more security measures, but the problem seems to be getting worse? Furthermore, many of the notable data breaches over the past year were not the result of advanced nation-state actors or spy-vs.-spy malicious advanced persistent threats (APTs); rather, the principal causes of those events were incomplete implementation, misconfiguration, design flaws, and lack of oversight.
-
-The 2017 Ponemon Cost of a Data Breach Study breaks down the [root causes of data breaches][1] into three areas: malicious or criminal attacks, human factors or errors, and system glitches, including both IT and business-process failure. Of the three categories, malicious or criminal attacks make up the largest share (47%), followed by human error (28%) and system glitches (25%). Cybersecurity vendors have historically focused on malicious root causes of data breaches, as they are the largest single cause, but together human error and system glitches total 53%, a larger share of the overall problem.
-
-What is not often understood, whether due to lack of insight, reporting, or analysis, is that malicious or criminal attacks are often successful due to human error and system glitches. Both human error and system glitches are, at their root, primary markers of the existence of failure. Whether it's IT system failures, failures in process, or failures resulting from humans, it raises the question: "Should we be focusing on finding a method to identify, understand, and address our failures?" After all, it can be an arduous task to predict the next malicious attack, which often requires investing time to sift threat intelligence, dig through forensic data, or churn threat feeds full of unknown factors and undetermined motives. Failure instrumentation, identification, and remediation mostly consist of things that we know, can test, and can measure.
-
-Failures we can analyze consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers. How often do we proactively instrument what we designed, built, and are operationally managing to determine if the controls are failing? Most organizations do not discover that their security controls were failing until a security incident results from that failure. The worst time to find out your security investment failed is during a security incident at 3 a.m.
-
-> Security incidents are not detective measures and hope is not a strategy when it comes to operating effective security controls.
-
-We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches. Failure in security controls can arise from poor control placement, technical misconfiguration, gaps in coverage, inadequate testing practices, human error, and numerous other things.
-
-### The journey into Security Chaos Testing
-
-Our venture into this new territory of Security Chaos Testing has shifted our thinking about the root cause of many of our notable security incidents and data breaches.
-
-We were brought together by [Bruce Wong][2], who now works at Stitch Fix with Charles, one of the authors of this article. Prior to Stitch Fix, Bruce was a founder of the Chaos Engineering and System Reliability Engineering (SRE) practices at Netflix, the company commonly credited with establishing the field. Bruce learned about this article's other author, Aaron, through the open source [ChaoSlingr][3] Security Chaos Testing tool project, on which Aaron was a contributor. Aaron was interested in Bruce's perspective on the idea of applying Chaos Engineering to cybersecurity, which led Bruce to connect us to share what we had been working on. As security practitioners, we were both intrigued by the idea of Chaos Engineering and had each begun thinking about how this new method of instrumentation might have a role in cybersecurity.
-
-Within a short timeframe, we began finishing each other's thoughts around testing and validating security capabilities, which we collectively call "Security Chaos Engineering." We directly challenged many of the concepts we had come to depend on in our careers, such as compensating security controls, defense-in-depth, and how to design preventative security. Quickly we realized that we needed to challenge the status quo "set-it-and-forget-it" model and instead execute on continuous instrumentation and validation of security capabilities.
-
-Businesses often don't fully understand whether their security capabilities and controls are operating as expected until they are not. We had both struggled throughout our careers to provide measurements on security controls that go beyond simple uptime metrics. Our journey has shown us there is a need for a more pragmatic approach that emphasizes proactive instrumentation and experimentation over blind faith.
-
-### Defining new terms
-
-In the security industry, we have a habit of not explaining terms and assuming we are speaking the same language. To correct that, here are a few key terms in this new approach:
-
- * **(Security) Chaos Experiments** are foundationally rooted in the scientific method, in that they seek not to validate what is already known to be true or already known to be false; rather, they focus on deriving new insights about the current state.
- * **Security Chaos Engineering** is the discipline of instrumentation, identification, and remediation of failure within security controls through proactive experimentation to build confidence in the system's ability to defend against malicious conditions in production.
-
-
-
-### Security and distributed systems
-
-Consider the evolving nature of modern application design, where systems are becoming more distributed, ephemeral, and immutable in how they operate. In this shifting paradigm, it is becoming difficult to comprehend the operational state and health of our systems' security. Moreover, how do we ensure that security remains effective and vigilant as the surrounding environment changes its parameters, components, and methodologies?
-
-What does it mean to be effective in terms of security controls? After all, a single security capability could easily be implemented in a wide variety of scenarios in which failure may arise from many possible sources. For example, a standard firewall technology may be implemented, placed, managed, and configured differently depending on complexities in the business, web, and data logic.
-
-It is imperative that we not operate our business products and services on the assumption that something works. We must constantly, consistently, and proactively instrument our security controls to ensure they cut the mustard when it matters. This is why Security Chaos Testing is so important. Security Chaos Engineering provides a methodology for experimenting on the security of distributed systems in order to build confidence in their ability to withstand malicious conditions.
-
-In Security Chaos Engineering:
-
- * Security capabilities must be end-to-end instrumented.
- * Security must be continuously instrumented to build confidence in the system's ability to withstand malicious conditions.
- * Readiness of a system's security defenses must be proactively assessed to ensure they are battle-ready and operating as intended.
- * The security capability toolchain must be instrumented from end to end to drive new insights into not only the effectiveness of the functionality within the toolchain but also to discover where added value and improvement can be injected.
- * Practiced instrumentation seeks to identify, detect, and remediate failures in security controls.
- * The focus is on vulnerability and failure identification, not failure management.
- * The operational effectiveness of incident management is sharpened.
-
-
-
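-One way to picture these principles in practice is as a single, repeatable experiment: introduce a known failure on purpose, watch whether the control catches it, and always put things back. The sketch below assumes `inject`, `probe`, and `rollback` are stand-ins for your own tooling (say, a script that opens an unauthorized port in a test account and a query against your alerting system); it is an illustrative placeholder, not part of ChaoSlingr or any other specific framework.
-
-```
-import time
-
-def run_experiment(inject, probe, rollback, timeout_seconds=300, poll_seconds=10):
-    """Deliberately introduce a known failure, then watch whether the control reports it."""
-    try:
-        inject()  # introduce the misconfiguration on purpose
-        deadline = time.time() + timeout_seconds
-        while time.time() < deadline:
-            if probe():  # did the expected alert show up?
-                print("Hypothesis held: the control detected the injected failure.")
-                return True
-            time.sleep(poll_seconds)
-        print("New insight: the control failed silently. Investigate before an attacker finds it.")
-        return False
-    finally:
-        rollback()  # always undo the injected change, whatever the outcome
-```
-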
-As Henry Ford said, "Failure is only the opportunity to begin again, this time more intelligently." Security Chaos Engineering and Security Chaos Testing give us that opportunity.
-
-Would you like to learn more? Join the discussion by following [@aaronrinehart][4] and [@charles_nwatu][5] on Twitter.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/1/new-paradigm-cybersecurity
-
-作者:[Aaron Rinehart][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/aaronrinehart
-[1]:https://www.ibm.com/security/data-breach
-[2]:https://twitter.com/bruce_m_wong?lang=en
-[3]:https://github.com/Optum/ChaoSlingr
-[4]:https://twitter.com/aaronrinehart
-[5]:https://twitter.com/charles_nwatu
diff --git a/sources/talk/20180131 How to write a really great resume that actually gets you hired.md b/sources/talk/20180131 How to write a really great resume that actually gets you hired.md
deleted file mode 100644
index b54b3944ae..0000000000
--- a/sources/talk/20180131 How to write a really great resume that actually gets you hired.md
+++ /dev/null
@@ -1,395 +0,0 @@
-How to write a really great resume that actually gets you hired
-============================================================
-
-
-![](https://cdn-images-1.medium.com/max/2000/1*k7HRLZAsuINP9vIs2BIh1g.png)
-
-This is a data-driven guide to writing a resume that actually gets you hired. I’ve spent the past four years analyzing which resume advice works regardless of experience, role, or industry. The tactics laid out below are the result of what I’ve learned. They helped me land offers at Google, Microsoft, and Twitter and have helped my students systematically land jobs at Amazon, Apple, Google, Microsoft, Facebook, and more.
-
-### Writing Resumes Sucks.
-
-It’s a vicious cycle.
-
-We start by sifting through dozens of articles by career “gurus,” forced to compare conflicting advice and make our own decisions on what to follow.
-
-The first article says “one page MAX” while the second says “take two or three and include all of your experience.”
-
-The next says “write a quick summary highlighting your personality and experience” while another says “summaries are a waste of space.”
-
-You scrape together your best effort and hit “Submit,” sending your resume into the ether. When you don’t hear back, you wonder what went wrong:
-
- _“Was it the single page or the lack of a summary? Honestly, who gives a s**t at this point. I’m sick of sending out 10 resumes every day and hearing nothing but crickets.”_
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*_zQqAjBhB1R4fz55InrrIw.jpeg)
-How it feels to try and get your resume read in today’s world.
-
-Writing resumes sucks but it’s not your fault.
-
-The real reason it’s so tough to write a resume is because most of the advice out there hasn’t been proven against the actual end goal of getting a job. If you don’t know what consistently works, you can’t lay out a system to get there.
-
-It’s easy to say “one page works best” when you’ve seen it happen a few times. But how does it hold up when we look at 100 resumes across different industries, experience levels, and job titles?
-
-That’s what this article aims to answer.
-
-Over the past four years, I’ve personally applied to hundreds of companies and coached hundreds of people through the job search process. This has given me a huge opportunity to measure, analyze, and test the effectiveness of different resume strategies at scale.
-
-This article is going to walk through everything I’ve learned about resumes over the past 4 years, including:
-
-* Mistakes that more than 95% of people make, causing their resumes to get tossed immediately
-
-* Three things that consistently appear in the resumes of highly effective job searchers (who go on to land jobs at the world’s best companies)
-
-* A quick hack that will help you stand out from the competition and instantly build relationships with whoever is reading your resume (increasing your chances of hearing back and getting hired)
-
-* The exact resume template that got me interviews and offers at Google, Microsoft, Twitter, Uber, and more
-
-Before we get to the unconventional strategies that will help set you apart, we need to make sure our foundational bases are covered. That starts with understanding the mistakes most job seekers make so we can make our resume bulletproof.
-
-### Resume Mistakes That 95% Of People Make
-
-Most resumes that come through an online portal or across a recruiter’s desk are tossed out because they violate a simple rule.
-
-When recruiters scan a resume, the first thing they look for is mistakes. Your resume could be fantastic, but if you violate a rule like using an unprofessional email address or improper grammar, it’s going to get tossed out.
-
-Our goal is to fully understand the triggers that cause recruiters/ATS systems to make the snap decisions on who stays and who goes.
-
-In order to get inside the heads of these decision makers, I collected data from dozens of recruiters and hiring managers across industries. These people have several hundred years of hiring experience under their belts and have reviewed 100,000+ resumes.
-
-They broke down the five most common mistakes that cause them to cut resumes from the pile:
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*5Zbr3HFeKSjvPGZdq_LCKA.png)
-
-### The Five Most Common Resume Mistakes (According To Recruiters & Hiring Managers)
-
-Issue #1: Sloppiness (typos, spelling errors, & grammatical mistakes). Close to 60% of resumes have some sort of typo or grammatical issue.
-
-Solution: Have your resume reviewed by three separate sources — spell checking software, a friend, and a professional. Spell check should be covered if you’re using Microsoft Word or Google Docs to create your resume.
-
-A friend or family member can cover the second base, but make sure you trust them with reviewing the whole thing. You can always include an obvious mistake to see if they catch it.
-
-Finally, you can hire a professional editor on [Upwork][1]. It shouldn’t take them more than 15–20 minutes to review so it’s worth paying a bit more for someone with high ratings and lots of hours logged.
-
-Issue #2: Summaries are too long and formal. Many resumes include summaries that consist of paragraphs explaining why they are a “driven, results oriented team player.” When hiring managers see a block of text at the top of the resume, you can bet they aren’t going to read the whole thing. If they do give it a shot and read something similar to the sentence above, they’re going to give up on the spot.
-
-Solution: Summaries are highly effective, but they should be in bullet form and showcase your most relevant experience for the role. For example, if I’m applying for a new business sales role my first bullet might read “Responsible for driving $11M of new business in 2018, achieved 168% attainment (#1 on my team).”
-
-Issue #3: Too many buzz words. Remember our driven team player from the last paragraph? Phrasing like that makes hiring managers cringe because your attempt to stand out actually makes you sound like everyone else.
-
-Solution: Instead of using buzzwords, write naturally, use bullets, and include quantitative results whenever possible. Would you rather hire a salesperson who “is responsible for driving new business across the healthcare vertical to help companies achieve their goals” or “drove $15M of new business last quarter, including the largest deal in company history”? Skip the buzzwords and focus on results.
-
-Issue #4: Having a resume that is more than one page. The average employer spends six seconds reviewing your resume — if it’s more than one page, it probably isn’t going to be read. When asked, recruiters from Google and Barclay’s both said multiple page resumes “are the bane of their existence.”
-
-Solution: Increase your margins, decrease your font, and cut down your experience to highlight the most relevant pieces for the role. It may seem impossible but it’s worth the effort. When you’re dealing with recruiters who see hundreds of resumes every day, you want to make their lives as easy as possible.
-
-### More Common Mistakes & Facts (Backed By Industry Research)
-
-In addition to personal feedback, I combed through dozens of recruitment survey results to fill any gaps my contacts might have missed. Here are a few more items you may want to consider when writing your resume:
-
-* The average interviewer spends 6 seconds scanning your resume
-
-* The majority of interviewers have not looked at your resume until
- you walk into the room
-
-* 76% of resumes are discarded for an unprofessional email address
-
-* Resumes with a photo have an 88% rejection rate
-
-* 58% of resumes have typos
-
-* Applicant tracking software typically eliminates 75% of resumes because the expected keywords and phrases aren't present
-
-Now that you know every mistake you need to avoid, the first item on your to-do list is to comb through your current resume and make sure it doesn’t violate anything mentioned above.
-
-Once you have a clean resume, you can start to focus on more advanced tactics that will really make you stand out. There are a few unique elements you can use to push your application over the edge and finally get your dream company to notice you.
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*KthhefFO33-8tm0kBEPbig.jpeg)
-
-### The 3 Elements Of A Resume That Will Get You Hired
-
-My analysis showed that highly effective resumes typically include three specific elements: quantitative results, a simple design, and a quirky interests section. This section breaks down all three elements and shows you how to maximize their impact.
-
-### Quantitative Results
-
-Most resumes lack them.
-
-Which is a shame because my data shows that they make the biggest difference between resumes that land interviews and resumes that end up in the trash.
-
-Here’s an example from a recent resume that was emailed to me:
-
-> Experience
-
-> + Identified gaps in policies and processes and made recommendations for solutions at the department and institution level
-
-> + Streamlined processes to increase efficiency and enhance quality
-
-> + Directly supervised three managers and indirectly managed up to 15 staff on multiple projects
-
-> + Oversaw execution of in-house advertising strategy
-
-> + Implemented comprehensive social media plan
-
-As an employer, that tells me absolutely nothing about what to expect if I hire this person.
-
-They executed an in-house marketing strategy. Did it work? How did they measure it? What was the ROI?
-
-They also identified gaps in processes and recommended solutions. What was the result? Did they save time and operating expenses? Did they streamline a process, resulting in more output?
-
-Finally, they managed a team of three supervisors and 15 staffers. How did that team do? Was it better than the other teams at the company? What results did they get and how did those improve under this person’s management?
-
-See what I’m getting at here?
-
-These types of bullets talk about daily activities, but companies don’t care about what you do every day. They care about results. By including measurable metrics and achievements in your resume, you’re showcasing the value that the employer can expect to get if they hire you.
-
-Let’s take a look at revised versions of those same bullets:
-
-> Experience
-
-> + Managed a team of 20 that consistently outperformed other departments in lead generation, deal size, and overall satisfaction (based on our culture survey)
-
-> + Executed in-house marketing strategy that resulted in a 15% increase in monthly leads along with a 5% drop in the cost per lead
-
-> + Implemented targeted social media campaign across Instagram & Pinterest, which drove an additional 50,000 monthly website visits and generated 750 qualified leads in 3 months
-
-If you were in the hiring manager’s shoes, which resume would you choose?
-
-That’s the power of including quantitative results.
-
-### Simple, Aesthetic Design That Hooks The Reader
-
-These days, it’s easy to get carried away with our mission to “stand out.” I’ve seen resume overhauls from graphic designers, video resumes, and even resumes [hidden in a box of donuts.][2]
-
-While those can work in very specific situations, we want to aim for a strategy that consistently gets results. The format I saw the most success with was a black and white Word template with sections in this order:
-
-* Summary
-
-* Interests
-
-* Experience
-
-* Education
-
-* Volunteer Work (if you have it)
-
-This template is effective because it’s familiar and easy for the reader to digest.
-
-As I mentioned earlier, hiring managers scan resumes for an average of 6 seconds. If your resume is in an unfamiliar format, those 6 seconds won’t be very comfortable for the hiring manager. Our brains prefer things we can easily recognize. You want to make sure that a hiring manager can actually catch a glimpse of who you are during their quick scan of your resume.
-
-If we’re not relying on design, this hook needs to come from the _Summary_ section at the top of your resume.
-
-This section should be done in bullets (not paragraph form) and it should contain 3–4 highlights of the most relevant experience you have for the role. For example, if I was applying for a New Business Sales position, my summary could look like this:
-
-> Summary
-
-> Drove quarterly average of $11M in new business with a quota attainment of 128% (#1 on my team)
-
-> Received award for largest sales deal of the year
-
-> Developed and trained sales team on new lead generation process that increased total leads by 17% in 3 months, resulting in 4 new deals worth $7M
-
-Those bullets speak directly to the value I can add to the company if I was hired for the role.
-
-### An “Interests” Section That’s Quirky, Unique, & Relatable
-
-This is a little "hack" you can use to instantly build personal connections and positive associations with whoever is reading your resume.
-
-Most resumes have a skills/interests section, but it’s usually parked at the bottom and offers little to no value. It’s time to change things up.
-
-[Research shows][3] that people rely on emotions, not information, to make decisions. Big brands use this principle all the time — emotional responses to advertisements are more influential on a person’s intent to buy than the content of an ad.
-
-You probably remember Apple’s famous “Get A Mac” campaign:
-
-
-When it came to specs and performance, Macs didn’t blow every single PC out of the water. But these ads solidified who was “cool” and who wasn’t, which was worth a few extra bucks to a few million people.
-
-By tugging at our need to feel “cool,” Apple’s campaign led to a [42% increase in market share][4] and a record sales year for Macbooks.
-
-Now we’re going to take that same tactic and apply it to your resume.
-
-If you can invoke an emotional response from your recruiter, you can influence the mental association they assign to you. This gives you a major competitive advantage.
-
-Let’s start with a question — what could you talk about for hours?
-
-It could be cryptocurrency, cooking, World War 2, World of Warcraft, or how Google's bet on restructuring the company under Alphabet is going to impact the technology sector over the next 5 years.
-
-Did a topic (or two) pop into your head? Great.
-
-Now think about what it would be like to have a conversation with someone who was just as passionate and knew just as much as you did on the topic. It'd be pretty awesome, right? _Finally,_ someone who gets it!
-
-That’s exactly the kind of emotional response we’re aiming to get from a hiring manager.
-
-There are five “neutral” topics out there that people enjoy talking about:
-
-1. Food/Drink
-
-2. Sports
-
-3. College
-
-4. Hobbies
-
-5. Geography (travel, where people are from, etc.)
-
-These topics are present in plenty of interest sections but we want to take them one step further.
-
-Let’s say you had the best night of your life at the Full Moon Party in Thailand. Which of the following two options would you be more excited to read:
-
-* Traveling
-
-* Ko Pha Ngan beaches (where the full moon party is held)
-
-Or, let’s say that you went to Duke (an ACC school) and still follow their basketball team. Which would you be more pumped about:
-
-* College Sports
-
-* ACC Basketball (Go Blue Devils!)
-
-In both cases, the second answer would probably invoke a larger emotional response because it is tied directly to your experience.
-
-I want you to think about your interests that fit into the five categories I mentioned above.
-
-Now I want you to write a specific favorite associated with each category in parentheses next to your original list. For example, if you wrote travel you can add (ask me about the time I was chased by an elephant in India) or (specifically meditation in a Tibetan monastery).
-
-Here is the [exact set of interests][5] I used on my resume when I interviewed at Google, Microsoft, and Twitter:
-
- _ABC Kitchen’s Atmosphere, Stumptown Coffee (primarily cold brew), Michael Lewis (Liar’s Poker), Fishing (especially fly), Foods That Are Vehicles For Hot Sauce, ACC Sports (Go Deacs!) & The New York Giants_
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*ONxtGr_xUYmz4_Xe66aeng.jpeg)
-
-If you want to cheat here, my experience shows that anything about hot sauce is an instant conversation starter.
-
-### The Proven Plug & Play Resume Template
-
-Now that we have our strategies down, it’s time to apply these tactics to a real resume. Our goal is to write something that increases your chances of hearing back from companies, enhances your relationships with hiring managers, and ultimately helps you score the job offer.
-
-The example below is the exact resume that I used to land interviews and offers at Microsoft, Google, and Twitter. I was targeting roles in Account Management and Sales, so this sample is tailored towards those positions. We’ll break down each section below:
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*B2RQ89ue2dGymRdwMY2lBA.png)
-
-First, I want you to notice how clean this is. Each section is clearly labeled and separated and flows nicely from top to bottom.
-
-My summary speaks directly to the value I’ve created in the past around company culture and its bottom line:
-
-* I consistently exceeded expectations
-
-* I started my own business in the space (and saw real results)
-
-* I’m a team player who prioritizes culture
-
-I purposefully include my Interests section right below my Summary. If my hiring manager’s six second scan focused on the summary, I know they’ll be interested. Those bullets cover all the subconscious criteria for qualification in sales. They’re going to be curious to read more in my Experience section.
-
-By sandwiching my Interests in the middle, I’m upping their visibility and increasing the chance of creating that personal connection.
-
-You never know — the person reading my resume may also be a hot sauce connoisseur and I don’t want that to be overlooked because my interests were sitting at the bottom.
-
-Next, my Experience section aims to flesh out the points made in my Summary. I mentioned exceeding my quota up top, so I included two specific initiatives that led to that attainment, including measurable results:
-
-* A partnership leveraging display advertising to drive users to a gamified experience. The campaign resulted in over 3000 acquisitions and laid the groundwork for the 2nd largest deal in company history.
-
-* A partnership with a top tier agency aimed at increasing conversions for a client by improving user experience and upgrading tracking during a company-wide website overhaul (the client has ~20 brand sites). Our efforts over 6 months resulted in a contract extension worth 316% more than their original deal.
-
-Finally, I included my education at the very bottom starting with the most relevant coursework.
-
-### Download My Resume Templates For Free
-
-You can download a copy of the resume sample above as well as a plug and play template here:
-
-Austin’s Resume: [Click To Download][6]
-
-Plug & Play Resume Template: [Click To Download][7]
-
-### Bonus Tip: An Unconventional Resume “Hack” To Help You Beat Applicant Tracking Software
-
-If you’re not already familiar, Applicant Tracking Systems are pieces of software that companies use to help “automate” the hiring process.
-
-After you hit submit on your online application, the ATS software scans your resume looking for specific keywords and phrases (if you want more details, [this article][8] does a good job of explaining ATS).
-
-If the language in your resume matches up, the software sees it as a good fit for the role and will pass it on to the recruiter. However, even if you’re highly qualified for the role but you don’t use the right wording, your resume can end up sitting in a black hole.
-
-I’m going to teach you a little hack to help improve your chances of beating the system and getting your resume in the hands of a human:
-
-Step 1: Highlight and select the entire job description page and copy it to your clipboard.
-
-Step 2: Head over to [WordClouds.com][9] and click on the “Word List” button at the top. Towards the top of the pop up box, you should see a link for Paste/Type Text. Go ahead and click that.
-
-Step 3: Now paste the entire job description into the box, then hit “Apply.”
-
-WordClouds is going to spit out an image that showcases every word in the job description. The larger words are the ones that appear most frequently (and the ones you want to make sure to include when writing your resume). Here's an example for a data science role:
-
-
-![](https://cdn-images-1.medium.com/max/1000/1*O7VO1C9nhC9LZct7vexTbA.png)
-
-You can also get a quantitative view by clicking “Word List” again after creating your cloud. That will show you the number of times each word appeared in the job description:
-
-```
-9 data
-6 models
-4 experience
-4 learning
-3 Experience
-3 develop
-3 team
-2 Qualifications
-2 statistics
-2 techniques
-2 libraries
-2 preferred
-2 research
-2 business
-```
-
-When writing your resume, your goal is to include those words in the same proportions as the job description.
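-
-If you'd rather get the same counts without leaving your terminal, a few lines of Python will produce an equivalent word list. This is just a minimal sketch: the shortened job-description string and the small stop-word set are placeholders you would swap for the real posting and tune yourself.
-
-```
-from collections import Counter
-import re
-
-# Paste the full job description between the triple quotes.
-JOB_DESCRIPTION = """
-We are looking for a data scientist with experience building models...
-"""
-
-# Small, hand-picked stop-word list; extend it as needed.
-STOP_WORDS = {"a", "an", "and", "are", "for", "in", "of", "or", "the", "to", "we", "with"}
-
-words = re.findall(r"[A-Za-z]+", JOB_DESCRIPTION.lower())
-counts = Counter(word for word in words if word not in STOP_WORDS)
-
-# Print the most frequent terms -- the ones to echo in your resume.
-for word, count in counts.most_common(15):
-    print(count, word)
-```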
-
-It’s not a guaranteed way to beat the online application process, but it will definitely help improve your chances of getting your foot in the door!
-
-* * *
-
-### Want The Inside Info On Landing A Dream Job Without Connections, Without “Experience,” & Without Applying Online?
-
-[Click here to get the 5 free strategies that my students have used to land jobs at Google, Microsoft, Amazon, and more without applying online.][10]
-
-_Originally published at [cultivatedculture.com][11]._
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-I help people land jobs they love and salaries they deserve at CultivatedCulture.com
-
-----------
-
-via: https://medium.freecodecamp.org/how-to-write-a-really-great-resume-that-actually-gets-you-hired-e18533cd8d17
-
-作者:[Austin Belcak ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://medium.freecodecamp.org/@austin.belcak
-[1]:http://www.upwork.com/
-[2]:https://www.thrillist.com/news/nation/this-guy-hides-his-resume-in-boxes-of-donuts-to-score-job-interviews
-[3]:https://www.psychologytoday.com/blog/inside-the-consumer-mind/201302/how-emotions-influence-what-we-buy
-[4]:https://www.businesswire.com/news/home/20070608005253/en/Apple-Mac-Named-Successful-Marketing-Campaign-2007
-[5]:http://cultivatedculture.com/resume-skills-section/
-[6]:https://drive.google.com/file/d/182gN6Kt1kBCo1LgMjtsGHOQW2lzATpZr/view?usp=sharing
-[7]:https://drive.google.com/open?id=0B3WIcEDrxeYYdXFPVlcyQlJIbWc
-[8]:https://www.jobscan.co/blog/8-things-you-need-to-know-about-applicant-tracking-systems/
-[9]:https://www.wordclouds.com/
-[10]:https://cultivatedculture.com/dreamjob/
-[11]:https://cultivatedculture.com/write-a-resume/
\ No newline at end of file
diff --git a/sources/talk/20180206 UQDS- A software-development process that puts quality first.md b/sources/talk/20180206 UQDS- A software-development process that puts quality first.md
deleted file mode 100644
index e9f7bb94ac..0000000000
--- a/sources/talk/20180206 UQDS- A software-development process that puts quality first.md
+++ /dev/null
@@ -1,99 +0,0 @@
-UQDS: A software-development process that puts quality first
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
-
-The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper).
-
-Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on.
-
-When Twisted was a young project, there was no clear process for when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful.
-
-UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation of stability and reliability.
-
-### UQDS's building blocks
-
-UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it put together CI, some ad-hoc tooling to make [Subversion branches][2] work, and a lot of thought into a working process. Thus the UQDS methodology was born.
-
-UQDS is based upon fundamental building blocks, each with its own carefully considered best practices:
-
- 1. Tickets
- 2. Branches
- 3. Tests
- 4. Reviews
- 5. No exceptions
-
-
-
-Let's go into each of those in a little more detail.
-
-#### Tickets
-
-In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why.
-
- * Tickets should define clear, measurable goals.
- * Work on a ticket does not begin until the ticket contains goals that are clearly defined.
-
-
-
-#### Branches
-
-Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved.
-
-Enforcing that each branch addresses no more nor less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over the commits: A single change can be reverted or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase.
-
-#### Tests
-
-UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely.
-
-Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be.
-
-A side effect of the focus on tests is that the only platforms that a UQDS-using project can say it supports are those on which the tests run with a CI framework—and where passing the test on the platform is a condition for merging a branch. Without this restriction on supported platforms, the quality of the project is not Ultimate.
-
-#### Reviews
-
-While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules:
-
- 1. Each commit must be reviewed by a different person than the author.
- 2. Start with a comment thanking the contributor for their work.
- 3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!").
- 4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices.").
- 5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review.
-
-
-
-These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step allows the contributor to have a clear idea on how to make progress.
-
-#### No exceptions
-
-In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, not for any reason whatsoever.
-
-Knowing that all code has been approved by a different person than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings a peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise.
-
-### A downside to UQDS
-
-While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost. We quickly found that the review requirements caused a slowdown and backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community priorities such that reviewing commits became one of the most important ways to contribute to the project.
-
-To help with this, the community developed a bot in the [Twisted IRC channel][3] that will reply to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It's become an important metric the community uses to gauge the health of the project.
-
-### Learn more
-
-The best way to learn about UQDS is to [join the Twisted Community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/uqds
-
-作者:[Moshe Zadka][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/moshez
-[1]:https://twistedmatrix.com/trac/
-[2]:http://structure.usc.edu/svn/svn.branchmerge.html
-[3]:http://webchat.freenode.net/?channels=%23twisted
-[4]:https://twisted.reviews
-[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity
-[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem
diff --git a/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md b/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md
deleted file mode 100644
index 7a1a837ff9..0000000000
--- a/sources/talk/20180207 Why Mainframes Aren-t Going Away Any Time Soon.md
+++ /dev/null
@@ -1,73 +0,0 @@
-Why Mainframes Aren't Going Away Any Time Soon
-======
-
-![](http://www.datacenterknowledge.com/sites/datacenterknowledge.com/files/styles/article_featured_standard/public/ibm%20z13%20mainframe%202015%20getty.jpg?itok=uB8agshi)
-
-IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said.
-
-This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT.
-
-**Related:** [One Click and Voilà, Your Entire Data Center is Encrypted][2]
-
-Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe.
-
-"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster.
-
-**Related:** [IBM Designs a “Performance Beast” for AI][3]
-
-"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes."
-
-Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management.
-
-Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group.
-
-"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was, or even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform."
-
-The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages.
-
-"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen."
-
-In addition to saving on physical data center resources, this customer would, by design, likely see better performance.
-
-"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture."
-
-The operating system knows it's running on a mainframe because when IBM was readying its mainframe for Linux, it open-sourced something like 75,000 lines of code for Linux distributions to use to make sure their OSes were ready for IBM Z.
-
-"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 other processors that nobody counts in that count of application chips, because it is taken for granted."
-
-Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that among other things housed a mainframe backed up to another mainframe located in Upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup.
-
-The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up?
-
-"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens."
-
-Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work.
-
-The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines.
-
-"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened."
-
-Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened."
-
-So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math.
-
-"The approach should be to look at it from a business, technical, and financial perspective -- not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage.
-
-"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick.
-
-"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?"
-
---------------------------------------------------------------------------------
-
-via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon
-
-作者:[Christine Hall][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.datacenterknowledge.com/archives/author/christine-hall
-[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm
-[2]:http://www.datacenterknowledge.com/design/one-click-and-voil-your-entire-data-center-encrypted
-[3]:http://www.datacenterknowledge.com/design/ibm-designs-performance-beast-ai
diff --git a/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md b/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md
deleted file mode 100644
index c3c78e84ad..0000000000
--- a/sources/talk/20180209 Arch Anywhere Is Dead, Long Live Anarchy Linux.md
+++ /dev/null
@@ -1,127 +0,0 @@
-Arch Anywhere Is Dead, Long Live Anarchy Linux
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_main.jpg?itok=fyBpTjQW)
-
-Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.
-
-Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.
-
-Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does.
-
-### The installation
-
-The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as for, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer.
-
-To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.
-
-![Updating the mirror][6]
-
-Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation.
-
-[Used with permission][7]
-
-From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard.
-
-![partitioning][9]
-
-Figure 2: Anarchy makes partitioning easy.
-
-[Used with permission][7]
-
-You will then have to select the drive to be used (if you only have one drive this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you’ll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different than the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you’ll be working with UEFI, go with GPT, otherwise, stick with the old standby, MBR. Finally select to write the changes to the disk, and your installation can continue.
-
-The next screen that could give new users pause requires the selection of the desired installation. There are five options:
-
- * Anarchy-Desktop
-
- * Anarchy-Desktop-LTS
-
- * Anarchy-Server
-
- * Anarchy-Server-LTS
-
- * Anarchy-Advanced
-
-If you want long-term support, select Anarchy-Desktop-LTS; otherwise, select Anarchy-Desktop (the default). Tab down to Ok and hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can choose from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.
-
-Once you’ve selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don’t worry: if you miss something, you can always install it later.
-
-
-![software][11]
-
-Figure 3: Selecting the software you want on your system.
-
-[Used with permission][7]
-
-Once you’ve made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.
-
-![ready to install][13]
-
-Figure 4: Everything is ready to install.
-
-[Used with permission][7]
-
-Once the installation completes, reboot and enjoy Anarchy.
-
-### Post install
-
-I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, when you compare the desktops side by side, they do a good job of resembling one another (Figure 5).
-
-![GNOME and Budgie][15]
-
-Figure 5: GNOME is on the right, Budgie is on the left.
-
-[Used with permission][7]
-
-My guess is that you’ll find all desktop options for Anarchy configured in such a way to offer a similar look and feel. Of course, the second you click on the bottom left “buttons”, you’ll see those similarities immediately disappear (Figure 6).
-
-![GNOME and Budgie][17]
-
-Figure 6: The GNOME Dash and the Budgie menu are nothing alike.
-
-[Used with permission][7]
-
-Regardless of which desktop you select, you’ll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.
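-
-If you'd rather work from a terminal, Anarchy is built on Arch, so pacman is available as well. A minimal sketch (gimp here is just an example package name):
-```
-# Sync the package databases, apply any pending updates, and install GIMP
-sudo pacman -Syu gimp
-```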
-
-### Why use Arch Linux without the “Arch”?
-
-This is a valid question, and the answer is simple but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of “elitism” that comes with using, say, [Gentoo][19], without having to go through quite as much hassle. In terms of complexity, Arch sits a step below Gentoo, which makes it accessible to more users. However, along with that complexity comes a level of dependability that may not be found in other platforms. So if you’re looking for a Linux distribution with high stability that isn’t quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you’ll wind up with an outstanding desktop platform that’s easy to work with (and maintain), based on a very highly regarded distribution of Linux.
-
-That’s why you might opt for Arch Linux without the Arch.
-
-Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I’ve ever had the privilege of using. Without a doubt, if you’re looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.
-
-Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux
-
-作者:[Jack Wallen][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/jlwallen
-[1]:https://anarchy-linux.org/
-[2]:https://www.ubuntu.com/
-[3]:https://linuxmint.com/
-[4]:https://elementary.io/
-[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror)
-[7]:https://www.linux.com/licenses/category/used-permission
-[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning)
-[10]:/files/images/anarchyinstall3jpg
-[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software)
-[12]:/files/images/anarchyinstall4jpg
-[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install)
-[14]:/files/images/anarchyinstall5jpg
-[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie)
-[16]:/files/images/anarchyinstall6jpg
-[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie)
-[18]:https://www.archlinux.org/
-[19]:https://www.gentoo.org/
-[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md b/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md
deleted file mode 100644
index 55618326c6..0000000000
--- a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md
+++ /dev/null
@@ -1,149 +0,0 @@
-How writing can change your career for the better, even if you don't identify as a writer
-======
-Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?
-
-Early in the book, Kondo talks about keeping possessions that "spark joy." In this article, I'll examine ways writing about what we and other people are doing in the open source world can "spark joy," or at least how writing can improve your career in unexpected ways.
-
-Because I'm a community manager and editor on Opensource.com, you might be thinking, "She just wants us to [write for Opensource.com][2]." And that is true. But everything I will tell you about why you should write is true, even if you never send a story in to Opensource.com. Writing can change your career for the better, even if you don't identify as a writer. Let me explain.
-
-### How I started writing
-
-Early in the first decade of my career, I transitioned from a customer service-related role at a tech publishing company into an editing role on Sys Admin Magazine. I was plugging along, happily lying low in my career, and then that all changed when I started writing about open source technologies and communities, and the people in them. But I did _not_ start writing voluntarily. The tl;dr of it is that my colleagues at Linux New Media eventually talked me into launching our first blog on the [Linux Pro Magazine][3] site. And as it turns out, it was one of the best career decisions I've ever made. I would not be working on Opensource.com today had I not started writing about what other people in open source were doing all those years ago.
-
-When I first started writing, my goal was to raise awareness of the company I worked for and our publications, while also helping raise the visibility of women in tech. But soon after I started writing, I began seeing unexpected results.
-
-#### My network started growing
-
-When I wrote about a person, an organization, or a project, I got their attention. Suddenly the people I wrote about knew who I was. And because I was sharing knowledge—that is to say, I wasn't being a critic—I'd generally become an ally, and in many cases, a friend. I had a platform and an audience, and I was sharing them with other people in open source.
-
-#### I was learning
-
-In addition to promoting our website and magazine and growing my network, the research and fact-checking I did when writing articles helped me become more knowledgeable in my field and improve my tech chops.
-
-#### I started meeting more people IRL
-
-When I went to conferences, I found that my blog posts helped me meet people. I introduced myself to people I'd written about or learned about during my research, and I met new people to interview. People started knowing who I was because they'd read my articles. Sometimes people were even excited to meet me because I'd highlighted them, their projects, or someone or something they were interested in. I had no idea writing could be so exciting and interesting away from the keyboard.
-
-#### My conference talks improved
-
-I started speaking at events about a year after launching my blog. A few years later, I started writing articles based on my talks prior to speaking at events. The process of writing the articles helped me organize my talks and slides, and it was a great way to provide "notes" for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person.
-
-### What should you write about?
-
-Maybe you're interested in writing, but you struggle with what to write about. You should write about two things: what you know, and what you don't know.
-
-#### Write about what you know
-
-Writing about what you know can be relatively easy. For example, a script you wrote to help automate part of your daily tasks might be something you don't give any thought to, but it could make for a really exciting article for someone who hates doing that same task every day. That could be a relatively quick, short, and easy article for you to write, and you might not even think about writing it. But it could be a great contribution to the open source community.
-
-#### Write about what you don't know
-
-Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and help your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it.
-
-> "When I write about a technical topic, I usually learn a lot more about it. I want to make sure my article is as good as it can be. So even if I'm writing about something I know well, I'll research the topic a bit more so I can make sure to get everything right." ~Jim Hall, FreeDOS project leader
-
-For example, I wanted to learn about machine learning, and I thought narrowing down the topic would help me get started. My team mate Jason Baker suggested that I write an article on the [Top 3 machine learning libraries for Python][4], which gave me a focus for research.
-
-The process of researching that article inspired another article, [3 cool machine learning projects using TensorFlow and the Raspberry Pi][5]. That article was also one of our most popular last year. I'm not an _expert_ on machine learning now, but researching the topic with writing an article in mind allowed me to give myself a crash course in the topic.
-
-### Why people in tech write
-
-Now let's look at a few benefits of writing that other people in tech have found. I emailed the Opensource.com writers' list and asked, and here's what writers told me.
-
-#### Grow your network or your project community
-
-Xavier Ho wrote for us for the first time last year ("[A programmer's cleaning guide for messy sensor data][6]"). He says: "I've been getting Twitter mentions from all over the world, including Spain, US, Australia, Indonesia, the UK, and other European countries. It shows the article is making some impact... This is the kind of reach I normally don't have. Hope it's really helping someone doing similar work!"
-
-#### Help people
-
-Writing about what other people are working on is a great way to help your fellow community members. Antoine Thomas, who wrote "[Linux helped me grow as a musician][7]", says, "I began to use open source years ago, by reading tutorials and documentation. That's why now I share my tips and tricks, experience or knowledge. It helped me to get started, so I feel that it's my turn to help others to get started too."
-
-#### Give back to the community
-
-[Jim Hall][8], who started the [FreeDOS project][9], says, "I like to write ... because I like to support the open source community by sharing something neat. I don't have time to be a program maintainer anymore, but I still like to do interesting stuff. So when something cool comes along, I like to write about it and share it."
-
-#### Highlight your community
-
-Emilio Velis wrote an article, "[Open hardware groups spread across the globe][10]", about projects in Central and South America. He explains, "I like writing about specific aspects of the open culture that are usually enclosed in my region (Latin America). I feel as if smaller communities and their ideas are hidden from the mainstream, so I think that creating this sense of broadness in participation is what makes some other cultures as valuable."
-
-#### Gain confidence
-
-[Don Watkins][11] is one of our regular writers and a [community moderator][12]. He says, "When I first started writing I thought I was an impostor, later I realized that many people feel that way. Writing and contributing to Opensource.com has been therapeutic, too, as it contributed to my self esteem and helped me to overcome feelings of inadequacy. … Writing has given me a renewed sense of purpose and empowered me to help others to write and/or see the valuable contributions that they too can make if they're willing to look at themselves in a different light. Writing has kept me younger and more open to new ideas."
-
-#### Get feedback
-
-One of our writers described writing as a feedback loop. He said that he started writing as a way to give back to the community, but what he found was that community responses give back to him.
-
-Another writer, [Stuart Keroff][13] says, "Writing for Opensource.com about the program I run at school gave me valuable feedback, encouragement, and support that I would not have had otherwise. Thousands upon thousands of people heard about the Asian Penguins because of the articles I wrote for the website."
-
-#### Exhibit expertise
-
-Writing can help you show that you've got expertise in a subject, and having writing samples on well-known websites can help you move toward better pay at your current job, get a new role at a different organization, or start bringing in writing income.
-
-[Jeff Macharyas][14] explains, "There are several ways I've benefitted from writing for Opensource.com. One, is the credibility I can add to my social media sites, resumes, bios, etc., just by saying 'I am a contributing writer to Opensource.com.' … I am hoping that I will be able to line up some freelance writing assignments, using my Opensource.com articles as examples, in the future."
-
-### Where should you publish your articles?
-
-That depends. Why are you writing?
-
-You can always post on your personal blog, but if you don't already have a lot of readers, your article might get lost in the noise online.
-
-Your project or company blog is a good option—again, you'll have to think about who will find it. How big is your company's reach? Or will you only get the attention of people who already give you their attention?
-
-Are you trying to reach a new audience? A bigger audience? That's where sites like Opensource.com can help. We attract more than a million page views a month, and more than 700,000 unique visitors. Plus you'll work with editors who will polish and help promote your article.
-
-We aren't the only site interested in your story. What are your favorite sites to read? They might want to help you share your story, and it's ok to pitch to multiple publications. Just be transparent about whether your article has been shared on other sites when working with editors. Occasionally, editors can even help you modify articles so that you can publish variations on multiple sites.
-
-#### Do you want to get rich by writing? (Don't count on it.)
-
-If your goal is to make money by writing, pitch your article to publications that have author budgets. There aren't many of them, the budgets don't tend to be huge, and you will be competing with experienced professional tech journalists who write seven days a week, 365 days a year, with large social media followings and networks. I'm not saying it can't be done—I've done it—but I am saying don't expect it to be easy or lucrative. It's not. (And frankly, I've found that nothing kills my desire to write much like having to write if I want to eat...)
-
-A couple of people have asked me whether Opensource.com pays for content, or whether I'm asking someone to write "for exposure." Opensource.com does not have an author budget, but I won't tell you to write "for exposure," either. You should write because it meets a need.
-
-If you already have a platform that meets your needs, and you don't need editing or social media and syndication help: Congratulations! You are privileged.
-
-### Spark joy!
-
-Most people don't know they have a story to tell, so I'm here to tell you that you probably do, and my team can help, if you just submit a proposal.
-
-Most people—myself included—could use help from other people. Sites like Opensource.com offer one way to get editing and social media services at no cost to the writer, which can be hugely valuable to someone starting out in their career, someone who isn't a native English speaker, someone who wants help with their project or organization, and so on.
-
-If you don't already write, I hope this article helps encourage you to get started. Or, maybe you already write. In that case, I hope this article makes you think about friends, colleagues, or people in your network who have great stories and experiences to share. I'd love to help you help them get started.
-
-I'll conclude with feedback I got from a recent writer, [Mario Corchero][15], a Senior Software Developer at Bloomberg. He says, "I wrote for Opensource because you told me to :)" (For the record, I "invited" him to write for our [PyCon speaker series][16] last year.) He added, "And I am extremely happy about it—not only did it help me at my workplace by gaining visibility, but I absolutely loved it! The article appeared in multiple email chains about Python and was really well received, so I am now looking to publish the second :)" Then he [wrote for us][17] again.
-
-I hope you find writing to be as fulfilling as we do.
-
-You can connect with Opensource.com editors, community moderators, and writers in our Freenode [IRC][18] channel #opensource.com, and you can reach me and the Opensource.com team by email at [open@opensource.com][19].
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/career-changing-magic-writing
-
-作者:[Rikki Endsley][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/rikki-endsley
-[1]:http://tidyingup.com/books/the-life-changing-magic-of-tidying-up-hc
-[2]:https://opensource.com/how-submit-article
-[3]:http://linuxpromagazine.com/
-[4]:https://opensource.com/article/17/2/3-top-machine-learning-libraries-python
-[5]:https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi
-[6]:https://opensource.com/article/17/9/messy-sensor-data
-[7]:https://opensource.com/life/16/9/my-linux-story-musician
-[8]:https://opensource.com/users/jim-hall
-[9]:http://www.freedos.org/
-[10]:https://opensource.com/article/17/6/open-hardware-latin-america
-[11]:https://opensource.com/users/don-watkins
-[12]:https://opensource.com/community-moderator-program
-[13]:https://opensource.com/education/15/3/asian-penguins-Linux-middle-school-club
-[14]:https://opensource.com/users/jeffmacharyas
-[15]:https://opensource.com/article/17/5/understanding-datetime-python-primer
-[16]:https://opensource.com/tags/pycon
-[17]:https://opensource.com/article/17/9/python-logging
-[18]:https://opensource.com/article/16/6/getting-started-irc
-[19]:mailto:open@opensource.com
diff --git a/sources/talk/20180209 Why an involved user community makes for better software.md b/sources/talk/20180209 Why an involved user community makes for better software.md
deleted file mode 100644
index 2b51023e44..0000000000
--- a/sources/talk/20180209 Why an involved user community makes for better software.md
+++ /dev/null
@@ -1,47 +0,0 @@
-Why an involved user community makes for better software
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_cubestalk.png?itok=Ozw4NhGW)
-
-Imagine releasing a major new infrastructure service based on open source software only to discover that the product you deployed had evolved so quickly that the documentation for the version you released is no longer available. At Bloomberg, we experienced this problem firsthand in our deployment of OpenStack. In late 2016, we spent six months testing and rolling out [Liberty][1] on our OpenStack environment. By that time, Liberty was about a year old, or two versions behind the latest build.
-
-As our users started taking advantage of its new functionality, we found ourselves unable to solve a few tricky problems and to answer some detailed questions about its API. When we went looking for Liberty's documentation, it was nowhere to be found on the OpenStack website. Liberty, it turned out, had been labeled "end of life" and was no longer supported by the OpenStack developer community.
-
-The disappearance wasn't intentional, rather the result of a development community that had not anticipated the real-world needs of users. The documentation was stored in the source branch along with the source code, and, as Liberty was superseded by newer versions, it had been deleted. Worse, in the intervening months, the documentation for the newer versions had been completely restructured, and there was no way to easily rebuild it in a useful form. And believe me, we tried.
-
-After consulting other users and our vendor, we found that OpenStack's development cadence of two releases per year had created some unintended, yet deeply frustrating, consequences. Older releases that were typically still widely in use were being superseded and effectively killed for the purposes of support.
-
-Eventually, conversations took place between OpenStack users and developers that resulted in changes. Documentation was moved out of the source branch, and users can now build documentation for whatever version they're using—more or less indefinitely. The problem was solved. (I'm especially indebted to my colleague [Chris Morgan][2], who was knee-deep in this effort and first wrote about it in detail for the [OpenStack Superuser blog][3].)
-
-Many other enterprise users were in the same boat as Bloomberg—running older versions of OpenStack that are three or four versions behind the latest build. There's a good reason for that: On average it takes a reasonably large enterprise about six months to qualify, test, and deploy a new version of OpenStack. And, from my experience, this is generally true of most open source infrastructure projects.
-
-For most of the past decade, companies like Bloomberg that adopted open source software relied on distribution vendors to incorporate, test, verify, and support much of it. These vendors provide long-term support (LTS) releases, which enable enterprise users to plan for upgrades on a two- or three-year cycle, knowing they'll still have support for a year or two, even if their deployment schedule slips a bit (as they often do). In the past few years, though, infrastructure software has advanced so rapidly that even the distribution vendors struggle to keep up. And customers of those vendors are yet another step removed, so many are choosing to deploy this type of software without vendor support.
-
-Losing vendor support also usually means there are no LTS releases; OpenStack, Kubernetes, Prometheus, and many more do not yet provide LTS releases of their own. As a result, I'd argue that healthy interaction between the development and user community should be high on the list of considerations for adoption of any open source infrastructure. Do the developers building the software pay attention to the needs—and frustrations—of the people who deploy it and make it useful for their enterprise?
-
-There is a solid model for how this should happen. We recently joined the [Cloud Native Computing Foundation][4], part of The Linux Foundation. It has a formal [end-user community][5], whose members include organizations just like us: enterprises that are trying to make open source software useful to their internal customers. Corporate members also get a chance to have their voices heard as they vote to select a representative to serve on the CNCF [Technical Oversight Committee][6]. Similarly, in the OpenStack community, Bloomberg is involved in the semi-annual Operators Meetups, where companies who deploy and support OpenStack for their own users get together to discuss their challenges and provide guidance to the OpenStack developer community.
-
-The past few years have been great for open source infrastructure. If you're working for a large enterprise, the opportunity to deploy open source projects like the ones mentioned above has made your company more productive and more agile.
-
-As large companies like ours begin to consume more open source software to meet their infrastructure needs, they're going to be looking at a long list of considerations before deciding what to use: license compatibility, out-of-pocket costs, and the health of the development community are just a few examples. As a result of our experiences, we'll add the presence of a vibrant and engaged end-user community to the list.
-
-Increased reliance on open source infrastructure projects has also highlighted a key problem: People in the development community have little experience deploying the software they work on into production environments or supporting the people who use it to get things done on a daily basis. The fast pace of updates to these projects has created some unexpected problems for the people who deploy and use them. There are numerous examples I can cite where open source projects are updated so frequently that new versions will, usually unintentionally, break backwards compatibility.
-
-As open source increasingly becomes foundational to the operation of so many enterprises, this cannot be allowed to happen, and members of the user community should assert themselves accordingly and press for the creation of formal representation. In the end, the software can only be better.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/important-conversation
-
-作者:[Kevin P.Fleming][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/kpfleming
-[1]:https://releases.openstack.org/liberty/
-[2]:https://www.linkedin.com/in/mihalis68/
-[3]:http://superuser.openstack.org/articles/openstack-at-bloomberg/
-[4]:https://www.cncf.io/
-[5]:https://www.cncf.io/people/end-user-community/
-[6]:https://www.cncf.io/people/technical-oversight-committee/
diff --git a/sources/talk/20180214 Can anonymity and accountability coexist.md b/sources/talk/20180214 Can anonymity and accountability coexist.md
deleted file mode 100644
index 8b15ed169c..0000000000
--- a/sources/talk/20180214 Can anonymity and accountability coexist.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Can anonymity and accountability coexist?
-=========================================
-
-Anonymity might be a boon to more open, meritocratic organizational cultures. But does it conflict with another important value: accountability?
-
-![Can anonymity and accountability coexist?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_Transparency_B.png?itok=SkP1mUt5 "Can anonymity and accountability coexist?")
-
-Image by: opensource.com
-
-Whistleblowing protections, crowdsourcing, anonymous voting processes, and even Glassdoor reviews—anonymous speech may take many forms in organizations.
-
-As well-established and valued as these anonymous feedback mechanisms may be, anonymous speech becomes a paradoxical idea when one considers how to construct a more open organization. While an inability to discern speaker identity seems non-transparent, an opportunity for anonymity may actually help achieve a _more inclusive and meritocratic_ environment.
-
-Before allowing outlets for anonymous speech to propagate, however, leaders of an organization should carefully reflect on whether the organization's "closed" practices make anonymity the unavoidable alternative to free, non-anonymous expression. Though some assurance of anonymity is necessary in a few sensitive and exceptional scenarios, dependence on anonymous feedback channels within an organization may stunt the normalization of a culture that encourages diversity and community.
-
-### The benefits of anonymity
-
-In the case of [_Talley v. California (1960)_](https://supreme.justia.com/cases/federal/us/362/60/case.html), the Supreme Court voided a city ordinance prohibiting the anonymous distribution of handbills, asserting that "there can be no doubt that such an identification requirement would tend to restrict freedom to distribute information and thereby freedom of expression." Our judicial system has legitimized the notion that the protection of anonymity facilitates the expression of otherwise unspoken ideas. A quick scroll through any [subreddit](https://www.reddit.com/reddits/) exemplifies what the Court has codified: anonymity can foster [risk-taking creativity](https://www.reddit.com/r/sixwordstories/) and the [inclusion and support of marginalized voices](https://www.reddit.com/r/MyLittleSupportGroup/). Anonymity empowers individuals by granting them the safety to speak without [detriment to their reputations or, more importantly, their physical selves.](https://www.psychologytoday.com/blog/the-compassion-chronicles/201711/why-dont-victims-sexual-harassment-come-forward-sooner)
-
-For example, an anonymous suggestion program to garner ideas from members or employees in an organization may strengthen inclusivity and enhance the diversity of suggestions the organization receives. It would also make for a more meritocratic decision-making process, as anonymity would ensure that the quality of the articulated idea, rather than the rank and reputation of the articulator, is what's under evaluation. Allowing members to anonymously vote for anonymously-submitted ideas would help curb the influence of office politics in decisions affecting the organization's growth.
-
-### The harmful consequences of anonymity
-
-Yet anonymity and the open value of _accountability_ may come into conflict with one another. For instance, when establishing anonymous programs to drive greater diversity and more meritocratic evaluation of ideas, organizations may need to sacrifice the ability to hold speakers accountable for the opinions they express.
-
-Reliance on anonymous speech for serious organizational decision-making may also contribute to complacency in an organizational culture that falls short of openness. Outlets for anonymous speech may be as similar to open as crowdsourcing is—or rather, is not. [Like efforts to crowdsource creative ideas](https://opensource.com/business/10/4/why-open-source-way-trumps-crowdsourcing-way), anonymous suggestion programs may create an organizational environment in which diverse perspectives are only valued when an organization's leaders find it convenient to take advantage of members' ideas.
-
-A similar concern holds for anonymous whistle-blowing or concern submission. Though anonymity is important for sexual harassment and assault reporting, regularly redirecting member concerns and frustrations to a "complaints box" makes it more difficult for members to hold their organization's leaders accountable for acting on concerns. It may also hinder intra-organizational support networks and advocacy groups from forming around shared concerns, as members would have difficulty identifying others with similar experiences. For example, many working mothers might anonymously submit requests for a lactation room in their workplace, then falsely attribute a lack of action from leaders to a lack of similar concerns from others.
-
-### An anonymity checklist
-
-Organizations in which anonymous speech is the primary mode of communication, like subreddits, have generated innovative works and thought-provoking discourse. These anonymous networks call attention to the potential for anonymity to help organizations pursue open values of diversity and meritocracy. Organizations in which anonymous speech is _not_ the main form of communication should acknowledge the strengths of anonymous speech, but carefully consider whether anonymity is the wisest means to the goal of sustainable openness.
-
-Leaders may find reflecting on the following questions useful prior to establishing outlets for anonymous feedback within their organizations:
-
-1\. _Availability of additional communication mechanisms_: Rather than investing time and resources into establishing a new, anonymous channel for communication, can the culture or structure of existing avenues of communication be reconfigured to achieve the same goal? This question echoes the open source affinity toward realigning, rather than reinventing, the wheel.
-
-2\. _Failure of other communication avenues:_ How and why is the organization ill-equipped to handle the sensitive issue/situation at hand through conventional (i.e. non-anonymous) means of communication?
-
-3\. _Consequences of anonymity:_ If implemented, could the anonymous mechanism stifle the normalization of face-to-face discourse about issues important to the organization's growth? If so, how can leaders ensure that members consider the anonymous communication channel a "last resort," without undermining the legitimacy of the anonymous system?
-
-4\. _Designing the anonymous communication channel:_ How can accountability be promoted in anonymous communication without the ability to determine the identity of speakers?
-
-5\. _Long-term considerations_: Is the anonymous feedback mechanism sustainable, or a temporary solution to a larger organizational issue? If the latter, is [launching a campaign](https://opensource.com/open-organization/16/6/8-steps-more-open-communications) to address overarching problems with the organization's communication culture feasible?
-
-These five points build off of one another to help leaders recognize the tradeoffs involved in legitimizing anonymity within their organization. Careful deliberation on these questions may help prevent outlets for anonymous speech from leading to a dangerous sense of complacency with a non-inclusive organizational structure.
-
-About the author
-----------------
-
-Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.
-
-* * *
-
-via: [https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity](https://opensource.com/open-organization/18/1/balancing-accountability-and-anonymity)
-
-作者: [Susie Choi](https://opensource.com/users/susiechoi) 选题者: [@lujun9972](https://github.com/lujun9972) 译者: [译者ID](https://github.com/译者ID) 校对: [校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md b/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md
deleted file mode 100644
index a868ed28d5..0000000000
--- a/sources/talk/20180216 Q4OS Makes Linux Easy for Everyone.md
+++ /dev/null
@@ -1,140 +0,0 @@
-Q4OS Makes Linux Easy for Everyone
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os-main.png?itok=WDatcV-a)
-
-Modern Linux distributions tend to target a variety of users. Some claim to offer a flavor of the open source platform that anyone can use. And, I’ve seen some such claims succeed with aplomb, while others fall flat. [Q4OS][1] is one of those odd distributions that doesn’t bother to make such a claim but pulls off the feat anyway.
-
-So, who is the primary market for Q4OS? According to its website, the distribution is a:
-
-“fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing.”
-
-What’s very interesting here is that the Q4OS developers offer commercial support for the desktop. Said support can cover the likes of system customization (including core level API programming) as well as user interface modifications.
-
-Once you understand this (and have installed Q4OS), the target audience becomes quite obvious: Business users looking for a Windows XP/7 replacement. But that should not prevent home users from giving Q4OS a try. It’s a Linux distribution with a few unique tools that come together to make a solid desktop distribution.
-
-Let’s take a look at Q4OS and see if it’s a version of Linux that might work for you.
-
-### What Q4OS is all about
-
-Q4OS does an admirable job of being the open source equivalent of Windows XP/7. Out of the box, it pulls this off with the help of the [Trinity Desktop][2] (a fork of KDE). With a few tricks up its sleeve, Q4OS turns the Trinity Desktop into a desktop remarkably similar to those platforms (Figure 1).
-
-![default desktop][4]
-
-Figure 1: The Q4OS default desktop.
-
-[Used with permission][5]
-
-When you fire up the desktop, you will be greeted by a Welcome screen that makes it very easy for new users to start setting up their desktop with just a few clicks. From this window, you can:
-
- * Run the Desktop Profiler (which allows you to select which desktop environment to use as well as between a full-featured desktop, a basic desktop, or a minimal desktop—Figure 2).
-
- * Install applications (which opens the Synaptic Package Manager).
-
- * Install proprietary codecs (which installs all the necessary media codecs for playing audio and video).
-
- * Turn on Desktop effects (if you want more eye candy, turn this on).
-
- * Switch to Kickoff start menu (switches from the default start menu to the newer kickoff menu).
-
- * Set Autologin (allows you to set login such that it won’t require your password upon boot).
-
-![Desktop Profiler][7]
-
-Figure 2: The Desktop Profiler allows you to further customize your desktop experience.
-
-[Used with permission][5]
-
-If you want to install a different desktop environment, open up the Desktop Profiler and then click the Desktop environments drop-down, in the upper left corner of the window. A new window will appear, where you can select your desktop of choice from the drop-down (Figure 3). Once back at the main Profiler Window, select which type of desktop profile you want, and then click Install.
-
-![Desktop Profiler][9]
-
-Figure 3: Installing a different desktop is quite simple from within the Desktop Profiler.
-
-[Used with permission][5]
-
-Note that installing a different desktop will not wipe the default desktop. Instead, it will allow you to select between the two desktops (at the login screen).
-
-### Installed software
-
-After selecting the full-featured desktop from the Desktop Profiler, I found the following user applications ready to go:
-
- * LibreOffice 5.2.7.2
-
- * VLC 2.2.7
-
- * Google Chrome 64.0.3282
-
- * Thunderbird 52.6.0 (Includes Lightning addon)
-
- * Synaptic 0.84.2
-
- * Konqueror 14.0.5
-
- * Firefox 52.6.0
-
- * Shotwell 0.24.5
-
-Obviously, some of those applications are well out of date. Since this distribution is based on Debian, we can run an update/upgrade with the commands:
-```
-sudo apt update
-sudo apt upgrade
-```
-
-However, after running both commands, it seems everything is up to date. This particular release (2.4) is an LTS release (supported until 2022). Because of this, expect software to be a bit behind. If you want to test out the bleeding edge version (based on Debian “Buster”), you can download the testing image [here][10].
-
-### Security oddity
-
-There is one rather disturbing “feature” found in Q4OS. In the developer’s quest to make the distribution closely resemble Windows, they’ve made it such that installing software (from the command line) doesn’t require a password! You read that correctly. If you open the Synaptic package manager, you’re asked for a password. However (and this is a big however), open up a terminal window and issue a command like sudo apt-get install gimp. At this point, the software will install… without requiring the user to type a sudo password.
-
-Did you cringe at that? You should.
-
-I get it, the developers want to ease away the burden of Linux and make a platform the masses could easily adapt to. They’ve done a splendid job of doing just that. However, in the process of doing so, they’ve bypassed a crucial means of security. Is having as near an XP/7 clone as you can find on Linux worth that lack of security? I would say that if it enables more people to use Linux, then yes. But the fact that they’ve required a password for Synaptic (the GUI tool most Windows users would default to for software installation) and not for the command-line tool makes no sense. On top of that, bypassing passwords for the apt and dpkg commands could make for a significant security issue.
-
-Fear not, there is a fix. If you prefer to require passwords for command-line installation of software, you can open up the file /etc/sudoers.d/30_q4os_apt and comment out the following three lines:
-```
-%sudo ALL = NOPASSWD: /usr/bin/apt-get *
-%sudo ALL = NOPASSWD: /usr/bin/apt-key *
-%sudo ALL = NOPASSWD: /usr/bin/dpkg *
-```
-
-Once those lines are commented out, save and close the file, and reboot the system. At this point, users will be prompted for a password should they run the apt-get, apt-key, or dpkg commands.
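-
-For reference, the relevant lines in /etc/sudoers.d/30_q4os_apt would then look something like this (a sketch of the edit described above; any other contents of the file are left untouched):
-```
-# %sudo ALL = NOPASSWD: /usr/bin/apt-get *
-# %sudo ALL = NOPASSWD: /usr/bin/apt-key *
-# %sudo ALL = NOPASSWD: /usr/bin/dpkg *
-```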
-
-### A worthy contender
-
-Setting aside the security curiosity, Q4OS is one of the best attempts at recreating Windows XP/7 I’ve come across in a while. If you have users who fear change, and you want to migrate them away from Windows, this distribution might be exactly what you need. I would, however, highly recommend you re-enable passwords for the apt-get, apt-key, and dpkg commands… just to be on the safe side.
-
-In any case, the addition of the Desktop Profiler, and the ability to easily install alternative desktops, makes Q4OS a distribution that just about anyone could use.
-
-Learn more about Linux through the free ["Introduction to Linux" ][11]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/2/q4os-makes-linux-easy-everyone
-
-作者:[JACK WALLEN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/jlwallen
-[1]:https://q4os.org
-[2]:https://www.trinitydesktop.org/
-[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_1.jpg?itok=dalJk9Xf (default desktop)
-[5]:/licenses/category/used-permission
-[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_2.jpg?itok=GlouIm73 (Desktop Profiler)
-[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/q4os_3.jpg?itok=riSTP_1z (Desktop Profiler)
-[10]:https://q4os.org/downloads2.html
-[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/talk/20180220 4 considerations when naming software development projects.md b/sources/talk/20180220 4 considerations when naming software development projects.md
deleted file mode 100644
index 1e1add0b68..0000000000
--- a/sources/talk/20180220 4 considerations when naming software development projects.md
+++ /dev/null
@@ -1,91 +0,0 @@
-4 considerations when naming software development projects
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb)
-
-Working on a new open source project, you're focused on the code—getting that great new idea released so you can share it with the world. And you'll want to attract new contributors, so you need a terrific **name** for your project.
-
-We've all read guides for creating names, but how do you go about choosing the right one? Keeping that cool science fiction reference you're using internally might feel fun, but it won't mean much to new users you're trying to attract. A better approach is to choose a name that's memorable to new users and developers searching for your project.
-
-Names set expectations. Your project's name should showcase its functionality in the ecosystem and explain to users what your story is. In the crowded open source software world, it's important not to get entangled with other projects out there. Taking a little extra time now, before sending out that big announcement, will pay off later.
-
-Here are four factors to keep in mind when choosing a name for your project.
-
-### What does your project's code do?
-
-Start with your project: What does it do? You know the code intimately—but can you explain what it does to a new developer? Can you explain it to a CTO or non-developer at another company? What kinds of problems does your project solve for users?
-
-Your project's name needs to reflect what it does in a way that makes sense to newcomers who want to use or contribute to your project. That means considering the ecosystem for your technology and understanding if there are any naming styles or conventions used for similar kinds of projects. Imagine that you're trying to evaluate someone else's project: Would the name be appealing to you?
-
-Any distribution channels you push to are also part of the ecosystem. If your code will be in a Linux distribution, [npm][1], [CPAN][2], [Maven][3], or in a Ruby Gem, you need to review any naming standards or common practices for that package manager. Review any similar existing names in that distribution channel, and get a feel for naming styles of other programs there.
-
-### Who are the users and developers you want to attract?
-
-The hardest aspect of choosing a new name is putting yourself in the shoes of new users. You built this project; you already know how powerful it is, so while your cool name may sound great, it might not draw in new people. You need a name that is interesting to someone new, and that tells the world what problems your project solves.
-
-Great names depend on what kind of users you want to attract. Are you building an [Eclipse][4] plugin or npm module that's focused on developers? Or an analytics toolkit that brings visualizations to the average user? Understanding your user base and the kinds of open source contributors you want to attract is critical.
-
-Take the time to think this through. Who does your project most appeal to, and how can it help them do their job? What kinds of problems does your code solve for end users? Understanding the target user helps you focus on what users need, and what kind of names or brands they respond to.
-
-When you're open source, this equation changes a bit—your target is not just users; it's also developers who will want to contribute code back to your project. You're probably a developer, too: What kinds of names and brands excite you, and what images would entice you to try out someone else's new project?
-
-Once you have a better feel of what users and potential contributors expect, use that knowledge to refine your names. Remember, you need to step outside your project and think about how the name would appeal to someone who doesn't know how amazing your code is—yet. Once someone gets to your website, does the name synchronize with what your product does? If so, move to the next step.
-
-### Who else is using similar names for software?
-
-Now that you've tried on a user's shoes to evaluate potential names, what's next? Figuring out if anyone else is already using a similar name. It sometimes feels like all the best names are taken—but if you search carefully, you'll find that's not true.
-
-The first step is to do a few web searches using your proposed name. Search for the name, plus "software", "open source", and a few keywords for the functionality that your code provides. Look through several pages of results for each search to see what's out there in the software world.
-
-Unless you're using a completely made-up word, you'll likely get a lot of hits. The trick is understanding which search results might be a problem. Again, put on the shoes of a new user to your project. If you were searching for this great new product and saw the other search results along with your project's homepage, would you confuse them? Are the other search results even software products? If your product solves a similar problem to other search results, that's a problem: Users may gravitate to an existing product instead of a new one.
-
-Similar non-software product names are rarely an issue unless they are famous trademarks—like Nike or Red Bull, for example—where the companies behind them won't look kindly on anyone using a similar name. Using the same name as a less famous non-software product might be OK, depending on how big your project gets.
-
-### How big do you plan to grow your project?
-
-Are you building a new node module or command-line utility, but not planning a career around it? Is your new project a million-dollar business idea, and you're thinking startup? Or is it something in between?
-
-If your project is a basic developer utility—something useful that developers will integrate into their workflow—then you have enough data to choose a name. Think through the ecosystem and how a new user would see your potential names, and pick one. You don't need perfection, just a name you're happy with that seems right for your project.
-
-If you're planning to build a business around your project, use these tips to develop a shortlist of names, but do more vetting before announcing the winner. Using a name for a business or major project requires some level of registered trademark search, which is usually performed by a law firm.
-
-### Common pitfalls
-
-Finally, when choosing a name, avoid these common pitfalls:
-
- * Using an esoteric acronym. If new users don't understand the name, they'll have a hard time finding you.
-
- * Using current pop-culture references. If you want your project's appeal to last, pick a name that will last.
-
- * Failing to consider non-English speakers. Does the name have a specific meaning in another language that might be confusing?
-
- * Using off-color jokes or potentially unsavory references. Even if it seems funny to developers, it may fall flat for newcomers and turn away contributors.
-
-Good luck—and remember to take the time to step out of your shoes and consider how a newcomer to your project will think of the name.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/choosing-project-names-four-key-considerations
-
-作者:[Shane Curcuru][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/shane-curcuru
-[1]:https://www.npmjs.com/
-[2]:https://www.cpan.org/
-[3]:https://maven.apache.org/
-[4]:https://www.eclipse.org/
diff --git a/sources/talk/20180221 3 warning flags of DevOps metrics.md b/sources/talk/20180221 3 warning flags of DevOps metrics.md
deleted file mode 100644
index a103a2bbca..0000000000
--- a/sources/talk/20180221 3 warning flags of DevOps metrics.md
+++ /dev/null
@@ -1,42 +0,0 @@
-3 warning flags of DevOps metrics
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D)
-
-Metrics. Measurements. Data. Monitoring. Alerting. These are all big topics for DevOps and for cloud-native infrastructure and application development more broadly. In fact, ACM Queue, a magazine published by the Association for Computing Machinery, recently devoted an [entire issue][1] to the topic.
-
-I've argued before that we conflate a lot of things under the "metrics" term, from key performance indicators to critical failure alerts to data that may be vaguely useful someday for something or other. But that's a topic for another day. What I want to discuss here is how metrics affect behavior.
-
-In 2008, Daniel Ariely published [Predictably Irrational][2] , one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: "Human beings adjust behavior based on the metrics they're held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you'll get. Period."
-
-This shouldn't be surprising. It's a finding that's been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It's certainly not news to anyone in sales management, for example. Base sales reps' (or their managers'!) bonuses solely on revenue, and they'll discount whatever it takes to maximize revenue even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the [spiffs][3]? Probably not happening.
-
-And lest you think I'm unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in [a 2010 Harvard Business Review article][4]. "CEOs care about stock value because that's how we measure them. If we want to change what they care about, we should change what we measure," writes Ariely.
-
-Think developers and operations folks are immune from such behaviors? Think again. Let's consider some problematic measurements. They're not all bad or wrong but, if you rely too much on them, warning flags should go up.
-
-### Three warning signs for DevOps metrics
-
-First, there are the quantity metrics. Lines of code or bugs fixed are perhaps self-evidently absurd. But there are also the deployments per week or per month that are so widely quoted to illustrate DevOps velocity relative to more traditional development and deployment practices. Speed is good. It's one of the reasons you're probably doing DevOps—but don't reward people on it excessively relative to quality and other measures.
-
-Second, it's obvious that you want to reward individuals who do their work quickly and well. Yes. But. Whether it's your local pro sports team or some project team you've been on, you can probably name someone who was really a talent, but was just so toxic and such a distraction for everyone else that they were a net negative for the team. Moral: Don't provide incentives that solely encourage individual behaviors. You may also want to put in place programs, such as peer rewards, that explicitly value collaboration. [As Red Hat's Jen Krieger told me][5] in a podcast last year: "Having those automated pots of awards, or some sort of system that's tracked for that, can only help teams feel a little more cooperative with one another as in, 'Hey, we're all working together to get something done.'"
-
-The third red flag area is incentives that don't actually incent because neither the individual nor the team has a meaningful ability to influence the outcome. It's often a good thing when DevOps metrics connect to business goals and outcomes. For example, customer ticket volume relates to perceived shortcomings in applications and infrastructure. And it's also a reasonable proxy for overall customer satisfaction, which certainly should be of interest to the executive suite. The best reward systems to drive DevOps behaviors should be tied to specific individual and team actions as opposed to just company success generally.
-
-You've probably noticed a common theme. That theme is balance. Velocity is good but so is quality. Individual achievement is good but not when it damages the effectiveness of the team. The overall success of the business is certainly important, but the best reward systems also tie back to actions and behaviors within development and operations.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/three-warning-flags-devops-metrics
-
-作者:[Gordon Haff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ghaff
-[1]:https://queue.acm.org/issuedetail.cfm?issue=3178368
-[2]:https://en.wikipedia.org/wiki/Predictably_Irrational
-[3]:https://en.wikipedia.org/wiki/Spiff
-[4]:https://hbr.org/2010/06/column-you-are-what-you-measure
-[5]:http://bitmason.blogspot.com/2015/09/podcast-making-devops-succeed-with-red.html
diff --git a/sources/talk/20180222 3 reasons to say -no- in DevOps.md b/sources/talk/20180222 3 reasons to say -no- in DevOps.md
deleted file mode 100644
index 5f27fbaf47..0000000000
--- a/sources/talk/20180222 3 reasons to say -no- in DevOps.md
+++ /dev/null
@@ -1,105 +0,0 @@
-3 reasons to say 'no' in DevOps
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_DesirePath.png?itok=N_zLVWlK)
-
-DevOps, it has often been pointed out, is a culture that emphasizes mutual respect, cooperation, continual improvement, and aligning responsibility with authority.
-
-Instead of saying no, it may be helpful to take a hint from improv comedy and say, "Yes, and..." or "Yes, but...". This moves the request away from the binary of "yes" and "no" and toward a nuanced discussion around priority, capacity, and responsibility.
-
-However, sometimes you have no choice but to give a hard "no." These should be rare and exceptional, but they will occur.
-
-### Protecting yourself
-
-Both Agile and DevOps have been touted as ways to improve value to the customer and business, ultimately leading to greater productivity. While reasonable people understand that the improvements will take time to pay off, and that they will result in higher-quality work and a better quality of life for those performing it, I think we can all agree that not everyone is reasonable. The less understanding a person has of the particulars of a given task, the more likely they are to expect that it is a combination of "simple" and "easy."
-
-"You told me that [Agile/DevOps] is supposed to be all about us getting more productivity. Since we're doing [Agile/DevOps] now, you can take care of my need, right?"
-
-Like "Agile," some people have tried to use "DevOps" as a stick to coerce people to do more work than they can handle. Whether the person confronting you with this question is asking in earnest or is being manipulative doesn't really matter.
-
-The biggest areas of concern for me have been **capacity**, **firefighting/maintenance**, **level of quality**, and **"future me."** Many of these ultimately tie back to capacity, but they relate to a long-term effort in different respects.
-
-#### Capacity
-
-Capacity is simple: You know what your workload is, and how much flex occurs due to the unexpected. Exceeding your capacity will not only cause undue stress, but it could decrease the quality of your work and can injure your reputation with regards to making commitments.
-
-There are several avenues of discussion that can happen from here. The simplest is "Your request is reasonable, but I don't have the capacity to work on it." This seldom ends the conversation, and a discussion will often run up the flagpole to clarify priorities or reassign work.
-
-#### Firefighting/maintenance
-
-It's possible that the thing that you're being asked for won't take long to do, but it will require maintenance that you'll be expected to perform, including keeping it alive and fulfilling requests for it on behalf of others.
-
-An example in my mind is the Jenkins server that you're asked to stand up for someone else, but somehow end up being the sole owner and caretaker of. Even if you're careful to scope your level of involvement early on, you might be saddled with responsibility that you did not agree to. Should the service become unavailable, for example, you might be the one who is called. You might be called on to help triage a build that is failing. This is additional firefighting and maintenance work that you did not sign up for and now must fend off.
-
-This needs to be addressed as soon and publicly as possible. I'm not saying that (again, for example) standing up a Jenkins instance is a "no," but rather a ["Yes, but"][1]—where all parties understand that they take on the long-term care, feeding, and use of the product. Make sure to include all your bosses in this conversation so they can have your back.
-
-#### Level of quality
-
-There may be times when you are presented with requirements that include a timeframe that is...problematic. Perhaps you could get a "minimum (cough) viable (cough) product" out in that time. But it wouldn't be resilient or in any way ready for production. It might impact your time and productivity. It could end up hurting your reputation.
-
-The resulting conversation can get into the weeds, with lots of horse-trading about time and features. Another approach is to ask "What is driving this deadline? Where did that timeframe come from?" Discussing the bigger picture might lead to a better option, or reveal that the timeline doesn't actually depend on the original date.
-
-#### Future me
-
-Ultimately, we are trying to protect "future you." These are lessons learned from the many times that "past me" has knowingly left "current me" to clean up. Sometimes we joke that "that's a problem for 'future me,'" but don't forget that 'future you' will just be 'you' eventually. I've cursed "past me" as a jerk many times. Do your best to keep other people from making "past you" be a jerk to "future you."
-
-I recognize that I have a significant amount of privilege in this area, but if you are told that you cannot say "no" on behalf of your own welfare, you should consider whether you are respected enough to maintain your autonomy.
-
-### Protecting the user experience
-
-Everyone should be an advocate for the user. Regardless of whether that user is right next to you, someone down the hall, or someone you have never met and likely never will, you must care for the customer.
-
-Behavior that is actively hostile to the user—whether it's a poor user experience or something more insidious like quietly violating reasonable expectations of privacy—deserves a "no." A common example of this would be automatically including people into a service or feature, forcing them to explicitly opt-out.
-
-If a "no" is not welcome, it bears considering, or explicitly asking, what the company's relationship with its customers is, who the company thinks of as it's customers, and what it thinks of them.
-
-When bringing up your objections, be clear about what they are. Additionally, remember that your coworkers are people too, and make it clear that you are not attacking their character; you simply find the idea disagreeable.
-
-### Legal, ethical, and moral grounds
-
-There might be situations that don't feel right. A simple test is to ask: "If this were to become public, or come up in a lawsuit deposition, would it be a scandal?"
-
-#### Ethics and morals
-
-If you are asked to lie, that should be a hard no.
-
-Remember the Volkswagen emissions scandal, which came to light in 2015? The emissions-system software was written such that it recognized when the vehicle was being operated in a manner consistent with an emissions test, and it would then run in a cleaner, lower-emissions mode than it did under normal driving conditions.
-
-I don't know what you do in your job, or what your office is like, but I have a hard time imagining the Individual Contributor software engineer coming up with that as a solution on their own. In fact, I imagine a comment along the lines of "the engine engineers can't make their product pass the tests, so I need to hack the performance so that it will!"
-
-When the Volkswagen scandal became public, Volkswagen officials blamed the engineers. I find it unlikely that the scheme came from the mind and IDE of an individual software engineer. Rather, it more likely indicates significant systemic problems within the company culture.
-
-If you are asked to lie, get the request in writing, citing that the circumstances are suspect. If you are so privileged, decide whether you may decline the request on the basis that it is fundamentally dishonest and hostile to the customer, and would break the public's trust.
-
-#### Legal
-
-I am not a lawyer. If your work should involve legal matters, including requests from law enforcement, involve your company's legal counsel or speak with a private lawyer.
-
-With that said, if you are asked to provide information for law enforcement, I believe that you are within your rights to see the documentation that justifies the request. There should be a signed warrant. You should be provided with a copy of it, or make a copy of it yourself.
-
-When in doubt, begin recording and request legal counsel.
-
-It has been well documented that especially in the early years of the U.S. Patriot Act, law enforcement placed so many requests of telecoms that they became standard work, and the paperwork started slipping. While tedious and potentially stressful, make sure that the legal requirements for disclosure are met.
-
-If for no other reason, we would not want the good work of law enforcement to be put at risk because key evidence was improperly acquired, making it inadmissible.
-
-### Wrapping up
-
-You are going to be your single biggest advocate. There may be times when you are asked to compromise for the greater good. However, you should feel that your dignity is preserved, your autonomy is respected, and that your morals remain intact.
-
-If you don't feel that this is the case, get it on record, doing your best to communicate it calmly and clearly.
-
-Nobody likes being declined, but if you don't have the ability to say no, there may be a bigger problem than your environment not being DevOps.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/3-reasons-say-no-devops
-
-作者:[H. "Waldo" Grunenwal][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/gwaldo
-[1]:http://gwaldo.blogspot.com/2015/12/fear-and-loathing-in-systems.html
diff --git a/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md b/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md
deleted file mode 100644
index 583714836e..0000000000
--- a/sources/talk/20180223 Plasma Mobile Could Give Life to a Mobile Linux Experience.md
+++ /dev/null
@@ -1,123 +0,0 @@
-Plasma Mobile Could Give Life to a Mobile Linux Experience
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/plasma-mobile_0.png?itok=uUIQFRcm)
-
-In the past few years, it’s become clear that, outside of powering Android, Linux on mobile devices has been a resounding failure. Canonical came close, even releasing devices running Ubuntu Touch. Unfortunately, the idea of [Scopes][1] was doomed before it touched down on its first piece of hardware and subsequently died a silent death.
-
-The next best hope for mobile Linux comes in the form of the [Samsung DeX][2] program. With DeX, users will be able to install an app (Linux On Galaxy—not available yet) on their Samsung devices, which would in turn allow them to run a full-blown Linux distribution. The caveat here is that you’ll be running both Android and Linux at the same time—which is not exactly an efficient use of resources. On top of that, most Linux distributions aren’t designed to run on such small form factors. The good news for DeX is that, when you run Linux on Galaxy and dock your Samsung device to DeX, that Linux OS will be running on your connected monitor—so form factor issues need not apply.
-
-Outside of those two options, a pure Linux on mobile experience doesn’t exist. Or does it?
-
-You may have heard of the [Purism Librem 5][3]. It’s a crowdfunded device that promises to finally bring a pure Linux experience to the mobile landscape. This device will be powered by an i.MX8 SoC, so it should run most any Linux operating system.
-
-Out of the box, the device will run an encrypted version of [PureOS][4]. However, last year Purism and KDE joined together to create a mobile version of the KDE desktop that could run on the Librem 5. Recently [ISOs were made available for a beta version of Plasma Mobile][5] and, judging from first glance, they’re onto something that makes perfect sense for a mobile Linux platform. I’ve booted up a live instance of Plasma Mobile to kick the tires a bit.
-
-What I saw seriously impressed me. Let’s take a look.
-
-### Testing platform
-
-Before you download the ISO and attempt to fire it up as a VirtualBox VM, you should know that it won’t work well. Because Plasma Mobile uses Wayland (and VirtualBox has yet to play well with that particular X replacement), you’ll find a VirtualBox VM a less-than-ideal platform for the beta release. Also know that the Calamares installer doesn’t function well either. In fact, I have yet to get the OS installed on a non-mobile device. And since I don’t own a supported mobile device, I’ve had to run it as a live session on either a laptop or an [Antsle][6] antlet VM every time.
-
-### What makes Plasma Mobile special?
-
-This can be summed up easily by saying that Plasma Mobile got it all right. Instead of Canonical re-inventing a perfectly functioning wheel, the developers of KDE simply re-tooled the interface such that a fully functioning Linux distribution (complete with all the apps you’ve grown to love and depend upon) could work on a smaller platform. And they did a spectacular job. Even better, they’ve created an interface that any user of a mobile device could instantly feel familiar with.
-
-What you have with the Plasma Mobile interface (Figure 1) are the elements common to most Android home screens:
-
- * Quick Launchers
-
- * Notification Shade
-
- * App Drawer
-
- * Overview button (so you can go back to a previously used app, still running in memory)
-
- * Home button
-
-
-
-
-![KDE mobile][8]
-
-Figure 1: The Plasma Mobile desktop interface.
-
-[Used with permission][9]
-
-Because KDE went this route with the UX, it means there’s zero learning curve. And because this is an actual Linux platform, it takes that user-friendly mobile interface and overlays it onto a system that allows for easy installation and usage of apps like:
-
- * GIMP
-
- * LibreOffice
-
- * Audacity
-
- * Clementine
-
- * Dropbox
-
- * And so much more
-
-
-
-
-Unfortunately, without being able to install Plasma Mobile, you cannot really kick the tires too much, as the live user doesn’t have permission to install applications. However, once Plasma Mobile is fully installed, the Discover software center will allow you to install a host of applications (Figure 2).
-
-
-![Discover center][11]
-
-Figure 2: The Discover software center on Plasma Mobile.
-
-[Used with permission][9]
-
-Swipe up (or scroll down—depending on what hardware you’re using) to reveal the app drawer, where you can launch all of your installed applications (Figure 3).
-
-![KDE mobile][13]
-
-Figure 3: The Plasma Mobile app drawer ready to launch applications.
-
-[Used with permission][9]
-
-Open up a terminal window and you can take care of standard Linux admin tasks, such as using SSH to log into a remote server. Using apt, you can install all of the developer tools you need to make Plasma Mobile a powerful development platform.
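-
-If you want a concrete picture of what that looks like, here is a minimal, hypothetical terminal session; the remote host and the package names below are illustrative assumptions, not commands taken from the Plasma Mobile documentation:
-
-```
-# log into a remote server over SSH (replace the host with your own)
-ssh user@example.com
-
-# refresh the package lists, then pull in a basic set of developer tools with apt
-sudo apt update
-sudo apt install -y build-essential git
-```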
-
-We’re talking serious mobile power—either from a phone or a tablet.
-
-### A ways to go
-
-Clearly Plasma Mobile is still way too early in development for it to be of any use to the average user. And because most virtual machine technology doesn’t play well with Wayland, you’re likely to get too frustrated with the current ISO image to thoroughly try it out. However, even without being able to fully install the platform (or get full usage out of it), it’s obvious KDE and Purism are going to have the ideal platform that will put Linux into the hands of mobile users.
-
-If you want to test the waters of Plasma Mobile on an actual mobile device, a handy list of supported hardware can be found [here][14] (for PostmarketOS) or [here][15] (for Halium). If you happen to be lucky enough to have a device that also includes Wi-Fi support, you’ll find you get more out of testing the environment.
-
-If you do have a supported device, you’ll need to use either [PostmarketOS][16] (a touch-optimized, pre-configured Alpine Linux that can be installed on smartphones and other mobile devices) or [Halium][15] (an application that creates a minimal Android layer that allows a new interface to interact with the Android kernel). Using Halium further limits the number of supported devices, as it has only been built for select hardware. However, if you’re willing, you can build your own Halium images (documentation for this process is found [here][17]). If you want to give PostmarketOS a go, [here are the necessary build instructions][18].
-
-Suffice it to say, Plasma Mobile isn’t nearly ready for mass market. If you’re a Linux enthusiast and want to give it a go, let either PostmarketOS or Halium help you get the operating system up and running on your device. Otherwise, your best bet is to wait it out and hope Purism and KDE succeed in bringing this outstanding mobile take on Linux to the masses.
-
-Learn more about Linux through the free ["Introduction to Linux"][19] course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/2/plasma-mobile-could-give-life-mobile-linux-experience
-
-作者:[JACK WALLEN][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/jlwallen
-[1]:https://launchpad.net/unity-scopes
-[2]:http://www.samsung.com/global/galaxy/apps/samsung-dex/
-[3]:https://puri.sm/shop/librem-5/
-[4]:https://www.pureos.net/
-[5]:http://blog.bshah.in/2018/01/26/trying-out-plasma-mobile/
-[6]:https://antsle.com/
-[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_1.jpg?itok=EK3_vFVP (KDE mobile)
-[9]:https://www.linux.com/licenses/category/used-permission
-[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_2.jpg?itok=CiUQ-MnB (Discover center)
-[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kdemobile_3.jpg?itok=i6V8fgK8 (KDE mobile)
-[14]:http://blog.bshah.in/2018/02/02/trying-out-plasma-mobile-part-two/
-[15]:https://github.com/halium/projectmanagement/issues?q=is%3Aissue+is%3Aopen+label%3APorts
-[16]:https://postmarketos.org/
-[17]:http://docs.halium.org/en/latest/
-[18]:https://wiki.postmarketos.org/wiki/Installation_guide
-[19]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md b/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md
deleted file mode 100644
index 8fe1b6f273..0000000000
--- a/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md
+++ /dev/null
@@ -1,91 +0,0 @@
-Why culture is the most important issue in a DevOps transformation
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY)
-
-You've been appointed the DevOps champion in your organisation: congratulations. So, what's the most important issue that you need to address?
-
-It's the technology—tools and the toolchain—right? Everybody knows that unless you get the right tools for the job, you're never going to make things work. You need integration with your existing stack (though whether you go with tight or loose integration will be an interesting question), a support plan (vendor, third party, or internal), and a bug-tracking system to go with your source code management system. And that's just the start.
-
-No! Don't be ridiculous: It's clearly the process that's most important. If the team doesn't agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you'll never be able to institute a consistent, repeatable working pattern.
-
-In fact, although both the technology and the process are important, there's a third component that is equally important, but typically even harder to get right: culture. Yup, it's that touchy-feely thing we techies tend to struggle with.1
-
-### Culture
-
-I was visiting a midsized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO. We were ushered into the CEO's office and waited for a while as the two of them finished participating in the daily stand-up. They apologised for being a minute or two late, but far from being offended, I was impressed. Here was an organisation where the culture of participation was clearly infused all the way up to the top.
-
-Not that culture can be imposed from the top—nor can you rely on it percolating up from the bottom3—but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it. If you can get management to buy into the process—and be seen buying in—you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and getting away with it.
-
-So let's assume management believes you should give DevOps a go. Where do you start?
-
-Developers may well be your easiest target group. They are often keen to try new things and find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies. DevOps arguably has been driven mainly by the development community.
-
-But you shouldn't assume all developers will be keen to embrace this change. For some, the way things have always been done—your Rick Parfitts of dev, if you will7—is fine. Finding ways to help them work efficiently in the new world is part of your job, not just theirs. If you have superstar developers who aren't happy with change, you risk alienating and losing them if you try to force them into your brave new world. What's worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren't going to change if it makes their lives more difficult and reduces their productivity.
-
-Maybe you're not going to be able to move all the systems and people to DevOps immediately. Maybe you're going to need to choose which apps to start with and who will be your first DevOps champions. Maybe it's time to move slowly.
-
-### Not maybe: definitely
-
-No—I lied. You're definitely going to need to move slowly. Trying to change everything at once is a recipe for disaster.
-
-This goes for all elements of the change—which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose—bar one. For those elements, if you try to move everything in one go, you will fail. You'll fail for a number of reasons. You'll fail for reasons I can't imagine and, more importantly, for reasons you can't imagine. But some of the reasons will include:
-
- * People—most people—don't like change.
- * Technologies don't like change (you can't just switch and expect everything to still work).
- * Applications don't like change (things worked before, or at least failed in known ways). You want to change everything in one go? Well, they'll all fail in new and exciting9 ways.
- * Users don't like change.
- * Use cases don't like change.
-
-
-
-### The one exception
-
-You noticed I wrote "bar one" when discussing which elements you shouldn't choose to change all in one go? Well done.
-
-What's that exception? It's the initial team. When you choose your initial application to change and you're thinking about choosing the team to make that change, select the members carefully and select a complete set. This is important. If you choose just developers, just test folks, just security folks, just ops folks, or just management—if you leave out one functional group from your list—you won't have proved anything at all. Well, you might have proved to a small section of your community that it kind of works, but you'll have missed out on a trick. And that trick is: If you choose keen people from across your functional groups, it's much harder to fail.
-
-Say your first attempt goes brilliantly. How are you going to convince other people to replicate your success and adopt DevOps? Well, the company newsletter, of course. And that will convince how many people, exactly? Yes, that number.12 If, on the other hand, you have team members from across the functional parts of the organisation, when you succeed, they'll tell their colleagues and you'll get more buy-in next time.
-
-If it fails, if you've chosen your team wisely—if they're all enthusiastic and know that "fail often, fail fast" is good—they'll be ready to go again.
-
-Therefore, you need to choose enthusiasts from across your functional groups. They can work on the technologies and the process, and once that's working, it's the people who will create that cultural change. You can just sit back and enjoy. Until the next crisis, of course.
-
-1\. OK, you're right. It should be "with which we techies tend to struggle."2
-
-2\. You thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn't you? Read it again: I put "tend to." That's the best you're getting.
-
-3\. Is percolating a bottom-up process? I don't drink coffee,4 so I wouldn't know.
-
-4\. Do people even use percolators to make coffee anymore? Feel free to let me know in the comments. I may pretend interest if you're lucky.
-
-5\. For U.S. readers (and some other countries, maybe?), please substitute "check" for "tick" here.6
-
-6\. For U.S. techie readers, feel free to perform `s/tick/check/;`.
-
-7\. This is a Status Quo8 reference for which I'm extremely sorry.
-
-8\. For millennial readers, please consult your favourite online reference engine or just roll your eyes and move on.
-
-9\. For people who say, "but I love excitement," try being on call at 2 a.m. on a Sunday at the end of the quarter when your chief financial officer calls you up to ask why all of last month's sales figures have been corrupted with the letters "DEADBEEF."10
-
-10\. For people not in the know, this is a string often used by techies as test data because a) it's non-numerical; b) it's numerical (in hexadecimal); c) it's easy to search for in debug files; and d) it's funny.11
-
-11\. Though see.9
-
-12\. It's a low number, is all I'm saying.
-
-This article originally appeared on [Alice, Eve, and Bob – a security blog][1] and is republished with permission.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/most-important-issue-devops-transformation
-
-作者:[Mike Bursell][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/mikecamel
-[1]:https://aliceevebob.com/2018/02/06/moving-to-devops-whats-most-important/
diff --git a/sources/talk/20180301 How to hire the right DevOps talent.md b/sources/talk/20180301 How to hire the right DevOps talent.md
deleted file mode 100644
index bcf9bb3d20..0000000000
--- a/sources/talk/20180301 How to hire the right DevOps talent.md
+++ /dev/null
@@ -1,48 +0,0 @@
-How to hire the right DevOps talent
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)
-
-DevOps culture is quickly gaining ground, and demand for top-notch DevOps talent is greater than ever at companies all over the world. With the [annual base salary for a junior DevOps engineer][1] now topping $100,000, IT professionals are hurrying to [make the transition into DevOps.][2]
-
-But how do you choose the right candidate to fill your DevOps role?
-
-### Overview
-
-Most teams are looking for candidates with a background in operations and infrastructure, software engineering, or development, combined with skills related to configuration management, continuous integration and deployment (CI/CD), and cloud infrastructure. Knowledge of container orchestration is also in high demand.
-
-In a perfect world, the two backgrounds would meet somewhere in the middle to form Dev and Ops, but in most cases, candidates lean toward one side or the other. Yet they must possess the skills necessary to understand the needs of their counterparts to work effectively as a team to achieve continuous delivery and deployment. Since every company is different, there is no single right or wrong answer; so much depends on a company’s tech stack and infrastructure, as well as the goals and the skills of other team members. So how do you focus your search?
-
-### Decide on the background
-
-Begin by assessing the strength of your current team. Do you have rock-star software engineers but lack infrastructure knowledge? Focus on closing the skill gaps. Just because you have the budget to hire a DevOps engineer doesn’t mean you should spend weeks, or even months, trying to find the best software engineer who also happens to use Kubernetes and Docker because they are currently the trend. Instead, look for someone who will provide the most value in your environment, and see how things go from there.
-
-### There is no “Ctrl + F” solution
-
-Instead of concentrating on specific tools, concentrate on a candidate's understanding of DevOps and CI/CD-related processes. You'll be better off with someone who understands methodologies over tools. It is more important to ensure that candidates comprehend the concept of CI/CD than to ask if they prefer Jenkins, Bamboo, or TeamCity. Don’t get too caught up in the exact toolchain—rather, focus on problem-solving skills and the ability to increase efficiency, save time, and automate manual processes. You don't want to miss out on the right candidate just because the word “Puppet” was not on their resume.
-
-### Check your ego
-
-As mentioned above, DevOps is a rapidly growing field, and DevOps engineers are in hot demand. That means candidates have great buying power. You may have an amazing company or product, but hiring top talent is no longer as simple as putting up a “Help Wanted” sign and waiting for top-quality applicants to rush in. I'm not suggesting that maintaining a reputation as a great place to work is unimportant, but in today's environment, you need to make an effort to sell your position. Flaws or glitches in the hiring process, such as abruptly canceling interviews or not offering feedback after interviews, can lead to negative reviews spreading across the industry. Remember, it takes just a couple of minutes to leave a negative review on Glassdoor.
-
-### Contractor or permanent employee?
-
-Most recruiters and hiring managers immediately start searching for a full-time employee, even though they may have other options. If you’re looking to design, build, and implement a new DevOps environment, why not hire a senior person who has done this in the past? Consider hiring a senior contractor, along with a junior full-time hire. That way, you can tap the knowledge and experience of the contractor by having them work with the junior employee. Contractors can be expensive, but they bring invaluable knowledge—especially if the work can be done within a short timeframe.
-
-### Cultivate from within
-
-With so many companies competing for talent, it is difficult to find the right DevOps engineer. Not only will you need to pay top dollar to hire this person, but you must also consider that the search can take several months. However, since few companies are lucky enough to find the ideal DevOps engineer, consider searching for a candidate internally. You might be surprised at the talent you can cultivate from within your own organization.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/how-hire-right-des-talentvop
-
-作者:[Stanislav Ivaschenko][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ilyadudkin
-[1]:https://www.glassdoor.com/Salaries/junior-devops-engineer-salary-SRCH_KO0,22.htm
-[2]:https://squadex.com/insights/system-administrator-making-leap-devops/
diff --git a/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md b/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md
deleted file mode 100644
index fb5454bbe4..0000000000
--- a/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md
+++ /dev/null
@@ -1,53 +0,0 @@
-Beyond metrics: How to operate as team on today's open source project
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w)
-
-How do we traditionally think about community health and vibrancy?
-
-We might quickly zero in on metrics related primarily to code contributions: How many companies are contributing? How many individuals? How many lines of code? Collectively, these speak to both the level of development activity and the breadth of the contributor base. The former speaks to whether the project continues to be enhanced and expanded; the latter to whether it has attracted a diverse group of developers or is controlled primarily by a single organization.
-
-The [Linux Kernel Development Report][1] tracks these kinds of statistics and, unsurprisingly, it appears extremely healthy on all counts.
-
-However, while development cadence and code contributions are still clearly important, other aspects of the open source communities are also coming to the forefront. This is in part because, increasingly, open source is about more than a development model. It’s also about making it easier for users and other interested parties to interact in ways that go beyond being passive recipients of code. Of course, there have long been user groups. But open source streamlines the involvement of users, just as it does software development.
-
-This was the topic of my discussion with Diane Mueller, the director of community development for OpenShift.
-
-When OpenShift became a container platform based in part on Kubernetes in version 3, Mueller saw a need to broaden the community beyond the core code contributors. In part, this was because OpenShift was increasingly touching a broad range of open source projects and organizations such as those associated with the [Open Container Initiative (OCI)][2] and the [Cloud Native Computing Foundation (CNCF)][3]. In addition to users, cloud service providers who were offering managed services also wanted ways to get involved in the project.
-
-“What we tried to do was open up our minds about what the community constituted,” Mueller explained, adding, “We called it the [Commons][4] because Red Hat's near Boston, and I'm from that area. Boston Common is a shared resource, the grass where you bring your cows to graze, and you have your farmer's hipster market or whatever it is today that they do on Boston Common.”
-
-This new model, she said, was really “a new ecosystem that incorporated all of those different parties and different perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The Evangelist and Dev Advocate team didn't scale. We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.”
-
-![diane mueller open shift][6]
-
-Diane Mueller, director of community development at OpenShift, discusses the role of strong user communities in open source software development. (Credit: Gordon Haff, CC BY-SA 4.0)
-
-However, one interesting aspect of the Commons model is that it isn’t just virtual. We see the same pattern elsewhere in many successful open source communities, such as the Linux kernel. Lots of day-to-day activities happen on mailing lists, IRC, and other collaboration tools. But this doesn’t eliminate the benefits of face-to-face time, which allows for both richer and more informal discussions and exchanges.
-
-This interview with Mueller took place in London the day after the [OpenShift Commons Gathering][7]. Gatherings are full-day events, held a number of times a year, which are typically attended by a few hundred people. Much of the focus is on users and user stories. In fact, Mueller notes, “Here in London, one of the Commons members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. A lot of the community gatherings and things are driven by the Commons members.”
-
-Mueller wants to focus on users more and more. “The OpenShift Commons gathering at [Red Hat] Summit will be almost entirely case studies,” she noted. “Users talking about what's in their stack. What lessons did they learn? What are the best practices? Sharing those ideas that they've done just like we did here in London.”
-
-Although the Commons model grew out of some specific OpenShift needs at the time it was created, Mueller believes it’s an approach that can be applied more broadly. “I think if you abstract what we've done, you can apply it to any existing open source community,” she said. “The foundations still, in some ways, play a nice role in giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways. I think the lesson that we can learn from the experience and we can apply to other projects is to open up the community so that it includes feedback mechanisms and gives the podium away.”
-
-The evolution of the community model through approaches like the OpenShift Commons mirrors the healthy evolution of open source more broadly. Certainly, some users have been involved in the development of open source software for a long time. What’s striking today is how widespread and pervasive direct user participation has become. Sure, open source remains central to much of modern software development. But it’s also becoming increasingly central to how users learn from each other and work together with their partners and developers.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/how-communities-are-evolving
-
-作者:[Gordon Haff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ghaff
-[1]:https://www.linuxfoundation.org/2017-linux-kernel-report-landing-page/
-[2]:https://www.opencontainers.org/
-[3]:https://www.cncf.io/
-[4]:https://commons.openshift.org/
-[5]:/file/388586
-[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/39369010275_7df2c3c260_z.jpg?itok=gIhnBl6F (diane mueller open shift)
-[7]:https://www.meetup.com/London-OpenShift-User-Group/events/246498196/
diff --git a/sources/talk/20180303 4 meetup ideas- Make your data open.md b/sources/talk/20180303 4 meetup ideas- Make your data open.md
deleted file mode 100644
index a431b8376a..0000000000
--- a/sources/talk/20180303 4 meetup ideas- Make your data open.md
+++ /dev/null
@@ -1,75 +0,0 @@
-4 meetup ideas: Make your data open
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK)
-
-[Open Data Day][1] (ODD) is an annual, worldwide celebration of open data and an opportunity to show the importance of open data in improving our communities.
-
-Not many individuals and organizations know about the meaningfulness of open data or why they might want to liberate their data from the restrictions of copyright, patents, and more. They also don't know how to make their data open—that is, publicly available for anyone to use, share, or republish with modifications.
-
-This year ODD falls on Saturday, March 3, and there are [events planned][2] in every continent except Antarctica. While it might be too late to organize an event for this year, it's never too early to plan for next year. Also, since open data is important every day of the year, there's no reason to wait until ODD 2019 to host an event in your community.
-
-There are many ways to build local awareness of open data. Here are four ideas to help plan an excellent open data event any time of year.
-
-### 1. Organize an entry-level event
-
-You can host an educational event at a local library, college, or another public venue about how open data can be used and why it matters for all of us. If possible, invite a [local speaker][3] or have someone present remotely. You could also have a roundtable discussion with several knowledgeable people in your community.
-
-Consider offering resources such as the [Open Data Handbook][4], which not only provides a guide to the philosophy and rationale behind adopting open data, but also offers case studies, use cases, how-to guides, and other material to support making data open.
-
-### 2. Organize an advanced-level event
-
-For a deeper experience, organize a hands-on training event for open data newbies. Ideas for good topics include [training teachers on open science][5], [creating audiovisual expressions from open data][6], and using [open government data][7] in meaningful ways.
-
-The options are endless. To choose a topic, think about what is locally relevant, identify issues that open data might be able to address, and find people who can do the training.
-
-### 3. Organize a hackathon
-
-Open data hackathons can be a great way to bring open data advocates, developers, and enthusiasts together under one roof. Hackathons are more than just training sessions, though; the idea is to build prototypes or solve real-life challenges that are tied to open data. In a hackathon, people in various groups can contribute to the entire assembly line in multiple ways, such as identifying issues by working collaboratively through [Etherpad][8] or creating focus groups.
-
-Once the hackathon is over, make sure to upload all the useful data that is produced to the internet with an open license.
-
-### 4. Release or relicense data as open
-
-Open data is about making meaningful data publicly available under open licenses while protecting any data that might put people's private information at risk. (Learn [how to protect private data][9].) Try to find existing, interesting, and useful data that is privately owned by individuals or organizations and negotiate with them to relicense or release the data online under any of the [recommended open data licenses][10]. The widely popular [Creative Commons licenses][11] (particularly the CC0 license and the 4.0 licenses) are quite compatible with relicensing public data. (See this FAQ from Creative Commons for more information on [openly licensing data][12].)
-
-Open data can be published on multiple platforms—your website, [GitHub][13], [GitLab][14], [DataHub.io][15], or anywhere else that supports open standards.
-
-### Tips for event success
-
-No matter what type of event you decide to do, here are some general planning tips to improve your chances of success.
-
- * Find a venue that's accessible to the people you want to reach, such as a library, a school, or a community center.
- * Create a curriculum that will engage the participants.
- * Invite your target audience—make sure to distribute information through social media, community events calendars, Meetup, and the like.
-
-
-
-Have you attended or hosted a successful open data event? If so, please share your ideas in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/celebrate-open-data-day
-
-作者:[Subhashish Panigraphi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/psubhashish
-[1]:http://www.opendataday.org/
-[2]:http://opendataday.org/#map
-[3]:https://openspeakers.org/
-[4]:http://opendatahandbook.org/
-[5]:https://docs.google.com/forms/d/1BRsyzlbn8KEMP8OkvjyttGgIKuTSgETZW9NHRtCbT1s/viewform?edit_requested=true
-[6]:http://dattack.lv/en/
-[7]:https://www.eventbrite.co.nz/e/open-data-open-potential-event-friday-2-march-2018-tickets-42733708673
-[8]:http://etherpad.org/
-[9]:https://ssd.eff.org/en/module/keeping-your-data-safe
-[10]:https://opendatacommons.org/licenses/
-[11]:https://creativecommons.org/share-your-work/licensing-types-examples/
-[12]:https://wiki.creativecommons.org/wiki/Data#Frequently_asked_questions_about_data_and_CC_licenses
-[13]:https://github.com/MartinBriza/MediaWriter
-[14]:https://about.gitlab.com/
-[15]:https://datahub.io/
diff --git a/sources/talk/20180314 How to apply systems thinking in DevOps.md b/sources/talk/20180314 How to apply systems thinking in DevOps.md
deleted file mode 100644
index c35eb041bd..0000000000
--- a/sources/talk/20180314 How to apply systems thinking in DevOps.md
+++ /dev/null
@@ -1,89 +0,0 @@
-How to apply systems thinking in DevOps
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa)
-For most organizations, adopting DevOps requires a mindset shift. Unless you understand the core of [DevOps][1], you might think it's hype or just another buzzword—or worse, you might believe you have already adopted DevOps because you are using the right tools.
-
-Let’s dig deeper into what DevOps means, and explore how to apply systems thinking in your organization.
-
-### What is systems thinking?
-
-Systems thinking is a holistic approach to problem-solving. It's the opposite of analytical thinking, which separates a problem from the "bigger picture" to better understand it. Instead, systems thinking studies all the elements of a problem, along with the interactions between these elements.
-
-Most people are not used to thinking this way. Since childhood, most of us were taught math, science, and every other subject separately, by different teachers. This approach to learning follows us throughout our lives, from school to university to the workplace. When we first join an organization, we typically work in only one department.
-
-Unfortunately, the world is not that simple. Complexity, unpredictability, and sometimes chaos are unavoidable and require a broader way of thinking. Systems thinking helps us understand the systems we are part of, which in turn enables us to manage them rather than be controlled by them.
-
-According to systems thinking, everything is a system: your body, your family, your neighborhood, your city, your company, and even the communities you belong to. These systems evolve organically; they are alive and fluid. The better you understand a system's behavior, the better you can manage and leverage it. You become their change agent and are accountable for them.
-
-### Systems thinking and DevOps
-
-All systems include properties that DevOps addresses through its practices and tools. Awareness of these properties helps us properly adapt to DevOps. Let's look at the properties of a system and how DevOps relates to each one.
-
-### How systems work
-
-The figure below represents a system. To reach a goal, the system requires input, which is processed and generates output. Feedback is essential for moving the system toward the goal. Without a purpose, the system dies.
-
-![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/system.png?itok=UlqAf39I)
-
-If an organization is a system, its departments are subsystems. The flow of work moves through each department, starting with identifying a market need (the first input on the left) and moving toward releasing a solution that meets that need (the last output on the right). The output that each department generates serves as required input for the next department in the chain.
-
-The more specialized teams an organization has, the more handoffs happen between departments, the more likely the process of generating value for clients is to create bottlenecks, and thus the longer it takes to deliver value. Also, when work is passed between teams, the gap between the goal and what has been done widens.
-
-DevOps aims to optimize the flow of work throughout the organization to deliver value to clients faster—in other words, DevOps reduces time to market. This is done in part by maximizing automation, but mainly by targeting the organization's goals. This empowers prioritization and reduces duplicated work and other inefficiencies that happen during the delivery process.
-
-### System deterioration
-
-All systems are affected by entropy. Nothing can prevent system degradation; that's irreversible. This tendency to decline reflects the inherent fragility of systems. Moreover, systems are subject to threats of all types, and failure is a matter of time.
-
-To mitigate entropy, systems require constant maintenance and improvements. The effects of entropy can be delayed only when new actions are taken or input is changed.
-
-This pattern of deterioration and its opposite force, survival, can be observed in living organisms, social relationships, and other systems as well as in organizations. In fact, if an organization is not evolving, entropy is guaranteed to be increasing.
-
-DevOps attempts to break the entropy process within an organization by fostering continuous learning and improvement. With DevOps, the organization becomes fault-tolerant because it recognizes the inevitability of failure. DevOps enables a blameless culture that offers the opportunity to learn from failure. The [postmortem][2] is an example of a DevOps practice used by organizations that embrace inherent failure.
-
-The idea of intentionally embracing failure may sound counterintuitive, but that's exactly what happens in techniques like [Chaos Monkey][3]: Failure is intentionally introduced to improve availability and reliability in the system. DevOps suggests that putting some pressure into the system in a controlled way is not a bad thing. Like a muscle that gets stronger with exercise, the system benefits from the challenge.
-
-### System complexity
-
-The figure below shows how complex systems can be. In most cases, one effect can have multiple causes, and one cause can generate multiple effects. The more elements and interactions a system has, the more complex the system.
-
-![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/system-complexity.png?itok=GYZS00Lm)
-
-In this scenario, we can't immediately identify the reason for a particular event. Likewise, we can't predict with 100% certainty what will happen if a specific action is taken. We are constantly making assumptions and dealing with hypotheses.
-
-System complexity can be explained using the scientific method. In a recent study, for example, mice that were fed excess salt showed suppressed cerebral blood flow. This same experiment would have had different results if, say, the mice were fed sugar and salt. One variable can radically change results in complex systems.
-
-DevOps handles complexity by encouraging experimentation—for example, using the scientific method—and reducing feedback cycles. Smaller changes inserted into the system can be tested and validated more quickly. With a "[fail-fast][4]" approach, organizations can pivot quickly and achieve resiliency. Reacting rapidly to changes makes organizations more adaptable.
-
-DevOps also aims to minimize guesswork and maximize understanding by making the process of delivering value more tangible. By measuring processes, revealing flaws and advantages, and monitoring as much as possible, DevOps helps organizations discover the changes they need to make.
-
-### System limitations
-
-All systems have constraints that limit their performance; a system's overall capacity is delimited by its restrictions. Most of us have learned from experience that systems operating too long at full capacity can crash, and most systems work better when they function with some slack. Ignoring limitations puts systems at risk. For example, when we are under too much stress for a long time, we get sick. Similarly, overused vehicle engines can be damaged.
-
-This principle also applies to organizations. Unfortunately, organizations can't put everything into a system at once. Although this limitation may sometimes lead to frustration, the quality of work usually improves when input is reduced.
-
-Consider what happened when the speed limit on the main roads in São Paulo, Brazil was reduced from 90 km/h to 70 km/h. Studies showed that the number of accidents decreased by 38.5% and the average speed increased by 8.7%. In other words, the entire road system improved and more vehicles arrived safely at their destinations.
-
-For organizations, DevOps suggests global rather than local improvements. An improvement made downstream of a constraint doesn't matter, because it has no effect on the system at all. One constraint that DevOps addresses, for instance, is dependency on specialized teams. DevOps brings to organizations a more collaborative culture, knowledge sharing, and cross-functional teams.
-
-### Conclusion
-
-Before adopting DevOps, understand what is involved and how you want to apply it to your organization. Systems thinking will help you accomplish that while also opening your mind to new possibilities. DevOps may be seen as a popular trend today, but in 10 or 20 years, it will be the status quo.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/how-apply-systems-thinking-devops
-
-作者:[Gustavo Muniz do Carmo][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/gustavomcarmo
-[1]:https://opensource.com/tags/devops
-[2]:https://landing.google.com/sre/book/chapters/postmortem-culture.html
-[3]:https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116
-[4]:https://en.wikipedia.org/wiki/Fail-fast
diff --git a/sources/talk/20180315 6 ways a thriving community will help your project succeed.md b/sources/talk/20180315 6 ways a thriving community will help your project succeed.md
deleted file mode 100644
index cf15b7f06f..0000000000
--- a/sources/talk/20180315 6 ways a thriving community will help your project succeed.md
+++ /dev/null
@@ -1,111 +0,0 @@
-6 ways a thriving community will help your project succeed
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_community_lead.jpg?itok=F9KKLI7x)
-NethServer is an open source product that my company, [Nethesis][1], launched just a few years ago. [The product][2] wouldn't be [what it is today][3] without the vibrant community that surrounds and supports it.
-
-In my previous article, I [discussed what organizations should expect to give][4] if they want to experience the benefits of thriving communities. In this article, I'll describe what organizations should expect to receive in return for their investments in the passionate people that make up their communities.
-
-Let's review six benefits.
-
-### 1\. Innovation
-
-"Open innovation" occurs when a company sharing information also listens to the feedback and suggestions from outside the company. As a company, we don't just look at the crowd for ideas. We innovate in, with, and through communities.
-
-You may know that "[the best way to have a good idea is to have a lot of ideas][5]." You can't always expect to have the right idea on your own, so having different point of views on your product is essential. How many truly disruptive ideas can a small company (like Nethesis) create? We're all young, caucasian, and European—while in our community, we can pick up a set of inspirations from a variety of people, with different genders, backgrounds, skills, and ethnicities.
-
-So the ability to invite the entire world to continuously improve the product is no longer a dream; it's happening before our eyes. Your community could be the idea factory for innovation. With the community, you can really leverage the power of the collective.
-
-No matter who you are, most of the smartest people work for someone else. And community is the way to reach those smart people and work with them.
-
-### 2\. Research
-
-A community can be your strongest source of valuable product research.
-
-First, it can help you avoid "ivory tower development." [As Stack Exchange co-founder Jeff Atwood has said][6], creating an environment where developers have no idea who the users are is dangerous. Isolated developers, who have worked for years in their high towers, often encounter bad results because they don't have any clue about how users actually use their software. Developing in an ivory tower keeps you away from your users and can only lead to bad decisions. A community brings developers back to reality and helps them stay grounded. Gone are the days of developers working in isolation with limited resources. In this day and age, thanks to the advent of open source communities, the research department is opening up to the entire world.
-
-Second, a community can be an obvious source of product feedback—always necessary as you're researching potential paths forward. If someone gives you feedback, it means that person cares about you. It's a big gift. The community is a good place to acquire such invaluable feedback. Receiving early feedback is super important, because it reduces the cost of developing something that doesn't work in your target market. You can safely fail early, fail fast, and fail often.
-
-And third, communities help you generate comparisons with other projects. You can't know all the features, pros, and cons of your competitors' offerings. [The community, however, can.][7] Ask your community.
-
-### 3\. Perspective
-
-Communities enable companies to look at themselves and their products [from the outside][8], letting them catch strengths and weaknesses and, most importantly, realize who their products' audiences really are.
-
-Let me offer an example. When we launched the NethServer, we chose a catchy tagline for it. We were all convinced the following sentence was perfect:
-
-> [NethServer][9] is an operating system for Linux enthusiasts, designed for small offices and medium enterprises.
-
-Two years have passed since then. And we've learned that sentence was an epic fail.
-
-We failed to realize who our audience was. Now we know: NethServer is not just for Linux enthusiasts; actually, Windows users are the majority. It's not just for small offices and medium enterprises; actually, several home users install NethServer for personal use. Our community helps us to fully understand our product and look at it from our users' eyes.
-
-### 4\. Development
-
-In open source projects especially, communities can be a welcome source of product development.
-
-They can, first of all, provide testing and bug reporting. In fact, if I asked my developers about the most important community benefit, they'd answer "testing and bug reporting." Definitely. But because your code is freely available to the whole world, practically anyone with a good working knowledge of it (even hobbyists and other companies) has the opportunity to play with it, tweak it, and constantly improve it (even develop additional modules, as in our case). People can do more than just report bugs; they can fix those bugs, too, if they have the time and knowledge.
-
-But the community doesn't just create code. It can also generate resources like [how-to guides][10], FAQs, support documents, and case studies. How much would it cost to fully translate your product into seven different languages? At NethServer, we got that for free—thanks to our community members.
-
-### 5\. Marketing
-
-Communities can help your company go global. Our small Italian company, for example, wasn't prepared for a global market. The community got us prepared. For example, we needed to study and improve our English so we could read and write correctly or speak in public without looking foolish in front of an audience. The community gently forced us to organize [our first NethServer Conference][11], too—only in English.
-
-A strong community can also help your organization attain the holy grail of marketers everywhere: word of mouth marketing (or what Seth Godin calls "[tribal marketing][12]").
-
-Communities ensure that your company's messaging travels not only from company to tribe but also "sideways," from tribe member to potential tribe member. The community will become your street team, spreading word of your organization and its projects to anyone who will listen.
-
-In addition, communities help organizations satisfy one of their members' most fundamental needs: the desire to belong, to be involved in something bigger than themselves, and to change the world together.
-
-### 6\. Loyalty
-
-Attracting new users costs a business five times as much as keeping an existing one. So loyalty can have a huge impact on your bottom line. Quite simply, community helps us build brand loyalty. It's much more difficult to leave a group of people you're connected to than a faceless product or company. In a community, you're building connections with people, which is way more powerful than features or money (trust me!).
-
-### Conclusion
-
-Never forget that working with communities is always a matter of giving and taking—striking a delicate balance between the company and the community.
-
-And I wouldn't be honest with you if I didn't admit that the approach has some drawbacks. Doing everything in the open means moderating, evaluating, and processing all the data you're receiving. Supporting your members and leading the discussions definitely takes time and resources. But, if you look at what a community enables, you'll see that all this is totally worth the effort.
-
-As my friend and mentor [David Spinks keeps saying over and over again][13], "Companies fail their communities when they treat community as a tactic instead of making it a core part of their business philosophy." And [as I've said][4]: Communities aren't simply extensions of your marketing teams; "community" isn't an efficient short-term strategy. When community is a core part of your business philosophy, it can do so much more than give you short-term returns.
-
-At Nethesis we experience that every single day. As a small company, we could never have achieved the results we have without our community. Never.
-
-Community can completely set your business apart from every other company in the field. It can redefine markets. It can inspire millions of people, give them a sense of belonging, and make them feel an incredible bond with your company.
-
-And it can make you a whole lot of money.
-
-Community-driven companies will always win. Remember that.
-
-[Subscribe to our weekly newsletter][14] to learn more about open organizations.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/18/3/why-build-community-3
-
-作者:[Alessio Fattorini][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/alefattorini
-[1]:http://www.nethesis.it/
-[2]:https://www.nethserver.org/
-[3]:https://distrowatch.com/table.php?distribution=nethserver
-[4]:https://opensource.com/open-organization/18/2/why-build-community-2
-[5]:https://www.goodreads.com/author/quotes/52938.Linus_Pauling
-[6]:https://blog.codinghorror.com/ivory-tower-development/
-[7]:https://community.nethserver.org/tags/comparison
-[8]:https://community.nethserver.org/t/improve-our-communication/2569
-[9]:http://www.nethserver.org/
-[10]:https://community.nethserver.org/c/howto
-[11]:https://community.nethserver.org/t/nethserver-conference-in-italy-sept-29-30-2017/6404
-[12]:https://www.ted.com/talks/seth_godin_on_the_tribes_we_lead
-[13]:http://cmxhub.com/article/community-business-philosophy-tactic/
-[14]:https://opensource.com/open-organization/resources/newsletter
diff --git a/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md b/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md
deleted file mode 100644
index 6ae7cbea2c..0000000000
--- a/sources/talk/20180315 Lessons Learned from Growing an Open Source Project Too Fast.md
+++ /dev/null
@@ -1,40 +0,0 @@
-Lessons Learned from Growing an Open Source Project Too Fast
-======
-![open source project][1]
-
-Are you managing an open source project or considering launching one? If so, it may come as a surprise that one of the challenges you can face is rapid growth. Matt Butcher, Principal Software Development Engineer at Microsoft, addressed this issue in a presentation at Open Source Summit North America. His talk covered everything from teamwork to the importance of knowing your goals and sticking to them.
-
-Butcher is no stranger to managing open source projects. As [Microsoft invests more deeply into open source][2], Butcher has been involved with many projects, including toolkits for Kubernetes and QueryPath, the jQuery-like library for PHP.
-
-Butcher described a case study involving Kubernetes Helm, a package system for Kubernetes. Helm arose from a company team-building hackathon, with an original team of three people giving birth to it. Within 18 months, the project had hundreds of contributors and thousands of active users.
-
-### Teamwork
-
-“We were stretched to our limits as we learned to grow,” Butcher said. “When you’re trying to set up your team of core maintainers and they’re all trying to work together, you want to spend some actual time trying to optimize for a process that lets you be cooperative. You have to adjust some expectations regarding how you treat each other. When you’re working as a group of open source collaborators, the relationship is not employer/employee necessarily. It’s a collaborative effort.”
-
-In addition to focusing on the right kinds of teamwork, Butcher and his collaborators learned that managing governance and standards is an ongoing challenge. “You want people to understand who makes decisions, how they make decisions and why they make the decisions that they make,” he said. “When we were a small project, there might have been two paragraphs in one of our documents on standards, but as a project grows and you get growing pains, these documented things gain a life of their own. They get their very own repositories, and they just keep getting bigger along with the project.”
-
-Should all discussion surrounding an open source project go on in public, bathed in the hot lights of community scrutiny? Not necessarily, Butcher noted. “A minor thing can get blown into catastrophic proportions in a short time because of misunderstandings and because something that should have been done in private ended up being public,” he said. “Sometimes we actually make architectural recommendations as a closed group. The reason we do this is that we don’t want to miscue the community. The people who are your core maintainers are core maintainers because they’re experts, right? These are the people that have been selected from the community because they understand the project. They understand what people are trying to do with it. They understand the frustrations and concerns of users.”
-
-### Acknowledge Contributions
-
-Butcher added that it is essential to acknowledge people’s contributions to keep the environment surrounding a fast-growing project from becoming toxic. “We actually have an internal rule in our core maintainers guide that says, ‘Make sure that at least one comment that you leave on a code review, if you’re asking for changes, is a positive one,’” he said. “It sounds really juvenile, right? But it serves a specific purpose. It lets somebody know, ‘I acknowledge that you just made a gift of your time and your resources.’”
-
-Want more tips on successfully launching and managing open source projects? Stay tuned for more insight from Matt Butcher’s talk, in which he discusses specific project management issues faced by Kubernetes Helm.
-
-For more information, be sure to check out [The Linux Foundation’s growing list of Open Source Guides for the Enterprise][3], covering topics such as starting an open source project, improving your open source impact, and participating in open source communities.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxfoundation.org/blog/lessons-learned-from-growing-an-open-source-project-too-fast/
-
-作者:[Sam Dean][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxfoundation.org/author/sdean/
-[1]:https://www.linuxfoundation.org/wp-content/uploads/2018/03/huskies-2279627_1920.jpg
-[2]:https://thenewstack.io/microsoft-shifting-emphasis-open-source/
-[3]:https://www.linuxfoundation.org/resources/open-source-guides/
diff --git a/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md b/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md
deleted file mode 100644
index e433e85d5f..0000000000
--- a/sources/talk/20180316 How to avoid humiliating newcomers- A guide for advanced developers.md
+++ /dev/null
@@ -1,119 +0,0 @@
-How to avoid humiliating newcomers: A guide for advanced developers
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy)
-Every year in New York City, a few thousand young men come to town, dress up like Santa Claus, and do a pub crawl. One year during this SantaCon event, I was walking on the sidewalk and minding my own business, when I saw an extraordinary scene. There was a man dressed up in a red hat and red jacket, and he was talking to a homeless man who was sitting in a wheelchair. The homeless man asked Santa Claus, "Can you spare some change?" Santa dug into his pocket and brought out a $5 bill. He hesitated, then gave it to the homeless man. The homeless man put the bill in his pocket.
-
-In an instant, something went wrong. Santa yelled at the homeless man, "I gave you $5. I wanted to give you one dollar, but five is the smallest I had, so you oughtta be grateful. This is your lucky day, man. You should at least say thank you!"
-
-This was a terrible scene to witness. First, the power difference was terrible: Santa was an able-bodied white man with money and a home, and the other man was black, homeless, and using a wheelchair. It was also terrible because Santa Claus was dressed like the very symbol of generosity! And he was behaving like Santa until, in an instant, something went wrong and he became cruel.
-
-This is not merely a story about Drunk Santa, however; this is a story about technology communities. We, too, try to be generous when we answer new programmers' questions, and every day our generosity turns to rage. Why?
-
-### My cruelty
-
-I'm reminded of my own bad behavior in the past. I was hanging out on my company's Slack when a new colleague asked a question.
-
-> **New Colleague:** Hey, does anyone know how to do such-and-such with MongoDB?
-> **Jesse:** That's going to be implemented in the next release.
-> **New Colleague:** What's the ticket number for that feature?
-> **Jesse:** I memorize all ticket numbers. It's #12345.
-> **New Colleague:** Are you sure? I can't find ticket 12345.
-
-He had missed my sarcasm, and his mistake embarrassed him in front of his peers. I laughed to myself, and then I felt terrible. As one of the most senior programmers at MongoDB, I should not have been setting this example. And yet, such behavior is commonplace among programmers everywhere: We get sarcastic with newcomers, and we humiliate them.
-
-### Why does it matter?
-
-Perhaps you are not here to make friends; you are here to write code. If the code works, does it matter if we are nice to each other or not?
-
-A few months ago on the Stack Overflow blog, David Robinson showed that [Python has been growing dramatically][1], and it is now the top language that people view questions about on Stack Overflow. Even in the most pessimistic forecast, it will far outgrow the other languages this year.
-
-![Projections for programming language popularity][2]
-
-If you are a Python expert, then the line surging up and to the right is good news for you. It does not represent competition, but confirmation. As more new programmers learn Python, our expertise becomes ever more valuable, and we will see that reflected in our salaries, our job opportunities, and our job security.
-
-But there is a danger. There are soon to be more new Python programmers than ever before. To sustain this growth, we must welcome them, and we are not always a welcoming bunch.
-
-### The trouble with Stack Overflow
-
-I searched Stack Overflow for rude answers to beginners' questions, and they were not hard to find.
-
-![An abusive answer on StackOverflow][3]
-
-The message is plain: If you are asking a question this stupid, you are doomed. Get out.
-
-I immediately found another example of bad behavior:
-
-![Another abusive answer on Stack Overflow][4]
-
-Who has never been confused by Unicode in Python? Yet the message is clear: You do not belong here. Get out.
-
-Do you remember how it felt when you needed help and someone insulted you? It feels terrible. And it decimates the community. Some of our best experts leave every day because they see us treating each other this way. Maybe they still program Python, but they are no longer participating in conversations online. This cruelty drives away newcomers, too, particularly members of groups underrepresented in tech who might not be confident they belong. These are people who could have become the great Python programmers of the next generation, but if they ask a question and somebody is cruel to them, they leave.
-
-This is not in our interest. It hurts our community, and it makes our skills less valuable because we drive people out. So, why do we act against our own interests?
-
-### Why generosity turns to rage
-
-There are a few scenarios that really push my buttons. One is when I act generously but don't get the acknowledgment I expect. (I am not the only person with this resentment: This is probably why Drunk Santa snapped when he gave a $5 bill to a homeless man and did not receive any thanks.)
-
-Another is when answering requires more effort than I expect. An example is when my colleague asked a question on Slack and followed up with, “What’s the ticket number?” I had judged how long it would take to help him, and when he asked for more help, I lost my temper.
-
-These scenarios boil down to one problem: I have expectations for how things are going to go, and when those expectations are violated, I get angry.
-
-I've been studying Buddhism for years, so my understanding of this topic is based in Buddhism. I like to think that the Buddha discussed the problem of expectations in his first tech talk when, in his mid-30s, he experienced a breakthrough after years of meditation and convened a small conference to discuss his findings. He had not rented a venue, so he sat under a tree. The attendees were a handful of meditators the Buddha had met during his wanderings in northern India. The Buddha explained that he had discovered four truths:
-
- * First, that to be alive is to be dissatisfied—to want things to be better than they are now.
- * Second, this dissatisfaction is caused by wants; specifically, by our expectation that if we acquire what we want and eliminate what we do not want, it will make us happy for a long time. This expectation is unrealistic: If I get a promotion or if I delete 10 emails, it is temporarily satisfying, but it does not make me happy over the long-term. We are dissatisfied because every material thing quickly disappoints us.
- * The third truth is that we can be liberated from this dissatisfaction by accepting our lives as they are.
- * The fourth truth is that the way to transform ourselves is to understand our minds and to live a generous and ethical life.
-
-
-
-I still get angry at people on the internet. It happened to me recently, when someone posted a comment on [a video I published about Python co-routines][5]. It had taken me months of research and preparation to create this video, and then a newcomer commented, "I want to master python what should I do."
-
-![Comment on YouTube][6]
-
-This infuriated me. My first impulse was to be sarcastic, "For starters, maybe you could spell Python with a capital P and end a question with a question mark." Fortunately, I recognized my anger before I acted on it, and closed the tab instead. Sometimes liberation is just a Command+W away.
-
-### What to do about it
-
-If you joined a community with the intent to be helpful but on occasion find yourself flying into a rage, I have a method to prevent this. For me, it starts with asking myself, "Am I angry?" Knowing is most of the battle. Online, however, we can lose track of our emotions. It is well-established that one reason we are cruel on the internet is because, without seeing or hearing the other person, our natural empathy is not activated. But the other problem with the internet is that, when we use computers, we lose awareness of our bodies. I can be angry and type a sarcastic message without even knowing I am angry. I do not feel my heart pound and my neck grow tense. So, the most important step is to ask myself, "How do I feel?"
-
-If I am too angry to answer, I can usually walk away. As [Thumper learned in Bambi][7], "If you can't say something nice, don't say nothing at all."
-
-### The reward
-
-Helping a newcomer is its own reward, whether you receive thanks or not. But it does not hurt to treat yourself to a glass of whiskey or a chocolate, or just a sigh of satisfaction after your good deed.
-
-But besides our personal rewards, the payoff for the Python community is immense. We keep the line surging up and to the right. Python continues growing, and that makes our own skills more valuable. We welcome new members, people who might not be sure they belong with us, by reassuring them that there is no such thing as a stupid question. We use Python to create an inclusive and diverse community around writing code. And besides, it simply feels good to be part of a community where people treat each other with respect. It is the kind of community that I want to be a member of.
-
-### The three-breath vow
-
-There is one idea I hope you remember from this article: To control our behavior online, we must occasionally pause and notice our feelings. I invite you, if you so choose, to repeat the following vow out loud:
-
-> I vow
-> to take three breaths
-> before I answer a question online.
-
-This article is based on a talk, [Why Generosity Turns To Rage, and What To Do About It][8], that Jesse gave at PyTennessee in February. For more insight for Python developers, attend [PyCon 2018][9], May 9-17 in Cleveland, Ohio.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/avoid-humiliating-newcomers
-
-作者:[A. Jesse][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/emptysquare
-[1]:https://stackoverflow.blog/2017/09/06/incredible-growth-python/
-[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/projections.png?itok=5QTeJ4oe (Projections for programming language popularity)
-[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/abusive-answer-1.jpg?itok=BIWW10Rl (An abusive answer on StackOverflow)
-[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/abusive-answer-2.jpg?itok=0L-n7T-k (Another abusive answer on Stack Overflow)
-[5]:https://www.youtube.com/watch?v=7sCu4gEjH5I
-[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/i-want-to-master-python.png?itok=Y-2u1XwA (Comment on YouTube)
-[7]:https://www.youtube.com/watch?v=nGt9jAkWie4
-[8]:https://www.pytennessee.org/schedule/presentation/175/
-[9]:https://us.pycon.org/2018/
diff --git a/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md b/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md
deleted file mode 100644
index 8c02ca228b..0000000000
--- a/sources/talk/20180320 Easily Fund Open Source Projects With These Platforms.md
+++ /dev/null
@@ -1,96 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: subject: (Easily Fund Open Source Projects With These Platforms)
-[#]: via: (https://itsfoss.com/open-source-funding-platforms/)
-[#]: author: ([Ambarish Kumar](https://itsfoss.com/author/ambarish/))
-[#]: url: ( )
-
-Easily Fund Open Source Projects With These Platforms
-======
-
-**Brief: We list some funding platforms you can use to financially support open source projects.**
-
-Financial support is one of the many ways to [help the Linux and Open Source community][1]. This is why you see a “Donate” option on the websites of most open source projects.
-
-While big corporations have the necessary funding and resources, most open source projects are developed by individuals in their spare time. That work still requires effort and time, and it probably involves some overhead costs too. Monetary support surely helps drive project development.
-
-If you would like to support open source projects financially, let me show you some platforms dedicated to open source and/or Linux.
-
-### Funding platforms for Open Source projects
-
-![Open Source funding platforms][2]
-
-Just to clarify, we are not associated with any of the funding platforms mentioned here.
-
-#### 1\. Liberapay
-
-[Gratipay][3] was probably the biggest platform for funding open source projects and the people associated with them, but it was shut down at the end of 2017. However, it has a fork, Liberapay, that works as a recurring donation platform for open source projects and their contributors.
-
-[Liberapay][4] is a non-profit, open source organization that facilitates periodic donations to a project. You can create an account as a contributor and ask the people who would really like to help (usually the consumers of your products) to donate.
-
-To receive donations, you will have to create an account on Liberapay, briefly describe what you do and what your project is about, explain your reasons for asking for donations, and say what will be done with the money you receive.
-
-Someone who would like to donate has to add money to their account and set up a payment schedule to a recipient, which can be weekly, monthly, or yearly. An email is triggered when there is not much money left to donate.
-
-The currencies supported are US dollars and euros as of now, and you can always put up a donation badge on GitHub, your Twitter profile, or your website.
-
-#### 2\. Bountysource
-
-[Bountysource][5] is a funding platform for open source software that has a unique way of paying developers for their time and work, in the form of bounties.
-
-There are basically two kinds of campaigns: bounties and the Salt Campaign.
-
-Under bounties, users declare bounties, i.e., cash prizes, on open issues that they believe should be fixed or on new features they want to see in the software they are using. A developer can then fix the issue or implement the feature to receive the cash prize.
-
-The Salt Campaign is like any other recurring funding: anyone can pay a recurring amount to a project or to an individual working on an open source project for as long as they want.
-
-Bountysource accepts any software that is approved by the Free Software Foundation or the Open Source Initiative. Bounties can be placed using PayPal, Bitcoin, or bounty funds earned previously on the platform. Bountysource currently supports a number of issue trackers, such as GitHub, Bugzilla, Google Code, Jira, and Launchpad.
-
-#### 3\. Open Collective
-
-[Open Collective][6] is another popular funding initiative where a person who wants to receive donations for the work they are doing in the open source world can create a page. They can submit expense reports for the project they are working on, and a contributor can add money to the collective to pay for those expenses.
-
-The complete process is transparent, and anyone can see who is associated with Open Collective. The contributions are visible along with the unpaid expenses. There is also the option to contribute on a recurring basis.
-
-Open Collective currently has more than 500 collectives backed by more than 5,000 users.
-
-The fact that it is transparent and you know what you are contributing to drives more accountability. Some common examples of collective expenses include hosting costs, community maintenance, and travel expenses.
-
-Though Open Collective keeps 10% of all the transactions, it is still a nice way to get your expenses covered in the process of contributing towards an open source project.
-
-#### 4\. Open Source Grants
-
-[Open Source Grants][7] is still in its beta stage and has not matured yet. It is looking for projects that do not have any stable funding and that add value to the open source community. Most open source projects are run by a small community in its free time, and the initiative is trying to fund them so that the developers can work full time on their projects.
-
-It is equally searching for companies that want to help open source enthusiasts. The process of submitting a project is still being worked out, and hopefully we will soon see a working funding process.
-
-### Final Words
-
-In the end, I would also like to mention [Patreon][8]. This funding platform is not exclusive to open source but is focused on creators of all kinds. Some projects like [elementary OS have created their accounts on Patreon][9] so that you can support the project on a recurring basis.
-
-Think Free Speech, not Free Beer. Your small contribution to a project can help sustain it in the long run. For developers, the above platforms can provide a good way to cover their expenses.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/open-source-funding-platforms/
-
-作者:[Ambarish Kumar][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ambarish/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/help-linux-grow/
-[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/Fund-Open-Source-projects.png?resize=800%2C450&ssl=1
-[3]: https://itsfoss.com/gratipay-open-source/
-[4]: https://liberapay.com/
-[5]: https://www.bountysource.com/
-[6]: https://opencollective.com/
-[7]: https://foundation.travis-ci.org/grants/
-[8]: https://www.patreon.com/
-[9]: https://www.patreon.com/elementary
diff --git a/sources/talk/20180321 8 tips for better agile retrospective meetings.md b/sources/talk/20180321 8 tips for better agile retrospective meetings.md
deleted file mode 100644
index ec45bf17f0..0000000000
--- a/sources/talk/20180321 8 tips for better agile retrospective meetings.md
+++ /dev/null
@@ -1,66 +0,0 @@
-8 tips for better agile retrospective meetings
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_meeting.png?itok=4_CivQgp)
-I’ve often thought that retrospectives should be called prospectives, as that term concerns the future rather than focusing on the past. The retro itself is truly future-looking: It’s the space where we can ask the question, “With what we know now, what’s the next experiment we need to try for improving our lives, and the lives of our customers?”
-
-### What’s a retro supposed to look like?
-
-There are two significant loops in product development: One produces the desired potentially shippable nugget. The other is where we examine how we’re working—not only to avoid doing what didn’t work so well, but also to determine how we can amplify the stuff we do well—and devise an experiment to pull into the next production loop to improve how our team is delighting our customers. This is the loop on the right side of this diagram:
-
-
-![Retrospective 1][2]
-
-### When retros implode
-
-While attending various teams' iteration retrospective meetings, I saw a common thread of discontent associated with a relentless focus on continuous improvement.
-
-One of the engineers put it bluntly: “[Our] continuous improvement feels like we are constantly failing.”
-
-The teams talked about what worked, restated the stuff that didn’t work (perhaps already feeling like they were constantly failing), nodded to one another, and gave long sighs. Then one of the engineers (already late for another meeting) finally summed up the meeting: “Ok, let’s try not to submit all of the code on the last day of the sprint.” There was no opportunity to amplify the good, as the good was not discussed.
-
-In effect, here’s what the retrospective felt like:
-
-![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_2.jpg?itok=HrDkppCG)
-
-The anti-pattern is where retrospectives become dreaded sessions where we look back at the last iteration, make two columns—what worked and what didn’t work—and quickly come to some solution for the next iteration. There is no [scientific method][3] involved. There is no data gathering and research, no hypothesis, and very little deep thought. The result? You don’t get an experiment or a potential improvement to pull into the next iteration.
-
-### 8 tips for better retrospectives
-
- 1. Amplify the good! Instead of focusing on what didn’t work well, why not begin the retro by having everyone mention one positive item first?
- 2. Don’t jump to a solution. Thinking about a problem deeply instead of trying to solve it right away might be a better option.
- 3. If the retrospective doesn’t make you feel excited about an experiment, maybe you shouldn’t try it in the next iteration.
- 4. If you’re not analyzing how to improve (using [5 Whys][4], [force-field analysis][5], [impact mapping][6], or [fish-boning][7]), you might be jumping to solutions too quickly.
- 5. Vary your methods. If every time you do a retrospective you ask, “What worked, what didn’t work?” and then vote on the top item from either column, your team will quickly get bored. [Retromat][8] is a great free retrospective tool to help vary your methods.
- 6. End each retrospective by asking for feedback on the retro itself. This might seem a bit meta, but it works: Continually improving the retrospective is recursively improving as a team.
- 7. Remove the impediments. Ask how you are enabling the team's search for improvement, and be prepared to act on any feedback.
- 8. There are no "iteration police." Take breaks as needed. Deriving hypotheses from analysis and coming up with experiments involves creativity, and it can be taxing. Every once in a while, go out as a team and enjoy a nice retrospective lunch.
-
-
-
-This article was inspired by [Retrospective anti-pattern: continuous improvement should not feel like constantly failing][9], posted at [Podojo.com][10].
-
-**[See our related story,[How to build a business case for DevOps transformation][11].]**
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/tips-better-agile-retrospective-meetings
-
-作者:[Catherine Louis][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/catherinelouis
-[1]:/file/389021
-[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_1.jpg?itok=bggmHN1Q (Retrospective 1)
-[3]:https://en.wikipedia.org/wiki/Scientific_method
-[4]:https://en.wikipedia.org/wiki/5_Whys
-[5]:https://en.wikipedia.org/wiki/Force-field_analysis
-[6]:https://opensource.com/open-organization/17/6/experiment-impact-mapping
-[7]:https://en.wikipedia.org/wiki/Ishikawa_diagram
-[8]:https://plans-for-retrospectives.com/en/?id=28
-[9]:http://www.podojo.com/retrospective-anti-pattern-continuous-improvement-should-not-feel-like-constantly-failing/
-[10]:http://www.podojo.com/
-[11]:https://opensource.com/article/18/2/how-build-business-case-devops-transformation
diff --git a/sources/talk/20180323 7 steps to DevOps hiring success.md b/sources/talk/20180323 7 steps to DevOps hiring success.md
deleted file mode 100644
index cdea0c65ac..0000000000
--- a/sources/talk/20180323 7 steps to DevOps hiring success.md
+++ /dev/null
@@ -1,56 +0,0 @@
-7 steps to DevOps hiring success
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)
-As many of us in the DevOps scene know, most companies are hiring, or, at least, trying to do so. The required skills and job descriptions can change entirely from company to company. As a broad overview, most teams are looking for a candidate with either an operations and infrastructure background or a software engineering and development background, combined with key skills relating to continuous integration, configuration management, continuous delivery/deployment, and cloud infrastructure. Currently in high demand is knowledge of container orchestration.
-
-In the ideal world, the two backgrounds will meet somewhere in the middle to form Dev and Ops, but in most cases, there is a lean toward one side or the other while maintaining sufficient skills to understand the needs and demands of their counterparts to work collaboratively and achieve the end goal of continuous delivery/deployment. Every company is different and there isn’t necessarily a right or wrong here. It all depends on your infrastructure, tech stack, other team members’ skills, and the individual goals you hope to achieve by hiring this individual.
-
-### Focus your hiring
-
-Now, given the various routes to becoming a DevOps practitioner, how do hiring managers focus their search and selection process to ensure that they’re hitting the mark?
-
-#### Decide on the background
-
-Assess the strengths of your existing team. Do you already have some amazing software engineers but you’re lacking the infrastructure knowledge? Aim to close these gaps in skills. You may have been given the budget to hire for DevOps, but you don’t have to spend weeks/months searching for the best software engineer who happens to use Docker and Kubernetes because they are the current hot trends in this space. Find the person who will provide the most value in your environment and go from there.
-
-#### Contractor or permanent employee?
-
-Many hiring managers will automatically start searching for a full-time permanent employee when their needs may suggest that they have other options. Sometimes a contractor is your best bet or maybe contract-hire. If you’re aiming to design, implement and build a new DevOps environment, why not find a senior person who has done this a number of times already? Try hiring a senior contractor and bring on a junior full-time hire in parallel; this way, you’ll be able to retain the external contractor knowledge by having them work alongside the junior hire. Contractors can be expensive, but the knowledge they bring can be invaluable, especially if the work can be completed over a shorter time frame. Again, this is just another point of view and you might be best off with a full-time hire to grow the team.
-
-#### CTRL F is not the solution
-
-Focus on their understanding of DevOps and CI/CD-related processes over specific tools. I believe the best approach is to focus on finding someone who understands the methodologies over the tools. Does your candidate understand the concept of continuous integration or the concept of continuous delivery? That’s more important than asking whether your candidate uses Jenkins versus Bamboo versus TeamCity and so on. Try not to get caught up in the exact tool chain. The focus should be on the candidates’ ability to solve problems. Are they obsessed with increasing efficiency, saving time, automating manual processes and constantly searching for flaws in the system? They might be the person you were looking for, but you missed them because you didn’t see the word "Puppet" on the resume.
-
-#### Work closely with your internal talent acquisition team and/or an external recruiter
-
-Be clear and precise with what you’re looking for and have an ongoing, open communication with recruiters. They can and will help you if used effectively. The job of these recruiters is to save you time by sourcing candidates while you’re focusing on your day-to-day role. Work closely with them and deliver in the same way that you would expect them to deliver for you. If you say you will review a candidate by X time, do it. If they say they’ll have a candidate in your inbox by Y time, make sure they do it, too. Start by setting up an initial call to talk through your requirement, lay out a timeline in which you expect candidates by a specific time, and explain your process in terms of when you will interview, how many interview rounds, and how soon after you will be able to make a final decision on whether to offer or reject the candidates. If you can get this relationship working well, you’ll save lots of time. And make sure your internal teams are focused on supporting your process, not blocking it.
-
-#### $$$
-
-Decide how much you want to pay. It’s not all about the money, but you can waste a lot of your and other people’s time if you don’t lock down the ballpark salary or hourly rate that you can afford. If your budget doesn’t stretch as far as your competitors’, you need to consider what else can help sell the opportunity. Flexible working hours and remote working options are some great ways to do this. Most companies have snacks, beer, and cool offices nowadays, so focus on the real value such as the innovative work your team is doing and how awesome your game-changing product might be.
-
-#### Drop the ego
-
-You may have an amazing company and/or product, but you also have some hot competition. Everyone is hiring in this space and candidates have a lot of the buying power. It is no longer as simple as saying, "We are hiring" and the awesome candidates come flowing in. You need to sell your opportunities. Maintaining a reputation as a great place to work is also important. A poor hiring process, such as interviewing without giving feedback, can contribute to bad rumors being spread across the industry. It only takes a few minutes to leave a sour review on Glassdoor.
-
-#### A smooth process is a successful one
-
-"Let’s get every single person within the company to do a one-hour interview with the new DevOps person we are hiring!" No, let’s not do that. Two or three stages should be sufficient. You have managers and directors for a reason. Trust your instinct and use your experience to make decisions on who will fit into your organization. Some of the most successful companies can do one phone screen followed by an in-person meeting. During the in-person interview, spend a morning or afternoon allowing the candidate to meet the relevant leaders and senior members of their direct team, then take them for lunch, dinner, or drinks where you can see how they are on a social level. If you can’t have a simple conversation with them, then you probably won’t enjoy working with them. If the thumbs are up, make the hire and don’t wait around. A good candidate will usually have numerous offers on the table at the same time.
-
-If all goes well, you should be inviting your shiny new employee or contractor into the office in the next few weeks and hopefully many more throughout the year.
-
-This article was originally published on [DevOps.com][1] and republished with author permission.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/7-steps-devops-hiring-success
-
-作者:[Conor Delanbanque][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/cdelanbanque
-[1]:https://devops.com/7-steps-devops-hiring-success/
diff --git a/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md b/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md
deleted file mode 100644
index bac0819e74..0000000000
--- a/sources/talk/20180330 Meet OpenAuto, an Android Auto emulator for Raspberry Pi.md
+++ /dev/null
@@ -1,81 +0,0 @@
-Meet OpenAuto, an Android Auto emulator for Raspberry Pi
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb_computer_person_general_.png?itok=BRGJXU7e)
-
-In 2015, Google introduced [Android Auto][1], a system that allows users to project certain apps from their Android smartphones onto a car's infotainment display. Android Auto's driver-friendly interface, with larger touchscreen buttons and voice commands, aims to make it easier and safer for drivers to control navigation, music, podcasts, radio, phone calls, and more while keeping their eyes on the road. Android Auto can also run as an app on an Android smartphone, enabling owners of older-model vehicles without modern head unit displays to take advantage of these features.
-
-While there are many [apps][2] available for Android Auto, developers are working to add to its catalog. A new, open source tool named [OpenAuto][3] is hoping to make that easier by giving developers a way to emulate Android Auto on a Raspberry Pi. With OpenAuto, developers can test their applications in conditions similar to how they'll work on an actual car head unit.
-
-OpenAuto's creator, Michal Szwaj, answered some questions about his project for Opensource.com. Some responses have been edited for conciseness and clarity.
-
-### What is OpenAuto?
-
-In a nutshell, OpenAuto is an emulator for the Android Auto head unit. It emulates the head unit software and allows you to use Android Auto on your PC or on any other embedded platform like Raspberry Pi 3.
-
-Head unit software is a frontend for the Android Auto projection. All magic related to the Android Auto, like navigation, Google Voice Assistant, or music playback, is done on the Android device. Projection of Android Auto on the head unit is accomplished using the [H.264][4] codec for video and [PCM][5] codec for audio streaming. This is what the head unit software mostly does—it decodes the H.264 video stream and PCM audio streams and plays them back together. Another function of the head unit is providing user inputs. OpenAuto supports both touch events and hard keys.
-
-### What platforms does OpenAuto run on?
-
-My target platform for deployment of OpenAuto is the Raspberry Pi 3 computer. For successful deployment, I needed to implement support of video hardware acceleration using the Raspberry Pi 3 GPU (VideoCore 4). Thanks to this, Android Auto projection on the Raspberry Pi 3 computer can be handled even using 1080p@60 fps resolution. I used [OpenMAX IL][6] and IL client libraries delivered together with the Raspberry Pi firmware to implement video hardware acceleration.
-
-Taking advantage of the fact that the Raspberry Pi operating system is Raspbian based on Debian Linux, OpenAuto can also be built for any other Linux-based platform that provides support for hardware video decoding. Most of the Linux-based platforms provide support for hardware video decoding directly in GStreamer. Thanks to highly portable libraries like Boost and [Qt][7], OpenAuto can be built and run on the Windows platform. Support of MacOS is being implemented by the community and should be available soon.
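-
-As a minimal sketch of the decode-and-play step described earlier (this is not OpenAuto's actual code, which is C++ and uses OpenMAX IL on the Raspberry Pi), the snippet below uses GStreamer's Python bindings to decode and display an H.264 elementary stream with a software decoder. It assumes the GStreamer plugins and Python bindings are installed, and `sample.h264` is only a placeholder file name:
-
-```python
-import gi
-gi.require_version("Gst", "1.0")
-from gi.repository import Gst
-
-# Decode an H.264 elementary stream and display it. A real head unit would
-# feed the decoder from the USB transport and swap avdec_h264 for a
-# hardware decoder element.
-Gst.init(None)
-pipeline = Gst.parse_launch(
-    "filesrc location=sample.h264 ! h264parse ! avdec_h264 "
-    "! videoconvert ! autovideosink"
-)
-pipeline.set_state(Gst.State.PLAYING)
-
-# Block until playback finishes or an error occurs, then clean up.
-bus = pipeline.get_bus()
-bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
-                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
-pipeline.set_state(Gst.State.NULL)
-```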
-
-[OpenAuto video](https://www.youtube.com/embed/k9tKRqIkQs8)
-
-### What software libraries does the project use?
-
-The core of OpenAuto is the [aasdk][8] library, which provides support for all Android Auto features. The aasdk library is built on top of the Boost, libusb, and OpenSSL libraries. [libusb][9] implements communication between the head unit and an Android device (via USB bus). [Boost][10] provides support for the asynchronous mechanisms for communication. It is required for high efficiency and scalability of the head unit software. [OpenSSL][11] is used for encrypting communication.
-
-The aasdk library is designed to be fully reusable for any purposes related to implementation of the head unit software. You can use it to build your own head unit software for your desired platform.
-
-Another very important library used in OpenAuto is Qt. It provides support for OpenAuto's multimedia, user input, and graphical interface. The build system OpenAuto uses is [CMake][12].
-
-Note: The Android Auto protocol is taken from another great Android Auto head unit project called [HeadUnit][13]. The people working on this project did an amazing job in reverse engineering the Android Auto protocol and creating the protocol buffers that structure all its messages.
-
-### What equipment do you need to run OpenAuto on Raspberry Pi?
-
-In addition to a Raspberry Pi 3 computer and an Android device, you need:
-
- * **USB sound card:** The Raspberry Pi 3 doesn't have a microphone input, which is required to use Google Voice Assistant
- * **Video output device:** You can use either a touchscreen or any other video output device connected to HDMI or composite output (RCA)
- * **Input device:** For example, a touchscreen or a USB keyboard
-
-
-
-### What else do you need to get started?
-
-In order to use OpenAuto, you must build it first. On the OpenAuto wiki page you can find [detailed instructions][14] for how to build it for the Raspberry Pi 3 platform. On other Linux-based platforms, the build process will look very similar.
-
-On the wiki page you can also find other useful instructions, such as how to configure the Bluetooth Hands-Free Profile (HFP) and Advanced Audio Distribution Profile (A2DP) and PulseAudio.
-
-### What else should we know about OpenAuto?
-
-OpenAuto allows anyone to create a head unit based on the Raspberry Pi 3 hardware. Nevertheless, you should always be careful about safety and keep in mind that OpenAuto is just an emulator. It was not certified by any authority and was not tested in a driving environment, so using it in a car is not recommended.
-
-OpenAuto is licensed under GPLv3. For more information, visit the [project's GitHub page][3], where you can find its source code and other information.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/openauto-emulator-Raspberry-Pi
-
-作者:[Michal Szwaj][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/michalszwaj
-[1]:https://www.android.com/auto/faq/
-[2]:https://play.google.com/store/apps/collection/promotion_3001303_android_auto_all
-[3]:https://github.com/f1xpl/openauto
-[4]:https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC
-[5]:https://en.wikipedia.org/wiki/Pulse-code_modulation
-[6]:https://www.khronos.org/openmaxil
-[7]:https://www.qt.io/
-[8]:https://github.com/f1xpl/aasdk
-[9]:http://libusb.info/
-[10]:http://www.boost.org/
-[11]:https://www.openssl.org/
-[12]:https://cmake.org/
-[13]:https://github.com/gartnera/headunit
-[14]:https://github.com/f1xpl/
diff --git a/sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md b/sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md
deleted file mode 100644
index b128be62f0..0000000000
--- a/sources/talk/20180403 3 pitfalls everyone should avoid with hybrid multicloud.md
+++ /dev/null
@@ -1,87 +0,0 @@
-3 pitfalls everyone should avoid with hybrid multicloud
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q)
-
-This article was co-written with [Roel Hodzelmans][1].
-
-We're all told the cloud is the way to ensure a digital future for our businesses. But which cloud? From cloud to hybrid cloud to hybrid multi-cloud, you need to make choices, and these choices don't preclude the daily work of enhancing your customers' experience or agile delivery of the applications they need.
-
-This article is the first in a four-part series on avoiding pitfalls in hybrid multi-cloud computing. Let's start by examining multi-cloud, hybrid cloud, and hybrid multi-cloud and what makes them different from one another.
-
-### Hybrid vs. multi-cloud
-
-There are many conversations you may be having in your business around moving to the cloud. For example, you may want to take your on-premises computing capacity and turn it into your own private cloud. You may wish to provide developers with a cloud-like experience using the same resources you already have. A more traditional reason for expansion is to use external computing resources to augment those in your own data centers. The latter leads you to the various public cloud providers, as well as to our first definition, multi-cloud.
-
-#### Multi-cloud
-
-Multi-cloud means using multiple clouds from multiple providers for multiple tasks.
-
-![Multi-cloud][3]
-
-Figure 1. Multi-cloud IT with multiple isolated cloud environments
-
-Typically, multi-cloud refers to the use of several different public clouds in order to achieve greater flexibility, lower costs, avoid vendor lock-in, or use specific regional cloud providers.
-
-A challenge of the multi-cloud approach is achieving consistent policies, compliance, and management with different providers involved.
-
-Multi-cloud is mainly a strategy to expand your business while leveraging multi-vendor cloud solutions and spreading the risk of lock-in. Figure 1 shows the isolated nature of cloud services in this model, without any sort of coordination between the services and business applications. Each is managed separately, and applications are isolated to services found in their environments.
-
-#### Hybrid cloud
-
-Hybrid cloud solves issues where isolation and coordination are central to the solution. It is a combination of one or more public and private clouds with at least a degree of workload portability, integration, orchestration, and unified management.
-
-![Hybrid cloud][5]
-
-Figure 2. Hybrid clouds may be on or off premises, but must have a degree of interoperability
-
-The key issue here is that there is an element of interoperability, migration potential, and a connection between tasks running in public clouds and on-premises infrastructure, even if it's not always seamless or otherwise fully implemented.
-
-If your cloud model is missing portability, integration, orchestration, and management, then it's just a bunch of clouds, not a hybrid cloud.
-
-The cloud environments in Fig. 2 include at least one private and public cloud. They can be off or on premises, but they have some degree of the following:
-
- * Interoperability
- * Application portability
- * Data portability
- * Common management
-
-
-
-As you can probably guess, combining multi-cloud and hybrid cloud results in a hybrid multi-cloud. But what does that look like?
-
-### Hybrid multi-cloud
-
-Hybrid multi-cloud pulls together multiple clouds and provides the tools to ensure interoperability between the various services in hybrid and multi-cloud solutions.
-
-![Hybrid multi-cloud][7]
-
-Figure 3. Hybrid multi-cloud solutions using open technologies
-
-Bringing these together can be a serious challenge, but the result ensures better use of resources without isolation in their respective clouds.
-
-Fig. 3 shows an example of hybrid multi-cloud based on open technologies for interoperability, workload portability, and management.
-
-### Moving forward: Pitfalls of hybrid multi-cloud
-
-In part two of this series, we'll look at the first of three pitfalls to avoid with hybrid multi-cloud. Namely, why cost is not always the obvious motivator when determining how to transition your business to the cloud.
-
-This article is based on "[3 pitfalls everyone should avoid with hybrid multi-cloud][8]," a talk the authors will be giving at [Red Hat Summit 2018][9], which will be held May 8-10 in San Francisco. [Register by May 7][9] to save US$500 on registration. Use discount code **OPEN18** on the payment page to apply the discount.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
-
-作者:[Eric D.Schabell][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/eschabell
-[1]:https://opensource.com/users/roelh
-[3]:https://opensource.com/sites/default/files/u128651/multi-cloud.png (Multi-cloud)
-[5]:https://opensource.com/sites/default/files/u128651/hybrid-cloud.png (Hybrid cloud)
-[7]:https://opensource.com/sites/default/files/u128651/hybrid-multicloud.png (Hybrid multi-cloud)
-[8]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=153892
-[9]:https://www.redhat.com/en/summit/2018
diff --git a/sources/talk/20180404 Is the term DevSecOps necessary.md b/sources/talk/20180404 Is the term DevSecOps necessary.md
deleted file mode 100644
index 96b544e7c4..0000000000
--- a/sources/talk/20180404 Is the term DevSecOps necessary.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Is the term DevSecOps necessary?
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2)
-First came the term "DevOps."
-
-It has many different aspects. For some, [DevOps][1] is mostly about a culture valuing collaboration, openness, and transparency. Others focus more on key practices and principles such as automating everything, constantly iterating, and instrumenting heavily. And while DevOps isn’t about specific tools, certain platforms and tooling make it a more practical proposition. Think containers and associated open source cloud-native technologies like [Kubernetes][2] and CI/CD pipeline tools like [Jenkins][3]—as well as native Linux capabilities.
-
-However, one of the earliest articulated concepts around DevOps was the breaking down of the “wall of confusion” specifically between developers and operations teams. This was rooted in the idea that developers didn’t think much about operational concerns and operators didn’t think much about application development. Add the fact that developers want to move quickly and operators care more about (and tend to be measured on) stability than speed, and it’s easy to see why it was difficult to get the two groups on the same page. Hence, DevOps came to symbolize developers and operators working more closely together, or even merging roles to some degree.
-
-Of course, calls for improved communications and better-integrated workflows were never just about dev and ops. Business owners should be part of conversations as well. And there are the actual users of the software. Indeed, you can write up an almost arbitrarily long list of stakeholders concerned with the functionality, cost, reliability, and other aspects of software and its associated infrastructure. Which raises the question that many have asked: “What’s so special about security that we need a DevSecOps term?”
-
-I’m glad you asked.
-
-The first reason is simply that it serves as a useful reminder. If developers and operations were historically two of the most common silos in IT organizations, security was (and often still is) another. Security people are often thought of as conservative gatekeepers for whom “no” often seems the safest response to new software releases and technologies. Security’s job is to protect the company, even if that means putting the brakes on a speedy development process.
-
-Many aspects of traditional security, and even its vocabulary, can also seem arcane to non-specialists. This has also contributed to the notion that security is something apart from mainstream IT. I often share the following anecdote: A year or two ago I was leading a security discussion at a [DevOpsDays][4] event in London in which we were talking about traditional security roles. One of the participants raised his hand and admitted that he was one of those security gatekeepers. He went on to say that this was the first time in his career that he had ever been to a conference that wasn’t a traditional security conference like RSA. (He also noted that he was going to broaden both his and his team’s horizons more.)
-
-So DevSecOps perhaps shouldn’t be a needed term. But explicitly calling it out seems like a good practice at a time when software security threats are escalating.
-
-The second reason is that the widespread introduction of cloud-native technologies, particularly those built around containers, is closely tied to DevOps practices. These new technologies are both leading to and enabling greater scale and more dynamic infrastructures. Static security policies and checklists no longer suffice. Security must become a continuous activity. And it must be considered at every stage of your application and infrastructure lifecycle.
-
-**Here are a few examples:**
-
-You need to secure the pipeline and applications. You need to use trusted sources for content so that you know who has signed off on container images and that they’re up-to-date with the most recent patches. Your continuous integration system must integrate automated security testing. You’ll sometimes hear people talking about “shifting security left,” which means earlier in the process so that problems can be dealt with sooner. But it’s actually better to think about embedding security throughout the entire pipeline at each step of the testing, integration, deployment, and ongoing management process.
-
-You need to secure the underlying infrastructure. This means securing the host Linux kernel from container escapes and securing containers from each other. It means using a container orchestration platform with integrated security features. It means defending the network by using network namespaces to isolate applications from other applications within a cluster and isolate environments (such as dev, test, and production) from each other.
-
-And it means taking advantage of the broader security ecosystem such as container content scanners and vulnerability management tools.
-
-In short, it’s DevSecOps because modern application development and container platforms require a new type of Dev and a new type of Ops. But they also require a new type of Sec. Thus, DevSecOps.
-
-**[See our related story, [Security and the SRE: How chaos engineering can play a key role][5].]**
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/devsecops
-
-作者:[Gordon Haff][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ghaff
-[1]:https://opensource.com/resources/devops
-[2]:https://kubernetes.io/
-[3]:https://jenkins.io/
-[4]:https://www.devopsdays.org/
-[5]:https://opensource.com/article/18/3/through-looking-glass-security-sre
diff --git a/sources/talk/20180405 Rethinking -ownership- across the organization.md b/sources/talk/20180405 Rethinking -ownership- across the organization.md
deleted file mode 100644
index d41a3a86dc..0000000000
--- a/sources/talk/20180405 Rethinking -ownership- across the organization.md
+++ /dev/null
@@ -1,125 +0,0 @@
-Rethinking "ownership" across the organization
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chain.png?itok=sgAjswFf)
-Differences in organizational design don't necessarily make some organizations better than others—just better suited to different purposes. Any style of organization must account for its models of ownership (the way tasks get delegated, assumed, executed) and responsibility (the way accountability for those tasks gets distributed and enforced). Conventional organizations and open organizations treat these issues differently, however, and those differences can be jarring for anyone transitioning from one organizational model to another. But transitions are ripe for stumbling over—oops, I mean, learning from.
-
-Let's do that.
-
-### Ownership explained
-
-In most organizations (and according to typical project management standards), work on projects proceeds in five phases:
-
- * Initiation: Assess project feasibility, identify deliverables and stakeholders, assess benefits
- * Planning (Design): Craft project requirements, scope, and schedule; develop communication and quality plans
- * Executing: Manage task execution, implement plans, maintain stakeholder relationships
- * Monitoring/Controlling: Manage project performance, risk, and quality of deliverables
- * Closing: Sign-off on completion requirements, release resources
-
-
-
-The list above is not exhaustive, but I'd like to add one phase that is often overlooked: the "Adoption" phase, frequently needed for strategic projects where a change to the culture or organization is required for "closing" or completion.
-
- * Adoption: Socializing the work of the project; providing communication, training, or integration into processes and standard workflows.
-
-
-
-Examining project phases is one way to contrast the expression of ownership and responsibility in organizations.
-
-### Two models, contrasted
-
-In my experience, "ownership" in a traditional software organization works like this.
-
-A manager or senior technical associate initiates a project with senior stakeholders and, with the authority to champion and guide the project, they bestow the project on an associate at some point during the planning and execution stages. Frequently, but not always, the groundwork or fundamental design of the work has already been defined and approved—sometimes even partially solved. Employees are expected to see the project through execution and monitoring to completion.
-
-Employees cut their teeth on a "starter project," where they prove their abilities to a management chain (for example, I recall several such starter projects that were already defined by a manager and architect, and I was assigned to help implement them). Employees doing a good job on a project for which they're responsible get rewarded with additional opportunities, like a coveted assignment, a new project, or increased responsibility.
-
-An associate acting as "owner" of work is responsible and accountable for that work (if someone, somewhere, doesn't do their job, then the responsible employee either does the necessary work herself or alerts a manager to the problem.) A sense of ownership begins to feel stable over time: Employees generally work on the same projects, and in the same areas for an extended period. For some employees, it means the development of deep expertise. That's because the social network has tighter integration between people and the work they do, so moving around and changing roles and projects is rather difficult.
-
-This process works differently in an open organization.
-
-Associates continually define the parameters of responsibility and ownership in an open organization—typically in light of their interests and passions. Associates have more agency to perform all the stages of the project themselves, rather than have pre-defined projects assigned to them. This places additional emphasis on leadership skills in an open organization, because the process is less about one group of people making decisions for others, and more about how an associate manages responsibilities and ownership (whether or not they roughly follow the project phases while being inclusive, adaptable, and community-focused, for example).
-
-Being responsible for all project phases can make ownership feel more risky for associates in an open organization. Proposing a new project, designing it, and leading its implementation takes initiative and courage—especially when none of this is pre-defined by leadership. It's important to get continuous buy-in, which comes with questions, criticisms, and resistance not only from leaders but also from peers. By default, in open organizations this makes associates leaders; they do much the same work that higher-level leaders do in conventional organizations. And incidentally, this is why Jim Whitehurst, in The Open Organization, cautions us about the full power of "transparency" and the trickiness of getting people's real opinions and thoughts whether we like them or not. The risk is not as high in a traditional organization, because in those organizations leaders manage some of it by shielding associates from heady discussions that arise.
-
-The reward in an Open Organization is more opportunity—offers of new roles, promotions, raises, etc., much like in a conventional organization. Yet in the case of open organizations, associates have developed reputations of excellence based on their own initiatives, rather than on pre-sanctioned opportunities from leadership.
-
-### Thinking about adoption
-
-Any discussion of ownership and responsibility involves addressing the issue of buy-in, because owning a project means we are accountable to our sponsors and users—our stakeholders. We need our stakeholders to buy-into our idea and direction, or we need users to adopt an innovation we've created with our stakeholders. Achieving buy-in for ideas and work is important in each type of organization, and it's difficult in both traditional and open systems—but for different reasons.
-
-Penetrating a traditional organization's closely knit social ties can be difficult, and it takes time. In such "command-and-control" environments, one would think that employees are simply "forced" to do whatever leaders want them to do. In some cases that's true (e.g., a travel reimbursement system). However, with more innovative programs, this may not be the case; the adoption of a program, tool, or process can be difficult to achieve by fiat, just like in an open organization. And yet these organizations tend to reduce redundancies of work and effort, because "ownership" here involves leaders exerting responsibility over clearly defined "domains" (and because those domains don't change frequently, knowing "who's who"—who's in charge, who to contact with a request or inquiry or idea—can be easier).
-
-Open organizations better allow highly motivated associates, who are ambitious and skilled, to drive their careers. But support for their ideas is required across the organization, rather than from leadership alone. Points of contact and sources of immediate support can be less obvious, and this means achieving ownership of a project or acquiring new responsibility takes more time. And even then someone's idea may never get adopted. A project's owner can change—and the idea of "ownership" itself is more flexible. Ideas that don't get adopted can even be abandoned, leaving a great idea unimplemented or incomplete. Because any associate can "own" an idea in an open organization, these organizations tend to exhibit more redundancy. (Some people immediately think this means "wasted effort," but I think it can augment the implementation and adoption of innovative solutions. By comparing these organizations, we can also see why Jim Whitehurst calls this kind of culture "chaotic" in The Open Organization).
-
-### Two models of ownership
-
-In my experience, I've seen very clear differences between conventional and open organizations when it comes to the issues of ownership and responsibility.
-
-In a traditional organization:
-
- * I couldn't "own" things as easily
- * I felt frustrated, wanting to take initiative and always needing permission
- * I could more easily see who was responsible because stakeholder responsibility was more clearly sanctioned and defined
- * I could more easily "find" people, because the organizational network was more fixed and stable
- * I more clearly saw what needed to happen (because leadership was more involved in telling me).
-
-
-
-Over time, I've learned the following about ownership and responsibility in an open organization:
-
- * People can feel good about what they are doing because the structure rewards behavior that's more self-driven
- * Responsibility is less clear, especially in situations where there's no leader
- * In cases where open organizations have "shared responsibility," there is the possibility that no one in the group identifies with being responsible; often there is a lack of role clarity ("who should own this?")
- * More people participate
- * Someone's leadership skills must be stronger because everyone is "on their own"; you are the leader.
-
-
-
-### Making it work
-
-On the subject of ownership, each type of organization can learn from the other. The important thing to remember here: Don't make changes to one open or conventional value without considering all the values in both organizations.
-
-Sound confusing? Maybe these tips will help.
-
-If you're a more conventional organization trying to act more openly:
-
- * Allow associates to take ownership out of a passion or interest that aligns with the strategic goals of the organization. This enactment of meritocracy can help them build a reputation for excellence and execution.
 * But don't be afraid to sprinkle in a bit of "high-level perspective" in the spirit of transparency; that is, an associate should clearly communicate plans to their leadership, so the initiative doesn't create irrelevant or unneeded projects.
- * Involving an entire community (as when, for example, the associate gathers feedback from multiple stakeholders and user groups) aids buy-in and creates beneficial feedback from the diversity of perspectives, and this helps direct the work.
- * Exploring the work with the community [doesn't mean having to come to consensus with thousands of people][1]. Use the [Open Decision Framework][2] to set limits and be transparent about what those limits are, so that feedback and participation are organized and boundaries are understood.
-
-
-
-If you're already an open organization, then you should remember:
-
- * Although associates initiate projects from "the bottom up," leadership needs to be involved to provide guidance, input to the vision, and circulate centralized knowledge about ownership and responsibility, creating a synchronicity of engagement that is transparent to the community.
- * Ownership creates responsibility, and the definition and degree of these should be something both associates and leaders agree upon, increasing the transparency of expectations and accountability during the project. Don't make this a matter of oversight or babysitting, but rather [a collaboration where both parties give and take][3]—associates initiate, leaders guide; associates own, leaders support.
-
-
-
-Leadership education and mentorship, as it pertains to a particular organization, needs to be available to proactive associates, especially since there is often a huge difference between supporting individual contributors and guiding and coordinating a multiplicity of contributions.
-
-["Owning your own career"][4] can be difficult when "ownership" isn't a concept an organization completely understands.
-
-[Subscribe to our weekly newsletter][5] to learn more about open organizations.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/18/4/rethinking-ownership-across-organization
-
-作者:[Heidi Hess von Ludewig][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/heidi-hess-von-ludewig
-[1]:https://opensource.com/open-organization/17/8/achieving-alignment-in-openorg
-[2]:https://opensource.com/open-organization/resources/open-decision-framework
-[3]:https://opensource.com/open-organization/17/11/what-is-collaboration
-[4]:https://opensource.com/open-organization/17/12/drive-open-career-forward
-[5]:https://opensource.com/open-organization/resources/newsletter
diff --git a/sources/talk/20180410 Microservices Explained.md b/sources/talk/20180410 Microservices Explained.md
deleted file mode 100644
index 1d7e946a12..0000000000
--- a/sources/talk/20180410 Microservices Explained.md
+++ /dev/null
@@ -1,61 +0,0 @@
-Microservices Explained
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-microservices.jpg?itok=GpoWiDeG)
-Microservices is not a new term. Like containers, the concept has been around for a while, but it’s become a buzzword recently as many companies embark on their cloud native journey. But what exactly does the term microservices mean? Who should care about it? In this article, we’ll take a deep dive into the microservices architecture.
-
-### Evolution of microservices
-
-Patrick Chanezon, Chief Developer Advocate for Docker, provided a brief history lesson during our conversation: In the late 1990s, developers started to structure their applications into monoliths where massive apps had all features and functionalities baked into them. Monoliths were easy to write and manage. Companies could have a team of developers who built their applications based on customer feedback through sales and marketing teams. The entire developer team would work together to build tightly glued pieces as an app that could run on their own app servers. It was a popular way of writing and delivering web applications.
-
-There is a flip side to the monolithic coin. Monoliths slow everything and everyone down. It’s not easy to update one service or feature of the application. The entire app needs to be updated and a new version released. It takes time. There is a direct impact on businesses. Organizations could not respond quickly to keep up with new trends and changing market dynamics. Additionally, scalability was challenging.
-
-Around 2011, SOA (Service Oriented Architecture) became popular, letting developers cram multi-tier web applications as software services inside a VM (virtual machine). It did allow them to add or update services independently of each other. However, scalability still remained a problem.
-
-“The scale out strategy then was to deploy multiple copies of the virtual machine behind a load balancer. The problems with this model are several. Your services can not scale or be upgraded independently as the VM is your lowest granularity for scale. VMs are bulky as they carry extra weight of an operating system, so you need to be careful about simply deploying multiple copies of VMs for scaling,” said Madhura Maskasky, co-founder and VP of Product at Platform9.
-
-Some five years ago when Docker hit the scene and containers became popular, SOA faded out in favor of “microservices” architecture. “Containers and microservices fix a lot of these problems. Containers enable deployment of microservices that are focused and independent, as containers are lightweight. The Microservices paradigm, combined with a powerful framework with native support for the paradigm, enables easy deployment of independent services as one or more containers as well as easy scale out and upgrade of these,” said Maskasky.
-
-### What are microservices?
-
-Basically, a microservice architecture is a way of structuring applications. With the rise of containers, people have started to break monoliths into microservices. “The idea is that you are building your application as a set of loosely coupled services that can be updated and scaled separately under the container infrastructure,” said Chanezon.
-
-“Microservices seem to have evolved from the more strictly defined service-oriented architecture (SOA), which in turn can be seen as an expression of object-oriented programming concepts for networked applications. Some would call it just a rebranding of SOA, but the term “microservices” often implies the use of even smaller functional components than SOA, RESTful APIs exchanging JSON, lighter-weight servers (often containerized), and modern web technologies and protocols,” said Troy Topnik, SUSE Senior Product Manager, Cloud Application Platform.
-
-Microservices provides a way to scale the development and delivery of large, complex applications by breaking them down into individual components that can evolve independently of each other.
-
-“Microservices architecture brings more flexibility through the independence of services, enabling organizations to become more agile in how they deliver new business capabilities or respond to changing market conditions. Microservices allows for using the ‘right tool for the right task’, meaning that apps can be developed and delivered by the technology that will be best for the task, rather than being locked into a single technology, runtime or framework,” said Christian Posta, senior principal application platform specialist, Red Hat.
-
-### Who consumes microservices?
-
-“The main consumers of microservices architecture patterns are developers and application architects,” said Topnik. As far as admins and DevOps engineers are concerned, their role is to build and maintain the infrastructure and processes that support microservices.
-
-“Developers have been building their applications traditionally using various design patterns for efficient scale out, high availability and lifecycle management of their applications. Microservices done along with the right orchestration framework help simplify their lives by providing a lot of these features out of the box. A well-designed application built using microservices will showcase its benefits to the customers by being easy to scale, upgrade, debug, but without exposing the end customer to complex details of the microservices architecture,” said Maskasky.
-
-### Who needs microservices?
-
-Everyone. Microservices is the modern approach to writing and deploying applications more efficiently. If an organization cares about being able to write and deploy its services at a faster rate, it should care about microservices. If you want to stay ahead of your competitors, microservices is the fastest route. Security is another major benefit of the microservices architecture, as this approach allows developers to keep up with security and bug fixes, without having to worry about downtime.
-
-“Application developers have always known that they should build their applications in a modular and flexible way, but now that enough of them are actually doing this, those that don’t risk being left behind by their competitors,” said Topnik.
-
-If you are building a new application, you should design it as microservices. You never have to hold up a release if one team is late. New functionalities are available when they're ready, and the overall system never breaks.
-
-“We see customers using this as an opportunity to also fix other problems around their application deployment -- such as end-to-end security, better observability, deployment and upgrade issues,” said Maskasky.
-
-Failing to do so means you would be stuck in the traditional stack, where microservices won't be able to add any value. If you are building new applications, microservices is the way to go.
-
-Learn more about cloud-native at [KubeCon + CloudNativeCon Europe][1], coming up May 2-4 in Copenhagen, Denmark.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/2018/4/microservices-explained
-
-作者:[SWAPNIL BHARTIYA][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/arnieswap
-[1]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/attend/register/
diff --git a/sources/talk/20180412 Management, from coordination to collaboration.md b/sources/talk/20180412 Management, from coordination to collaboration.md
deleted file mode 100644
index 1262f88300..0000000000
--- a/sources/talk/20180412 Management, from coordination to collaboration.md
+++ /dev/null
@@ -1,71 +0,0 @@
-Management, from coordination to collaboration
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab2.png?itok=uMO9zn5U)
-
-Any organization is fundamentally a pattern of interactions between people. The nature of those interactions—their quality, their frequency, their outcomes—is the most important product an organization can create. Perhaps counterintuitively, recognizing this fact has never been more important than it is today—a time when digital technologies are reshaping not only how we work but also what we do when we come together.
-
-
-And yet many organizational leaders treat those interactions between people as obstacles or hindrances to avoid or eliminate, rather than as the powerful sources of innovation they really are.
-
-That's why we're observing that some of the most successful organizations today are those capable of shifting the way they think about the value of the interactions in the workplace. And to do that, they've radically altered their approach to management and leadership.
-
-### Moving beyond mechanical management
-
-Simply put, traditionally managed organizations treat unanticipated interactions between stakeholders as potentially destructive forces—and therefore as costs to be mitigated.
-
-This view has a long, storied history in the field of economics. But it's perhaps nowhere more clear than in the early writing of Nobel Prize-winning economist [Ronald Coase][1]. In 1937, Coase published "[The Nature of the Firm][2]," an essay about the reasons people organized into firms to work on large-scale projects—rather than tackle those projects alone. Coase argued that when the cost of coordinating workers together inside a firm is less than that of similar market transactions outside, people will tend to organize so they can reap the benefits of lower operating costs.
-
-But at some point, Coase's theory goes, the work of coordinating interactions between so many people inside the firm actually outweighs the benefits of having an organization in the first place. The complexity of those interactions becomes too difficult to handle. Management, then, should serve the function of decreasing this complexity. Its primary goal is coordination, eliminating the costs associated with messy interpersonal interactions that could slow the firm and reduce its efficiency. As one Fortune 100 CEO recently told me, "Failures happen most often around organizational handoffs."
-
-This makes sense to people practicing what I've called "[mechanical management][3]," where managing people is the act of keeping them focused on specific, repeatable, specialized tasks. Here, management's key function is optimizing coordination costs—ensuring that every specialized component of the finely-tuned organizational machine doesn't impinge on the others and slow them down. Managers work to avoid failures by coordinating different functions across the organization (accounts payable, research and development, engineering, human resources, sales, and so on) to get them to operate toward a common goal. And managers create value by controlling information flows, intervening only when functions become misaligned.
-
-Today, when so many of these traditionally well-defined tasks have become automated, value creation is much more a result of novel innovation and problem solving—not finding new ways to drive efficiency from repeatable processes. But numerous studies demonstrate that innovative, problem-solving activity occurs much more regularly when people work in cross-functional teams—not as isolated individuals or groups constrained by single-functional silos. This kind of activity can lead to what some call "accidental integration": the serendipitous innovation that occurs when old elements combine in new and unforeseen ways.
-
-That's why working collaboratively has now become a necessity that managers need to foster, not eliminate.
-
-### From coordination to collaboration
-
-Reframing the value of the firm—from something that coordinated individual transactions to something that produces novel innovations—means rethinking the value of the relations at the core of our organizations. And that begins with reimagining the task of management, which is no longer concerned primarily with minimizing coordination costs but maximizing cooperation opportunities.
-
-Too few of our tried-and-true management practices have this goal. If they're seeking greater innovation, managers need to encourage more interactions between people in different functional areas, not fewer. A cross-functional team may not be as efficient as one composed of people with the same skill sets. But a cross-functional team is more likely to be the one connecting points between elements in your organization that no one had ever thought to connect (the one more likely, in other words, to achieve accidental integration).
-
-I have three suggestions for leaders interested in making this shift:
-
-First, define organizations around processes, not functions. We've seen this strategy work in enterprise IT, for example, in the case of [DevOps][4], where teams emerge around end goals (like a mobile application or a website), not singular functions (like developing, testing, and production). In DevOps environments, the same team that writes the code is responsible for maintaining it once it's in production. (We've found that when the same people who write the code are the ones woken up when it fails at 3 a.m., we get better code.)
-
-Second, define work around the optimal organization rather than the organization around the work. Amazon is a good example of this strategy. Teams usually stick to the "[Two Pizza Rule][5]" when establishing optimal conditions for collaboration. In other words, Amazon leaders have determined that the best-sized team for maximum innovation is about 10 people, or a group they can feed with two pizzas. If the problem gets bigger than that two-pizza team can handle, they split the problem into two simpler problems, dividing the work between multiple teams rather than adding more people to the single team.
-
-And third, to foster creative behavior and really get people cooperating with one another, do whatever you can to cultivate a culture of honest and direct feedback. Be straightforward and, as I wrote in The Open Organization, let the sparks fly; have frank conversations and let the best ideas win.
-
-### Let it go
-
-I realize that asking managers to significantly shift the way they think about their roles can lead to fear and skepticism. Some managers define their performance (and their very identities) by the control they exert over information and people. But the more you dictate the specific ways your organization should do something, the more static and brittle that activity becomes. Agility requires letting go—giving up a certain degree of control.
-
-Front-line managers will see their roles morph from dictating and monitoring to enabling and supporting. Instead of setting individual-oriented goals, they'll need to set group-oriented goals. Instead of developing individual incentives, they'll need to consider group-oriented incentives.
-
-Because ultimately, their goal should be to [create the context in which their teams can do their best work][6].
-
-[Subscribe to our weekly newsletter][7] to learn more about open organizations.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/18/4/management-coordination-collaboration
-
-作者:[Jim Whitehurst][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/remyd
-[1]:https://news.uchicago.edu/article/2013/09/02/ronald-h-coase-founding-scholar-law-and-economics-1910-2013
-[2]:http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.1937.tb00002.x/full
-[3]:https://opensource.com/open-organization/18/2/try-learn-modify
-[4]:https://enterprisersproject.com/devops
-[5]:https://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork
-[6]:https://opensource.com/open-organization/16/3/what-it-means-be-open-source-leader
-[7]:https://opensource.com/open-organization/resources/newsletter
diff --git a/sources/talk/20180416 For project safety back up your people, not just your data.md b/sources/talk/20180416 For project safety back up your people, not just your data.md
deleted file mode 100644
index 0dc6d41fa5..0000000000
--- a/sources/talk/20180416 For project safety back up your people, not just your data.md
+++ /dev/null
@@ -1,79 +0,0 @@
-For project safety back up your people, not just your data
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
-The [FSF][1] was founded in 1985, Perl in 1987 ([happy 30th birthday, Perl][2]!), and Linux in 1991. The [term open source][3] and the [Open Source Initiative][4] both came into being in 1998 (and [turn 20 years old][5] in 2018). Since then, free and open source software has grown to become the default choice for software development, enabling incredible innovation.
-
-We, the greater open source community, have come of age. Millions of open source projects exist today, and each year the [GitHub Octoverse][6] reports millions of new public repositories. We rely on these projects every day, and many of us could not operate our services or our businesses without them.
-
-So what happens when the leaders of these projects move on? How can we help ease those transitions while ensuring that the projects thrive? By teaching and encouraging **succession planning**.
-
-### What is succession planning?
-
-Succession planning is a popular topic among business executives, boards of directors, and human resources professionals, but it doesn't often come up with maintainers of free and open source projects. Because the concept is common in business contexts, that's where you'll find most resources and advice about establishing a succession plan. As you might expect, most of these articles aren't directly applicable to FOSS, but they do form a springboard from which we can launch our own ideas about succession planning.
-
-According to [Wikipedia][7]:
-
-> Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire, or die.
-
-In my opinion, this definition doesn't apply very well to free and open source software projects. I primarily object to the use of the term leaders. For the collaborative projects of FOSS, everyone can be some form of leader. Roles other than "project founder" or "benevolent dictator for life" are just as important. Any project role that is measured by bus factor is one that can benefit from succession planning.
-
-> A project's bus factor is the number of team members who, if hit by a bus, would endanger the smooth operation of the project. The smallest and worst bus factor is 1: when only a single person's loss would put the project in jeopardy. It's a somewhat grim but still very useful concept.
-
-I propose that instead of viewing succession planning as a leadership pipeline, free and open source projects should view it as a skills pipeline. What sorts of skills does your project need to continue functioning well, and how can you make sure those skills always exist in your community?
-
-### Benefits of succession planning
-
-When I talk to project maintainers about succession planning, they often respond with something like, "We've been pretty successful so far without having to think about this. Why should we start now?"
-
-Aside from the fact that the phrase "We've always done it this way" is probably one of the most dangerous in the English language, and that hearing (or saying) it should send up red flags in any community, succession planning provides plenty of very real benefits:
-
- * **Continuity** : When someone leaves, what happens to the tasks they were performing? Succession planning helps ensure those tasks continue uninterrupted and no one is left hanging.
- * **Avoiding a power vacuum** : When a person leaves a role with no replacement, it can lead to confusion, delays, and often most damaging, political woes. After all, it's much easier to fix delays than hurt feelings. A succession plan helps alleviate the insecure and unstable time when someone in a vital role moves on.
- * **Increased project/organization longevity** : The thinking required for succession planning is the same sort of thinking that contributes to project longevity. Ensuring continuity in leadership, culture, and productivity also helps ensure the project will continue. It will evolve, but it will survive.
- * **Reduced workload/pressure on current leaders** : When a single team member performs a critical role in the project, they often feel pressure to be constantly "on." This can lead to burnout and worse, resignations. A succession plan ensures that all important individuals have a backup or successor. The knowledge that someone can take over is often enough to reduce the pressure, but it also means that key players can take breaks or vacations without worrying that their role will be neglected in their absence.
- * **Talent development** : Members of the FOSS community talk a lot about mentoring these days, and that's great. However, most of the conversation is around mentoring people to contribute code to a project. There are many different ways to contribute to free and open source software projects beyond programming. A robust succession plan recognizes these other forms of contribution and provides mentoring to prepare people to step into critical non-programming roles.
- * **Inspiration for new members** : It can be very motivational for new or prospective community members to see that a project uses its succession plan. Not only does it show them that the project is well-organized and considers its own health and welfare as well as that of its members, but it also clearly shows new members how they can grow in the community. An obvious path to critical roles and leadership positions inspires new members to stick around to walk that path.
- * **Diversity of thoughts/get out of a rut** : Succession plans provide excellent opportunities to bring in new people and ideas to the critical roles of a project. [Studies show][8] that diverse leadership teams are more effective and the projects they lead are more innovative. Using your project's succession plan to mentor people from different backgrounds and with different perspectives will help strengthen and evolve the project in a healthy way.
- * **Enabling meritocracy** : Unfortunately, what often passes for meritocracy in many free and open source projects is thinly veiled hostility toward new contributors and diverse opinions—hostility that's delivered from within an echo chamber. Meritocracy without a mentoring program and healthy governance structure is simply an excuse to practice subjective discrimination while hiding behind unexpressed biases. A well-executed succession plan helps teams reach the goal of a true meritocracy. What counts as merit for any given role, and how to reach that level of merit, are openly, honestly, and completely documented. The entire community will be able to see and judge which members are on the path or deserve to take on a particular critical role.
-
-
-
-### Why it doesn't happen
-
-Succession planning isn't a panacea, and it won't solve all problems for all projects, but as described above, it offers a lot of worthwhile benefits to your project.
-
-Despite that, very few free and open source projects or organizations put much thought into it. I was curious why that might be, so I asked around. I learned that the reasons for not having a succession plan fall into one of five different buckets:
-
- * **Too busy** : Many people recognize succession planning (or lack thereof) as a problem for their project but just "hadn't ever gotten around to it" because there's "always something more important to work on." I understand and sympathize with this, but I suspect the problem may have more to do with prioritization than with time availability.
- * **Don't think of it** : Some people are so busy and preoccupied that they haven't considered, "Hey, what would happen if Jen had to leave the project?" This never occurs to them. After all, Jen's always been there when they need her, right? And that will always be the case, right?
- * **Don't want to think of it** : Succession planning shares a trait with estate planning: It's associated with negative feelings like loss and can make people address their own mortality. Some people are uncomfortable with this and would rather not consider it at all than take the time to make the inevitable easier for those they leave behind.
- * **Attitude of current leaders** : A few of the people with whom I spoke didn't want to recognize that they're replaceable, or to consider that they may one day give up their power and influence on the project. While this was (thankfully) not a common response, it was alarming enough to deserve its own bucket. Failure of someone in a critical role to recognize or admit that they won't be around forever can set a project up for failure in the long run.
- * **Don't know where to start** : Many people I interviewed realize that succession planning is something that their project should be doing. They were even willing to carve out the time to tackle this very large task. What they lacked was any guidance on how to start the process of creating a succession plan.
-
-
-
-As you can imagine, something as important and people-focused as a succession plan isn't easy to create, and it doesn't happen overnight. Also, there are many different ways to do it. Each project has its own needs and critical roles. One size does not fit all where succession plans are concerned.
-
-There are, however, some guidelines for how every project could proceed with the succession plan creation process. I'll cover these guidelines in my next article.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/passing-baton-succession-planning-foss-leadership
-
-作者:[VM(Vicky) Brasseur][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/vmbrasseur
-[1]:http://www.fsf.org
-[2]:https://opensource.com/article/17/10/perl-turns-30
-[3]:https://opensource.com/article/18/2/coining-term-open-source-software
-[4]:https://opensource.org
-[5]:https://opensource.org/node/910
-[6]:https://octoverse.github.com
-[7]:https://en.wikipedia.org/wiki/Succession_planning
-[8]:https://hbr.org/2016/11/why-diverse-teams-are-smarter
diff --git a/sources/talk/20180417 How to develop the FOSS leaders of the future.md b/sources/talk/20180417 How to develop the FOSS leaders of the future.md
deleted file mode 100644
index a65dc9dabd..0000000000
--- a/sources/talk/20180417 How to develop the FOSS leaders of the future.md
+++ /dev/null
@@ -1,93 +0,0 @@
-How to develop the FOSS leaders of the future
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
-Do you hold a critical role in a free and open source software project? Would you like to make it easier for the next person to step into your shoes, while also giving yourself the freedom to take breaks and avoid burnout?
-
-Of course you would! But how do you get started?
-
-Before you do anything, remember that this is a free or open source project. As with all things in FOSS, your succession planning should happen in collaboration with others. The [Principle of Least Astonishment][1] also applies: Don't work on your plan in isolation, then spring it on the entire community. Work together and publicly, so no one is caught off guard when the cultural or governance changes start happening.
-
-### Identify and analyze critical roles
-
-As a project leader, your first step is to identify the critical roles in your community. While it can help to ask each community member what role they perform, it's important to realize that most people perform multiple roles. Make sure you consider every role that each community member plays in the project.
-
-Once you've identified the roles and determined which ones are critical to your project, the next step is to list all of the duties and responsibilities for each of those critical roles. Be very honest here. List the duties and responsibilities you think each role has, then ask the person who performs that role to list the duties the role actually has. You'll almost certainly find that the second list is longer than the first.
-
-### Refactor large roles
-
-During this process, have you discovered any roles that encompass a large number of duties and responsibilities? Large roles are like large methods in your code: They're a sign of a problem, and they need to be refactored to make them easier to maintain. One of the easiest and most effective steps in succession planning for FOSS projects is to split up each large role into two or more smaller roles and distribute these to other community members. With that one step, you've greatly improved the [bus factor][2] for your project. Even better, you've made each one of those new, smaller roles much more accessible and less intimidating for new community members. People are much more likely to volunteer for a role if it's not a massive burden.
-
-### Limit role tenure
-
-Another way to make a role more enticing is to limit its tenure. Community members will be more willing to step into roles that aren't open-ended. They can look at their life and work plans and ask themselves, "Can I take on this role for the next eighteen months?" (or whatever term limit you set).
-
-Setting term limits also helps those who are currently performing the role. They know when they can set aside those duties and move on to something else, which can help alleviate burnout. Also, setting a term limit creates a pool of people who have performed the role and are qualified to step in if needed, which can also mitigate burnout.
-
-### Knowledge transfer
-
-Once you've identified and defined the critical roles in your project, most of what remains is knowledge transfer. Even small projects involve a lot of moving parts and knowledge that needs to be where everyone can see, share, use, and contribute to it. What sort of knowledge should you be collecting? The answer will vary by project, needs, and role, but here are some of the most common (and commonly overlooked) types of information needed to implement a succession plan:
-
- * **Roles and their duties** : You've spent a lot of time identifying, analyzing, and potentially refactoring roles and their duties. Make sure this information doesn't get lost.
- * **Policies and procedures** : None of those duties occur in a vacuum. Each duty must be performed in a particular way (procedures) when particular conditions are met (policies). Take stock of these details for every duty of every role.
- * **Resources** : What accounts are associated with the project, or are necessary for it to operate? Who helps you with meetup space, sponsorship, or in-kind services? Such information is vital to project operation but can be easily lost when the responsible community member moves on.
- * **Credentials** : Ideally, every external service required by the project will use a login that goes to an email address designated for a specific role (`sre@project.org`) rather than to a personal address. Every role's address should include multiple people on the distribution list to ensure that important messages (such as downtime or bogus "forgot password" requests) aren't missed. The credentials for every service should be kept in a secure keystore, with access limited to the fewest number of people possible.
- * **Project history** : All community members benefit greatly from learning the history of the project. Collecting project history information can clarify why decisions were made in the past, for example, and reveal otherwise unexpressed requirements and values of the community. Project histories can also help new community members understand "inside jokes," jargon, and other cultural factors.
- * **Transition plans** : A succession plan doesn't do much good if project leaders haven't thought through how to transition a role from one person to another. How will you locate and prepare people to take over a critical role? Since the project has already done a lot of thinking and knowledge transfer, transition plans for each role may be easier to put together.
-
-
-
-Doing a complete knowledge transfer for all roles in a project can be an enormous undertaking, but the effort is worth it. To avoid being overwhelmed by such a daunting task, approach it one role at a time, finishing each one before you move on to the next. Limiting the scope in this way makes both progress and success much more likely.
-
-### Document, document, document!
-
-Succession planning takes time. The community will be making a lot of decisions and collecting a lot of information, so make sure nothing gets lost. It's important to document everything (not just in email threads). Where knowledge is concerned, documentation scales and people do not. Include even the things that you think are obvious—what's obvious to a more seasoned community member may be less so to a newbie, so don't skip steps or information.
-
-Gather these decisions, processes, policies, and other bits of information into a single place, even if it's just a collection of markdown files in the main project repository. The "how" and "where" of the documentation can be sorted out later. It's better to capture key information first and spend time [bike-shedding][3] a documentation system later.
-
-Once you've collected all of this information, you should understand that it's unlikely that anyone will read it. I know, it seems unfair, but that's just how things usually work out. The reason? There is simply too much documentation and too little time. To address this, add an abstract, or summary, at the top of each item. Often that's all a person needs, and if not, the complete document is there for a deep dive. Recognizing and adapting to how most people use documentation increases the likelihood that they will use yours.
-
-Above all, don't skip the documentation process. Without documentation, succession plans are impossible.
-
-### New leaders
-
-If you don't yet perform a critical role but would like to, you can contribute to the succession planning process while apprenticing your way into one of those roles.
-
-For starters, actively look for opportunities to learn and contribute. Shadow people in critical roles. You'll learn how the role is done, and you can document it to help with the succession planning process. You'll also get the opportunity to see whether it's a role you're interested in pursuing further.
-
-Asking for mentorship is a great way to get yourself closer to taking on a critical role in the project. Even if you haven't heard that mentoring is available, it's perfectly OK to ask about it. The people already in those roles are usually happy to mentor others, but often are too busy to think about offering mentorship. Asking is a helpful reminder to them that they should be helping to train people to take over their role when they need a break.
-
-As you perform your own tasks, actively seek out feedback. This will not only improve your skills, but it shows that you're interested in doing a better job for the community. This commitment will pay off when your project needs people to step into critical roles.
-
-Finally, as you communicate with more experienced community members, take note of anecdotes about the history of the project and how it operates. This history is very important, especially for new contributors or people stepping into critical roles. It provides the context necessary for new contributors to understand what things do or don't work and why. As you hear these stories, document them so they can be passed on to those who come after you.
-
-### Succession planning examples
-
-While too few FOSS projects are actively considering succession planning, some are doing a great job of trying to reduce their bus factor and prevent maintainer burnout.
-
-[Exercism][4] isn't just an excellent tool for gaining fluency in programming languages. It's also an [open source project][5] that goes out of its way to help contributors [land their first patch][6]. In 2016, the project reviewed the health of each language track and [discovered that many were woefully maintained][7]. There simply weren't enough people covering each language, so maintainers were burning out. The Exercism community recognized the risk this created and pushed to find new maintainers for as many language tracks as possible. As a result, the project was able to revive several tracks from near-death and develop a structure for inviting people to become maintainers.
-
-The purpose of the [Vox Pupuli][8] project is to serve as a sort of succession plan for the [Puppet module][9] community. When a maintainer no longer wishes or is able to work on their module, they can bequeath it to the Vox Pupuli community. This community of 30 collaborators shares responsibility for maintaining all the modules it accepts into the project. The large number of collaborators ensures that no single person bears the burden of maintenance while also providing a long and fruitful life for every module in the project.
-
-These are just two examples of how some FOSS projects are tackling succession planning. Share your stories in the comments below.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/succession-planning-how-develop-foss-leaders-future
-
-作者:[VM(Vicky) Brasseur)][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/vmbrasseur
-[1]:https://en.wikipedia.org/wiki/Principle_of_least_astonishment
-[2]:https://en.wikipedia.org/wiki/Bus_factor
-[3]:https://en.wikipedia.org/wiki/Law_of_triviality
-[4]:http://exercism.io
-[5]:https://github.com/exercism/exercism.io
-[6]:https://github.com/exercism/exercism.io/blob/master/CONTRIBUTING.md
-[7]:https://tinyletter.com/exercism/letters/exercism-track-health-check-new-maintainers
-[8]:https://voxpupuli.org
-[9]:https://forge.puppet.com
diff --git a/sources/talk/20180418 Is DevOps compatible with part-time community teams.md b/sources/talk/20180418 Is DevOps compatible with part-time community teams.md
deleted file mode 100644
index e78b96959f..0000000000
--- a/sources/talk/20180418 Is DevOps compatible with part-time community teams.md
+++ /dev/null
@@ -1,73 +0,0 @@
-Is DevOps compatible with part-time community teams?
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1)
-DevOps seems to be the talk of the IT world of late—and for good reason. DevOps has streamlined the process and production of IT development and operations. However, there is also an upfront cost to embracing a DevOps ideology, in terms of time, effort, knowledge, and financial investment. Larger companies may have the bandwidth, budget, and time to make the necessary changes, but is it feasible for part-time, resource-strapped communities?
-
-Part-time communities are teams of like-minded people who take on projects outside of their normal work schedules. The members of these communities are driven by passion and a shared purpose. For instance, one such community is the [ALM | DevOps Rangers][1]. With 100 rangers engaged across the globe, a DevOps solution may seem daunting; nonetheless, they took on the challenge and embraced the ideology. Through their example, we've learned that DevOps is not only feasible but desirable in smaller teams. To read about their transformation, check out [How DevOps eliminates development bottlenecks][2].
-
-> “DevOps is the union of people, process, and products to enable continuous delivery of value to our end customers.” - Donovan Brown
-
-### The cost of DevOps
-
-As stated above, there is an upfront "cost" to DevOps. The cost manifests itself in many forms, such as the time and collaboration between development, operations, and other stakeholders, planning a smooth-flowing process that delivers continuous value, finding the best DevOps products, and training the team in new technologies, to name a few. This aligns directly with Donovan's definition of DevOps, in fact—a **process** for delivering **continuous value** and the **people** who make that happen.
-
-Streamlined DevOps takes a lot of planning and training just to create the process, and that doesn't even consider the testing phase. We also can't forget the existing in-flight projects that need to be converted into the new system. While the cost increases the more pervasive the transformation—for instance, if an organization aims to unify its entire development organization under a single process, then that would cost more versus transforming a single pilot or subset of the entire portfolio—these upfront costs must be addressed regardless of their scale. There are a lot of resources and products already out there that can be implemented for a smoother transition—but again, we face the time and effort that will be necessary just to research which ones might work best.
-
-In the case of the ALM | DevOps Rangers, they had to halt all projects for a couple of sprints to set up the initial process. Many organizations would not be able to do that. Even part-time groups might have very good reasons to keep things moving, which only adds to the complexity. In such scenarios, additional cutover planning (and therefore additional cost) is needed, and the overall state of the community is one of flux and change, which adds risk, which—you guessed it—requires more cost to mitigate.
-
-There is also an ongoing "cost" that teams will face with a DevOps mindset: Simple maintenance of the system, training and transitioning new team members, and keeping up with new, improved technologies are all a part of the process.
-
-### DevOps for a part-time community
-
-Whereas larger companies can dedicate a single manager or even a team to the task of overseeing the continuous integration and continuous deployment (CI/CD) pipelines, part-time community teams don't have the bandwidth to spare. With such a massive undertaking, we must ask: Is it even worth it for groups with fewer resources to take on DevOps for their community? Or should they abandon the idea of DevOps altogether?
-
-The answer to that is dependent on a few variables, such as the ability of the teams to be self-managing, the time and effort each member is willing to put into the transformation, and the dedication of the community to the process.
-
-### Example: Benefits of DevOps in a part-time community
-
-Luckily, we aren't without examples to demonstrate just how DevOps can benefit a smaller group. Let's take a quick look at the ALM Rangers again. The results from their transformation help us understand how DevOps changed their community:
-
-![](https://opensource.com/sites/default/files/images/life-uploads/devops.png)
-
-As illustrated, there are some huge benefits for part-time community teams. Planning goes from long, arduous design sessions to a quick prototyping and storyboarding process. Builds become automated, reliable, and resilient. Testing and bug detection are proactive instead of reactive, which turns into a happier clientele. Multiple full-time program managers are replaced with self-managing teams with a single part-time manager to oversee projects. Teams become smaller and more efficient, which equates to higher production rates and higher-quality project delivery. With results like these, it's hard to argue against DevOps.
-
-Still, the upfront and ongoing costs aren't right for every community. The number-one most important aspect of any DevOps transformation is the mindset of the people involved. Adopting the idea of self-managing teams who work in autonomy instead of the traditional chain-of-command scheme can be a challenge for any group. The members must be willing to work independently without a lot of oversight and take ownership of their features and user experience, but at the same time, work in a setting that is fully transparent to the rest of the community. **The success or failure of a DevOps strategy lies on the team.**
-
-### Making the DevOps transition in 4 steps
-
-Another important question to ask: How can a low-bandwidth group make such a massive transition? The good news is that a DevOps transformation doesn’t need to happen all at once. Taken in smaller, more manageable steps, organizations of any size can embrace DevOps.
-
- 1. Determine why DevOps may be the solution you need. Are your projects bottlenecking? Are they running over budget and over time? Of course, these concerns are common for any community, big or small. Answering these questions leads us to step two:
- 2. Develop the right framework to improve the engineering process. DevOps is all about automation, collaboration, and streamlining. Rather than trying to fit everyone into the same process box, the framework should support the work habits, preferences, and delivery needs of the community. Some broad standards should be established (for example, that all teams use a particular version control system). Beyond that, however, let the teams decide their own best process.
- 3. Use the current products that are already available if they meet your needs. Why reinvent the wheel?
- 4. Finally, implement and test the actual DevOps solution. This is, of course, where the actual value of DevOps is realized. There will likely be a few issues and some heartburn, but it will all be worth it in the end because, once established, the products of the community’s work will be nimbler and faster for the users.
-
-
-
-### Reuse DevOps solutions
-
-One benefit to creating effective CI/CD pipelines is the reusability of those pipelines. Although there is no one-size-fits-all solution, anyone can adopt a process. There are several pre-made templates available for you to examine, such as build templates on VSTS, ARM templates to deploy Azure resources, and "cookbook"-style textbooks from technical publishers. Once it identifies a process that works well, a community can also create its own template by defining and establishing standards and making that template easily discoverable by the entire community. For more information on DevOps journeys and tools, check out [this site][3].
-
-### Summary
-
-Overall, the success or failure of DevOps relies on the culture of a community. It doesn't matter if the community is a large, resource-rich enterprise or a small, resource-sparse, part-time group. DevOps will still bring solid benefits. The difference is in the approach for adoption and the scale of that adoption. There are both upfront and ongoing costs, but the value greatly outweighs those costs. Communities can use any of the powerful tools available today for their pipelines, and they can also leverage reusability, such as templates, to reduce upfront implementation costs. DevOps is most certainly feasible—and even critical—for the success of part-time community teams.
-
-**[See our related story,[How DevOps eliminates development bottlenecks][4].]**
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/devops-compatible-part-time-community-teams
-
-作者:[Edward Fry][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/edwardf
-[1]:https://github.com/ALM-Rangers
-[2]:https://opensource.com/article/17/11/devops-rangers-transformation
-[3]:https://www.visualstudio.com/devops/
-[4]:https://opensource.com/article/17/11/devops-rangers-transformation
diff --git a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
deleted file mode 100644
index 29e4ea2f48..0000000000
--- a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
+++ /dev/null
@@ -1,109 +0,0 @@
-3 tips for organizing your open source project's workflow on GitHub
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
-
-Managing an open source project is challenging work, and the challenges grow as a project grows. Eventually, a project may need to meet different requirements and span multiple repositories. These problems aren't technical, but they are important to solve to scale a technical project. [Business process management][1] methodologies such as agile and [kanban][2] bring a method to the madness. Developers and managers can make realistic decisions for estimating deadlines and team bandwidth with an organized development focus.
-
-At the [UNICEF Office of Innovation][3], we use GitHub project boards to organize development on the MagicBox project. [MagicBox][4] is a full-stack application and open source platform to serve and visualize data for decision-making in humanitarian crises and emergencies. The project spans multiple GitHub repositories and works with multiple developers. With GitHub project boards, we organize our work across multiple repositories to better understand development focus and team bandwidth.
-
-Here are three tips from the UNICEF Office of Innovation on how to organize your open source projects with the built-in project boards on GitHub.
-
-### 1\. Bring development discussion to issues and pull requests
-
-Transparency is a critical part of an open source community. When mapping out new features or milestones for a project, the community needs to be able to see how a decision was made and understand why a specific direction was chosen. Filing new GitHub issues for features and milestones is an easy way for someone to follow the project direction. GitHub issues and pull requests are the cards (or building blocks) of project boards. To be successful with GitHub project boards, you need to use issues and pull requests.
-
-
-![GitHub issues for magicbox-maps, MagicBox's front-end application][6]
-
-GitHub issues for magicbox-maps, MagicBox's front-end application.
-
-The UNICEF MagicBox team uses GitHub issues to track ongoing development milestones and other tasks to revisit. The team files new GitHub issues for development goals, feature requests, or bugs. These goals or features may come from external stakeholders or the community. We also use the issues as a place for discussion on those tasks. This makes it easy to cross-reference in the future and visualize upcoming work on one of our projects.
-
-Once you begin using GitHub issues and pull requests as a way of discussing and using your project, organizing with project boards becomes easier.
-
-### 2\. Set up kanban-style project boards
-
-GitHub issues and pull requests are the first step. After you begin using them, it may become harder to visualize what work is in progress and what work is yet to begin. [GitHub's project boards][7] give you a platform to visualize and organize cards into different columns.
-
-There are two types of project boards available:
-
- * **Repository** : Boards for use in a single repository
- * **Organization** : Boards for use in a GitHub organization across multiple repositories (but private to organization members)
-
-
-
-The choice you make depends on the structure and size of your projects. The UNICEF MagicBox team uses boards for development and documentation at the organization level, and then repository-specific boards for focused work (like our [community management board][8]).
-
-#### Creating your first board
-
-Project boards are found on your GitHub organization page or on a specific repository. You will see the Projects tab in the same row as Issues and Pull requests. From the page, you'll see a green button to create a new project.
-
-There, you can set a name and description for the project. You can also choose templates to set up basic columns and sorting for your board. Currently, the only options are for kanban-style boards.
-
-
-![Creating a new GitHub project board.][10]
-
-Creating a new GitHub project board.
-
-After creating the project board, you can make adjustments to it as needed. You can create new columns, [set up automation][11], and add pre-existing GitHub issues and pull requests to the project board.
-
-You may notice new options for the metadata in each GitHub issue and pull request. Inside of an issue or pull request, you can add it to a project board. If you use automation, it will automatically enter a column you configured.
-
-### 3\. Build project boards into your workflow
-
-After you set up a project board and populate it with issues and pull requests, you need to integrate it into your workflow. Project boards are effective only when actively used. The UNICEF MagicBox team uses the project boards as a way to track our progress as a team, update external stakeholders on development, and estimate team bandwidth for reaching our milestones.
-
-
-![Tracking progress][13]
-
-Tracking progress with GitHub project boards.
-
-If you are an open source project and community, consider using the project boards for development-focused meetings. It also helps remind you and other core contributors to spend five minutes each day updating progress as needed. If you're at a company using GitHub to do open source work, consider using project boards to update other team members and encourage participation inside of GitHub issues and pull requests.
-
-Once you begin using the project board, yours may look like this:
-
-
-![Development progress board][15]
-
-Development progress board for all UNICEF MagicBox repositories in organization-wide GitHub project boards.
-
-### Open alternatives
-
-GitHub project boards require your project to be on GitHub to take advantage of this functionality. While GitHub is a popular repository for open source projects, it's not an open source platform itself. Fortunately, there are open source alternatives to GitHub with tools to replicate the workflow explained above. [GitLab Issue Boards][16] and [Taiga][17] are good alternatives that offer similar functionality.
-
-### Go forth and organize!
-
-With these tools, you can bring a method to the madness of organizing your open source project. These three tips for using GitHub project boards encourage transparency in your open source project and make it easier to track progress and milestones in the open.
-
-Do you use GitHub project boards for your open source project? Have any tips for success that aren't mentioned in the article? Leave a comment below to share how you make sense of your open source projects.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/keep-your-project-organized-git-repo
-
-作者:[Justin W.Flory][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jflory
-[1]:https://en.wikipedia.org/wiki/Business_process_management
-[2]:https://en.wikipedia.org/wiki/Kanban_(development)
-[3]:http://unicefstories.org/about/
-[4]:http://unicefstories.org/magicbox/
-[5]:/file/393356
-[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-open-issues.png?itok=OcWPX575 (GitHub issues for magicbox-maps, MagicBox's front-end application)
-[7]:https://help.github.com/articles/about-project-boards/
-[8]:https://github.com/unicef/magicbox/projects/3?fullscreen=true
-[9]:/file/393361
-[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-create-board.png?itok=pp7SXH9g (Creating a new GitHub project board.)
-[11]:https://help.github.com/articles/about-automation-for-project-boards/
-[12]:/file/393351
-[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-issues-metadata.png?itok=xp5auxCQ (Tracking progress)
-[14]:/file/393366
-[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-overview.png?itok=QSbOOOkF (Development progress board)
-[16]:https://about.gitlab.com/features/issueboard/
-[17]:https://taiga.io/
diff --git a/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md b/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
deleted file mode 100644
index 10511c3a7d..0000000000
--- a/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
+++ /dev/null
@@ -1,39 +0,0 @@
-What You Don’t Know About Linux Open Source Could Be Costing to More Than You Think
-======
-
-If you would like to test out Linux before completely switching to it as your everyday driver, there are a number of means by which you can do it. Linux was not intended to run on Windows, and Windows was not meant to host Linux. To begin with, and perhaps most of all, Linux is open source computer software. In any event, Linux outperforms Windows on all your hardware.
-
-If you’ve always wished to try out Linux but were never certain where to begin, have a look at our how-to-begin guide for Linux. Linux is not any different than Windows or Mac OS; it’s basically an operating system, but the leading difference is that it is free for everyone. Employing Linux today isn’t any more challenging than switching from one sort of smartphone platform to another.
-
-You’re most likely already using Linux, whether you are aware of it or not. Linux has a lot of distinct versions to suit nearly any sort of user. Today, Linux is a small no-brainer. Linux plays an essential part in keeping our world going.
-
-Even then, it is dependent on the build of Linux that you’re using. Linux runs a lot of the underbelly of cloud operations. Linux is also different in that, even though the core pieces of the Linux operating system are usually common, there are lots of distributions of Linux, like different software alternatives. While Linux might seem intimidatingly intricate and technical to the ordinary user, contemporary Linux distros are in reality very user-friendly, and it’s no longer the case you have to have advanced skills to get started using them. Linux was the very first major Internet-centred open-source undertaking. Linux is beginning to increase the range of patches it pushes automatically, but several of the security patches continue to be opt-in only.
-
-You are able to remove Linux later in case you need to. Linux plays a vital part in keeping our world going. Linux supplies a huge library of functionality which can be leveraged to accelerate development.
-
-Even then, it’s dependent on the build of Linux that you’re using. Linux is also different in that, even though the core pieces of the Linux operating system are typically common, there are lots of distributions of Linux, like different software alternatives. While Linux might seem intimidatingly intricate and technical to the ordinary user, contemporary Linux distros are in fact very user-friendly, and it’s no longer the case you require to have advanced skills to get started using them. Linux runs a lot of the underbelly of cloud operations. Linux is beginning to increase the range of patches it pushes automatically, but several of the security patches continue to be opt-in only. Read More, open source projects including Linux are incredibly capable because of the contributions that all these individuals have added over time.
-
-### Life After Linux Open Source
-
-The development edition of the manual typically has more documentation, but might also document new characteristics that aren’t in the released version. Fortunately, it’s so lightweight you can just jump to some other version in case you don’t like it. It’s extremely hard to modify the compiled version of the majority of applications and nearly not possible to see exactly the way the developer created different sections of the program.
-
-On the challenges of bottoms-up go-to-market It’s really really hard to grasp the difference between your organic product the product your developers use and love and your company product, which ought to be, effectively, a different product. As stated by the report, it’s going to be hard for developers to switch. Developers are now incredibly important and influential in the purchasing procedure. Some OpenWrt developers will attend the event and get ready to reply to your questions!
-
-When the program is installed, it has to be configured. Suppose you discover that the software you bought actually does not do what you would like it to do. Open source software is much more common than you believe, and an amazing philosophy to live by. Employing open source software gives an inexpensive method to bootstrap a business. It’s more difficult to deal with closed source software generally. So regarding Application and Software, you’re all set if you are prepared to learn an alternate software or finding a means to make it run on Linux. Possibly the most famous copyleft software is Linux.
-
-Article sponsored by [Vegas Palms online slots][1]
-
-
---------------------------------------------------------------------------------
-
-via: https://linuxaria.com/article/what-you-dont-know-about-linux-open-source-could-be-costing-to-more-than-you-think
-
-作者:[Marc Fisher][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://linuxaria.com
-[1]:https://www.vegaspalmscasino.com/casino-games/slots/
diff --git a/sources/talk/20180424 There-s a Server in Every Serverless Platform.md b/sources/talk/20180424 There-s a Server in Every Serverless Platform.md
deleted file mode 100644
index 9bc935c06d..0000000000
--- a/sources/talk/20180424 There-s a Server in Every Serverless Platform.md
+++ /dev/null
@@ -1,87 +0,0 @@
-There’s a Server in Every Serverless Platform
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/servers.jpg?itok=i_gyObMP)
-Serverless computing or Function as a Service (FaaS) is a new buzzword created by an industry that loves to coin new terms as market dynamics change and technologies evolve. But what exactly does it mean? What is serverless computing?
-
-Before getting into the definition, let’s take a brief history lesson from Sirish Raghuram, CEO and co-founder of Platform9, to understand the evolution of serverless computing.
-
-“In the 90s, we used to build applications and run them on hardware. Then came virtual machines that allowed users to run multiple applications on the same hardware. But you were still running the full-fledged OS for each application. The arrival of containers got rid of OS duplication and process level isolation which made it lightweight and agile,” said Raghuram.
-
-Serverless, specifically Function as a Service, takes it to the next level: users are now able to code functions and build, ship, and run them at that granularity. There is no complexity of underlying machinery needed to run those functions. No need to worry about spinning up containers using Kubernetes. Everything is hidden behind the scenes.
-
-“That’s what is driving a lot of interest in function as a service,” said Raghuram.
-
-### What exactly is serverless?
-
-There is no single definition of the term, but to build some consensus around the idea, the [Cloud Native Computing Foundation (CNCF)][1] Serverless Working Group wrote a [white paper][2] to define serverless computing.
-
-According to the white paper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”
-
-Ken Owens, a member of the Technical Oversight Committee at CNCF, said that the primary goal of serverless computing is to help users build and run their applications without having to worry about the cost and complexity of servers in terms of provisioning, management, and scaling.
-
-“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” [said][3] Chris Aniszczyk, COO, CNCF.
-
-### It’s not without servers
-
-First things first, don’t get fooled by the term “serverless.” There are still servers in serverless computing. Remember what Raghuram said: all the machinery is hidden; it’s not gone.
-
-The clear benefit here is that developers need not concern themselves with tasks that don't add any value to their deliverables. Instead of worrying about managing the function, they can dedicate their time to adding features and building apps that add business value. Time is money, and every minute saved in management goes toward innovation. Developers don't have to worry about scaling based on peaks and valleys; it's automated. Because cloud providers charge only for the duration that functions are run, developers cut costs by not having to pay for blinking lights.
-
-But… someone still has to do the work behind the scenes. There are still servers offering FaaS platforms.
-
-In the case of public cloud offerings like Google Cloud Platform, AWS, and Microsoft Azure, these companies manage the servers and charge customers for running those functions. In the case of private cloud or datacenters, where developers don’t have to worry about provisioning or interacting with such servers, there are other teams who do.
-
-The CNCF white paper identifies two groups of professionals that are involved in the serverless movement: developers and providers. We have already talked about developers. But, there are also providers that offer serverless platforms; they deal with all the work involved in keeping that server running.
-
-That’s why many companies, like SUSE, refrain from using the term “serverless” and prefer the term function as a service, because they offer products that run those “serverless” servers. But what kind of functions are these? Is it the ultimate future of app delivery?
-
-### Event-driven computing
-
-Many see serverless computing as an umbrella that offers FaaS among many other potential services. According to CNCF, FaaS provides event-driven computing where functions are triggered by events or HTTP requests. “Developers run and manage application code with functions that are triggered by events or HTTP requests. Developers deploy small units of code to the FaaS, which are executed as needed as discrete actions, scaling without the need to manage servers or any other underlying infrastructure,” said the white paper.
-
-Does that mean FaaS is the silver bullet that solves all problems for developing and deploying applications? Not really. At least not at the moment. FaaS does solve problems in several use cases and its scope is expanding. A good use case of FaaS could be the functions that an application needs to run when an event takes place.
-
-Let’s take an example: a user takes a picture from a phone and uploads it to the cloud. Many things happen when the picture is uploaded: it’s scanned (EXIF data is read), a thumbnail is created, the content of the image is analyzed using deep learning/machine learning, and the information about the image is stored in the database. That one event of uploading the picture triggers all of those functions. Those functions die once the event is over. That’s what FaaS does. It runs code quickly to perform all those tasks and then disappears.
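-
-As a rough sketch of the shape such an event-triggered function might take (the helper names and the event field below are invented for illustration and do not belong to any particular platform's API), consider:
-
-```
-#!/usr/bin/perl
-use strict;
-use warnings;
-
-## Stub helpers standing in for real services (EXIF parsing, image processing,
-## ML-based classification, and a database). Names and behavior are illustrative only.
-sub read_exif      { my ($img) = @_; return { camera => 'unknown' }; }
-sub make_thumbnail { my ($img) = @_; return "$img.thumb.jpg"; }
-sub classify_image { my ($img) = @_; return ['photo']; }
-sub store_metadata { my (%rec) = @_; print "stored metadata for $rec{image}\n"; }
-
-## The function a FaaS platform would invoke once per "image uploaded" event.
-## It runs, does its work, and exits; nothing stays resident between events.
-sub handle_upload_event {
-    my ($event) = @_;
-    my $image = $event->{object_path};    ## invented event field
-
-    my $exif   = read_exif($image);       ## scan the image metadata
-    my $thumb  = make_thumbnail($image);  ## create a thumbnail
-    my $labels = classify_image($image);  ## analyze the content
-    store_metadata(image => $image, exif => $exif, thumbnail => $thumb, labels => $labels);
-}
-
-handle_upload_event({ object_path => 'uploads/vacation.jpg' });
-```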
-
-That’s just one example. Another example could be an IoT device where a motion sensor triggers an event that instructs the camera to start recording and sends the clip to the designated contact. Your thermostat may trigger the fan when the sensor detects a change in temperature. These are some of the many use cases where function as a service makes more sense than the traditional approach. This also means that not all applications (at least at the moment, but that will change as more organizations embrace the serverless platform) can be run as functions as a service.
-
-According to CNCF, serverless computing should be considered if you have these kinds of workloads:
-
- * Asynchronous, concurrent, easy to parallelize into independent units of work
-
- * Infrequent or has sporadic demand, with large, unpredictable variance in scaling requirements
-
- * Stateless, ephemeral, without a major need for instantaneous cold start time
-
- * Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity
-
-
-
-
-### Why should you care?
-
-Serverless is a very new technology and paradigm. Just as VMs and containers transformed app development and delivery models, FaaS can also bring dramatic changes. We are still in the early days of serverless computing. As the market matures, consensus forms, and new technologies evolve, FaaS may grow beyond the workloads and use cases mentioned here.
-
-What is becoming quite clear is that companies who are embarking on their cloud native journey must have serverless computing as part of their strategy. The only way to stay ahead of competitors is by keeping up with the latest technologies and trends.
-
-It’s about time to put serverless into servers.
-
-For more information, check out the CNCF Working Group's serverless whitepaper [here][2]. And, you can learn more at [KubeCon + CloudNativeCon Europe][4], coming up May 2-4 in Copenhagen, Denmark.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/2018/4/theres-server-every-serverless-platform
-
-作者:[SWAPNIL BHARTIYA][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/arnieswap
-[1]:https://www.cncf.io/
-[2]:https://github.com/cncf/wg-serverless/blob/master/whitepaper/cncf_serverless_whitepaper_v1.0.pdf
-[3]:https://www.cncf.io/blog/2018/02/14/cncf-takes-first-step-towards-serverless-computing/
-[4]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/attend/register/
diff --git a/sources/talk/20180511 Looking at the Lispy side of Perl.md b/sources/talk/20180511 Looking at the Lispy side of Perl.md
deleted file mode 100644
index 1fa51b314c..0000000000
--- a/sources/talk/20180511 Looking at the Lispy side of Perl.md
+++ /dev/null
@@ -1,357 +0,0 @@
-Looking at the Lispy side of Perl
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
-Some programming languages (e.g., C) have named functions only, whereas others (e.g., Lisp, Java, and Perl) have both named and unnamed functions. A lambda is an unnamed function, with Lisp as the language that popularized the term. Lambdas have various uses, but they are particularly well-suited for data-rich applications. Consider this depiction of a data pipeline, with two processing stages shown:
-
-![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/data_source.png?itok=OON2cC2R)
-
-### Lambdas and higher-order functions
-
-The filter and transform stages can be implemented as higher-order functions—that is, functions that can take a function as an argument. Suppose that the depicted pipeline is part of an accounts-receivable application. The filter stage could consist of a function named `filter_data`, whose single argument is another function—for example, a `high_buyers` function that filters out amounts that fall below a threshold. The transform stage might convert amounts in U.S. dollars to equivalent amounts in euros or some other currency, depending on the function plugged in as the argument to the higher-order `transform_data` function. Changing the filter or the transform behavior requires only plugging in a different function argument to the higher order `filter_data` or `transform_data` functions.
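-
-As a concrete (if simplified) sketch of that pipeline, the higher-order `filter_data` and `transform_data` functions and the lambdas plugged into them might look like this in Perl. The signatures are an assumption made for this sketch (the data list is passed alongside the lambda for simplicity), and the threshold and exchange rate are invented values:
-
-```
-#!/usr/bin/perl
-use strict;
-use warnings;
-
-## Higher-order functions: each takes a lambda reference and a list of amounts.
-sub filter_data    { my ($f, @amounts) = @_; return grep { $f->($_) } @amounts; }
-sub transform_data { my ($f, @amounts) = @_; return map  { $f->($_) } @amounts; }
-
-## Lambdas plugged in as arguments (threshold and rate are made-up values).
-my $high_buyers  = sub { $_[0] >= 1000 };   ## keep amounts of $1,000 or more
-my $usd_to_euros = sub { $_[0] * 0.85 };    ## assumed exchange rate
-
-my @usd   = (250, 1200, 875, 4300);
-my @large = filter_data($high_buyers, @usd);        ## filter stage
-my @euros = transform_data($usd_to_euros, @large);  ## transform stage
-
-print "@euros\n";   ## 1020 3655
-```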
-
-Lambdas serve nicely as arguments to higher-order functions for two reasons. First, lambdas can be crafted on the fly, and even written in place as arguments. Second, lambdas encourage the coding of pure functions, which are functions whose behavior depends solely on the argument(s) passed in; such functions have no side effects and thereby promote safe concurrent programs.
-
-Perl has a straightforward syntax and semantics for lambdas and higher-order functions, as shown in the following example:
-
-### A first look at lambdas in Perl
-
-```
-#!/usr/bin/perl
-
-use strict;
-use warnings;
-
-## References to lambdas that increment, decrement, and do nothing.
-## $_[0] is the argument passed to each lambda.
-my $inc = sub { $_[0] + 1 }; ## could use 'return $_[0] + 1' for clarity
-my $dec = sub { $_[0] - 1 }; ## ditto
-my $nop = sub { $_[0] }; ## ditto
-
-sub trace {
- my ($val, $func, @rest) = @_;
- print $val, " ", $func, " ", @rest, "\nHit RETURN to continue...\n";
- <STDIN>;
-}
-
-## Apply an operation to a value. The base case occurs when there are
-## no further operations in the list named @rest.
-sub apply {
- my ($val, $first, @rest) = @_;
- trace($val, $first, @rest) if 1; ## 0 to stop tracing
-
- return ($val, apply($first->($val), @rest)) if @rest; ## recursive case
- return ($val, $first->($val)); ## base case
-}
-
-my $init_val = 0;
-my @ops = ( ## list of lambda references
- $inc, $dec, $dec, $inc,
- $inc, $inc, $inc, $dec,
- $nop, $dec, $dec, $nop,
- $nop, $inc, $inc, $nop
- );
-
-## Execute.
-print join(' ', apply($init_val, @ops)), "\n";
-## Final line of output: 0 1 0 -1 0 1 2 3 2 2 1 0 0 0 1 2 2
-```
-
-The lispy program shown above highlights the basics of Perl lambdas and higher-order functions. Named functions in Perl start with the keyword `sub` followed by a name:
-```
-sub increment { ... } # named function
-
-```
-
-An unnamed or anonymous function omits the name:
-```
-sub {...} # lambda, or unnamed function
-
-```
-
-In the lispy example, there are three lambdas, and each has a reference to it for convenience. Here, for review, is the `$inc` reference and the lambda referred to:
-```
-my $inc = sub { $_[0] + 1 };
-
-```
-
-The lambda itself, the code block to the right of the assignment operator `=`, increments its argument `$_[0]` by 1. The lambda’s body is written in Lisp style; that is, without either an explicit `return` or a semicolon after the incrementing expression. In Perl, as in Lisp, the value of the last expression in a function’s body becomes the returned value if there is no explicit `return` statement. In this example, each lambda has only one expression in its body—a simplification that befits the spirit of lambda programming.
-
-The `trace` function in the lispy program helps to clarify how the program works (as I'll illustrate below). The higher-order function `apply`, a nod to a Lisp function of the same name, takes a numeric value as its first argument and a list of lambda references as its second argument. The `apply` function is called initially, at the bottom of the program, with zero as the first argument and the list named `@ops` as the second argument. This list consists of 16 lambda references from among `$inc` (increment a value), `$dec` (decrement a value), and `$nop` (do nothing). The list could contain the lambdas themselves, but the code is easier to write and to understand with the more concise lambda references.
-
-The logic of the higher-order `apply` function can be clarified as follows:
-
- 1. The argument list passed to `apply` in typical Perl fashion is separated into three pieces:
-```
-my ($val, $first, @rest) = @_; ## break the argument list into three elements
-
-```
-
-The first element `$val` is a numeric value, initially `0`. The second element `$first` is a lambda reference, one of `$inc`, `$dec`, or `$nop`. The third element `@rest` is a list of any remaining lambda references after the first such reference is extracted as `$first`.
-
- 2. If the list `@rest` is not empty after its first element is removed, then `apply` is called recursively. The two arguments to the recursively invoked `apply` are:
-
- * The value generated by applying lambda operation `$first` to numeric value `$val`. For example, if `$first` is the incrementing lambda to which `$inc` refers, and `$val` is 2, then the new first argument to `apply` would be 3.
- * The list of remaining lambda references. Eventually, this list becomes empty because each call to `apply` shortens the list by extracting its first element.
-
-
-
-Here is some output from a sample run of the lispy program, with `%` as the command-line prompt:
-```
-% ./lispy.pl
-
-0 CODE(0x8f6820) CODE(0x8f68c8)CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
-Hit RETURN to continue...
-
-1 CODE(0x8f68c8) CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
-Hit RETURN to continue
-```
-
-The first output line can be clarified as follows:
-
- * The `0` is the numeric value passed as an argument in the initial (and thus non-recursive) call to function `apply`. The argument name is `$val` in `apply`.
- * The `CODE(0x8f6820)` is a reference to one of the lambdas, in this case the lambda to which `$inc` refers. The second argument is thus the address of some lambda code. The argument name is `$first` in `apply`.
- * The third piece, the series of `CODE` references, is the list of lambda references beyond the first. The argument name is `@rest` in `apply`.
-
-
-
-The second line of output shown above also deserves a look. The numeric value is now `1`, the result of incrementing `0`: the initial lambda is `$inc` and the initial value is `0`. The extracted reference `CODE(0x8f68c8)` is now `$first`, as this reference is the first element in the `@rest` list after `$inc` has been extracted earlier.
-
-Eventually, the `@rest` list becomes empty, which ends the recursive calls to `apply`. In this case, the function `apply` simply returns a list with two elements:
-
- 1. The numeric value taken in as an argument (in the sample run, 2).
- 2. This argument transformed by the lambda (also 2 because the last lambda reference happens to be `$nop` for do nothing).
-
-
-
-The lispy example underscores that Perl supports lambdas without any special fussy syntax: A lambda is just an unnamed code block, perhaps with a reference to it for convenience. Lambdas themselves, or references to them, can be passed straightforwardly as arguments to higher-order functions such as `apply` in the lispy example. Invoking a lambda through a reference is likewise straightforward. In the `apply` function, the call is:
-```
-$first->($val) ## $first is a lambda reference, $val a numeric argument passed to the lambda
-
-```
-
-### A richer code example
-
-The next code example puts a lambda and a higher-order function to practical use. The example implements Conway’s Game of Life, a cellular automaton that can be represented as a matrix of cells. Such a matrix goes through various transformations, each yielding a new generation of cells. The Game of Life is fascinating because even relatively simple initial configurations can lead to quite complex behavior. A quick look at the rules governing cell birth, survival, and death is in order.
-
-Consider this 5x5 matrix, with a star representing a live cell and a dash representing a dead one:
-```
- ----- ## initial configuration
- --*--
- --*--
- --*--
- -----
-```
-
-The next generation becomes:
-```
- ----- ## next generation
- -----
- -***-
- -----
- -----
-```
-
-As life continues, the generations oscillate between these two configurations.
-
-Here are the rules determining birth, death, and survival for a cell. A given cell has between three neighbors (a corner cell) and eight neighbors (an interior cell):
-
- * A dead cell with exactly three live neighbors comes to life.
- * A live cell with more than three live neighbors dies from over-crowding.
- * A live cell with two or three live neighbors survives; hence, a live cell with fewer than two live neighbors dies from loneliness.
-
-
-
-In the initial configuration shown above, the top and bottom live cells die because neither has two or three live neighbors. By contrast, the middle live cell in the initial configuration gains two live neighbors, one on either side, in the next generation.
-
-### Conway’s Game of Life
-```
-#!/usr/bin/perl
-
-### A simple implementation of Conway's game of life.
-# Usage: ./gol.pl [input file] ;; If no file name given, DefaultInfile is used.
-
-use constant Dead => "-";
-use constant Alive => "*";
-use constant DefaultInfile => 'conway.in';
-
-use strict;
-use warnings;
-
-my $dimension = undef;
-my @matrix = ();
-my $generation = 1;
-
-sub read_data {
- my $datafile = DefaultInfile;
- $datafile = shift @ARGV if @ARGV;
- die "File $datafile does not exist.\n" if !-f $datafile;
- open(INFILE, "<$datafile");
-
- ## Check 1st line for dimension;
- $dimension = <INFILE>;
- die "1st line of input file $datafile not an integer.\n" if $dimension !~ /\d+/;
-
- my $record_count = 0;
- while (<INFILE>) {
- chomp($_);
- last if $record_count++ == $dimension;
- die "$_: bad input record -- incorrect length\n" if length($_) != $dimension;
- my @cells = split(//, $_);
- push @matrix, @cells;
- }
- close(INFILE);
- draw_matrix();
-}
-
-sub draw_matrix {
- my $n = $dimension * $dimension;
- print "\n\tGeneration $generation\n";
- for (my $i = 0; $i < $n; $i++) {
- print "\n\t" if ($i % $dimension) == 0;
- print $matrix[$i];
- }
- print "\n\n";
- $generation++;
-}
-
-sub has_left_neighbor {
- my ($ind) = @_;
- return ($ind % $dimension) != 0;
-}
-
-sub has_right_neighbor {
- my ($ind) = @_;
- return (($ind + 1) % $dimension) != 0;
-}
-
-sub has_up_neighbor {
- my ($ind) = @_;
- return (int($ind / $dimension)) != 0;
-}
-
-sub has_down_neighbor {
- my ($ind) = @_;
- return (int($ind / $dimension) + 1) != $dimension;
-}
-
-sub has_left_up_neighbor {
- my ($ind) = @_;
- has_left_neighbor($ind) && has_up_neighbor($ind);
-}
-
-sub has_right_up_neighbor {
- my ($ind) = @_;
- has_right_neighbor($ind) && has_up_neighbor($ind);
-}
-
-sub has_left_down_neighbor {
- my ($ind) = @_;
- has_left_neighbor($ind) && has_down_neighbor($ind);
-}
-
-sub has_right_down_neighbor {
- my ($ind) = @_;
- has_right_neighbor($ind) && has_down_neighbor($ind);
-}
-
-sub compute_cell {
- my ($ind) = @_;
- my @neighbors;
-
- # 8 possible neighbors
- push(@neighbors, $ind - 1) if has_left_neighbor($ind);
- push(@neighbors, $ind + 1) if has_right_neighbor($ind);
- push(@neighbors, $ind - $dimension) if has_up_neighbor($ind);
- push(@neighbors, $ind + $dimension) if has_down_neighbor($ind);
- push(@neighbors, $ind - $dimension - 1) if has_left_up_neighbor($ind);
- push(@neighbors, $ind - $dimension + 1) if has_right_up_neighbor($ind);
- push(@neighbors, $ind + $dimension - 1) if has_left_down_neighbor($ind);
- push(@neighbors, $ind + $dimension + 1) if has_right_down_neighbor($ind);
-
- my $count = 0;
- foreach my $n (@neighbors) {
- $count++ if $matrix[$n] eq Alive;
- }
-
- return Alive if ($matrix[$ind] eq Alive) && (($count == 2) || ($count == 3)); ## survival
- return Alive if ($matrix[$ind] eq Dead) && ($count == 3); ## birth
- return Dead; ## death
-}
-
-sub again_or_quit {
- print "RETURN to continue, 'q' to quit.\n";
- my $flag = <STDIN>;
- chomp($flag);
- return ($flag eq 'q') ? 1 : 0;
-}
-
-sub animate {
- my @new_matrix;
- my $n = $dimension * $dimension - 1;
-
- while (1) { ## loop until user signals stop
- @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix
-
- splice @matrix; ## empty current matrix
- push @matrix, @new_matrix; ## repopulate matrix
- draw_matrix(); ## display the current matrix
-
- last if again_or_quit(); ## continue?
- splice @new_matrix; ## empty temp matrix
- }
-}
-
-### Execute
-read_data(); ## read initial configuration from input file
-animate(); ## display and recompute the matrix until user tires
-```
-
-The gol program (see [Conway’s Game of Life][1]) has almost 140 lines of code, but most of these involve reading the input file, displaying the matrix, and bookkeeping tasks such as determining the number of live neighbors for a given cell. Input files should be configured as follows:
-```
- 5
- -----
- --*--
- --*--
- --*--
- -----
-```
-
-The first record gives the matrix side, in this case 5 for a 5x5 matrix. The remaining rows are the contents, with stars for live cells and dashes for dead ones.
-
-The code of primary interest resides in two functions, `animate` and `compute_cell`. The `animate` function constructs the next generation, and this function needs to call `compute_cell` on every cell in order to determine the cell’s new status as either alive or dead. How should the `animate` function be structured?
-
-The `animate` function has a `while` loop that iterates until the user decides to terminate the program. Within this `while` loop the high-level logic is straightforward:
-
- 1. Create the next generation by iterating over the matrix cells, calling function `compute_cell` on each cell to determine its new status. At issue is how best to do the iteration. A loop nested inside the `while` loop would do, of course, but nested loops can be clunky. Another way is to use a higher-order function, as clarified shortly.
- 2. Replace the current matrix with the new one.
- 3. Display the next generation.
- 4. Check if the user wants to continue: if so, continue; otherwise, terminate.
-
-
-
-Here, for review, is the call to Perl’s higher-order `map` function, with the function’s name again a nod to Lisp. This call occurs as the first statement within the `while` loop in `animate`:
-```
-while (1) {
- @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix
-```
-
-The `map` function takes two arguments: an unnamed code block (a lambda!), and a list of values passed to this code block one at a time. In this example, the code block calls the `compute_cell` function with one of the matrix indexes, 0 through the matrix size - 1. Although the matrix is displayed as two-dimensional, it is implemented as a one-dimensional list.
-
-Higher-order functions such as `map` encourage the code brevity for which Perl is famous. My view is that such functions also make code easier to write and to understand, as they dispense with the required but messy details of loops. In any case, lambdas and higher-order functions make up the Lispy side of Perl.
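-
-As a small, self-contained illustration of that brevity (separate from the gol program), here is the same computation written first with an explicit loop and then with `map` and a lambda:
-
-```
-#!/usr/bin/perl
-use strict;
-use warnings;
-
-my @squares_loop;
-for my $i (0 .. 4) {          ## explicit loop: index bookkeeping and push
-    push @squares_loop, $i * $i;
-}
-
-my @squares_map = map { $_ * $_ } (0 .. 4);   ## same result, no loop machinery
-
-print "@squares_loop\n";   ## 0 1 4 9 16
-print "@squares_map\n";    ## 0 1 4 9 16
-```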
-
-If you're interested in more detail, I recommend Mark Jason Dominus's book, [Higher-Order Perl][2].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/5/looking-lispy-side-perl
-
-作者:[Marty Kalin][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/mkalindepauledu
-[1]:https://trello-attachments.s3.amazonaws.com/575088ec94ca6ac38b49b30e/5ad4daf12f6b6a3ac2318d28/c0700c7379983ddf61f5ab5ab4891f0c/lispyPerl.html#gol (Conway’s Game of Life)
-[2]:https://www.elsevier.com/books/higher-order-perl/dominus/978-1-55860-701-9
diff --git a/sources/talk/20180527 Whatever Happened to the Semantic Web.md b/sources/talk/20180527 Whatever Happened to the Semantic Web.md
deleted file mode 100644
index 22d48c150a..0000000000
--- a/sources/talk/20180527 Whatever Happened to the Semantic Web.md
+++ /dev/null
@@ -1,106 +0,0 @@
-Whatever Happened to the Semantic Web?
-======
-In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine.
-
-They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this:
-
-> The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Mom’s prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules.
-
-The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web.
-
-![][1]
-
-For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.”
-
-Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Today’s physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…”
-
-In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn?
-
-### Semweb Hucksters and Their Metacrap
-
-To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream.
-
-The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans.
-
-The bits of XML were a way of expressing metadata about the webpage. We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember.
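-
-To make the idea concrete, here is roughly what such machine-readable annotations look like in RDFa (a later standard discussed further below) with the schema.org vocabulary; the person and employer are invented for illustration:
-```
-<!-- Human-readable text, with machine-readable properties layered on top -->
-<p vocab="http://schema.org/" typeof="Person">
-  <span property="name">Jane Doe</span> wrote this page. She studied at
-  <span property="alumniOf">Example University</span> and now works for
-  <span property="worksFor">Example Corp</span>.
-</p>
-```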
-
-Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users.
-
-Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including keywords unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science.
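-
-The abuse looked something like this hypothetical snippet, with keywords chosen to attract searches rather than to describe the page:
-```
-<!-- Keyword stuffing: none of these terms need have anything to do with the page -->
-<meta name="keywords" content="cheap flights, free music, celebrity news, weight loss">
-```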
-
-Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.
-
-### Building the Semantic Web
-
-If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert.
-
-The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future.
-
-The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge.
-
-RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies:
-
-```
-@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
-@prefix ex: <http://www.example.org/> .
-
-ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
-ex:law_and_order_ci rdf:type ex:tv_show .
-ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
-```
-
-The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.)
-
-Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information.
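-
-To make that last point concrete, the mother-implies-parent rule could be written as a one-line RDF Schema statement. The Turtle below is an illustrative sketch with made-up `ex:` property names, not an excerpt from any published ontology:
-```
-@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
-@prefix ex: <http://www.example.org/> .
-
-# Declaring motherOf a sub-property of parentOf lets a reasoner infer
-# "X parentOf Y" from every "X motherOf Y" statement.
-ex:motherOf rdfs:subPropertyOf ex:parentOf .
-```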
-
-In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project.
-
-Berners-Lee’s article launched the second phase of the Semantic Web’s development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3C’s new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system.
-
-![][4]
-
-The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XLST, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `