+```
+
+Vim Surround 有很多其它选项,你可以参照 [GitHub][7] 上的说明尝试它们。
+
+### 4、Vim Gitgutter
+
+[Vim Gitgutter][8] 插件对使用 Git 作为版本控制工具的人来说非常有用。它会在 Vim 的行号列旁显示 `git diff` 的差异标记。假设你有如下已提交过的代码:
+
+```
+ 1 package main
+ 2
+ 3 import "fmt"
+ 4
+ 5 func main() {
+ 6 x := true
+ 7 items := []string{"tv", "pc", "tablet"}
+ 8
+ 9 if x {
+ 10 for _, i := range items {
+ 11 fmt.Println(i)
+ 12 }
+ 13 }
+ 14 }
+```
+
+当你做出一些修改后,Vim Gitgutter 会显示如下标记:
+
+```
+ 1 package main
+ 2
+ 3 import "fmt"
+ 4
+_ 5 func main() {
+ 6 items := []string{"tv", "pc", "tablet"}
+ 7
+~ 8 if len(items) > 0 {
+ 9 for _, i := range items {
+ 10 fmt.Println(i)
++ 11 fmt.Println("------")
+ 12 }
+ 13 }
+ 14 }
+```
+
+`_` 标记表示在第 5 行和第 6 行之间删除了一行。`~` 表示第 8 行有修改,`+` 表示新增了第 11 行。
+
+另外,Vim Gitgutter 允许你用 `[c` 和 `]c` 在多个变更块(hunk)之间跳转,甚至可以用 `Leader+hs` 来暂存单个变更块。
+
+这个插件提供了对变更的即时视觉反馈,如果你用 Git 的话,有了它简直是如虎添翼。
+
+### 5、Vim Fugitive
+
+[Vim Fugitive][9] 是另一个将 Git 工作流集成到 Vim 中的超棒插件。它对 Git 做了一些封装,可以让你在 Vim 里直接执行 Git 命令并将结果集成在 Vim 界面里。这个插件有超多的特性,更多信息请访问它的 [GitHub][10] 项目页面。
+
+这里有一个使用 Vim Fugitive 的基础 Git 工作流示例。设想我们已经对下面的 Go 代码做出修改,你可以用 `:Gblame` 调用 `git blame` 来查看每行最后的提交信息:
+
+```
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 1 package main
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 2
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 3 import "fmt"
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 4
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│_ 5 func main() {
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 6 items := []string{"tv", "pc", "tablet"}
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 7
+00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│~ 8 if len(items) > 0 {
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 9 for _, i := range items {
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 10 fmt.Println(i)
+00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│+ 11 fmt.Println("------")
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 12 }
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 13 }
+e9949066 (Ricardo Gerardi 2018-12-05 18:17:19 -0500)│ 14 }
+```
+
+可以看到第 8 行和第 11 行显示还未提交。用 `:Gstatus` 命令检查仓库当前的状态:
+
+```
+ 1 # On branch master
+ 2 # Your branch is up to date with 'origin/master'.
+ 3 #
+ 4 # Changes not staged for commit:
+ 5 # (use "git add ..." to update what will be committed)
+ 6 # (use "git checkout -- ..." to discard changes in working directory)
+ 7 #
+ 8 # modified: vim-5plugins/examples/test1.go
+ 9 #
+ 10 no changes added to commit (use "git add" and/or "git commit -a")
+--------------------------------------------------------------------------------------------------------
+ 1 package main
+ 2
+ 3 import "fmt"
+ 4
+_ 5 func main() {
+ 6 items := []string{"tv", "pc", "tablet"}
+ 7
+~ 8 if len(items) > 0 {
+ 9 for _, i := range items {
+ 10 fmt.Println(i)
++ 11 fmt.Println("------")
+ 12 }
+ 13 }
+ 14 }
+```
+
+Vim Fugitive 在分割的窗口里显示 `git status` 的输出结果。你可以把光标移到文件名所在的行,按下 `-` 键将该文件暂存以待提交,再按一次 `-` 则取消暂存。这些信息会随着你的操作自动更新:
+
+```
+ 1 # On branch master
+ 2 # Your branch is up to date with 'origin/master'.
+ 3 #
+ 4 # Changes to be committed:
+ 5 # (use "git reset HEAD ..." to unstage)
+ 6 #
+ 7 # modified: vim-5plugins/examples/test1.go
+ 8 #
+--------------------------------------------------------------------------------------------------------
+ 1 package main
+ 2
+ 3 import "fmt"
+ 4
+_ 5 func main() {
+ 6 items := []string{"tv", "pc", "tablet"}
+ 7
+~ 8 if len(items) > 0 {
+ 9 for _, i := range items {
+ 10 fmt.Println(i)
++ 11 fmt.Println("------")
+ 12 }
+ 13 }
+ 14 }
+```
+
+现在你可以用 `:Gcommit` 来提交修改了。Vim Fugitive 会打开另一个分割窗口让你输入提交信息:
+
+```
+ 1 vim-5plugins: Updated test1.go example file
+ 2 # Please enter the commit message for your changes. Lines starting
+ 3 # with '#' will be ignored, and an empty message aborts the commit.
+ 4 #
+ 5 # On branch master
+ 6 # Your branch is up to date with 'origin/master'.
+ 7 #
+ 8 # Changes to be committed:
+ 9 # modified: vim-5plugins/examples/test1.go
+ 10 #
+```
+
+输入 `:wq` 保存文件,完成提交:
+
+```
+[master c3bf80f] vim-5plugins: Updated test1.go example file
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+Press ENTER or type command to continue
+```
+
+然后你可以再用 `:Gstatus` 检查结果并用 `:Gpush` 把新的提交推送到远程。
+
+```
+ 1 # On branch master
+ 2 # Your branch is ahead of 'origin/master' by 1 commit.
+ 3 # (use "git push" to publish your local commits)
+ 4 #
+ 5 nothing to commit, working tree clean
+```
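+
+作为对照,上面这套 Vim Fugitive 工作流大致相当于在终端里手动执行下面这些原生 Git 命令(仅作示意,文件路径取自文中的例子):
+
+```
+$ git blame vim-5plugins/examples/test1.go    # 对应 :Gblame
+$ git status                                  # 对应 :Gstatus
+$ git add vim-5plugins/examples/test1.go      # 对应在 :Gstatus 窗口中按 -
+$ git commit                                  # 对应 :Gcommit
+$ git push                                    # 对应 :Gpush
+```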
+
+Vim Fugitive 的 GitHub 项目主页有很多屏幕录像展示了它的更多功能和工作流,如果你喜欢它并想多学一些,快去看看吧。
+
+### 接下来?
+
+这些 Vim 插件都是程序开发者的神器!还有另外两类开发者常用的插件:自动完成插件和语法检查插件。它们大都是和具体的编程语言相关的,以后我会在一些文章中介绍它们。
+
+你在写代码时是否用到一些其它 Vim 插件?请在评论区留言分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/vim-plugins-developers
+
+作者:[Ricardo Gerardi][a]
+选题:[lujun9972][b]
+译者:[pityonline](https://github.com/pityonline)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rgerardi
+[b]: https://github.com/lujun9972
+[1]: https://www.vim.org/
+[2]: https://www.vim.org/scripts/script.php?script_id=3599
+[3]: https://github.com/jiangmiao/auto-pairs
+[4]: https://github.com/scrooloose/nerdcommenter
+[5]: http://vim.wikia.com/wiki/Filetype.vim
+[6]: https://www.vim.org/scripts/script.php?script_id=1697
+[7]: https://github.com/tpope/vim-surround
+[8]: https://github.com/airblade/vim-gitgutter
+[9]: https://www.vim.org/scripts/script.php?script_id=2975
+[10]: https://github.com/tpope/vim-fugitive
diff --git a/published/201902/20190110 Toyota Motors and its Linux Journey.md b/published/201902/20190110 Toyota Motors and its Linux Journey.md
new file mode 100644
index 0000000000..d89f4f2a29
--- /dev/null
+++ b/published/201902/20190110 Toyota Motors and its Linux Journey.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: (jdh8383)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10543-1.html)
+[#]: subject: (Toyota Motors and its Linux Journey)
+[#]: via: (https://itsfoss.com/toyota-motors-linux-journey)
+[#]: author: (Malcolm Dean https://itsfoss.com/toyota-motors-linux-journey)
+
+丰田汽车的 Linux 之旅
+======
+
+我之前跟丰田汽车北美分公司的 Brian R. Lyons(丰田发言人)聊了聊,话题是关于 Linux 在丰田和雷克萨斯汽车的信息娱乐系统上的实施方案。我了解到一些汽车制造商使用了 Automotive Grade Linux(AGL)。
+
+然后我写了一篇短文,记录了我和 Brian 的讨论内容,谈及了丰田和 Linux 的一些渊源。希望 Linux 的狂热粉丝们能够喜欢这次对话。
+
+全部的[丰田和雷克萨斯汽车都将会使用 Automotive Grade Linux(AGL)][1],主要是用于车载信息娱乐系统。这项措施对于丰田集团来说是至关重要的,因为据 Lyons 先生所说:“作为技术的引领者之一,丰田认识到,赶上科技快速进步最好的方法就是接受开源发展的理念。”
+
+丰田和众多汽车制造公司都认为,与使用非自由软件相比,采用基于 Linux 的操作系统在更新和升级方面会更加廉价和快捷。
+
+这简直太棒了!Linux 终于跟汽车结合起来了。我每天都在电脑上使用 Linux;能看到这个优秀的软件在一个完全不同的产业领域里大展拳脚真是太好了。
+
+我很好奇丰田是什么时候开始使用 [Automotive Grade Linux(AGL)][2]的。按照 Lyons 先生的说法,这要追溯到 2011 年。
+
+> “自 AGL 项目在五年前启动之始,作为活跃的会员和贡献者,丰田与其他顶级制造商和供应商展开合作,着手开发一个基于 Linux 的强大平台,并不断地增强其功能和安全性。”
+
+![丰田信息娱乐系统][3]
+
+[丰田于 2011 年加入了 Linux 基金会][4],与其他汽车制造商和软件公司就 IVI(车内信息娱乐系统)展开讨论,最终在 2012 年,Linux 基金会内部成立了 Automotive Grade Linux 工作组。
+
+丰田在 AGL 工作组里首先提出了“代码优先”的策略,这在开源领域是很常见的做法。然后丰田和其他汽车制造商、IVI 一线厂家、软件公司等各方展开对话,根据各方的技术需求详细制定了初始方向。
+
+在加入 Linux 基金会的时候,丰田就已经意识到,在一线公司之间共享软件代码将会是至关重要的。因为要维护如此复杂的软件系统,对于任何一家顶级厂商都是一笔不小的开销。丰田和它的一级供货商想把更多的资源用在开发新功能和新的用户体验上,而不是用在维护各自的代码上。
+
+各个汽车公司联合起来深入合作是一件大事。许多公司都达成了这样的共识,因为他们都发现开发维护私有软件其实更费钱。
+
+今天,在全球市场上,丰田和雷克萨斯的全部车型都使用了 AGL。
+
+身为雷克萨斯的销售人员,我认为这是一大进步。我和其他销售顾问都曾接待过很多回来找技术专员的客户,他们想更多的了解自己车上的信息娱乐系统到底都能做些什么。
+
+这件事本身对于 Linux 社区和用户是个重大利好。虽然那个我们每天都在使用的操作系统变了模样,被推到了更广阔的舞台上,但它仍然是那个 Linux,简单、包容而强大。
+
+未来将会如何发展呢?我希望它能少出差错,为消费者带来更佳的用户体验。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/toyota-motors-linux-journey
+
+作者:[Malcolm Dean][a]
+选题:[lujun9972][b]
+译者:[jdh8383](https://github.com/jdh8383)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/toyota-motors-linux-journey
+[b]: https://github.com/lujun9972
+[1]: https://www.linuxfoundation.org/press-release/2018/01/automotive-grade-linux-hits-road-globally-toyota-amazon-alexa-joins-agl-support-voice-recognition/
+[2]: https://www.automotivelinux.org/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/toyota-interiors.jpg?resize=800%2C450&ssl=1
+[4]: https://www.linuxfoundation.org/press-release/2011/07/toyota-joins-linux-foundation/
diff --git a/published/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md b/published/201902/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md
similarity index 100%
rename from published/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md
rename to published/201902/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md
diff --git a/published/201902/20190114 Remote Working Survival Guide.md b/published/201902/20190114 Remote Working Survival Guide.md
new file mode 100644
index 0000000000..0c51a15885
--- /dev/null
+++ b/published/201902/20190114 Remote Working Survival Guide.md
@@ -0,0 +1,133 @@
+[#]: collector: (lujun9972)
+[#]: translator: (beamrolling)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10518-1.html)
+[#]: subject: (Remote Working Survival Guide)
+[#]: via: (https://www.jonobacon.com/2019/01/14/remote-working-survival/)
+[#]: author: (Jono Bacon https://www.jonobacon.com/author/admin/)
+
+远程工作生存指南
+======
+
+
+
+远程工作似乎是最近的一个热门话题。CNBC 报道称,[70% 的专业人士至少每周在家工作一次][1]。同样地,CoSo Cloud 调查发现,[77% 的人在远程工作时效率更高][2],而 AfterCollege 的一份调查显示,[68% 的千禧一代会更多地考虑提供远程工作的公司][3]。这看起来很合理:技术、网络以及文化似乎越来越推动了远程工作的发展。哦,自制咖啡也比以前任何时候更好喝了。
+
+目前,我准备写另一篇关于公司如何优化远程工作的文章(所以请确保你[加入我们的会员][4]以持续关注——这是免费的)。
+
+但今天,我想 **分享一些个人如何做好远程工作的建议**。不管你是全职远程工作者,或者是可以选择一周某几天在家工作的人,希望这篇文章对你有用。
+
+眼下,你需要明白,**远程工作不是万能药**。当然,穿着睡衣满屋子乱逛,听听反社会音乐,喝一大杯咖啡看起来似乎挺完美的,但这不适合每个人。
+
+有的人需要办公室的空间。有的人需要办公室的社会元素。有的人需要从家里走出来。有的人在家里缺乏保持专注的自律。有的人因为好几年没有报税而怕政府工作人员来住处敲门。
+
+**远程工作就好像一块肌肉:如果你锻炼并且保持它,那么它能带来极大的力量和能力**。如果不这么做,结果就不一样了。
+
+在我职业生涯的大多数时间里,我在家工作。我喜欢这么做。当我在家工作的时候,我更有效率,更开心,更有能力。我并非不喜欢在办公室工作,我享受办公室的社会元素,但我更喜欢在家工作时我的“空间”。我喜欢听重金属音乐,但当整个办公室的人不想听到 [After The Burial][5] 的时候,这会引起一些问题。
+
+![][6]
+
+*“Squirrel.” [图片来源][7]*
+
+我已经学会了如何正确平衡工作、旅行以及其他元素来管理我的远程工作,以下是我的一些建议。请务必**在评论中分享一些你的建议**。
+
+### 1、你需要纪律和习惯(以及了解你的“波动”)
+
+远程工作确实是需要训练的一块肌肉。就像练出真正的肌肉一样,它需要一个明确的习惯混以健康的纪律。
+
+永远保持穿戴整齐(不要穿睡衣)。设置你一天工作的开始和结束时间(大多时候我从早上 9 点工作到下午 6 点)。选好你的午餐休息时间(我的是中午 12 点)。选好你的早晨仪式(我的是电子邮件,紧接着是全面审查客户需求)。决定你的主工作场所在哪(我的主工作场所是我家庭办公室)。决定好每天你什么时候运动(大多数时候我在下午 5 点运动)。
+
+**设计一个实际的习惯并坚持 66 天**。建立一个习惯需要很长时间,尽量不要偏离你的习惯。你越坚持这个习惯,做下去所花费的功夫越少。在这 66 天的末尾,你想都不会想,自然而然地就按习惯去做了。
+
+话虽这么说,我们又不是生活在真空(vacuum)里([或者说吸尘器里,随便你怎么理解这个双关][8])。我们都有自己的“波动”。
+
+“波动”是你为了改变做事的方法时,对日常做出的一些改变。举个例子,夏天的时候我通常需要更多的阳光。那时我经常会在室外的花园工作。临近假期的时候我更容易分心,所以我在上班时间会更需要呆在室内。有时候我只想要多点人际接触,因此我会在咖啡馆里工作几周。有时候我就是喜欢在厨房或者长椅上工作。你需要认识你的“波动”并倾听你的身体。 **首先养成习惯,然后在你认识到自己的“波动”的时候再对它进行适当的调整**。
+
+### 2、与你的上司及同事一起设立预期目标
+
+不是每个人都知道怎么远程工作,如果你的公司对远程工作没那么熟悉,你尤其需要和同事一起设立预期目标。
+
+这件事十分简单:**当你要设计自己的日常工作的时候,清楚地跟你的上司和团队进行交流。**让他们知道如何找到你,紧急情况下如何联系你,以及你在家的时候如何保持合作。
+
+在这里通信方式至关重要。有些远程工作者很怕离开他们的电脑,因为害怕当他们不在的时候有人给他们发消息(他们担心别人会觉得他们在边吃奇多边看 Netflix)。
+
+你需要离开一会的时间。你需要在吃午餐的时候眼睛不用一直盯着电脑屏幕。你又不是 911 接线员。**设定预期:有时候你可能不能立刻回复,但你会尽快回复**。
+
+同样地,对于你通常能够响应的时间范围,也要设定预期。举个例子,我对客户设立的预期是我一般每天早上 9 点到下午 6 点工作。当然,如果某个客户急需某样东西,我很乐意在这段时间外回应他,但作为一个一般性规则,我通常只在这段时间内工作。这对于生活的平衡是必要的。
+
+### 3、分心是你的敌人,它们需要管理
+
+我们都会分心,这是人类的本能。让你分心的事情可能是你的孩子回家了,想玩救援机器人;可能是看看 Facebook、Instagram 或者 Twitter 以确保你不会错过任何不受欢迎的政治观点,或者某人的午餐图片;可能是你生活中即将到来的某件事带走了你的注意力(例如,即将举办的婚礼、活动,或者一次大旅行)。
+
+**你需要明白什么让你分心以及如何管理它**。举个例子,我知道我的电子邮件和 Twitter 会让我分心。我经常查看它们,并且每次查看都会让我脱离我正在工作的空间。拿水或者咖啡的时候我总会分心去吃零食,看 YouTube 的视频。
+
+![][9]
+
+*我的分心克星*
+
+由数字信息造成的分心有一个简单对策:**锁起来**。关闭选项卡,直到你完成了你手头的事情。有一大堆工作的时候我总这么干:我把让我分心的东西锁起来,直到做完手头的工作。这需要自制力,但这一切本来就离不开自制力。
+
+因为别人影响而分心的因素更难解决。如果你是有家庭的人,你需要明确表示,在你工作的时候通常需要独处。这也是为什么家庭办公室这么重要:你需要设一些“爸爸/妈妈正在工作”的界限。除非有急事才能进来,否则就让孩子自个儿玩去。
+
+把让你分心的事锁起来有许多方法:把你的电话静音;把自己的 Facebook 状态设成“离开”;换到一个没有让你分心的事的房间(或建筑物)。再重申一次,了解是什么让你分心并控制好它。如果不这么做,你会永远被分心的事摆布。
+
+### 4、(良好的)关系需要面对面的关注
+
+有些角色比其他角色更适合远程工作。例如,我见过工程、质量保证、支持、安全以及其他团队(通常更专注于数字信息协作)的出色工作。其他团队,如设计或营销,往往在远程环境下更难熬(因为它们更注重触觉性)。
+
+但是,对于任何团队而言,建立牢固的关系至关重要,而现场讨论、协作和社交很有必要。我们的许多感官(例如肢体语言)在数字环境中被剔除,而这些在我们建立信任和关系的方式中发挥着关键作用。
+
+![][10]
+
+*火箭也很有帮助*
+
+在以下情况下,这一点尤为重要:(a)你初来这家公司,需要建立关系;(b)你刚接手某个角色,需要和你的团队建立关系;或者(c)你处于领导地位,推动团队的融入和参与是你工作的关键部分。
+
+**解决方法是?合理搭配远程工作与面对面的时间。** 如果你的公司就在附近,可以用一部分的时间在家工作,一部分时间在公司工作。如果你的公司比较远,安排定期前往办公室(并对你的上司设定你需要这么做的预期)。例如,当我在 XPRIZE 工作的时候,我每几周就会飞往洛杉矶几天。当我在 Canonical 工作时(总部在伦敦),我们每三个月来一次冲刺。
+
+### 5、保持专注,不要松懈
+
+本文所有内容的关键在于构建一种(远程工作的)能力,并培养远程工作的肌肉。这就像建立你的日常惯例,坚持它,并认识你的“波动”和让你分心的事情以及如何管理它们一样简单。
+
+我以一种相当具体的方式来看待这个世界:**我们所做的一切都有机会得到改进和完善**。举个例子,我已经公开演讲超过 15 年,但我总是能发现新的改进方法,以及修复新的错误(说到这些,请参阅我的 [提升你公众演讲的10个方法][11])。
+
+发现新的改进方法,把每个绊脚石和错误都看作开启又一个“啊哈!”时刻的机会,这件事本身就让人兴奋。远程工作也没什么不同:要寻找那些能让你的远程工作时间更高效、更舒适、更有趣的模式和方法。
+
+![][12]
+
+*看看这些书。它们非常适合个人发展。参阅我的 [150 美元个人发展工具包][13] 文章*
+
+……但别为此狂热。有的人把每一分钟都花在寻求如何变得更好上,常常因为“做得还不够好”、“完成度不够高”而打击自己,始终达不到他们内心里那种不切实际的完美标准。
+
+我们都是人,我们是有生命的,不是机器人。始终致力于改进,但要明白不是所有东西都是完美的。你应该有一些休息日或休息周。你也会因为压力和倦怠而挣扎。你也会遇到一些在办公室比远程工作更容易的情况。从这些时刻中学习,但不要沉迷于此。生命太短暂了。
+
+**你有什么提示,技巧和建议吗?你如何管理远程工作?我的建议中还缺少什么吗?在评论区中与我分享!**
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.jonobacon.com/2019/01/14/remote-working-survival/
+
+作者:[Jono Bacon][a]
+选题:[lujun9972][b]
+译者:[beamrolling](https://github.com/beamrolling)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.jonobacon.com/author/admin/
+[b]: https://github.com/lujun9972
+[1]: https://www.cnbc.com/2018/05/30/70-percent-of-people-globally-work-remotely-at-least-once-a-week-iwg-study.html
+[2]: http://www.cosocloud.com/press-release/connectsolutions-survey-shows-working-remotely-benefits-employers-and-employees
+[3]: https://www.aftercollege.com/cf/2015-annual-survey
+[4]: https://www.jonobacon.com/join/
+[5]: https://www.facebook.com/aftertheburial/
+[6]: https://www.jonobacon.com/wp-content/uploads/2019/01/aftertheburial2.jpg
+[7]: https://skullsnbones.com/burial-live-photos-vans-warped-tour-denver-co/
+[8]: https://www.youtube.com/watch?v=wK1PNNEKZBY
+[9]: https://www.jonobacon.com/wp-content/uploads/2019/01/IMG_20190114_102429-1024x768.jpg
+[10]: https://www.jonobacon.com/wp-content/uploads/2019/01/15381733956_3325670fda_k-1024x576.jpg
+[11]: https://www.jonobacon.com/2018/12/11/10-ways-to-up-your-public-speaking-game/
+[12]: https://www.jonobacon.com/wp-content/uploads/2019/01/DwVBxhjX4AgtJgV-1024x532.jpg
+[13]: https://www.jonobacon.com/2017/11/13/150-dollar-personal-development-kit/
diff --git a/translated/tech/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md b/published/201902/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md
similarity index 52%
rename from translated/tech/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md
rename to published/201902/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md
index 6638fb8fb7..c47bb62e94 100644
--- a/translated/tech/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md
+++ b/published/201902/20190115 Comparing 3 open source databases- PostgreSQL, MariaDB, and SQLite.md
@@ -1,86 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10512-1.html)
[#]: subject: (Comparing 3 open source databases: PostgreSQL, MariaDB, and SQLite)
[#]: via: (https://opensource.com/article/19/1/open-source-databases)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
开源数据库 PostgreSQL、MariaDB 和 SQLite 的对比
======
-> 要知道如何选择最适合你的需求的开源数据库。
+
+> 了解如何选择最适合你的需求的开源数据库。

-在现代企业的技术领域中,开源软件已经成为了一股不可忽视的重要力量。借助[开源运动open source movement][1]的东风,很多重大的技术已经得到了长足的发展。
+在现代的企业级技术领域中,开源软件已经成为了一股不可忽视的重要力量。借助[开源运动][1]open source movement的东风,涌现出了许多重大的技术突破。
-个中原因显而易见,尽管一些基于 Linux 的开源网络标准可能不如著名厂商的产品那么受欢迎,但是不同制造商的智能设备之间能够互相通信,开源技术功不可没。当然也有不少人认为开源应用比厂商提供的产品更加好,所以无论如何,使用开源数据库进行开发确实是相当有利的。
+个中原因显而易见,尽管一些基于 Linux 的开源网络标准可能不如专有厂商的那么受欢迎,但是不同制造商的智能设备之间能够互相通信,开源技术功不可没。当然也有不少人认为开源开发出来的应用比厂商提供的产品更加好,所以无论如何,使用开源数据库进行开发确实是相当有利的。
-和其它类型的应用软件一样,[不同的开源数据库管理系统之间在功能和特性上可能会存在着比较大的差异][2]。因此,如果要为整个团队选择一个开源数据库,那么应该重点考察数据库是否对用户友好、是否能够持续适应团队需求、是否能够提供足够安全的功能等方面的因素。
+和其它类型的应用软件一样,不同的开源数据库管理系统之间在功能和特性上可能会存在着比较大的差异。换言之,[不是所有的开源数据库都是平等的][2]。因此,如果要为整个组织选择一个开源数据库,那么应该重点考察数据库是否对用户友好、是否能够持续适应团队需求、是否能够提供足够安全的功能等方面的因素。
-出于这方面考虑,我们在这篇文章中对一些开源数据库进行了概述和对比。但也有可能略过了一些常用的数据库。需要提到的是,MongoDB 最近更改了它的许可证,因此它已经不是完全的开源产品了。从商业角度来看,这个决定是很有意义的,因为 MongoDB 已经成为了[约 27000 家公司][3]在数据库托管方面的实际解决方案,这也意味着 MongoDB 已经不再被视为真正的开源产品。
+出于这方面考虑,我们在这篇文章中对一些开源数据库进行了概述和优缺点对比。遗憾的是,我们必须忽略一些最常用的数据库。值得注意的是,MongoDB 最近更改了它的许可证,因此它已经不是真正的开源产品了。从商业角度来看,这个决定是很有意义的,因为 MongoDB 已经成为了数据库托管实际上的解决方案,[约 27000 家公司][3]在使用它,但这也意味着 MongoDB 已经不再被视为真正的开源产品。
-另外,在 MySQL 被 Oracle 收购之后,这个产品就已经不再具有开源性质了。MySQL 在过去相当长的一段时间里都是很多项目的首选数据库,因此它的案例也是摆在其它开源数据库面前的一个巨大挑战。
+另外,自从 MySQL 被 Oracle 收购之后,这个产品就已经不再具有开源性质了,MySQL 可以说是数十年来首选的开源数据库。然而,这为其它真正的开源数据库解决方案提供了挑战它的空间。
-下面讨论一下我们提到的三个开源数据库。
+下面是三个值得考虑的开源数据库。
### PostgreSQL
-[PostgreSQL][4] 可以说是开源数据库中的一个重要成员。无论是哪种规模的企业,PostgreSQL 可能都是它们的首选解决方案。Oracle 对 MySQL 的收购在当时来说可能具有一定的商业意义,但是随着云存储的日益壮大,[开发者对 MySQL 的依赖程度或许并不如以前那么大了][5]。
+没有 [PostgreSQL][4] 的开源数据库清单肯定是不完整的。PostgreSQL 一直都是各种规模企业的首选解决方案。Oracle 对 MySQL 的收购在当时来说可能具有一定的商业意义,但是随着云存储的日益壮大,[开发者对 MySQL 的依赖程度或许并不如以前那么大了][5]。
-尽管 PostgreSQL 不是一个最近几年才面世的新产品,但它却是借助了 [MySQL 衰落][6]的机会才逐渐成为最受欢迎的开源数据库之一。由于它和 MySQL 的工作方式非常相似,因此很多热衷于使用开源软件的开发者都纷纷转向 PostgreSQL。
+尽管 PostgreSQL 不是一个最近几年才面世的新产品,但它却是借助了 [MySQL 相对衰落][6]的机会才逐渐成为最受欢迎的开源数据库之一。由于它和 MySQL 的工作方式非常相似,因此很多热衷于使用开源软件的开发者都纷纷转向 PostgreSQL。
#### 优势
- * 目前 PostgreSQL 最显著的优点是它的算法效率高,因此它的性能就比其它的数据库也高一些。这一点在处理大型数据集的时候就可以很明显地体现出来了,否则在运算过程中 I/O 会成为瓶颈。
+ * 目前 PostgreSQL 最显著的优点是它的核心算法的效率,这意味着它的性能优于许多宣称更先进的数据库。这一点在处理大型数据集的时候就可以很明显地体现出来了,否则 I/O 处理会成为瓶颈。
* PostgreSQL 也是最灵活的开源数据库之一,使用 Python、Perl、Java、Ruby、C 或者 R 都能够很方便地调用数据库。
* 作为最常用的几个开源数据库之中,PostgreSQL 的社区支持是做得最好的。
-
-
-
#### 劣势
- * 在数据量比较大的时候,PostgreSQL 的效率毋庸置疑是很高的,但对于数据量较小的情况,使用 PostgreSQL 就显得不如其它的一些工具轻量级了。
-
+ * 在数据量比较大的时候,PostgreSQL 的效率毋庸置疑是很高的,但对于数据量较小的情况,使用 PostgreSQL 就显得不如其它的一些工具快了。
* 尽管拥有一个很优秀的社区支持,但 PostgreSQL 的核心文档仍然需要作出改进。
-
- * 如果你需要使用并行计算或者集群化等高级工具,就需要安装 PostgreSQL 的第三方插件。尽管官方有计划将这些功能逐步添加到主要版本当中,但可能会需要再等待好几年才能实现。
-
-
-
+ * 如果你需要使用并行计算或者集群化等高级工具,就需要安装 PostgreSQL 的第三方插件。尽管官方有计划将这些功能逐步添加到主要版本当中,但可能会需要再等待好几年才能出现在标准版本中。
### MariaDB
-[MariaDB][7] 是 MySQL 的真正开源发行版本(在 [GNU GPLv2][8] 下发布)。在 Oracle 收购 MySQL 之后,MySQL 的一些核心开发人员认为 Oracle 会破坏 MySQL 的开源理念,因此建立了 MariaDB 这个独立的分支。
+[MariaDB][7] 是 MySQL 的真正开源的发行版本(在 [GNU GPLv2][8] 下发布)。在 Oracle 收购 MySQL 之后,MySQL 的一些核心开发人员认为 Oracle 会破坏 MySQL 的开源理念,因此建立了 MariaDB 这个独立的分支。
-MariaDB 在开发过程中替换了 MySQL 的几个关键组件,但仍然尽可能地保持兼容 MySQL。MariaDB 使用了 Aria 作为存储引擎,这个存储引擎既可以作为事务式引擎,也可以作为非事务式引擎。在 MariaDB 独立出来之前,就[有一些人推测][10] Aria 会成为 MySQL 未来版本中的标准引擎。
+MariaDB 在开发过程中替换了 MySQL 的几个关键组件,但仍然尽可能地保持兼容 MySQL。MariaDB 使用了 Aria 作为存储引擎,这个存储引擎既可以作为事务式引擎,也可以作为非事务式引擎。在 MariaDB 分叉出来之前,就[有一些人推测][10] Aria 会成为 MySQL 未来版本中的标准引擎。
#### 优势
* 由于 MariaDB [频繁进行安全发布][11],很多用户选择使用 MariaDB 而不选择 MySQL。尽管这不一定代表 MariaDB 会比 MySQL 更加安全,但确实表明它的开发社区对安全性十分重视。
-
- * 有一些人认为,MariaDB 的主要优点就是它在坚持开源的同时会与 MySQL 保持高度兼容,这就表示从 MySQL 向 MariaDB 的迁移会非常容易。
-
+ * 有一些人认为,MariaDB 的主要优点就是它在坚持开源的同时会与 MySQL 保持高度兼容,这就意味着从 MySQL 向 MariaDB 的迁移会非常容易。
* 也正是由于这种兼容性,MariaDB 也可以和其它常用于 MySQL 的语言配合使用,因此从 MySQL 迁移到 MariaDB 之后,学习和调试代码的时间成本会非常低。
-
- * 你可以将 WordPress 和 MariaDB(而不是 MySQL)[配合使用][12]从而获得更好的性能和更丰富的功能。WordPress 是[最受欢迎的内容管理系统Content Management System][13](CMS),并且拥有活跃的开源开发者社区。各种第三方插件在 WordPress 和 MariaDB 配合使用时都能够正常工作。
-
-
-
+ * 你可以将 WordPress 和 MariaDB(而不是 MySQL)[配合使用][12]从而获得更好的性能和更丰富的功能。WordPress 是[最受欢迎的][13]内容管理系统Content Management System(CMS),占据了一半的互联网份额,并且拥有活跃的开源开发者社区。各种第三方插件在 WordPress 和 MariaDB 配合使用时都能够正常工作。
#### 劣势
* MariaDB 有时会变得比较臃肿,尤其是它的 IDX 日志文件在长期使用之后会变得非常大,最终导致性能下降。
-
- * MariaDB 的缓存并没有期望中那么快,这可能会让人有所失望。
-
+ * 缓存是 MariaDB 另一个有待改进的地方,它并没有期望中那么快,这可能会让人有所失望。
* 尽管 MariaDB 最初承诺兼容 MySQL,但目前 MariaDB 已经不是完全兼容 MySQL。如果要从 MySQL 迁移到 MariaDB,就需要额外做一些兼容工作。
-
-
-
### SQLite
[SQLite][14] 可以说是世界上实现最多的数据库引擎,因为它被很多流行的 web 浏览器、操作系统和手机所采用。它最初是作为 MySQL 的轻量级分支所开发的。SQLite 和很多其它的数据库不同,它不采用客户端-服务端的引擎架构,而是将整个软件嵌入到每个实现当中。
@@ -90,30 +72,20 @@ MariaDB 在开发过程中替换了 MySQL 的几个关键组件,但仍然尽
#### 优势
* 如果你需要构建和实现一个小型数据库,SQLite [可能是最好的选择][15]。它小而灵活,不需要费工夫寻求各种变通方案,就可以在嵌入式系统中实现。
-
- * SQLite 体积很小,因此速度也很快。其它的一些高级数据库可能会使用复杂的优化方式来提高效率,但不如SQLite 这样减小数据库大小更为直接。
-
- * SQLite 被广泛采用也导致它可能是兼容性最高的数据库。如果你希望将应用程序集成到智能手机上,只要有第三方应用程序使用到了 SQLite,就能够正常运行数据库了。
-
-
-
+ * SQLite 体积很小,因此速度极快。其它的一些高级数据库可能会使用复杂的优化方式来提高效率,但 SQLite 采用了一种更简单的方法:通过减小数据库及其处理软件的大小,以使处理的数据更少。
+ * SQLite 被广泛采用也导致它可能是兼容性最高的数据库。如果你希望将应用程序集成到智能手机上,这一点尤为重要:只要是可以工作于广泛环境中的第三方应用程序,就可以原生运行于 iOS 上。
#### 劣势
- * SQLite 的轻量意味着它缺少了很多其它大型数据库的常见功能。例如数据加密就是[抵御网络攻击][16]的标准功能,而 SQLite 却没有内置这个功能。
-
- * SQLite 的广泛流行和源码公开使它易于使用,但是也让它更容易遭受攻击。这是它最大的劣势。SQLite 经常被发现高位的漏洞,例如最近的 [Magellan][17]。
-
+ * SQLite 的体积小意味着它缺少了很多其它大型数据库的常见功能。例如数据加密就是[抵御黑客攻击][16]的标准功能,而 SQLite 却没有内置这个功能。
+ * SQLite 的广泛流行和源码公开使它易于使用,但是也让它更容易遭受攻击。这是它最大的劣势。SQLite 经常被发现高危的漏洞,例如最近的 [Magellan][17]。
* 尽管 SQLite 单文件的方式拥有速度上的优势,但是要使用它实现多用户环境却比较困难。
-
-
-
### 哪个开源数据库才是最好的?
-当然,对于开源数据库的选择还是取决于业务的需求以及系统的体量。对于小型数据库或者是使用量比较小的数据库,可以使用比较轻量级的解决方案,这样不仅可以加快实现的速度,而且由于系统的复杂程度不算太高,花在调试上的时间成本也不会太高。
+当然,对于开源数据库的选择还是取决于业务的需求,尤其是系统的体量。对于小型数据库或者是使用量比较小的数据库,可以使用比较轻量级的解决方案,这样不仅可以加快实现的速度,而且由于系统的复杂程度不算太高,花在调试上的时间成本也不会太高。
-而对于大型的系统,尤其是业务增长速度较快的业务,最好还是花时间使用更复杂的数据库(例如 PostgreSQL)。这是一个磨刀不误砍柴工的选择,能够让你不至于在后期再重新选择另一款数据库。
+而对于大型的系统,尤其是在成长性企业中,最好还是花时间使用更复杂的数据库(例如 PostgreSQL)。这是一个磨刀不误砍柴工的选择,能够让你不至于在后期再重新选择另一款数据库。
--------------------------------------------------------------------------------
@@ -122,7 +94,7 @@ via: https://opensource.com/article/19/1/open-source-databases
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201902/20190115 Getting started with Sandstorm, an open source web app platform.md b/published/201902/20190115 Getting started with Sandstorm, an open source web app platform.md
new file mode 100644
index 0000000000..ae67f2ede2
--- /dev/null
+++ b/published/201902/20190115 Getting started with Sandstorm, an open source web app platform.md
@@ -0,0 +1,60 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10535-1.html)
+[#]: subject: (Getting started with Sandstorm, an open source web app platform)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-sandstorm)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
+
+开始使用 Sandstorm 吧,一个开源 Web 应用平台
+======
+
+> 了解 Sandstorm,这是我们在开源工具系列中的第三篇,它将在 2019 年提高你的工作效率。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第三个工具来帮助你在 2019 年更有效率。
+
+### Sandstorm
+
+保持高效不仅仅需要待办事项以及让事情有组织。通常它需要一组工具以使工作流程顺利进行。
+
+
+
+[Sandstorm][1] 是打包的开源应用集合,它们都可从一个 Web 界面访问,也可在中央控制台进行管理。你可以自己托管或使用 [Sandstorm Oasis][2] 服务。它按用户收费。
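+
+如果选择自己托管,按照 Sandstorm 官网的说明,通常可以用类似下面的一条命令在自己的 Linux 服务器上完成安装(具体命令和步骤请以 sandstorm.io 上的最新文档为准):
+
+```
+$ curl https://install.sandstorm.io | bash
+```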
+
+
+
+Sandstorm 有一个市场,在这里可以轻松安装应用。应用包括效率类、财务、笔记、任务跟踪、聊天、游戏等等。你还可以按照[开发人员文档][3]中的应用打包指南打包自己的应用并上传它们。
+
+
+
+安装后,用户可以创建 [grain][4] - 容器化后的应用数据实例。默认情况下,grain 是私有的,它可以与其他 Sandstorm 用户共享。这意味着它们默认是安全的,用户可以选择与他人共享的内容。
+
+
+
+Sandstorm 可以从几个不同的外部源进行身份验证,也可以使用无需密码的基于电子邮件的身份验证。使用外部服务意味着如果你已使用其中一种受支持的服务,那么就无需管理另一组凭据。
+
+最后,Sandstorm 使安装和使用支持的协作应用变得快速,简单和安全。
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-sandstorm
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: https://sandstorm.io/
+[2]: https://oasis.sandstorm.io
+[3]: https://docs.sandstorm.io/en/latest/developing/
+[4]: https://sandstorm.io/how-it-works
diff --git a/published/201902/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md b/published/201902/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md
new file mode 100644
index 0000000000..760c2ed1cf
--- /dev/null
+++ b/published/201902/20190116 The Evil-Twin Framework- A tool for improving WiFi security.md
@@ -0,0 +1,230 @@
+[#]: collector: (lujun9972)
+[#]: translator: (hopefully2333)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10568-1.html)
+[#]: subject: (The Evil-Twin Framework: A tool for improving WiFi security)
+[#]: via: (https://opensource.com/article/19/1/evil-twin-framework)
+[#]: author: (André Esser https://opensource.com/users/andreesser)
+
+Evil-Twin 框架:一个用于提升 WiFi 安全性的工具
+======
+
+> 了解一款用于对 WiFi 接入点安全进行渗透测试的工具。
+
+
+
+越来越多的设备通过无线传输的方式连接到互联网,而大范围可用的 WiFi 接入点为攻击者提供了很多攻击用户的机会。通过欺骗用户连接到[虚假的 WiFi 接入点][1],攻击者可以完全控制用户的网络连接,这将使得攻击者可以嗅探和篡改用户的数据包,将用户的连接重定向到一个恶意的网站,并通过网络发起其他的攻击。
+
+为了保护用户并告诉他们如何避免线上的危险操作,安全审计人员和安全研究员必须评估用户的安全实践能力,用户常常在没有确认该 WiFi 接入点为安全的情况下就连接上了该网络,安全审计人员和研究员需要去了解这背后的原因。有很多工具都可以对 WiFi 的安全性进行审计,但是没有一款工具可以测试大量不同的攻击场景,也没有能和其他工具集成得很好的工具。
+
+Evil-Twin Framework(ETF)用于解决 WiFi 审计过程中的这些问题。审计者能够使用 ETF 来集成多种工具并测试该 WiFi 在不同场景下的安全性。本文会介绍 ETF 的框架和功能,然后会提供一些案例来说明该如何使用这款工具。
+
+### ETF 的架构
+
+ETF 的框架是用 [Python][2] 写的,因为这门开发语言的代码非常易读,也方便其他开发者向这个项目贡献代码。除此之外,很多 ETF 的库,比如 [Scapy][3],都是为 Python 开发的,很容易就能将它们用于 ETF。
+
+ETF 的架构(图 1)分为不同的彼此交互的模块。该框架的设置都写在一个单独的配置文件里。用户可以通过 `ConfigurationManager` 类里的用户界面来验证并修改这些配置。其他模块只能读取这些设置并根据这些设置进行运行。
+
+![Evil-Twin Framework Architecture][5]
+
+*图 1:Evil-Twin 的框架架构*
+
+ETF 支持多种与框架交互的用户界面,当前的默认界面是一个交互式控制台界面,类似于 [Metasploit][6] 那种。正在开发用于桌面/浏览器使用的图形用户界面(GUI)和命令行界面(CLI),移动端界面也是未来的一个备选项。用户可以使用交互式控制台界面来修改配置文件里的设置(最终会使用 GUI)。用户界面可以与存在于这个框架里的每个模块进行交互。
+
+WiFi 模块(AirCommunicator)用于支持多种 WiFi 功能和攻击类型。该框架确定了 WiFi 通信的三个基本支柱:数据包嗅探、自定义数据包注入和创建接入点。三个主要的 WiFi 通信模块 AirScanner、AirInjector 和 AirHost,分别用于数据包嗅探、数据包注入和接入点创建。这三个类被封装在主 WiFi 模块 AirCommunicator 中,AirCommunicator 在启动这些服务之前会先读取这些服务的配置文件。使用这些核心功能的一个或多个就可以构造任意类型的 WiFi 攻击。
+
+要使用中间人(MITM)攻击(这是一种攻击 WiFi 客户端的常见手法),ETF 有一个叫做 ETFITM(Evil-Twin Framework-in-the-Middle)的集成模块,这个模块用于创建一个 web 代理,来拦截和修改经过的 HTTP/HTTPS 数据包。
+
+许多其他的工具也可以利用 ETF 创建的 MITM。通过它的可扩展性,ETF 能够支持它们,而不必单独地调用它们,你可以通过扩展 Spawner 类来将这些工具添加到框架里。这使得开发者和安全审计人员可以使用框架里预先配置好的参数字符来调用程序。
+
+扩展 ETF 的另一种方法就是通过插件。有两类插件:WiFi 插件和 MITM 插件。MITM 插件是在 MITM 代理运行时可以执行的脚本。代理会将 HTTP(s) 请求和响应传递给可以记录和处理它们的插件。WiFi 插件遵循一个更加复杂的执行流程,但仍然会给想参与开发并且使用自己插件的贡献者提供一个相对简单的 API。WiFi 插件还可以进一步地划分为三类,其中每个对应一个核心 WiFi 通信模块。
+
+每个核心模块都有一些特定事件能触发响应的插件的执行。举个例子,AirScanner 有三个已定义的事件,可以对其响应进行编程处理。事件通常对应于服务开始运行之前的设置阶段、服务正在运行时的中间执行阶段、服务完成后的卸载或清理阶段。因为 Python 允许多重继承,所以一个插件可以继承多个插件类。
+
+上面的图 1 是框架架构的摘要。从 ConfigurationManager 指出的箭头意味着模块会从中读取信息,指向它的箭头意味着模块会写入/修改配置。
+
+### 使用 ETF 的例子
+
+ETF 可以通过多种方式对 WiFi 的网络安全或者终端用户的 WiFi 安全意识进行渗透测试。下面的例子描述了这个框架的一些渗透测试功能,例如接入点和客户端检测、对使用 WPA 和 WEP 类型协议的接入点进行攻击,和创建 evil twin 接入点。
+
+这些例子是使用 ETF 和允许进行 WiFi 数据捕获的 WiFi 卡设计的。它们也在 ETF 设置命令中使用了下面这些缩写:
+
+ * **APS** Access Point SSID
+ * **APB** Access Point BSSID
+ * **APC** Access Point Channel
+ * **CM** Client MAC address
+
+在实际的测试场景中,确保你使用了正确的信息来替换这些缩写。
+
+#### 在解除认证攻击后捕获 WPA 四次握手的数据包
+
+这个场景(图 2)做了两个方面的考虑:解除认证攻击de-authentication attack和捕获 WPA 四次握手数据包的可能性。这个场景从一个启用了 WPA/WPA2 的接入点开始,这个接入点有一个已经连上的客户端设备(在本例中是一台智能手机)。目的是通过常规的解除认证攻击(LCTT 译注:类似于 DoS 攻击)来让客户端断开与 WiFi 网络的连接,然后在客户端尝试重连的时候捕获 WPA 的握手包。重连会在断开连接后马上手动完成。
+
+![Scenario for capturing a WPA handshake after a de-authentication attack][8]
+
+*图 2:在解除认证攻击后捕获 WPA 握手包的场景*
+
+在这个例子中需要考虑的是 ETF 的可靠性。目的是确认工具是否一直都能捕获 WPA 的握手数据包。每个工具都会用来多次复现这个场景,以此来检查它们在捕获 WPA 握手数据包时的可靠性。
+
+使用 ETF 来捕获 WPA 握手数据包的方法不止一种。一种方法是使用 AirScanner 和 AirInjector 两个模块的组合;另一种方法是只使用 AirInjector。下面这个场景是使用了两个模块的组合。
+
+ETF 启用了 AirScanner 模块并分析 IEEE 802.11 数据帧来发现 WPA 握手包。然后 AirInjecto 就可以使用解除认证攻击来强制客户端断开连接,以进行重连。必须在 ETF 上执行下面这些步骤才能完成上面的目标:
+
+ 1. 进入 AirScanner 配置模式:`config airscanner`
+ 2. 设置 AirScanner 不跳信道:`set hop_channels = false`
+ 3. 设置信道以嗅探经过 WiFi 接入点信道的数据(APC):`set fixed_sniffing_channel = <APC>`
+ 4. 使用 CredentialSniffer 插件来启动 AirScanner 模块:`start airscanner with credentialsniffer`
+ 5. 从已嗅探的接入点列表中添加目标接入点的 SSID(APS):`add aps where ssid = <APS>`
+ 6. 启用 AirInjector 模块,在默认情况下,它会启用解除认证攻击:`start airinjector`
+
+这些简单的命令设置能让 ETF 在每次测试时执行成功且有效的解除认证攻击。ETF 也能在每次测试的时候捕获 WPA 的握手数据包。下面的代码能让我们看到 ETF 成功的执行情况。
+
+```
+███████╗████████╗███████╗
+██╔════╝╚══██╔══╝██╔════╝
+█████╗ ██║ █████╗
+██╔══╝ ██║ ██╔══╝
+███████╗ ██║ ██║
+╚══════╝ ╚═╝ ╚═╝
+
+
+[+] Do you want to load an older session? [Y/n]: n
+[+] Creating new temporary session on 02/08/2018
+[+] Enter the desired session name:
+ETF[etf/aircommunicator/]::> config airscanner
+ETF[etf/aircommunicator/airscanner]::> listargs
+ sniffing_interface = wlan1; (var)
+ probes = True; (var)
+ beacons = True; (var)
+ hop_channels = false; (var)
+fixed_sniffing_channel = 11; (var)
+ETF[etf/aircommunicator/airscanner]::> start airscanner with
+arpreplayer caffelatte credentialsniffer packetlogger selfishwifi
+ETF[etf/aircommunicator/airscanner]::> start airscanner with credentialsniffer
+[+] Successfully added credentialsniffer plugin.
+[+] Starting packet sniffer on interface 'wlan1'
+[+] Set fixed channel to 11
+ETF[etf/aircommunicator/airscanner]::> add aps where ssid = CrackWPA
+ETF[etf/aircommunicator/airscanner]::> start airinjector
+ETF[etf/aircommunicator/airscanner]::> [+] Starting deauthentication attack
+ - 1000 bursts of 1 packets
+ - 1 different packets
+[+] Injection attacks finished executing.
+[+] Starting post injection methods
+[+] Post injection methods finished
+[+] WPA Handshake found for client '70:3e:ac:bb:78:64' and network 'CrackWPA'
+```
+
+#### 使用 ARP 重放攻击并破解 WEP 无线网络
+
+下面这个场景(图 3)将关注[地址解析协议][9](ARP)重放攻击的效率,以及捕获包含初始化向量(IV)的 WEP 数据包的速度。破解同一个网络可能需要不同数量的 IV,所以这个场景给 IV 数量设置了 50000 的上限。如果这个网络在首次测试期间,还未捕获到 50000 个 IV 就被破解了,那么实际捕获到的 IV 数量会成为这个网络在接下来的测试里的新的上限。我们使用 `aircrack-ng` 对数据包进行破解。
+
+测试场景从一个使用 WEP 协议进行加密的 WiFi 接入点和一台知道其密钥的离线客户端设备开始 —— 为了测试方便,密钥使用了 12345,但它可以是更长且更复杂的密钥。一旦客户端连接到了 WEP 接入点,它会发送一个免费 ARP(gratuitous ARP)数据包;这是要捕获和重放的数据包。一旦被捕获的包含 IV 的数据包数量达到了设置的上限,测试就结束了。
+
+![Scenario for capturing a WPA handshake after a de-authentication attack][11]
+
+*图 3:使用 ARP 重放攻击并破解 WEP 无线网络的场景*
+
+ETF 使用 Python 的 Scapy 库来进行包嗅探和包注入。为了最大限度地解决 Scapy 里的已知的性能问题,ETF 微调了一些底层库,来大大加快包注入的速度。对于这个特定的场景,ETF 为了更有效率地嗅探,使用了 `tcpdump` 作为后台进程而不是 Scapy,Scapy 用于识别加密的 ARP 数据包。
+
+这个场景需要在 ETF 上执行下面这些命令和操作:
+
+ 1. 进入 AirScanner 设置模式:`config airscanner`
+ 2. 设置 AirScanner 不跳信道:`set hop_channels = false`
+ 3. 设置信道以嗅探经过接入点信道的数据(APC):`set fixed_sniffing_channel = <APC>`
+ 4. 进入 ARPReplayer 插件设置模式:`config arpreplayer`
+ 5. 设置 WEP 网络目标接入点的 BSSID(APB):`set target_ap_bssid <APB>`
+ 6. 使用 ARPReplayer 插件启动 AirScanner 模块:`start airscanner with arpreplayer`
+
+在执行完这些命令后,ETF 会正确地识别加密的 ARP 数据包,然后成功执行 ARP 重放攻击,以此破解这个网络。
+
+#### 使用一款全能型蜜罐
+
+图 4 中的场景使用相同的 SSID 创建了多个接入点,对于那些可以探测到但是无法接入的 WiFi 网络,这个技术可以发现网络的加密类型。通过启动具有所有安全设置的多个接入点,客户端会自动连接和本地缓存的接入点信息相匹配的接入点。
+
+![Scenario for capturing a WPA handshake after a de-authentication attack][13]
+
+*图 4:全能蜜罐场景*
+
+使用 ETF,可以去设置 `hostapd` 配置文件,然后在后台启动该程序。`hostapd` 支持在一张无线网卡上通过设置虚拟接口开启多个接入点,并且因为它支持所有类型的安全设置,因此可以设置完整的全能蜜罐。对于使用 WEP 和 WPA(2)-PSK 的网络,使用默认密码;而对于使用 WPA(2)-EAP 的网络,则配置“全部接受”策略。
+
+对于这个场景,必须在 ETF 上执行下面的命令和操作:
+
+ 1. 进入 APLauncher 设置模式:`config aplauncher`
+ 2. 设置目标接入点的 SSID(APS):`set ssid = <APS>`
+ 3. 设置 APLauncher 为全部接收的蜜罐:`set catch_all_honeypot = true`
+ 4. 启动 AirHost 模块:`start airhost`
+
+使用这些命令,ETF 可以启动一个包含所有类型安全配置的完整全能蜜罐。ETF 同样能自动启动 DHCP 和 DNS 服务器,从而让客户端能与互联网保持连接。ETF 提供了一个更好、更快、更完整的解决方案来创建全能蜜罐。下面的代码能够看到 ETF 的成功执行。
+
+```
+███████╗████████╗███████╗
+██╔════╝╚══██╔══╝██╔════╝
+█████╗ ██║ █████╗
+██╔══╝ ██║ ██╔══╝
+███████╗ ██║ ██║
+╚══════╝ ╚═╝ ╚═╝
+
+
+[+] Do you want to load an older session? [Y/n]: n
+[+] Creating ne´,cxzw temporary session on 03/08/2018
+[+] Enter the desired session name:
+ETF[etf/aircommunicator/]::> config aplauncher
+ETF[etf/aircommunicator/airhost/aplauncher]::> setconf ssid CatchMe
+ssid = CatchMe
+ETF[etf/aircommunicator/airhost/aplauncher]::> setconf catch_all_honeypot true
+catch_all_honeypot = true
+ETF[etf/aircommunicator/airhost/aplauncher]::> start airhost
+[+] Killing already started processes and restarting network services
+[+] Stopping dnsmasq and hostapd services
+[+] Access Point stopped...
+[+] Running airhost plugins pre_start
+[+] Starting hostapd background process
+[+] Starting dnsmasq service
+[+] Running airhost plugins post_start
+[+] Access Point launched successfully
+[+] Starting dnsmasq service
+```
+
+### 结论和以后的工作
+
+这些场景使用常见和众所周知的攻击方式来帮助验证 ETF 测试 WiFi 网络和客户端的能力。这些结果同样证明了,利用该框架的架构,可以在平台现有功能的基础上开发新的攻击向量和功能。这会加快新的 WiFi 渗透测试工具的开发,因为很多代码已经写好了。除此之外,将 WiFi 技术相关的东西都集成到一个单独的工具里,会使 WiFi 渗透测试更加简单高效。
+
+ETF 的目标不是取代现有的工具,而是为它们提供补充,并为安全审计人员在进行 WiFi 渗透测试和提升用户安全意识时,提供一个更好的选择。
+
+ETF 是 [GitHub][14] 上的一个开源项目,欢迎社区为它的开发做出贡献。下面是一些您可以提供帮助的方法。
+
+当前 WiFi 渗透测试的一个限制是无法在测试期间记录重要的事件。这使得报告已经识别到的漏洞更加困难且准确性更低。这个框架可以实现一个记录器,每个类都可以来访问它并创建一个渗透测试会话报告。
+
+ETF 工具的功能涵盖了 WiFi 渗透测试的方方面面。一方面,它让 WiFi 目标侦察、漏洞挖掘和攻击这些阶段变得更加容易。另一方面,它没有提供一个便于提交报告的功能。增加了会话的概念和会话报告的功能,比如在一个会话期间记录重要的事件,会极大地增加这个工具对于真实渗透测试场景的价值。
+
+另一个有价值的贡献是扩展该框架来促进 WiFi 模糊测试。IEEE 802.11 协议非常的复杂,考虑到它在客户端和接入点两方面都会有多种实现方式。可以假设这些实现都包含 bug 甚至是安全漏洞。这些 bug 可以通过对 IEEE 802.11 协议的数据帧进行模糊测试来进行发现。因为 Scapy 允许自定义的数据包创建和数据包注入,可以通过它实现一个模糊测试器。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/evil-twin-framework
+
+作者:[André Esser][a]
+选题:[lujun9972][b]
+译者:[hopefully2333](https://github.com/hopefully2333)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/andreesser
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Rogue_access_point
+[2]: https://www.python.org/
+[3]: https://scapy.net
+[4]: /file/417776
+[5]: https://opensource.com/sites/default/files/uploads/pic1.png (Evil-Twin Framework Architecture)
+[6]: https://www.metasploit.com
+[7]: /file/417781
+[8]: https://opensource.com/sites/default/files/uploads/pic2.png (Scenario for capturing a WPA handshake after a de-authentication attack)
+[9]: https://en.wikipedia.org/wiki/Address_Resolution_Protocol
+[10]: /file/417786
+[11]: https://opensource.com/sites/default/files/uploads/pic3.png (Scenario for capturing a WPA handshake after a de-authentication attack)
+[12]: /file/417791
+[13]: https://opensource.com/sites/default/files/uploads/pic4.png (Scenario for capturing a WPA handshake after a de-authentication attack)
+[14]: https://github.com/Esser420/EvilTwinFramework
diff --git a/translated/tech/20190117 How to Update-Change Users Password in Linux Using Different Ways.md b/published/201902/20190117 How to Update-Change Users Password in Linux Using Different Ways.md
similarity index 63%
rename from translated/tech/20190117 How to Update-Change Users Password in Linux Using Different Ways.md
rename to published/201902/20190117 How to Update-Change Users Password in Linux Using Different Ways.md
index 2c8cc10e3c..0b0b5132bd 100644
--- a/translated/tech/20190117 How to Update-Change Users Password in Linux Using Different Ways.md
+++ b/published/201902/20190117 How to Update-Change Users Password in Linux Using Different Ways.md
@@ -1,50 +1,37 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10514-1.html)
[#]: subject: (How to Update/Change Users Password in Linux Using Different Ways)
[#]: via: (https://www.2daygeek.com/linux-passwd-chpasswd-command-set-update-change-users-password-in-linux-using-shell-script/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
-如何使用不同的方式更新或更改 Linux 用户密码
+如何使用不同的方式更改 Linux 用户密码
======
-在 Linux 中创建用户账号时,设置用户密码是一件基本的事情。
+在 Linux 中创建用户账号时,设置用户密码是一件基本的事情。每个人都使用 `passwd` 命令跟上用户名,比如 `passwd USERNAME` 来为用户设置密码。
-每个人都使用 passwd 命令和用户名,比如 `passwd USERNAME` 来为用户设置密码。
+确保你一定要设置一个难以猜测的密码,这可以帮助你使系统更安全。我的意思是,密码应该是字母、符号和数字的组合。此外,出于安全原因,我建议你至少每月更改一次密码。
-确保你一定设置一个难以猜测的密码,这可以帮助你使系统更安全。
+当你使用 `passwd` 命令时,它会要求你输入两次密码来设置。这是一种设置用户密码的原生方法。
-我的意思是,密码应该是字母,符合和数字的组合。
+如果你不想两次更新密码,并希望以不同的方式进行更新,怎么办呢?当然,这可以的,有可能做到。
-此外,出于安全原因,我建议你至少每月更改一次密码。
-
-当你使用 passwd 命令时,它会要求你输入两次密码来设置。这是一种设置用户密码的原生方法。
-
-如果你不想两次更新密码,并希望以不同的方式进行更新,怎么办呢?
-
-当然,这可以的,有可能做到。
-
-如果你是 Linux 管理员,你可能已经多次问过下面的问题。
-
-你们可能,也可能没有得到这些问题的答案。
+如果你是 Linux 管理员,你可能已经多次问过下面的问题。你可能得到过这些问题的答案,也可能没有。
无论如何,不要担心,我们会回答你所有的问题。
- * 如何用一条命令更新或更改用户密码?
- * 如何在 Linux 中为多个用户更新或更改相同的密码?
- * 如何在 Linux 中更新或更改多个用户的密码?
- * 如何在 Linux 中更新或更改多个用户的密码?(to 校正:这句和上一句有不同?)
- * 如何在 Linux 中更新或更改多个用户的不同密码?
- * 如何在多个 Linux 服务器中更新或更改用户的密码?
- * 如何在多个 Linux 服务器中更新或更改多个用户的密码?
+ * 如何用一条命令更改用户密码?
+ * 如何在 Linux 中为多个用户更改为相同的密码?
+ * 如何在 Linux 中更改多个用户的密码?
+ * 如何在 Linux 中为多个用户更改为不同的密码?
+ * 如何在多个 Linux 服务器中更改用户的密码?
+ * 如何在多个 Linux 服务器中更改多个用户的密码?
+### 方法-1:使用 passwd 命令
-
-### 方法-1: 使用 passwd 命令
-
-passwd 命令是在 Linux 中为用户设置,更新或更改密码的标准方法。以下是标准方法。
+`passwd` 命令是在 Linux 中为用户设置、更改密码的标准方法。以下是标准方法。
```
# passwd renu
@@ -63,17 +50,17 @@ Changing password for user thanu.
passwd: all authentication tokens updated successfully.
```
-### 方法-2: 使用 chpasswd 命令
+### 方法-2:使用 chpasswd 命令
-chpasswd 是另一个命令,允许我们为 Linux 中的用户设置,更新或更改密码。如果希望在一条命令中使用 chpasswd 命令更改用户密码,用以下格式。
+`chpasswd` 是另一个命令,允许我们为 Linux 中的用户设置、更改密码。如果希望在一条命令中使用 `chpasswd` 命令更改用户密码,用以下格式。
```
# echo "thanu:new_password" | chpasswd
```
-### 方法-3: 如何为多个用户设置不同的密码
+### 方法-3:如何为多个用户设置不同的密码
-如果你要为 Linux 中的多个用户设置,更新或更改密码,并且使用不同的密码,使用以下脚本。
+如果你要为 Linux 中的多个用户设置、更改密码,并且使用不同的密码,使用以下脚本。
为此,首先我们需要使用以下命令获取用户列表。下面的命令将列出拥有 `/home` 目录的用户,并将输出重定向到 `user-list.txt` 文件。
@@ -81,7 +68,7 @@ chpasswd 是另一个命令,允许我们为 Linux 中的用户设置,更新
# cat /etc/passwd | grep "/home" | cut -d":" -f1 > user-list.txt
```
-使用 cat 命令列出用户。如果你不想重置特定用户的密码,那么从列表中移除该用户。
+使用 `cat` 命令列出用户。如果你不想重置特定用户的密码,那么从列表中移除该用户。
```
# cat user-list.txt
@@ -92,7 +79,7 @@ thanu
renu
```
-创建以下小 shell 脚本来实现此目的。
+创建以下 shell 小脚本来实现此目的。
```
# vi password-update.sh
@@ -130,9 +117,9 @@ Changing password for user renu.
passwd: all authentication tokens updated successfully.
```
-### 方法-4: 如何为多个用户设置相同的密码
+### 方法-4:如何为多个用户设置相同的密码
-如果要在 Linux 中为多个用户设置,更新或更改相同的密码,使用以下脚本。
+如果要在 Linux 中为多个用户设置、更改相同的密码,使用以下脚本。
```
# vi password-update.sh
@@ -145,7 +132,7 @@ chage -d 0 $user
done
```
-### 方法-5: 如何在多个服务器中更改用户密码
+### 方法-5:如何在多个服务器中更改用户密码
如果希望更改多个服务器中的用户密码,使用以下脚本。在本例中,我们将更改 `renu` 用户的密码,确保你必须提供你希望更新密码的用户名而不是我们的用户名。
@@ -179,9 +166,9 @@ Retype new password: Changing password for user renu.
passwd: all authentication tokens updated successfully.
```
-### 方法-6: 如何使用 pssh 命令更改多个服务器中的用户密码
+### 方法-6:如何使用 pssh 命令更改多个服务器中的用户密码
-pssh 是一个在多个主机上并行执行 ssh 的程序。它提供了一些特性,例如向所有进程发送输入,向 sshh 传递密码,将输出保存到文件以及超时处理。导航到以下链接以了解关于 **[PSSH 命令][1]**的更多信息。
+`pssh` 是一个在多个主机上并行执行 ssh 连接的程序。它提供了一些特性,例如向所有进程发送输入,向 ssh 传递密码,将输出保存到文件以及超时处理。导航到以下链接以了解关于 [PSSH 命令][1]的更多信息。
```
# pssh -i -h /tmp/server-list.txt "printf '%s\n' new_pass new_pass | passwd --stdin root"
@@ -203,9 +190,9 @@ Stderr: New password: BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
```
-### 方法-7: 如何使用 chpasswd 命令更改多个服务器中的用户密码
+### 方法-7:如何使用 chpasswd 命令更改多个服务器中的用户密码
-或者,我们可以使用 chpasswd 命令更新多个服务器中的用户密码。
+或者,我们可以使用 `chpasswd` 命令更新多个服务器中的用户密码。
```
# ./password-update.sh
@@ -217,13 +204,21 @@ ssh [email protected]$server 'echo "magi:new_password" | chpasswd'
done
```
-### 方法-8: 如何使用 chpasswd 命令在 Linux 服务器中更改多个用户的密码
+### 方法-8:如何使用 chpasswd 命令在 Linux 服务器中更改多个用户的密码
为此,首先创建一个文件,以下面的格式更新用户名和密码。在本例中,我创建了一个名为 `user-list.txt` 的文件。
参考下面的详细信息。
-创建下面的小 shell 脚本来实现这一点。
+```
+# cat user-list.txt
+magi:new@123
+daygeek:new@123
+thanu:new@123
+renu:new@123
+```
+
+创建下面的 shell 小脚本来实现这一点。
```
# vi password-update.sh
@@ -242,7 +237,7 @@ via: https://www.2daygeek.com/linux-passwd-chpasswd-command-set-update-change-us
作者:[Vinoth Kumar][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20190119 Get started with Roland, a random selection tool for the command line.md b/published/201902/20190119 Get started with Roland, a random selection tool for the command line.md
similarity index 100%
rename from published/20190119 Get started with Roland, a random selection tool for the command line.md
rename to published/201902/20190119 Get started with Roland, a random selection tool for the command line.md
diff --git a/published/20190120 Get started with HomeBank, an open source personal finance app.md b/published/201902/20190120 Get started with HomeBank, an open source personal finance app.md
similarity index 100%
rename from published/20190120 Get started with HomeBank, an open source personal finance app.md
rename to published/201902/20190120 Get started with HomeBank, an open source personal finance app.md
diff --git a/published/201902/20190121 Get started with TaskBoard, a lightweight kanban board.md b/published/201902/20190121 Get started with TaskBoard, a lightweight kanban board.md
new file mode 100644
index 0000000000..bff5c209c7
--- /dev/null
+++ b/published/201902/20190121 Get started with TaskBoard, a lightweight kanban board.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10539-1.html)
+[#]: subject: (Get started with TaskBoard, a lightweight kanban board)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-taskboard)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
+
+开始使用 TaskBoard 吧,一款轻量级看板
+======
+
+> 了解我们在开源工具系列中的第九个工具,它将帮助你在 2019 年提高工作效率。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第九个工具来帮助你在 2019 年更有效率。
+
+### TaskBoard
+
+正如我在本系列的[第二篇文章][1]中所写的那样,[看板][2]现在非常受欢迎。但并非所有的看板都是相同的。[TaskBoard][3] 是一个易于在现有 Web 服务器上部署的 PHP 应用,它有一些易于使用和管理的功能。
+
+
+
+[安装][4]它只需要把文件解压到 Web 服务器上,运行一两个脚本,并确保目录可正常访问。第一次启动时,你会看到一个登录页面,然后就可以添加用户和创建看板了。看板创建选项包括添加要使用的列以及设置卡片的默认颜色。你还可以将用户分配给指定看板,这样每个人都只能看到他们需要查看的看板。
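+
+下面是一个部署过程的示意(网站根目录、压缩包文件名和 Web 服务器用户都是假设的,实际步骤请以上面链接的安装文档为准):
+
+```
+# 假设:网站根目录为 /var/www/html,Web 服务器用户为 www-data
+$ cd /var/www/html
+$ sudo unzip ~/taskboard.zip -d taskboard
+$ sudo chown -R www-data:www-data taskboard
+```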
+
+用户管理是轻量级的,所有帐户都是服务器的本地帐户。你可以为服务器上的每个用户设置默认看板,用户也可以设置自己的默认看板。当有人在多个看板上工作时,这个选项非常有用。
+
+
+
+TaskBoard 还允许你创建自动操作,包括更改用户分配、列或卡片类别这些操作。虽然 TaskBoard 不如其他一些看板应用那么强大,但你可以设置自动操作,使看板用户更容易看到卡片、清除截止日期,并根据需要自动为人们分配新卡片。例如,在下面的截图中,如果将卡片分配给 “admin” 用户,那么它的颜色将更改为红色,并且当将卡片分配给我的用户时,其颜色将更改为蓝绿色。如果项目已添加到“待办事项”列,我还添加了一个操作来清除项目的截止日期,并在发生这种情况时自动将卡片分配给我的用户。
+
+
+
+卡片非常简单。虽然它们没有开始日期,但它们确实有结束日期和点数字段。点数可用于估计所需的时间、所需的工作量或仅是一般优先级。使用点数是可选的,但如果你使用 TaskBoard 进行 scrum 规划或其他敏捷技术,那么这是一个非常方便的功能。你还可以按用户和类别过滤视图。这对于正在进行多个工作流的团队非常有用,因为它允许团队负责人或经理了解进度状态或人员工作量。
+
+
+
+如果你需要一个相当轻便的看板,请看下 TaskBoard。它安装快速,有一些很好的功能,且非常、非常容易使用。它还有足够的灵活性,可用于开发团队、个人任务跟踪等等。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-taskboard
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10454-1.html
+[2]: https://en.wikipedia.org/wiki/Kanban
+[3]: https://taskboard.matthewross.me/
+[4]: https://taskboard.matthewross.me/docs/
diff --git a/translated/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md b/published/201902/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md
similarity index 62%
rename from translated/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md
rename to published/201902/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md
index 266f82a30c..ed9f269196 100644
--- a/translated/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md
+++ b/published/201902/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md
@@ -1,83 +1,79 @@
[#]: collector: (lujun9972)
-[#]: translator: (dianbanjiu )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: translator: (dianbanjiu)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10516-1.html)
[#]: subject: (Dcp (Dat Copy) – Easy And Secure Way To Transfer Files Between Linux Systems)
[#]: via: (https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
-Dcp(Dat Copy)—— 在不同 Linux 系统之间传输文件的安全又容易的方式
+dcp:采用对等网络传输文件的方式
======
-Linux 本就有 scp 和 rsync 可以完美地完成这个任务。然而我们今天还是想试点新东西。
-
-同时我们也很鼓励那些使用不同的理论和新技术开发新东西的开发者。
+Linux 本就有 `scp` 和 `rsync` 可以完美地完成这个任务。然而我们今天还是想尝试一点新东西。同时我们也想鼓励那些使用不同的理论和新技术开发新东西的开发者。
我们也写过其他很多有关这个主题的文章,你可以点击下面的链接访问这些内容。
-它们分别是 **[OnionShare][1]**,**[Magic Wormhole][2]**,**[Transfer.sh][3]** 和 **ffsend**。
+它们分别是 [OnionShare][1]、[Magic Wormhole][2]、[Transfer.sh][3] 和 ffsend。
-### 什么是 Dcp?
+### 什么是 dcp?
-[dcp][4] 在不同主机之间使用对等网络复制文件。
+[dcp][4] 可以在不同主机之间使用 Dat 对等网络复制文件。
-dcp 被视作一个像是 scp 这样工具的替代品,无需在主机间进行 SSH 授权。
+`dcp` 被视作一个像是 `scp` 这样工具的替代品,而无需在主机间进行 SSH 授权。
-这可以让你在两个主机间传输文件时,无需操心所述主机之间互相访问的细节,以及这些主机是否使用了 NATs。
+这可以让你在两个主机间传输文件时,无需操心所述主机之间互相访问的细节,以及这些主机是否使用了 NAT。
-dcp 零配置、安全、快速、且是 P2P 传输。这并不是一个商用软件,使用产生的风险将由使用者自己承担。
+`dcp` 零配置、安全、快速、且是 P2P 传输。这并不是一个商用软件,使用产生的风险将由使用者自己承担。
### 什么是 Dat 协议
-Dat 是一个 P2P 协议,是一个致力于下一代 Web 的社区驱动的项目。
+Dat 是一个 P2P 协议,是一个致力于下一代 Web 的由社区驱动的项目。
### dcp 如何工作
-dcp 将会为指定的文件或者文件夹创建一个 dat 归档,并生成一个公钥,使用这个公钥可以让其他人从另外一台主机上下载上面的文件。
+`dcp` 将会为指定的文件或者文件夹创建一个 dat 归档,并生成一个公开密钥,使用这个公开密钥可以让其他人从另外一台主机上下载上面的文件。
-使用网络共享的任何数据都使用归档的公钥加密,也就是说文件的接收权仅限于那些知道公钥的人。
+使用网络共享的任何数据都使用该归档的公开密钥加密,也就是说文件的接收权仅限于那些拥有该公开密钥的人。
### dcp 使用案例
- * 向多个同事发送文件 —— 只需要告诉他们生成的公钥,然后他们就可以在他们的机器上收到对应的文件了。
+ * 向多个同事发送文件 —— 只需要告诉他们生成的公开密钥,然后他们就可以在他们的机器上收到对应的文件了。
* 无需设置 SSH 授权就可以在你本地网络的两个不同物理机上同步文件。
* 无需压缩文件并把文件上传到云端就可以轻松地发送文件。
* 当你有 shell 授权而没有 SSH 授权时也可以复制文件到远程服务器上。
* 在没有很好的 SSH 支持的 Linux/macOS 以及 Windows 系统之间分享文件。
-
-
### 如何在 Linux 上安装 NodeJS & npm?
-dcp 是用 JavaScript 写成的,所以在安装 dcp 前,需要先安装 NodeJS。在 Linux 上使用下面的命令安装 NodeJS。
+`dcp` 是用 JavaScript 写成的,所以在安装 `dcp` 前,需要先安装 NodeJS。在 Linux 上使用下面的命令安装 NodeJS。
-**Fedora** 系统,使用 **[DNF 命令][5]** 安装 NodeJS & npm。
+Fedora 系统,使用 [DNF 命令][5] 安装 NodeJS & npm。
```
$ sudo dnf install nodejs npm
```
-**`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][6]** 或者 **[APT 命令][6]** 安装 NodeJS & npm。
+Debian/Ubuntu 系统,使用 [APT-GET 命令][6] 或者 [APT 命令][6] 安装 NodeJS & npm。
```
$ sudo apt install nodejs npm
```
-**`Arch Linux`** 系统,使用 **[Pacman 命令][8]** 安装 NodeJS & npm。
+Arch Linux 系统,使用 [Pacman 命令][8] 安装 NodeJS & npm。
```
$ sudo pacman -S nodejs npm
```
-**`RHEL/CentOS`** 系统,使用 **[YUM 命令][9]** 安装 NodeJS & npm。
+RHEL/CentOS 系统,使用 [YUM 命令][9] 安装 NodeJS & npm。
```
$ sudo yum install epel-release
$ sudo yum install nodejs npm
```
-**`openSUSE Leap`** 系统,使用 **[Zypper 命令][10]** 安装 NodeJS & npm。
+openSUSE Leap 系统,使用 [Zypper 命令][10] 安装 NodeJS & npm。
```
$ sudo zypper nodejs6
@@ -85,26 +81,27 @@ $ sudo zypper nodejs6
### 如何在 Linux 上安装 dcp?
-在安装好 NodeJS 后,使用下面的 npm 命令安装 dcp。
+在安装好 NodeJS 后,使用下面的 `npm` 命令安装 `dcp`。
-npm 是一个 JavaScript 的包管理。它是 JavaScript 的运行环境 Node.js 的默认包管理。
+`npm` 是一个 JavaScript 的包管理器。它是 JavaScript 的运行环境 Node.js 的默认包管理器。
```
# npm i -g dat-cp
```
+
### 如何通过 dcp 发送文件?
-在 dcp 命令后跟你想要传输的文件或者文件夹。而且无需注明目标机器的名字。
+在 `dcp` 命令后跟你想要传输的文件或者文件夹。而且无需注明目标机器的名字。
```
# dcp [File Name Which You Want To Transfer]
```
-在你运行 dcp 命令时将会为传送的文件生成一个 dat 归档。一旦执行完成将会在页面底部生成一个公钥。
+在你运行 `dcp` 命令时将会为传送的文件生成一个 dat 归档。一旦执行完成将会在页面底部生成一个公开密钥。(LCTT 译注:此处并非非对称加密中的公钥/私钥对,而是一种公开的密钥,属于对称加密。)
### 如何通过 dcp 接收文件
-在远程服务器上输入公钥即可接收对应的文件或者文件夹。
+在远程服务器上输入公开密钥即可接收对应的文件或者文件夹。
```
# dcp [Public Key]
@@ -117,24 +114,31 @@ npm 是一个 JavaScript 的包管理。它是 JavaScript 的运行环境 Node.j
```
下面这个例子我们将会传输单个文件。
+
![][12]
-上述文件的输出。
+上述文件传输的输出。
+
![][13]
如果你想传输不止一个文件,使用下面的格式。
+
![][14]
-上述文件的输出。
+上述文件传输的输出。
+
![][15]
递归复制文件夹。
+
![][16]
上述文件夹传输的输出。
+
![][17]
这种方式下你只能够下载一次文件或者文件夹,不可以多次下载。这也就意味着一旦你下载了这些文件或者文件夹,这个链接就会立即失效。
+
![][18]
也可以在手册页查看更多的相关选项。
@@ -149,8 +153,8 @@ via: https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-
作者:[Vinoth Kumar][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[dianbanjiu](https://github.com/dianbanjiu)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201902/20190122 Get started with Go For It, a flexible to-do list application.md b/published/201902/20190122 Get started with Go For It, a flexible to-do list application.md
new file mode 100644
index 0000000000..e2602b216c
--- /dev/null
+++ b/published/201902/20190122 Get started with Go For It, a flexible to-do list application.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10557-1.html)
+[#]: subject: (Get started with Go For It, a flexible to-do list application)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-go-for-it)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
+
+开始使用 Go For It 吧,一个灵活的待办事项列表程序
+======
+
+> Go For It,是我们开源工具系列中的第十个工具,它将使你在 2019 年更高效,它在 Todo.txt 系统的基础上构建,以帮助你完成更多工作。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 10 个工具来帮助你在 2019 年更有效率。
+
+### Go For It
+
+有时,人们要高效率需要的不是一个花哨的看板或一组笔记,而是一个简单、直接的待办事项清单。像“把事项添加到列表中,完成后把它勾掉”这样基本的东西。为此,[纯文本 Todo.txt 系统][1]可能是最容易使用的系统之一,几乎所有平台上都可以使用它。
+
+
+
+[Go For It][2] 是一个简单易用的 Todo.txt 图形界面。如果你已经在使用 Todo.txt,它可以直接使用现有文件;如果还没有,它会同时创建待办(todo.txt)和已完成(done.txt)两个文件。它允许拖放任务排序,让用户按照想要执行的顺序组织待办事项。它还支持 [Todo.txt 格式指南][3]中所述的优先级、项目和上下文。而且,只需单击任务列表中的项目或者上下文,就可以按它们过滤任务。
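+
+作为参考,下面是一个虚构的 todo.txt 片段,用来说明这种格式中的优先级(行首括号里的字母)、`+项目` 和 `@上下文` 标记,以及以 `x` 开头的已完成条目:
+
+```
+(A) 给编辑回邮件 +工作 @电脑
+(B) 预约牙医 @电话
+整理书房 +家务
+x 2019-02-01 提交报销单 +工作
+```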
+
+
+
+一开始,Go For It 可能看起来与任何其他 Todo.txt 程序相同,但外观可能是骗人的。将 Go For It 与其他程序真正区分开的功能是它包含一个内置的[番茄工作法][4]计时器。选择要完成的任务,切换到“计时器”选项卡,然后单击“启动”。任务完成后,只需单击“完成”,它将自动重置计时器并选择列表中的下一个任务。你可以暂停并重新启动计时器,也可以单击“跳过”跳转到下一个任务(或中断)。在当前任务剩余 60 秒时,它会发出警告。任务的默认时间设置为 25 分钟,中断的默认时间设置为 5 分钟。你可以在“设置”页面中调整,同时还能调整 Todo.txt 和 done.txt 文件的目录的位置。
+
+
+
+Go For It 的第三个选项卡是“已完成”,允许你查看已完成的任务并在需要时将其清除。能够看到自己已经完成了哪些事情会非常激励人,也是一种了解较长期工作进展的好方法。
+
+
+
+它还有 Todo.txt 的所有其他优点。Go For It 的列表可以被其他使用相同格式的程序访问,包括 [Todo.txt 的原始命令行工具][5]和任何已安装的[附加组件][6]。
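+
+例如,同一个 todo.txt 文件也可以直接用官方的 `todo.sh` 命令行工具来操作(下面只是几个常见用法的示意,具体以该工具的文档为准):
+
+```
+$ todo.sh add "(A) 给编辑回邮件 +工作 @电脑"    # 添加一个高优先级任务
+$ todo.sh list @电脑                           # 按上下文过滤任务
+$ todo.sh do 1                                 # 把第 1 项标记为已完成
+```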
+
+Go For It 旨在成为一个简单的工具来帮助管理你的待办事项列表并完成这些项目。如果你已经使用过 Todo.txt,那么 Go For It 是你的工具箱的绝佳补充,如果你还没有,这是一个尝试最简单、最灵活系统之一的好机会。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-go-for-it
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: http://todotxt.org/
+[2]: http://manuel-kehl.de/projects/go-for-it/
+[3]: https://github.com/todotxt/todo.txt
+[4]: https://en.wikipedia.org/wiki/Pomodoro_Technique
+[5]: https://github.com/todotxt/todo.txt-cli
+[6]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
diff --git a/published/201902/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md b/published/201902/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md
new file mode 100644
index 0000000000..66407b0156
--- /dev/null
+++ b/published/201902/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md
@@ -0,0 +1,393 @@
+[#]: collector: (lujun9972)
+[#]: translator: (luming)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10569-1.html)
+[#]: subject: (How To Copy A File/Folder From A Local System To Remote System In Linux?)
+[#]: via: (https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/)
+[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)
+
+如何在 Linux 上复制文件/文件夹到远程系统?
+======
+
+从一个服务器复制文件到另一个服务器,或者从本地到远程复制是 Linux 管理员的日常任务之一。
+
+我觉得不会有人不同意,因为无论在哪里这都是你的日常操作之一。有很多办法都能处理这个任务,我们试着加以概括。你可以挑一个喜欢的方法。当然,看看其他命令也能在别的地方帮到你。
+
+我已经在自己的环境下测试过所有的命令和脚本了,因此你可以直接用到日常工作当中。
+
+通常大家都倾向于使用 `scp`,因为它是文件复制的原生命令native command之一。但本文所列出的其它命令也很好用,建议你尝试一下。
+
+文件复制可以轻易地用以下四种方法。
+
+- `scp`:在网络上的两个主机之间复制文件,它使用 `ssh` 做文件传输,并使用相同的认证方式,具有相同的安全性。
+- `rsync`:是一个既快速又出众的多功能文件复制工具。它能本地复制、通过远程 shell 在其它主机之间复制,或者与远程的 `rsync` 守护进程daemon 之间复制。
+- `pscp`:是一个并行复制文件到多个主机上的程序。它提供了诸多特性,例如为 `scp` 配置免密传输,保存输出到文件,以及超时控制。
+- `prsync`:也是一个并行复制文件到多个主机上的程序。它也提供了诸多特性,例如为 `ssh` 配置免密传输,保存输出到文件,以及超时控制。
+
+### 方式 1:如何在 Linux 上使用 scp 命令从本地系统向远程系统复制文件/文件夹?
+
+`scp` 命令可以让我们从本地系统复制文件/文件夹到远程系统上。
+
+我会把 `output.txt` 文件从本地系统复制到 `2g.CentOS.com` 远程系统的 `/opt/backup` 文件夹下。
+
+```
+# scp output.txt root@2g.CentOS.com:/opt/backup
+
+output.txt 100% 2468 2.4KB/s 00:00
+```
+
+从本地系统复制两个文件 `output.txt` 和 `passwd-up.sh` 到远程系统 `2g.CentOS.com` 的 `/opt/backup` 文件夹下。
+
+```
+# scp output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
+
+output.txt 100% 2468 2.4KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+```
+
+从本地系统复制 `shell-script` 文件夹到远程系统 `2g.CentOS.com` 的 `/opt/backup` 文件夹下。
+
+这会连同 `shell-script` 文件夹下所有的文件一同复制到 `/opt/backup` 下。
+
+```
+# scp -r /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup/
+
+output.txt 100% 2468 2.4KB/s 00:00
+ovh.sh 100% 76 0.1KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+passwd-up1.sh 100% 7 0.0KB/s 00:00
+server-list.txt 100% 23 0.0KB/s 00:00
+```
+
+### 方式 2:如何在 Linux 上使用 scp 命令和 Shell 脚本复制文件/文件夹到多个远程系统上?
+
+如果你想复制同一个文件到多个远程服务器上,那就需要创建一个如下面那样的小 shell 脚本。
+
+并且,需要将服务器添加进 `server-list.txt` 文件,确保每个服务器单独占一行。
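+
+例如,一个 `server-list.txt` 文件可能像下面这样(主机名取自本文后面的示例,请替换成你自己的服务器):
+
+```
+# cat server-list.txt
+2g.CentOS.com
+2g.Debian.com
+```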
+
+最终,你想要的脚本就像下面这样:
+
+```
+# file-copy.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+ scp /home/daygeek/2g/shell-script/output.txt root@$server:/opt/backup
+done
+```
+
+完成之后,给 `file-copy.sh` 文件设置可执行权限。
+
+```
+# chmod +x file-copy.sh
+```
+
+最后运行脚本完成复制。
+
+```
+# ./file-copy.sh
+
+output.txt 100% 2468 2.4KB/s 00:00
+output.txt 100% 2468 2.4KB/s 00:00
+```
+
+使用下面的脚本可以复制多个文件到多个远程服务器上。
+
+```
+# file-copy.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+ scp /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@$server:/opt/backup
+done
+```
+
+下面结果显示所有的两个文件都复制到两个服务器上。
+
+```
+# ./file-cp.sh
+
+output.txt 100% 2468 2.4KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+output.txt 100% 2468 2.4KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+```
+
+使用下面的脚本递归地复制文件夹到多个远程服务器上。
+
+```
+# file-copy.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+ scp -r /home/daygeek/2g/shell-script/ root@$server:/opt/backup
+done
+```
+
+上述脚本的输出。
+
+```
+# ./file-cp.sh
+
+output.txt 100% 2468 2.4KB/s 00:00
+ovh.sh 100% 76 0.1KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+passwd-up1.sh 100% 7 0.0KB/s 00:00
+server-list.txt 100% 23 0.0KB/s 00:00
+
+output.txt 100% 2468 2.4KB/s 00:00
+ovh.sh 100% 76 0.1KB/s 00:00
+passwd-up.sh 100% 877 0.9KB/s 00:00
+passwd-up1.sh 100% 7 0.0KB/s 00:00
+server-list.txt 100% 23 0.0KB/s 00:00
+```
+
+### 方式 3:如何在 Linux 上使用 pscp 命令复制文件/文件夹到多个远程系统上?
+
+`pscp` 命令可以直接让我们复制文件到多个远程服务器上。
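+
+`pscp.pssh` 命令由 pssh 软件包提供,如果系统里还没有,可以先安装它(下面的包名以常见发行版为例,在 Debian/Ubuntu 上安装后对应的命令可能叫 `parallel-scp`):
+
+```
+# CentOS/RHEL(需要先启用 EPEL 仓库)
+$ sudo yum install epel-release
+$ sudo yum install pssh
+
+# Debian/Ubuntu
+$ sudo apt install pssh
+```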
+
+使用下面的 `pscp` 命令复制单个文件到远程服务器。
+
+```
+# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt /opt/backup
+
+[1] 18:46:11 [SUCCESS] 2g.CentOS.com
+```
+
+使用下面的 `pscp` 命令复制多个文件到远程服务器。
+
+```
+# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt ovh.sh /opt/backup
+
+[1] 18:47:48 [SUCCESS] 2g.CentOS.com
+```
+
+使用下面的 `pscp` 命令递归地复制整个文件夹到远程服务器。
+
+```
+# pscp.pssh -H 2g.CentOS.com -r /home/daygeek/2g/shell-script/ /opt/backup
+
+[1] 18:48:46 [SUCCESS] 2g.CentOS.com
+```
+
+使用下面的 `pscp` 命令使用下面的命令复制单个文件到多个远程服务器。
+
+```
+# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt /opt/backup
+
+[1] 18:49:48 [SUCCESS] 2g.CentOS.com
+[2] 18:49:48 [SUCCESS] 2g.Debian.com
+```
+
+使用下面的 `pscp` 命令复制多个文件到多个远程服务器。
+
+```
+# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt passwd-up.sh /opt/backup
+
+[1] 18:50:30 [SUCCESS] 2g.Debian.com
+[2] 18:50:30 [SUCCESS] 2g.CentOS.com
+```
+
+使用下面的命令递归地复制文件夹到多个远程服务器。
+
+```
+# pscp.pssh -h server-list.txt -r /home/daygeek/2g/shell-script/ /opt/backup
+
+[1] 18:51:31 [SUCCESS] 2g.Debian.com
+[2] 18:51:31 [SUCCESS] 2g.CentOS.com
+```
+
+### 方式 4:如何在 Linux 上使用 rsync 命令复制文件/文件夹到多个远程系统上?
+
+`rsync` 是一个既快速又出众的多功能文件复制工具。它能本地复制、通过远程 shell 在其它主机之间复制,或者在远程 `rsync` 守护进程daemon 之间复制。
+
+使用下面的 `rsync` 命令复制单个文件到远程服务器。
+
+```
+# rsync -avz /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
+
+sending incremental file list
+output.txt
+
+sent 598 bytes received 31 bytes 1258.00 bytes/sec
+total size is 2468 speedup is 3.92
+```
+
+使用下面的 `rsync` 命令复制多个文件到远程服务器。
+
+```
+# rsync -avz /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
+
+sending incremental file list
+output.txt
+passwd-up.sh
+
+sent 737 bytes received 50 bytes 1574.00 bytes/sec
+total size is 2537 speedup is 3.22
+```
+
+使用下面的 `rsync` 命令通过 `ssh` 复制单个文件到远程服务器。
+
+```
+# rsync -avzhe ssh /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
+
+sending incremental file list
+output.txt
+
+sent 598 bytes received 31 bytes 419.33 bytes/sec
+total size is 2.47K speedup is 3.92
+```
+
+使用下面的 `rsync` 命令通过 `ssh` 递归地复制文件夹到远程服务器。由于源路径以 `/` 结尾,这种方式只复制文件夹里的内容,而不会在远端创建 `shell-script` 文件夹本身。
+
+```
+# rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup
+
+sending incremental file list
+./
+output.txt
+ovh.sh
+passwd-up.sh
+passwd-up1.sh
+server-list.txt
+
+sent 3.85K bytes received 281 bytes 8.26K bytes/sec
+total size is 9.12K speedup is 2.21
+```
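+
+如果希望在远端保留 `shell-script` 目录本身,可以去掉源路径末尾的 `/`,这样内容会被复制到远程的 `/opt/backup/shell-script/` 下(以下命令仅作演示,输出从略):
+
+```
+# rsync -avzhe ssh /home/daygeek/2g/shell-script root@2g.CentOS.com:/opt/backup
+```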
+
+### 方式 5:如何在 Linux 上使用 rsync 命令和 Shell 脚本复制文件/文件夹到多个远程系统上?
+
+如果你想用 `rsync` 复制同样的文件或文件夹到多个远程服务器上,那也需要创建一个如下面那样的小 shell 脚本。
+
+```
+# file-copy.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+ rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@$server:/opt/backup
+done
+```
+
+上面脚本的输出。
+
+```
+# ./file-copy.sh
+
+sending incremental file list
+./
+output.txt
+ovh.sh
+passwd-up.sh
+passwd-up1.sh
+server-list.txt
+
+sent 3.86K bytes received 281 bytes 8.28K bytes/sec
+total size is 9.13K speedup is 2.21
+
+sending incremental file list
+./
+output.txt
+ovh.sh
+passwd-up.sh
+passwd-up1.sh
+server-list.txt
+
+sent 3.86K bytes received 281 bytes 2.76K bytes/sec
+total size is 9.13K speedup is 2.21
+```
+
+### 方式 6:如何在 Linux 上使用 scp 命令和 Shell 脚本从本地系统向多个远程系统复制文件/文件夹?
+
+在上面两个 shell 脚本中,我们需要事先指定好文件和文件夹的路径,这儿我做了些小修改,让脚本可以接收文件或文件夹作为输入参数。当你每天需要多次执行复制时,这将会非常有用。
+
+```
+# file-copy.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+scp -r $1 root@$server:/opt/backup
+done
+```
+
+输入文件名并运行脚本。
+
+```
+# ./file-copy.sh output1.txt
+
+output1.txt 100% 3558 3.5KB/s 00:00
+output1.txt 100% 3558 3.5KB/s 00:00
+```
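+
+由于脚本中使用的是 `scp -r`,同样可以把文件夹作为参数传给它(以下命令仅作演示,输出从略):
+
+```
+# ./file-copy.sh /home/daygeek/2g/shell-script/
+```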
+
+### 方式 7:如何在 Linux 系统上用非标准端口复制文件/文件夹到远程系统?
+
+如果你想使用非标准端口,使用下面的 shell 脚本复制文件或文件夹。
+
+如果你使用了非标准(Non-Standard)端口,确保像下面的 `scp` 命令那样用 `-P` 指定好端口号。
+
+```
+# file-copy-scp.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+scp -P 2222 -r $1 root@$server:/opt/backup
+done
+```
+
+运行脚本,输入文件名。
+
+```
+# ./file-copy-scp.sh ovh.sh
+
+ovh.sh 100% 3558 3.5KB/s 00:00
+ovh.sh 100% 3558 3.5KB/s 00:00
+```
+
+如果你使用了非标准(Non-Standard)端口,确保像下面的 `rsync` 命令那样通过 `-e 'ssh -p 2222'` 指定好端口号。
+
+```
+# file-copy-rsync.sh
+
+#!/bin/sh
+for server in `more server-list.txt`
+do
+rsync -avzhe 'ssh -p 2222' $1 root@$server:/opt/backup
+done
+```
+
+运行脚本,输入文件名。
+
+```
+# ./file-copy-rsync.sh passwd-up.sh
+sending incremental file list
+passwd-up.sh
+
+sent 238 bytes received 35 bytes 26.00 bytes/sec
+total size is 159 speedup is 0.58
+
+sending incremental file list
+passwd-up.sh
+
+sent 238 bytes received 35 bytes 26.00 bytes/sec
+total size is 159 speedup is 0.58
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[LuuMing](https://github.com/LuuMing)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
diff --git a/published/201902/20190123 Book Review- Fundamentals of Linux.md b/published/201902/20190123 Book Review- Fundamentals of Linux.md
new file mode 100644
index 0000000000..bdde86d16e
--- /dev/null
+++ b/published/201902/20190123 Book Review- Fundamentals of Linux.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: (mySoul8012)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10565-1.html)
+[#]: subject: (Book Review: Fundamentals of Linux)
+[#]: via: (https://itsfoss.com/fundamentals-of-linux-book-review)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+书评:《Linux 基础》
+======
+
+介绍 Linux 的基础知识以及它的工作原理的书很多,今天,我们将会点评这样一本书。这次讨论的主题为 Oliver Pelz 所写的《[Linux 基础][1]》(Fundamentals of Linux),由 [PacktPub][2] 出版。
+
+[Oliver Pelz][3] 是一位拥有超过十年软件开发经验的开发者和系统管理员,拥有生物信息学学位证书。
+
+### 《Linux 基础》
+
+![Fundamental of Linux books][4]
+
+正如可以从书名中猜到那样,《Linux 基础》的目标是为读者打下一个从了解 Linux 到学习 Linux 命令行的坚实基础。这本书一共有两百多页,因此它专注于教给用户日常任务和解决经常遇到的问题。本书是为想要成为 Linux 管理员的读者而写的。
+
+第一章首先概述了虚拟化。本书作者指导读者如何在 [VirtualBox][6] 中创建 [CentOS][5] 实例、如何克隆实例、如何使用快照,同时你也会学习到如何通过 SSH 连接到虚拟机。
+
+第二章介绍了 Linux 命令行的基础知识,包括 shell 通配符、shell 展开、如何使用包含空格和特殊字符的文件名,如何获取命令手册的帮助页面,如何使用 `sed` 和 `awk` 这两个命令,以及如何浏览 Linux 的文件系统。
+
+第三章更深入地介绍了 Linux 文件系统。你将了解 Linux 中的文件是如何链接的,以及如何搜索它们。你还将对用户、组以及文件权限有大概的了解。由于本章重点介绍如何与文件进行交互,因此还会介绍如何从命令行中读取文本文件,以及初步了解如何使用 vim 编辑器。
+
+第四章重点介绍了如何使用命令行,涵盖了 `cat`、`sort`、`awk`、`tee`、`tar`、`rsync`、`nmap`、`htop` 等重要命令。你还将了解到进程,以及它们如何彼此通信。这一章还介绍了 Bash shell 脚本编程。
+
+第五章也是本书的最后一章,介绍了网络的概念以及其他一些高级命令。作者讨论了 Linux 是如何处理网络的,并提供了使用多个虚拟机的示例,同时还介绍了如何安装新的程序以及如何设置防火墙。
+
+### 关于这本书的思考
+
+《Linux 基础》只有五章、两百来页,可能看起来有些短,但是涵盖了相当多的信息,你将从中获得使用命令行所需要的一切知识。
+
+使用本书的时候需要注意一点:本书专注于命令行,没有任何关于如何使用图形用户界面的教程。这是因为 Linux 中有太多不同的桌面环境,以及很多类似的系统应用,因此很难编写一本可以涵盖所有变种的书;此外,部分原因也在于本书面向的用户群体是潜在的 Linux 管理员。
+
+当我看到作者使用 CentOS 来教授 Linux 的时候有点惊讶,我原本以为他会使用更为常见的 Linux 发行版,例如 Ubuntu、Debian 或者 Fedora。不过,CentOS 是为服务器设计的发行版,随着时间的推移变化很小,因此能够为学习 Linux 的基础知识打下一个非常坚实的基础。
+
+我自己使用 Linux 已经五年了,大部分时间都在使用桌面版本的 Linux。我有时也会使用命令行,但并没有在那上面花太多时间,本书中涉及到的很多操作我以前都是用鼠标完成的。现在,我也知道了如何通过终端做到同样的事情。这不会改变我完成任务的方式,但是有助于我理解幕后发生的事情。
+
+如果你刚刚开始使用 Linux,或者只是计划使用,我不会推荐你阅读这本书,这可能有点绝对化;但是如果你已经在 Linux 上花了一些时间,或者可以快速掌握某种技术语言,那么这本书很适合你。
+
+如果你认为本书适合你的学习需求,你可以从以下链接获取到该书:
+
+- [下载《Linux 基础》](https://www.packtpub.com/networking-and-servers/fundamentals-linux)
+
+我们将在未来几个月内尝试点评更多 Linux 书籍,敬请关注我们。
+
+你最喜欢的关于 Linux 的入门书籍是什么?请在下面的评论中告诉我们。
+
+如果你发现这篇文章很有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][8] 上分享。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/fundamentals-of-linux-book-review
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[mySoul8012](https://github.com/mySoul8012)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://www.packtpub.com/networking-and-servers/fundamentals-linux
+[2]: https://www.packtpub.com/
+[3]: http://www.oliverpelz.de/index.html
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/fundamentals-of-linux-book-review.jpeg?resize=800%2C450&ssl=1
+[5]: https://centos.org/
+[6]: https://www.virtualbox.org/
+[7]: https://www.centos.org/
+[8]: http://reddit.com/r/linuxusersgroup
diff --git a/published/20190123 Commands to help you monitor activity on your Linux server.md b/published/201902/20190123 Commands to help you monitor activity on your Linux server.md
similarity index 100%
rename from published/20190123 Commands to help you monitor activity on your Linux server.md
rename to published/201902/20190123 Commands to help you monitor activity on your Linux server.md
diff --git a/published/201902/20190124 Get started with LogicalDOC, an open source document management system.md b/published/201902/20190124 Get started with LogicalDOC, an open source document management system.md
new file mode 100644
index 0000000000..35e90d4839
--- /dev/null
+++ b/published/201902/20190124 Get started with LogicalDOC, an open source document management system.md
@@ -0,0 +1,63 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10561-1.html)
+[#]: subject: (Get started with LogicalDOC, an open source document management system)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-logicaldoc)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
+
+开始使用 LogicalDOC 吧,一个开源文档管理系统
+======
+
+> 使用 LogicalDOC 更好地跟踪文档版本,这是我们开源工具系列中的第 12 个工具,它将使你在 2019 年更高效。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 12 个工具来帮助你在 2019 年更有效率。
+
+### LogicalDOC
+
+高效部分表现在能够在你需要时找到你所需的东西。我们都看到过塞满名称类似的文件的目录,这是每次更改文档时为了跟踪所有版本而重命名这些文件而导致的。例如,我的妻子是一名作家,她在将文档发送给审稿人之前,她经常使用新名称保存文档修订版。
+
+
+
+程序员对此一个自然的解决方案是 Git 或者其他版本控制器,但这个不适用于文档作者,因为用于代码的系统通常不能很好地兼容商业文本编辑器使用的格式。之前有人说,“改变格式就行”,[这不是适合每个人的选择][1]。同样,许多版本控制工具对于非技术人员来说并不是非常友好。在大型组织中,有一些工具可以解决此问题,但它们还需要大型组织的资源来运行、管理和支持它们。
+
+
+
+[LogicalDOC CE][2] 是为解决此问题而编写的开源文档管理系统。它允许用户签入、签出、查看版本、搜索和锁定文档,并保留版本历史记录,类似于程序员使用的版本控制工具。
+
+LogicalDOC 可在 Linux、MacOS 和 Windows 上[安装][3],使用基于 Java 的安装程序。在安装时,系统将提示你提供数据库存储位置,并提供只在本地文件存储的选项。你将获得访问服务器的 URL 和默认用户名和密码,以及保存用于自动安装脚本选项。
+
+登录后,LogicalDOC 的默认页面会列出你已标记、签出的文档以及有关它们的最新说明。切换到“文档”选项卡将显示你有权访问的文件。你可以在界面中选择文件或使用拖放来上传文档。如果你上传 ZIP 文件,LogicalDOC 会解压它,并将其中的文件添加到仓库中。
+
+
+
+右键单击文件将显示一个菜单选项,包括检出文件、锁定文件以防止更改,以及执行大量其他操作。签出文件会将其下载到本地计算机以便编辑。在重新签入之前,其他任何人都无法修改签出的文件。当重新签入文件时(使用相同的菜单),用户可以向版本添加标签,并且需要备注对其执行的操作。
+
+
+
+查看早期版本只需在“版本”页面下载就行。对于某些第三方服务,它还有导入和导出选项,内置 [Dropbox][4] 支持。
+
+文档管理不仅仅是能够负担得起昂贵解决方案的大公司才能有的。LogicalDOC 可帮助你追踪文档的版本历史,并为难以管理的文档提供了安全的仓库。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-logicaldoc
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: http://www.antipope.org/charlie/blog-static/2013/10/why-microsoft-word-must-die.html
+[2]: https://www.logicaldoc.com/download-logicaldoc-community
+[3]: https://docs.logicaldoc.com/en/installation
+[4]: https://dropbox.com
diff --git a/published/20190124 Understanding Angle Brackets in Bash.md b/published/201902/20190124 Understanding Angle Brackets in Bash.md
similarity index 100%
rename from published/20190124 Understanding Angle Brackets in Bash.md
rename to published/201902/20190124 Understanding Angle Brackets in Bash.md
diff --git a/published/201902/20190125 PyGame Zero- Games without boilerplate.md b/published/201902/20190125 PyGame Zero- Games without boilerplate.md
new file mode 100644
index 0000000000..523c7e4561
--- /dev/null
+++ b/published/201902/20190125 PyGame Zero- Games without boilerplate.md
@@ -0,0 +1,101 @@
+[#]: collector: (lujun9972)
+[#]: translator: (bestony)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10532-1.html)
+[#]: subject: (PyGame Zero: Games without boilerplate)
+[#]: via: (https://opensource.com/article/19/1/pygame-zero)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+PyGame Zero: 无需模板的游戏开发
+======
+
+> 在你的游戏开发过程中有了 PyGame Zero,和枯燥的模板说再见吧。
+
+
+
+Python 是一个很好的入门级编程语言,而游戏是一个很好的入门项目:它们是可视化的、自驱动的,并且可以很愉快地与朋友和家人分享。不过,绝大多数用 Python 写就的游戏库,比如 [PyGame][1],会让初学者感到困扰:只要忘记某个微小的细节,就很容易导致什么都渲染不出来。
+
+在理解所有部分的作用之前,他们会将其中的许多部分都视为“无意识的模板文件”——需要复制和粘贴到程序中才能使其工作的神奇段落。
+
+[PyGame Zero][2] 试图通过在 PyGame 上放置一个抽象层来弥合这一差距,因此它字面上并不需要模板。
+
+我们在说的“字面”,就是在指字面。
+
+这是一个合格的 PyGame Zero 文件:
+
+```
+# This comment is here for clarity reasons
+```
+
+我们可以将它放在一个 `game.py` 文件里,并运行:
+
+```
+$ pgzrun game.py
+```
+
+这将会展示一个窗口,并运行一个可以通过关闭窗口或按下 `CTRL-C` 中断的游戏循环。
+
+遗憾的是,这将是一场无聊的游戏。什么都没发生。
+
+为了让它更有趣一点,我们可以画一个不同的背景:
+
+```
+def draw():
+ screen.fill((255, 0, 0))
+```
+
+这将会把背景色从黑色换为红色。但是这仍是一个很无聊的游戏,什么都没发生。我们可以让它变的更有意思一点:
+
+```
+colors = [0, 0, 0]
+
+def draw():
+ screen.fill(tuple(colors))
+
+def update():
+ colors[0] = (colors[0] + 1) % 256
+```
+
+这将会让窗口从黑色开始,逐渐变亮,直到变为亮红色,再返回黑色,一遍一遍循环。
+
+`update` 函数更新了参数的值,而 `draw` 基于这些参数渲染这个游戏。
+
+即使是这样,这里也没有任何方式给玩家与这个游戏的交互的方式。让我们试试其他一些事情:
+
+```
+colors = [0, 0, 0]
+
+def draw():
+ screen.fill(tuple(colors))
+
+def update():
+ colors[0] = (colors[0] + 1) % 256
+
+def on_key_down(key, mod, unicode):
+ colors[1] = (colors[1] + 1) % 256
+```
+
+现在,按下按键来提升亮度。
+
+这些包括游戏循环的三个重要部分:响应用户输入,更新参数和重新渲染屏幕。
+
+PyGame Zero 提供了更多功能,包括绘制精灵图和播放声音片段的功能。
+
+试一试,看看你能想出什么类型的游戏!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/pygame-zero
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[bestony](https://github.com/bestony)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://www.pygame.org/news
+[2]: https://pygame-zero.readthedocs.io/en/stable/
diff --git a/published/201902/20190125 Top 5 Linux Distributions for Development in 2019.md b/published/201902/20190125 Top 5 Linux Distributions for Development in 2019.md
new file mode 100644
index 0000000000..ae358bfbe0
--- /dev/null
+++ b/published/201902/20190125 Top 5 Linux Distributions for Development in 2019.md
@@ -0,0 +1,138 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10534-1.html)
+[#]: subject: (Top 5 Linux Distributions for Development in 2019)
+[#]: via: (https://www.linux.com/blog/2019/1/top-5-linux-distributions-development-2019)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+5 个用于开发工作的 Linux 发行版
+======
+
+> 这五个发行版用于开发工作将不会让你失望。
+
+
+
+Linux 上最受欢迎的任务之一肯定是开发。理由很充分:业务依赖于 Linux。没有 Linux,技术根本无法满足当今不断发展的世界的需求。因此,开发人员不断努力改善他们的工作环境。而进行此类改善的一种方法就是拥有合适的平台。值得庆幸的是,这就是 Linux,所以你总是有很多选择。
+
+但有时候,太多的选择本身就是一个问题。哪种发行版适合你的开发需求?当然,这取决于你正在开发的工作,但某些发行版更适合作为你的工作任务的基础。我将重点介绍我认为 2019 年最适合开发人员的五个发行版。
+
+### Ubuntu
+
+无需赘言。虽然 Linux Mint 的忠实用户无疑是一个非常忠诚的群体(这是有充分的理由的,他们选择的发行版很棒),但 Ubuntu Linux 在这里更被认可。为什么?因为有像 [AWS][1] 这样的云服务商存在,Ubuntu 成了部署最多的服务器操作系统之一。这意味着在 Ubuntu 桌面发行版上进行开发可以更轻松地转换为 Ubuntu Server。而且因为 Ubuntu 使得开发、使用和部署容器非常容易,所以你想要使用这个平台是完全合理的。而 Ubuntu 与其包含的 Snap 软件包相结合,使得这个 Canonical(Ubuntu 发行版背后的公司)的操作系统如虎添翼。
+
+但这不仅是你可以用 Ubuntu 做什么,而是你可以轻松做到。几乎对于所有的任务,Ubuntu 都是一个非常易用的发行版。而且因为 Ubuntu 如此受欢迎,所以你可以从 Ubuntu “软件” 应用的图形界面里轻松安装你想要使用的每个工具和 IDE(图 1)。
+
+![Ubuntu][3]
+
+*图 1:可以在 Ubuntu “软件”工具里面找到开发者工具。*
+
+如果你正在寻求易用、易于迁移,以及大量的工具,那么将 Ubuntu 作为开发平台就不会有错。
+
+### openSUSE
+
+我将 openSUSE 添加到此列表中有一个非常具体的原因。它不仅是一个出色的桌面发行版,它还是市场上最好的滚动发行版之一。因此,如果你希望用最新的软件开发、发布最新的软件,[openSUSE Tumbleweed][5] 应该是你的首选之一。如果你想使用最喜欢的 IDE 的最新版本,如果你总是希望确保使用最新的库和工具包进行开发,那么 Tumbleweed 就是你的平台。
+
+但 openSUSE 不仅提供滚动发布版本。如果你更愿意使用标准发行版,那么 [openSUSE Leap][6] 就是你想要的。
+
+当然,它不仅有标准版或滚动版,openSUSE 平台还有一个名为 [Kubic][7] 的 Kubernetes 特定版本,该版本基于 openSUSE MicroOS 上的 Kubernetes。但即使你没有为 Kubernetes 进行开发,你也会发现许多软件和工具可供使用。
+
+openSUSE 还提供了选择桌面环境的能力,或者你也可以选择通用桌面或服务器(图 2)。
+
+![openSUSE][9]
+
+*图 2: 正在安装 openSUSE Tumbleweed。*
+
+### Fedora
+
+使用 Fedora 作为开发平台才有意义。为什么?这个发行版本身似乎是面向开发人员的。通过定期的六个月发布周期,开发人员可以确保他们不会一直使用过时的软件。当你需要最新的工具和库时,这很重要。如果你正在开发企业级业务,Fedora 是一个理想的平台,因为它是红帽企业 Linux(RHEL)的上游。这意味着向 RHEL 的过渡应该是无痛的。这一点很重要,特别是如果你希望将你的项目带到一个更大的市场(一个比以桌面为中心的目标更深的领域)。
+
+Fedora 还提供了你将体验到的最佳 GNOME 体验之一(图 3)。换言之,这是非常稳定和快速的桌面。
+
+![GNOME][11]
+
+*图 3:Fedora 上的 GNOME 桌面。*
+
+但是如果 GNOME 不是你的菜,你还可以选择安装一个 [Fedora 花样版][12](包括 KDE、XFCE、LXQT、Mate-Compiz、Cinnamon、LXDE 和 SOAS 等桌面环境)。
+
+### Pop!_OS
+
+如果这个列表中我没有包括 [System76][13] 平台专门为他们的硬件定制的操作系统(虽然它也在其他硬件上运行良好),那我算是失职了。为什么我要包含这样的发行版,尤其是它还并未远离其所基于的 Ubuntu 平台?主要是因为如果你计划从 System76 购买台式机或笔记本电脑,那它就是你想要的发行版。但是你为什么要这样做呢(特别是考虑到 Linux 几乎适用于所有现成的硬件)?因为 System76 销售的出色硬件。随着他们的 Thelio 桌面的发布,这是你可以使用的市场上最强大的台式计算机之一。如果你正在努力开发大型应用程序(特别是那些非常依赖于非常大的数据库或需要大量处理能力进行编译的应用程序),为什么不用最好的计算机呢?而且由于 Pop!_OS 完全适用于 System76 硬件,因此这是一个明智的选择。
+
+由于 Pop!_OS 基于 Ubuntu,因此你可以轻松获得其所基于的 Ubuntu 可用的所有工具(图 4)。
+
+![Pop!_OS][15]
+
+*图 4:运行在 Pop!_OS 上的 Anjunta IDE*
+
+Pop!_OS 也会默认加密驱动器,因此你可以放心你的工作可以避免窥探(如果你的硬件落入坏人之手)。
+
+### Manjaro
+
+对于那些喜欢在 Arch Linux 上开发,但不想经历安装和使用 Arch Linux 的所有环节的人来说,那选择就是 Manjaro。Manjaro 可以轻松地启动和运行一个基于 Arch Linux 的发行版(就像安装和使用 Ubuntu 一样简单)。
+
+但是 Manjaro 对开发人员友好的原因(除了享受 Arch 式好处)是你可以下载好多种不同口味的桌面。从[Manjaro 下载页面][16] 中,你可以获得以下口味:
+
+ * GNOME
+ * XFCE
+ * KDE
+ * OpenBox
+ * Cinnamon
+ * I3
+ * Awesome
+ * Budgie
+ * Mate
+ * Xfce 开发者预览版
+ * KDE 开发者预览版
+ * GNOME 开发者预览版
+ * Architect
+ * Deepin
+
+值得注意的是它的开发者版本(面向测试人员和开发人员),Architect 版本(适用于想要从头开始构建 Manjaro 的用户)和 Awesome 版本(图 5,适用于开发人员处理日常工作的版本)。使用 Manjaro 的一个警告是,与任何滚动版本一样,你今天开发的代码可能明天无法运行。因此,你需要具备一定程度的敏捷性。当然,如果你没有为 Manjaro(或 Arch)做开发,并且你正在进行工作更多是通用的(或 Web)开发,那么只有当你使用的工具被更新了且不再适合你时,才会影响你。然而,这种情况发生的可能性很小。和大多数 Linux 发行版一样,你会发现 Manjaro 有大量的开发工具。
+
+![Manjaro][18]
+
+*图 5:Manjaro Awesome 版对于开发者来说很棒。*
+
+Manjaro 还支持 AUR(Arch User Repository —— Arch 用户的社区驱动软件库),其中包括最先进的软件和库,以及 [Unity Editor][19] 或 yEd 等专有应用程序。但是,有个关于 AUR 的警告:AUR 包含的软件中被怀疑发现了恶意软件。因此,如果你选择使用 AUR,请谨慎操作,风险自负。
+
+### 其实任何 Linux 都可以
+
+说实话,如果你是开发人员,几乎任何 Linux 发行版都可以工作。如果从命令行执行大部分开发,则尤其如此。但是如果你喜欢在可靠的桌面上运行一个好的图形界面程序,试试这些发行版中的一个,它们不会令人失望。
+
+通过 Linux 基金会和 edX 的免费[“Linux 简介”][20]课程了解有关 Linux 的更多信息。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/1/top-5-linux-distributions-development-2019
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://aws.amazon.com/
+[2]: https://www.linux.com/files/images/dev1jpg
+[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_1.jpg?itok=7QJQWBKi (Ubuntu)
+[4]: https://www.linux.com/licenses/category/used-permission
+[5]: https://en.opensuse.org/Portal:Tumbleweed
+[6]: https://en.opensuse.org/Portal:Leap
+[7]: https://software.opensuse.org/distributions/tumbleweed
+[8]: /files/images/dev2jpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_2.jpg?itok=1GJmpr1t (openSUSE)
+[10]: /files/images/dev3jpg
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_3.jpg?itok=_6Ki4EOo (GNOME)
+[12]: https://spins.fedoraproject.org/
+[13]: https://system76.com/
+[14]: /files/images/dev4jpg
+[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_4.jpg?itok=nNG2Ax24 (Pop!_OS)
+[16]: https://manjaro.org/download/
+[17]: /files/images/dev5jpg
+[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_5.jpg?itok=RGfF2UEi (Manjaro)
+[19]: https://unity3d.com/unity/editor
+[20]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/translated/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md b/published/201902/20190126 Get started with Tint2, an open source taskbar for Linux.md
similarity index 81%
rename from translated/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md
rename to published/201902/20190126 Get started with Tint2, an open source taskbar for Linux.md
index ef58de9fe5..1efd095a38 100644
--- a/translated/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md
+++ b/published/201902/20190126 Get started with Tint2, an open source taskbar for Linux.md
@@ -1,16 +1,16 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10511-1.html)
[#]: subject: (Get started with Tint2, an open source taskbar for Linux)
[#]: via: (https://opensource.com/article/19/1/productivity-tool-tint2)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-开始使用 Tint2,一款 Linux 中的开源任务栏
+开始使用 Tint2 吧,一款 Linux 中的开源任务栏
======
-Tint2 是我们在开源工具系列中的第 14 个工具,它将在 2019 年提高你的工作效率,能在任何窗口管理器中提供一致的用户体验。
+> Tint2 是我们在开源工具系列中的第 14 个工具,它将在 2019 年提高你的工作效率,能在任何窗口管理器中提供一致的用户体验。

@@ -20,7 +20,7 @@ Tint2 是我们在开源工具系列中的第 14 个工具,它将在 2019 年
### Tint2
-让我提高工作效率的最佳方法之一是使用尽可能不让我分心的干净界面。作为 Linux 用户,这意味着使用一种最小的窗口管理器,如 [Openbox][1]、[i3][2] 或 [Awesome][3]。它们每种都有让我更有效率的自定义选项。但让我失望的一件事是,它们都没有一致的配置,所以我不得不经常重新调整我的窗口管理器。
+让我提高工作效率的最佳方法之一是使用尽可能不让我分心的干净界面。作为 Linux 用户,这意味着使用一种极简的窗口管理器,如 [Openbox][1]、[i3][2] 或 [Awesome][3]。它们每种都有让我更有效率的自定义选项。但让我失望的一件事是,它们都没有一致的配置,所以我不得不经常重新调整我的窗口管理器。

@@ -30,7 +30,7 @@ Tint2 是我们在开源工具系列中的第 14 个工具,它将在 2019 年

-启动配置工具能让你选择主题并自定义屏幕的顶部、底部和侧边栏。我建议从最接近你想要的主题开始,然后从那里进行自定义。
+启动该配置工具能让你选择主题并自定义屏幕的顶部、底部和侧边栏。我建议从最接近你想要的主题开始,然后从那里进行自定义。

@@ -47,7 +47,7 @@ via: https://opensource.com/article/19/1/productivity-tool-tint2
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -56,4 +56,4 @@ via: https://opensource.com/article/19/1/productivity-tool-tint2
[1]: http://openbox.org/wiki/Main_Page
[2]: https://i3wm.org/
[3]: https://awesomewm.org/
-[4]: https://gitlab.com/o9000/tint2
\ No newline at end of file
+[4]: https://gitlab.com/o9000/tint2
diff --git a/translated/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md b/published/201902/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md
similarity index 60%
rename from translated/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md
rename to published/201902/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md
index cce0f28165..aa42bef0ea 100644
--- a/translated/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md
+++ b/published/201902/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md
@@ -1,15 +1,17 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10525-1.html)
[#]: subject: (Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops)
[#]: via: (https://opensource.com/article/19/1/productivity-tool-edex-ui)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-开始使用 eDEX-UI,一款受《创:战纪》影响的平板电脑和台式机终端程序
+开始使用 eDEX-UI 吧,一款受《电子世界争霸战》影响的终端程序
======
-使用 eDEX-UI 让你的工作更有趣,这是我们开源工具系列中的第 15 个工具,它将使你在 2019 年更高效。
+
+> 使用 eDEX-UI 让你的工作更有趣,这是我们开源工具系列中的第 15 个工具,它将使你在 2019 年更高效。
+

每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
@@ -18,25 +20,25 @@
### eDEX-UI
-当[《创:战纪》][1]上映时我才 11是岁。我不能否认,尽管这部电影充满幻想,但它对我后来的职业选择产生了影响。
+当[《电子世界争霸战》][1]上映时我才 11 岁。我不能否认,尽管这部电影充满幻想,但它对我后来的职业选择产生了影响。

-[eDEX-UI][2] 是一款专为平板电脑和台式机设计的跨平台终端程序,它受到《创:战纪》用户界面的启发。它在选项卡式界面中有五个终端,因此可以轻松地在任务之间切换,以及显示有用的系统信息。
+[eDEX-UI][2] 是一款专为平板电脑和台式机设计的跨平台终端程序,它的用户界面受到《电子世界争霸战》的启发。它在选项卡式界面中有五个终端,因此可以轻松地在任务之间切换,以及显示有用的系统信息。
在启动时,eDEX-UI 会启动一系列的东西,其中包含它所基于的 ElectronJS 系统的信息。启动后,eDEX-UI 会显示系统信息、文件浏览器、键盘(用于平板电脑)和主终端选项卡。其他四个选项卡(被标记为 EMPTY)没有加载任何内容,并且当你单击它时将启动一个 shell。eDEX-UI 中的默认 shell 是 Bash(如果在 Windows 上,则可能需要将其更改为 PowerShell 或 cmd.exe)。

-更改文件浏览器中的目录将更改活动终端中的目录,反之亦然。文件浏览器可以执行你期望的所有操作,包括在单击文件时打开关联的应用。唯一的例外是 eDEX-UI 的 settings.json 文件(默认是 .config/eDEX-UI),它会打开配置编辑器。这允许你为终端设置 shell 命令、更改主题以及修改用户界面的其他几个设置。主题也保存在配置目录中,因为它们也是 JSON 文件,所以创建自定义主题非常简单。
+更改文件浏览器中的目录也将更改活动终端中的目录,反之亦然。文件浏览器可以执行你期望的所有操作,包括在单击文件时打开关联的应用。唯一的例外是 eDEX-UI 的 `settings.json` 文件(默认在 `.config/eDEX-UI`),它会打开配置编辑器。这允许你为终端设置 shell 命令、更改主题以及修改用户界面的其他几个设置。主题也保存在配置目录中,因为它们也是 JSON 文件,所以创建自定义主题非常简单。

-eDEX-UI 允许你使用完全仿真运行五个终端。默认终端类型是 xterm-color,这意味着它支持全色彩。需要注意的一点是,输入时键盘会亮起,因此如果你在平板电脑上使用 eDEX-UI,键盘可能会在人们看见屏幕的环境中带来安全风险。因此最好在这些设备上使用没有键盘的主题,尽管在打字时看起来确实很酷。
+eDEX-UI 允许你使用完全仿真运行五个终端。默认终端类型是 xterm-color,这意味着它支持全色彩。需要注意的一点是,输入时键会亮起,因此如果你在平板电脑上使用 eDEX-UI,键盘可能会在人们看见屏幕的环境中带来安全风险。因此最好在这些设备上使用没有键盘的主题,尽管在打字时看起来确实很酷。

-虽然 eDEX-UI 仅支持五个终端窗口,但这对我来说已经足够了。在平板电脑上,eDEX-UI 给了我网络空间的感觉而不会影响我的效率。在桌面上,eDEX-UI 支持所有功能,并让我在我的同事面前显得很酷。
+虽然 eDEX-UI 仅支持五个终端窗口,但这对我来说已经足够了。在平板电脑上,eDEX-UI 给了我网络空间感而不会影响我的效率。在桌面上,eDEX-UI 支持所有功能,并让我在我的同事面前显得很酷。
--------------------------------------------------------------------------------
@@ -45,7 +47,7 @@ via: https://opensource.com/article/19/1/productivity-tool-edex-ui
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201902/20190128 3 simple and useful GNOME Shell extensions.md b/published/201902/20190128 3 simple and useful GNOME Shell extensions.md
new file mode 100644
index 0000000000..3125f76e71
--- /dev/null
+++ b/published/201902/20190128 3 simple and useful GNOME Shell extensions.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10536-1.html)
+[#]: subject: (3 simple and useful GNOME Shell extensions)
+[#]: via: (https://fedoramagazine.org/3-simple-and-useful-gnome-shell-extensions/)
+[#]: author: (Ryan Lerch https://fedoramagazine.org/introducing-flatpak/)
+
+3 个简单实用的 GNOME Shell 扩展
+======
+
+
+Fedora 工作站的默认桌面 GNOME Shell,因其极简、整洁的用户界面而闻名,并深受许多用户的喜爱。它还能够通过扩展为原生界面添加功能。在本文中,我们将介绍 GNOME Shell 的 3 个简单且有用的扩展。这三个扩展为你的桌面提供了更多的功能,可以帮你完成可能每天都会做的简单任务。
+
+### 安装扩展程序
+
+安装 GNOME Shell 扩展的最快捷、最简单的方法是使用“软件”应用。有关详细信息,请查看 Magazine [以前的文章](https://fedoramagazine.org/install-extensions-via-software-application/):
+
+
+
+### 可移动驱动器菜单
+
+![][1]
+
+*Fedora 29 中的 Removable Drive Menu 扩展*
+
+首先是 [Removable Drive Menu][2] 扩展。如果你的计算机中有可移动驱动器,它是一个可在系统托盘中添加一个 widget 的简单工具。它可以使你轻松打开可移动驱动器中的文件,或者快速方便地弹出驱动器以安全移除设备。
+
+![][3]
+
+*软件应用中的 Removable Drive Menu*
+
+### 扩展之扩展
+
+![][4]
+
+如果你一直在安装和尝试新扩展,那么 [Extensions][5] 扩展非常有用。它提供了所有已安装扩展的列表,允许你启用或禁用它们。此外,如果该扩展有设置,那么可以快速打开每个扩展的设置对话框。
+
+![][6]
+
+*软件中的 Extensions 扩展*
+
+### 无用的时钟移动
+
+![][7]
+
+最后的是列表中最简单的扩展。[Frippery Move Clock][8],只是将时钟位置从顶部栏的中心向右移动,位于状态区旁边。
+
+![][9]
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/3-simple-and-useful-gnome-shell-extensions/
+
+作者:[Ryan Lerch][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/introducing-flatpak/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/01/removable-disk-1024x459.jpg
+[2]: https://extensions.gnome.org/extension/7/removable-drive-menu/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/01/removable-software-1024x723.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/extensions-extension-1024x459.jpg
+[5]: https://extensions.gnome.org/extension/1036/extensions/
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/01/extensions-software-1024x723.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/01/move_clock-1024x189.jpg
+[8]: https://extensions.gnome.org/extension/2/move-clock/
+[9]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-28-21-53-18-1024x723.png
diff --git a/translated/tech/20190128 Using more to view text files at the Linux command line.md b/published/201902/20190128 Using more to view text files at the Linux command line.md
similarity index 52%
rename from translated/tech/20190128 Using more to view text files at the Linux command line.md
rename to published/201902/20190128 Using more to view text files at the Linux command line.md
index f10ac248fc..9929654a94 100644
--- a/translated/tech/20190128 Using more to view text files at the Linux command line.md
+++ b/published/201902/20190128 Using more to view text files at the Linux command line.md
@@ -1,23 +1,24 @@
[#]: collector: (lujun9972)
-[#]: translator: ( dianbanjiu )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: translator: (dianbanjiu)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10531-1.html)
[#]: subject: (Using more to view text files at the Linux command line)
[#]: via: (https://opensource.com/article/19/1/more-text-files-linux)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
在 Linux 命令行使用 more 查看文本文件
======
-文本文件和 Linux 一直是携手并进的。或者说看起来如此。那你又是依靠哪些让你使用起来很舒服的工具来查看这些文本文件的呢?
+
+> 文本文件和 Linux 一直是携手并进的。或者说看起来如此。那你又是依靠哪些让你使用起来很舒服的工具来查看这些文本文件的呢?

-Linux 下有很多实用工具可以让你在终端界面查看文本文件。其中一个就是 [**more**][1]。
+Linux 下有很多实用工具可以让你在终端界面查看文本文件。其中一个就是 [more][1]。
-**more** 跟我之前另一篇文章里写到的工具 —— **[less][2]** 很相似。它们之间的主要不同点在于 **more** 只允许你向前查看文件。
+`more` 跟我之前另一篇文章里写到的工具 —— [less][2] 很相似。它们之间的主要不同点在于 `more` 只允许你向前查看文件。
-尽管它能提供的功能看起来很有限,不过它依旧有很多有用的特性值得你去了解。下面让我们来快速浏览一下 **more** 可以做什么,以及如何使用它吧。
+尽管它能提供的功能看起来很有限,不过它依旧有很多有用的特性值得你去了解。下面让我们来快速浏览一下 `more` 可以做什么,以及如何使用它吧。
### 基础使用
@@ -35,9 +36,9 @@ $ more jekyll-article.md

-使用空格键可以向下翻页,输入 **q** 可以退出。
+使用空格键可以向下翻页,输入 `q` 可以退出。
-如果你想在这个文件中搜索一些文本,输入 **/** 字符并在其后加上你想要查找的文字。例如你要查看的字段是 terminal,只需输入:
+如果你想在这个文件中搜索一些文本,输入 `/` 字符并在其后加上你想要查找的文字。例如你要查看的字段是 “terminal”,只需输入:
```
/terminal
@@ -45,12 +46,13 @@ $ more jekyll-article.md

-搜索的内容是区分大小写的,所以输入 /terminal 跟 /Terminal 会出现不同的结果。
+搜索的内容是区分大小写的,所以输入 `/terminal` 跟 `/Terminal` 会出现不同的结果。
### 和其他实用工具组合使用
-你可以通过管道将其他命令行工具得到的文本传输到 **more**。你问为什么这样做?因为有时这些工具获取的文本会超过终端一页可以显示的限度。
-想要做到这个,先输入你想要使用的完整命令,后面跟上管道符(**|**),管道符后跟 **more**。假设现在有一个有很多文件的目录。你就可以组合 **more** 跟 **ls** 命令完整查看这个目录当中的内容。
+你可以通过管道将其他命令行工具得到的文本传输到 `more`。你问为什么这样做?因为有时这些工具获取的文本会超过终端一页可以显示的限度。
+
+想要做到这个,先输入你想要使用的完整命令,后面跟上管道符(`|`),管道符后跟 `more`。假设现在有一个有很多文件的目录。你就可以组合 `more` 跟 `ls` 命令完整查看这个目录当中的内容。
```shell
$ ls | more
@@ -58,7 +60,7 @@ $ ls | more

-你可以组合 **more** 和 **grep** 命令,从而实现在多个文件中找到指定的文本。下面是我在多篇文章的源文件中查找 productivity 的例子。
+你可以组合 `more` 和 `grep` 命令,从而实现在多个文件中找到指定的文本。下面是我在多篇文章的源文件中查找 “productivity” 的例子。
```shell
$ grep ‘productivity’ core.md Dict.md lctt2014.md lctt2016.md lctt2018.md README.md | more
@@ -66,17 +68,17 @@ $ grep ‘productivity’ core.md Dict.md lctt2014.md lctt2016.md lctt2018.md RE

-另外一个可以和 **more** 组合的实用工具是 **ps**(列出你系统上正在运行的进程)。当你的系统上运行了很多的进程,你现在想要查看他们的时候,这个组合将会派上用场。例如你想找到一个你需要杀死的进程,只需输入下面的命令:
+另外一个可以和 `more` 组合的实用工具是 `ps`(列出你系统上正在运行的进程)。当你的系统上运行了很多的进程,你现在想要查看他们的时候,这个组合将会派上用场。例如你想找到一个你需要杀死的进程,只需输入下面的命令:
```shell
$ ps -u scott | more
```
-注意用你的用户名替换掉 scott。
+注意用你的用户名替换掉 “scott”。

-就像我文章开篇提到的, **more** 很容易使用。尽管不如它的双胞胎兄弟 **less** 那般灵活,但是仍然值得了解一下。
+就像我文章开篇提到的, `more` 很容易使用。尽管不如它的双胞胎兄弟 `less` 那般灵活,但是仍然值得了解一下。
--------------------------------------------------------------------------------
@@ -85,7 +87,7 @@ via: https://opensource.com/article/19/1/more-text-files-linux
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20190128 fdisk - Easy Way To Manage Disk Partitions In Linux.md b/published/201902/20190128 fdisk - Easy Way To Manage Disk Partitions In Linux.md
similarity index 100%
rename from published/20190128 fdisk - Easy Way To Manage Disk Partitions In Linux.md
rename to published/201902/20190128 fdisk - Easy Way To Manage Disk Partitions In Linux.md
diff --git a/published/201902/20190129 Get started with gPodder, an open source podcast client.md b/published/201902/20190129 Get started with gPodder, an open source podcast client.md
new file mode 100644
index 0000000000..f5449a95de
--- /dev/null
+++ b/published/201902/20190129 Get started with gPodder, an open source podcast client.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10567-1.html)
+[#]: subject: (Get started with gPodder, an open source podcast client)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-gpodder)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
+
+开始使用 gPodder 吧,一个开源播客客户端
+======
+
+> 使用 gPodder 将你的播客同步到你的设备上,gPodder 是我们开源工具系列中的第 17 个工具,它将在 2019 年提高你的工作效率。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 17 个工具来帮助你在 2019 年更有效率。
+
+### gPodder
+
+我喜欢播客。哎呀,我非常喜欢它们,因此我录制了其中的三个(你可以在[我的个人资料][1]中找到它们的链接)。我从播客那里学到了很多东西,并在我工作时在后台播放它们。但是,如何在多台桌面和移动设备之间保持同步可能会有一些挑战。
+
+[gPodder][2] 是一个简单的跨平台播客下载器、播放器和同步工具。它支持 RSS feed、[FeedBurner][3]、[YouTube][4] 和 [SoundCloud][5],它还有一个开源的同步服务,你可以根据需要运行它。gPodder 不直接播放播客。相反,它会使用你选择的音频或视频播放器。
+
+
+
+安装 gPodder 非常简单。安装程序适用于 Windows 和 MacOS,同时也有用于主要的 Linux 发行版的软件包。如果你的发行版中没有它,你可以直接从 Git 下载运行。通过 “Add Podcasts via URL” 菜单,你可以输入播客的 RSS 源 URL 或其他服务的 “特殊” URL。gPodder 将获取节目列表并显示一个对话框,你可以在其中选择要下载的节目或在列表上标记旧节目。
+
+
+
+它的一个更好的功能是,如果 URL 已经在你的剪贴板中,gPodder 会自动将它填入添加播客的 URL 中,这样你就可以很容易地将新的播客添加到列表中。如果你已有播客 feed 的 OPML 文件,那么可以上传并导入它。还有一个发现选项,让你可以搜索 [gPodder.net][6] 上的播客,这是由编写和维护 gPodder 的人员提供的自由及开源的播客的列表网站。
+
+
+
+[mygpo][7] 服务器在设备之间同步播客。gPodder 默认使用 [gPodder.net][8] 的服务器,但是如果你想要运行自己的服务器,那么可以在配置文件中更改它(请注意,你需要直接修改配置文件)。同步能让你在桌面和移动设备之间保持列表一致。如果你在多个设备上收听播客(例如,我在我的工作电脑、家用电脑和手机上收听),这会非常有用,因为这意味着无论你身在何处,你都拥有最近的播客和节目列表而无需一次又一次地设置。
+
+
+
+单击播客节目将显示与其关联的文本,单击“播放”将启动设备的默认音频或视频播放器。如果要使用默认之外的其他播放器,可以在 gPodder 的配置设置中更改此设置。
+
+通过 gPodder,你可以轻松查找、下载和收听播客,在设备之间同步这些播客,在易于使用的界面中访问许多其他功能。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-gpodder
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/users/ksonney
+[2]: https://gpodder.github.io/
+[3]: https://feedburner.google.com/
+[4]: https://youtube.com
+[5]: https://soundcloud.com/
+[6]: http://gpodder.net
+[7]: https://github.com/gpodder/mygpo
+[8]: http://gPodder.net
diff --git a/translated/tech/20190129 More About Angle Brackets in Bash.md b/published/201902/20190129 More About Angle Brackets in Bash.md
similarity index 73%
rename from translated/tech/20190129 More About Angle Brackets in Bash.md
rename to published/201902/20190129 More About Angle Brackets in Bash.md
index de4f8331ca..a34d4fc963 100644
--- a/translated/tech/20190129 More About Angle Brackets in Bash.md
+++ b/published/201902/20190129 More About Angle Brackets in Bash.md
@@ -1,14 +1,15 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10529-1.html)
[#]: subject: (More About Angle Brackets in Bash)
[#]: via: (https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Bash 中尖括号的更多用法
======
+> 在这篇文章,我们继续来深入探讨尖括号的更多其它用法。

@@ -22,19 +23,20 @@ Bash 中尖括号的更多用法
diff <(ls /original/dir/) <(ls /backup/dir/)
```
-[`diff`][2] 命令是一个逐行比较两个文件之间差异的工具。在上面的例子中,就使用了 `<` 让 `diff` 认为两个 `ls` 命令输出的结果都是文件,从而能够比较它们之间的差异。
+[diff][2] 命令是一个逐行比较两个文件之间差异的工具。在上面的例子中,就使用了 `<` 让 `diff` 认为两个 `ls` 命令输出的结果都是文件,从而能够比较它们之间的差异。
要注意,在 `<` 和 `(...)` 之间是没有空格的。
我尝试在我的图片目录和它的备份目录执行上面的命令,输出的是以下结果:
```
-diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/) 5d4 < Dv7bIIeUUAAD1Fc.jpg:large.jpg
+diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/)
+5d4 < Dv7bIIeUUAAD1Fc.jpg:large.jpg
```
输出结果中的 `<` 表示 `Dv7bIIeUUAAD1Fc.jpg:large.jpg` 这个文件存在于左边的目录(`/My/Pictures`)但不存在于右边的目录(`/My/backup/Pictures`)中。也就是说,在备份过程中可能发生了问题,导致这个文件没有被成功备份。如果 `diff` 没有显示出任何输出结果,就表明两个目录中的文件是一致的。
-看到这里你可能会想到,既然可以通过 `<` 将一些命令行的输出内容作为一个文件,提供给一个需要接受文件格式的命令,那么在上一篇文章的“最喜欢的演员排序”例子中,就可以省去中间的一些步骤,直接对输出内容执行 `sort` 操作了。
+看到这里你可能会想到,既然可以通过 `<` 将一些命令行的输出内容作为一个文件提供给一个需要接受文件格式的命令,那么在上一篇文章的“最喜欢的演员排序”例子中,就可以省去中间的一些步骤,直接对输出内容执行 `sort` 操作了。
确实如此,这个例子可以简化成这样:
@@ -42,7 +44,7 @@ diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/) 5d4 < Dv7bIIeUUAAD1Fc.jpg:la
sort -r <(while read -r name surname films;do echo $films $name $surname ; done < CBactors)
```
-### Here string
+### Here 字符串
除此以外,尖括号的重定向功能还有另一种使用方式。
@@ -52,9 +54,9 @@ sort -r <(while read -r name surname films;do echo $films $name $surname ; done
myvar="Hello World" echo $myvar | tr '[:lower:]' '[:upper:]' HELLO WORLD
```
-[`tr`][3] 命令可以将一个字符串转换为某种格式。在上面的例子中,就使用了 `tr` 将字符串中的所有小写字母都转换为大写字母。
+[tr][3] 命令可以将一个字符串转换为某种格式。在上面的例子中,就使用了 `tr` 将字符串中的所有小写字母都转换为大写字母。
-要理解的是,这个传递过程的重点不是变量,而是变量的值,也就是字符串 `Hello World`。这样的字符串叫做 here string,含义是“这就是我们要处理的字符串”。但对于上面的例子,还可以用更直观的方式的处理,就像下面这样:
+要理解的是,这个传递过程的重点不是变量,而是变量的值,也就是字符串 `Hello World`。这样的字符串叫做 HERE 字符串,含义是“这就是我们要处理的字符串”。但对于上面的例子,还可以用更直观的方式的处理,就像下面这样:
```
tr '[:lower:]' '[:upper:]' <<< $myvar
@@ -75,13 +77,13 @@ via: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
-[1]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
+[1]: https://linux.cn/article-10502-1.html
[2]: https://linux.die.net/man/1/diff
[3]: https://linux.die.net/man/1/tr
diff --git a/published/201902/20190130 Get started with Budgie Desktop, a Linux environment.md b/published/201902/20190130 Get started with Budgie Desktop, a Linux environment.md
new file mode 100644
index 0000000000..58a8a6505c
--- /dev/null
+++ b/published/201902/20190130 Get started with Budgie Desktop, a Linux environment.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10547-1.html)
+[#]: subject: (Get started with Budgie Desktop, a Linux environment)
+[#]: via: (https://opensource.com/article/19/1/productivity-tool-budgie-desktop)
+[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
+
+开始使用 Budgie 吧,一款 Linux 桌面环境
+======
+
+> 使用 Budgie 按需配置你的桌面,这是我们开源工具系列中的第 18 个工具,它将在 2019 年提高你的工作效率。
+
+
+
+每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
+
+这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 18 个工具来帮助你在 2019 年更有效率。
+
+### Budgie 桌面
+
+Linux 中有许多桌面环境。从易于使用并有令人惊叹图形界面的 [GNOME 桌面][1](在大多数主要 Linux 发行版上是默认桌面)和 [KDE][2],到极简主义的 [Openbox][3],再到高度可配置的平铺化的 [i3][4],有很多选择。我要寻找的桌面环境需要速度、不引人注目和干净的用户体验。当桌面不适合你时,很难会有高效率。
+
+
+
+[Budgie 桌面][5]是 [Solus][6] Linux 发行版的默认桌面,它在大多数主要 Linux 发行版的附加软件包中提供。它基于 GNOME,并使用了许多你可能已经在计算机上使用的相同工具和库。
+
+其默认桌面非常简约,只有面板和空白桌面。Budgie 包含一个集成的侧边栏(称为 Raven),通过它可以快速访问日历、音频控件和设置菜单。Raven 还包含一个集成的通知区域,其中包含与 MacOS 类似的统一系统消息显示。
+
+
+
+点击 Raven 中的齿轮图标会显示 Budgie 的控制面板及其配置。由于 Budgie 仍处于开发阶段,与 GNOME 或 KDE 相比,它的选项有点少,我希望随着时间的推移它会有更多的选项。顶部面板选项允许用户配置顶部面板的排序、位置和内容,这很不错。
+
+
+
+Budgie 的 Welcome 应用(首次登录时展示)包含安装其他软件、面板小程序、截图和 Flatpack 软件包的选项。这些小程序有处理网络、截图、额外的时钟和计时器等等。
+
+
+
+Budgie 提供干净稳定的桌面。它响应迅速,有许多选项,允许你根据需要自定义它。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/productivity-tool-budgie-desktop
+
+作者:[Kevin Sonney][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksonney (Kevin Sonney)
+[b]: https://github.com/lujun9972
+[1]: https://www.gnome.org/
+[2]: https://www.kde.org/
+[3]: http://openbox.org/wiki/Main_Page
+[4]: https://i3wm.org/
+[5]: https://getsol.us/solus/experiences/
+[6]: https://getsol.us/home/
diff --git a/published/201902/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md b/published/201902/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
new file mode 100644
index 0000000000..430b6210dd
--- /dev/null
+++ b/published/201902/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10577-1.html)
+[#]: subject: (Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro)
+[#]: via: (https://itsfoss.com/olive-video-editor)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Olive:一款以 Final Cut Pro 为目标的开源视频编辑器
+======
+
+[Olive][1] 是一个正在开发的新的开源视频编辑器。这个非线性视频编辑器旨在提供高端专业视频编辑软件的免费替代品。目标高么?我认为是的。
+
+如果你读过我们的 [Linux 中的最佳视频编辑器][2]这篇文章,你可能已经注意到大多数“专业级”视频编辑器(如 [Lightworks][3] 或 DaVinciResolve)既不免费也不开源。
+
+[Kdenlive][4] 和 Shotcut 也是此类,但它通常无法达到专业视频编辑的标准(这是许多 Linux 用户说的)。
+
+爱好者级和专业级的视频编辑之间的这种差距促使 Olive 的开发人员启动了这个项目。
+
+![Olive Video Editor][5]
+
+*Olive 视频编辑器界面*
+
+Libre Graphics World 中有一篇详细的[关于 Olive 的点评][6]。实际上,这是我第一次知道 Olive 的地方。如果你有兴趣了解更多信息,请阅读该文章。
+
+### 在 Linux 中安装 Olive 视频编辑器
+
+> 提醒你一下。Olive 正处于发展的早期阶段。你会发现很多 bug 和缺失/不完整的功能。你不应该把它当作你的主要视频编辑器。
+
+如果你想测试 Olive,有几种方法可以在 Linux 上安装它。
+
+#### 通过 PPA 在基于 Ubuntu 的发行版中安装 Olive
+
+你可以在 Ubuntu、Mint 和其他基于 Ubuntu 的发行版使用官方 PPA 安装 Olive。
+
+```
+sudo add-apt-repository ppa:olive-editor/olive-editor
+sudo apt-get update
+sudo apt-get install olive-editor
+```
+
+#### 通过 Snap 安装 Olive
+
+如果你的 Linux 发行版支持 Snap,则可以使用以下命令进行安装。
+
+```
+sudo snap install --edge olive-editor
+```
+
+#### 通过 Flatpak 安装 Olive
+
+如果你的 [Linux 发行版支持 Flatpak][7],你可以通过 Flatpak 安装 Olive 视频编辑器。
+
+- [Flatpak 地址](https://flathub.org/apps/details/org.olivevideoeditor.Olive)
+
+#### 通过 AppImage 使用 Olive
+
+不想安装吗?下载 [AppImage][8] 文件,将其设置为可执行文件并运行它。32 位和 64 位 AppImage 文件都有。你应该下载相应的文件。
+
+- [下载 Olive 的 AppImage](https://github.com/olive-editor/olive/releases/tag/continuous)
+
+Olive 也可用于 Windows 和 macOS。你可以从它的[下载页面][9]获得它。
+
+### 想要支持 Olive 视频编辑器的开发吗?
+
+如果你喜欢 Olive 尝试实现的功能,并且想要支持它,那么你可以通过以下几种方式。
+
+如果你在测试 Olive 时发现一些 bug,请到它们的 GitHub 仓库中报告。
+
+- [提交 bug 报告以帮助 Olive](https://github.com/olive-editor/olive/issues)
+
+如果你是程序员,请浏览 Olive 的源代码,看看你是否可以通过编码技巧帮助项目。
+
+- [Olive 的 GitHub 仓库](https://github.com/olive-editor/olive)
+
+在经济上为项目做贡献是另一种可以帮助开发开源软件的方法。你可以通过成为赞助人来支持 Olive。
+
+- [赞助 Olive](https://www.patreon.com/olivevideoeditor)
+
+如果你没有支持 Olive 的金钱或编码技能,你仍然可以帮助它。在社交媒体或你经常访问的 Linux/软件相关论坛和群组中分享这篇文章或 Olive 的网站。一点微小的口碑都能间接地帮助它。
+
+### 你如何看待 Olive?
+
+评判 Olive 还为时过早。我希望能够持续快速开发,并且在年底之前发布 Olive 的稳定版(如果我没有过于乐观的话)。
+
+你如何看待 Olive?你是否认同开发人员针对专业用户的目标?你希望 Olive 拥有哪些功能?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/olive-video-editor
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.olivevideoeditor.org/
+[2]: https://itsfoss.com/best-video-editing-software-linux/
+[3]: https://www.lwks.com/
+[4]: https://kdenlive.org/en/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?resize=800%2C450&ssl=1
+[6]: http://libregraphicsworld.org/blog/entry/introducing-olive-new-non-linear-video-editor
+[7]: https://itsfoss.com/flatpak-guide/
+[8]: https://itsfoss.com/use-appimage-linux/
+[9]: https://www.olivevideoeditor.org/download.php
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?fit=800%2C450&ssl=1
diff --git a/published/201902/20190131 Will quantum computing break security.md b/published/201902/20190131 Will quantum computing break security.md
new file mode 100644
index 0000000000..33323796ce
--- /dev/null
+++ b/published/201902/20190131 Will quantum computing break security.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10566-1.html)
+[#]: subject: (Will quantum computing break security?)
+[#]: via: (https://opensource.com/article/19/1/will-quantum-computing-break-security)
+[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
+
+量子计算会打破现有的安全体系吗?
+======
+
+> 你会希望[某黑客][6](J. Random Hacker)假冒你的银行吗?
+
+
+
+近年来,量子计算机(quantum computer)已经出现在大众的视野当中。量子计算机被认为是第六类计算机,这六类计算机包括:
+
+1. 人力(Humans):在人造的计算工具出现之前,人类只能使用人力去进行计算。而承担计算工作的人,只能被称为“计算者”。
+2. 模拟计算工具(Mechanical analogue):由人类制造的一些模拟计算过程的小工具,例如[安提凯希拉装置][1](Antikythera mechanism)、星盘(astrolabe)、计算尺(slide rule)等等。
+3. 机械工具(Mechanical digital):在这一个类别中包括了运用到离散数学但未使用电子技术进行计算的工具,例如算盘(abacus)、Charles Babbage 的差分机(Difference Engine)等等。
+4. 电子模拟计算工具(Electronic analogue):这一个类别的计算机多数用于军事方面的用途,例如炸弹瞄准器、枪炮瞄准装置等等。
+5. 电子计算机(Electronic digital):我在这里会稍微冒险一点,我觉得 Colossus 是第一台电子计算机 [^1]:这一类几乎包含现代所有的电子设备,从移动电话到超级计算机,都在这个类别当中。
+6. 量子计算机(Quantum computer):即将进入我们的生活,而且与之前的几类完全不同。
+
+### 什么是量子计算?
+
+量子计算Quantum computing的概念来源于量子力学quantum mechanics,使用的计算方式和我们平常使用的普通计算非常不同。如果想要深入理解,建议从参考[维基百科上的定义][2]开始。对我们来说,最重要的是理解这一点:量子计算机使用量子位qubit进行计算。在这样的前提下,对于很多数学算法和运算操作,量子计算机的计算速度会比普通计算机要快得多。
+
+这里的“快得多”是按数量级来说的“快得多”。在某些情况下,一个计算任务如果由普通计算机来执行,可能要耗费几年或者几十年才能完成,但如果由量子计算机来执行,就只需要几秒钟。这样的速度甚至令人感到可怕。因为量子计算机会非常擅长信息的加密解密计算,即使在没有密钥的情况下,也能快速完成繁重的计算任务。
+
+这意味着,如果拥有足够强大的量子计算机,那么你的所有信息都会被一览无遗,任何被加密的数据都可以被正确解密出来,甚至伪造数字签名也会成为可能。这确实是一个严重的问题。谁也不想被某个黑客冒充成自己在用的银行,更不希望自己在区块链上的交易被篡改得面目全非。
+
+### 好消息
+
+尽管上面的提到的问题非常可怕,但也不需要太担心。
+
+首先,如果要实现上面提到的能力,一台可以操作大量量子位的量子计算机是必不可少的,而这个硬件上的要求就是一个很高的门槛。[^4] 目前普遍认为,规模大得足以有效破解经典加密算法的量子计算机在最近几年还不可能出现。
+
+其次,除了攻击现有的加密算法需要大量的量子位以外,还需要很多量子位来保证容错性。
+
+还有,尽管确实有一些理论上的模型阐述了量子计算机如何对一些现有的算法作出攻击,但是要让这样的理论模型实际运作起来的难度会比我们[^5] 想象中大得多。事实上,有一些攻击手段也是未被完全确认是可行的,又或者这些攻击手段还需要继续耗费很多年的改进才能到达如斯恐怖的程度。
+
+最后,还有很多专业人士正在研究能够防御量子计算的算法(这样的算法也被称为“后量子算法”,即 post-quantum algorithms)。如果这些防御算法经过测试以后投入使用,我们就可以使用这些算法进行加密,来对抗量子计算了。
+
+总而言之,很多专家都认为,我们现有的加密方式在未来 5 年甚至未来 10 年内都是安全的,不需要过分担心。
+
+### 也有坏消息
+
+但我们也并不是高枕无忧了,以下两个问题就值得我们关注:
+
+1. 人们在设计应用系统的时候仍然没有对量子计算作出太多的考量。如果设计的系统可能会使用 10 年以上,又或者数据加密和签名的时间跨度在 10 年以上,那么就必须考虑量子计算在未来会不会对系统造成不利的影响。
+2. 新出现的防御量子计算的算法可能会是专有的。也就是说,如果基于这些防御量子计算的算法来设计系统,那么在系统落地的时候,可能会需要为此付费。尽管我是支持开源的,尤其是[开源密码学][3],但我最担心的就是无法开源这方面的内容。而且最糟糕的是,在建立新的协议标准时(不管是事实标准还是通过标准组织建立的标准),无论是故意的,还是无意忽略,或者是没有好的开源替代品,他们都很可能使用专有算法而排除使用开源算法。
+
+### 我们要怎样做?
+
+幸运的是,针对上述两个问题,我们还是有应对措施的。首先,在整个系统的设计阶段,就需要考虑到它是否会受到量子计算的影响,并作出相应的规划。当然了,不需要现在就立即采取行动,因为当前的技术水平也没法实现有效的方案,但至少也要[在加密方面保持敏捷性][4],以便在任何需要的时候为你的协议和系统更换更有效的加密算法。[^7]
+
+其次是参与开源运动。尽可能鼓励密码学方面的有识之士团结起来,支持开放标准,并投入对非专有的防御量子计算的算法研究当中去。这一点也算是当务之急,因为号召更多的人重视起来并加入研究,比研究本身更为重要。
+
+本文首发于《[Alice, Eve, and Bob][5]》,并在作者同意下重新发表。
+
+[^1]: 我认为把它称为第一台电子可编程计算机是公平的。我知道有早期的非可编程的,也有些人声称是 ENIAC,但我没有足够的空间或精力在这里争论这件事。
+[^2]: No。
+[^3]: See 2. Don't get me wrong, by the way—I grew up near Weston-super-Mare, and it's got things going for it, but it's not Mayfair.
+[^4]: 如果量子物理学家说很难,那么在我看来,就很难。
+[^5]: 而且我假设我们都不是量子物理学家或数学家。
+[^6]: I'm definitely not.
+[^7]: 而且不仅仅是出于量子计算的原因:我们现有的一些经典算法很可能会陷入其他非量子攻击,例如新的数学方法。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/will-quantum-computing-break-security
+
+作者:[Mike Bursell][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Antikythera_mechanism
+[2]: https://en.wikipedia.org/wiki/Quantum_computing
+[3]: https://opensource.com/article/17/10/many-eyes
+[4]: https://aliceevebob.com/2017/04/04/disbelieving-the-many-eyes-hypothesis/
+[5]: https://aliceevebob.com/2019/01/08/will-quantum-computing-break-security/
+[6]: https://www.techopedia.com/definition/20225/j-random-hacker
diff --git a/published/201902/20190201 Top 5 Linux Distributions for New Users.md b/published/201902/20190201 Top 5 Linux Distributions for New Users.md
new file mode 100644
index 0000000000..5641dd8796
--- /dev/null
+++ b/published/201902/20190201 Top 5 Linux Distributions for New Users.md
@@ -0,0 +1,111 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10553-1.html)
+[#]: subject: (Top 5 Linux Distributions for New Users)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+5 个面向新手的 Linux 发行版
+======
+
+> 5 个可使用新用户有如归家般感觉的发行版。
+
+
+
+从最初的 Linux 到现在,Linux 已经发展了很长一段路。但是,无论你曾经多少次听说过现在使用 Linux 有多容易,仍然会有表示怀疑的人。而要真的承担得起这份声明,桌面必须足够简单,以便不熟悉 Linux 的人也能够使用它。事实上,大量的桌面发行版使这成为了现实。
+
+### 无需 Linux 知识
+
+将这个清单误解为又一个“最佳用户友好型 Linux 发行版”的清单可能很简单。但这不是我们要在这里看到的。这二者之间有什么不同?就我的目的而言,定义的界限是 Linux 是否真正起到了使用的作用。换句话说,你是否可以将这个桌面操作系统放在一个用户面前,并让他们应用自如而无需懂得 Linux 知识呢?
+
+不管你相信与否,有些发行版就能做到。这里我将介绍给你 5 个这样的发行版。这些或许你全都听说过。它们或许不是你所选择的发行版,但可以向你保证它们无需过多关注,而是将用户放在眼前的。
+
+我们来看看选中的几个。
+
+### Elementary OS
+
+[Elementary OS](https://elementary.io/) 的理念主要围绕人们如何实际使用他们的桌面。开发人员和设计人员不遗余力地创建尽可能简单的桌面。在这个过程中,他们致力于去 Linux 化的 Linux。这并不是说他们已经从这个等式中删除了 Linux。不,恰恰相反,他们所做的就是创建一个与你所发现的一样的中立的操作系统。Elementary OS 是如此流畅,以确保一切都完美合理。从单个 Dock 到每个人都清晰明了的应用程序菜单,这是一个桌面,而不用提醒用户说,“你正在使用 Linux!” 事实上,其布局本身就让人联想到 Mac,但附加了一个简单的应用程序菜单(图 1)。
+
+![Elementary OS Juno][2]
+
+*图 1:Elementary OS Juno 应用菜单*
+
+将 Elementary OS 放在此列表中的另一个重要原因是它不像其他桌面发行版那样灵活。当然,有些用户会对此不以为然,但是如果桌面没有向用户扔出各种花哨的定制诱惑,那么就会形成一个非常熟悉的环境:一个既不需要也不允许大量修修补补的环境。操作系统在让新用户熟悉该平台这一方面还有很长的路要走。
+
+与任何现代 Linux 桌面发行版一样,Elementary OS 包括了应用商店,称为 AppCenter,用户可以在其中安装所需的所有应用程序,而无需触及命令行。
+
+### 深度操作系统
+
+[深度操作系统](https://www.deepin.org/)不仅得到了市场上最漂亮的台式机之一的赞誉,它也像任何桌面操作系统一样容易上手。其桌面界面非常简单,对于毫无 Linux 经验的用户来说,它的上手速度非常快。事实上,你很难找到无法立即上手使用 Deepin 桌面的用户。而这里唯一可能的障碍可能是其侧边栏控制中心(图 2)。
+
+![][5]
+
+*图 2:Deepin 的侧边栏控制编码*
+
+但即使是侧边栏控制面板,也像市场上的任何其他配置工具一样直观。任何使用过移动设备的人对于这种布局都很熟悉。至于打开应用程序,Deepin 的启动器采用了 macOS Launchpad 的方式。此按钮位于桌面底座上通常最右侧的位置,因此用户立即就可以会意,知道它可能类似于标准的“开始”菜单。
+
+与 Elementary OS(以及市场上大多数 Linux 发行版)类似,深度操作系统也包含一个应用程序商店(简称为“商店”),可以轻松安装大量应用程序。
+
+### Ubuntu
+
+你知道肯定有它。[Ubuntu](https://www.ubuntu.com/) 通常在大多数用户友好的 Linux 列表中占据首位。因为它是少数几个不需要懂得 Linux 就能使用的桌面之一。但在采用 GNOME(和 Unity 谢幕)之前,情况并非如此。因为 Unity 经常需要进行一些调整才能达到一点 Linux 知识都不需要的程度(图 3)。现在 Ubuntu 已经采用了 GNOME,并将其调整到甚至不需要懂得 GNOME 的程度,这个桌面使得对 Linux 的简单性和可用性的要求不再是迫切问题。
+
+![Ubuntu 18.04][7]
+
+*图 3:Ubuntu 18.04 桌面可使用马上熟悉起来*
+
+与 Elementary OS 不同,Ubuntu 对用户毫无阻碍。因此,任何想从桌面上获得更多信息的人都可以拥有它。但是,其开箱即用的体验对于任何类型的用户都是足够的。任何一个让用户不知道他们触手可及的力量有多少的桌面,肯定不如 Ubuntu。
+
+### Linux Mint
+
+我需要首先声明,我从来都不是 [Linux Mint](https://linuxmint.com/) 的忠实粉丝。但这并不是说我不尊重开发者的工作,而更多的是一种审美观点。我更喜欢现代化的桌面环境。但是,老派的计算机桌面隐喻(可以在默认的 Cinnamon 桌面中找到)可以让几乎每个使用它的人都格外熟悉。Linux Mint 使用任务栏、开始按钮、系统托盘和桌面图标(图 4),提供了一个零学习曲线的界面。事实上,一些用户最初可能会误以为他们正在使用 Windows 7 的克隆版。甚至是它的更新警告图标也会让用户感到非常熟悉。
+
+![Linux Mint][9]
+
+*图 4:Linux Mint 的 Cinnamon 桌面非常像 Windows 7*
+
+因为 Linux Mint 受益于其所基于的 Ubuntu,它不仅会让你马上熟悉起来,而且具有很高的可用性。无论你是否对底层平台有所了解,用户都会立即感受到宾至如归的感觉。
+
+### Ubuntu Budgie
+
+我们的列表将以这样一个发行版做结:它也能让用户忘记他们正在使用 Linux,并且使用常用工具变得简单、美观。使 Ubuntu 融合 Budgie 桌面可以构成一个令人印象深刻的易用发行版。虽然其桌面布局(图 5)可能不太一样,但毫无疑问,适应这个环境并不需要浪费时间。实际上,除了 Dock 默认居于桌面的左侧,[Ubuntu Budgie](https://ubuntubudgie.org/) 确实看起来像 Elementary OS。
+
+![Budgie][11]
+
+*图 5:Budgie 桌面既漂亮又简单*
+
+Ubuntu Budgie 中的系统托盘/通知区域提供了一些不太多见的功能,比如:快速访问 Caffeine(一种保持桌面清醒的工具)、快速笔记工具(用于记录简单笔记)、Night Lite 开关、原地下拉菜单(用于快速访问文件夹),当然还有 Raven 小程序/通知侧边栏(与深度操作系统中的控制中心侧边栏类似,但不太优雅)。Budgie 还包括一个应用程序菜单(左上角),用户可以访问所有已安装的应用程序。打开一个应用程序,该图标将出现在 Dock 中。右键单击该应用程序图标,然后选择“保留在 Dock”以便更快地访问。
+
+Ubuntu Budgie 的一切都很直观,所以几乎没有学习曲线。这种发行版既优雅又易于使用,不能再好了。
+
+### 选择一个吧
+
+至此介绍了 5 个 Linux 发行版,它们各自以自己的方式提供了让任何用户都马上熟悉的桌面体验。虽然这些可能不是你对顶级发行版的选择,但对于那些不熟悉 Linux 的用户来说,却不能否定它们的价值。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/top-5-linux-distributions-new-users
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/files/images/elementaryosjpg-2
+[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/elementaryos_0.jpg?itok=KxgNUvMW (Elementary OS Juno)
+[3]: https://www.linux.com/licenses/category/used-permission
+[4]: https://www.linux.com/files/images/deepinjpg
+[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/deepin.jpg?itok=VV381a9f
+[6]: https://www.linux.com/files/images/ubuntujpg-1
+[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_1.jpg?itok=bax-_Tsg (Ubuntu 18.04)
+[8]: https://www.linux.com/files/images/linuxmintjpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linuxmint.jpg?itok=8sPon0Cq (Linux Mint )
+[10]: https://www.linux.com/files/images/budgiejpg-0
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/budgie_0.jpg?itok=zcf-AHmj (Budgie)
+[12]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/published/201902/20190205 DNS and Root Certificates.md b/published/201902/20190205 DNS and Root Certificates.md
new file mode 100644
index 0000000000..3a921def35
--- /dev/null
+++ b/published/201902/20190205 DNS and Root Certificates.md
@@ -0,0 +1,136 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10533-1.html)
+[#]: subject: (DNS and Root Certificates)
+[#]: via: (https://lushka.al/dns-and-certificates/)
+[#]: author: (Anxhelo Lushka https://lushka.al/)
+
+DNS 和根证书
+======
+
+> 关于 DNS 和根证书你需要了解的内容。
+
+由于最近发生的一些事件,我们(Privacy Today 组织)感到有必要写一篇关于此事的短文。它适用于所有读者,因此它将保持简单 —— 技术细节可能会在稍后的文章发布。
+
+### 什么是 DNS,为什么它与你有关?
+
+DNS 的意思是域名系统(Domain Name System),你每天都会接触到它。每当你的 Web 浏览器或任何其他应用程序连接到互联网时,它就很可能会使用域名。简单来说,域名就是你键入的地址:例如 [duckduckgo.com][1]。你的计算机需要知道它所导向的地方,会向 DNS 解析器寻求帮助。而它将返回类似 [176.34.155.23][2] 这样的 IP —— 这就是连接时所需要知道的公开网络地址。此过程称为 DNS 查找。
+
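+如果想实际看一下 DNS 查找,可以在命令行里用 `dig` 之类的工具查询域名(以下输出只是示意,实际返回的 IP 会随时间和所用解析器而变化):
+
+```
+$ dig +short duckduckgo.com
+176.34.155.23
+```
+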
+这对你的隐私、安全以及你的自由都有一定的影响:
+
+#### 隐私
+
+由于你要求解析器获取域名的 IP,因此它会确切地知道你正在访问哪些站点,并且由于“物联网”(通常缩写为 IoT),甚至它还知道你在家中使用的是哪个设备。
+
+#### 安全
+
+你可以相信解析器返回的 IP 是正确的。有一些检查措施可以确保如此,在正常情况下这一般不是问题。但这些可能措施会被破坏,这就是写作本文的原因。如果返回的 IP 不正确,你可能会被欺骗引向了恶意的第三方 —— 甚至你都不会注意到任何差异。在这种情况下,你的隐私会受到更大的危害,因为不仅会被跟踪你访问了什么网站,甚至你访问的内容也会被跟踪。第三方可以准确地看到你正在查看的内容,收集你输入的个人信息(例如密码)等等。你的整个身份可以轻松接管。
+
+#### 自由
+
+审查通常是通过 DNS 实施的。这不是最有效的方法,但它非常普遍。即使在西方国家,它也经常被公司和政府使用。他们使用与潜在攻击者相同的方法;当你查询 IP 地址时,他们不会返回正确的 IP。他们可以表现得就好像某个域名不存在,或完全将访问指向别处。
+
+### DNS 查询的方式
+
+#### 由你的 ISP 提供的第三方 DNS 解析器
+
+大多数人都在使用由其互联网接入提供商(ISP)提供的第三方解析器。当你连接调制解调器时(LCTT 译注:或宽带路由器),这些 DNS 解析器的地址就会被自动下发给你,而你可能从来没注意过它。
+
+#### 你自己选择的第三方 DNS 解析器
+
+如果你已经知道 DNS 意味着什么,那么你可能会决定使用你选择的另一个 DNS 解析器。这可能会改善这种情况,因为它使你的 ISP 更难以跟踪你,并且你可以避免某些形式的审查。尽管追踪和审查仍然是可能的,但这种方法并没有被广泛使用。
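+
+在多数 Linux 系统上,可以在 `/etc/resolv.conf` 中看到当前使用的解析器地址(下面的地址只是举例;该文件通常由 NetworkManager 等工具自动管理,手工修改可能会被覆盖):
+
+```
+$ cat /etc/resolv.conf
+nameserver 9.9.9.9
+```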
+
+#### 你自己(本地)的 DNS 解析器
+
+你可以自己动手,避免使用别人的 DNS 解析器的一些危险。如果你对此感兴趣,请告诉我们。
+
+### 根证书
+
+#### 什么是根证书?
+
+每当你访问以 https 开头的网站时,你都会使用它发送的证书与之通信。它使你的浏览器能够加密通信并确保没有人可以窥探。这就是为什么每个人都被告知在登录网站时要注意 https(而不是 http)。不过,证书本身仅能验证它是为某个域所生成的。那么,又由谁来保证这个证书确实可信呢?
+
+这就是根证书的来源。可以将其视为一个更高的级别,用来确保其下的级别是正确的。它验证发送给你的证书是否已由证书颁发机构授权,而该机构则确保创建证书的人确实是真正的网站运营者。
+
+这也被称为信任链。默认情况下,你的操作系统包含一组这些根证书,以确保该信任链的存在。
+
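+如果想亲眼看看某个网站发来的证书链,可以用 `openssl` 查看(以下命令仅作演示,完整输出很长,这里从略):
+
+```
+$ openssl s_client -connect duckduckgo.com:443 -showcerts
+```
+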
+#### 滥用
+
+我们现在知道:
+
+* DNS 解析器在你发送域名时向你发送 IP 地址
+* 证书允许加密你的通信,并验证它们是否为你访问的域生成
+* 根证书验证该证书是否合法,并且是由真实站点运营者创建的
+
+**怎么会被滥用呢?**
+
+* 如前所述,恶意 DNS 解析器可能会向你发送错误的 IP 以进行审查。它们还可以将你导向完全不同的网站。
+* 这个网站可以向你发送假的证书。
+* 恶意的根证书可以“验证”此假证书。
+
+对你来说,这个网站看起来绝对没问题;它在网址中有 https,如果你点击它,它会说已经通过验证。就像你了解到的一样,对吗?**不对!**
+
+它现在可以接收你要发送给原站点的所有通信。这会绕过想要避免被滥用而创建的检查。你不会收到错误消息,你的浏览器也不会发觉。
+
+**而你所有的数据都会受到损害!**
+
+### 结论
+
+#### 风险
+
+* 使用恶意 DNS 解析器总是会损害你的隐私,但只要你注意 https,你的安全性就不会受到损害。
+* 使用恶意 DNS 解析程序和恶意根证书,你的隐私和安全性将完全受到损害。
+
+#### 可以采取的动作
+
+**不要安装第三方根证书!**只有非常少的例外情况才需要这样做,并且它们都不适用于一般最终用户。
+
+**不要被那些“广告拦截”、“军事级安全”或类似的东西营销噱头所吸引**。有一些方法可以自行使用 DNS 解析器来增强你的隐私,但安装第三方根证书永远不会有意义。你正在将自己置身于陷阱之中。
+
+### 实际看看
+
+**警告**
+
+有位友好的系统管理员提供了一个现场演示,你可以实时看到自己。这是真事。
+
+**千万不要输入私人数据!之后务必删除证书和该 DNS!**
+
+如果你不知道如何操作,那就不要安装它。虽然我们相信我们的朋友,但你不要随便安装随机和未知的第三方根证书。
+
+#### 实际演示
+
+链接在这里:
+
+* 设置所提供的 DNS 解析器
+* 安装所提供的根证书
+* 访问 并输入随机登录数据
+* 你的数据将显示在该网站上
+
+### 延伸信息
+
+如果你对更多技术细节感兴趣,请告诉我们。如果有足够多感兴趣的人,我们可能会写一篇文章,但是目前最重要的部分是分享基础知识,这样你就可以做出明智的决定,而不会因为营销和欺诈而陷入陷阱。请随时提出对你很关注的其他主题。
+
+这篇文章来自 [Privacy Today 频道][3]。[Privacy Today][4] 是一个关于隐私、开源、自由哲学等所有事物的组织!
+
+所有内容均根据 CC BY-NC-SA 4.0 获得许可。([署名 - 非商业性使用 - 共享 4.0 国际][5])。
+
+--------------------------------------------------------------------------------
+
+via: https://lushka.al/dns-and-certificates/
+
+作者:[Anxhelo Lushka][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://lushka.al/
+[b]: https://github.com/lujun9972
+[1]: https://duckduckgo.com
+[2]: http://176.34.155.23
+[3]: https://t.me/privacytoday
+[4]: https://t.me/joinchat/Awg5A0UW-tzOLX7zMoTDog
+[5]: https://creativecommons.org/licenses/by-nc-sa/4.0/
diff --git a/published/201902/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/published/201902/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
new file mode 100644
index 0000000000..830ff13fd9
--- /dev/null
+++ b/published/201902/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md
@@ -0,0 +1,142 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10550-1.html)
+[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
+[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+在 VirtualBox 上安装 Kali Linux 的最安全快捷的方式
+======
+
+> 本教程将向你展示如何以最快的方式在运行于 Windows 和 Linux 上的 VirtualBox 上安装 Kali Linux。
+
+[Kali Linux][1] 是面向[黑客][2]和安全爱好者的最佳 Linux 发行版之一。
+
+由于它涉及像黑客这样的敏感话题,它就像一把双刃剑。我们过去在一篇详细的 [Kali Linux 点评](https://linux.cn/article-10198-1.html)中对此进行了讨论,所以我不会再次赘述。
+
+虽然你可以通过替换现有的操作系统来安装 Kali Linux,但通过虚拟机使用它将是一个更好、更安全的选择。
+
+使用 VirtualBox,你可以像使用 Windows / Linux 系统中的常规应用程序一样使用 Kali Linux,几乎就和在系统中运行 VLC 或玩游戏一样简单。
+
+在虚拟机中使用 Kali Linux 也是安全的。无论你在 Kali Linux 中做什么都不会影响你的“宿主机系统”(即你原来的 Windows 或 Linux 操作系统)。你的实际操作系统将不会受到影响,宿主机系统中的数据将是安全的。
+
+![Kali Linux on Virtual Box][3]
+
+### 如何在 VirtualBox 上安装 Kali Linux
+
+我将在这里使用 [VirtualBox][4]。它是一个很棒的开源虚拟化解决方案,适用于任何人(无论是专业或个人用途)。它可以免费使用。
+
+在本教程中,我们将特指 Kali Linux 的安装,但你几乎可以安装任何其他已有 ISO 文件的操作系统或预先构建好的虚拟机存储文件。
+
+**注意:**这些相同的步骤适用于运行在 Windows / Linux 上的 VirtualBox。
+
+正如我已经提到的,你可以安装 Windows 或 Linux 作为宿主机。但是,在本文中,我安装了 Windows 10(不要讨厌我!),我会尝试在 VirtualBox 中逐步安装 Kali Linux。
+
+而且,最好的是,即使你碰巧使用 Linux 发行版作为主要操作系统,相同的步骤也完全适用!
+
+想知道怎么样做吗?让我们来看看…
+
+### 在 VirtualBox 上安装 Kali Linux 的逐步指导
+
+我们将使用专为 VirtualBox 制作的定制 Kali Linux 镜像。当然,你还可以下载 Kali Linux 的 ISO 文件并创建一个新的虚拟机,但是为什么在你有一个简单的替代方案时还要这样做呢?
+
+#### 1、下载并安装 VirtualBox
+
+你需要做的第一件事是从 Oracle 官方网站下载并安装 VirtualBox。
+
+- [下载 VirtualBox](https://www.virtualbox.org/wiki/Downloads)
+
+下载了安装程序后,只需双击它即可安装 VirtualBox。在 Ubuntu / Fedora Linux 上安装 VirtualBox 也是一样的。
+
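+顺带一提,如果你使用的是 Ubuntu,除了官网提供的安装包,也可以直接从发行版仓库安装(版本可能稍旧;Fedora 用户通常需要先启用 RPM Fusion 之类的第三方仓库,这里仅作补充说明):
+
+```
+sudo apt install virtualbox
+```
+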
+#### 2、下载就绪的 Kali Linux 虚拟镜像
+
+VirtualBox 成功安装后,前往 [Offensive Security 的下载页面][5] 下载用于 VirtualBox 的虚拟机镜像。如果你改变主意想使用 [VMware][6],也有用于它的。
+
+![Kali Linux Virtual Box Image][7]
+
+如你所见,文件大小远远超过 3 GB,你应该使用 torrent 方式或使用 [下载管理器][8] 下载它。
+
+#### 3、在 VirtualBox 上安装 Kali Linux
+
+一旦安装了 VirtualBox 并下载了 Kali Linux 镜像,你只需将其导入 VirtualBox 即可使其正常工作。
+
+以下是如何导入 Kali Linux 的 VirtualBox 镜像:
+
+**步骤 1**:启动 VirtualBox。你会注意到有一个 “Import” 按钮,点击它。
+
+![virtualbox import][9]
+
+*点击 “Import” 按钮*
+
+**步骤 2**:接着,浏览找到你刚刚下载的文件并选择它导入(如你在下图所见)。文件名应该以 “kali linux” 开头,并以 “.ova” 扩展名结束。
+
+![virtualbox import file][10]
+
+*导入 Kali Linux 镜像*
+
+选择好之后,点击 “Next” 进行处理。
+
+**步骤 3**:现在,你将看到要导入的这个虚拟机的设置。你可以自定义它们,这是你的自由。如果你想使用默认设置,也没关系。
+
+你需要选择一个具有足够存储空间的路径。在 Windows 上,我从不推荐使用 C: 盘。
+
+![virtualbox kali linux settings][11]
+
+*以 VDI 方式导入硬盘驱动器*
+
+这里,“以 VDI 方式导入硬盘驱动器”是指按照所分配的存储空间设置,以虚拟方式挂载该硬盘驱动器。
+
+完成设置后,点击 “Import” 并等待一段时间。
+
+**步骤 4**:你现在将看到这个虚拟机已经列出了。所以,只需点击 “Start” 即可启动它。
+
+你最初可能会因 USB 2.0 控制器支持的问题而出现错误,你可以将其禁用以解决此问题,或者只需按照屏幕上的说明安装附加的软件包进行修复即可。现在就完成了!
+
+![kali linux on windows virtual box][12]
+
+*运行于 VirtualBox 中的 Kali Linux*
+
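+顺带一提,如果你更喜欢命令行,也可以用 VirtualBox 自带的 `VBoxManage` 工具来完成导入和启动(以下仅为示意,OVA 文件名与虚拟机名称请以你实际下载、导入的为准):
+
+```
+# 导入下载好的虚拟机镜像(文件名仅为示例)
+VBoxManage import kali-linux-2019.1-vbox-amd64.ova
+
+# 查看已导入的虚拟机名称
+VBoxManage list vms
+
+# 启动虚拟机(名称以上一条命令的输出为准)
+VBoxManage startvm "kali-linux-2019.1-vbox-amd64" --type gui
+```
+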
+我希望本指南可以帮助你在 VirtualBox 上轻松安装 Kali Linux。当然,Kali Linux 有很多有用的工具可用于渗透测试 —— 祝你好运!
+
+**提示**:Kali Linux 和 Ubuntu 都是基于 Debian 的。如果你在使用 Kali Linux 时遇到任何问题或错误,可以按照互联网上针对 Ubuntu 或 Debian 的教程进行操作。
+
+### 赠品:免费的 Kali Linux 指南手册
+
+如果你刚刚开始使用 Kali Linux,那么了解如何使用 Kali Linux 是一个好主意。
+
+Kali Linux 背后的公司 Offensive Security 创建了一本指南,介绍了 Linux 的基础知识、Kali Linux 的基础知识,以及相关的配置和设置。它还有一些关于渗透测试和安全工具的章节。
+
+基本上,它拥有你开始使用 Kali Linux 时所需知道的一切。最棒的是这本书可以免费下载。
+
+- [免费下载 Kali Linux 揭秘](https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf)
+
+如果你遇到问题或想分享在 VirtualBox 上运行 Kali Linux 的经验,请在下面的评论中告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-kali-linux-virtualbox
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.kali.org/
+[2]: https://itsfoss.com/linux-hacking-penetration-testing/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
+[4]: https://www.virtualbox.org/
+[5]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
+[6]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
+[8]: https://itsfoss.com/4-best-download-managers-for-linux/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?fit=800%2C450&ssl=1
diff --git a/published/201902/20190206 4 cool new projects to try in COPR for February 2019.md b/published/201902/20190206 4 cool new projects to try in COPR for February 2019.md
new file mode 100644
index 0000000000..92523ddb46
--- /dev/null
+++ b/published/201902/20190206 4 cool new projects to try in COPR for February 2019.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10554-1.html)
+[#]: subject: (4 cool new projects to try in COPR for February 2019)
+[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/)
+[#]: author: (Dominik Turecek https://fedoramagazine.org)
+
+COPR 仓库中 4 个很酷的新软件(2019.2)
+======
+
+
+
+COPR 是个人软件仓库[集合][1],它们并不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准,或者它可能不符合其他 Fedora 标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持,也没有由该项目签名。但是,这是一种尝试新的或实验性的软件的巧妙方式。
+
+这是 COPR 中一组新的有趣项目。
+
+### CryFS
+
+[CryFS][2] 是一个加密文件系统。它被设计为与云存储一同使用,主要是 Dropbox,尽管它也可以与其他存储提供商一起使用。CryFS 不仅加密文件系统中的文件,还会加密元数据、文件大小和目录结构。
+
+#### 安装说明
+
+仓库目前为 Fedora 28 和 29 以及 EPEL 7 提供 CryFS。要安装 CryFS,请使用以下命令:
+
+```
+sudo dnf copr enable fcsm/cryfs
+sudo dnf install cryfs
+```
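+
+安装后的基本用法大致如下(示意:`~/Dropbox/encrypted` 和 `~/secret` 均为假设的目录名,首次运行会交互式地创建配置和密码):
+
+```
+# 将加密数据存放在 Dropbox 目录中,明文挂载到 ~/secret
+cryfs ~/Dropbox/encrypted ~/secret
+
+# 用完后卸载(CryFS 基于 FUSE)
+fusermount -u ~/secret
+```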
+
+### Cheat
+
+[Cheat][3] 是一个用于在命令行中查看各种备忘录的工具,可以用来提醒你那些只是偶尔使用的程序的用法。对于许多 Linux 程序,`cheat` 提供了来自手册页的精简信息,主要关注最常用的示例。除了内置的备忘录,`cheat` 还允许你编辑现有的备忘录或从头创建新的备忘录。
+
+![][4]
+
+#### 安装说明
+
+仓库目前为 Fedora 28、29 和 Rawhide 以及 EPEL 7 提供 `cheat`。要安装 `cheat`,请使用以下命令:
+
+```
+sudo dnf copr enable tkorbar/cheat
+sudo dnf install cheat
+```
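+
+安装后可以这样使用(示意,以 `tar` 为例):
+
+```
+# 查看 tar 的备忘录
+cheat tar
+
+# 列出所有可用的备忘录
+cheat -l
+```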
+
+### Setconf
+
+[setconf][5] 是一个简单的程序,可以作为 `sed` 的替代方案,用于修改配置文件。`setconf` 唯一能做的就是找到指定文件中的键并更改其值。`setconf` 只提供很少的选项来改变其行为,例如取消被修改行的注释。
+
+#### 安装说明
+
+仓库目前为 Fedora 27、28 和 29 提供 `setconf`。要安装 `setconf`,请使用以下命令:
+
+```
+sudo dnf copr enable jamacku/setconf
+sudo dnf install setconf
+```
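+
+它的基本用法如下(示意,文件名和键名均为假设):
+
+```
+# 将 myapp.conf 中 MAXSPEED 的值改为 42
+setconf myapp.conf MAXSPEED 42
+```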
+
+### Reddit 终端查看器
+
+[Reddit 终端查看器][6],或称为 `rtv`,提供了从终端浏览 Reddit 的界面。它提供了 Reddit 的基本功能,因此你可以登录到你的帐户,查看 subreddits、评论、点赞和发现新主题。但是,rtv 目前不支持 Reddit 标签。
+
+![][7]
+
+#### 安装说明
+
+该仓库目前为 Fedora 29 和 Rawhide 提供 Reddit Terminal Viewer。要安装 Reddit Terminal Viewer,请使用以下命令:
+
+```
+sudo dnf copr enable tc01/rtv
+sudo dnf install rtv
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-february-2019/
+
+作者:[Dominik Turecek][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org
+[b]: https://github.com/lujun9972
+[1]: https://copr.fedorainfracloud.org/
+[2]: https://www.cryfs.org/
+[3]: https://github.com/chrisallenlane/cheat
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/cheat.png
+[5]: https://setconf.roboticoverlords.org/
+[6]: https://github.com/michael-lazar/rtv
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/01/rtv.png
diff --git a/published/201902/20190207 10 Methods To Create A File In Linux.md b/published/201902/20190207 10 Methods To Create A File In Linux.md
new file mode 100644
index 0000000000..ef7d8e8228
--- /dev/null
+++ b/published/201902/20190207 10 Methods To Create A File In Linux.md
@@ -0,0 +1,315 @@
+[#]: collector: (lujun9972)
+[#]: translator: (dianbanjiu)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10549-1.html)
+[#]: subject: (10 Methods To Create A File In Linux)
+[#]: via: (https://www.2daygeek.com/linux-command-to-create-a-file/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+在 Linux 上创建文件的 10 个方法
+======
+
+我们都知道,在 Linux 上,包括设备在内的一切都是文件。Linux 管理员每天应该会多次执行文件创建操作(可能是 20 次、50 次,甚至更多,这取决于他们的环境)。如果你想[在 Linux 上创建一个特定大小的文件][1],可以查看这个链接。
+
+高效地创建文件是非常重要的能力。为什么我说高效?如果你了解一些高效完成当前工作的方式,你就可以事半功倍。这将会节省你很多时间,你可以把这些省下来的时间用到其他重要的事情上。
+
+我下面将会介绍多个在 Linux 上创建文件的方法。我建议你选择几个简单高效的来辅助你的工作。你不必安装下列的任何一个命令,因为它们已经作为 Linux 核心工具的一部分安装到你的系统上了。
+
+创建文件可以通过以下十种方式来完成。
+
+ * `>`:标准重定向符允许我们创建一个 0KB 的空文件。
+ * `touch`:如果文件不存在的话,`touch` 命令将会创建一个 0KB 的空文件。
+ * `echo`:显示作为参数传入的一行文本。
+ * `printf`:用于显示在终端给定的文本。
+ * `cat`:它串联并打印文件到标准输出。
+ * `vi`/`vim`:Vim 是一个向上兼容 Vi 的文本编辑器。它常用于编辑各种类型的纯文本。
+ * `nano`:是一个简小且用户友好的编辑器。它复制了 `pico` 的外观和优点,但它是自由软件。
+ * `head`:用于打印一个文件开头的一部分。
+ * `tail`:用于打印一个文件的最后一部分。
+ * `truncate`:用于缩小或者扩展文件的尺寸到指定大小。
+
+### 在 Linux 上使用重定向符(>)创建一个文件
+
+标准重定向符允许我们创建一个 0KB 的空文件。它通常用于重定向一个命令的输出到一个新文件中。在没有命令的情况下使用重定向符号时,它会创建一个文件。
+
+但是它不允许你在创建文件时向其中输入任何文本。然而它对于不是很勤劳的管理员是非常简单有用的。只需要输入重定向符后面跟着你想要的文件名。
+
+```
+$ > daygeek.txt
+```
+
+使用 `ls` 命令查看刚刚创建的文件。
+
+```
+$ ls -lh daygeek.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
+```
+
+### 在 Linux 上使用 touch 命令创建一个文件
+
+`touch` 命令常用于将每个文件的访问和修改时间更新为当前时间。
+
+如果指定的文件名不存在,将会创建一个新的文件。`touch` 不允许我们在创建文件的同时向其中输入一些文本。它默认创建一个 0KB 的空文件。
+
+```
+$ touch daygeek1.txt
+```
+
+使用 `ls` 命令查看刚刚创建的文件。
+
+```
+$ ls -lh daygeek1.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
+```
+
+### 在 Linux 上使用 echo 命令创建一个文件
+
+`echo` 内置于大多数的操作系统中。它常用于脚本、批处理文件,以及作为插入文本的单个命令的一部分。
+
+它允许你在创建一个文件时就向其中输入一些文本。当然也允许你在之后向其中输入一些文本。
+
+```
+$ echo "2daygeek.com is a best Linux blog to learn Linux" > daygeek2.txt
+```
+
+使用 `ls` 命令查看刚刚创建的文件。
+
+```
+$ ls -lh daygeek2.txt
+-rw-rw-r-- 1 daygeek daygeek 49 Feb 4 02:04 daygeek2.txt
+```
+
+可以使用 `cat` 命令查看文件的内容。
+
+```
+$ cat daygeek2.txt
+2daygeek.com is a best Linux blog to learn Linux
+```
+
+你可以使用两个重定向符 (`>>`) 添加其他内容到同一个文件。
+
+```
+$ echo "It's FIVE years old blog" >> daygeek2.txt
+```
+
+你可以使用 `cat` 命令查看添加的内容。
+
+```
+$ cat daygeek2.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### 在 Linux 上使用 printf 命令创建一个新的文件
+
+`printf` 命令也可以以类似 `echo` 的方式执行。
+
+`printf` 命令常用来显示在终端窗口给出的字符串。`printf` 可以有格式说明符、转义序列或普通字符。
+
+```
+$ printf "2daygeek.com is a best Linux blog to learn Linux\n" > daygeek3.txt
+```
+
+使用 `ls` 命令查看刚刚创建的文件。
+
+```
+$ ls -lh daygeek3.txt
+-rw-rw-r-- 1 daygeek daygeek 48 Feb 4 02:12 daygeek3.txt
+```
+
+使用 `cat` 命令查看文件的内容。
+
+```
+$ cat daygeek3.txt
+2daygeek.com is a best Linux blog to learn Linux
+```
+
+你可以使用两个重定向符 (`>>`) 添加其他的内容到同一个文件中去。
+
+```
+$ printf "It's FIVE years old blog\n" >> daygeek3.txt
+```
+
+你可以使用 `cat` 命令查看这个文件中添加的内容。
+
+```
+$ cat daygeek3.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### 在 Linux 中使用 cat 创建一个文件
+
+`cat` 表示串联(concatenate)。它在 Linux 中经常用于读取一个文件中的数据。
+
+`cat` 是在类 Unix 系统中最常使用的命令之一。它提供了三个与文本文件相关的功能:显示一个文件的内容、组合多个文件的内容到一个输出以及创建一个新的文件。(LCTT 译注:如果 `cat` 命令后如果不带任何文件的话,下面的命令在回车后也不会立刻结束,回车后的操作可以按 `Ctrl-C` 或 `Ctrl-D` 来结束。)
+
+```
+$ cat > daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+使用 `ls` 命令查看创建的文件。
+
+```
+$ ls -lh daygeek4.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:18 daygeek4.txt
+```
+
+使用 `cat` 命令查看文件的内容。
+
+```
+$ cat daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+如果你想向同一个文件中添加其他内容,使用两个连接的重定向符(`>>`)。
+
+```
+$ cat >> daygeek4.txt
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+你可以使用 `cat` 命令查看添加的内容。
+
+```
+$ cat daygeek4.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+### 在 Linux 上使用 vi/vim 命令创建一个文件
+
+`vim` 是一个向上兼容 `vi` 的文本编辑器。它通常用来编辑所有种类的纯文本。在编辑程序时特别有用。
+
+`vim` 中有很多功能可以用于编辑单个文件。
+
+```
+$ vi daygeek5.txt
+
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+使用 `ls` 查看刚才创建的文件。
+
+```
+$ ls -lh daygeek5.txt
+-rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
+```
+
+使用 `cat` 命令查看文件的内容。
+
+```
+$ cat daygeek5.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+```
+
+### 在 Linux 上使用 nano 命令创建一个文件
+
+`nano` 是一个编辑器,它是一个自由版本的 `pico` 克隆。`nano` 是一个小且用户友好的编辑器。它复制了 `pico` 的外观及优点,并且是一个自由软件,它添加了 `pico` 缺乏的一系列特性,像是打开多个文件、逐行滚动、撤销/重做、语法高亮、行号等等。
+
+```
+$ nano daygeek6.txt
+
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+使用 `ls` 命令查看创建的文件。
+
+```
+$ ls -lh daygeek6.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
+```
+
+使用 `cat` 命令来查看一个文件的内容。
+
+```
+$ cat daygeek6.txt
+2daygeek.com is a best Linux blog to learn Linux
+It's FIVE years old blog
+This website is maintained by Magesh M, It's licensed under CC BY-NC 4.0.
+```
+
+### 在 Linux 上使用 head 命令创建一个文件
+
+`head` 命令通常用于输出一个文件开头的一部分。它默认会打印一个文件的开头 10 行到标准输出。如果有多个文件,则每个文件前都会有一个标题,用来表示文件名。
+
+```
+$ head -c 0K /dev/zero > daygeek7.txt
+```
+
+使用 `ls` 命令查看创建的文件。
+
+```
+$ ls -lh daygeek7.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:30 daygeek7.txt
+```
+
+### 在 Linux 上使用 tail 创建一个文件
+
+`tail` 命令通常用来输出一个文件最后的一部分。它默认会打印每个文件的最后 10 行到标准输出。如果有多个文件,则每个文件前都会有一个标题,用来表示文件名。
+
+```
+$ tail -c 0K /dev/zero > daygeek8.txt
+```
+
+使用 `ls` 命令查看创建的文件。
+
+```
+$ ls -lh daygeek8.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:31 daygeek8.txt
+```
+
+### 在 Linux 上使用 truncate 命令创建一个文件
+
+`truncate` 命令通常用作将一个文件的尺寸缩小或者扩展为某个指定的尺寸。
+
+```
+$ truncate -s 0K daygeek9.txt
+```
+
+使用 `ls` 命令检查创建的文件。
+
+```
+$ ls -lh daygeek9.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:37 daygeek9.txt
+```
+
+在这篇文章中,我使用这十个命令分别创建了下面的这十个文件。
+
+```
+$ ls -lh daygeek*
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:02 daygeek1.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:07 daygeek2.txt
+-rw-rw-r-- 1 daygeek daygeek 74 Feb 4 02:15 daygeek3.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:20 daygeek4.txt
+-rw-rw-r-- 1 daygeek daygeek 75 Feb 4 02:23 daygeek5.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:26 daygeek6.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek7.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:32 daygeek8.txt
+-rw-rw-r-- 1 daygeek daygeek 148 Feb 4 02:38 daygeek9.txt
+-rw-rw-r-- 1 daygeek daygeek 0 Feb 4 02:00 daygeek.txt
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-command-to-create-a-file/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[dianbanjiu](https://github.com/dianbanjiu)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
diff --git a/published/201902/20190207 How to determine how much memory is installed, used on Linux systems.md b/published/201902/20190207 How to determine how much memory is installed, used on Linux systems.md
new file mode 100644
index 0000000000..96b474f7e4
--- /dev/null
+++ b/published/201902/20190207 How to determine how much memory is installed, used on Linux systems.md
@@ -0,0 +1,228 @@
+[#]: collector: (lujun9972)
+[#]: translator: (leommxj)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10571-1.html)
+[#]: subject: (How to determine how much memory is installed, used on Linux systems)
+[#]: via: (https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何在 Linux 系统中判断安装、使用了多少内存
+======
+
+> 有几个命令可以报告在 Linux 系统上安装和使用了多少内存。根据你使用的命令,你可能会被细节淹没,也可能获得快速简单的答案。
+
+
+
+在 Linux 系统中有很多种方法获取有关安装了多少内存的信息及查看多少内存正在被使用。有些命令提供了大量的细节,而其他命令提供了简洁但不一定易于理解的答案。在这篇文章中,我们将介绍一些查看内存及其使用状态的有用的工具。
+
+在我们开始之前,让我们先来回顾一些基础知识。物理内存和虚拟内存并不是一回事。后者包括配置为交换空间的磁盘空间。交换空间可能包括为此目的特意留出来的分区,以及在创建新的交换分区不可行时创建的用来增加可用交换空间的文件。有些 Linux 命令会提供关于两者的信息。
+
+当物理内存占满时,交换空间通过提供可以用来存放内存中非活动页的磁盘空间来扩展内存。
+
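+顺便一提,如果想看看当前系统配置了哪些交换空间,可以用下面的命令(补充示例,非原文内容):
+
+```
+$ swapon --show        # 较新版本的 util-linux 提供
+$ cat /proc/swaps      # 各发行版均可用
+```
+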
+`/proc/kcore` 是在内存管理中起作用的一个文件。这个文件看上去是个普通文件(虽然非常大),但它并不占用任何空间。它就像其他 `/proc` 下的文件一样是个虚拟文件。
+
+```
+$ ls -l /proc/kcore
+-r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
+```
+
+有趣的是,下面查询的两个系统并没有安装相同大小的内存,但 `/proc/kcore` 的大小却是相同的。第一个系统安装了 4 GB 的内存,而第二个系统安装了 6 GB。
+
+```
+system1$ ls -l /proc/kcore
+-r--------. 1 root root 140737477881856 Jan 28 12:59 /proc/kcore
+system2$ ls -l /proc/kcore
+-r-------- 1 root root 140737477881856 Feb 5 13:00 /proc/kcore
+```
+
+一种不靠谱的解释说这个文件代表可用虚拟内存的大小(没准要加 4 KB),如果这样,这些系统的虚拟内存可就是 128TB 了!这个数字似乎代表了 64 位系统可以寻址多少内存,而不是当前系统有多少可用内存。在命令行中计算 128 TB 和这个文件大小加上 4 KB 很容易。
+
+```
+$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128
+140737488355328
+$ expr 1024 \* 1024 \* 1024 \* 1024 \* 128 + 4096
+140737488359424
+```
+
+另一个用来检查内存的更人性化的命令是 `free`。它会给出一个易于理解的内存报告。
+
+```
+$ free
+ total used free shared buff/cache available
+Mem: 6102476 812244 4090752 13112 1199480 4984140
+Swap: 2097148 0 2097148
+```
+
+使用 `-g` 选项,`free` 会以 GB 为单位返回结果。
+
+```
+$ free -g
+ total used free shared buff/cache available
+Mem: 5 0 3 0 1 4
+Swap: 1 0 1
+```
+
+使用 `-t` 选项,`free` 会显示与无附加选项时相同的值(不要把 `-t` 选项理解成 TB),并额外在输出的底部添加一行总计数据。
+
+```
+$ free -t
+ total used free shared buff/cache available
+Mem: 6102476 812408 4090612 13112 1199456 4983984
+Swap: 2097148 0 2097148
+Total: 8199624 812408 6187760
+```
+
+当然,你也可以选择同时使用两个选项。
+
+```
+$ free -tg
+ total used free shared buff/cache available
+Mem: 5 0 3 0 1 4
+Swap: 1 0 1
+Total: 7 0 5
+```
+
+如果你尝试用这个报告来解释“这个系统安装了多少内存?”,你可能会感到失望。上面的报告就是在前文说的装有 6 GB 内存的系统上运行的结果。这并不是说这个结果是错的,这就是系统对其可使用的内存的看法。
+
+`free` 命令也提供了每隔 X 秒刷新显示的选项(下方示例中 X 为 10)。
+
+```
+$ free -s 10
+ total used free shared buff/cache available
+Mem: 6102476 812280 4090704 13112 1199492 4984108
+Swap: 2097148 0 2097148
+
+ total used free shared buff/cache available
+Mem: 6102476 812260 4090712 13112 1199504 4984120
+Swap: 2097148 0 2097148
+```
+
+使用 `-l` 选项,`free` 命令会分别显示低端内存(Low)和高端内存(High)的使用信息。
+
+```
+$ free -l
+ total used free shared buff/cache available
+Mem: 6102476 812376 4090588 13112 1199512 4984000
+Low: 6102476 2011888 4090588
+High: 0 0 0
+Swap: 2097148 0 2097148
+```
+
+查看内存的另一个选择是 `/proc/meminfo` 文件。像 `/proc/kcore` 一样,这也是一个虚拟文件,它可以提供关于安装或使用了多少内存以及可用内存的报告。显然,空闲内存和可用内存并不是同一回事。`MemFree` 看起来代表未使用的 RAM。`MemAvailable` 则是对于启动新程序时可使用的内存的一个估计。
+
+```
+$ head -3 /proc/meminfo
+MemTotal: 6102476 kB
+MemFree: 4090596 kB
+MemAvailable: 4984040 kB
+```
+
+如果只想查看内存总计,可以使用下面的命令之一:
+
+```
+$ awk '/MemTotal/ {print $2}' /proc/meminfo
+6102476
+$ grep MemTotal /proc/meminfo
+MemTotal: 6102476 kB
+```
+
+`DirectMap` 将内存信息分为几类。
+
+```
+$ grep DirectMap /proc/meminfo
+DirectMap4k: 213568 kB
+DirectMap2M: 6076416 kB
+```
+
+`DirectMap4k` 代表被映射成标准 4 k 页的内存大小,`DirectMap2M` 则显示了被映射为 2 MB 的页的内存大小。
+
+`getconf` 命令将会提供比我们大多数人想要看到的更多的信息。
+
+```
+$ getconf -a | more
+LINK_MAX 65000
+_POSIX_LINK_MAX 65000
+MAX_CANON 255
+_POSIX_MAX_CANON 255
+MAX_INPUT 255
+_POSIX_MAX_INPUT 255
+NAME_MAX 255
+_POSIX_NAME_MAX 255
+PATH_MAX 4096
+_POSIX_PATH_MAX 4096
+PIPE_BUF 4096
+_POSIX_PIPE_BUF 4096
+SOCK_MAXBUF
+_POSIX_ASYNC_IO
+_POSIX_CHOWN_RESTRICTED 1
+_POSIX_NO_TRUNC 1
+_POSIX_PRIO_IO
+_POSIX_SYNC_IO
+_POSIX_VDISABLE 0
+ARG_MAX 2097152
+ATEXIT_MAX 2147483647
+CHAR_BIT 8
+CHAR_MAX 127
+--More--
+```
+
+使用类似下面的命令来将其输出精简为指定的内容,你会得到跟前文提到的其他命令相同的结果。
+
+```
+$ getconf -a | grep PAGES | awk 'BEGIN {total = 1} {if (NR == 1 || NR == 3) total *=$NF} END {print total / 1024" kB"}'
+6102476 kB
+```
+
+上面的命令通过将下方输出的第一行和最后一行的值相乘来计算内存。
+
+```
+PAGESIZE 4096 <==
+_AVPHYS_PAGES 1022511
+_PHYS_PAGES 1525619 <==
+```
+
+自己动手计算一下,我们就知道这个值是怎么来的了。
+
+```
+$ expr 4096 \* 1525619 / 1024
+6102476
+```
+
+显然值得为以上的指令之一设置个 `alias`。
+
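+例如,可以在 `~/.bashrc` 中加入类似这样的别名(仅为示意):
+
+```
+alias memtotal='grep MemTotal /proc/meminfo'
+```
+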
+另一个具有非常易于理解的输出的命令是 `top` 。在 `top` 输出的前五行,你可以看到一些数字显示多少内存正被使用。
+
+```
+$ top
+top - 15:36:38 up 8 days, 2:37, 2 users, load average: 0.00, 0.00, 0.00
+Tasks: 266 total, 1 running, 265 sleeping, 0 stopped, 0 zombie
+%Cpu(s): 0.2 us, 0.4 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+MiB Mem : 3244.8 total, 377.9 free, 1826.2 used, 1040.7 buff/cache
+MiB Swap: 3536.0 total, 3535.7 free, 0.3 used. 1126.1 avail Mem
+```
+
+最后一个命令将会以一个非常简洁的方式回答“系统安装了多少内存?”:
+
+```
+$ sudo dmidecode -t 17 | grep "Size.*MB" | awk '{s+=$2} END {print s / 1024 "GB"}'
+6GB
+```
+
+取决于你想要获取多少细节,Linux 系统提供了许多用来查看系统安装内存以及使用/空闲内存的选择。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3336174/linux/how-much-memory-is-installed-and-being-used-on-your-linux-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[leommxj](https://github.com/leommxj)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.facebook.com/NetworkWorld/
+[2]: https://www.linkedin.com/company/network-world
diff --git a/published/201902/20190208 How To Install And Use PuTTY On Linux.md b/published/201902/20190208 How To Install And Use PuTTY On Linux.md
new file mode 100644
index 0000000000..a4615d4b78
--- /dev/null
+++ b/published/201902/20190208 How To Install And Use PuTTY On Linux.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: (zhs852)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10552-1.html)
+[#]: subject: (How To Install And Use PuTTY On Linux)
+[#]: via: (https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+在 Linux 中安装并使用 PuTTY
+======
+
+
+
+PuTTY 是一个自由开源且支持包括 SSH、Telnet 和 Rlogin 在内的多种协议的 GUI 客户端。一般来说,Windows 管理员们会把 PuTTY 当成 SSH 或 Telnet 客户端来在本地 Windows 系统和远程 Linux 服务器之间建立连接。不过,PuTTY 可不是 Windows 的独占软件。它在 Linux 用户之中也是很流行的。本篇文章将会告诉你如何在 Linux 中安装并使用 PuTTY。
+
+### 在 Linux 中安装 PuTTY
+
+PuTTY 已经包含在了许多 Linux 发行版的官方源中。举个例子,在 Arch Linux 中,我们可以通过这个命令安装 PuTTY:
+
+```shell
+$ sudo pacman -S putty
+```
+
+在 Debian、Ubuntu 或是 Linux Mint 中安装它:
+
+```shell
+$ sudo apt install putty
+```
+
+### 使用 PuTTY 访问远程 Linux 服务器
+
+在安装完 PuTTY 之后,你可以在菜单或启动器中打开它。如果你想用终端打开它,也是可以的:
+
+```shell
+$ putty
+```
+
+PuTTY 的默认界面长这个样子:
+
+
+
+如你所见,许多选项都配上了说明。在左侧面板中,你可以配置许多项目,比如:
+
+ 1. 修改 PuTTY 登录会话选项;
+ 2. 修改终端模拟器控制选项,控制各个按键的功能;
+ 3. 控制终端响铃的声音;
+ 4. 启用/禁用终端的高级功能;
+ 5. 设定 PuTTY 窗口大小;
+ 6. 控制命令回滚长度(默认是 2000 行);
+ 7. 修改 PuTTY 窗口或光标的外观;
+ 8. 调整窗口边缘;
+ 9. 调整字体;
+ 10. 保存登录信息;
+ 11. 设置代理;
+ 12. 修改各协议的控制选项;
+ 13. 以及更多。
+
+所有选项基本都有注释,相信你理解起来不难。
+
+### 使用 PuTTY 访问远程 Linux 服务器
+
+请在左侧面板点击 “Session” 选项卡,输入远程主机名(或 IP 地址)。然后,请选择连接类型(比如 Telnet、Rlogin 以及 SSH 等)。根据你选择的连接类型,PuTTY 会自动选择对应连接类型的默认端口号(比如 SSH 是 22、Telnet 是 23),如果你修改了默认端口号,别忘了手动把它输入到 “Port” 里。在这里,我用 SSH 连接到远程主机。在输入所有信息后,请点击 “Open”。
+
+
+
+如果这是你首次连接到这个远程主机,PuTTY 会显示一个安全警告,问你是否信任你连接到的远程主机。点击 “Accept” 即可将远程主机的密钥加入 PuTTY 的缓存当中:
+
+![PuTTY 安全警告][2]
+
+接下来,输入远程主机的用户名和密码。然后你就成功地连接上远程主机啦。
+
+
+
+#### 使用密钥验证访问远程主机
+
+一些 Linux 管理员可能在服务器上配置了密钥认证。举个例子,在用 PuTTY 访问 AWS 实例的时候,你需要指定密钥文件的位置。PuTTY 可以使用它自己的格式(`.ppk` 文件)来进行公钥验证。
+
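+如果你手头只有 OpenSSH 格式的私钥(例如 `~/.ssh/id_rsa`),可以先用 `puttygen` 把它转换成 `.ppk` 格式(在 Debian/Ubuntu 上该工具位于 `putty-tools` 软件包中;示例文件名仅供参考):
+
+```shell
+puttygen ~/.ssh/id_rsa -o mykey.ppk
+```
+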
+首先输入主机名或 IP。之后,在 “Category” 选项卡中,展开 “Connection”,再展开 “SSH”,然后选择 “Auth”,之后便可选择 `.ppk` 密钥文件了。
+
+![][3]
+
+点击 “Accept” 来关闭安全提示。然后,输入密钥的保护口令(如果该密钥设置了口令)来建立连接。
+
+#### 保存 PuTTY 会话
+
+有些时候,你可能需要多次连接到同一个远程主机,你可以保存这些会话,之后无需重新输入信息即可访问它们。
+
+请输入主机名(或 IP 地址),并提供一个会话名称,然后点击 “Save”。如果你有密钥文件,请确保你在点击 “Save” 按钮之前指定它们。
+
+![][4]
+
+现在,你可以通过选择 “Saved sessions”,然后点击 “Load”,再点击 “Open” 来启动连接。
+
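+顺带一提,在 Linux 上也可以直接从命令行启动 PuTTY,并传入连接参数或已保存的会话(下面的用户名、IP 和会话名均为示例):
+
+```shell
+# 直接以 SSH 方式连接远程主机
+putty -ssh sk@192.168.225.22 -P 22
+
+# 或者加载一个已保存的会话
+putty -load "会话名称"
+```
+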
+#### 使用 PuTTY 安全复制客户端(pscp)来将文件传输到远程主机中
+
+通常来说,Linux 用户和管理员会使用 `scp` 这个命令行工具来从本地往远程主机传输文件。不过 PuTTY 给我们提供了一个叫做 PuTTY 安全复制客户端(PuTTY Secure Copy Client,简写为 `pscp`)的工具来干这个事情。如果你的本地主机运行的是 Windows,你可能需要这个工具。PSCP 在 Windows 和 Linux 下都是可用的。
+
+使用这个命令来将 `file.txt` 从本地的 Arch Linux 拷贝到远程的 Ubuntu 上:
+
+```shell
+pscp -i test.ppk file.txt sk@192.168.225.22:/home/sk/
+```
+
+让我们来分析这个命令:
+
+ * `-i test.ppk`:访问远程主机所用的密钥文件;
+ * `file.txt`:要拷贝到远程主机的文件;
+ * `sk@192.168.225.22`:远程主机的用户名与 IP;
+ * `/home/sk/`:目标路径。
+
+要拷贝一个目录,请使用 `-r`(递归Recursive)参数:
+
+```shell
+ pscp -i test.ppk -r dir/ sk@192.168.225.22:/home/sk/
+```
+
+要从 Windows 主机使用 `pscp` 传输文件,请执行以下命令:
+
+```shell
+pscp -i test.ppk c:\documents\file.txt.txt sk@192.168.225.22:/home/sk/
+```
+
+你现在应该了解了 PuTTY 是什么,知道了如何安装它和如何使用它。同时,你也学习到了如何使用 `pscp` 程序在本地和远程主机上传输文件。
+
+以上便是所有了,希望这篇文章对你有帮助。
+
+干杯!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-install-and-use-putty-on-linux/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[zhs852](https://github.com/zhs852)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-2.png
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-4.png
+[4]: http://www.ostechnix.com/wp-content/uploads/2019/02/putty-5.png
diff --git a/published/201902/20190214 Drinking coffee with AWK.md b/published/201902/20190214 Drinking coffee with AWK.md
new file mode 100644
index 0000000000..eb83412bbd
--- /dev/null
+++ b/published/201902/20190214 Drinking coffee with AWK.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10555-1.html)
+[#]: subject: (Drinking coffee with AWK)
+[#]: via: (https://opensource.com/article/19/2/drinking-coffee-awk)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+用 AWK 喝咖啡
+======
+> 用一个简单的 AWK 程序跟踪你的同事喝咖啡的欠款。
+
+
+
+以下基于一个真实的故事,虽然一些名字和细节有所改变。
+
+> 很久以前,在一个遥远的地方,有一间~~庙~~(划掉)办公室。由于各种原因,这个办公室没有购买速溶咖啡。所以那个办公室的一些人聚在一起决定建立“咖啡角”。
+>
+> 咖啡角的一名成员会购买一些速溶咖啡,而其他成员会付给他钱。有人喝咖啡比其他人多,所以增加了“半成员”的级别:半成员每周允许喝的咖啡限量,并可以支付其它成员支付的一半。
+
+管理这事非常操心。而我刚读过《Unix 编程环境》这本书,想练习一下我的 [AWK][1] 编程技能,所以我自告奋勇创建了一个系统。
+
+第 1 步:我用一个数据库来记录成员及其应支付给咖啡角的欠款。我是以 AWK 便于处理的格式记录的,其中字段用冒号分隔:
+
+```
+member:john:1:22
+member:jane:0.5:33
+member:pratyush:0.5:17
+member:jing:1:27
+```
+
+上面的第一个字段标识了这是哪一种行(`member`)。第二个字段是成员的名字(即他们的电子邮件用户名,但没有 @ )。下一个字段是其成员级别(成员 = 1,或半会员 = 0.5)。最后一个字段是他们欠咖啡角的钱。正数表示他们欠咖啡角钱,负数表示咖啡角欠他们。
+
+第 2 步:我记录了咖啡角的收入和支出:
+
+```
+payment:jane:33
+payment:pratyush:17
+bought:john:60
+payback:john:50
+```
+
+Jane 付款 $33,Pratyush 付款 $17,John 买了价值 $60 的咖啡,而咖啡角还款给 John $50。
+
+第 3 步:我准备写一些代码,用来处理成员和付款,并生成记录了新欠账的更新的成员文件。
+
+```
+#!/usr/bin/env --split-string=awk -F: -f
+```
+
+释伴行(`#!`)需要做一些调整,我使用 `env` 命令来允许从释伴行传递多个参数:具体来说,AWK 的 `-F` 命令行参数会告诉它字段分隔符是什么。
+
+AWK 程序就是一个规则序列(也可以包含函数定义,但是对于这个咖啡角应用来说不需要)
+
+第一条规则读取该成员文件。当我运行该命令时,我总是首先给它的是成员文件,然后是付款文件。它使用 AWK 关联数组来在 `members` 数组中记录成员级别,以及在 `debt` 数组中记录当前欠账。
+
+```
+$1 == "member" {
+ members[$2]=$3
+ debt[$2]=$4
+ total_members += $3
+}
+```
+
+第二条规则在记录付款(`payment`)时减少欠账。
+
+```
+$1 == "payment" {
+ debt[$2] -= $3
+}
+```
+
+还款(`payback`)则相反:它增加欠账。这可以优雅地支持意外地给了某人太多钱的情况。
+
+```
+$1 == "payback" {
+ debt[$2] += $3
+}
+```
+
+最复杂的部分出现在有人购买(`bought`)速溶咖啡供咖啡角使用时。它被视为付款(`payment`),并且该人的债务减少了适当的金额。接下来,它计算每个会员的费用。它根据成员的级别对所有成员进行迭代并增加欠款
+
+```
+$1 == "bought" {
+ debt[$2] -= $3
+ per_member = $3/total_members
+ for (x in members) {
+ debt[x] += per_member * members[x]
+ }
+}
+```
+
+`END` 模式很特殊:当 AWK 没有更多的数据要处理时,它会一次性执行。此时,它会使用更新的欠款数生成新的成员文件。
+
+```
+END {
+ for (x in members) {
+ printf "%s:%s:%s\n", x, members[x], debt[x]
+ }
+}
+```
+
+再配合一个遍历成员文件,并向人们发送提醒电子邮件以支付他们的会费(积极清账)的脚本,这个系统管理咖啡角相当一段时间。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/drinking-coffee-awk
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/AWK
diff --git a/published/201902/20190216 How To Grant And Remove Sudo Privileges To Users On Ubuntu.md b/published/201902/20190216 How To Grant And Remove Sudo Privileges To Users On Ubuntu.md
new file mode 100644
index 0000000000..a53e354779
--- /dev/null
+++ b/published/201902/20190216 How To Grant And Remove Sudo Privileges To Users On Ubuntu.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10576-1.html)
+[#]: subject: (How To Grant And Remove Sudo Privileges To Users On Ubuntu)
+[#]: via: (https://www.ostechnix.com/how-to-grant-and-remove-sudo-privileges-to-users-on-ubuntu/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+如何在 Ubuntu 上为用户授予和移除 sudo 权限
+======
+
+
+
+如你所知,用户可以在 Ubuntu 系统上使用 sudo 权限执行任何管理任务。在 Linux 机器上创建新用户时,他们无法执行任何管理任务,直到你使其成为 `sudo` 组的成员。在这个简短的教程中,我们将介绍如何将普通用户添加到 `sudo` 组,以及如何移除已授予的权限、使其变回普通用户。
+
+### 在 Linux 上向普通用户授予 sudo 权限
+
+通常,我们使用 `adduser` 命令创建新用户,如下所示。
+
+```
+$ sudo adduser ostechnix
+```
+
+如果你希望新创建的用户使用 `sudo` 执行管理任务,只需使用以下命令将它添加到 `sudo` 组:
+
+```
+$ sudo usermod -a -G sudo ostechnix
+```
+
+上面的命令将使名为 `ostechnix` 的用户成为 `sudo` 组的成员。
+
+你也可以使用此命令将用户添加到 `sudo` 组。
+
+```
+$ sudo adduser ostechnix sudo
+```
+
+现在,注销并以新用户身份登录,以使此更改生效。此时用户已成为管理用户。
+
+要验证它,只需在任何命令中使用 `sudo` 作为前缀。
+
+```
+$ sudo mkdir /test
+[sudo] password for ostechnix:
+```
+
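+另外,也可以用 `groups` 命令快速确认该用户确实已经加入了 `sudo` 组:
+
+```
+$ groups ostechnix
+```
+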
+### 移除用户的 sudo 权限
+
+有时,你可能希望移除特定用户的 `sudo` 权限,而不用在 Linux 中删除它。要将任何用户设为普通用户,只需将其从 `sudo` 组中删除即可。
+
+比如说如果要从 `sudo` 组中删除名为 `ostechnix` 的用户,只需运行:
+
+```
+$ sudo deluser ostechnix sudo
+```
+
+示例输出:
+
+```
+Removing user `ostechnix' from group `sudo' ...
+Done.
+```
+
+此命令仅从 `sudo` 组中删除用户 `ostechnix`,但不会永久地从系统中删除用户。现在,它成为了普通用户,无法像 `sudo` 用户那样执行任何管理任务。
+
+此外,你可以使用以下命令撤消用户的 `sudo` 访问权限:
+
+```
+$ sudo gpasswd -d ostechnix sudo
+```
+
+从 `sudo` 组中删除用户时请小心。不要从 `sudo` 组中删除真正的管理员。
+
+使用命令验证用户 `ostechnix` 是否已从 `sudo` 组中删除:
+
+```
+$ sudo -l -U ostechnix
+User ostechnix is not allowed to run sudo on ubuntuserver.
+```
+
+是的,用户 `ostechnix` 已从 `sudo` 组中删除,他无法执行任何管理任务。
+
+从 `sudo` 组中删除用户时请小心。如果你的系统上只有一个 `sudo` 用户,并且你将他从 `sudo` 组中删除了,那么就无法执行任何管理操作,例如在系统上安装、删除和更新程序。所以,请小心。在我们的下一篇教程中,我们将解释如何恢复用户的 `sudo` 权限。
+
+就是这些了。希望这篇文章有用。还有更多好东西。敬请期待!
+
+干杯!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-grant-and-remove-sudo-privileges-to-users-on-ubuntu/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
diff --git a/published/201902/20190219 3 tools for viewing files at the command line.md b/published/201902/20190219 3 tools for viewing files at the command line.md
new file mode 100644
index 0000000000..eb975657c2
--- /dev/null
+++ b/published/201902/20190219 3 tools for viewing files at the command line.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10573-1.html)
+[#]: subject: (3 tools for viewing files at the command line)
+[#]: via: (https://opensource.com/article/19/2/view-files-command-line)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+在命令行查看文件的 3 个工具
+======
+
+> 看一下 `less`、Antiword 和 `odt2txt` 这三个实用程序,它们都可以在终端中查看文件。
+
+
+
+我常说,你不需要使用命令行也可以高效使用 Linux —— 我知道许多 Linux 用户从不打开终端窗口,并且也用的挺好。然而,即使我不认为自己是一名技术人员,我也会在命令行上花费大约 20% 的计算时间,包括操作文件、处理文本和使用实用程序。
+
+我经常在终端窗口中做的一件事是查看文件,无论是文本还是需要用到文字处理器的文件。有时使用命令行实用程序比启动文本编辑器或文字处理器更容易。
+
+下面是我在命令行中用来查看文件的三个实用程序。
+
+### less
+
+[less][1] 的美妙之处在于它易于使用,它将你正在查看的文件分解为块(或页面),这使得它们更易于阅读。你可以使用它在命令行查看文本文件,例如 README、HTML 文件、LaTeX 文件或其他任何纯文本文件。我在[上一篇文章][2]中介绍了 `less`。
+
+要使用 `less`,只需输入:
+
+```
+less file_name
+```
+
+
+
+通过按键盘上的空格键或 `PgDn` 键向下滚动文件,按 `PgUp` 键向上移动文件。要停止查看文件,按键盘上的 `Q` 键。
+
+### Antiword
+
+[Antiword][3] 是一个很好的实用小程序,你可以使用它将 Word 文档转换为纯文本。只要你想,还可以将它们转换为 [PostScript][4] 或 [PDF][5]。在本文中,我们只讨论文本转换。
+
+Antiword 可以读取和转换 Word 2.0 到 2003 版本创建的文件(LCTT 译注:此处疑为 Word 2000,因为 Word 2.0 for DOS 发布于 1984 年,而 WinWord 2.0 发布于 1991 年,都似乎太老了)。它不能读取 DOCX 文件 —— 如果你尝试这样做,Antiword 会显示一条错误消息,表明你尝试读取的是一个 ZIP 文件。这在技术上说是正确的,但仍然令人沮丧。
+
+要使用 Antiword 查看 Word 文档,输入以下命令:
+
+```
+antiword file_name.doc
+```
+
+Antiword 将文档转换为文本并显示在终端窗口中。不幸的是,它不能在终端中将文档分解成页面。不过,你可以将 Antiword 的输出重定向到 `less` 或 [more][6] 之类的实用程序,以便对其进行分页。通过输入以下命令来执行此操作:
+
+```
+antiword file_name.doc | less
+```
+
+如果你是命令行的新手,那么我告诉你,`|` 称为管道,上面的重定向正是由它完成的。
+
+
+
+### odt2txt
+
+作为一个优秀的开源公民,你会希望尽可能多地使用开放格式。对于你的文字处理需求,你可能需要处理 [ODT][7] 文件(由诸如 LibreOffice Writer 和 AbiWord 等文字处理器使用)而不是 Word 文件。即使没有,也可能会遇到 ODT 文件。而且,即使你的计算机上没有安装 Writer 或 AbiWord,也很容易在命令行中查看它们。
+
+怎样做呢?用一个名叫 [odt2txt][8] 的实用小程序。正如你猜到的那样,`odt2txt` 将 ODT 文件转换为纯文本。要使用它,运行以下命令:
+
+```
+odt2txt file_name.odt
+```
+
+与 Antiword 一样,`odt2txt` 将文档转换为文本并在终端窗口中显示。和 Antiword 一样,它不会对文档进行分页。但是,你也可以使用以下命令将 `odt2txt` 的输出管道传输到 `less` 或 `more` 这样的实用程序中:
+
+```
+odt2txt file_name.odt | more
+```
+
+
+
+你有一个最喜欢的在命令行中查看文件的实用程序吗?欢迎留下评论与社区分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/view-files-command-line
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: https://www.gnu.org/software/less/
+[2]: https://opensource.com/article/18/4/using-less-view-text-files-command-line
+[3]: http://www.winfield.demon.nl/
+[4]: http://en.wikipedia.org/wiki/PostScript
+[5]: http://en.wikipedia.org/wiki/Portable_Document_Format
+[6]: https://opensource.com/article/19/1/more-text-files-linux
+[7]: http://en.wikipedia.org/wiki/OpenDocument
+[8]: https://github.com/dstosberg/odt2txt
diff --git a/published/201902/20190219 How to List Installed Packages on Ubuntu and Debian -Quick Tip.md b/published/201902/20190219 How to List Installed Packages on Ubuntu and Debian -Quick Tip.md
new file mode 100644
index 0000000000..2eec9eb896
--- /dev/null
+++ b/published/201902/20190219 How to List Installed Packages on Ubuntu and Debian -Quick Tip.md
@@ -0,0 +1,200 @@
+[#]: collector: (lujun9972)
+[#]: translator: (guevaraya)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10562-1.html)
+[#]: subject: (How to List Installed Packages on Ubuntu and Debian [Quick Tip])
+[#]: via: (https://itsfoss.com/list-installed-packages-ubuntu)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何列出 Ubuntu 和 Debian 上已安装的软件包
+======
+
+当你安装了 [Ubuntu 并开始好好使用][1]之后,在将来的某个时候,你肯定会忘记曾经安装过哪些软件包。
+
+这是完全正常的。没有人要求你把系统里所有已安装的软件包都记住。但问题是,如何才能知道已经安装了哪些软件包?如何查看安装过的软件包呢?
+
+### 列出 Ubuntu 和 Debian 上已安装的软件包
+
+![列出已安装的软件包][2]
+
+如果你经常用 [apt 命令][3],你可能会期望它有一个可以列出已安装软件包的选项。这个想法不算全错。
+
+[apt-get 命令][4] 没有类似列出已安装软件包的简单的选项,但是 `apt` 有一个这样的命令:
+
+```
+apt list --installed
+```
+
+这个会显示使用 `apt` 命令安装的所有的软件包。同时也会包含由于依赖而被安装的软件包。也就是说不仅会包含你曾经安装的程序,而且会包含大量库文件和间接安装的软件包。
+
+![用 apt 命令列出已安装的软件包][5]
+
+*用 apt 命令列出已安装的软件包*
+
+由于列出出来的已安装的软件包太多,用 `grep` 过滤特定的软件包是一个比较好的办法。
+
+```
+apt list --installed | grep program_name
+```
+
+如上命令也可以检索出使用 .deb 软件包文件安装的软件。是不是很酷?
+
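+如果只是想知道大概装了多少个软件包,可以把输出交给 `wc -l` 数一下行数(注意第一行的 “Listing...” 提示也会被计入):
+
+```
+apt list --installed | wc -l
+```
+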
+如果你阅读过 [apt 与 apt-get 对比][7]的文章,你可能已经知道 `apt` 和 `apt-get` 命令都是基于 [dpkg][8]。也就是说用 `dpkg` 命令可以列出 Debian 系统的所有已经安装的软件包。
+
+```
+dpkg-query -l
+```
+
+你可以用 `grep` 命令检索指定的软件包。
+
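+例如(`program_name` 是占位符,替换为你要查找的软件包名即可):
+
+```
+dpkg-query -l | grep program_name
+```
+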
+![用 dpkg 命令列出已经安装的软件包][9]
+
+*用 dpkg 命令列出已经安装的软件包*
+
+现在你已经可以列出由 Debian 的软件包管理器安装的应用了。那 Snap 和 Flatpak 这两种应用呢?如何列出它们?它们可没法通过 `apt` 和 `dpkg` 来查看。
+
+显示系统里所有已经安装的 [Snap 软件包][10],可以用这个命令:
+
+```
+snap list
+```
+
+Snap 可以用绿色勾号标出哪个应用来自经过认证的发布者。
+
+![列出已经安装的 Snap 软件包][11]
+
+*列出已经安装的 Snap 软件包*
+
+显示系统里所有已安装的 [Flatpak 软件包][12],可以用这个命令:
+
+```
+flatpak list
+```
+
+让我来个汇总:
+
+
+用 `apt` 命令显示已安装软件包:
+
+```
+apt list --installed
+```
+
+用 `dpkg` 命令显示已安装软件包:
+
+```
+dpkg-query -l
+```
+
+列出系统里 Snap 已安装软件包:
+
+```
+snap list
+```
+
+列出系统里 Flatpak 已安装软件包:
+
+```
+flatpak list
+```
+
+### 显示最近安装的软件包
+
+现在你已经看过以字母顺序列出的已经安装软件包了。如何显示最近已经安装的软件包?
+
+幸运的是,Linux 系统保存了所有发生事件的日志。你可以参考最近安装软件包的日志。
+
+有两个方法可以来做。用 `dpkg` 命令的日志或者 `apt` 命令的日志。
+
+你仅仅需要用 `grep` 命令过滤已经安装的软件包日志。
+
+```
+grep " install " /var/log/dpkg.log
+```
+
+这会显示所有已安装的软件包,其中包括在安装过程中作为依赖而装上的软件包。
+
+```
+2019-02-12 12:41:42 install ubuntu-make:all 16.11.1ubuntu1
+2019-02-13 21:03:02 install xdg-desktop-portal:amd64 0.11-1
+2019-02-13 21:03:02 install libostree-1-1:amd64 2018.8-0ubuntu0.1
+2019-02-13 21:03:02 install flatpak:amd64 1.0.6-0ubuntu0.1
+2019-02-13 21:03:02 install xdg-desktop-portal-gtk:amd64 0.11-1
+2019-02-14 11:49:10 install qml-module-qtquick-window2:amd64 5.9.5-0ubuntu1.1
+2019-02-14 11:49:10 install qml-module-qtquick2:amd64 5.9.5-0ubuntu1.1
+2019-02-14 11:49:10 install qml-module-qtgraphicaleffects:amd64 5.9.5-0ubuntu1
+```
+
+你也可以查看 `apt` 命令的历史日志。这里仅会显示用 `apt` 命令安装的程序,但不会显示作为依赖安装的软件包,那些可以在上面的详细日志里看到。有时你只是想看看自己主动安装了什么,对吧?
+
+```
+grep " install " /var/log/apt/history.log
+```
+
+具体的显示如下:
+
+```
+Commandline: apt install pinta
+Commandline: apt install pinta
+Commandline: apt install tmux
+Commandline: apt install terminator
+Commandline: apt install moreutils
+Commandline: apt install ubuntu-make
+Commandline: apt install flatpak
+Commandline: apt install cool-retro-term
+Commandline: apt install ubuntu-software
+```
+
+![显示最近已安装的软件包][13]
+
+*显示最近已安装的软件包*
+
+`apt` 的历史日志非常有用,因为它显示了什么时候执行了 `apt` 命令、是哪个用户执行的,以及安装了哪些软件包。
+
+### 小技巧:在软件中心显示已安装的程序包名
+
+如果你觉得终端和命令行交互不友好,还有一个方法可以查看系统的程序名。
+
+可以打开软件中心,然后点击“已安装”标签,就可以看到系统上已经安装的程序包名。
+
+![Ubuntu 软件中心显示已安装的软件包][14]
+
+*在软件中心显示已安装的软件包*
+
+这里不会显示程序库和其他命令行工具,可能你也不想看到它们,因为你的大部分交互都是在 GUI 中进行的。此外,你也可以使用 Synaptic 软件包管理器来查看。
+
+### 结束语
+
+我希望这个简易的教程可以帮你查看 Ubuntu 和基于 Debian 的发行版的已安装软件包。
+
+如果你对本文有什么问题或建议,请在下面留言。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/list-installed-packages-ubuntu
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[guevaraya](https://github.com/guevaraya)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/getting-started-with-ubuntu/
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/apt-command-guide/
+[4]: https://itsfoss.com/apt-get-linux-guide/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages-in-ubuntu-with-apt.png?resize=800%2C407&ssl=1
+[6]: https://itsfoss.com/install-deb-files-ubuntu/
+[7]: https://itsfoss.com/apt-vs-apt-get-difference/
+[8]: https://wiki.debian.org/dpkg
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages-with-dpkg.png?ssl=1
+[10]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-snap-packages.png?ssl=1
+[12]: https://itsfoss.com/flatpak-guide/
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/apt-list-recently-installed-packages.png?resize=800%2C187&ssl=1
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/installed-software-ubuntu.png?ssl=1
+[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/list-installed-packages.png?fit=800%2C450&ssl=1
diff --git a/published/20190206 And, Ampersand, and - in Linux.md b/published/20190206 And, Ampersand, and - in Linux.md
new file mode 100644
index 0000000000..5c85abc111
--- /dev/null
+++ b/published/20190206 And, Ampersand, and - in Linux.md
@@ -0,0 +1,196 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10587-1.html)
+[#]: subject: (And, Ampersand, and & in Linux)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+Linux 中的 &
+======
+
+> 这篇文章将了解一下 & 符号及它在 Linux 命令行中的各种用法。
+
+
+
+如果阅读过我之前的三篇文章([1][1]、[2][2]、[3][3]),你会觉得掌握连接各个命令之间的连接符号用法也是很重要的。实际上,命令的用法并不难,例如 `mkdir`、`touch` 和 `find` 也分别可以简单概括为“建立新目录”、“更新文件”和“在目录树中查找文件”而已。
+
+但如果要理解
+
+```
+mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &
+```
+
+这一串命令的目的,以及为什么要这样写,就没有这么简单了。
+
+关键之处就在于命令之间的连接符号。掌握了这些符号的用法,不仅可以让你更好理解整体的工作原理,还可以让你知道如何将不同的命令有效地结合起来,提高工作效率。
+
+在这一篇文章和接下来的文章中,我会介绍 `&` 号和管道符号(`|`)在不同场景下的使用方法。
+
+### 幕后工作
+
+我来举一个简单的例子,看看如何使用 `&` 号将下面这个命令放到后台运行:
+
+```
+cp -R original/dir/ backup/dir/
+```
+
+这个命令的目的是将 `original/dir/` 的内容递归地复制到 `backup/dir/` 中。虽然看起来很简单,但是如果原目录里面的文件太大,在执行过程中终端就会一直被卡住。
+
+所以,可以在命令的末尾加上一个 `&` 号,将这个任务放到后台去执行:
+
+```
+cp -R original/dir/ backup/dir/ &
+```
+
+任务被放到后台执行之后,就可以立即继续在同一个终端上工作了,甚至关闭终端也不影响这个任务的正常执行。需要注意的是,如果要求这个任务输出内容到标准输出中(例如 `echo` 或 `ls`),即使使用了 `&`,也会等待这些输出任务在前台运行完毕。
+
+当使用 `&` 将一个进程放置到后台运行的时候,Bash 会提示这个进程的进程 ID。在 Linux 系统中运行的每一个进程都有一个唯一的进程 ID,你可以使用进程 ID 来暂停、恢复或者终止对应的进程,因此进程 ID 是非常重要的。
+
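+顺带一提,如果当时忘了记下这个进程 ID,也可以在把任务放入后台之后立即用 Bash 的特殊变量 `$!` 把它取出来(补充示例):
+
+```
+cp -R original/dir/ backup/dir/ &
+echo $! # 打印最近一个放入后台的任务的进程 ID
+```
+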
+这个时候,只要你还停留在启动进程的终端当中,就可以使用以下几个命令来管理后台进程:
+
+ * `jobs` 命令可以显示当前终端正在运行的进程,包括前台运行和后台运行的进程。它对每个正在执行中的进程任务分配了一个序号(这个序号不是进程 ID),可以使用这些序号来引用各个进程任务。
+
+ ```
+ $ jobs
+[1]- Running cp -i -R original/dir/* backup/dir/ &
+[2]+ Running find . -iname "*jpg" > backup/dir/images.txt &
+```
+ * `fg` 命令可以将后台运行的进程任务放到前台运行,这样可以比较方便地进行交互。根据 `jobs` 命令提供的进程任务序号,再在前面加上 `%` 符号,就可以把相应的进程任务放到前台运行。
+
+ ```
+ $ fg %1 # 将上面序号为 1 的 cp 任务放到前台运行
+cp -i -R original/dir/* backup/dir/
+```
+ 如果这个进程任务是暂停状态,`fg` 命令会将它启动起来。
+ * 使用 `ctrl+z` 组合键可以将前台运行的任务暂停,仅仅是暂停,而不是将任务终止。当使用 `fg` 或者 `bg` 命令将任务重新启动起来的时候,任务会从被暂停的位置开始执行。但 [sleep][4] 命令是一个特例,`sleep` 任务被暂停的时间会计算在 `sleep` 时间之内。因为 `sleep` 命令依据的是系统时钟的时间,而不是实际运行的时间。也就是说,如果运行了 `sleep 30`,然后将任务暂停 30 秒以上,那么任务恢复执行的时候会立即终止并退出。
+ * `bg` 命令会将任务放置到后台执行,如果任务是暂停状态,也会被启动起来。
+
+ ```
+ $ bg %1
+[1]+ cp -i -R original/dir/* backup/dir/ &
+```
+
+如上所述,以上几个命令只能在同一个终端里才能使用。如果启动进程任务的终端被关闭了,或者切换到了另一个终端,以上几个命令就无法使用了。
+
+如果要在另一个终端管理后台进程,就需要其它工具了。例如可以使用 [kill][5] 命令从另一个终端终止某个进程:
+
+```
+kill -s STOP <PID>
+```
+
+这里的 PID 就是使用 `&` 将进程放到后台时 Bash 显示的那个进程 ID。如果你当时没有把进程 ID 记录下来,也可以使用 `ps` 命令(代表 process)来获取所有正在运行的进程的进程 ID,就像这样:
+
+```
+ps | grep cp
+```
+
+执行以后会显示出包含 `cp` 字符串的所有进程,例如上面例子中的 `cp` 进程。同时还会显示出对应的进程 ID:
+
+```
+$ ps | grep cp
+14444 pts/3 00:00:13 cp
+```
+
+在这个例子中,进程 ID 是 14444,因此可以使用以下命令来暂停这个后台进程:
+
+```
+kill -s STOP 14444
+```
+
+注意,这里的 `STOP` 等同于前面提到的 `ctrl+z` 组合键的效果,也就是仅仅把进程暂停掉。
+
+如果想要把暂停了的进程启动起来,可以对进程发出 `CONT` 信号:
+
+```
+kill -s CONT 14444
+```
+
+这里有一个[可以向进程发出的常用信号][6]的列表。如果想要终止一个进程,可以发送 `TERM` 信号:
+
+```
+kill -s TERM 14444
+```
+
+如果进程不响应 `TERM` 信号并拒绝退出,还可以发送 `KILL` 信号强制终止进程:
+
+```
+kill -s KILL 14444
+```
+
+强制终止进程可能会有一定的风险,但如果遇到进程无节制消耗资源的情况,这样的信号还是能够派上用场的。
+
+另外,如果你不确定进程 ID 是否正确,可以在 `ps` 命令中加上 `x` 参数:
+
+```
+$ ps x| grep cp
+14444 pts/3 D 0:14 cp -i -R original/dir/Hols_2014.mp4
+ original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4
+ original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/
+```
+
+这样就可以看到是不是你需要的进程 ID 了。
+
+最后介绍一个将 `ps` 和 `grep` 结合到一起的命令:
+
+```
+$ pgrep cp
+8
+18
+19
+26
+33
+40
+47
+54
+61
+72
+88
+96
+136
+339
+6680
+13735
+14444
+```
+
+`pgrep` 可以直接将带有字符串 `cp` 的进程的进程 ID 显示出来。
+
+可以加上一些参数让它的输出更清晰:
+
+```
+$ pgrep -lx cp
+14444 cp
+```
+
+在这里,`-l` 参数会让 `pgrep` 将进程的名称显示出来,`-x` 参数则是让 `pgrep` 完全匹配 `cp` 这个命令。如果还想了解这个命令的更多细节,可以尝试运行 `pgrep -ax`。
+
+### 总结
+
+在命令的末尾加上 `&` 可以让我们理解前台进程和后台进程的概念,以及如何管理这些进程。
+
+在 UNIX/Linux 术语中,在后台运行的进程被称为守护进程(daemon)。如果你曾经听说过这个词,那你现在应该知道它的意义了。
+
+和其它符号一样,`&` 在命令行中还有很多别的用法。在下一篇文章中,我会更详细地介绍。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10465-1.html
+[2]: https://linux.cn/article-10502-1.html
+[3]: https://linux.cn/article-10529-1.html
+[4]: https://ss64.com/bash/sleep.html
+[5]: https://bash.cyberciti.biz/guide/Sending_signal_to_Processes
+[6]: https://www.computerhope.com/unix/signals.htm
+
diff --git a/published/20190206 Getting started with Vim visual mode.md b/published/20190206 Getting started with Vim visual mode.md
new file mode 100644
index 0000000000..fff2bafe1a
--- /dev/null
+++ b/published/20190206 Getting started with Vim visual mode.md
@@ -0,0 +1,120 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10589-1.html)
+[#]: subject: (Getting started with Vim visual mode)
+[#]: via: (https://opensource.com/article/19/2/getting-started-vim-visual-mode)
+[#]: author: (Susan Lauber https://opensource.com/users/susanlauber)
+
+Vim 可视化模式入门
+======
+
+> 可视化模式使得在 Vim 中高亮显示和操作文本变得更加容易。
+
+
+
+Ansible 剧本文件是 YAML 格式的文本文件,经常与它们打交道的人会借助自己偏爱的编辑器和扩展插件来让格式化变得更容易。
+
+当我使用大多数 Linux 发行版中提供的默认编辑器来教学 Ansible 时,我经常使用 Vim 的可视化模式。它可以让我在屏幕上高亮显示我的操作 —— 我要编辑什么以及我正在做的文本处理任务,以便使我的学生更容易学习。
+
+### Vim 的可视化模式
+
+使用 Vim 编辑文本时,可视化模式对于识别要操作的文本块非常有用。
+
+Vim 的可视模式有三个模式:字符、行和块。进入每种模式的按键是:
+
+ * 字符模式: `v` (小写)
+ * 行模式: `V` (大写)
+ * 块模式: `Ctrl+v`
+
+下面是使用每种模式简化工作的一些方法。
+
+### 字符模式
+
+字符模式可以高亮显示段落中的一个句子或句子中的一个短语,然后,可以使用任何 Vim 编辑命令删除、复制、更改/修改可视化模式识别的文本。
+
+#### 移动一个句子
+
+要将句子从一个地方移动到另一个地方,首先打开文件并将光标移动到要移动的句子的第一个字符。
+
+
+
+ * 按下 `v` 键进入可视化字符模式。单词 `VISUAL` 将出现在屏幕底部。
+ * 使用箭头来高亮显示所需的文本。你可以使用其他导航命令,例如 `w` 高亮显示至下一个单词的开头,`$` 来包含该行的其余部分。
+ * 在文本高亮显示后,按下 `d` 删除文本。
+ * 如果你删除得太多或不够,按下 `u` 撤销并重新开始。
+ * 将光标移动到新位置,然后按 `p` 粘贴文本。
+
+#### 改变一个短语
+
+你还可以高亮显示要替换的一段文本。
+
+
+
+ * 将光标放在要更改的第一个字符处。
+ * 按下 `v` 进入可视化字符模式。
+ * 使用导航命令(如箭头键)高亮显示该部分。
+ * 按下 `c` 可更改高亮显示的文本。
+ * 高亮显示的文本将消失,你将处于插入模式,你可以在其中添加新文本。
+ * 输入新文本后,按下 `Esc` 返回命令模式并保存你的工作。
+
+
+
+### 行模式
+
+使用 Ansible 剧本时,任务的顺序很重要。使用可视化行模式将 Ansible 任务移动到该剧本文件中的其他位置。
+
+#### 操纵多行文本
+
+
+
+ * 将光标放在要操作的文本的第一行或最后一行的任何位置。
+ * 按下 `Shift+V` 进入行模式。单词 `VISUAL LINE` 将出现在屏幕底部。
+ * 使用导航命令(如箭头键)高亮显示多行文本。
+ * 高亮显示所需文本后,使用命令来操作它。按下 `d` 删除,然后将光标移动到新位置,按下 `p` 粘贴文本。
+ * 如果要复制该 Ansible 任务,可以使用 `y`(yank)来代替 `d`(delete)。
+
+#### 缩进一组行
+
+使用 Ansible 剧本或 YAML 文件时,缩进很重要。高亮显示的块可以使用 `>` 和 `<` 键向右或向左移动。
+
+
+
+ * 按下 `>` 增加所有行的缩进。
+ * 按下 `<` 减少所有行的缩进。
+
+尝试其他 Vim 命令将它们应用于高亮显示的文本。
+
+### 块模式
+
+可视化块模式对于操作特定的表格数据文件非常有用,但它作为验证 Ansible 剧本文件缩进的工具也很有帮助。
+
+Ansible 任务是个项目列表,在 YAML 中,每个列表项都以一个破折号跟上一个空格开头。破折号必须在同一列中对齐,以达到相同的缩进级别。仅凭肉眼很难看出这一点。缩进 Ansible 任务中的其他行也很重要。
+
+#### 验证任务列表缩进相同
+
+
+
+ * 将光标放在列表项的第一个字符上。
+ * 按下 `Ctrl+v` 进入可视化块模式。单词 `VISUAL BLOCK` 将出现在屏幕底部。
+ * 使用箭头键高亮显示单个字符列。你可以验证每个任务的缩进量是否相同。
+ * 使用箭头键向右或向左展开块,以检查其它缩进是否正确。
+
+
+
+尽管我对其它 Vim 编辑快捷方式很熟悉,但我仍然喜欢使用可视化模式来整理我想要处理的文本。当我在讲演过程中演示其它概念时,我的学生将会在这个“对他们而言很新”的文本编辑器中看到一个可以高亮文本、再按下删除键即可删掉它的简单工具。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/getting-started-vim-visual-mode
+
+作者:[Susan Lauber][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/susanlauber
+[b]: https://github.com/lujun9972
diff --git a/published/20190208 7 steps for hunting down Python code bugs.md b/published/20190208 7 steps for hunting down Python code bugs.md
new file mode 100644
index 0000000000..1d89b8fd2d
--- /dev/null
+++ b/published/20190208 7 steps for hunting down Python code bugs.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: (LazyWolfLin)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10603-1.html)
+[#]: subject: (7 steps for hunting down Python code bugs)
+[#]: via: (https://opensource.com/article/19/2/steps-hunting-code-python-bugs)
+[#]: author: (Maria Mckinley https://opensource.com/users/parody)
+
+Python 七步捉虫法
+======
+
+> 了解一些技巧助你减少代码查错时间。
+
+
+
+在周五的下午三点钟(为什么是这个时间?因为事情总会在周五下午三点钟发生),你收到一条通知,客户发现你的软件出现一个错误。在有了初步的怀疑后,你联系运维,查看你的软件日志以了解发生了什么,因为你记得收到过日志已经搬家了的通知。
+
+结果这些日志被转移到了你获取不到的地方,但它们正在导入到一个网页应用中——所以到时候你可以用这个漂亮的应用来检索日志,但是,这个应用现在还没完成。这个应用预计会在几天内完成。我知道,你觉得这完全不切实际。然而并不是,日志或者日志消息似乎经常在错误的时间消失不见。在我们开始查错前,一个忠告:经常检查你的日志以确保它们还在你认为它们应该在的地方,并记录你认为它们应该记的东西。当你不注意的时候,这些东西往往会发生令人惊讶的变化。
+
+好的,你找到了日志或者尝试了呼叫运维人员,而客户确实发现了一个错误。甚至你可能认为你已经知道错误在哪儿。
+
+你立即打开你认为可能有问题的文件并开始查错。
+
+### 1、先不要碰你的代码
+
+阅读代码,你甚至可能会想到该阅读哪些部分。但是在开始搞乱你的代码前,请重现导致错误的调用并把它变成一个测试。这将是一个集成测试,因为你可能还有其他疑问,目前你还不能准确地知道问题在哪儿。
+
+确保这个测试结果是失败的。这很重要,因为有时你的测试并不能重现失败的调用,尤其是当你使用了可能混淆测试的 web 或者其他框架时。很多东西可能被存储在变量中,但遗憾的是,只通过观察测试,你并不总能明显看出测试里实际调用了什么。我并不是说我曾在尝试重现失败调用时写出过一个居然能通过的测试,但是,好吧,我确实写过,而且我不认为这特别不寻常。
+
+> 从自己的错误中吸取教训。
+
+### 2、编写错误的测试
+
+现在,你有了一个失败的测试,或者可能是一个带有错误的测试,那么是时候解决问题了。但是在你开干之前,让我们先检查下调用栈,因为这样可以更轻松地解决问题。
+
+调用栈包括你已经启动但尚未完成的所有任务。因此,比如你正在烤蛋糕并准备往面糊里加面粉,那你的调用栈将是:
+
+* 做蛋糕
+* 打面糊
+* 加面粉
+
+你已经开始做蛋糕,开始打面糊,而你现在正在加面粉。往锅底抹油不在这个列表中,因为你已经完成了,而做糖霜不在这个列表上因为你还没开始做。
+
+如果你对调用栈不清楚,我强烈建议你使用 [Python Tutor][1],它能帮你在执行代码时观察调用栈。
+
+现在,如果你的 Python 程序出现了错误, Python 解释器会帮你打印出当前调用栈。这意味着无论那一时刻程序在做什么,很明显错误发生在调用栈的底部。
+
+### 3、始终先检查调用栈底部
+
+在栈底你不仅能看到发生了哪个错误,而且通常可以在调用栈的最后一行发现问题。如果栈底对你没有帮助,而你的代码还没有经过代码分析,那么使用代码分析是非常有用的。我推荐 pylint 或者 flake8。通常情况下,它会指出我一直忽略的错误的地方。
+
+如果错误看起来很迷惑,你下一步行动可能是用 Google 搜索它。如果你搜索的内容不包含你的代码的相关信息,如变量名、文件等,那你将获得更好的搜索结果。如果你使用的是 Python 3(你应该使用它),那么搜索内容包含 Python 3 是有帮助的,否则 Python 2 的解决方案往往会占据大多数。
+
+很久以前,开发者需要在没有搜索引擎的帮助下解决问题。那是一段黑暗时光。充分利用你可以使用的所有工具。
+
+不幸的是,有时候问题发生在更早阶段,但只有在调用栈底部执行的地方才显现出来。就像当蛋糕没有膨胀时,忘记加发酵粉的事才被发现。
+
+那就该检查整个调用栈。问题更可能在你的代码而不是 Python 标准库或者第三方包,所以先检查调用栈内你的代码。另外,在你的代码中放置断点通常会更容易检查代码。在调用栈的代码中放置断点,然后看看周围是否如你预期。
+
+“但是,玛丽,”我听到你说,“如果我有一个调用栈,那这些都是有帮助的,但我只有一个失败的测试。我该从哪里开始?”
+
+pdb,一个 Python 调试器。
+
+找到你代码里会被这个调用命中的地方。你应该能够找到至少一个这样的地方。在那里打上一个 pdb 的断点。
+
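+在命令行里启动 pdb 的一种常见方式如下(脚本名仅为示例;进入调试器后,`b 文件名:行号` 用来设置断点,`c` 继续执行,`w` 查看调用栈):
+
+```
+python3 -m pdb my_script.py
+```
+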
+#### 一句题外话
+
+为什么不使用 `print` 语句呢?我曾经依赖于 `print` 语句。有时候,它们仍然很方便。但当我开始处理复杂的代码库,尤其是有网络调用的代码库,`print` 语句就变得太慢了。我最终在各种地方都加上了 `print` 语句,但我没法追踪它们的位置和原因,而且变得更复杂了。但是主要使用 pdb 还有一个更重要的原因。假设你添加一条 `print` 语句去发现错误问题,而且 `print` 语句必须早于错误出现的地方。但是,看看你放 `print` 语句的函数,你不知道你的代码是怎么执行到那个位置的。查看代码是寻找调用路径的好方法,但看你以前写的代码是恐怖的。是的,我会用 `grep` 处理我的代码库以寻找调用函数的地方,但这会变得乏味,而且搜索一个通用函数时并不能缩小搜索范围。pdb 就变得非常有用。
+
+你遵循我的建议,打上 pdb 断点并运行你的测试。然而测试再次失败,但是没有任何一个断点被命中。留着你的断点,并运行测试套件中一个同这个失败的测试非常相似的测试。如果你有个不错的测试套件,你应该能够找到一个这样的测试。它会命中了你认为你的失败测试应该命中的代码。运行这个测试,然后当它运行到你的断点,按下 `w` 并检查调用栈。如果你不知道如何查看因为其他调用而变得混乱的调用栈,那么在调用栈的中间找到属于你的代码,并在堆栈中该代码的上一行放置一个断点。再试一次新的测试。如果仍然没命中断点,那么继续,向上追踪调用栈并找出你的调用在哪里脱轨了。如果你一直没有命中断点,最后到了追踪的顶部,那么恭喜你,你发现了问题:你的应用程序名称拼写错了。
+
+> 没有经验,小白,一点都没有经验。
+
+### 4、修改代码
+
+如果你仍觉得迷惑,在你稍微改变了一些的地方尝试新的测试。你能让新的测试跑起来么?有什么是不同的呢?有什么是相同的呢?尝试改变一下别的东西。当你有了你的测试,以及可能也还有其它的测试,那就可以开始安全地修改代码了,确定是否可以缩小问题范围。记得从一个新提交开始解决问题,以便于可以轻松地撤销无效的更改。(这正是版本控制的用处,如果你没有使用过版本控制,它将会改变你的生活。好吧,可能它只是让编码更容易。查阅“[版本控制可视指南][2]”,以了解更多。)
+
+### 5、休息一下
+
+尽管如此,当它不再感觉起来像一个有趣的挑战或者游戏而开始变得令人沮丧时,你最好的举措是脱离这个问题。休息一下。我强烈建议你去散步并尝试考虑别的事情。
+
+### 6、把一切写下来
+
+当你回来了,如果你没有突然受到启发,那就把你关于这个问题所知的每一个点信息写下来。这应该包括:
+
+ * 真正造成问题的调用
+ * 真正发生了什么,包括任何错误信息或者相关的日志信息
+ * 你真正期望发生什么
+ * 到目前为止,为了找出问题,你做了什么工作;以及解决问题中你发现的任何线索。
+
+有时这里有很多信息,但相信我,从零碎中挖掘信息是很烦人。所以尽量简洁,但是要完整。
+
+### 7、寻求帮助
+
+我经常发现写下所有信息能够启迪我想到还没尝试过的东西。当然,有时候我在点击求助邮件(或表单)的提交按钮后立刻意识到问题是是什么。无论如何,当你在写下所有东西仍一无所获时,那就试试向他人发邮件求助。首先是你的同事或者其他参与你的项目的人,然后是该项目的邮件列表。不要害怕向人求助。大多数人都是友善和乐于助人的,我发现在 Python 社区里尤其如此。
+
+Maria McKinley 将于 2 月 23 日至 24 日在西雅图举行的 [PyCascades 2019][4] 上发表演讲《[代码查错][3]》。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/steps-hunting-code-python-bugs
+
+作者:[Maria Mckinley][a]
+选题:[lujun9972][b]
+译者:[LazyWolfLin](https://github.com/LazyWolfLin)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/parody
+[b]: https://github.com/lujun9972
+[1]: http://www.pythontutor.com/
+[2]: https://betterexplained.com/articles/a-visual-guide-to-version-control/
+[3]: https://2019.pycascades.com/talks/hunting-the-bugs
+[4]: https://2019.pycascades.com/
diff --git a/published/20190212 Ampersands and File Descriptors in Bash.md b/published/20190212 Ampersands and File Descriptors in Bash.md
new file mode 100644
index 0000000000..1458375ae6
--- /dev/null
+++ b/published/20190212 Ampersands and File Descriptors in Bash.md
@@ -0,0 +1,164 @@
+[#]: collector: (lujun9972)
+[#]: translator: (zero-mk)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10591-1.html)
+[#]: subject: (Ampersands and File Descriptors in Bash)
+[#]: via: (https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+Bash 中的 & 符号和文件描述符
+======
+
+> 了解如何将 “&” 与尖括号结合使用,并从命令行中获得更多信息。
+
+
+
+在我们探究大多数链式 Bash 命令中出现的所有的杂项符号(`&`、`|`、`;`、`>`、`<`、`{`、`[`、`(`、`)`、`]`、`}` 等等)的任务中,[我们一直在仔细研究 & 符号][1]。
+
+[上次,我们看到了如何使用 & 把可能需要很长时间运行的进程放到后台运行][1]。但是,`&` 与尖括号 `<` 结合使用,也可用于将输入或输出通过管道导向其他地方。
+
+在 [前面的][2] [尖括号教程中][3],你看到了如何使用 `>`,如下:
+
+```
+ls > list.txt
+```
+
+将 `ls` 输出传递给 `list.txt` 文件。
+
+现在我们看到的是简写:
+
+```
+ls 1> list.txt
+```
+
+在这种情况下,`1` 是一个文件描述符,指向标准输出(`stdout`)。
+
+以类似的方式,`2` 指向标准错误输出(`stderr`):
+
+```
+ls 2> error.log
+```
+
+所有错误消息都通过管道传递给 `error.log` 文件。
+
+回顾一下:`1>` 是标准输出(`stdout`),`2>` 是标准错误输出(`stderr`)。
+
+第三个标准文件描述符,`0<` 是标准输入(`stdin`)。你可以看到它是一个输入,因为箭头(`<`)指向`0`,而对于 `1` 和 `2`,箭头(`>`)是指向外部的。
+
+### 标准文件描述符有什么用?
+
+如果你一直跟着本系列阅读,那么你应该已经多次使用过标准输出(`1>`)的简写形式:`>`。
+
+例如,当(假如)你知道你的命令会抛出一个错误时,像 `stderr`(`2`)这样的东西也很方便,但是 Bash 告诉你的东西是没有用的,你不需要看到它。如果要在 `home/` 目录中创建目录,例如:
+
+```
+mkdir newdir
+```
+
+如果 `newdir/` 已经存在,`mkdir` 将显示错误。但你为什么要关心这些呢?(好吧,在某些情况下你可能会关心,但并非总是如此。)不管怎样,最终你都会得到一个可以往里放东西的 `newdir`。你可以通过将错误消息推入虚空(即 `/dev/null`)来抑制它:
+
+```
+mkdir newdir 2> /dev/null
+```
+
+这不仅仅是 “让我们不要看到丑陋和无关的错误消息,因为它们很烦人”,因为在某些情况下,错误消息可能会在其他地方引起一连串错误。比如说,你想找到 `/etc` 下所有的 `.service` 文件。你可以这样做:
+
+```
+find /etc -iname "*.service"
+```
+
+但事实证明,在大多数系统中,`find` 会显示出许多行错误,因为普通用户对 `/etc` 下的某些文件夹没有读取权限。这使得阅读正确的输出变得很麻烦;而且如果 `find` 是某个更大的脚本的一部分,还可能会干扰后面接着执行的命令。
+
+相反,你可以这样做:
+
+```
+find /etc -iname "*.service" 2> /dev/null
+```
+
+而且你只得到你想要的结果。
+
+### 文件描述符入门
+
+单独的文件描述符 `stdout` 和 `stderr` 还有一些注意事项。如果要将输出存储在文件中,请执行以下操作:
+
+```
+find /etc -iname "*.service" 1> services.txt
+```
+
+工作正常,因为 `1>` 的意思是“把标准输出,并且只把标准输出(而不是标准错误)发送到某个地方”。
+
+但这里存在一个问题:如果你既想记录命令抛出的错误信息,又想保留正常的结果,该怎么**做**?上面的命令并不能做到,因为它只写入 `find` 的正确结果,而:
+
+```
+find /etc -iname "*.service" 2> services.txt
+```
+
+只会写入命令抛出的错误信息。
+
+我们如何得到两者?请尝试以下命令:
+
+```
+find /etc -iname "*.service" &> services.txt
+```
+
+…… 再次和 `&` 打个招呼!
+
+我们一直在说 `stdin`(`0`)、`stdout`(`1`)和 `stderr`(`2`)是“文件描述符”。文件描述符是一种特殊构造,是指向文件的通道,用于读取或写入,或两者兼而有之。这来自于将所有内容都视为文件的旧 UNIX 理念。想写一个设备?将其视为文件。想写入套接字并通过网络发送数据?将其视为文件。想要读取和写入文件?嗯,显然,将其视为文件。
+
+因此,在管理命令的输出和错误的位置时,将目标视为文件。因此,当你打开它们来读取和写入它们时,它们都会获得文件描述符。
+
+这是一个有趣的效果。例如,你可以将内容从一个文件描述符传递到另一个文件描述符:
+
+```
+find /etc -iname "*.service" 1> services.txt 2>&1
+```
+
+这会将 `stderr` 导向到 `stdout`,而 `stdout` 通过管道被导向到一个文件中 `services.txt` 中。
+
+它再次出现:`&` 发信号通知 Bash `1` 是目标文件描述符。
+
+标准文件描述符的另一个问题是,当你从一个管道传输到另一个时,你执行此操作的顺序有点违反直觉。例如,按照上面的命令。它看起来像是错误的方式。你应该像这样阅读它:“将输出导向到文件,然后将错误导向到标准输出。” 看起来错误输出会在后面,并且在输出到标准输出(`1`)已经完成时才发送。
+
+但这不是文件描述符的工作方式。文件描述符不是文件的占位符,而是文件的输入和(或)输出通道。在这种情况下,当你做 `1> services.txt` 时,你的意思是 “打开一个写管道到 `services.txt` 并保持打开状态”。`1` 是你要使用的管道的名称,它将保持打开状态直到该行的结尾。
+
+如果你仍然认为这是错误的方法,试试这个:
+
+```
+find /etc -iname "*.service" 2>&1 1>services.txt
+```
+
+并注意它是如何不工作的;注意错误是如何被导向到终端的,而只有非错误的输出(即 `stdout`)被推送到 `services.txt`。
+
+这是因为 Bash 从左到右处理 `find` 的每个结果。这样想:当 Bash 到达 `2>&1` 时,`stdout` (`1`)仍然是指向终端的通道。如果 `find` 给 Bash 的结果包含一个错误,它将被弹出到 `2`,转移到 `1`,然后留在终端!
+
+然后在命令结束时,Bash 看到你要打开 `stdout`(`1`) 作为到 `services.txt` 文件的通道。如果没有发生错误,结果将通过通道 `1` 进入文件。
+
+相比之下,在:
+
+```
+find /etc -iname "*.service" 1>services.txt 2>&1
+```
+
+`1` 从一开始就指向 `services.txt`,因此任何弹出到 `2` 的内容都会导向到 `1` ,而 `1` 已经指向最终去的位置 `services.txt`,这就是它工作的原因。
+
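+如果想直观地验证这个顺序问题,可以用一条必然同时产生正常输出和错误输出的命令来试验(以下是补充示例,`/no/such/file` 假定并不存在):
+
+```
+# /etc/hostname 存在,/no/such/file 不存在,因此 ls 会同时产生 stdout 和 stderr
+ls /etc/hostname /no/such/file > out.txt 2>&1   # 正常输出和错误都进入 out.txt
+ls /etc/hostname /no/such/file 2>&1 > out.txt   # 错误仍打印到终端,只有正常输出进入 out.txt
+```
+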
+在任何情况下,如上所述 `&>` 都是“标准输出和标准错误”的缩写,即 `2>&1`。
+
+这可能有点多,但不用担心。重新导向文件描述符在 Bash 命令行和脚本中是司空见惯的事。随着本系列的深入,你将了解更多关于文件描述符的知识。下周见!
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[zero-mk](https://github.com/zero-mk)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10587-1.html
+[2]: https://linux.cn/article-10502-1.html
+[3]: https://linux.cn/article-10529-1.html
diff --git a/published/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md b/published/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md
new file mode 100644
index 0000000000..cee9dc5f2c
--- /dev/null
+++ b/published/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md
@@ -0,0 +1,218 @@
+[#]: collector: (lujun9972)
+[#]: translator: (An-DJ)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10595-1.html)
+[#]: subject: (How To Check CPU, Memory And Swap Utilization Percentage In Linux?)
+[#]: via: (https://www.2daygeek.com/linux-check-cpu-memory-swap-utilization-percentage/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+如何查看 Linux 下 CPU、内存和交换分区的占用率?
+======
+
+在 Linux 下有很多可以用来查看内存占用情况的命令和选项,但是我并没有看见关于内存占用率的更多的信息。
+
+在大多数情况下我们只想查看内存使用情况,并没有考虑占用的百分比究竟是多少。如果你想要了解这些信息,那你看这篇文章就对了。我们将会详细地在这里帮助你解决这个问题。
+
+这篇教程将会帮助你在面对 Linux 服务器下频繁的内存高占用情况时,确定内存使用情况。
+
+与此同时,即使你使用的是 `free -m` 或者 `free -g`,输出对占用情况的描述也并不是十分清楚。
+
+这些格式化命令属于 Linux 高级命令。它将会对 Linux 专家和中等水平 Linux 使用者非常有用。
+
+### 方法-1:如何查看 Linux 下内存占用率?
+
+我们可以使用下面命令的组合来达到此目的。在该方法中,我们使用的是 `free` 和 `awk` 命令的组合来获取内存占用率。
+
+如果你正在寻找其他有关于内存的文章,你可以导航到如下链接。这些文章有 [free 命令][1]、[smem 命令][2]、[ps_mem 命令][3]、[vmstat 命令][4] 及 [查看物理内存大小的多种方式][5]。
+
+要获取不包含百分比符号的内存占用率:
+
+```
+$ free -t | awk 'NR == 2 {print "Current Memory Utilization is : " $3/$2*100}'
+或
+$ free -t | awk 'FNR == 2 {print "Current Memory Utilization is : " $3/$2*100}'
+
+Current Memory Utilization is : 20.4194
+```
+
+要获取不包含百分比符号的交换分区占用率:
+
+```
+$ free -t | awk 'NR == 3 {print "Current Swap Utilization is : " $3/$2*100}'
+或
+$ free -t | awk 'FNR == 3 {print "Current Swap Utilization is : " $3/$2*100}'
+
+Current Swap Utilization is : 0
+```
+
+要获取包含百分比符号及保留两位小数的内存占用率:
+
+```
+$ free -t | awk 'NR == 2 {printf("Current Memory Utilization is : %.2f%"), $3/$2*100}'
+或
+$ free -t | awk 'FNR == 2 {printf("Current Memory Utilization is : %.2f%"), $3/$2*100}'
+
+Current Memory Utilization is : 20.42%
+```
+
+要获取包含百分比符号及保留两位小数的交换分区占用率:
+
+```
+$ free -t | awk 'NR == 3 {printf("Current Swap Utilization is : %.2f%"), $3/$2*100}'
+或
+$ free -t | awk 'FNR == 3 {printf("Current Swap Utilization is : %.2f%"), $3/$2*100}'
+
+Current Swap Utilization is : 0.00%
+```
+
+如果你正在寻找有关于交换分区的其他文章,你可以导航至如下链接。这些链接有 [使用 LVM(逻辑盘卷管理)创建和扩展交换分区][6],[创建或扩展交换分区的多种方式][7] 和 [创建/删除和挂载交换分区文件的多种方式][8]。
+
+键入 `free` 命令会更好地作出阐释:
+
+```
+$ free
+              total        used        free      shared  buff/cache   available
+Mem:          15867        3730        9868        1189        2269       10640
+Swap:         17454           0       17454
+Total:        33322        3730       27322
+```
+
+细节如下:
+
+ * `free`:是一个标准命令,用于在 Linux 下查看内存使用情况。
+ * `awk`:是一个专门用来做文本数据处理的强大命令。
+ * `FNR == 2`:`FNR` 是 awk 在当前输入文件中的行号。这里用它挑选出给定的行(本例中选择的是第 2 行)。
+ * `NR == 2`:`NR` 是 awk 到目前为止处理过的总行数。这里同样用它过滤出给定的行(本例中选择的是第 2 行)。
+ * `$3/$2*100`:该命令将列 3 除以列 2 并将结果乘以 100。
+ * `printf`:该命令用于格式化和打印数据。
+ * `%.2f%`:`printf` 默认会打印小数点后 6 位的浮点数;`%.2f` 把它限制为保留两位小数,末尾的 `%` 用来输出百分号(更严谨的写法是 `%%`)。
+
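+如果想把上面的片段放进监控脚本里,可以再加一个简单的阈值判断。下面是一个最小化的示意脚本(阈值 80 和提示文字都只是假设):
+
+```
+#!/bin/bash
+# 内存占用率超过阈值时打印告警(阈值仅为示例)
+threshold=80
+usage=$(free -t | awk 'NR == 2 {printf "%.0f", $3/$2*100}')
+if [ "$usage" -ge "$threshold" ]; then
+    echo "警告:当前内存占用率已达 ${usage}%"
+fi
+```
+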
+### 方法-2:如何查看 Linux 下内存占用率?
+
+我们可以使用下面命令的组合来达到此目的。在这种方法中,我们使用 `free`、`grep` 和 `awk` 命令的组合来获取内存占用率。
+
+要获取不包含百分比符号的内存占用率:
+
+```
+$ free -t | grep Mem | awk '{print "Current Memory Utilization is : " $3/$2*100}'
+Current Memory Utilization is : 20.4228
+```
+
+要获取不包含百分比符号的交换分区占用率:
+
+```
+$ free -t | grep Swap | awk '{print "Current Swap Utilization is : " $3/$2*100}'
+Current Swap Utilization is : 0
+```
+
+要获取包含百分比符号及保留两位小数的内存占用率:
+
+```
+$ free -t | grep Mem | awk '{printf("Current Memory Utilization is : %.2f%"), $3/$2*100}'
+Current Memory Utilization is : 20.43%
+```
+
+要获取包含百分比符号及保留两位小数的交换空间占用率:
+
+```
+$ free -t | grep Swap | awk '{printf("Current Swap Utilization is : %.2f%"), $3/$2*100}'
+Current Swap Utilization is : 0.00%
+```
+
+### 方法-1:如何查看 Linux 下 CPU 的占用率?
+
+我们可以使用如下命令的组合来达到此目的。在这种方法中,我们使用 `top`、`print` 和 `awk` 命令的组合来获取 CPU 的占用率。
+
+如果你正在寻找其他有关于 CPU(LCTT 译注:原文误为 memory)的文章,你可以导航至如下链接。这些文章有 [top 命令][9]、[htop 命令][10]、[atop 命令][11] 及 [Glances 命令][12]。
+
+如果在输出中展示的是多个 CPU 的情况,那么你需要使用下面的方法。
+
+```
+$ top -b -n1 | grep ^%Cpu
+%Cpu0 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 5.3 si, 0.0 st
+%Cpu3 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu4 : 10.5 us, 15.8 sy, 0.0 ni, 73.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu5 : 0.0 us, 5.0 sy, 0.0 ni, 95.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu6 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+%Cpu7 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
+```
+
+要获取不包含百分比符号的 CPU 占用率:
+
+```
+$ top -b -n1 | grep ^%Cpu | awk '{cpu+=$9}END{print "Current CPU Utilization is : " 100-cpu/NR}'
+Current CPU Utilization is : 21.05
+```
+
+要获取包含百分比符号及保留两位小数的 CPU 占用率:
+
+```
+$ top -b -n1 | grep ^%Cpu | awk '{cpu+=$9}END{printf("Current CPU Utilization is : %.2f%"), 100-cpu/NR}'
+Current CPU Utilization is : 14.81%
+```
+
+### 方法-2:如何查看 Linux 下 CPU 的占用率?
+
+我们可以使用如下命令的组合来达到此目的。在这种方法中,我们使用的是 `top`、`print`/`printf` 和 `awk` 命令的组合来获取 CPU 的占用率。
+
+如果在单个输出中一起展示了所有的 CPU 的情况,那么你需要使用下面的方法。
+
+```
+$ top -b -n1 | grep ^%Cpu
+%Cpu(s): 15.3 us, 7.2 sy, 0.8 ni, 69.0 id, 6.7 wa, 0.0 hi, 1.0 si, 0.0 st
+```
+
+要获取不包含百分比符号的 CPU 占用率:
+
+```
+$ top -b -n1 | grep ^%Cpu | awk '{print "Current CPU Utilization is : " 100-$8}'
+Current CPU Utilization is : 5.6
+```
+
+要获取包含百分比符号及保留两位小数的 CPU 占用率:
+
+```
+$ top -b -n1 | grep ^%Cpu | awk '{printf("Current CPU Utilization is : %.2f%"), 100-$8}'
+Current CPU Utilization is : 5.40%
+```
+
+如下是一些细节:
+
+ * `top`:是一种用于查看当前 Linux 系统下正在运行的进程的非常好的命令。
+ * `-b`:该选项让 `top` 以批处理模式运行,不再交互式刷新。当你想把 `top` 的输出交给其它程序处理或保存到文件(比如在脚本里)时,它非常有用。
+ * `-n1`:迭代次数。
+ * `^%Cpu`:过滤以 `%CPU` 开头的行。
+ * `awk`:是一种专门用来做文本数据处理的强大命令。
+ * `cpu+=$9`:对于每一行,将第 9 列添加至变量 `cpu`。
+ * `printf`:该命令用于格式化和打印数据。
+ * `%.2f%`:`printf` 默认会打印小数点后 6 位的浮点数;`%.2f` 把它限制为保留两位小数,末尾的 `%` 用来输出百分号(更严谨的写法是 `%%`)。
+ * `100-cpu/NR`:最终打印出 CPU 平均占用率,即用 100 减去平均空闲率(累加得到的空闲值 `cpu` 除以行数 `NR`)。
+
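+类似地,也可以把方法-2 的一行式命令包装成脚本,定期把 CPU 占用率记录到日志里(以下只是一个示意,日志路径是假设的):
+
+```
+#!/bin/bash
+# 记录当前 CPU 占用率(假设 top 输出的是方法-2 中的汇总单行格式)
+cpu=$(top -b -n1 | grep '^%Cpu(s)' | awk '{printf "%.2f", 100-$8}')
+echo "$(date '+%F %T') CPU 占用率:${cpu}%" >> /tmp/cpu-usage.log
+```
+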
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-check-cpu-memory-swap-utilization-percentage/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[An-DJ](https://github.com/An-DJ)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/
+[2]: https://www.2daygeek.com/smem-linux-memory-usage-statistics-reporting-tool/
+[3]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/
+[4]: https://www.2daygeek.com/linux-vmstat-command-examples-tool-report-virtual-memory-statistics/
+[5]: https://www.2daygeek.com/easy-ways-to-check-size-of-physical-memory-ram-in-linux/
+[6]: https://www.2daygeek.com/how-to-create-extend-swap-partition-in-linux-using-lvm/
+[7]: https://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/
+[8]: https://www.2daygeek.com/shell-script-create-add-extend-swap-space-linux/
+[9]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
+[10]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
+[11]: https://www.2daygeek.com/atop-system-process-performance-monitoring-tool/
+[12]: https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/
diff --git a/published/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md b/published/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md
new file mode 100644
index 0000000000..77c68dc718
--- /dev/null
+++ b/published/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10584-1.html)
+[#]: subject: (Two graphical tools for manipulating PDFs on the Linux desktop)
+[#]: via: (https://opensource.com/article/19/2/manipulating-pdfs-linux)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+两款 Linux 桌面中的图形化操作 PDF 的工具
+======
+
+> PDF-Shuffler 和 PDF Chain 是在 Linux 中修改 PDF 的绝佳工具。
+
+
+
+由于我谈论并写过一些关于 PDF 及其处理工具的文章,有些人认为我喜欢这种格式。其实我并不喜欢,原因有很多,这里就不展开了。
+
+我并不是说 PDF 不是我个人和职业生活中躲不开的一个坏东西 —— 它们确实是。即使有更好的方案来交付文档,我通常也还是得使用 PDF。
+
+当我使用 PDF 时,通常是在白天工作时在其他的操作系统上使用,我使用 Adobe Acrobat 进行操作。但是当我必须在 Linux 桌面上使用 PDF 时呢?我们来看看我用来操作 PDF 的两个图形工具。
+
+### PDF-Shuffler
+
+顾名思义,你可以使用 [PDF-Shuffler][1] 在 PDF 文件中移动页面。它可以做得更多,但该软件的功能是有限的。这并不意味着 PDF-Shuffler 没用。它有用,很有用。
+
+你可以将 PDF-Shuffler 用来:
+
+ * 从 PDF 文件中提取页面
+ * 将页面添加到文件中
+ * 重新排列文件中的页面
+
+请注意,PDF-Shuffler 有一些依赖项,如 pyPDF 和 python-gtk。通常,通过包管理器安装它是最快且最不令人沮丧的途径。
+
+假设你想从 PDF 中提取页面,也许是作为你书中的样本章节。选择 “File > Add”打开 PDF 文件。
+
+
+
+要提取第 7 页到第 9 页,请按住 `Ctrl` 并单击选择页面。然后,右键单击并选择 “Export selection”。
+
+
+
+选择要保存文件的目录,为其命名,然后单击 “Save”。
+
+要添加文件 —— 例如,要添加封面,或重新插入已扫描并签名的合同或申请表 —— 先打开 PDF 文件,然后选择 “File > Add” 并找到要添加的 PDF 文件,单击 “Open”。
+
+PDF-Shuffler 有个不好的地方就是添加页面到你正在处理的 PDF 文件末尾。单击并将添加的页面拖动到文件中的所需位置。你一次只能在文件中单击并拖动一个页面。
+
+
+
+### PDF Chain
+
+我是 [PDFtk][2] 的忠实粉丝,它是一个可以对 PDF 做一些有趣操作的命令行工具。由于我不经常使用它,我不记得所有 PDFtk 的命令和选项。
+
+[PDF Chain][3] 是 PDFtk 命令行的一个很好的替代品。它可以让你一键使用 PDFtk 最常用的命令。无需使用菜单,你可以:
+
+* 合并 PDF(包括旋转一个或多个文件的页面)
+* 从 PDF 中提取页面并将其保存到单个文件中
+* 为 PDF 添加背景或水印
+* 将附件添加到文件
+
+
+
+你也可以做得更多。点击 “Tools” 菜单,你可以:
+
+* 从 PDF 中提取附件
+* 压缩或解压缩文件
+* 从文件中提取元数据
+* 用外部[数据][4]填充 PDF 表格
+* [扁平化][5] PDF
+* 从 PDF 表单中删除 [XML 表格结构][6](XFA)数据
+
+老实说,我只使用 PDF Chain 或 PDFtk 来提取附件、压缩或解压缩 PDF。其余的功能我基本用不上。
+
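+作为参考,PDF Chain 的这些按钮背后大致对应着 PDFtk 的命令行用法。下面是几个常见的 `pdftk` 命令示意(文件名均为虚构的示例):
+
+```
+# 合并两个 PDF
+pdftk cover.pdf body.pdf cat output book.pdf
+# 提取第 7 到 9 页保存为新文件
+pdftk book.pdf cat 7-9 output sample-chapter.pdf
+# 查看(导出)元数据
+pdftk book.pdf dump_data
+```
+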
+### 总结
+
+Linux 上用于处理 PDF 的工具数量之多一直让我感到吃惊,它们的特性和功能的广度和深度也是如此。无论是命令行工具还是图形工具,我总能找到一个能满足我需要的。在大多数情况下,PDF-Shuffler 和 PDF Chain 对我来说效果很好。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/manipulating-pdfs-linux
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: https://savannah.nongnu.org/projects/pdfshuffler/
+[2]: https://en.wikipedia.org/wiki/PDFtk
+[3]: http://pdfchain.sourceforge.net/
+[4]: http://www.verypdf.com/pdfform/fdf.htm
+[5]: http://pdf-tips-tricks.blogspot.com/2009/03/flattening-pdf-layers.html
+[6]: http://en.wikipedia.org/wiki/XFA
diff --git a/published/20190213 How to use Linux Cockpit to manage system performance.md b/published/20190213 How to use Linux Cockpit to manage system performance.md
new file mode 100644
index 0000000000..2209d07df3
--- /dev/null
+++ b/published/20190213 How to use Linux Cockpit to manage system performance.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10583-1.html)
+[#]: subject: (How to use Linux Cockpit to manage system performance)
+[#]: via: (https://www.networkworld.com/article/3340038/linux/sitting-in-the-linux-cockpit.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何使用 Linux Cockpit 来管理系统性能
+======
+
+> Linux Cockpit 是一个基于 Web 界面的应用,它提供了对系统的图形化管理。看下它能够控制哪些。
+
+
+
+如果你还没有尝试过相对较新的 Linux Cockpit,你可能会对它所能做的一切感到惊讶。它是一个用户友好的、基于 Web 的控制台,提供了一些非常简单的方法来通过浏览器管理 Linux 系统。你可以通过一个非常简单的网页来监控系统资源、添加或删除帐户、监控系统使用情况、关闭系统以及执行其它一些任务。它的设置和使用也非常简单。
+
+虽然许多 Linux 系统管理员将大部分时间花在命令行上,但使用 PuTTY 等工具访问远程系统并不总能提供最有用的命令输出。Linux Cockpit 提供了图形和易于使用的表单,来查看性能情况并对系统进行更改。
+
+Linux Cockpit 能让你查看系统性能的许多方面并进行配置更改,但任务列表可能取决于你使用的特定 Linux。任务分类包括以下内容:
+
+* 监控系统活动(CPU、内存、磁盘 IO 和网络流量) —— **系统**
+* 查看系统日志条目 —— **日志**
+* 查看磁盘分区的容量 —— **存储**
+* 查看网络活动(发送和接收) —— **网络**
+* 查看用户帐户 —— **帐户**
+* 检查系统服务的状态 —— **服务**
+* 提取已安装应用的信息 —— **应用**
+* 查看和安装可用更新(如果以 root 身份登录)并在需要时重新启动系统 —— **软件更新**
+* 打开并使用终端窗口 —— **终端**
+
+某些 Linux Cockpit 安装还允许你运行诊断报告、转储内核、检查 SELinux(安全)设置和列出订阅。
+
+以下是 Linux Cockpit 显示的系统活动示例:
+
+![cockpit activity][1]
+
+*Linux Cockpit 显示系统活动*
+
+### 如何设置 Linux Cockpit
+
+在某些 Linux 发行版(例如,最新的 RHEL)中,Linux Cockpit 可能已经安装并可以使用。在其他情况下,你可能需要采取一些简单的步骤来安装它并使其可使用。
+
+例如,在 Ubuntu 上,这些命令应该可用:
+
+```
+$ sudo apt-get install cockpit
+$ man cockpit <== just checking
+$ sudo systemctl enable --now cockpit.socket
+$ netstat -a | grep 9090
+tcp6 0 0 [::]:9090 [::]:* LISTEN
+$ sudo systemctl enable --now cockpit.socket
+$ sudo ufw allow 9090
+```
+
+启用 Linux Cockpit 后,在浏览器中打开 `https://<系统地址>:9090`(将 `<系统地址>` 替换为主机名或 IP 地址)。
+
+可以在 [Cockpit 项目][2] 中找到可以使用 Cockpit 的发行版列表以及安装说明。
+
+没有额外的配置,Linux Cockpit 将无法识别 `sudo` 权限。如果你被禁止使用 Cockpit 进行更改,你将会在你点击的按钮上看到一个红色的通用禁止标志。
+
+要使 `sudo` 权限有效,你需要确保该用户位于 `/etc/group` 文件中的 `wheel`(RHEL)或 `adm`(Debian)组中,即以 root 身份登录 Cockpit 时将该用户指定为 “Server Administrator”,并且该用户在登录 Cockpit 时勾选了 “重用我的密码” 选项。
+
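+如果不确定某个用户是否已经在对应的组里,可以先在终端里检查一下,必要时再把用户加进去(下面的 `username` 只是示例用户名):
+
+```
+# 查看 wheel(RHEL)以及 adm、sudo(Debian/Ubuntu)组的成员
+getent group wheel adm sudo
+# 把用户加入相应的组(username 仅为示例)
+sudo usermod -aG sudo username
+```
+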
+在你管理的系统位于千里之外或者没有控制台时,能使用图形界面控制也不错。虽然我喜欢在控制台上工作,但我偶尔也乐于见到图形或者按钮。Linux Cockpit 为日常管理任务提供了非常有用的界面。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3340038/linux/sitting-in-the-linux-cockpit.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/cockpit-activity-100787994-large.jpg
+[2]: https://cockpit-project.org/running.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
diff --git a/published/20190216 FinalCrypt - An Open Source File Encryption Application.md b/published/20190216 FinalCrypt - An Open Source File Encryption Application.md
new file mode 100644
index 0000000000..3619ccacc1
--- /dev/null
+++ b/published/20190216 FinalCrypt - An Open Source File Encryption Application.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10588-1.html)
+[#]: subject: (FinalCrypt – An Open Source File Encryption Application)
+[#]: via: (https://itsfoss.com/finalcrypt/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+FinalCrypt:一个开源文件加密应用
+======
+
+我通常不会加密文件,但如果我打算整理我的重要文件或凭证,加密程序就会派上用场。
+
+你可能已经在使用像 [GnuPG][1] 这样的程序来帮助你加密/解密 Linux 上的文件。还有 [EncryptPad][2] 也可以加密你的笔记。
+
+但是,我看到了一个名为 FinalCrypt 的新的免费开源加密工具。你可以在 [GitHub 页面][3]上查看其最新的版本和源码。
+
+在本文中,我将分享使用此工具的经验。请注意,我不会将它与其他程序进行比较 —— 因此,如果你想要多个程序之间的详细比较,请在评论中告诉我们。
+
+![FinalCrypt][4]
+
+### 使用 FinalCrypt 加密文件
+
+FinalCrypt 使用[一次性密码本][5]密钥生成密码来加密文件。换句话说,它会生成一个 OTP 密钥,你将使用该密钥加密或解密你的文件。
+
+根据你指定的密钥大小,密钥是完全随机的。因此,没有密钥文件就无法解密文件。
+
+虽然 OTP 密钥用于加密/解密简单而有效,但管理或保护密钥文件对某些人来说可能是不方便的。
+
+如果要使用 FinalCrypt,可以从它的网站下载 DEB/RPM 文件。FinalCrypt 也可用于 Windows 和 macOS。
+
+- [下载 FinalCrypt](https://sites.google.com/site/ronuitholland/home/finalcrypt)
+
+下载后,只需双击该 [deb][6] 或 rpm 文件就能安装。如果需要,你还可以从源码编译。
+
+### 使用 FinalCrypt
+
+该视频演示了如何使用 FinalCrypt:
+
+
+
+如果你喜欢 Linux 相关的视频,请[订阅我们的 YouTube 频道][7]。
+
+安装 FinalCrypt 后,你将在已安装的应用列表中找到它。从这里启动它。
+
+启动后,你将看到(分割的)两栏,一个进行加密/解密,另一个选择 OTP 文件。
+
+![Using FinalCrypt for encrypting files in Linux][8]
+
+首先,你必须生成 OTP 密钥。下面是做法:
+
+![finalcrypt otp][9]
+
+请注意你的文件名可以是任何内容 —— 但你需要确保密钥文件的大小大于或等于要加密的文件。我觉得这很荒谬,但事实就是如此(这也是一次性密码本的固有要求:密钥必须至少和要加密的内容一样长)。
+
+![][10]
+
+生成文件后,选择窗口右侧的密钥,然后选择要在窗口左侧加密的文件。
+
+生成 OTP 后,你会看到高亮显示的校验和、密钥文件大小和有效状态:
+
+![][11]
+
+选择之后,你只需要点击 “Encrypt” 来加密这些文件,如果已经加密,那么点击 “Decrypt” 来解密这些文件。
+
+![][12]
+
+你还可以在命令行中使用 FinalCrypt 来自动执行加密作业。
+
+#### 如何保护你的 OTP 密钥?
+
+加密/解密你想要保护的文件很容易。但是,你应该在哪里保存你的 OTP 密钥?
+
+如果你未能将 OTP 密钥保存在安全的地方,那么它几乎没用。
+
+嗯,最好的方法之一是使用专门的 USB 盘保存你的密钥。只需要在解密文件时将它插入即可。
+
+除此之外,如果你认为足够安全,你可以将密钥保存在[云服务][13]中。
+
+有关 FinalCrypt 的更多信息,请访问它的网站。
+
+[FinalCrypt](https://sites.google.com/site/ronuitholland/home/finalcrypt)
+
+### 总结
+
+它开始时看上去有点复杂,但它实际上是 Linux 中一个简单且用户友好的加密程序。如果你想看看其他的,还有一些其他的[加密保护文件夹][14]的程序。
+
+你如何看待 FinalCrypt?你还知道其他类似可能更好的程序么?请在评论区告诉我们,我们将会查看的!
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/finalcrypt/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://www.gnupg.org/
+[2]: https://itsfoss.com/encryptpad-encrypted-text-editor-linux/
+[3]: https://github.com/ron-from-nl/FinalCrypt
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.png?resize=800%2C450&ssl=1
+[5]: https://en.wikipedia.org/wiki/One-time_pad
+[6]: https://itsfoss.com/install-deb-files-ubuntu/
+[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.jpg?fit=800%2C439&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-otp-key.jpg?resize=800%2C443&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-otp-generate.jpg?ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-key.jpg?fit=800%2C420&ssl=1
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-encrypt.jpg?ssl=1
+[13]: https://itsfoss.com/cloud-services-linux/
+[14]: https://itsfoss.com/password-protect-folder-linux/
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.png?fit=800%2C450&ssl=1
diff --git a/published/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md b/published/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md
new file mode 100644
index 0000000000..dbaf1ca52e
--- /dev/null
+++ b/published/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md
@@ -0,0 +1,124 @@
+[#]: collector: (lujun9972)
+[#]: translator: (An-DJ)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10580-1.html)
+[#]: subject: (How to Change User Password in Ubuntu [Beginner’s Tutorial])
+[#]: via: (https://itsfoss.com/change-password-ubuntu)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+新手教程:Ubuntu 下如何修改用户密码
+======
+
+> 想要在 Ubuntu 下修改 root 用户的密码?那我们来学习下如何在 Ubuntu Linux 下修改任意用户的密码。我们会讨论在终端下修改和在图形界面(GUI)修改两种做法。
+
+那么,在 Ubuntu 下什么时候会需要修改密码呢?这里我给出如下两种场景。
+
+- 当你刚安装 [Ubuntu][1] 系统时,你会创建一个用户并且为之设置一个密码。这个初始密码可能安全性较弱或者太过于复杂,你会想要对它做出修改。
+- 如果你是系统管理员,你可能需要去修改在你管理的系统内其他用户的密码。
+
+当然,你可能会有其他的一些原因做这样的一件事。不过现在问题来了,我们到底如何在 Ubuntu 或其它 Linux 系统下修改单个用户的密码呢?
+
+在这个快速教程中,我将会展示给你在 Ubuntu 中如何使用命令行和图形界面(GUI)两种方式修改密码。
+
+### 在 Ubuntu 中修改用户密码 —— 通过命令行
+
+![如何在 Ubuntu Linux 下修改用户密码][2]
+
+在 Ubuntu 下修改用户密码其实非常简单。事实上,在任何 Linux 发行版上修改的方式都是一样的,因为你要使用的是叫做 `passwd` 的普通 Linux 命令来达到此目的。
+
+如果你想要修改你的当前密码,只需要简单地在终端执行此命令:
+
+```
+passwd
+```
+
+系统会要求你输入当前密码和两次新的密码。
+
+在键入密码时,你不会从屏幕上看到任何东西。这在 UNIX 和 Linux 系统中是非常正常的表现。
+
+```
+passwd
+Changing password for abhishek.
+(current) UNIX password:
+Enter new UNIX password:
+Retype new UNIX password:
+passwd: password updated successfully
+```
+
+由于这是你的管理员账户,你刚刚修改了 Ubuntu 下 sudo 密码,但你甚至没有意识到这个操作。(LCTT 译注:执行 sudo 操作时,输入的是的用户自身的密码,此处修改的就是自身的密码。而所说的“管理员账户”指的是该用户处于可以执行 `sudo` 命令的用户组中。本文此处描述易引起误会,特注明。)
+
+![在 Linux 命令行中修改用户密码][3]
+
+如果你想要修改其他用户的密码,你也可以使用 `passwd` 命令来做,在命令后面跟上对应的用户名即可。但是在这种情况下,你将不得不使用 `sudo`。(LCTT 译注:此处执行 `sudo`,要先输入你的 sudo 密码 —— 如上提示已经修改,再输入给其它用户设置的新密码 —— 两次。)
+
+```
+sudo passwd <user_name>
+```
+
+如果你对密码已经做出了修改,不过之后忘记了,不要担心。你可以[很容易地在 Ubuntu 下重置密码][4]。
+
+### 修改 Ubuntu 下 root 用户密码
+
+默认情况下,Ubuntu 中 root 用户是没有密码的。不必惊讶,你并不是在 Ubuntu 下一直使用 root 用户。不太懂?让我快速地给你解释下。
+
+当[安装 Ubuntu][5] 时,你会被强制创建一个用户。这个用户拥有管理员访问权限。这个管理员用户可以通过 `sudo` 命令获得 root 访问权限。但是,该用户使用的是自身的密码,而不是 root 账户的密码(因为就没有)。
+
+你可以使用 `passwd` 命令来设置或修改 root 用户的密码。然而,在大多数情况下,你并不需要它,而且你不应该去做这样的事。
+
+你将必须使用 `sudo` 命令(对于拥有管理员权限的账户)。~~如果 root 用户的密码之前没有被设置,它会要求你设置。另外,你可以使用已有的 root 密码对它进行修改。~~(LCTT 译注:此处描述有误,使用 `sudo` 或直接以 root 用户执行 `passwd` 命令时,不需要输入该被改变密码的用户的当前密码。)
+
+```
+sudo passwd root
+```
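+
+顺带一提(这只是补充说明,原文并未提及):如果你设置了 root 密码之后,又想恢复 Ubuntu 默认的 root 账户锁定状态,可以用 `passwd` 的 `-l` 选项重新锁定它:
+
+```
+sudo passwd -l root
+```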
+
+### 在 Ubuntu 下使用图形界面(GUI)修改密码
+
+我这里使用的是 GNOME 桌面环境,Ubuntu 版本为 18.04。这些步骤对于其他的桌面环境和 Ubuntu 版本应该差别不大。
+
+打开菜单(按下 `Windows`/`Super` 键)并搜索 “Settings”(设置)。
+
+在 “Settings” 中,向下滚动一段距离打开进入 “Details”。
+
+![在 Ubuntu GNOME Settings 中进入 Details][6]
+
+在这里,点击 “Users” 获取系统下可见的所有用户。
+
+![Ubuntu 下用户设置][7]
+
+你可以选择任一你想要的用户,包括你的主要管理员账户。你需要先解锁用户并点击 “Password” 区域。
+
+![Ubuntu 下修改用户密码][8]
+
+你会被要求设置密码。如果你正在修改的是你自己的密码,你将必须也输入当前使用的密码。
+
+![Ubuntu 下修改用户密码][9]
+
+做好这些后,点击上面的 “Change” 按钮,这样就完成了。你已经成功地在 Ubuntu 下修改了用户密码。
+
+我希望这篇快速精简的小教程能够帮助你在 Ubuntu 下修改用户密码。如果你对此还有一些问题或建议,请在下方留下评论。
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/change-password-ubuntu
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[An-DJ](https://github.com/An-DJ)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.ubuntu.com/
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-password-ubuntu-linux.png?resize=800%2C450&ssl=1
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-linux-1.jpg?resize=800%2C253&ssl=1
+[4]: https://itsfoss.com/how-to-hack-ubuntu-password/
+[5]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-2.jpg?resize=800%2C484&ssl=1
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-3.jpg?resize=800%2C488&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-4.jpg?resize=800%2C555&ssl=1
+[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-1.jpg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-password-ubuntu-linux.png?fit=800%2C450&ssl=1
diff --git a/published/20190219 Logical - in Bash.md b/published/20190219 Logical - in Bash.md
new file mode 100644
index 0000000000..990b73311e
--- /dev/null
+++ b/published/20190219 Logical - in Bash.md
@@ -0,0 +1,194 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zero-mk"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-10596-1.html"
+[#]: subject: "Logical & in Bash"
+[#]: via: "https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash"
+[#]: author: "Paul Brown https://www.linux.com/users/bro66"
+
+Bash 中的逻辑和(&)
+======
+> 在 Bash 中,你可以使用 & 作为 AND(逻辑和)操作符。
+
+
+
+有人可能会认为前两篇文章中的 `&` 意思差不多,但实际上并不是。[第一篇文章讨论了如何在命令末尾使用 & 来将命令转到后台运行][1],并在之后剖析了进程管理;第二篇文章则将 [& 看作引用文件描述符的方法][2]。这些文章让我们知道了:与 `<` 和 `>` 结合使用后,你可以将输入或输出引导到别的地方。
+
+但我们还没接触过作为 AND 操作符使用的 `&`。所以,让我们来看看。
+
+### & 是一个按位运算符
+
+如果你十分熟悉二进制数操作,你肯定听说过 AND 和 OR 。这些是按位操作,对二进制数的各个位进行操作。在 Bash 中,使用 `&` 作为 AND 运算符,使用 `|` 作为 OR 运算符:
+
+**AND**:
+
+```
+0 & 0 = 0
+0 & 1 = 0
+1 & 0 = 0
+1 & 1 = 1
+```
+
+**OR**:
+
+```
+0 | 0 = 0
+0 | 1 = 1
+1 | 0 = 1
+1 | 1 = 1
+```
+
+你可以通过对任何两个数字进行 AND 运算并使用 `echo` 输出结果:
+
+```
+$ echo $(( 2 & 3 )) # 00000010 AND 00000011 = 00000010
+2
+$ echo $(( 120 & 97 )) # 01111000 AND 01100001 = 01100000
+96
+```
+
+OR(`|`)也是如此:
+
+```
+$ echo $(( 2 | 3 )) # 00000010 OR 00000011 = 00000011
+3
+$ echo $(( 120 | 97 )) # 01111000 OR 01100001 = 01111001
+121
+```
+
+说明:
+
+1. 使用 `(( ... ))` 告诉 Bash 双括号之间的内容是某种算术或逻辑运算。`(( 2 + 2 ))`、 `(( 5 % 2 ))` (`%` 是[求模][3]运算符)和 `((( 5 % 2 ) + 1))`(等于 2)都可以工作。
+2. [像变量一样][4],使用 `$` 提取值,以便你可以使用它。
+3. 空格并没有影响:`((2+3))` 等价于 `(( 2+3 ))` 和 `(( 2 + 3 ))`。
+4. Bash 只能对整数进行操作。试试这样做: `(( 5 / 2 ))` ,你会得到 `2`;或者这样 `(( 2.5 & 7 ))` ,但会得到一个错误。然后,在按位操作中使用除了整数之外的任何东西(这就是我们现在所讨论的)通常是你不应该做的事情。
+
+**提示:** 如果你想看看十进制数字在二进制下会是什么样子,你可以使用 `bc` ,这是一个大多数 Linux 发行版都预装了的命令行计算器。比如:
+
+```
+bc <<< "obase=2; 97"
+```
+
+这个操作将会把 `97` 转换成二进制(`obase` 中的 `o` 代表 “output” ,也即,“输出”)。
+
+```
+bc <<< "ibase=2; 11001011"
+```
+
+这个操作将会把 `11001011` 转换成十进制(`ibase` 中的 `i` 代表 “input”,也即,“输入”)。
+
+### && 是一个逻辑运算符
+
+虽然它使用与其按位形式相同的逻辑原理,但 Bash 的 `&&` 运算符只能呈现两个结果:`1`(“真值”)和 `0`(“假值”)。对于 Bash 来说,任何不是 `0` 的数字都是 “真值”,任何等于 `0` 的数字都是 “假值”;此外,任何不是数字的东西也是 “假值”:
+
+```
+$ echo $(( 4 && 5 )) # 两个非零数字,两个为 true = true
+1
+$ echo $(( 0 && 5 )) # 有一个为零,一个为 false = false
+0
+$ echo $(( b && 5 )) # 其中一个不是数字,一个为 false = false
+0
+```
+
+与 `&&` 类似, OR 对应着 `||` ,用法正如你想的那样。
+
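+比如(以下是补充的小例子):
+
+```
+$ echo $(( 0 || 5 ))  # 只要有一个非零,结果就是 true = 1
+1
+$ echo $(( 0 || 0 ))  # 两个都是零,结果是 false = 0
+0
+```
+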
+以上这些都很简单……直到它用在命令的退出状态时。
+
+### && 是命令退出状态的逻辑运算符
+
+[正如我们在之前的文章中看到的][2],当命令运行时,它会输出错误消息。更重要的是,对于今天的讨论,它在结束时也会输出一个数字。此数字称为“返回码”,如果为 0,则表示该命令在执行期间未遇到任何问题。如果是任何其他数字,即使命令完成,也意味着某些地方出错了。
+
+所以 0 意味着是好的,任何其他数字都说明有问题发生,并且,在返回码的上下文中,0 意味着“真”,其他任何数字都意味着“假”。对!这 **与你所熟知的逻辑操作完全相反** ,但是你能用这个做什么? 不同的背景,不同的规则。这种用处很快就会显现出来。
+
+让我们继续!
+
+返回码 *临时* 储存在 [特殊变量][5] `?` 中 —— 是的,我知道:这又是一个令人迷惑的选择。但不管怎样,[别忘了我们在讨论变量的文章中说过][4],那时我们说你要用 `$` 符号来读取变量中的值,在这里也一样。所以,如果你想知道一个命令是否顺利运行,你需要在命令结束后,在运行别的命令之前马上用 `$?` 来读取 `?` 变量的值。
+
+试试下面的命令:
+
+```
+$ find /etc -iname "*.service"
+find: '/etc/audisp/plugins.d': Permission denied
+/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service
+/etc/systemd/system/dbus-org.freedesktop.ModemManager1.service
+[......]
+```
+
+[正如你在上一篇文章中看到的一样][2],普通用户权限在 `/etc` 下运行 `find` 通常将抛出错误,因为它试图读取你没有权限访问的子目录。
+
+所以,如果你在执行 `find` 后立马执行……
+
+```
+echo $?
+```
+
+……,它将打印 `1`,表明存在错误。
+
+(注意:当你连续运行两遍 `echo $?` 时,第二遍将得到一个 `0`。这是因为此时 `$?` 包含的是第一个 `echo $?` 的返回码,而这条命令按理说一定会执行成功。所以学习如何使用 `$?` 的第一课就是: **立刻使用它** ,或者将它保存在别的安全的地方 —— 比如保存在一个变量里,不然你会很快丢失它。)
+
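+比如,可以像下面这样把返回码立刻保存到一个变量里再使用(这只是一个示意):
+
+```
+find /etc -iname "*.service" 2> /dev/null
+rc=$?                          # 立即保存返回码,防止被后面的命令覆盖
+echo "find 的返回码是:$rc"
+```
+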
+一个直接使用 `?` 变量的用法是将它并入一串链式命令列表,这样 Bash 运行这串命令时若有任何操作失败,后面命令将终止。例如,你可能熟悉构建和编译应用程序源代码的过程。你可以像这样手动一个接一个地运行它们:
+
+```
+$ configure
+.
+.
+.
+$ make
+.
+.
+.
+$ make install
+.
+.
+.
+```
+
+你也可以把这三行合并成一行……
+
+```
+$ configure; make; make install
+```
+
+…… 但你要希望上天保佑。
+
+为什么这样说呢?因为你这样做是有缺点的,比方说 `configure` 执行失败了, Bash 将仍会尝试执行 `make` 和 `sudo make install`——就算没东西可 `make` ,实际上,是没东西会安装。
+
+聪明一点的做法是:
+
+```
+$ configure && make && make install
+```
+
+这将从每个命令中获取退出码,并将其用作链式 `&&` 操作的操作数。
+
+但是,妙处在于:Bash 知道如果 `configure` 返回非零结果,整个过程注定会失败。如果发生这种情况,就不必再运行 `make` 去检查它的退出码了,因为无论如何都会失败。因此,Bash 放弃运行 `make`,只是将非零结果传递给下一步操作。并且,由于 `configure && make` 传递了错误,Bash 也不必运行 `make install`。这意味着,在一长串命令中,你可以使用 `&&` 把它们连接起来,一旦某一步失败,后面的命令会立即被取消运行,从而节省时间。
+
+你可以类似地使用 `||`,OR 逻辑操作符,这样就算只有一部分命令成功执行,Bash 也能运行接下来链接在一起的命令。
+
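+一个常见的写法是把 `||` 挂在链式命令的末尾,在任何一步失败时给出提示(以下只是一个示意,并非原文的例子):
+
+```
+configure && make && make install || echo "构建在某一步失败了" >&2
+```
+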
+鉴于所有这些(以及我们之前介绍过的内容),你现在应该更清楚地了解我们在 [这篇文章开头][1] 出现的命令行:
+
+```
+mkdir test_dir 2>/dev/null || touch backup/dir/images.txt && find . -iname "*jpg" > backup/dir/images.txt &
+```
+
+因此,假设你从具有读写权限的目录运行上述内容,它做了什么以及如何做到这一点?它如何避免不合时宜且可能导致执行中断的错误?下周,除了给你这些答案的结果,我们将讨论圆括号,不要错过了哟!
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[zero-MK](https://github.com/zero-mk)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-10587-1.html
+[2]: https://linux.cn/article-10591-1.html
+[3]: https://en.wikipedia.org/wiki/Modulo_operation
+[4]: https://www.linux.com/blog/learn/2018/12/bash-variables-environmental-and-otherwise
+[5]: https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html
diff --git a/published/20190220 An Automated Way To Install Essential Applications On Ubuntu.md b/published/20190220 An Automated Way To Install Essential Applications On Ubuntu.md
new file mode 100644
index 0000000000..fb7f2ffde2
--- /dev/null
+++ b/published/20190220 An Automated Way To Install Essential Applications On Ubuntu.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10600-1.html)
+[#]: subject: (An Automated Way To Install Essential Applications On Ubuntu)
+[#]: via: (https://www.ostechnix.com/an-automated-way-to-install-essential-applications-on-ubuntu/)
+[#]: author: (SK https://www.ostechnix.com/author/sk/)
+
+在 Ubuntu 上自动化安装基本应用的方法
+======
+
+
+
+默认安装的 Ubuntu 并未预先安装所有必需的应用。你可能需要在网上花几个小时或者向其他 Linux 用户寻求帮助才能找到并安装 Ubuntu 所需的应用。如果你是新手,那么你肯定需要花更多的时间来学习如何从命令行(使用 `apt-get` 或 `dpkg`)或从 Ubuntu 软件中心搜索和安装应用。一些用户,特别是新手,可能希望轻松快速地安装他们喜欢的每个应用。如果你是其中之一,不用担心。在本指南中,我们将了解如何使用名为 “Alfred” 的简单命令行程序在 Ubuntu 上安装基本应用。
+
+Alfred 是用 Python 语言编写的自由、开源脚本。它使用 Zenity 创建了一个简单的图形界面,用户只需点击几下鼠标即可轻松选择和安装他们选择的应用。你不必花费数小时来搜索所有必要的应用程序、PPA、deb、AppImage、snap 或 flatpak。Alfred 将所有常见的应用、工具和小程序集中在一起,并自动安装所选的应用。如果你是最近从 Windows 迁移到 Ubuntu Linux 的新手,Alfred 会帮助你在新安装的 Ubuntu 系统上进行无人值守的软件安装,而无需太多用户干预。请注意,还有一个名称相似的 Mac OS 应用,但两者有不同的用途。
+
+### 在 Ubuntu 上安装 Alfred
+
+Alfred 安装很简单!只需下载脚本并启动它。就这么简单。
+
+```
+$ wget https://raw.githubusercontent.com/derkomai/alfred/master/alfred.py
+$ python3 alfred.py
+```
+
+或者,像上面那样使用 `wget` 下载脚本之后,也可以把 `alfred.py` 复制到 `$PATH` 中的某个目录下:
+
+```
+$ sudo cp alfred.py /usr/local/bin/alfred
+```
+
+使其可执行:
+
+```
+$ sudo chmod +x /usr/local/bin/alfred
+```
+
+并使用命令启动它:
+
+```
+$ alfred
+```
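+
+如果运行时提示找不到 `alfred` 命令,可以先确认脚本确实在 `$PATH` 里并且有可执行权限(以下只是排查思路的示意):
+
+```
+$ type alfred
+$ ls -l /usr/local/bin/alfred
+```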
+
+### 使用 Alfred 脚本轻松快速地在 Ubuntu 上安装基本应用程序
+
+按照上面所说启动 Alfred 脚本。这就是 Alfred 默认界面的样子。
+
+![][2]
+
+如你所见,Alfred 列出了许多最常用的应用类型,例如:
+
+ * 网络浏览器,
+ * 邮件客户端,
+ * 消息,
+ * 云存储客户端,
+ * 硬件驱动程序,
+ * 编解码器,
+ * 开发者工具,
+ * Android,
+ * 文本编辑器,
+ * Git,
+ * 内核更新工具,
+ * 音频/视频播放器,
+ * 截图工具,
+ * 录屏工具,
+ * 视频编码器,
+ * 流媒体应用,
+ * 3D 建模和动画工具,
+ * 图像查看器和编辑器,
+ * CAD 软件,
+ * PDF 工具,
+ * 游戏模拟器,
+ * 磁盘管理工具,
+ * 加密工具,
+ * 密码管理器,
+ * 存档工具,
+ * FTP 软件,
+ * 系统资源监视器,
+ * 应用启动器等。
+
+你可以选择任何一个或多个应用并立即安装它们。在这里,我将安装 “Developer bundle”,因此我选择它并单击 OK 按钮。
+
+![][3]
+
+现在,Alfred 脚本将自动在你的 Ubuntu 系统上添加必要的仓库、PPA,并开始安装所选的应用。
+
+![][4]
+
+安装完成后,你将看到以下消息。
+
+![][5]
+
+恭喜你!已安装选定的软件包。
+
+你可以使用以下命令[在 Ubuntu 上查看最近安装的应用][6]:
+
+```
+$ grep " install " /var/log/dpkg.log
+```
+
+你可能需要重启系统才能使用某些已安装的应用。类似地,你可以方便地安装列表中的任何程序。
+
+提示一下,还有一个由不同的开发人员编写的类似脚本,名为 `post_install.sh`。它与 Alfred 完全相同,但提供了一些不同的应用。请查看以下链接获取更多详细信息。
+
+- [Ubuntu Post Installation Script](https://www.ostechnix.com/ubuntu-post-installation-script/)
+
+这两个脚本能让懒惰的用户,特别是新手,只需点击几下鼠标就能够轻松快速地安装他们想要在 Ubuntu Linux 中使用的大多数常见应用、工具、更新、小程序,而无需依赖官方或者非官方文档的帮助。
+
+就是这些了。希望这篇文章有用。还有更多好东西。敬请期待!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/an-automated-way-to-install-essential-applications-on-ubuntu/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/alfred-1.png
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/alfred-2.png
+[4]: http://www.ostechnix.com/wp-content/uploads/2019/02/alfred-4.png
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/02/alfred-5-1.png
+[6]: https://www.ostechnix.com/list-installed-packages-sorted-installation-date-linux/
diff --git a/published/20190221 Bash-Insulter - A Script That Insults An User When Typing A Wrong Command.md b/published/20190221 Bash-Insulter - A Script That Insults An User When Typing A Wrong Command.md
new file mode 100644
index 0000000000..d2844f8a05
--- /dev/null
+++ b/published/20190221 Bash-Insulter - A Script That Insults An User When Typing A Wrong Command.md
@@ -0,0 +1,188 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zero-mk"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-10593-1.html"
+[#]: subject: "Bash-Insulter : A Script That Insults An User When Typing A Wrong Command"
+[#]: via: "https://www.2daygeek.com/bash-insulter-insults-the-user-when-typing-wrong-command/"
+[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
+
+Bash-Insulter:一个在输入错误命令时嘲讽用户的脚本
+======
+
+这是一个非常有趣的脚本,每当用户在终端输入错误的命令时,它都会嘲讽用户。
+
+它让你在解决一些问题时会感到快乐。有的人在受到终端嘲讽的时候感到不愉快。但是,当我受到终端的批评时,我真的很开心。
+
+这是一个有趣的 CLI 工具,在你弄错的时候,会用随机短语嘲讽你。此外,它允许你添加自己的短语。
+
+### 如何在 Linux 上安装 Bash-Insulter?
+
+在安装 Bash-Insulter 之前,请确保你的系统上安装了 git。如果没有,请使用以下命令安装它。
+
+对于 Fedora 系统, 请使用 [DNF 命令][1] 安装 git。
+
+```
+$ sudo dnf install git
+```
+
+对于 Debian/Ubuntu 系统,请使用 [APT-GET 命令][2] 或者 [APT 命令][3] 安装 git。
+
+```
+$ sudo apt install git
+```
+
+对于基于 Arch Linux 的系统,请使用 [Pacman 命令][4] 安装 git。
+
+```
+$ sudo pacman -S git
+```
+
+对于 RHEL/CentOS 系统,请使用 [YUM 命令][5] 安装 git。
+
+```
+$ sudo yum install git
+```
+
+对于 openSUSE Leap 系统,请使用 [Zypper 命令][6] 安装 git。
+
+```
+$ sudo zypper install git
+```
+
+我们可以通过克隆(clone)开发人员的 GitHub 存储库来轻松地安装它。
+
+首先克隆 Bash-insulter 存储库。
+
+```
+$ git clone https://github.com/hkbakke/bash-insulter.git bash-insulter
+```
+
+将下载的文件移动到文件夹 `/etc` 下。
+
+```
+$ sudo cp bash-insulter/src/bash.command-not-found /etc/
+```
+
+将下面的代码添加到 `/etc/bash.bashrc` 文件中。
+
+```
+$ sudo vi /etc/bash.bashrc
+
+#Bash Insulter
+if [ -f /etc/bash.command-not-found ]; then
+ . /etc/bash.command-not-found
+fi
+```
+
+运行以下命令使更改生效。
+
+```
+$ source /etc/bash.bashrc
+```
+
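+如果你不想(或者没有权限)修改系统级的 `/etc/bash.bashrc`,也可以只对当前用户启用它:把同样的判断追加到 `~/.bashrc` 里即可(以下只是一个可选做法的示意):
+
+```
+$ cat >> ~/.bashrc <<'EOF'
+if [ -f /etc/bash.command-not-found ]; then
+    . /etc/bash.command-not-found
+fi
+EOF
+$ source ~/.bashrc
+```
+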
+你想测试一下安装是否生效吗?你可以试试在终端上输入一些错误的命令,看看它如何嘲讽你。
+
+```
+$ unam -a
+
+$ pin 2daygeek.com
+```
+
+![][8]
+
+如果你想附加你自己的短语,则导航到以下文件并更新它。你可以在 `messages` 部分中添加短语。
+
+```
+# vi /etc/bash.command-not-found
+
+print_message () {
+
+ local messages
+ local message
+
+ messages=(
+ "Boooo!"
+ "Don't you know anything?"
+ "RTFM!"
+ "Haha, n00b!"
+ "Wow! That was impressively wrong!"
+ "Pathetic"
+ "The worst one today!"
+ "n00b alert!"
+ "Your application for reduced salary has been sent!"
+ "lol"
+ "u suk"
+ "lol... plz"
+ "plz uninstall"
+ "And the Darwin Award goes to.... ${USER}!"
+ "ERROR_INCOMPETENT_USER"
+ "Incompetence is also a form of competence"
+ "Bad."
+ "Fake it till you make it!"
+ "What is this...? Amateur hour!?"
+ "Come on! You can do it!"
+ "Nice try."
+ "What if... you type an actual command the next time!"
+ "What if I told you... it is possible to type valid commands."
+ "Y u no speak computer???"
+ "This is not Windows"
+ "Perhaps you should leave the command line alone..."
+ "Please step away from the keyboard!"
+ "error code: 1D10T"
+ "ACHTUNG! ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS! DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN. IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS. ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN."
+ "Pro tip: type a valid command!"
+ "Go outside."
+ "This is not a search engine."
+ "(╯°□°)╯︵ ┻━┻"
+ "¯\_(ツ)_/¯"
+ "So, I'm just going to go ahead and run rm -rf / for you."
+ "Why are you so stupid?!"
+ "Perhaps computers is not for you..."
+ "Why are you doing this to me?!"
+ "Don't you have anything better to do?!"
+ "I am _seriously_ considering 'rm -rf /'-ing myself..."
+ "This is why you get to see your children only once a month."
+ "This is why nobody likes you."
+ "Are you even trying?!"
+ "Try using your brain the next time!"
+ "My keyboard is not a touch screen!"
+ "Commands, random gibberish, who cares!"
+ "Typing incorrect commands, eh?"
+ "Are you always this stupid or are you making a special effort today?!"
+ "Dropped on your head as a baby, eh?"
+ "Brains aren't everything. In your case they're nothing."
+ "I don't know what makes you so stupid, but it really works."
+ "You are not as bad as people say, you are much, much worse."
+ "Two wrongs don't make a right, take your parents as an example."
+ "You must have been born on a highway because that's where most accidents happen."
+ "If what you don't know can't hurt you, you're invulnerable."
+ "If ignorance is bliss, you must be the happiest person on earth."
+ "You're proof that god has a sense of humor."
+ "Keep trying, someday you'll do something intelligent!"
+ "If shit was music, you'd be an orchestra."
+ "How many times do I have to flush before you go away?"
+ )
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/bash-insulter-insults-the-user-when-typing-wrong-command/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[zero-mk](https://github.com/zero-mk)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[5]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[8]: https://www.2daygeek.com/wp-content/uploads/2019/02/bash-insulter-insults-the-user-when-typing-wrong-command-1.png
diff --git a/published/20190223 Regex groups and numerals.md b/published/20190223 Regex groups and numerals.md
new file mode 100644
index 0000000000..8e963d8fdb
--- /dev/null
+++ b/published/20190223 Regex groups and numerals.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10594-1.html)
+[#]: subject: (Regex groups and numerals)
+[#]: via: (https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/)
+[#]: author: (Dr.Drang https://leancrew.com)
+
+正则表达式的分组和数字
+======
+
+大约一周前,我在编辑一个程序时想要更改一些变量名。我之前认为这将是一个简单的正则表达式查找/替换。只是这没有我想象的那么简单。
+
+变量名为 `a10`、`v10` 和 `x10`,我想分别将它们改为 `a30`、`v30` 和 `x30`。我想到使用 BBEdit 的查找窗口并输入:
+
+![Mistaken BBEdit replacement pattern][2]
+
+我不能简单地把 `10` 替换为 `30`,因为代码中有一些与变量无关的数字 `10`。我自认为很聪明,不想为 `a10`、`v10` 和 `x10` 分别写三次非正则表达式的替换。但是我却没有注意到替换模式中的蓝色。如果我注意到了,我就会看到 BBEdit 把我的替换模式解释成了 “匹配组 13,后面跟着 `0`”,而不是我想要的 “匹配组 1,后面跟着 `30`”。由于匹配组 13 是空白的,因此所有变量名都被替换成了 `0`。
+
+你看,BBEdit 可以在搜索模式中匹配多达 99 个分组,严格来说,我们在替换模式中引用它们时应该使用两位数字。但在大多数情况下,我们可以使用 `\1` 到 `\9` 而不是 `\01` 到 `\09`,因为这没有歧义。换句话说,如果我尝试将 `a10`、`v10` 和 `x10` 更改为 `az`、`vz` 和 `xz`,那么使用 `\1z` 的替换模式就没有问题,因为紧跟在后面的 `z` 不会让模式中的 `\1` 产生歧义。
+
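+这种歧义并不是 BBEdit 独有的。比如在命令行里用 Perl 做同样的替换时,也可以用花括号把组号括起来消除歧义(下面的命令只是补充示意,文件名是假设的):
+
+```
+# 把 a10、v10、x10 分别替换为 a30、v30、x30
+# ${1}30 明确表示“第 1 组后面跟着 30”,避免被解析成其它组号
+perl -pe 's/\b([avx])10\b/${1}30/g' script.py
+```
+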
+因此,在撤消替换后,我将模式更改为这样:
+
+![Two-digit BBEdit replacement pattern][3]
+
+它可以正常工作。
+
+还有另一个选择:命名组。这是使用 `var` 作为模式名称:
+
+![Named BBEdit replacement pattern][4]
+
+我从来都没有使用过命名组,无论正则表达式是在文本编辑器还是在脚本中。我的总体感觉是,如果模式复杂到我必须使用变量来跟踪所有组,那么我应该停下来并将问题分解为更小的部分。
+
+顺便说一下,你可能已经听说 BBEdit 正在庆祝它诞生[25周年][5]。当一个有良好文档的应用有如此长的历史时,手册的积累能让人愉快地回到过去的日子。当我在 BBEdit 手册中查找命名组的表示法时,我遇到了这个说明:
+
+![BBEdit regex manual excerpt][6]
+
+BBEdit 目前的版本是 12.5,而 V6.5 早在 2001 年就发布了。但手册仍希望确保长期客户(我记得我第一次购买是在 V4 的时候)不会因行为变化而感到困惑,即使这些变化几乎发生在二十年前。
+
+--------------------------------------------------------------------------------
+
+via: https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/
+
+作者:[Dr.Drang][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://leancrew.com
+[b]: https://github.com/lujun9972
+[1]: https://leancrew.com/all-this/2019/02/automation-evolution/
+[2]: https://leancrew.com/all-this/images2019/20190223-Mistaken%20BBEdit%20replacement%20pattern.png (Mistaken BBEdit replacement pattern)
+[3]: https://leancrew.com/all-this/images2019/20190223-Two-digit%20BBEdit%20replacement%20pattern.png (Two-digit BBEdit replacement pattern)
+[4]: https://leancrew.com/all-this/images2019/20190223-Named%20BBEdit%20replacement%20pattern.png (Named BBEdit replacement pattern)
+[5]: https://merch.barebones.com/
+[6]: https://leancrew.com/all-this/images2019/20190223-BBEdit%20regex%20manual%20excerpt.png (BBEdit regex manual excerpt)
diff --git a/sources/talk/20120911 Doug Bolden, Dunnet (IF).md b/sources/talk/20120911 Doug Bolden, Dunnet (IF).md
new file mode 100644
index 0000000000..c856dc5be0
--- /dev/null
+++ b/sources/talk/20120911 Doug Bolden, Dunnet (IF).md
@@ -0,0 +1,52 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Doug Bolden, Dunnet (IF))
+[#]: via: (http://www.wyrmis.com/games/if/dunnet.html)
+[#]: author: (W Doug Bolden http://www.wyrmis.com)
+
+Doug Bolden, Dunnet (IF)
+======
+
+### Dunnet (IF)
+
+#### Review
+
+When I began becoming a semi-serious hobbyist of IF last year, I mostly focused on Infocom, Adventures Unlimited, other Scott Adams based games, and freeware titles. I went on to buy some from Malinche. I picked up _1893_ and _Futureboy_ and (most recently) _Treasures of a Slave Kingdom_. I downloaded a lot of free games from various sites. With all of my research and playing, I never once read anything that talked about a game being bundled with Emacs.
+
+Partially, this is because I am a Vim guy. But I used to use Emacs. Kind of a lot. For probably my first couple of years with Linux. About as long as I have been a diehard Vim fan, now. I just never explored, it seems.
+
+I booted up Emacs tonight, and my fonts were hosed. Still do not know exactly why. I surfed some menus to find out what was going wrong and came across a menu option called "Adventure" under Games, which I assumed (I know, I know) meant the Crowther and Woods 1977 variety. When I clicked it tonight, thinking that it had been a few months since I chased a bird around with a cage in a mine so I can fight off giant snakes or something, I was brought up text involving ends of roads and shovels. Trees, if shaken, that kill me with a coconut. This was not the game I thought it was.
+
+I dug around (or, in purely technical terms, typed "help") and got directed to [this website][1]. Well, here was an IF I had never touched before. Brand spanking new to me. I had planned to play out some _ToaSK_ tonight, but figured that could wait. Besides, I was not quite in the mood for the jocular fun of S. John Ross's commercial IF outing. I needed something a little more direct, and this apparently was it.
+
+Most of the game plays out just like the _Colossal Cave Adventure_ cousins of the oldschool (generally commercial) IF days. There are items you pick up. Each does a single task (well, there could be one exception to this, I guess). You collect treasures. Winning is a combination of getting to the end and turning in the treasures. The game slightly tweaks the formula by allowing multiple drop off points for the treasures. Since there is a weight limit, though, you usually have to drop them off at a particular time to avoid getting stuck. At several times, your "item cache" is flushed, so to speak, meaning you have to go back and replay earlier portions to find out how to bring things forward. Damage to items can occur to stop you from being able to play. Replaying is pretty much unavoidable, unless you guess outcomes just right.
+
+It also inherits many problems from the era it came. There is a twisty maze. I'm not sure how big it is. I just cheated and looked up a walkthrough for the maze portion. I plan on going back and replaying up to the maze bit and mapping it out, though. I was just mentally and physically beat when I played and knew that I was going to have to call it quits on the game for the night or cheat through the maze. I'm glad I cheated, because there are some interesting things after the maze.
+
+It also has the same sort of stilted syntax and variable levels of description that the original _Adventure_ had. Looking at one item might give you "there is nothing special about that" while looking at another might give you a sentence of flavor text. Several things mentioned in the background do not exist to the parser, though some do. Part of game play is putting up with experimenting. This includes, in some cases, a tendency of room descriptions to be written from the perspective of the first time you enter. I know that the Classroom found towards the end of the game does not mention the South exit, either. There are possibly other times this occurred that I didn't notice.
+
+Its final issue, again coming out of the era it was designed in, is random death syndrome. This is not too common, but there are a few places where things that have no initially apparent fatal outcome lead to one anyhow. In some ways, this "fatal outcome" is just the game reaching an unwinnable state. For an example of the former, type "shake trees" in the first room. For an example of the latter, send either the lamp, the key, or the shovel through the ftp without switching ftp modes first. At least with the former, there is a sense of exploration in finding out new ways to die. In IF, creative deaths are a form of victory in their own right.
+
+_Dunnet_ has a couple of differences from most IF. The first difference is minor. There are little odd descriptions throughout the game. "This room is red" or "The towel has a picture of Snoopy on it" or "There is a cliff here" that do not seem to have an immediate effect on the game. Sure, you can jump over the cliff (and die, obviously) but it still comes off as a bright spot in the standard description matrix. Towards the end, you will be forced to bring back these details. It makes a neat little diversion of looking around and exploring things. Most of the details are cute and/or add to the surreality of the game overall.
+
+The other big difference, and the one that greatly increased both my annoyance with and my enjoyment of the game, revolves around the two or three computer-oriented scenes in the game. You have to type commands into two different computers throughout. One is a VAX and the other is, um, something like a PC (I forget). In both cases, there are clues to be found by knowing your way around the interface. This is a game for computer folk, so most who play it will have a sense of how to type "ls" or "dir" depending on the OS. But not all will. Beating the game requires a general sense of computer literacy. You must know what types are in ftp. You must know how to determine what type a file is. You must know how to read a text file on a DOS-style prompt. You must know something about protocols and etiquette for logging into ftp servers. All this sort of thing. If you do, or are willing to learn (I looked up some of the stuff online) then you can get past this portion with no problem. But this can be like the maze to some people, requiring several replays to get things right.
+
+The end result is a quirky but fun game that I wish I had known about before because now I have the feeling that my computer is hiding other secrets from me. Glad to have played. Will likely play again to see how many ways I can die.
+
+--------------------------------------------------------------------------------
+
+via: http://www.wyrmis.com/games/if/dunnet.html
+
+作者:[W Doug Bolden][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: http://www.wyrmis.com
+[b]: https://github.com/lujun9972
+[1]: http://www.driver-aces.com/ronnie.html
diff --git a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md
index 60e1094af9..7be913c3bf 100644
--- a/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md
+++ b/sources/talk/20140412 My Lisp Experiences and the Development of GNU Emacs.md
@@ -1,4 +1,3 @@
-zgj1024 is translating
My Lisp Experiences and the Development of GNU Emacs
======
diff --git a/sources/talk/20180916 The Rise and Demise of RSS.md b/sources/talk/20180916 The Rise and Demise of RSS.md
index 8511d220d9..d7f5c610b6 100644
--- a/sources/talk/20180916 The Rise and Demise of RSS.md
+++ b/sources/talk/20180916 The Rise and Demise of RSS.md
@@ -1,3 +1,4 @@
+name1e5s translating
The Rise and Demise of RSS
======
There are two stories here. The first is a story about a vision of the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
diff --git a/sources/talk/20180930 A Short History of Chaosnet.md b/sources/talk/20180930 A Short History of Chaosnet.md
index acbb10d53d..7ee1db8a37 100644
--- a/sources/talk/20180930 A Short History of Chaosnet.md
+++ b/sources/talk/20180930 A Short History of Chaosnet.md
@@ -1,3 +1,4 @@
+acyanbird translating
A Short History of Chaosnet
======
If you fire up `dig` and run a DNS query for `google.com`, you will get a response somewhat like the following:
diff --git a/sources/talk/20181205 5 reasons to give Linux for the holidays.md b/sources/talk/20181205 5 reasons to give Linux for the holidays.md
index 2bcd6d642c..71d65741ed 100644
--- a/sources/talk/20181205 5 reasons to give Linux for the holidays.md
+++ b/sources/talk/20181205 5 reasons to give Linux for the holidays.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (mokshal)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (5 reasons to give Linux for the holidays)
diff --git a/sources/talk/20181220 7 CI-CD tools for sysadmins.md b/sources/talk/20181220 7 CI-CD tools for sysadmins.md
deleted file mode 100644
index d645cf2561..0000000000
--- a/sources/talk/20181220 7 CI-CD tools for sysadmins.md
+++ /dev/null
@@ -1,136 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (7 CI/CD tools for sysadmins)
-[#]: via: (https://opensource.com/article/18/12/cicd-tools-sysadmins)
-[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
-
-7 CI/CD tools for sysadmins
-======
-An easy guide to the top open source continuous integration, continuous delivery, and continuous deployment tools.
-
-
-Continuous integration, continuous delivery, and continuous deployment (CI/CD) have all existed in the developer community for many years. Some organizations have involved their operations counterparts, but many haven't. For most organizations, it's imperative for their operations teams to become just as familiar with CI/CD tools and practices as their development compatriots are.
-
-CI/CD practices can equally apply to infrastructure and third-party applications and internally developed applications. Also, there are many different tools but all use similar models. And possibly most importantly, leading your company into this new practice will put you in a strong position within your company, and you'll be a beacon for others to follow.
-
-Some organizations have been using CI/CD practices on infrastructure, with tools like [Ansible][1], [Chef][2], or [Puppet][3], for several years. Other tools, like [Test Kitchen][4], allow tests to be performed on infrastructure that will eventually host applications. In fact, those tests can even deploy the application into a production-like environment and execute application-level tests with production loads in more advanced configurations. However, just getting to the point of being able to test the infrastructure individually is a huge feat. Terraform can also use Test Kitchen for even more [ephemeral][5] and [idempotent][6] infrastructure configurations than some of the original configuration-management tools. Add in Linux containers and Kubernetes, and you can now test full infrastructure and application deployments with prod-like specs and resources that come and go in hours rather than months or years. Everything is wiped out before being deployed and tested again.
-
-However, you can also focus on getting your network configurations or database data definition language (DDL) files into version control and start running small CI/CD pipelines on them. Maybe it just checks syntax or semantics or some best practices. Actually, this is how most development pipelines started. Once you get the scaffolding down, it will be easier to build on. You'll start to find all kinds of use cases for pipelines once you get started.
-
-For example, I regularly write a newsletter within my company, and I maintain it in version control using [MJML][7]. I needed to be able to host a web version, and some folks liked being able to get a PDF, so I built a [pipeline][8]. Now when I create a new newsletter, I submit it for a merge request in GitLab. This automatically creates an index.html with links to HTML and PDF versions of the newsletter. The HTML and PDF files are also created in the pipeline. None of this is published until someone comes and reviews these artifacts. Then, GitLab Pages publishes the website and I can pull down the HTML to send as a newsletter. In the future, I'll automatically send the newsletter when the merge request is merged or after a special approval step. This seems simple, but it has saved me a lot of time. This is really at the core of what these tools can do for you. They will save you time.
-
-The key is creating tools to work in the abstract so that they can apply to multiple problems with little change. I should also note that what I created required almost no code except [some light HTML templating][9], some [node to loop through the HTML files][10], and some more [node to populate the index page with all the HTML pages and PDFs][11].
-
-Some of this might look a little complex, but most of it was taken from the tutorials of the different tools I'm using. And many developers are happy to work with you on these types of things, as they might also find them useful when they're done. The links I've provided are to a newsletter we plan to start for [DevOps KC][12], and all the code for creating the site comes from the work I did on our internal newsletter.
-
-Many of the tools listed below can offer this type of interaction, but some offer a slightly different model. The emerging model in this space is that of a declarative description of a pipeline in something like YAML with each stage being ephemeral and idempotent. Many of these systems also ensure correct sequencing by creating a [directed acyclic graph][13] (DAG) over the different stages of the pipeline.
-
-These stages are often run in Linux containers and can do anything you can do in a container. Some tools, like [Spinnaker][14], focus only on the deployment component and offer some operational features that others don't normally include. [Jenkins][15] has generally kept pipelines in an XML format and most interactions occur within the GUI, but more recent implementations have used a [domain specific language][16] (DSL) using [Groovy][17]. Further, Jenkins jobs normally execute on nodes with a special Java agent installed and consist of a mix of plugins and pre-installed components.
-
-Jenkins introduced pipelines in its tool, but they were a bit challenging to use and contained several caveats. Recently, the creator of Jenkins decided to move the community toward a couple different initiatives that will hopefully breathe new life into the project—which is the one that really brought CI/CD to the masses. I think its most interesting initiative is creating a Cloud Native Jenkins that can turn a Kubernetes cluster into a Jenkins CI/CD platform.
-
-As you learn more about these tools and start bringing these practices into your company or your operations division, you'll quickly gain followers. You will increase your own productivity as well as that of others. We all have years of backlog to get to—how much would your co-workers love if you could give them enough time to start tackling that backlog? Not only that, but your customers will start to see increased application reliability, and your management will see you as a force multiplier. That certainly can't hurt during your next salary negotiation or when interviewing with all your new skills.
-
-Let's dig into the tools a bit more. We'll briefly cover each one and share links to more information.
-
-### GitLab CI
-
-GitLab is a fairly new entrant to the CI/CD space, but it's already achieved the top spot in the [Forrester Wave for Continuous Integration Tools][20]. That's a huge achievement in such a crowded and highly qualified field. What makes GitLab CI so great? It uses a YAML file to describe the entire pipeline. It also has a functionality called Auto DevOps that allows for simpler projects to have a pipeline built automatically with multiple tests built-in. This system uses [Herokuish buildpacks][21] to determine the language and how to build the application. Some languages can also manage databases, which is a real game-changer for building new applications and getting them deployed to production from the beginning of the development process. The system has native integrations into Kubernetes and will deploy your application automatically into a Kubernetes cluster using one of several different deployment methodologies, like percentage-based rollouts and blue-green deployments.
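-
-To give a feel for the format, here is a minimal, hypothetical `.gitlab-ci.yml` for a small Go project. The stage and job names are my own invention, but the keywords (`stages`, `image`, `script`, `artifacts`, `only`) are standard GitLab CI syntax:
-
-```
-stages:
-  - test
-  - build
-
-test:
-  stage: test
-  image: golang:1.11
-  script:
-    - go test ./...
-
-build:
-  stage: build
-  image: golang:1.11
-  script:
-    - go build -o app .
-  artifacts:
-    paths:
-      - app
-  only:
-    - master
-```
-
-Jobs in the same stage run in parallel, and stages run in the order listed.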
-
-In addition to its CI functionality, GitLab offers many complementary features like operations and monitoring with Prometheus deployed automatically with your application; portfolio and project management using GitLab Issues, Epics, and Milestones; security checks built into the pipeline with the results provided as an aggregate across multiple projects; and the ability to edit code right in GitLab using the WebIDE, which can even provide a preview or execute part of a pipeline for faster feedback.
-
-### GoCD
-
-GoCD comes from the great minds at Thoughtworks, which is testimony enough for its capabilities and efficiency. To me, GoCD's main differentiator from the rest of the pack is its [Value Stream Map][22] (VSM) feature. In fact, pipelines can be chained together with one pipeline providing the "material" for the next pipeline. This allows for increased independence for different teams with different responsibilities in the deployment process. This may be a useful feature when introducing this type of system in older organizations that intend to keep these teams separate—but having everyone using the same tool will make it easier later to find bottlenecks in the VSM and reorganize the teams or work to increase efficiencies.
-
-It's incredibly valuable to have a VSM for each product in a company; that GoCD allows this to be [described in JSON or YAML][23] in version control and presented visually with all the data around wait times makes this tool even more valuable to an organization trying to understand itself better. Start by installing GoCD and mapping out your process with only manual approval gates. Then have each team use the manual approvals so you can start collecting data on where bottlenecks might exist.
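-
-As a rough, hypothetical sketch of pipelines as code (this follows the format of GoCD's YAML config plugin; the pipeline, repository, and job names below are invented), a definition kept in version control might look something like this:
-
-```
-format_version: 3
-pipelines:
-  build-app:
-    group: example
-    materials:
-      app-code:
-        git: https://github.com/example/app.git
-    stages:
-      - test:
-          jobs:
-            run-tests:
-              tasks:
-                - exec:
-                    command: go
-                    arguments:
-                      - test
-                      - ./...
-```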
-
-### Travis CI
-
-Travis CI was my first experience with a Software as a Service (SaaS) CI system, and it's pretty awesome. The pipelines are stored as YAML with your source code, and it integrates seamlessly with tools like GitHub. I don't remember the last time a pipeline failed because of Travis CI or the integration—Travis CI has a very high uptime. Not only can it be used as SaaS, but it also has a version that can be hosted. I haven't run that version—there were a lot of components, and it looked a bit daunting to install all of it. I'm guessing it would be much easier to deploy it all to Kubernetes with [Helm charts provided by Travis CI][26]. Those charts don't deploy everything yet, but I'm sure they will grow even more in the future. There is also an enterprise version if you don't want to deal with the hassle.
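-
-For reference, a Travis CI pipeline can be tiny. This hypothetical `.travis.yml` for a Go project (the Go version is just an example) is enough to run the tests on every push:
-
-```
-language: go
-go:
-  - "1.11"
-script:
-  - go test ./...
-```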
-
-However, if you're developing open source code, you can use the SaaS version of Travis CI for free. That is an awesome service provided by an awesome team! This alleviates a lot of overhead and allows you to use a fairly common platform for developing open source code without having to run anything.
-
-### Jenkins
-
-Jenkins is the original, the venerable, de facto standard in CI/CD. If you haven't already, you need to read "[Jenkins: Shifting Gears][27]" from Kohsuke, the creator of Jenkins and CTO of CloudBees. It sums up all of my feelings about Jenkins and the community from the last decade. What he describes is something that has been needed for several years, and I'm happy CloudBees is taking the lead on this transformation. Jenkins will be a bit overwhelming to most non-developers and has long been a burden on its administrators. However, these are items they're aiming to fix.
-
-[Jenkins Configuration as Code][28] (JCasC) should help fix the complex configuration issues that have plagued admins for years. This will allow for a zero-touch configuration of Jenkins masters through a YAML file, similar to other CI/CD systems. [Jenkins Evergreen][29] aims to make this process even easier by providing predefined Jenkins configurations based on different use cases. These distributions should be easier to maintain and upgrade than the normal Jenkins distribution.
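-
-As a sketch of what that looks like (the values here are invented, not a recommended configuration), a JCasC `jenkins.yaml` describes the master declaratively:
-
-```
-jenkins:
-  systemMessage: "This master is configured entirely from YAML"
-  numExecutors: 2
-  securityRealm:
-    local:
-      allowsSignup: false
-      users:
-        - id: "admin"
-          password: "${ADMIN_PASSWORD}"
-```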
-
-Jenkins 2 introduced native pipeline functionality with two types of pipelines, which [I discuss][30] in a LISA17 presentation. Neither is as easy to navigate as YAML when you're doing something simple, but they're quite nice for doing more complex tasks.
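-
-For comparison with the YAML-based tools, here is a minimal declarative `Jenkinsfile` sketch; the stage name and shell command are placeholders:
-
-```
-pipeline {
-    agent any
-    stages {
-        stage('Test') {
-            steps {
-                sh 'go test ./...'
-            }
-        }
-    }
-}
-```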
-
-[Jenkins X][31] is the full transformation of Jenkins and will likely be the implementation of Cloud Native Jenkins (or at least the thing most users see when using Cloud Native Jenkins). It will take JCasC and Evergreen and use them at their best natively on Kubernetes. These are exciting times for Jenkins, and I look forward to its innovation and continued leadership in this space.
-
-### Concourse CI
-
-I was first introduced to Concourse through folks at Pivotal Labs when it was an early beta version—there weren't many tools like it at the time. The system is made of microservices, and each job runs within a container. One of its most useful features that other tools don't have is the ability to run a job from your local system with your local changes. This means you can develop locally (assuming you have a connection to the Concourse server) and run your builds just as they'll run in the real build pipeline. Also, you can rerun failed builds from your local system and inject specific changes to test your fixes.
-
-Concourse also has a simple extension system that relies on the fundamental concept of resources. Basically, each new feature you want to provide to your pipeline can be implemented in a Docker image and included as a new resource type in your configuration. This keeps all functionality encapsulated in a single, immutable artifact that can be upgraded and modified independently, and breaking changes don't necessarily have to break all your builds at the same time.
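-
-A hypothetical pipeline sketch shows the resource idea in practice; the repository URL, resource, job, and task names are all invented:
-
-```
-resources:
-  - name: source-code
-    type: git
-    source:
-      uri: https://github.com/example/app.git
-      branch: master
-
-jobs:
-  - name: test
-    plan:
-      - get: source-code
-        trigger: true
-      - task: run-tests
-        config:
-          platform: linux
-          image_resource:
-            type: docker-image
-            source:
-              repository: golang
-              tag: "1.11"
-          inputs:
-            - name: source-code
-          run:
-            path: sh
-            args: ["-ec", "cd source-code && go test ./..."]
-```
-
-Here the `git` resource type does the fetching; other resource types follow the same pattern, so new integrations are just new `type` and `source` values backed by their own images.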
-
-### Spinnaker
-
-Spinnaker comes from Netflix and is more focused on continuous deployment than continuous integration. It can integrate with other tools, including Travis and Jenkins, to kick off test and deployment pipelines. It also has integrations with monitoring tools like Prometheus and Datadog to make decisions about deployments based on metrics provided by these systems. For example, the canary deployment uses a judge concept and the metrics being collected to determine if the latest canary deployment has caused any degradation in pertinent metrics and should be rolled back or if deployment can continue.
-
-A couple of additional, unique features related to deployments cover an area that is often overlooked when discussing continuous deployment, and might even seem antithetical, but is critical to success: Spinnaker helps make continuous deployment a little less continuous. It will prevent a stage from running during certain times to prevent a deployment from occurring during a critical time in the application lifecycle. It can also enforce manual approvals to ensure the release occurs when the business will benefit the most from the change. In fact, the whole point of continuous integration and continuous deployment is to be ready to deploy changes as quickly as the business needs to change.
-
-### Screwdriver
-
-Screwdriver is an impressively simple piece of engineering. It uses a microservices approach and relies on tools like Nomad, Kubernetes, and Docker to act as its execution engine. There is a pretty good [deployment tutorial][34] for deploying to AWS and Kubernetes, but it could be improved once the in-progress [Helm chart][35] is completed.
-
-Screwdriver also uses YAML for its pipeline descriptions and includes a lot of sensible defaults, so there's less boilerplate configuration for each pipeline. The configuration describes an advanced workflow that can have complex dependencies among jobs. For example, a job can be guaranteed to run after or before another job. Jobs can run in parallel and be joined afterward. You can also use logical operators to run a job, for example, if any of its dependencies are successful or only if all are successful. Even better is that you can specify certain jobs to be triggered from a pull request. Also, dependent jobs won't run when this occurs, which allows easy segregation of your pipeline for when an artifact should go to production and when it still needs to be reviewed.
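-
-As a hypothetical sketch (job names and commands are invented, modeled on the project's documented format), a `screwdriver.yaml` expressing that kind of workflow could look like:
-
-```
-jobs:
-  main:
-    image: node:10
-    requires: [~pr, ~commit]
-    steps:
-      - install: npm install
-      - test: npm test
-  publish:
-    image: node:10
-    requires: [main]
-    steps:
-      - publish: npm publish
-```
-
-The `~pr` and `~commit` annotations are what let a job run on pull requests or pushes, while `requires: [main]` keeps the publish job out of pull-request builds.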
-
-This is only a brief description of these CI/CD tools—each has even more cool features and differentiators you can investigate. They are all open source and free to use, so go deploy them and see which one fits your needs best.
-
-### What to read next
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/12/cicd-tools-sysadmins
-
-作者:[Dan Barker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/barkerd427
-[b]: https://github.com/lujun9972
-[1]: https://www.ansible.com/
-[2]: https://www.chef.io/
-[3]: https://puppet.com/
-[4]: https://github.com/test-kitchen/test-kitchen
-[5]: https://www.merriam-webster.com/dictionary/ephemeral
-[6]: https://en.wikipedia.org/wiki/Idempotence
-[7]: https://mjml.io/
-[8]: https://gitlab.com/devopskc/newsletter/blob/master/.gitlab-ci.yml
-[9]: https://gitlab.com/devopskc/newsletter/blob/master/index/index.html
-[10]: https://gitlab.com/devopskc/newsletter/blob/master/html-to-pdf.js
-[11]: https://gitlab.com/devopskc/newsletter/blob/master/populate-index.js
-[12]: https://devopskc.com/
-[13]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
-[14]: https://www.spinnaker.io/
-[15]: https://jenkins.io/
-[16]: https://martinfowler.com/books/dsl.html
-[17]: http://groovy-lang.org/
-[18]: https://about.gitlab.com/product/continuous-integration/
-[19]: https://gitlab.com/gitlab-org/gitlab-ce/
-[20]: https://about.gitlab.com/2017/09/27/gitlab-leader-continuous-integration-forrester-wave/
-[21]: https://github.com/gliderlabs/herokuish
-[22]: https://www.gocd.org/getting-started/part-3/#value_stream_map
-[23]: https://docs.gocd.org/current/advanced_usage/pipelines_as_code.html
-[24]: https://docs.travis-ci.com/
-[25]: https://github.com/travis-ci/travis-ci
-[26]: https://github.com/travis-ci/kubernetes-config
-[27]: https://jenkins.io/blog/2018/08/31/shifting-gears/
-[28]: https://jenkins.io/projects/jcasc/
-[29]: https://github.com/jenkinsci/jep/blob/master/jep/300/README.adoc
-[30]: https://danbarker.codes/talk/lisa17-becoming-plumber-building-deployment-pipelines/
-[31]: https://jenkins-x.io/
-[32]: https://concourse-ci.org/
-[33]: https://github.com/concourse/concourse
-[34]: https://docs.screwdriver.cd/cluster-management/kubernetes
-[35]: https://github.com/screwdriver-cd/screwdriver-chart
diff --git a/sources/talk/20190110 Toyota Motors and its Linux Journey.md b/sources/talk/20190110 Toyota Motors and its Linux Journey.md
deleted file mode 100644
index 1d76ffe0b6..0000000000
--- a/sources/talk/20190110 Toyota Motors and its Linux Journey.md
+++ /dev/null
@@ -1,64 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Toyota Motors and its Linux Journey)
-[#]: via: (https://itsfoss.com/toyota-motors-linux-journey)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-Toyota Motors and its Linux Journey
-======
-
-**This is a community submission from It’s FOSS reader Malcolm Dean.**
-
-I spoke with Brian R Lyons of TMNA Toyota Motor Corp North America about the implementation of Linux in Toyota and Lexus infotainment systems. I came to find out that Automotive Grade Linux (AGL) is being used by several automobile manufacturers.
-
-I put together a short article comprising my discussion with Brian about Toyota and its tryst with Linux. I hope that Linux enthusiasts will like this quick little chat.
-
-All [Toyota vehicles and Lexus vehicles are going to use Automotive Grade Linux][1] (AGL), primarily for the infotainment system. This is instrumental for Toyota Motor Corp because, as Mr. Lyons puts it, “As a technology leader, Toyota realized that adopting open source development methodology is the best way to keep up with the rapid pace of new technologies”.
-
-Toyota, among other automotive companies, thought that going with a Linux-based operating system might be cheaper and quicker when it comes to updates and upgrades, compared to using proprietary software.
-
-Wow! Finally Linux in a vehicle. I use Linux every day on my desktop; what a great way to expand the use of this awesome software to a completely different industry.
-
-I was curious when Toyota decided to use the [Automotive Grade Linux][2] (AGL). According to Mr. Lyons, it goes back to 2011.
-
-> “Toyota has been an active member and contributor to AGL since its launch more than five years ago, collaborating with other OEMs, Tier 1s and suppliers to develop a robust, Linux-based platform with increased security and capabilities”
-
-![Toyota Infotainment][3]
-
-In 2011, [Toyota joined the Linux Foundation][4] and started discussions about IVI (In-Vehicle Infotainment) software with other car OEMs and software companies. As a result, in 2012, the Automotive Grade Linux working group was formed within the Linux Foundation.
-
-What Toyota did at first in the AGL group was to take a “code first” approach, as is normal in open source domains, and then start the conversation about the initial direction by specifying requirements that had been discussed among car OEMs, IVI Tier-1 companies, software companies, and so on.
-
-Toyota had already realized that sharing software code among Tier-1 companies was going to be essential by the time it joined the Linux Foundation. This was because maintaining such a huge codebase was very costly and no longer a point of differentiation among Tier-1 companies. Toyota and its Tier-1 suppliers wanted to spend more resources on new functions and new user experiences rather than maintaining conventional code all by themselves.
-
-This is a huge thing as automotive companies have gone in together to further their cooperation. Many companies have adopted this after finding proprietary software to be expensive.
-
-Today, AGL is used for all Toyota and Lexus vehicles and is used in all markets where vehicles are sold.
-
-As someone who has sold cars for Lexus, I think this is a huge step forward. I and other sales associates had many customers who would come back to speak with a technology specialist to learn about the full capabilities of their infotainment system.
-
-I see this as a huge step forward for the Linux community and its users. The operating system we use on a daily basis is being put to work right in front of us, albeit in a modified form, but it is there nonetheless.
-
-Where does this lead? Hopefully, to a more user-friendly and less glitchy experience for consumers.
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/toyota-motors-linux-journey
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://www.linuxfoundation.org/press-release/2018/01/automotive-grade-linux-hits-road-globally-toyota-amazon-alexa-joins-agl-support-voice-recognition/
-[2]: https://www.automotivelinux.org/
-[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/toyota-interiors.jpg?resize=800%2C450&ssl=1
-[4]: https://www.linuxfoundation.org/press-release/2011/07/toyota-joins-linux-foundation/
diff --git a/sources/talk/20190121 Booting Linux faster.md b/sources/talk/20190121 Booting Linux faster.md
deleted file mode 100644
index ef79351e0e..0000000000
--- a/sources/talk/20190121 Booting Linux faster.md
+++ /dev/null
@@ -1,54 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Booting Linux faster)
-[#]: via: (https://opensource.com/article/19/1/booting-linux-faster)
-[#]: author: (Stewart Smith https://opensource.com/users/stewart-ibm)
-
-Booting Linux faster
-======
-Doing Linux kernel and firmware development leads to lots of reboots and lots of wasted time.
-
-Of all the computers I've ever owned or used, the one that booted the quickest was from the 1980s; by the time your hand moved from the power switch to the keyboard, the BASIC interpreter was ready for your commands. Modern computers take anywhere from 15 seconds for a laptop to minutes for a small home server to boot. Why is there such a difference in boot times?
-
-A microcomputer from the 1980s that booted straight to a BASIC prompt had a very simple CPU that started fetching and executing instructions from a memory address immediately upon getting power. Since these systems had BASIC in ROM, there was no loading time—you got to the BASIC prompt really quickly. More complex systems of that same era, such as the IBM PC or Macintosh, took a significant time to boot (~30 seconds), although this was mostly due to having to read the operating system (OS) off a floppy disk. Only a handful of seconds were spent in firmware before being able to load an OS.
-
-Modern servers typically spend minutes, rather than seconds, in firmware before getting to the point of booting an OS from disk. This is largely due to modern systems' increased complexity. No longer can a CPU just come up and start executing instructions at full speed; we've become accustomed to CPU frequency scaling, idle states that save a lot of power, and multiple CPU cores. In fact, inside modern CPUs are a surprising number of simpler CPUs that help start the main CPU cores and provide runtime services such as throttling the frequency when it gets too hot. On most CPU architectures, the code running on these cores inside your CPU is provided as opaque binary blobs.
-
-On OpenPOWER systems, every instruction executed on every core inside the CPU is open source software. On machines with [OpenBMC][1] (such as IBM's AC922 system and Raptor's TALOS II and Blackbird systems), this extends to the code running on the Baseboard Management Controller as well. This means we can get a tremendous amount of insight into what takes so long from the time you plug in a power cable to the time a familiar login prompt is displayed.
-
-If you're part of a team that works on the Linux kernel, you probably boot a lot of kernels. If you're part of a team that works on firmware, you're probably going to boot a lot of different firmware images, followed by an OS to ensure your firmware still works. If we can reduce the hardware's boot time, these teams can become more productive, and end users may be grateful when they're setting up systems or rebooting to install firmware or OS updates.
-
-Over the years, many improvements have been made to Linux distributions' boot time. Modern init systems deal well with doing things concurrently and on-demand. On a modern system, once the kernel starts executing, it can take very few seconds to get to a login prompt. This handful of seconds is not the place to optimize boot time; we have to go earlier: before we get to the OS.
-
-On OpenPOWER systems, the firmware loads an OS by booting a Linux kernel stored in the firmware flash chip that runs a userspace program called [Petitboot][2] to find the disk that holds the OS the user wants to boot and [kexec()][3] to it. This code reuse leverages the efforts that have gone into making Linux boot quicker. Even so, we found places in our kernel config and userspace where we could improve and easily shave seconds off boot time. With these optimizations, booting the Petitboot environment is a single-digit percentage of boot time, so we had to find more improvements elsewhere.
-
-Before the Petitboot environment starts, there's a prior bit of firmware called [Skiboot][4], and before that there's [Hostboot][5]. Prior to Hostboot is the [Self-Boot Engine][6], a separate core on the die that gets a single CPU core up and executing instructions out of Level 3 cache. These components are where we can make the most headway in reducing boot time, as they take up the overwhelming majority of it. Perhaps some of these components aren't optimized enough or doing as much in parallel as they could be?
-
-Another avenue of attack is reboot time rather than boot time. On a reboot, do we really need to reinitialize all the hardware?
-
-Like any modern system, the solutions to improving boot (and reboot) time have been a mixture of doing more in parallel, dealing with legacy, and (arguably) cheating.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/booting-linux-faster
-
-作者:[Stewart Smith][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/stewart-ibm
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/OpenBMC
-[2]: https://github.com/open-power/petitboot
-[3]: https://en.wikipedia.org/wiki/Kexec
-[4]: https://github.com/open-power/skiboot
-[5]: https://github.com/open-power/hostboot
-[6]: https://github.com/open-power/sbe
-[7]: https://linux.conf.au/schedule/presentation/105/
-[8]: https://linux.conf.au/
diff --git a/sources/talk/20190123 Book Review- Fundamentals of Linux.md b/sources/talk/20190123 Book Review- Fundamentals of Linux.md
deleted file mode 100644
index 5e0cffd9bc..0000000000
--- a/sources/talk/20190123 Book Review- Fundamentals of Linux.md
+++ /dev/null
@@ -1,74 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Book Review: Fundamentals of Linux)
-[#]: via: (https://itsfoss.com/fundamentals-of-linux-book-review)
-[#]: author: (John Paul https://itsfoss.com/author/john/)
-
-Book Review: Fundamentals of Linux
-======
-
-There are many great books that cover the basics of what Linux is and how it works. Today, I will be taking a look at one such book. Today, the subject of our discussion is [Fundamentals of Linux][1] by Oliver Pelz and is published by [PacktPub][2].
-
-[Oliver Pelz][3] has over ten years of experience as a software developer and a system administrator. He holds a degree in bioinformatics.
-
-### What is the book ‘Fundamentals of Linux’ about?
-
-![Fundamental of Linux books][4]
-
-As can be guessed from the title, the goal of Fundamentals of Linux is to give the reader a strong foundation from which to learn about the Linux command line. The book is a little over two hundred pages long, so it only focuses on teaching the everyday tasks and problems that users commonly encounter. The book is designed for readers who want to become Linux administrators.
-
-The first chapter starts out by giving an overview of virtualization. From there the author instructs how to create a virtual instance of [CentOS][5] in [VirtualBox][6], how to clone it, and how to use snapshots. You will also learn how to connect to the virtual machines via SSH.
-
-The second chapter covers the basics of the Linux command line. This includes shell globbing, shell expansion, and how to work with file names that contain spaces or special characters. It also explains how to interpret a command’s manual page, how to use `sed` and `awk`, and how to navigate the Linux file system.
-
-The third chapter takes a more in-depth look at the Linux file system. You will learn how files are linked in Linux and how to search for them. You will also be given an overview of users, groups, and file permissions. Since the chapter focuses on interacting with files, it explains how to read text files from the command line and gives an overview of how to use the Vim editor.
-
-Chapter four focuses on using the command line. It covers important commands, such as `cat`, `sort`, `awk`, `tee`, `tar`, `rsync`, `nmap`, `htop`, and more. You will learn what processes are and how they communicate with each other. This chapter also includes an introduction to Bash shell scripting.
-
-The fifth and final chapter covers networking on Linux and other advanced command line concepts. The author discusses how Linux handles networking and gives examples using multiple virtual machines. He also covers how to install new programs and how to set up a firewall.
-
-### Thoughts on the book
-
-Fundamentals of Linux might seem short at five chapters and a little over two hundred pages. However, quite a bit of information is covered. You are given everything that you need to get going on the command line.
-
-The book’s sole focus on the command line is one thing to keep in mind. You won’t get any information on how to use a graphical user interface. That is partially because Linux has so many different desktop environments and so many similar system applications that it would be hard to write a book that could cover all of the variables. It is also partially because the book is aimed at potential Linux administrators.
-
-I was kinda surprised to see that the author used [CentOS][7] to teach Linux. I would have expected him to use a more common Linux distro, like Ubuntu, Debian, or Fedora. However, because it is a distro designed for servers, very little changes over time, which makes it a very stable basis for a course on Linux basics.
-
-I’ve used Linux for over half a decade. I spent most of that time using desktop Linux. I dove into the terminal when I needed to, but didn’t spend lots of time there. I have performed many of the actions covered in this book using a mouse. Now, I know how to do the same things via the terminal. It won’t change the way I do my tasks, but it will help me understand what goes on behind the curtain.
-
-If you have either just started using Linux or are planning to do so in the future, I would not recommend this book. It might be a little overwhelming. If you have already spent some time with Linux or can quickly grasp the technical language, this book may very well be for you.
-
-If you think this book is apt for your learning needs, you can get the book from the link below:
-
-We will be trying to review more Linux books in the coming months, so stay tuned.
-
-What is your favorite introductory book on Linux? Let us know in the comments below.
-
-If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][8].
-
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/fundamentals-of-linux-book-review
-
-作者:[John Paul][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/john/
-[b]: https://github.com/lujun9972
-[1]: https://www.packtpub.com/networking-and-servers/fundamentals-linux
-[2]: https://www.packtpub.com/
-[3]: http://www.oliverpelz.de/index.html
-[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/fundamentals-of-linux-book-review.jpeg?resize=800%2C450&ssl=1
-[5]: https://centos.org/
-[6]: https://www.virtualbox.org/
-[7]: https://www.centos.org/
-[8]: http://reddit.com/r/linuxusersgroup
diff --git a/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md b/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md
new file mode 100644
index 0000000000..fd93cdd9a6
--- /dev/null
+++ b/sources/talk/20190131 4 confusing open source license scenarios and how to navigate them.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 confusing open source license scenarios and how to navigate them)
+[#]: via: (https://opensource.com/article/19/1/open-source-license-scenarios)
+[#]: author: (P.Kevin Nelson https://opensource.com/users/pkn4645)
+
+4 confusing open source license scenarios and how to navigate them
+======
+
+Before you begin using a piece of software, make sure you fully understand the terms of its license.
+
+
+
+As an attorney running an open source program office for a Fortune 500 corporation, I am often asked to look into a product or component where there seems to be confusion as to the licensing model. Under what terms can the code be used, and what obligations run with such use? This often happens when the code or the associated project community does not clearly indicate availability under a [commonly accepted open source license][1]. The confusion is understandable as copyright owners often evolve their products and services in different directions in response to market demands. Here are some of the scenarios I commonly discover and how you can approach each situation.
+
+### Multiple licenses
+
+The product is truly open source with an [Open Source Initiative][2] (OSI) open source-approved license, but has changed licensing models at least once if not multiple times throughout its lifespan. This scenario is fairly easy to address; the user simply has to decide if the latest version with its attendant features and bug fixes is worth the conditions to be compliant with the current license. If so, great. If not, then the user can move back in time to a version released under a more palatable license and start from that fork, understanding that there may not be an active community for support and continued development.
+
+### Old open source
+
+This is a variation on the multiple licenses model with the twist that current licensing is proprietary only. You have to use an older version to take advantage of open source terms and conditions. Most often, the product was released under a valid open source license up to a certain point in its development, but then the copyright holder chose to evolve the code in a proprietary fashion and offer new releases only under proprietary commercial licensing terms. So, if you want the newest capabilities, you have to purchase a proprietary license, and you most likely will not get a copy of the underlying source code. Most often the open source community that grew up around the original code line falls away once the members understand there will be no further commitment from the copyright holder to the open source branch. While this scenario is understandable from the copyright holder's perspective, it can be seen as "burning a bridge" to the open source community. It would be very difficult to again leverage the benefits of the open source contribution models once a project owner follows this path.
+
+### Open core
+
+By far the most common discovery is that a product has both an open source-licensed "community edition" and a proprietary-licensed commercial offering, commonly referred to as open core. This is often encouraging to potential consumers, as it gives them a "try before you buy" option or even a chance to influence both versions of the product by becoming an active member of the community. I usually encourage clients to begin with the community version, get involved, and see what they can achieve. Then, if the product becomes a crucial part of their business plan, they have the option to upgrade to the proprietary level at any time.
+
+### Freemium
+
+The component is not open source at all, but instead it is released under some version of the "freemium" model. A version with restricted or time-limited functionality can be downloaded with no immediate purchase required. However, since the source code is usually not provided and its accompanying license does not allow perpetual use, the creation of derivative works, or further distribution, it is definitely not open source. In this scenario, it is usually best to pass unless you are prepared to purchase a proprietary license and accept all attendant terms and conditions of use. Users are often the most disappointed in this outcome as it has somewhat of a deceptive feel.
+
+### OSI compliant
+
+Of course, the happy path I haven't mentioned is to discover the project has a single, clear, OSI-compliant license. In those situations, using the open source software is as easy as downloading it and proceeding within the appropriate terms of use.
+
+Each of the more complex scenarios described above can present problems to potential development projects, but consultation with skilled procurement or intellectual property professionals with regard to licensing lineage can reveal excellent opportunities.
+
+An earlier version of this article was published on [OSS Law][3] and is republished with the author's permission.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/open-source-license-scenarios
+
+作者:[P.Kevin Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/pkn4645
+[b]: https://github.com/lujun9972
+[1]: https://opensource.org/licenses
+[2]: https://opensource.org/licenses/category
+[3]: http://www.pknlaw.com/2017/06/i-thought-that-was-open-source.html
diff --git a/sources/talk/20190131 OOP Before OOP with Simula.md b/sources/talk/20190131 OOP Before OOP with Simula.md
new file mode 100644
index 0000000000..cae9d9bd3a
--- /dev/null
+++ b/sources/talk/20190131 OOP Before OOP with Simula.md
@@ -0,0 +1,203 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (OOP Before OOP with Simula)
+[#]: via: (https://twobithistory.org/2019/01/31/simula.html)
+[#]: author: (Sinclair Target https://twobithistory.org)
+
+OOP Before OOP with Simula
+======
+
+Imagine that you are sitting on the grassy bank of a river. Ahead of you, the water flows past swiftly. The afternoon sun has put you in an idle, philosophical mood, and you begin to wonder whether the river in front of you really exists at all. Sure, large volumes of water are going by only a few feet away. But what is this thing that you are calling a “river”? After all, the water you see is here and then gone, to be replaced only by more and different water. It doesn’t seem like the word “river” refers to any fixed thing in front of you at all.
+
+In 2009, Rich Hickey, the creator of Clojure, gave [an excellent talk][1] about why this philosophical quandary poses a problem for the object-oriented programming paradigm. He argues that we think of an object in a computer program the same way we think of a river—we imagine that the object has a fixed identity, even though many or all of the object’s properties will change over time. Doing this is a mistake, because we have no way of distinguishing between an object instance in one state and the same object instance in another state. We have no explicit notion of time in our programs. We just breezily use the same name everywhere and hope that the object is in the state we expect it to be in when we reference it. Inevitably, we write bugs.
+
+The solution, Hickey concludes, is that we ought to model the world not as a collection of mutable objects but a collection of processes acting on immutable data. We should think of each object as a “river” of causally related states. In sum, you should use a functional language like Clojure.
+
+![][2]
+The author, on a hike, pondering the ontological commitments
+of object-oriented programming.
+
+Since Hickey gave his talk in 2009, interest in functional programming languages has grown, and functional programming idioms have found their way into the most popular object-oriented languages. Even so, most programmers continue to instantiate objects and mutate them in place every day. And they have been doing it for so long that it is hard to imagine that programming could ever look different.
+
+I wanted to write an article about Simula and imagined that it would mostly be about when and how object-oriented constructs we are familiar with today were added to the language. But I think the more interesting story is about how Simula was originally so unlike modern object-oriented programming languages. This shouldn’t be a surprise, because the object-oriented paradigm we know now did not spring into existence fully formed. There were two major versions of Simula: Simula I and Simula 67. Simula 67 brought the world classes, class hierarchies, and virtual methods. But Simula I was a first draft that experimented with other ideas about how data and procedures could be bundled together. The Simula I model is not a functional model like the one Hickey proposes, but it does focus on processes that unfold over time rather than objects with hidden state that interact with each other. Had Simula 67 stuck with more of Simula I’s ideas, the object-oriented paradigm we know today might have looked very different indeed—and that contingency should teach us to be wary of assuming that the current paradigm will dominate forever.
+
+### Simula 0 Through 67
+
+Simula was created by two Norwegians, Kristen Nygaard and Ole-Johan Dahl.
+
+In the late 1950s, Nygaard was employed by the Norwegian Defense Research Establishment (NDRE), a research institute affiliated with the Norwegian military. While there, he developed Monte Carlo simulations used for nuclear reactor design and operations research. These simulations were at first done by hand and then eventually programmed and run on a Ferranti Mercury. Nygaard soon found that he wanted a higher-level way to describe these simulations to a computer.
+
+The kind of simulation that Nygaard commonly developed is known as a “discrete event model.” The simulation captures how a sequence of events change the state of a system over time—but the important property here is that the simulation can jump from one event to the next, since the events are discrete and nothing changes in the system between events. This kind of modeling, according to a paper that Nygaard and Dahl presented about Simula in 1966, was increasingly being used to analyze “nerve networks, communication systems, traffic flow, production systems, administrative systems, social systems, etc.” So Nygaard thought that other people might want a higher-level way to describe these simulations too. He began looking for someone that could help him implement what he called his “Simulation Language” or “Monte Carlo Compiler.”
+
+Dahl, who had also been employed by NDRE, where he had worked on language design, came aboard at this point to play Wozniak to Nygaard’s Jobs. Over the next year or so, Nygaard and Dahl worked to develop what has been called “Simula 0.” This early version of the language was going to be merely a modest extension to ALGOL 60, and the plan was to implement it as a preprocessor. The language was then much less abstract than what came later. The primary language constructs were “stations” and “customers.” These could be used to model certain discrete event networks; Nygaard and Dahl give an example simulating airport departures. But Nygaard and Dahl eventually came up with a more general language construct that could represent both “stations” and “customers” and also model a wider range of simulations. This was the first of two major generalizations that took Simula from being an application-specific ALGOL package to a general-purpose programming language.
+
+In Simula I, there were no “stations” or “customers,” but these could be recreated using “processes.” A process was a bundle of data attributes associated with a single action known as the process’ operating rule. You might think of a process as an object with only a single method, called something like `run()`. This analogy is imperfect though, because each process’ operating rule could be suspended or resumed at any time—the operating rules were a kind of coroutine. A Simula I program would model a system as a set of processes that conceptually all ran in parallel. Only one process could actually be “current” at any time, but once a process suspended itself the next queued process would automatically take over. As the simulation ran, behind the scenes, Simula would keep a timeline of “event notices” that tracked when each process should be resumed. In order to resume a suspended process, Simula needed to keep track of multiple call stacks. This meant that Simula could no longer be an ALGOL preprocessor, because ALGOL had only one call stack. Nygaard and Dahl were committed to writing their own compiler.
+
+In their paper introducing this system, Nygaard and Dahl illustrate its use by implementing a simulation of a factory with a limited number of machines that can serve orders. The process here is the order, which starts by looking for an available machine, suspends itself to wait for one if none are available, and then runs to completion once a free machine is found. There is a definition of the order process that is then used to instantiate several different order instances, but no methods are ever called on these instances. The main part of the program just creates the processes and sets them running.
+
+The first Simula I compiler was finished in 1965. The language grew popular at the Norwegian Computer Center, where Nygaard and Dahl had gone to work after leaving NDRE. Implementations of Simula I were made available to UNIVAC users and to Burroughs B5500 users. Nygaard and Dahl did a consulting deal with a Swedish company called ASEA that involved using Simula to run job shop simulations. But Nygaard and Dahl soon realized that Simula could be used to write programs that had nothing to do with simulation at all.
+
+Stein Krogdahl, a professor at the University of Oslo that has written about the history of Simula, claims that “the spark that really made the development of a new general-purpose language take off” was [a paper called “Record Handling”][3] by the British computer scientist C.A.R. Hoare. If you read Hoare’s paper now, this is easy to believe. I’m surprised that you don’t hear Hoare’s name more often when people talk about the history of object-oriented languages. Consider this excerpt from his paper:
+
+> The proposal envisages the existence inside the computer during the execution of the program, of an arbitrary number of records, each of which represents some object which is of past, present or future interest to the programmer. The program keeps dynamic control of the number of records in existence, and can create new records or destroy existing ones in accordance with the requirements of the task in hand.
+
+> Each record in the computer must belong to one of a limited number of disjoint record classes; the programmer may declare as many record classes as he requires, and he associates with each class an identifier to name it. A record class name may be thought of as a common generic term like “cow,” “table,” or “house” and the records which belong to these classes represent the individual cows, tables, and houses.
+
+Hoare does not mention subclasses in this particular paper, but Dahl credits him with introducing Nygaard and himself to the concept. Nygaard and Dahl had noticed that processes in Simula I often had common elements. Using a superclass to implement those common elements would be convenient. This also raised the possibility that the “process” idea itself could be implemented as a superclass, meaning that not every class had to be a process with a single operating rule. This then was the second great generalization that would make Simula 67 a truly general-purpose programming language. It was such a shift of focus that Nygaard and Dahl briefly considered changing the name of the language so that people would know it was not just for simulations. But “Simula” was too much of an established name for them to risk it.
+
+In 1967, Nygaard and Dahl signed a contract with Control Data to implement this new version of Simula, to be known as Simula 67. A conference was held in June, where people from Control Data, the University of Oslo, and the Norwegian Computing Center met with Nygaard and Dahl to establish a specification for this new language. This conference eventually led to a document called the [“Simula 67 Common Base Language,”][4] which defined the language going forward.
+
+Several different vendors would make Simula 67 compilers. The Association of Simula Users (ASU) was founded and began holding annual conferences. Simula 67 soon had users in more than 23 different countries.
+
+### 21st Century Simula
+
+Simula is remembered now because of its influence on the languages that have supplanted it. You would be hard-pressed to find anyone still using Simula to write application programs. But that doesn’t mean that Simula is an entirely dead language. You can still compile and run Simula programs on your computer today, thanks to [GNU cim][5].
+
+The cim compiler implements the Simula standard as it was after a revision in 1986. But this is mostly the Simula 67 version of the language. You can write classes, subclass, and virtual methods just as you would have with Simula 67. So you could create a small object-oriented program that looks a lot like something you could easily write in Python or Ruby:
+
+```
+! dogs.sim ;
+Begin
+ Class Dog;
+ ! The cim compiler requires virtual procedures to be fully specified ;
+ Virtual: Procedure bark Is Procedure bark;;
+ Begin
+ Procedure bark;
+ Begin
+ OutText("Woof!");
+ OutImage; ! Outputs a newline ;
+ End;
+ End;
+
+ Dog Class Chihuahua; ! Chihuahua is "prefixed" by Dog ;
+ Begin
+ Procedure bark;
+ Begin
+ OutText("Yap yap yap yap yap yap");
+ OutImage;
+ End;
+ End;
+
+ Ref (Dog) d;
+ d :- new Chihuahua; ! :- is the reference assignment operator ;
+ d.bark;
+End;
+```
+
+You would compile and run it as follows:
+
+```
+$ cim dogs.sim
+Compiling dogs.sim:
+gcc -g -O2 -c dogs.c
+gcc -g -O2 -o dogs dogs.o -L/usr/local/lib -lcim
+$ ./dogs
+Yap yap yap yap yap yap
+```
+
+(You might notice that cim compiles Simula to C, then hands off to a C compiler.)
+
+This was what object-oriented programming looked like in 1967, and I hope you agree that aside from syntactic differences this is also what object-oriented programming looks like in 2019. So you can see why Simula is considered a historically important language.
+
+But I’m more interested in showing you the process model that was central to Simula I. That process model is still available in Simula 67, but only when you use the `Process` class and a special `Simulation` block.
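+
+Just to make the mechanics concrete before the longer example, here is a tiny sketch of my own (not code from Nygaard and Dahl; the class and output are invented) showing what a `Simulation` block with a `Process` class looks like under GNU cim:
+
+```
+! orders.sim -- a toy sketch of the process model ;
+Begin
+   Simulation Begin
+      Process Class Order(id); Integer id;
+      Begin
+         OutText("Order "); OutInt(id, 2);
+         OutText(" starts at t="); OutFix(Time, 2, 8); OutImage;
+         Hold(10.0); ! spend ten units of simulated time ;
+         OutText("Order "); OutInt(id, 2);
+         OutText(" finishes at t="); OutFix(Time, 2, 8); OutImage;
+      End;
+
+      Integer i;
+      For i := 1 Step 1 Until 3 Do
+         Activate new Order(i) Delay 5.0 * i; ! put three processes on the timeline ;
+      Hold(100.0); ! the main program is itself a process and waits here ;
+   End;
+End;
+```
+
+The main block never calls anything on an `Order` after activating it; the event notice system decides when each process resumes.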
+
+In order to show you how processes work, I’ve decided to simulate the following scenario. Imagine that there is a village full of villagers next to a river. The river has lots of fish, but between them the villagers only have one fishing rod. The villagers, who have voracious appetites, get hungry every 60 minutes or so. When they get hungry, they have to use the fishing rod to catch a fish. If a villager cannot use the fishing rod because another villager is waiting for it, then the villager queues up to use the fishing rod. If a villager has to wait more than five minutes to catch a fish, then the villager loses health. If a villager loses too much health, then that villager has starved to death.
+
+This is a somewhat strange example and I’m not sure why this is what first came to mind. But there you go. We will represent our villagers as Simula processes and see what happens over a day’s worth of simulated time in a village with four villagers.
+
+The full program is [available here as a Gist][6].
+
+The last lines of my output look like the following. Here we are seeing what happens in the last few hours of the day:
+
+```
+1299.45: John is hungry and requests the fishing rod.
+1299.45: John is now fishing.
+1311.39: John has caught a fish.
+1328.96: Betty is hungry and requests the fishing rod.
+1328.96: Betty is now fishing.
+1331.25: Jane is hungry and requests the fishing rod.
+1340.44: Betty has caught a fish.
+1340.44: Jane went hungry waiting for the rod.
+1340.44: Jane starved to death waiting for the rod.
+1369.21: John is hungry and requests the fishing rod.
+1369.21: John is now fishing.
+1379.33: John has caught a fish.
+1409.59: Betty is hungry and requests the fishing rod.
+1409.59: Betty is now fishing.
+1419.98: Betty has caught a fish.
+1427.53: John is hungry and requests the fishing rod.
+1427.53: John is now fishing.
+1437.52: John has caught a fish.
+```
+
+Poor Jane starved to death. But she lasted longer than Sam, who didn’t even make it to 7am. Betty and John sure have it good now that only two of them need the fishing rod.
+
+What I want you to see here is that the main, top-level part of the program does nothing but create the four villager processes and get them going. The processes manipulate the fishing rod object in the same way that we would manipulate an object today. But the main part of the program does not call any methods or modify any properties on the processes. The processes have internal state, but this internal state only gets modified by the process itself.
+
+There are still fields that get mutated in place here, so this style of programming does not directly address the problems that pure functional programming would solve. But as Krogdahl observes, “this mechanism invites the programmer of a simulation to model the underlying system as a set of processes, each describing some natural sequence of events in that system.” Rather than thinking primarily in terms of nouns or actors—objects that do things to other objects—here we are thinking of ongoing processes. The benefit is that we can hand overall control of our program off to Simula’s event notice system, which Krogdahl calls a “time manager.” So even though we are still mutating processes in place, no process makes any assumptions about the state of another process. Each process interacts with other processes only indirectly.
+
+It’s not obvious how this pattern could be used to build, say, a compiler or an HTTP server. (On the other hand, if you’ve ever programmed games in the Unity game engine, this should look familiar.) I also admit that even though we have a “time manager” now, this may not have been exactly what Hickey meant when he said that we need an explicit notion of time in our programs. (I think he’d want something like the superscript notation [that Ada Lovelace used][7] to distinguish between the different values a variable assumes through time.) All the same, I think it’s really interesting that right there at the beginning of object-oriented programming we can find a style of programming that is not at all like the object-oriented programming we are used to. We might take it for granted that object-oriented programming simply works one way—that a program is just a long list of the things that certain objects do to other objects in the exact order that they do them. Simula I’s process system shows that there are other approaches. Functional languages are probably a better thought-out alternative, but Simula I reminds us that the very notion of alternatives to modern object-oriented programming should come as no surprise.
+
+If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][8] on Twitter or subscribe to the [RSS feed][9] to make sure you know when a new post is out.
+
+Previously on TwoBitHistory…
+
+> Hey everyone! I sadly haven't had time to do any new writing but I've just put up an updated version of my history of RSS. This version incorporates interviews I've since done with some of the key people behind RSS like Ramanathan Guha and Dan Libby.
+>
+> — TwoBitHistory (@TwoBitHistory) [December 18, 2018][10]
+
+
+
+--------------------------------------------------------------------------------
+
+1. Jan Rune Holmevik, “The History of Simula,” accessed January 31, 2019, http://campus.hesge.ch/daehne/2004-2005/langages/simula.htm. ↩
+
+2. Ole-Johan Dahl and Kristen Nygaard, “SIMULA—An ALGOL-Based Simulation Langauge,” Communications of the ACM 9, no. 9 (September 1966): 671, accessed January 31, 2019, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.384&rep=rep1&type=pdf. ↩
+
+3. Stein Krogdahl, “The Birth of Simula,” 2, accessed January 31, 2019, http://heim.ifi.uio.no/~steinkr/papers/HiNC1-webversion-simula.pdf. ↩
+
+4. ibid. ↩
+
+5. Ole-Johan Dahl and Kristen Nygaard, “The Development of the Simula Languages,” ACM SIGPLAN Notices 13, no. 8 (August 1978): 248, accessed January 31, 2019, https://hannemyr.com/cache/knojd_acm78.pdf. ↩
+
+6. Dahl and Nygaard (1966), 676. ↩
+
+7. Dahl and Nygaard (1978), 257. ↩
+
+8. Krogdahl, 3. ↩
+
+9. Ole-Johan Dahl, “The Birth of Object-Orientation: The Simula Languages,” 3, accessed January 31, 2019, http://www.olejohandahl.info/old/birth-of-oo.pdf. ↩
+
+10. Dahl and Nygaard (1978), 265. ↩
+
+11. Holmevik. ↩
+
+12. Krogdahl, 4. ↩
+
+
+--------------------------------------------------------------------------------
+
+via: https://twobithistory.org/2019/01/31/simula.html
+
+作者:[Sinclair Target][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twobithistory.org
+[b]: https://github.com/lujun9972
+[1]: https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
+[2]: /images/river.jpg
+[3]: https://archive.computerhistory.org/resources/text/algol/ACM_Algol_bulletin/1061032/p39-hoare.pdf
+[4]: http://web.eah-jena.de/~kleine/history/languages/Simula-CommonBaseLanguage.pdf
+[5]: https://www.gnu.org/software/cim/
+[6]: https://gist.github.com/sinclairtarget/6364cd521010d28ee24dd41ab3d61a96
+[7]: https://twobithistory.org/2018/08/18/ada-lovelace-note-g.html
+[8]: https://twitter.com/TwoBitHistory
+[9]: https://twobithistory.org/feed.xml
+[10]: https://twitter.com/TwoBitHistory/status/1075075139543449600?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md b/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md
new file mode 100644
index 0000000000..679ac9033b
--- /dev/null
+++ b/sources/talk/20190204 Config management is dead- Long live Config Management Camp.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Config management is dead: Long live Config Management Camp)
+[#]: via: (https://opensource.com/article/19/2/configuration-management-camp)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
+
+Config management is dead: Long live Config Management Camp
+======
+
+CfgMgmtCamp '19 co-organizers share their take on ops, DevOps, observability, and the rise of YoloOps and YAML engineers.
+
+
+
+Everyone goes to [FOSDEM][1] in Brussels to learn from its massive collection of talk tracks, colloquially known as developer rooms, that run the gamut of curiosities, from programming languages like Rust, Go, and Python to special topics ranging from community to legal to privacy. After two days of nonstop activity, many FOSDEM attendees move on to Ghent, Belgium, to join hundreds for Configuration Management Camp ([CfgMgmtCamp][2]).
+
+Kris Buytaert and Toshaan Bharvani run the popular post-FOSDEM show centered around infrastructure management, featuring hackerspaces, training, workshops, and keynotes. It's a deeply technical exploration of the who, what, and how of building resilient infrastructure. It started in 2013 as a PuppetCamp but expanded to include more communities and tools in 2014.
+
+I spoke with Kris and Toshaan, who both have a healthy sense of humor, about CfgMgmtCamp's past, present, and future. Our interview has been edited for length and clarity.
+
+**Matthew: Your opening [keynote][3] is called "CfgMgmtCamp is dead." Is config management dead? Will it live on, or will something take its place?**
+
+**Kris:** We've noticed people are jumping on the hype of containers, trying to solve the same problems in a different way. But they are still managing config, only in different ways and with other tools. Over the past couple of years, we've evolved from a conference with a focus on infrastructure-as-code tooling, such as Puppet, Chef, CFEngine, Ansible, Juju, and Salt, to a more open source infrastructure automation conference in general. So, config management is definitely not dead. Infrastructure-as-code is also not dead, but it all is evolving.
+
+**Toshaan:** We see people changing tools, jumping on hype, and communities changing; however, the basic ideas and concepts remain the same.
+
+**Matthew: It's great to see [observability as the topic][4] of one of your keynotes. Why should those who care about configuration management also care about monitoring and observability?**
+
+**Kris:** While the name of the conference hasn't changed, the tools have evolved and we have expanded our horizons. Ten years ago, [Devopsdays][5] was just #devopsdays, but it evolved to focus on culture—the C of [CAMS][6] in DevOps' core principles of Culture, Automation, Measurement, and Sharing.
+
+
+
+[Monitorama][7] filled the gap on monitoring and metrics (tackling the M in CAMS). Config Management Camp is about open source Automation, the A. Since they are all open source conferences, they fulfill the Sharing part, completing the CAMS concept.
+
+Observability sits on the line between Automation and Measurement. To go one step further, in some of my talks about open source monitoring, I describe the evolution of monitoring tools from #monitoringsucks to #monitoringlove; for lots of people (including me), the love for monitoring returned because we tied it to automation. We started to provision a service and automatically adapted the monitoring of that service to its state. Gone were the days where the monitoring tool was out of sync with reality.
+
+Looking at it from the other side, when you have an infrastructure or application so complex that you need observability in it, you'd better not be deploying manually; you will need some form of automation at that level of complexity. So, observability and infrastructure automation are tied together.
+
+**Toshaan:** Yes, while in the past we focused on configuration management, we will be looking to expand that into all types of infrastructure management. Last year, we played with this idea, and we were able to have a lot of cross-tool presentations. This year, we've taken this a step further by having more differentiated content.
+
+**Matthew: Some of my virtualization and Linux admin friends push back, saying observability is a developer's responsibility. How would you respond without just saying "DevOps?"**
+
+**Kris:** What you describe is what I call "Ooops Devs." This is a trend where the people who run the platform don't really care what they run; as long as port 80 is listening and the node pings, they are happy. It's just as bad as "Dev Ooops," which is where the devs rant about the ops folks because they are slow, not agile, and not responsive. But, to me, your job as an ops person or as a Linux admin is to keep a service running, and the only way to do that is to take on that task as a team—with your colleagues who have different roles and insights, people who write code, people who design, etc. It is a shared responsibility. And hiding behind "that is someone else's responsibility" doesn't smell much like collaboration.
+
+**Toshaan:** Even in the dark ages of silos, I believe a true sysadmin should have cared about observability, monitoring, and automation. I believe that the DevOps movement has made this much more widespread, and that it has become easier to get this information and expose it. On the other hand, I believe that pure operators or sysadmins have learned to be team players (or, they may have died out). I like the analogy of an army unit composed of different specialty soldiers who work together to complete a mission; we have engineers who work to deliver products or services.
+
+**Matthew: In a [Devopsdays Zurich talk][8], Kris offered an opinion that Americans build software for acquisition and Europeans build for resilience. In that light, what are the best skills for someone who wants to build meaningful infrastructure?**
+
+**Toshaan:** I believe some people still don't understand the complexity of code sprawl, and they believe that some new hype will solve this magically.
+
+**Kris:** This year, we invited [Steve Traugott][9], co-author of the 1998 USENIX paper "[Bootstrapping an Infrastructure][10]" that helped kickstart our community. So many people never read [Infrastructures.org][11], never experienced the pain of building images and image sprawl, and don't understand the evolution we went through that led us to build things the way we build them from source code.
+
+People should study topics such as idempotence, resilience, reproducibility, and surviving the tenth floor test. (As explained in "Bootstrapping an Infrastructure": "The test we used when designing infrastructures was 'Can I grab a random machine and throw it out the tenth-floor window without adversely impacting users for more than 10 minutes?' If the answer to this was 'yes,' then we knew we were doing things right.") But only after they understand the service they are building—the service is the absolute priority—can they begin working on things like: how can we run this, how can we make sure it keeps running, how can it fail and how can we prevent that, and if it disappears, how can we spin it up again fast, unnoticed by the end user.
+
+**Toshaan:** 100% uptime.
+
+**Kris:** The challenge we have is that lots of people don't have that experience yet. We've seen the rise of [YoloOps][12]—just spin it up once, fire, and forget—which results in security problems, stability problems, data loss, etc. People often grasp at those YoloOps shortcuts because they are the easy way to do something quickly and move on. But understanding how things will eventually fail takes time; it's called experience.
+
+**Toshaan:** Well, when I was a student and manned the CentOS stand at FOSDEM, I remember a guy coming up to the stand and complaining that he couldn't do consulting because of the "fire once and forget" policy of CentOS, and that it just worked too well. I like to call this ZombieOps, but YoloOps works too.
+
+**Matthew: I see you're leading the second year of YamlCamp as well. Why does a markup language need its own camp?**
+
+**Kris:** [YamlCamp][13] is a parody, it's a joke. Last year, Bob Walker ([@rjw1][14]) gave a talk titled "Are we all YAML engineers now?" that led to more jokes. We've had a discussion for years about rebranding CfgMgmtCamp; the problem is that people know our name, we have a large enough audience to keep going, and changing the name would mean effort spent on logos, website, DNS, etc. We won't change the name, but we joked that we could rebrand to YamlCamp, because for some weird reason, a lot of the talks are about YAML. :)
+
+**Matthew: Do you think systems engineers should list YAML as a skill or a language on their CV? Should companies be hiring YAML engineers, or do you have "Long live all YAML engineers" on the website in jest?**
+
+**Toshaan:** Well, the real question is whether people are willing to call themselves YAML engineers proudly, because we already have enough DevOps engineers.
+
+**Matthew: What FOSS software helps you manage the event?**
+
+**Toshaan:** I re-did the website in Hugo because we were spending too much time maintaining it manually. I chose Hugo because I was learning Golang, and because it has been successfully used for other conferences and my own website. I also wanted a static website and iCalendar output, so we could use calendar tooling such as Giggity to have a good scheduling tool.
+
+The website now builds quite nicely, and while I still have some ideas on improvements, maintenance is now much easier.
+
+For the call for proposals (CFP), we now use [OpenCFP][15]. We want to optimize the submission, voting, selection, and extraction to be as automated as possible, while being easy and comfortable for potential speakers, reviewers, and ourselves to use. OpenCFP seems to be the tool that works; while we still have some feature requirements, I believe that, once we have some time to contribute back to OpenCFP, we'll have a fully functional and easy tool to run CFPs with.
+
+Last, we switched from EventBrite to Pretix because I wanted to be GDPR compliant and have the ability to run our questions, vouchers, and extra features. Pretix allows us to control registration of attendees, speakers, sponsors, and organizers and have a single overview of all the people coming to the event.
+
+### Wrapping up
+
+The beauty of Configuration Management Camp to me is that it continues to evolve with its audience. Configuration management is certainly at the heart of the work, but it's in service to resilient infrastructure. Keep your eyes open for the talk recordings to learn from the [lineup of incredible speakers][16], and thank you to the team for running this (free) show!
+
+You can follow Kris [@KrisBuytaert][17] and Toshaan [@toshywoshy][18]. You can also see Kris' past articles [on his blog][19].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/configuration-management-camp
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://fosdem.org/2019/
+[2]: https://cfgmgmtcamp.eu/
+[3]: https://cfgmgmtcamp.eu/schedule/monday/intro00/
+[4]: https://cfgmgmtcamp.eu/schedule/monday/keynote0/
+[5]: https://www.devopsdays.org/
+[6]: http://devopsdictionary.com/wiki/CAMS
+[7]: http://monitorama.com/
+[8]: https://vimeo.com/272519813
+[9]: https://cfgmgmtcamp.eu/schedule/tuesday/keynote1/
+[10]: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
+[11]: http://www.infrastructures.org/
+[12]: https://gist.githubusercontent.com/mariozig/5025613/raw/yolo
+[13]: https://twitter.com/yamlcamp
+[14]: https://twitter.com/rjw1
+[15]: https://github.com/opencfp/opencfp
+[16]: https://cfgmgmtcamp.eu/speaker/
+[17]: https://twitter.com/KrisBuytaert
+[18]: https://twitter.com/toshywoshy
+[19]: https://krisbuytaert.be/index.shtml
diff --git a/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md b/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md
new file mode 100644
index 0000000000..2e1b047a15
--- /dev/null
+++ b/sources/talk/20190205 7 predictions for artificial intelligence in 2019.md
@@ -0,0 +1,91 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (7 predictions for artificial intelligence in 2019)
+[#]: via: (https://opensource.com/article/19/2/predictions-artificial-intelligence)
+[#]: author: (Salil Sethi https://opensource.com/users/salilsethi)
+
+7 predictions for artificial intelligence in 2019
+======
+
+While 2018 was a big year for AI, the stage is set for it to make an even deeper impact in 2019.
+
+
+
+Without question, 2018 was a big year for artificial intelligence (AI) as it pushed even further into the mainstream, successfully automating more functionality than ever before. Companies are increasingly exploring applications for AI, and the general public has grown accustomed to interacting with the technology on a daily basis.
+
+The stage is set for AI to continue transforming the world as we know it. In 2019, not only will the technology continue growing in global prevalence, but it will also spawn deeper conversations around important topics, fuel innovative business models, and impact society in new ways, including the following seven.
+
+### 1\. Machine learning as a service (MLaaS) will be deployed more broadly
+
+In 2018, we witnessed major strides in MLaaS with technology powerhouses like Google, Microsoft, and Amazon leading the way. Prebuilt machine learning solutions and capabilities are becoming more attractive in the market, especially to smaller companies that don't have the necessary in-house resources or talent. For those that have the technical know-how and experience, there is a significant opportunity to sell and deploy packaged solutions that can be easily implemented by others.
+
+Today, MLaaS is sold primarily on a subscription or usage basis by cloud-computing providers. For example, Microsoft Azure's ML Studio provides developers with a drag-and-drop environment to develop powerful machine learning models. Google Cloud's Machine Learning Engine also helps developers build large, sophisticated algorithms for a variety of applications. In 2017, Amazon jumped into the realm of AI and launched Amazon SageMaker, another platform that developers can use to build, train, and deploy custom machine learning models.
+
+In 2019 and beyond, be prepared to see MLaaS offered on a much broader scale. Transparency Market Research predicts it will grow to US$20 billion at an alarming 40% CAGR by 2025.
+
+### 2\. More explainable or "transparent" AI will be developed
+
+Although there are already many examples of how AI is impacting our world, explaining the outputs and rationale of complex machine learning models remains a challenge.
+
+Unfortunately, AI continues to carry the "black box" burden, posing a significant limitation in situations where humans want to understand the rationale behind AI-supported decision making.
+
+AI democratization has been led by a plethora of open source tools and libraries, such as Scikit Learn, TensorFlow, PyTorch, and more. The open source community will lead the charge to build explainable, or "transparent," AI that can clearly document its logic, expose biases in data sets, and provide answers to follow-up questions.
+
+Before AI is widely adopted, humans need to know that the technology can perform effectively and explain its reasoning under any circumstance.
+
+### 3\. AI will impact the global political landscape
+
+In 2019, AI will play a bigger role on the global stage, impacting relationships between international superpowers that are investing in the technology. Early adopters of AI, such as the US and [China][1], will struggle to balance self-interest with collaborative R&D. Countries that have AI talent and machine learning capabilities will experience tremendous growth in areas like predictive analytics, creating a wider global technology gap.
+
+Additionally, more conversations will take place around the ethical use of AI. Naturally, different countries will approach this topic differently, which will affect political relationships. Overall, AI's impact will be small relative to other international issues, but more noticeable than before.
+
+### 4\. AI will create more jobs than it eliminates
+
+Over the long term, many jobs will be eliminated as a result of AI-enabled automation. Roles characterized by repetitive, manual tasks are being outsourced to AI more and more every day. However, in 2019, AI will create more jobs than it replaces.
+
+Rather than eliminating the need for humans entirely, AI is augmenting existing systems and processes. As a result, a new type of role is emerging. Humans are needed to support AI implementation and oversee its application. Next year, more manual labor will transition to management-type jobs that work alongside AI, a trend that will continue to 2020. Gartner predicts that in two years, [AI will create 2.3 million jobs while only eliminating 1.8 million.][2]
+
+### 5\. AI assistants will become more pervasive and useful
+
+AI assistants are nothing new to the modern world. Apple's Siri and Amazon's Alexa have been supporting humans on the road and in their homes for years. In 2019, we will see AI assistants continue to grow in their sophistication and capabilities. As they collect more behavioral data, AI assistants will become better at responding to requests and completing tasks. With advances in natural language processing and speech recognition, humans will have smoother and more useful interactions with AI assistants.
+
+In 2018, we saw companies launch promising new AI assistants. Recently, Google began rolling out its voice-enabled reservation booking service, Duplex, which can call and book appointments on behalf of users. Technology company X.ai has built two AI personal assistants, Amy and Andrew, who can interact with humans and schedule meetings for their employers. Amazon also recently announced Echo Auto, a device that enables drivers to integrate Alexa into their vehicles. However, humans will continue to place expectations ahead of reality and be disappointed at the technology's limitations.
+
+### 6\. AI/ML governance will gain importance
+
+With so many companies investing in AI, much more energy will be put towards developing effective AI governance structures. Frameworks are needed to guide data collection and management, appropriate AI use, and ethical applications. Successful and appropriate AI use involves many different stakeholders, highlighting the need for reliable and consistent governing bodies.
+
+In 2019, more organizations will create governance structures and more clearly define how AI progress and implementation are managed. Given the current gap in explainability, these structures will be tremendously important as humans continue to turn to AI to support decision-making.
+
+### 7\. AI will help companies solve AI talent shortages
+
+A [shortage of AI and machine learning talent][3] is creating an innovation bottleneck. A [survey][4] released last year from O'Reilly revealed that the biggest challenge companies are facing related to using AI is a lack of available talent. And as technological advancement continues to accelerate, it is becoming harder for companies to develop talent that can lead large-scale enterprise AI efforts.
+
+To combat this, organizations will—ironically—use AI and machine learning to help address the talent gap in 2019. For example, Google Cloud's AutoML includes machine learning products that help developers train machine learning models without having any prior AI coding experience. Amazon Personalize is another machine learning service that helps developers build sophisticated personalization systems that can be implemented in many ways by different kinds of companies. In addition, companies will use AI to find talent, fill job vacancies, and propel innovation forward.
+
+### AI in 2019: bigger and better with a tighter leash
+
+Over the next year, AI will grow more prevalent and powerful than ever. Expect to see new applications and challenges and be ready for an increased emphasis on checks and balances.
+
+What do you think? How might AI impact the world in 2019? Please share your thoughts in the comments below!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/predictions-artificial-intelligence
+
+作者:[Salil Sethi][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/salilsethi
+[b]: https://github.com/lujun9972
+[1]: https://www.turingtribe.com/story/china-is-achieving-ai-dominance-by-relying-on-young-blue-collar-workers-rLMsmWqLG4fGFwisQ
+[2]: https://www.gartner.com/en/newsroom/press-releases/2017-12-13-gartner-says-by-2020-artificial-intelligence-will-create-more-jobs-than-it-eliminates
+[3]: https://www.turingtribe.com/story/tencent-says-there-are-only-bTpNm9HKaADd4DrEi
+[4]: https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#19bafcf631f3
diff --git a/sources/talk/20190206 4 steps to becoming an awesome agile developer.md b/sources/talk/20190206 4 steps to becoming an awesome agile developer.md
new file mode 100644
index 0000000000..bad4025aef
--- /dev/null
+++ b/sources/talk/20190206 4 steps to becoming an awesome agile developer.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 steps to becoming an awesome agile developer)
+[#]: via: (https://opensource.com/article/19/2/steps-agile-developer)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+
+4 steps to becoming an awesome agile developer
+======
+There's no magical way to do it, but these practices will put you well on your way to embracing agile in application development, testing, and debugging.
+
+
+Enterprises are rushing into their DevOps journey through [agile][1] software development with cloud-native technologies such as [Linux containers][2], [Kubernetes][3], and [serverless][4]. Continuous integration helps enterprise developers reduce bugs and unexpected errors and improve the quality of the code they deploy to production.
+
+However, this doesn't mean all developers in DevOps automatically embrace agile for their daily work in application development, testing, and debugging. There is no magical way to do it, but the following four practical steps and best practices will put you well on your way to becoming an awesome agile developer.
+
+### Start with design thinking agile practices
+
+There are many opportunities to learn about using agile software development practices in your DevOps initiatives. Agile practices inspire people with new ideas and experiences for improving their daily application development work through team collaboration. More importantly, those practices will help you discover the answers to questions such as: Why am I doing this? What kind of problems am I trying to solve? How do I measure the outcomes?
+
+A [domain-driven design][5] approach will help you start discovery sooner and easier. For example, the [Start At The End][6] practice helps you redesign your application and explore potential business outcomes—such as, what would happen if your application fails in production? You might also be interested in [Event Storming][7] for interactive and rapid discovery or [Impact Mapping][8] for graphical and strategic design as part of domain-driven design practices.
+
+### Use a predictive approach first
+
+In agile software development projects, enterprise developers are mainly focused on adapting to rapidly changing app development environments such as reactive runtimes, cloud-native frameworks, Linux container packaging, and the Kubernetes platform. They believe this is the best way to become an agile developer in their organization. However, this type of adaptive approach typically makes it harder for developers to understand and report what they will do in the next sprint. Developers might know the ultimate goal and, at best, the app features for a release about four months from the current sprint.
+
+In contrast, the predictive approach places more emphasis on analyzing known risks and planning future sprints in detail. For example, predictive developers can accurately report the functions and tasks planned for the entire development process. But it's not a magical way to make your agile projects succeed all the time because the predictive team depends totally on effective early-stage analysis. If the analysis does not work very well, it may be difficult for the project to change direction once it gets started.
+
+To mitigate this risk, I recommend that senior agile developers increase their predictive capabilities with a plan-driven method, and that junior agile developers start with adaptive methods for value-driven development.
+
+### Continuously improve code quality
+
+Don't hesitate to engage in [continuous integration][9] (CI) practices for improving your application before deploying code into production. To adopt modern application frameworks, such as cloud-native architecture, Linux container packaging, and hybrid cloud workloads, you have to learn about automated tools to address complex CI procedures.
+
+[Jenkins][10] is the standard CI tool for many organizations; it allows developers to build and test applications in many projects in an automated fashion. Its most important function is detecting unexpected errors during CI to prevent them from reaching production. This should improve business outcomes through better customer satisfaction.
+
+Automated CI enables agile developers to improve not only the quality of their code but also their application development agility, through learning and using open source tools and patterns such as [behavior-driven development][11], [test-driven development][12], [automated unit testing][13], [pair programming][14], [code review][15], and [design patterns][16].
+
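+As a small illustration of the automated unit testing and test-driven development practices mentioned above, here is a minimal sketch in C (the `add` function and its expected values are invented for this example; real projects would typically use a dedicated test framework rather than bare `assert` calls). The checks are written first, and the code only counts as done once they pass:
+
+```
+#include <assert.h>
+
+/* Hypothetical function under test: implemented only after the
+   assertions below had been written and were failing. */
+static int add(int a, int b) {
+    return a + b;
+}
+
+int main(void) {
+    /* In a test-first workflow, these checks define the expected behavior
+       before any implementation exists. */
+    assert(add(2, 3) == 5);
+    assert(add(-1, 1) == 0);
+    return 0;
+}
+```
+
+The same red-green rhythm scales up to CI: a server such as Jenkins simply runs checks like these on every change, so regressions are caught before they reach production.
+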
+### Never stop exploring communities
+
+Never settle, even if you already have a great reputation as an agile developer. You have to continuously take on bigger challenges to make great software in an agile way.
+
+By participating in the very active and growing open source community, you will not only improve your skills as an agile developer, but your actions can also inspire other developers who want to learn agile practices.
+
+How do you get involved in specific communities? It depends on your interests and what you want to learn. It might mean presenting specific topics at conferences or local meetups, writing technical blog posts, publishing practical guidebooks, committing code, or creating pull requests to open source projects' Git repositories. It's worth exploring open source communities for agile software development, as I've found it is a great way to share your expertise, knowledge, and practices with other brilliant developers and, along the way, help each other.
+
+### Get started
+
+These practical steps can give you a shorter path to becoming an awesome agile developer. Then you can lead junior developers in your team and organization to become more flexible, valuable, and predictive using agile principles.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/steps-agile-developer
+
+作者:[Daniel Oh][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/18/10/what-agile
+[2]: https://opensource.com/resources/what-are-linux-containers
+[3]: https://opensource.com/resources/what-is-kubernetes
+[4]: https://opensource.com/article/18/11/open-source-serverless-platforms
+[5]: https://en.wikipedia.org/wiki/Domain-driven_design
+[6]: https://openpracticelibrary.com/practice/start-at-the-end/
+[7]: https://openpracticelibrary.com/practice/event-storming/
+[8]: https://openpracticelibrary.com/practice/impact-mapping/
+[9]: https://en.wikipedia.org/wiki/Continuous_integration
+[10]: https://jenkins.io/
+[11]: https://en.wikipedia.org/wiki/Behavior-driven_development
+[12]: https://en.wikipedia.org/wiki/Test-driven_development
+[13]: https://en.wikipedia.org/wiki/Unit_testing
+[14]: https://en.wikipedia.org/wiki/Pair_programming
+[15]: https://en.wikipedia.org/wiki/Code_review
+[16]: https://en.wikipedia.org/wiki/Design_pattern
diff --git a/sources/talk/20190206 What blockchain and open source communities have in common.md b/sources/talk/20190206 What blockchain and open source communities have in common.md
new file mode 100644
index 0000000000..bc4f9464d0
--- /dev/null
+++ b/sources/talk/20190206 What blockchain and open source communities have in common.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What blockchain and open source communities have in common)
+[#]: via: (https://opensource.com/article/19/2/blockchain-open-source-communities)
+[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
+
+What blockchain and open source communities have in common
+======
+Blockchain initiatives can look to open source governance for lessons on establishing trust.
+
+
+One of the characteristics of blockchains that gets a lot of attention is how they enable distributed trust. The topic of trust is a surprisingly complicated one. In fact, there's now an [entire book][1] devoted to the topic by Kevin Werbach.
+
+But here's what it means in a nutshell. Organizations that wish to work together, but do not fully trust one another, can establish a permissioned blockchain and invite business partners to record their transactions on a shared distributed ledger. Permissioned blockchains can trace assets when transactions are added to the blockchain. A permissioned blockchain implies a degree of trust (again, trust is complicated) among members of a consortium, but no single entity controls the storage and validation of transactions.
+
+The basic model is that a group of financial institutions or participants in a logistics system can jointly set up a permissioned blockchain that will validate and immutably record transactions. There's no dependence on a single entity, whether it's one of the direct participants or a third-party intermediary who set up the blockchain, to safeguard the integrity of the system. The blockchain itself does so through a variety of cryptographic mechanisms.
+
+Here's the rub, though. It requires that competitors work together cooperatively—a relationship often called [coopetition][2]. The term dates back to the early 20th century, but it came into widespread use when former Novell CEO Ray Noorda adopted it to describe the company's business strategy in the 1990s. Novell was then planning to get into the internet portal business, which required it to seek partnerships with some of the search engine providers and other companies it would also be competing against. In 1996, coopetition became the subject of a bestselling [book][3].
+
+Coopetition can be especially difficult when a blockchain network initiative appears to be driven by a dominant company. And it's hard for the dominant company not to exert outsize influence over the initiative, just as a natural consequence of how big it is. For example, the IBM-Maersk joint venture has [struggled to sign up rival shipping companies][4], in part because Maersk is the world's largest carrier by capacity, a position that makes rivals wary.
+
+We see this same dynamic in open source communities. The original creators of a project need not only to let go; they also need to put governance structures in place that give competing companies confidence that there's a level playing field.
+
+For example, Sarah Novotny, now head of open source strategy at Google Cloud Platform, [told me in a 2017 interview][5] about the [Kubernetes][6] project that it isn't always easy to give up control, even when people buy into doing what is best for a project.
+
+> Google turned Kubernetes over to the Cloud Native Computing Foundation (CNCF), which sits under the Linux Foundation umbrella. As [CNCF executive director Dan Kohn puts it][7]: "One of the things they realized very early on is that a project with a neutral home is always going to achieve a higher level of collaboration. They really wanted to find a home for it where a number of different companies could participate."
+>
+> Defaulting to public may not be either natural or comfortable. "Early on, my first six, eight, or 12 weeks at Google, I think half my electrons in email were spent on: 'Why is this discussion not happening on a public mailing list? Is there a reason that this is specific to GKE [Google Container Engine]? No, there's not a reason,'" said Novotny.
+
+To be sure, some grumble that open source foundations have become too common and that many are too dominated by paying corporate members. Simon Phipps, currently the president of the Open Source Initiative, gave a talk at OSCON way back in 2015 titled ["Enough Foundations Already!"][8] in which he argued that "before we start another open source foundation, let's agree that what we need protected is software freedom and not corporate politics."
+
+Nonetheless, while not appropriate for every project, foundations with business, legal, and technical governance are increasingly the model for open source projects that require extensive cooperation among competing companies. A [2017 analysis of GitHub data by the Linux Foundation][9] found a number of different governance models in use by the highest-velocity open source projects. Unsurprisingly, quite a few remained under the control of the company that created or acquired them. However, about a third were under the auspices of a foundation.
+
+Is there a lesson here for blockchain? Quite possibly. Open source projects can be sponsored by a company while still putting systems and governance in place that are welcoming to outside contributors. However, there's a great deal of history to suggest that doing so is hard because it's hard not to exert control and leverage when you can. Furthermore, even if you make a successful case for being truly open to equal participation by outsiders today, it will be hard to allay suspicions that you might not be as welcoming tomorrow.
+
+To the degree that we can equate blockchain consortiums with open source communities, this suggests that business blockchain initiatives should look to open source governance for lessons. Dominant players in the ecosystem need to forgo control, and they need to have conversations with partners and potential partners about what types of structures would make participating easier.
+
+Many blockchain infrastructure software projects are already under foundations such as Hyperledger. But perhaps some specific production deployments of blockchain aimed at specific industries and ecosystems will benefit from formal governance structures as well.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/blockchain-open-source-communities
+
+作者:[Gordon Haff][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ghaff
+[b]: https://github.com/lujun9972
+[1]: https://mitpress.mit.edu/books/blockchain-and-new-architecture-trust
+[2]: https://en.wikipedia.org/wiki/Coopetition
+[3]: https://en.wikipedia.org/wiki/Co-opetition_(book)
+[4]: https://www.theregister.co.uk/2018/10/30/ibm_struggles_to_sign_up_shipping_carriers_to_blockchain_supply_chain_platform_reports/
+[5]: https://opensource.com/article/17/4/podcast-kubernetes-sarah-novotny
+[6]: https://kubernetes.io/
+[7]: http://bitmason.blogspot.com/2017/02/podcast-cloud-native-computing.html
+[8]: https://www.oreilly.com/ideas/enough-foundations-already
+[9]: https://www.linuxfoundation.org/blog/2017/08/successful-open-source-projects-common/
diff --git a/sources/talk/20190208 Which programming languages should you learn.md b/sources/talk/20190208 Which programming languages should you learn.md
new file mode 100644
index 0000000000..31cef16f03
--- /dev/null
+++ b/sources/talk/20190208 Which programming languages should you learn.md
@@ -0,0 +1,46 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Which programming languages should you learn?)
+[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Which programming languages should you learn?
+======
+Learning a new programming language is a great way to get ahead in your career. But which one?
+
+
+If you want to get started or get ahead in your programming career, learning a new language is a smart idea. But the huge number of languages in active use invites the question: Which programming language is the best one to know? To answer that, let's start with a simplifying question: What sort of programming do you want to do?
+
+If you want to do web programming on the client side, then the specialized languages HTML, CSS, and JavaScript—in one of its seemingly infinite dialects—are de rigueur.
+
+If you want to do web programming on the server side, the options include all of the familiar general-purpose languages: C++, Golang, Java, C#, Node.js, Perl, Python, Ruby, and so on. As a matter of course, server-side programs interact with datastores, such as relational and other databases, which means query languages such as SQL may come into play.
+
+If you're writing native apps for mobile devices, knowing the target platform is important. For Apple devices, Swift has supplanted Objective-C as the language of choice. For Android devices, Java (with dedicated libraries and toolsets) remains the dominant language. There are also cross-platform frameworks such as Xamarin, used with C#, that can generate platform-specific code for Apple, Android, and Windows devices.
+
+What about general-purpose languages? There are various choices within the usual pigeonholes. Among the dynamic or scripting languages (e.g., Perl, Python, and Ruby), there are newer offerings such as JavaScript running server side on Node.js. Java and C#, which are more alike than their fans like to admit, remain the dominant statically compiled languages targeted at a virtual machine (the JVM and CLR, respectively). Among languages that compile into native executables, C++ is still in the mix, along with later arrivals such as Golang and Rust. General-purpose functional languages abound (e.g., Clojure, Haskell, Erlang, F#, Lisp, and Scala), often with passionately devoted communities. It's worth noting that object-oriented languages such as Java and C# have added functional constructs (in particular, lambdas), and the dynamic languages have had functional constructs from the start.
+
+Let me end with a pitch for C, which is a small, elegant, and extensible language not to be confused with C++. Modern operating systems are written mostly in C, with the rest in assembly language. The standard libraries on any platform are likewise mostly in C. For example, any program that issues the Hello, world! greeting does so through a call to the C library function named **write**.
+
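+To make that concrete, here is a minimal sketch, assuming a POSIX system where `write` is declared in `unistd.h`, that skips the usual `printf` call and issues the greeting directly through that lower-level function:
+
+```
+#include <unistd.h>
+
+int main(void) {
+    /* File descriptor 1 is standard output; 14 is the byte length of the message. */
+    write(1, "Hello, world!\n", 14);
+    return 0;
+}
+```
+
+Compile it with any C compiler (for example, `cc hello.c`) and it prints the greeting with no `stdio.h` in sight.
+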
+C serves as a portable assembly language, exposing details about the underlying system that other high-level languages deliberately hide. To understand C is thus to gain a better grasp of how programs contend for the shared system resources (processors, memory, and I/O devices) required for execution. C is at once high-level and close-to-the-metal, so unrivaled in performance—except, of course, for assembly language. Finally, C is the lingua franca among programming languages, and almost every general-purpose language supports C calls in one form or another.
+
+For a modern introduction to C, consider my book [C Programming: Introducing Portable Assembler][1]. No matter how you go about it, learn C and you'll learn a lot more than just another programming language.
+
+What programming languages do you think are important to know? Do you agree or disagree with these recommendations? Let us know in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
diff --git a/sources/talk/20190211 Introducing kids to computational thinking with Python.md b/sources/talk/20190211 Introducing kids to computational thinking with Python.md
new file mode 100644
index 0000000000..c877d3c212
--- /dev/null
+++ b/sources/talk/20190211 Introducing kids to computational thinking with Python.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: (WangYueScream )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Introducing kids to computational thinking with Python)
+[#]: via: (https://opensource.com/article/19/2/break-down-stereotypes-python)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+Introducing kids to computational thinking with Python
+======
+Coding program gives low-income students the skills, confidence, and knowledge to break free from economic and societal disadvantages.
+
+
+
+When the [Parkman Branch][1] of the Detroit Public Library was flooded with bored children taking up all the computers during summer break, the library saw it not as a problem but as an opportunity. They started a coding club, the [Parkman Coders][2], led by [Qumisha Goss][3], a librarian who is leveraging the power of Python to introduce disadvantaged children to computational thinking.
+
+When she started the Parkman Coders program about four years ago, "Q" (as she is known) didn't know much about coding. Since then, she's become a specialist in library instruction and technology and a certified Raspberry Pi instructor.
+
+The program began by using [Scratch][4], but the students got bored with the block coding interface, which they regarded as "baby stuff." She says, "I knew we need to make a change to something that was still beginner friendly, but that would be more challenging for them to continue to hold their attention." At this point, she started teaching them Python.
+
+Q first saw Python while playing a game with dungeons and skeleton monsters on [Code.org][5]. She began to learn Python by reading books like [Python Programming: An Introduction to Computer Science][6] and [Python for Kids][7]. She also recommends [Automate the Boring Stuff with Python][8] and [Lauren Ipsum: A Story about Computer Science and Other Improbable Things][9].
+
+### Setting up a Raspberry Pi makerspace
+
+Q decided to use [Raspberry Pi][10] computers to avoid the possibility that the students might be able to hack into the library system's computers, which weren't arranged in a way conducive to a makerspace anyway. The Pi's affordability, plus its flexibility and the included free software, lent more credibility to her decision.
+
+While the coder program was the library's effort to keep the peace and create a learning space that would engage the children, it quickly grew so popular that it ran out of space, computers, and adequate electrical outlets in a building built in 1921. They started with 10 Raspberry Pi computers shared among 20 children, but the library obtained funding from individuals and organizations, including Microsoft, the 4-H, and the Detroit Public Library Foundation, to get more equipment and expand the program.
+
+Currently, about 40 children participate in each session, and they have enough Raspberry Pis for one device per child, with some to give away. Many of the Parkman Coders come from low socio-economic backgrounds and don't have a computer at home, so the library provides them with donated Chromebooks.
+
+Q says, "when kids demonstrate that they have a good understanding of how to use a Raspberry Pi or a [Microbit][11] and have been coming to programs regularly, we give them equipment to take home with them. This process is very challenging, however, because [they may not] have internet access at home [or] all the peripheral things they need like monitors, keyboards, and mice."
+
+### Learning life skills and breaking stereotypes with Python
+
+Q says, "I believe that the mainstays of learning computer science are learning critical thinking and problem-solving skills. My hope is that these lessons will stay with the kids as they grow and pursue futures in whatever field they choose. In addition, I'm hoping to inspire some pride in creatorship. It's a very powerful feeling to know 'I made this thing,' and once they've had these successes early, I hope they will approach new challenges with zeal."
+
+She also says, "in learning to program, you have to learn to be hyper-vigilant about spelling and capitalization, and for some of our kids, reading is an issue. To make sure that the program is inclusive, we spell aloud during our lessons, and we encourage kids to speak up if they don't know a word or can't spell it correctly."
+
+Q also tries to give extra attention to children who need it. She says, "if I recognize that someone has a more severe problem, we try to get them paired with a tutor at our library outside of program time, but still allow them to come to the program. We want to help them without discouraging them from participating."
+
+Most importantly, the Parkman Coders program seeks to help every child realize that each has a unique skill set and abilities. Most of the children are African-American and half are girls. Q says, "we live in a world where we grow up with societal stigmas that frequently limit our own belief of what we can accomplish." She believes that children need a nonjudgmental space where "they can try new things, mess up, and discover."
+
+The environment Q and the Parkman Coders program creates helps the participants break away from economic and societal disadvantages. She says that the secret sauce is to "make sure you have a welcoming space so anyone can come and that your space is forgiving and understanding. Let people come as they are, and be prepared to teach and to learn; when people feel comfortable and engaged, they want to stay."
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/break-down-stereotypes-python
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://detroitpubliclibrary.org/locations/parkman
+[2]: https://www.dplfound.org/single-post/2016/05/15/Parkman-Branch-Coders
+[3]: https://www.linkedin.com/in/qumisha-goss-b3bb5470
+[4]: https://scratch.mit.edu/
+[5]: http://Code.org
+[6]: https://www.amazon.com/Python-Programming-Introduction-Computer-Science/dp/1887902996
+[7]: https://nostarch.com/pythonforkids
+[8]: https://automatetheboringstuff.com/
+[9]: https://nostarch.com/laurenipsum
+[10]: https://www.raspberrypi.org/
+[11]: https://microbit.org/guide/
diff --git a/sources/talk/20190214 Top 5 podcasts for Linux news and tips.md b/sources/talk/20190214 Top 5 podcasts for Linux news and tips.md
new file mode 100644
index 0000000000..fb827bb39b
--- /dev/null
+++ b/sources/talk/20190214 Top 5 podcasts for Linux news and tips.md
@@ -0,0 +1,80 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 5 podcasts for Linux news and tips)
+[#]: via: (https://opensource.com/article/19/2/top-linux-podcasts)
+[#]: author: (Stephen Bancroft https://opensource.com/users/stevereaver)
+
+Top 5 podcasts for Linux news and tips
+======
+A tried and tested podcast listener shares the favorite Linux podcasts he has collected over the years, plus a couple of bonus picks.
+
+
+Like many Linux enthusiasts, I listen to a lot of podcasts. I find my daily commute is the best time to get some time to myself and catch up on the latest tech news. Over the years, I have subscribed and unsubscribed to more show feeds than I care to think about and have distilled them down to the best of the best.
+
+Here are my top five Linux podcasts I think you should be listening to in 2019, plus a couple of bonus picks.
+
+ 5. [**Late Night Linux**][1]—This podcast, hosted by Joe, [Félim][2], [Graham][3], and [Will][4] from the UK, is rough and ready and pulls no punches. [Joe Ressington][5] is always ready to tell it how it is, and Félim is always quick with his opinions. It's presented in a casual conversation format—but not one to listen to with the kids around, especially given the subjects they are all passionate about!
+
+
+ 4. [**Ask Noah Show**][6]—This show was forked from the Linux Action Show after it ended. Hosted by [Noah Chelliah][7], it's presented in a radio talkback style and takes live calls from listeners—it's syndicated from a local radio station in Grand Forks, North Dakota. The podcast isn't purely about Linux, but Noah takes on technical challenges and solves them with Linux and answers listeners' questions about how to achieve good technical solutions using Linux.
+
+
+ 3. [**The Ubuntu Podcast**][8]—If you want the latest about Ubuntu, you can't go past this show. In another podcast with a UK twist, hosts [Alan Pope][9] (Popey), [Mark Johnson][10], and [Martin Wimpress][11] (Wimpy) present a funny and insightful view of the open source community with news directly from Ubuntu.
+
+
+ 2. [**Linux Action News**][12]—The title says it all: it's a news show for Linux. This show was spawned from the popular Linux Action Show and is broadcast by the [Jupiter Broadcasting Network][13], which has many other tech-related podcasts. Hosts Chris Fisher and [Joe Ressington][5] present the show in a more formal "evening news" style, which runs around 30 minutes long. If you want to get a quick weekly update on Linux and Linux-related news, this is the show for you.
+
+
+ 1. [**Linux Unplugged**][14]—Finally, coming in at the number one spot is the granddaddy of them all, Linux Unplugged. This show gets to the core of what being in the Linux community is all about. Presented as a casual panel-style discussion by [Chris Fisher][15] and [Wes Payne][16], the podcast includes an interactive voice chatroom where listeners can connect and be heard live on the show as it broadcasts.
+
+
+
+Well, there you have it, my current shortlist of Linux podcasts. It's likely to change in the near future, but for now, I am enjoying every minute these guys put together.
+
+### Bonus podcasts
+
+Here are two bonus podcasts you might want to check out.
+
+**[Choose Linux][17]** is a brand-new podcast that is tantalizing because of its hosts: Joe Ressington of Linux Action News, who is a long-time Linux veteran, and [Jason Evangelho][18], a Forbes writer who recently shot to fame in the open source community with his articles showcasing his introduction to Linux and open source. Living vicariously through Jason's introduction to Linux has been and will continue to be fun.
+
+[**Command Line Heroes**][19] is a podcast produced by Red Hat. It has a very high production standard and a slightly different format from the shows I have previously mentioned: it is anchored by a single presenter, developer and [CodeNewbie][20] founder [Saron Yitbarek][21], who presents the latest innovations in open source. The show is now in its second season and is released fortnightly; I highly recommend that you start from the first episode. It starts with a great intro to the OS wars of the '90s and sets the foundations for the start of Linux.
+
+Do you have a favorite Linux podcast that isn't on this list? Please share it in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/top-linux-podcasts
+
+作者:[Stephen Bancroft][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/stevereaver
+[b]: https://github.com/lujun9972
+[1]: https://latenightlinux.com/
+[2]: https://twitter.com/felimwhiteley
+[3]: https://twitter.com/degville
+[4]: https://twitter.com/8none1
+[5]: https://twitter.com/JoeRessington
+[6]: http://www.asknoahshow.com/
+[7]: https://twitter.com/kernellinux?lang=en
+[8]: http://ubuntupodcast.org/
+[9]: https://twitter.com/popey
+[10]: https://twitter.com/marxjohnson
+[11]: https://twitter.com/m_wimpress
+[12]: https://linuxactionnews.com/
+[13]: https://www.jupiterbroadcasting.com/
+[14]: https://linuxunplugged.com/
+[15]: https://twitter.com/ChrisLAS
+[16]: https://twitter.com/wespayne
+[17]: https://chooselinux.show
+[18]: https://twitter.com/killyourfm
+[19]: https://www.redhat.com/en/command-line-heroes
+[20]: https://www.codenewbie.org/
+[21]: https://twitter.com/saronyitbarek
diff --git a/sources/talk/20190219 How Linux testing has changed and what matters today.md b/sources/talk/20190219 How Linux testing has changed and what matters today.md
new file mode 100644
index 0000000000..ad26d6dbec
--- /dev/null
+++ b/sources/talk/20190219 How Linux testing has changed and what matters today.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How Linux testing has changed and what matters today)
+[#]: via: (https://opensource.com/article/19/2/phoronix-michael-larabel)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+How Linux testing has changed and what matters today
+======
+Michael Larabel, the founder of Phoronix, shares his insights on the evolution of Linux and open hardware.
+
+
+If you've ever wondered how your Linux computer stacks up against other Linux, Windows, and MacOS machines or searched for reviews of Linux-compatible hardware, you're probably familiar with [Phoronix][1]. Along with its website, which attracts more than 250 million visitors a year to its Linux reviews and news, the company also offers the [Phoronix Test Suite][2], an open source hardware benchmarking tool, and [OpenBenchmarking.org][3], where test result data is stored.
+
+According to [Michael Larabel][4], who started Phoronix in 2004, the site "is frequently cited as being the leading source for those interested in computer hardware and Linux. It offers insights regarding the development of the Linux kernel, product reviews, interviews, and news regarding free and open source software."
+
+I recently had the opportunity to interview Michael about Phoronix and his work.
+
+The questions and answers have been edited for length and clarity.
+
+**Don Watkins:** What inspired you to start Phoronix?
+
+**Michael Larabel:** When I started [Phoronix.com][5] in June 2004, it was still challenging to get a mouse or other USB peripherals working on the popular distributions of the time, like Mandrake, Yoper, MEPIS, and others. So, I set out to work on reviewing different hardware components and their compatibility with Linux. Over time, that shifted more from "does the basic device work?" to how well they perform and what features are supported or unsupported under Linux.
+
+It's been interesting to see the evolution and the importance of Linux on hardware rise. Linux was very common on LAMP/web servers, but it has also become synonymous with high-performance computing (HPC), Android smartphones, cloud software, autonomous vehicles, edge computing, digital signage, and related areas. While Linux hasn't quite dominated the desktop, it's doing great practically everywhere else.
+
+I also developed the Phoronix Test Suite, with its initial 1.0 public release in 2008, to increase the viability of testing on Linux, engage with more hardware and software vendors on best practices for testing, and just get more test cases running on Linux. At the time, there weren't any really shiny benchmarks on Linux like there were on Windows.
+
+**DW:** Who are your website's readers?
+
+**ML:** Phoronix's audience is as diverse as the content. Initially, it was quite desktop/gamer/enthusiast oriented, but as Linux's dominance has grown in HPC, cloud, embedded, etc., my testing has expanded in those areas and thus so has the readership. Readers tend to be interested in open source/Linux ecosystem advancements and performance, with a slight bent towards graphics processor and hardware driver topics.
+
+**DW:** How important is testing in the Linux world and how has it changed from when you started?
+
+**ML:** Testing has changed radically since 2004. Back then, many open source projects weren't carrying out any continuous integration (CI) or testing for regressions—both functional issues and performance problems. The hardware vendors supporting Linux were mostly trying to get things working and maintained while being less concerned about performance or scratching away at catching up to Mac, Solaris, and Windows. With time, we've seen the desktop reach close parity with (or exceed, depending upon your views) alternative operating systems. Most PC hardware now works out-of-the-box on Linux, most open source projects engage in some form of CI or testing, and more time and resources are afforded to advancing Linux performance. With high-frequency trading and cloud platforms relying on Linux, performance has become of utmost importance.
+
+Most of my testing at Phoronix.com is focused on benchmarking processors, graphics cards, storage devices, and other areas of interest to gamers and enthusiasts, but also interesting server platforms. Readers are also quite interested in testing of software components like the Linux kernel, code compilers, and filesystems. But in terms of the Phoronix Test Suite, its scope is rather limitless, with a framework in which new tests can be easily added and automated. There are currently more than 1,000 different profiles/suites, and new ones are routinely added—from machine learning tests to traditional benchmarks.
+
+**DW:** How important is open source hardware? Where do you see it going?
+
+**ML:** Open hardware is of increasing importance, especially in light of all the security vulnerabilities and disclosures in recent years. Facebook's work on the [Open Compute Project][6] can be commended, as can Google leveraging [Coreboot][7] in its Chromebook devices, and [Raptor Computing Systems][8]' successful, high-performance, open source POWER9 desktops/workstations/servers. [Intel][9] potentially open sourcing its firmware support package this year is also incredibly tantalizing and will hopefully spur more efforts in this space.
+
+Outside of that, open source hardware has had a really tough time cracking the consumer space due to the sheer amount of capital necessary and the complexities of designing a modern chip, etc., not to mention competing with the established hardware vendors' marketing budgets and other resources. So, while I would love for 100% open source hardware to dominate—or even compete in features and performance with proprietary hardware—in most segments, that is sadly unlikely to happen, especially with open hardware generally being much more expensive due to economies of scale.
+
+Software efforts like [OpenBMC][10], Coreboot/[Libreboot][11], and [LinuxBoot][12] are opening up hardware much more. Those efforts at liberating hardware have proven successful and will hopefully continue to be endorsed by more organizations.
+
+As for [OSHWA][13], I certainly applaud their efforts and the enthusiasm they bring to open source hardware. Certainly, for niche and smaller-scale devices, open source hardware can be a great fit. It will certainly be interesting to see what comes about with OSHWA and some of its partners like Lulzbot, Adafruit, and System76.
+
+**DW:** Can people install Phoronix Test Suite on their own computers?
+
+**ML:** The Phoronix Test Suite benchmarking software is open source under the GPL and can be downloaded from [Phoronix-Test-Suite.com][2] and [GitHub][14]. The benchmarking software works not only on Linux systems but also on MacOS, Solaris, BSD, and Windows 10/Windows Server. The Phoronix Test Suite works on x86/x86_64, ARM/AArch64, POWER, RISC-V, and other architectures.
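+
+For readers who want to try it, a typical run looks roughly like the following (the test profile named here is just an example; many hundreds of others are available):
+
+```
+# List the test profiles currently available on OpenBenchmarking.org
+phoronix-test-suite list-available-tests
+
+# Install and run a single benchmark, e.g., a gzip compression test
+phoronix-test-suite benchmark pts/compress-gzip
+```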
+
+**DW:** How does [OpenBenchmarking.org][15] work with the Phoronix Test Suite?
+
+**ML:** OpenBenchmarking.org is, in essence, the "cloud" component to the Phoronix Test Suite. It stores test profiles/test suites in a package manager-like fashion, allows users to upload their own benchmarking results, and offers related functionality around our benchmarking software.
+
+OpenBenchmarking.org is seamlessly integrated into the Phoronix Test Suite, but from the web interface, it is also where anyone can see the public benchmark results, inspect the open source test profiles to understand their methodology, research hardware and software data, and use similar functionality.
+
+Another component developed as part of the Phoronix Test Suite is [Phoromatic][16], which effectively allows anyone to deploy their own OpenBenchmarking-like environment within their own private intranet/LAN. This allows organizations to archive their benchmark results locally (and privately), orchestrate benchmarks automatically against groups of systems, manage the benchmark systems, and develop new test cases.
+
+**DW:** How can people stay up to date on Phoronix?
+
+**ML:** You can follow [me][17], [Phoronix][18], [Phoronix Test Suite][19], and [OpenBenchmarking.org][20] on Twitter.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/phoronix-michael-larabel
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://www.phoronix.com/
+[2]: https://www.phoronix-test-suite.com/
+[3]: https://openbenchmarking.org/
+[4]: https://www.michaellarabel.com/
+[5]: http://Phoronix.com
+[6]: https://www.opencompute.org/
+[7]: https://www.coreboot.org/
+[8]: https://www.raptorcs.com/
+[9]: https://www.phoronix.com/scan.php?page=news_item&px=Intel-Open-Source-FSP-Likely
+[10]: https://en.wikipedia.org/wiki/OpenBMC
+[11]: https://libreboot.org/
+[12]: https://linuxboot.org/
+[13]: https://www.oshwa.org/
+[14]: https://github.com/phoronix-test-suite/
+[15]: http://OpenBenchmarking.org
+[16]: http://www.phoronix-test-suite.com/index.php?k=phoromatic
+[17]: https://twitter.com/michaellarabel
+[18]: https://twitter.com/phoronix
+[19]: https://twitter.com/Phoromatic
+[20]: https://twitter.com/OpenBenchmark
diff --git a/sources/talk/20190219 How our non-profit works openly to make education accessible.md b/sources/talk/20190219 How our non-profit works openly to make education accessible.md
new file mode 100644
index 0000000000..eee670610c
--- /dev/null
+++ b/sources/talk/20190219 How our non-profit works openly to make education accessible.md
@@ -0,0 +1,136 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How our non-profit works openly to make education accessible)
+[#]: via: (https://opensource.com/open-organization/19/2/building-curriculahub)
+[#]: author: (Tanner Johnson https://opensource.com/users/johnsontanner3)
+
+How our non-profit works openly to make education accessible
+======
+To build an open access education hub, our team practiced the same open methods we teach our students.
+
+
+I'm lucky to work with a team of impressive students at Duke University who are leaders in their classrooms and beyond. As members of [CSbyUs][1], a non-profit and student-run organization based at Duke, we connect university students to middle school students, mostly from [Title I schools][2] across North Carolina's Research Triangle Park. Our mission is to fuel future change agents from under-resourced learning environments by fostering critical technology skills for thriving in the digital age.
+
+The CSbyUs Tech R&D team (TRD for short) recently set an ambitious goal to build and deploy a powerful web application over the course of one fall semester. Our team of six knew we had to do something about our workflow to ship a product by winter break. In our middle school classrooms, we teach our learners to use agile methodologies and design thinking to create mobile applications. On the TRD team, we realized we needed to practice what we preach in those classrooms to ship a quality product by semester's end.
+
+This is the story of how and why we utilized the principles we teach our students in order to deploy technology that will scale our mission and make our teaching resources open and accessible.
+
+### Setting the scene
+
+For the past two years, CSbyUs has operated "on the ground," connecting Duke undergraduates to Durham middle schools via after-school programming. After teaching and evaluating several iterations of our unique, student-centered mobile app development curriculum, we saw promising results. Our middle schoolers were creating functional mobile apps, connecting to their mentors, and leaving the class more confident in their computer science skills. Naturally, we wondered how to expand our programming.
+
+We knew we should take our own advice and lean into web-based technologies to share our work, but we weren't immediately sure what problem we needed to solve. Ultimately, we decided to create a web app that serves as a centralized hub for open source and open access digital education curricula. "CurriculaHub" (name inspired by GitHub) would be the defining pillar of CSbyUs's new website, where educators could share and adapt resources.
+
+But the vision and implementation didn't happen overnight.
+
+Given our sense of urgency and the potential of "CurriculaHub," we wanted to start this project with a well-defined plan. The stakes were (and are) high, so planning, albeit occasionally tedious, was critical to our success. Like the curriculum we teach, we scaffolded our workflow process with design thinking and agile methodology, two critical 21st-century frameworks we often fail to practice in higher ed.
+
+What follows is a step-wise explanation of our design thinking process, starting from inspiration and ending in a shipped prototype.
+
+### Our Process
+
+#### **Step 1: Pre-Work**
+
+In order to understand the why to our what, you have to know who our team is.
+
+The members of this team are busy. All of us contribute to CSbyUs beyond our TRD-related responsibilities. As an organization with lofty goals beyond creating a web-based platform, we have to reconcile our "on the ground" commitments (i.e., curriculum curation, research and evaluation, mentorship training and practice, presentations at conferences, etc.) with our "in the cloud" technological goals.
+
+In addition to balancing time across our organization, we have to be flexible in the ways we communicate. As a remote member of the team, I'm writing this post from Spain, but the rest of our team is based in North Carolina, adding collaboration challenges.
+
+Before diving into development (or even problem identification), we knew we had to set some clear expectations for how we'd operate as a team. We took a note from our curriculum team's book and started with some [rules of engagement][3]. This is actually [a well-documented approach][4] to setting up a team's [social contract][5] used by teams across the tech space. During a summer internship at IBM, I remember pre-project meetings where my manager and team spent more than an hour clarifying principles of interaction. Whenever we faced uncertainty in our team operations, we'd pull out the rules of engagement and clear things up almost immediately. (An aside: I've found this strategy to be wildly effective not only in my teams, but in all relationships).
+
+Considering the remote nature of our team, one of our favorite tools is Slack. We use it for almost everything. We can't have sticky-note brainstorms, so we create Slack brainstorm threads. In fact, that's exactly what we did to generate our rules of engagement. One [open source principle we take to heart is transparency][6]; Slack allows us to archive and openly share our thought processes and decision-making steps with the rest of our team.
+
+#### **Step 2: Empathy Research**
+
+We're all here for unique reasons, but we find a common intersection: the desire to broaden equity in access to quality digital era education.
+
+Each member of our team has been lucky enough to study at Duke. We know how it feels to have limitless opportunities and the support of talented peers and renowned professors. But we're mindful that this isn't normal. Across the country and beyond, these opportunities are few and far between. Where they do exist, they're confined within the guarded walls of higher institutes of learning or come with a lofty price tag.
+
+While our team members' common desire to broaden access is clear, we work hard to root our decisions in research. So our team begins each semester [reviewing][7] [research][8] that justifies our existence. TRD works with CRD (curriculum research and development) and TT (teaching team), our two other CSbyUs sub-teams, to discuss current trends in digital education access, their systemic roots, and novel approaches to broaden access and make materials relevant to learners. We not only perform research collaboratively at the beginning of the semester but also implement weekly stand-up research meetings with the sub-teams. During these, CRD often presents new findings we've gleaned from interviewing current teachers and digging into the current state of access in our local community. They are our constant source of data-driven, empathy-fueling research.
+
+Through this type of empathy-based research, we have found that educators interested in student-centered teaching and digital era education lack a centralized space for proven and adaptable curricula and lesson plans. The bureaucracy and rigid structures that shape classroom learning in the United States make reshaping curricula around the personal needs of students daunting and seemingly impossible. As students, educators, and technologists, we wondered how we might unleash the creativity and agency of others by sharing our own resources and creating an online ecosystem of support.
+
+#### **Step 3: Defining the Problem**
+
+We wanted to avoid [scope creep][9] caused by a poorly defined mission and vision (something that happens too often in some organizations). We needed structures to define our goals and maintain clarity in scope. Before imagining our application features, we knew we'd have to start with defining our north star. We would generate a clear problem statement to which we could refer throughout development.
+
+This is common practice for us. Before committing to new programming, new partnerships, or new changes, the CSbyUs team always refers back to our mission and vision and asks, "Does this make sense?" (in fact, we post our mission and vision to the top of every meeting minutes document). If it fits and we have capacity to pursue it, we go for it. And if we don't, then we don't. In the case of a "no," we are always sure to document what and why because, as engineers know, [detailed logs are almost always a good decision][10]. TRD gleaned that big-picture wisdom and implemented a group-defined problem statement to guide our sub-team mission and future development decisions.
+
+To formulate a single, succinct problem statement, we each began by posting our own takes on the problem. Then, during one of our weekly [30-minute-no-more-no-less stand-up meetings][11], we identified commonalities and differences, ultimately [merging all our ideas into one][12]. Boiled down, we identified that there exist massive barriers for educators, parents, and students to share, modify, and discuss open source and accessible curricula. And of course, our mission would be to break down those barriers with user-centered technology. This "north star" lives as a highly visible document in our Google Drive, which has influenced our feature prioritization and future directions.
+
+#### **Step 4: Ideating a Solution**
+
+With our problem defined and our rules of engagement established, we were ready to imagine a solution.
+
+We believe that effective structures can ensure meritocracy and community. Sometimes, certain personalities dominate team decision-making and leave little space for collaborative input. To avoid that pitfall and maximize our equality of voice, we tend to use "offline" individual brainstorms and merge collective ideas online. It's the same process we used to create our rules of engagement and problem statement. In the case of ideating a solution, we started with "offline" brainstorms of three [S.M.A.R.T. goals][13]. Those goals would be ones we could achieve as a software development team (specifically because the CRD and TT teams offer different skill sets) and address our problem statement. Finally, we wrote these goals in a meeting minutes document, clustering common goals and ultimately identifying themes that describe our application features. In the end, we identified three: support, feedback, and open source curricula.
+
+From here, we divided ourselves into sub-teams, repeating the goal-setting process with those teams—but in a way that was specific to our features. And if it's not obvious by now, we realized a web-based platform would be the most optimal and scalable solution for supporting students, educators, and parents by providing a hub for sharing and adapting proven curricula.
+
+To work efficiently, we needed to be adaptive, reinforcing structures that worked and eliminating those that didn't. For example, we put a lot of effort into crafting meeting agendas. We strive to include only those subjects we must discuss in-person and table everything else for offline discussions on Slack or individually organized calls. We practice this in real time, too. During our regular meetings on Google Hangouts, if someone brings up a topic that isn't highly relevant or urgent, the current stand-up lead (a role that rotates weekly) "parking lots" it until the end of the meeting. If we have space at the end, we pull from the parking lot, and if not, we reserve that discussion for a Slack thread.
+
+This prioritization structure has led to massive gains in meeting efficiency and a focus on progress updates, shared technical hurdle discussions, collective decision-making, and assigning actionable tasks (the next-steps a person has committed to taking, documented with their name attached for everyone to view).
+
+#### **Step 5: Prototyping**
+
+This is where the fun starts.
+
+Given our requirements—like an interactive user experience, the ability to collaborate on blogs and curricula, and the ability to receive feedback from our users—we began identifying the best technologies. Ultimately, we decided to build our web app with a ReactJS frontend and a Ruby on Rails backend. We chose these due to the extensive documentation and active community for both, and the well-maintained libraries that bridge the relationship between the two (e.g., react-on-rails). Since we chose Rails for our backend, it was obvious from the start that we'd work within a Model-View-Controller framework.
+
+Most of us didn't have previous experience with web development, neither on the frontend nor the backend. So, getting up and running with either technology independently presented a steep learning curve, and gluing the two together only steepened it. To centralize our work, we use an open-access GitHub repository. Given our relatively novice experience in web development, our success hinged on extremely efficient and open collaborations.
+
+And to explain that, we need to revisit the idea of structures. Some of ours include peer code reviews, where we can exchange best practices and reusable solutions; maintaining up-to-date tech and user documentation so we can look back and understand design decisions; and (my personal favorite) our questions bot on Slack, which gently reminds us to post and answer questions in a separate Slack #questions channel.
+
+We've also dabbled with other strategies, like instructional videos for generating basic React components and rendering them in Rails Views. I tried this and in my first video, [I covered a basic introduction to our repository structure][14] and best practices for generating React components. While this proved useful, our team has since realized the wealth of online resources that document various implementations of these technologies robustly. Also, we simply haven't had enough time (but we might revisit them in the future—stay tuned).
+
+We're also excited about our cloud-based implementation. We use Heroku to host our application and manage data storage. In future iterations, we plan to both expand upon our current features and configure a continuous integration/continuous delivery (CI/CD) pipeline using services like Jenkins integrated with GitHub.
+
+#### **Step 6: Testing**
+
+Since we've [just deployed][1], we are now in a testing stage. Our goals are to collect user feedback across our feature domains and our application experience as a whole, especially as they interact with our specific audiences. Given our original constraints (namely, time and people power), this iteration is the first of many to come. For example, future iterations will allow for individual users to register accounts and post external curricula directly on our site without going through the extra steps of email. We want to scale and maximize our efficiency, and that's part of the recipe we'll deploy in future iterations. As for user testing: We collect user feedback via our contact form, via informal testing within our team, and via structured focus groups. [We welcome your constructive feedback and collaboration][15].
+
+Our team was only able to unite new people with highly varied experience through the power of open principles and methodologies. Luckily enough, each one I described in this post is adaptable to virtually every team.
+
+Regardless of whether you work on a software development team, in a classroom, or, heck, [even in your family][16], principles like transparency and community are almost always the best foundation for a successful organization.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/2/building-curriculahub
+
+作者:[Tanner Johnson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/johnsontanner3
+[b]: https://github.com/lujun9972
+[1]: http://csbyus.org
+[2]: https://www2.ed.gov/programs/titleiparta/index.html
+[3]: https://docs.google.com/document/d/1tqV6B6Uk-QB7Psj1rX9tfCyW3E64_v6xDlhRZ-L2rq0/edit
+[4]: https://www.atlassian.com/team-playbook/plays/rules-of-engagement
+[5]: https://openpracticelibrary.com/practice/social-contract/
+[6]: https://opensource.com/open-organization/resources/open-org-definition
+[7]: https://services.google.com/fh/files/misc/images-of-computer-science-report.pdf
+[8]: https://drive.google.com/file/d/1_iK0ZRAXVwGX9owtjUUjNz3_2kbyYZ79/view?usp=sharing
+[9]: https://www.pmi.org/learning/library/top-five-causes-scope-creep-6675
+[10]: https://www.codeproject.com/Articles/42354/The-Art-of-Logging#what
+[11]: https://opensource.com/open-organization/16/2/6-steps-running-perfect-30-minute-meeting
+[12]: https://docs.google.com/document/d/1wdPRvFhMKPCrwOG2CGp7kP4rKOXrJKI77CgjMfaaXnk/edit?usp=sharing
+[13]: https://www.projectmanager.com/blog/how-to-create-smart-goals
+[14]: https://www.youtube.com/watch?v=52kvV0plW1E
+[15]: http://csbyus.org/
+[16]: https://opensource.com/open-organization/15/11/what-our-families-teach-us-about-organizational-life
diff --git a/sources/talk/20190220 Do Linux distributions still matter with containers.md b/sources/talk/20190220 Do Linux distributions still matter with containers.md
new file mode 100644
index 0000000000..c1c7886d0d
--- /dev/null
+++ b/sources/talk/20190220 Do Linux distributions still matter with containers.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Do Linux distributions still matter with containers?)
+[#]: via: (https://opensource.com/article/19/2/linux-distributions-still-matter-containers)
+[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
+
+Do Linux distributions still matter with containers?
+======
+There are two major trends in container builds: using a base image and building from scratch. Each has engineering tradeoffs.
+
+
+Some people say Linux distributions no longer matter with containers. Alternative approaches, like distroless and scratch containers, seem to be all the rage. It appears we are considering and making technology decisions based more on fashion sense and immediate emotional gratification than thinking through the secondary effects of our choices. We should be asking questions like: How will these choices affect maintenance six months down the road? What are the engineering tradeoffs? How does this paradigm shift affect our build systems at scale?
+
+It's frustrating to watch. If we forget that engineering is a zero-sum game with measurable tradeoffs—advantages and disadvantages, with costs and benefits of different approaches—we do ourselves a disservice, we do our employers a disservice, and we do our colleagues who will eventually maintain our code a disservice. Finally, we do all of the maintainers ([hail the maintainers!][1]) a disservice by not appreciating the work they do.
+
+### Understanding the problem
+
+To understand the problem, we have to investigate why we started using Linux distributions in the first place. I would group the reasons into two major buckets: kernels and other packages. Compiling kernels is actually fairly easy. Slackware and Gentoo (I still have a soft spot in my heart) taught us that.
+
+On the other hand, the tremendous amount of development and runtime software that needs to be packaged for a usable Linux system can be daunting. Furthermore, the only way you can ensure that millions of permutations of packages can be installed and work together is by using the old paradigm: compile it and ship it together as a thing (i.e., a Linux distribution). So, why do Linux distributions compile kernels and all the packages together? Simple: to make sure things work together.
+
+First, let's talk about kernels. The kernel is special. Booting a Linux system without a compiled kernel is a bit of a challenge. It's the core of a Linux operating system, and it's the first thing we rely on when a system boots. Kernels have a lot of different configuration options when they're being compiled that can have a tremendous effect on how hardware and software run on it. A secondary problem in this bucket is that system software, like compilers, C libraries, and interpreters, must be tuned for the options you built into the kernel. Gentoo taught us this in a visceral way, which turned everyone into a miniature distribution maintainer.
+
+Embarrassingly (because I have worked with containers for the last five years), I must admit that I have compiled kernels quite recently. I had to get nested KVM working on RHEL 7 so that I could run [OpenShift on OpenStack][2] virtual machines, in a KVM virtual machine on my laptop, as well as [our Container Development Kit (CDK)][3]. #justsayin Suffice it to say, I fired RHEL 7 up on a brand new 4.X kernel at the time. Like any good sysadmin, I was a little worried that I missed some important configuration options and patches. And, of course, I had missed some things. Sleep mode stopped working right, my docking station stopped working right, and there were numerous other small, random errors. But it did work well enough for a live demo of OpenShift on OpenStack, in a single KVM virtual machine on my laptop. Come on, that's kinda' fun, right? But I digress…
+
+Now, let's talk about all the other packages. While the kernel and associated system software can be tricky to compile, the much, much bigger problem from a workload perspective is compiling thousands and thousands of packages to give us a usable Linux system. Each package requires subject matter expertise. Some pieces of software require running only three commands: **./configure**, **make**, and **make install**. Others require a lot of subject matter expertise ranging from adding users and configuring specific defaults in **/etc** to running post-install scripts and adding systemd unit files. The set of skills necessary for the thousands of different pieces of software you might use is daunting for any single person. But, if you want a usable system with the ability to try new software whenever you want, you have to learn how to compile and install the new software before you can even begin to learn to use it. That's Linux without a Linux distribution. That's the engineering problem you are agreeing to when you forgo a Linux distribution.
+
+The point is that you have to build everything together to ensure it works together with any sane level of reliability, and it takes a ton of knowledge to build a usable cohort of packages. This is more knowledge than any single developer or sysadmin is ever going to reasonably learn and retain. Every problem I described applies to your [container host][4] (kernel and system software) and [container image][5] (system software and all other packages)—notice the overlap; there are compilers, C libraries, interpreters, and JVMs in the container image, too.
+
+### The solution
+
+You already know this, but Linux distributions are the solution. Stop reading and send your nearest package maintainer (again, hail the maintainers!) an e-card (wait, did I just give my age away?). Seriously though, these people do a ton of work, and it's really underappreciated. Kubernetes, Istio, Prometheus, and Knative: I am looking at you. Your time is coming too, when you will be in maintenance mode, overused, and underappreciated. I will be writing this same article again, probably about Kubernetes, in about seven to 10 years.
+
+### First principles with container builds
+
+There are tradeoffs to building from scratch and building from base images.
+
+#### Building from base images
+
+Building from base images has the advantage that most build operations are nothing more than a package install or update. It relies on a ton of work done by package maintainers in a Linux distribution. It also has the advantage that a patching event six months—or even 10 years—from now (with RHEL) is an operations/systems administrator event (yum update), not a developer event (that requires picking through code to figure out why some function argument no longer works).
+
+Let's double-click on that a bit. Application code relies on a lot of libraries ranging from JSON munging libraries to object-relational mappers. Unlike the Linux kernel and Glibc, these types of libraries change with very little regard to breaking API compatibility. That means that three years from now your patching event likely becomes a code-changing event, not a yum update event. Got it, let that sink in. Developers, you are getting paged at 2 AM if the security team can't find a firewall hack to block the exploit.
+
+Building from a base image is not perfect; there are disadvantages, like the size of all the dependencies that get dragged in. This will almost always make your container images larger than building from scratch. Another disadvantage is you will not always have access to the latest upstream code. This can be frustrating for developers, especially when you just want to get something out the door, but not as frustrating as being paged to look at a library you haven't thought about in three years that the upstream maintainers have been changing the whole time.
+
+If you are a web developer and rolling your eyes at me, I have one word for you: DevOps. That means you are carrying a pager, my friend.
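+
+To make the base-image approach concrete, here is a minimal Dockerfile sketch (the image tag and package names are illustrative, not a recommendation):
+
+```
+# Build on a distribution base image; the package maintainers did the hard work.
+FROM fedora:29
+
+# Installing, and later patching, is a package-manager event, not a code event.
+RUN dnf install -y python3 python3-flask && dnf clean all
+
+COPY app.py /srv/app.py
+CMD ["python3", "/srv/app.py"]
+```
+
+A patching event here is mostly a rebuild that pulls in updated packages, which keeps it an operations task rather than a code archaeology exercise.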
+
+#### Building from scratch
+
+Scratch builds have the advantage of being really small. When you don't rely on a Linux distribution in the container, you have a lot of control, which means you can customize everything for your needs. This is a best-of-breed model, and it's valid in certain use cases. Another advantage is you have access to the latest packages. You don't have to wait for a Linux distro to update anything. You are in control, so you choose when to spend the engineering work to incorporate new software.
+
+Remember, there is a cost to controlling everything. Often, updating to new libraries with new features drags in unwanted API changes, which means fixing incompatibilities in code (in other words, [shaving yaks][6]). Shaving yaks at 2 AM when the application doesn't work is not fun. Luckily, with containers, you can roll back and shave the yaks the next business day, but it will still eat into your time for delivering new value to the business, new features to your applications. Welcome to the life of a sysadmin.
+
+OK, that said, there are times that building from scratch makes sense. I will completely concede that statically compiled Golang programs and C programs are two decent candidates for scratch/distroless builds. With these types of programs, every container build is a compile event. You still have to worry about API breakage three years from now, but if you are a Golang shop, you should have the skillset to fix things over time.
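+
+Here is a minimal sketch of that scratch pattern, assuming a small, statically compiled Go program (file names and image tags are illustrative):
+
+```
+# Stage 1: compile a static binary using a distribution-provided toolchain image
+FROM golang:1.11 AS build
+WORKDIR /src
+COPY main.go .
+RUN CGO_ENABLED=0 go build -o /app main.go
+
+# Stage 2: ship only the binary; there is no package manager to update later
+FROM scratch
+COPY --from=build /app /app
+ENTRYPOINT ["/app"]
+```
+
+Note that even here the build stage leans on a distribution-provided toolchain image, and every future security fix means recompiling and redeploying rather than running a package update.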
+
+### Conclusion
+
+Basically, Linux distributions do a ton of work to save you time—on a regular Linux system or with containers. The knowledge that maintainers have is tremendous and leveraged so much without really being appreciated. The adoption of containers has made the problem even worse because it's even further abstracted.
+
+With container hosts, a Linux distribution offers you access to a wide hardware ecosystem, ranging from tiny ARM systems, to giant 128 CPU x86 boxes, to cloud-provider VMs. They offer working container engines and container runtimes out of the box, so you can just fire up your containers and let somebody else worry about making things work.
+
+For container images, Linux distributions offer you easy access to a ton of software for your projects. Even when you build from scratch, you will likely look at how a package maintainer built and shipped things—a good artist is a good thief—so, don't undervalue this work.
+
+So, thank you to all of the maintainers in Fedora, RHEL (Frantisek, you are my hero), Debian, Gentoo, and every other Linux distribution. I appreciate the work you do, even though I am a "container guy."
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/linux-distributions-still-matter-containers
+
+作者:[Scott McCarty][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/fatherlinux
+[b]: https://github.com/lujun9972
+[1]: https://aeon.co/essays/innovation-is-overvalued-maintenance-often-matters-more
+[2]: https://blog.openshift.com/openshift-on-openstack-delivering-applications-better-together/
+[3]: https://developers.redhat.com/blog/2018/02/13/red-hat-cdk-nested-kvm/
+[4]: https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/#h.8tyd9p17othl
+[5]: https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/#h.dqlu6589ootw
+[6]: https://en.wiktionary.org/wiki/yak_shaving
diff --git a/sources/talk/20190222 Developer happiness- What you need to know.md b/sources/talk/20190222 Developer happiness- What you need to know.md
new file mode 100644
index 0000000000..8ef7772193
--- /dev/null
+++ b/sources/talk/20190222 Developer happiness- What you need to know.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Developer happiness: What you need to know)
+[#]: via: (https://opensource.com/article/19/2/developer-happiness)
+[#]: author: (Bart Copeland https://opensource.com/users/bartcopeland)
+
+Developer happiness: What you need to know
+======
+Developers need the tools and the freedom to code quickly, without getting bogged down by compliance and security.
+
+
+A person needs the right tools for the job. There's nothing as frustrating as getting halfway through a car repair, for instance, only to discover you don't have the specialized tool you need to complete the job. The same concept applies to developers: you need the tools to do what you are best at, without disrupting your workflow with compliance and security needs, so you can produce code faster.
+
+Over half—51%, to be specific—of developers spend only one to four hours each day programming, according to ActiveState's recent [Developer Survey 2018: Open Source Runtime Pains][1]. In other words, the majority of developers spend less than half of their time coding. According to the survey, 50% of developers say security is one of their biggest concerns, but 67% of developers choose not to add a new language when coding because of the difficulties related to corporate policies.
+
+The result is that developers have to devote time to non-coding activities like retrofitting software to meet security and compliance criteria that are checked only after the software has been built and the languages chosen. And they won't choose the best tool or language for the job because of corporate policies. Their satisfaction goes down and risk goes up.
+
+So, developers aren't able to devote time to high-value work. This creates additional business risk because their time-to-market is slowed, and the organization increases tech debt by not empowering developers to decide on "the best" tech, unencumbered by corporate policy drag.
+
+### Baking in security and compliance workflows
+
+How can we solve this issue? One way is to integrate security and compliance workflows into the software development process in four easy steps:
+
+#### 1\. Gather your forces
+
+Get support from everyone involved. This is an often-forgotten but critical first step. Make sure to consider a wide range of stakeholders, including:
+
+ * DevOps
+ * Developers
+ * InfoSec
+ * Legal/compliance
+ * IT security
+
+
+
+Stakeholders want to understand the business benefits, so make a solid case for eliminating the security and compliance checkpoints after software builds. You can consider any (or all) of the following in building your business case: time savings, opportunity cost, and developer productivity. By integrating security and compliance workflows into the development process, you also avoid retrofitting of languages.
+
+#### 2\. Find trustworthy sources
+
+Next, choose the trusted sources that can be used, along with their license and security requirements. Consider including information such as:
+
+ * Restrictions on usage based on environment or application type and version controls per language
+ * Which open source components are allowable, e.g., specific packages
+ * Which licenses can be used in which types of environments (e.g., research vs. production)
+ * The definition of security levels, acceptable vulnerability risk levels, what risk levels trigger an action, what that action would be, and who would be responsible for its implementation
+
+
+
+#### 3\. Incorporate security and compliance from day one
+
+The upshot of incorporating security and compliance workflows is that it ultimately bakes security and compliance into the first line of code. It eliminates the drag of corporate policy because you're coding to spec versus having to fix things after the fact. But to do this, consider mechanisms for automatically scanning code as it's being built, along with using agentless monitoring of your runtime code. You're freeing up your time, and you'll also be able to programmatically enforce policies to ensure compliance across your entire organization.
+
+New vulnerabilities arise, and new patches and versions become available. Consequently, security and compliance need to be considered when deploying code into production and also when running code. You need to know what, if any, code is at risk and where that code is running. So, the process for deploying and running code should include monitoring, reporting, and updating code in production.
+
+By integrating security and compliance into your software development process from the start, you can also benefit by tracking where your code is running once deployed and be alerted of new threats as they arise. You will be able to track when your applications were vulnerable and respond with automatic enforcement of your software policies.
+
+If your software development process has security and compliance workflows baked in, you will improve your productivity. And you'll be able to measure value through increased time spent coding; gains in security and stability; and cost- and time-savings in maintenance and discovery of security and compliance threats.
+
+### Happiness through integration
+
+If you don't develop and update software, your organization can't go forward. Developers are a linchpin in the success of your company, which means they need the tools and the freedom to code quickly. You can't let compliance and security needs—though they are critical—bog you down. Developers clearly worry about security, so the happy medium is to "shift left" and integrate security and compliance workflows from the start. You'll get more done, get it right the first time, and spend far less time retrofitting code.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/developer-happiness
+
+作者:[Bart Copeland][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bartcopeland
+[b]: https://github.com/lujun9972
+[1]: https://www.activestate.com/company/press/press-releases/activestate-developer-survey-examines-open-source-challenges/
diff --git a/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md b/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md
new file mode 100644
index 0000000000..bb7dd14943
--- /dev/null
+++ b/sources/talk/20190223 No- Ubuntu is NOT Replacing Apt with Snap.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (No! Ubuntu is NOT Replacing Apt with Snap)
+[#]: via: (https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+No! Ubuntu is NOT Replacing Apt with Snap
+======
+
+Stop believing the rumors that Ubuntu is planning to replace Apt with Snap in the [Ubuntu 19.04 release][1]. These are only rumors.
+
+![Snap replacing apt rumors][2]
+
+Don’t get what I am talking about? Let me give you some context.
+
+There is a ‘blueprint’ on Ubuntu’s Launchpad website, titled ‘Replace APT with snap as default package manager’. It talks about replacing Apt (the package manager at the heart of Debian) with Snap (a new packaging system by Ubuntu).
+
+> Thanks to Snap, the need for APT is disappearing, fast… why don’t we use snap at the system level?
+
+The post further says: “Imagine, for example, being able to run “sudo snap install cosmic” to upgrade to the current release, “sudo snap install --beta disco” (in March) to upgrade to a beta release, or, for that matter, “sudo snap install --edge disco” to upgrade to a pre-beta release. It would make the whole process much easier, and updates could simply be delivered as updates to the corresponding snap, which could then just be pushed to the repositories and there it is. This way, instead of having a separate release updater, it would be possible to A, run all system updates completely and silently in the background to avoid nagging the user (a la Chrome OS), and B, offer release upgrades in the GNOME software store, Mac-style, as banners, so the user can install them easily. It would make the user experience both more consistent and even more user-friendly than it currently is.”
+
+It might sound good and promising and if you take a look at [this link][3], even you might start believing the rumor. Why? Because at the bottom of the blueprint information, it lists Ubuntu-founder Mark Shuttleworth as the approver.
+
+![Apt being replaced with Snap blueprint rumor][4]Mark Shuttleworth’s name adds to the confusion
+
+The rumor got fanned when the Switch to Linux YouTube channel covered it. You can watch the video from around 11:30.
+
+
+
+When this ‘news’ was brought to my attention, I reached out to Alan Pope of Canonical and asked him if he or his colleagues at Canonical (Ubuntu’s parent company) could confirm it.
+
+Alan clarified that the so-called blueprint was not associated with the official Ubuntu team. It was created as a proposal by a community member not affiliated with Ubuntu.
+
+> That’s not anything official. Some random community person made it. Anyone can write a blueprint.
+>
+> Alan Pope, Canonical
+
+Alan further elaborated that anyone can create such blueprints and tag Mark Shuttleworth or other Ubuntu members in it. Just because Mark’s name was listed as the approver, it doesn’t mean he already approved the idea.
+
+Canonical has no such plans to replace Apt with Snap. It’s not as simple as the blueprint in question suggests.
+
+After talking with Alan, I decided to not write about this topic because I don’t want to fan baseless rumors and confuse people.
+
+Unfortunately, the ‘replace Apt with Snap’ blueprint is still being shared on various Ubuntu and Linux related groups and forums. Alan had to publicly dismiss these rumors in a series of tweets:
+
+> Seen this [#Ubuntu][5] blueprint being shared around the internet. It's not official, not a thing we're doing. Just because someone made a blueprint, doesn't make it fact.
+>
+> — Alan Pope 🇪🇺🇬🇧 (@popey) [February 23, 2019][6]
+
+I don’t want you, the It’s FOSS reader, to fall for such silly rumors, so I quickly penned this article.
+
+If you come across ‘apt being replaced with snap’ discussion, you may tell people that it’s not true and provide them this link as a reference.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-snap-replaces-apt-blueprint/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/ubuntu-19-04-release-features/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/snap-replacing-apt.png?resize=800%2C450&ssl=1
+[3]: https://blueprints.launchpad.net/ubuntu/+spec/package-management-default-snap
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/apt-snap-blueprint.jpg?ssl=1
+[5]: https://twitter.com/hashtag/Ubuntu?src=hash&ref_src=twsrc%5Etfw
+[6]: https://twitter.com/popey/status/1099238146393468931?ref_src=twsrc%5Etfw
diff --git a/sources/talk/20190225 Teaching scientists how to share code.md b/sources/talk/20190225 Teaching scientists how to share code.md
new file mode 100644
index 0000000000..074da27476
--- /dev/null
+++ b/sources/talk/20190225 Teaching scientists how to share code.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Teaching scientists how to share code)
+[#]: via: (https://opensource.com/article/19/2/open-science-git)
+[#]: author: (Jon Tennant https://opensource.com/users/jon-tennant)
+
+Teaching scientists how to share code
+======
+This course teaches them how to set up a GitHub project, index their project in Zenodo, and integrate Git into an RStudio workflow.
+
+
+Would it surprise you to learn that most of the world's scholarly research is not owned by the people who funded it or who created it? Rather, it's owned by private corporations and locked up in proprietary systems, leading to [problems][1] around sharing, reuse, and reproducibility.
+
+The open science movement is challenging this system, aiming to give researchers control, ownership, and freedom over their work. The [Open Science MOOC][2] (massively open online community) is a mission-driven project launched in 2018 to kick-start an open scientific revolution and foster more partnerships between open source software and open science.
+
+The Open Science MOOC is a peer-to-peer community of practice, based around sharing knowledge and ideas, learning new skills, and using these things to develop as individuals so research communities can grow as part of a wider cultural shift towards openness.
+
+### The curriculum
+
+The Open Science MOOC is divided into 10 core modules, from the principles of open science to becoming an open science advocate.
+
+The first module, [Open Research Software and Open Source][3], was released in late 2018. It includes three main tasks, all designed to help make research workflows more efficient and more open for collaboration:
+
+#### 1\. Setting up your first GitHub project
+
+GitHub is a powerful project management tool, both for coders and non-coders. This task teaches how to create a community around the platform, select an appropriate license, and write good documentation (including README files, contributing guidelines, and codes of conduct) to foster open collaboration and a welcoming community.
+
+#### 2\. Indexing your project in Zenodo
+
+[Zenodo][4] is an open science platform that seamlessly integrates with GitHub to help make projects more permanent, reusable, and citable. This task explains how webhooks between Zenodo and GitHub allow new versions of projects to become permanently archived as they progress. This is critical for helping researchers get a [DOI][5] for their work so they can receive full credit for all aspects of a project. As citations are still a primary form of "academic capital," this is essential for researchers.
+
+#### 3\. Integrating Git into an RStudio workflow
+
+This task is about giving research a mega-boost through greater collaborative efficiency and reproducibility. Git enables version control in all forms of text-based content, including data analysis and writing papers. Each time you save your work during the development process, Git saves time-stamped copies. This saves the hassle of trying to "roll back" projects if you delete a file or text by mistake, and eliminates horrific file-naming conventions. (For example, does FINAL_Revised_2.2_supervisor_edits_ver1.7_scream.txt look familiar?) Getting Git to interface with RStudio is the painful part, but this task goes through it, step by step, to ease the stress.
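+
+For anyone who wants a preview before starting the task, the underlying Git commands are standard (RStudio's Git pane wraps the same operations; the file names here are just examples):
+
+```
+git init                                  # turn a project folder into a repository
+git add analysis.R data/raw.csv           # stage the files to snapshot
+git commit -m "First pass at the analysis"
+git log --oneline                         # every commit is a time-stamped, recoverable copy
+git checkout <commit-id> -- analysis.R    # restore an earlier version of a file
+```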
+
+The third task also gives students the ability to interact directly with the MOOC by submitting pull requests to demonstrate their skills. This also adds their name to an online list of open source champions (aka "open sourcerers").
+
+The MOOC's inherently interactive style is much more valuable than listening to someone talk at you, either on or off screen, like with many traditional online courses or educational programs. Each task is backed up by expert-gathered knowledge, so students get a rigorous, dual-learning experience.
+
+### Empowering researchers
+
+The Open Science MOOC strives to be as open as possible—this means we walk the walk and talk the talk. We are built upon a solid values-based foundation of freedom and equitable access to research. We see this route towards widespread adoption of best scientific practices as an essential part of the research process.
+
+Everything we produce is openly developed and openly licensed for maximum engagement, sharing, and reuse. An open source workflow underpins our development. All of this happens openly around channels such as [Slack][6] and [GitHub][7] and helps to make the community much more coherent.
+
+If we can instill the value of open source into modern research, this would empower current and future generations of researchers to think more about fundamental freedoms around knowledge production. We think that is something worth working towards as a community.
+
+The Open Science MOOC combines the best elements of the open education, open science, and open source worlds. If you're ready to join, [sign up for the full course][3], which is, of course, free.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/open-science-git
+
+作者:[Jon Tennant][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jon-tennant
+[b]: https://github.com/lujun9972
+[1]: https://www.theguardian.com/science/political-science/2018/jun/29/elsevier-are-corrupting-open-science-in-europe
+[2]: https://opensciencemooc.eu/
+[3]: https://eliademy.com/catalog/oer/module-5-open-research-software-and-open-source.html
+[4]: https://zenodo.org/
+[5]: https://en.wikipedia.org/wiki/Digital_object_identifier
+[6]: https://osmooc.herokuapp.com/
+[7]: https://open-science-mooc-invite.herokuapp.com/
diff --git a/sources/talk/20190226 Reducing security risks with centralized logging.md b/sources/talk/20190226 Reducing security risks with centralized logging.md
new file mode 100644
index 0000000000..60bbd7a80b
--- /dev/null
+++ b/sources/talk/20190226 Reducing security risks with centralized logging.md
@@ -0,0 +1,82 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Reducing security risks with centralized logging)
+[#]: via: (https://opensource.com/article/19/2/reducing-security-risks-centralized-logging)
+[#]: author: (Hannah Suarez https://opensource.com/users/hcs)
+
+Reducing security risks with centralized logging
+======
+Centralizing logs and structuring log data for processing can mitigate risks related to insufficient logging.
+
+
+
+Logging and log analysis are essential to securing infrastructure, particularly when we consider common vulnerabilities. This article, based on my lightning talk [Let's use centralized log collection to make incident response teams happy][1] at FOSDEM'19, aims to raise awareness about the security concerns around insufficient logging, offer a way to avoid the risk, and advocate for more secure practices _(disclaimer: I work for NXLog)._
+
+### Why log collection and why centralized logging?
+
+Logging is, to be specific, an append-only sequence of records written to disk. In practice, logs help you investigate an infrastructure issue as you try to find a cause for misbehavior. A challenge comes up when you have heterogeneous systems with their own standards and formats, and you want to be able to handle and process these in a dependable way. This often comes at the cost of metadata. Centralized logging solutions require commonality, and that commonality often removes the rich metadata many open source logging tools provide.
+
+### The security risk of insufficient logging and monitoring
+
+The Open Web Application Security Project ([OWASP][2]) is a nonprofit organization that contributes to the industry with incredible projects (including many [tools][3] focusing on software security). The organization regularly reports on the riskiest security challenges for application developers and maintainers. In its most recent report on the [top 10 most critical web application security risks][4], OWASP added Insufficient Logging and Monitoring to its list. OWASP warns of risks due to insufficient logging, detection, monitoring, and active response in the following types of scenarios.
+
+ * Important auditable events, such as logins, failed logins, and high-value transactions are not logged.
+ * Warnings and errors generate none, inadequate, or unclear log messages.
+ * Logs are only being stored locally.
+ * The application is unable to detect, escalate, or alert for active attacks in real time or near real time.
+
+
+
+These instances can be mitigated by centralizing logs (i.e., not storing logs locally) and structuring log data for processing (i.e., in alerting dashboards and security suites).
+
+For example, imagine a DNS query leads to a malicious site named **hacked.badsite.net**. With DNS monitoring, administrators monitor and proactively analyze DNS queries and responses. Effective DNS monitoring relies on sufficient logging and log collection to catch potential issues, as well as on structuring the resulting DNS log for further processing:
+
+```
+2019-01-29
+Time (GMT) Source Destination Protocol-Info
+12:42:42.112898 SOURCE_IP xxx.xx.xx.x DNS Standard query 0x1de7 A hacked.badsite.net
+```
+
+You can try this yourself and run through other examples and snippets with the [NXLog Community Edition][5] _(disclaimer again: I work for NXLog)._
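+
+If you just want to see the mechanics of getting an event off the local host, here is a minimal sketch using Python's standard `logging` module; the logger name, the message, and the use of `localhost` as the collector address are illustrative stand-ins, not something prescribed by this article:
+
+```
+import logging
+import logging.handlers
+
+# Build a logger that ships records to a central collector over syslog/UDP
+# instead of leaving them only in a local file.
+logger = logging.getLogger("auth-audit")
+logger.setLevel(logging.INFO)
+
+# Replace "localhost" with the address of your central log collector.
+central = logging.handlers.SysLogHandler(address=("localhost", 514))
+central.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
+logger.addHandler(central)
+
+# An auditable event that would otherwise live only on this machine.
+logger.warning("failed login for user amy from 203.0.113.7")
+```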
+
+### Important aside: unstructured vs. structured data
+
+It's important to take a moment and consider the log data format. For example, let's consider this log message:
+
+```
+debug1: Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2
+```
+
+This log contains some predefined structure, such as a metadata keyword before the colon (**debug1**). However, the rest of the log message is an unstructured string (**Failed password for invalid user amy from SOURCE_IP port SOURCE_PORT ssh2**). So, while the message is easily readable by a human, it is not a format a computer can easily parse.
+
+Unstructured event data poses limitations including difficulty of parsing, searching, and analyzing the logs. The important metadata is too often set in an unstructured data field in the form of a freeform string like the example above. Logging administrators will come across this problem at some point as they attempt to standardize/normalize log data and centralize their log sources.
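+
+As a small illustration of the difference structure makes, the following Python sketch (not from the article) turns that freeform string into a JSON record a machine can filter on. The field names, the regular expression, and the concrete IP and port standing in for the SOURCE_IP and SOURCE_PORT placeholders above are all assumptions; a real parser would need a pattern per message variant:
+
+```
+import json
+import re
+
+# The unstructured sshd-style message from above, with made-up values in
+# place of the SOURCE_IP and SOURCE_PORT placeholders.
+raw = "debug1: Failed password for invalid user amy from 203.0.113.7 port 42422 ssh2"
+
+# One possible pattern for this single message shape.
+pattern = re.compile(
+    r"^(?P<level>\w+): Failed password for invalid user (?P<user>\S+) "
+    r"from (?P<src_ip>\S+) port (?P<src_port>\d+) (?P<proto>\S+)$"
+)
+
+match = pattern.match(raw)
+if match:
+    event = {"event": "failed_password", **match.groupdict()}
+    print(json.dumps(event))  # structured output, ready for a dashboard or SIEM
+```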
+
+### Where to go next
+
+Alongside centralizing and structuring your logs, make sure you're collecting the right log data—Sysmon, PowerShell, Windows EventLog, DNS debug, NetFlow, ETW, kernel monitoring, file integrity monitoring, database logs, external cloud logs, and so on. Also have the right tools and processes in place to collect, aggregate, and help make sense of the data.
+
+Hopefully, this gives you a starting point for centralizing log collection across diverse sources and sending the results to destinations such as dashboards, monitoring software, analytics software, and specialized tools like security information and event management (SIEM) suites.
+
+What does your centralized logging strategy look like? Share your thoughts in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/reducing-security-risks-centralized-logging
+
+作者:[Hannah Suarez][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hcs
+[b]: https://github.com/lujun9972
+[1]: https://fosdem.org/2019/schedule/event/lets_use_centralized_log_collection_to_make_incident_response_teams_happy/
+[2]: https://www.owasp.org/index.php/Main_Page
+[3]: https://github.com/OWASP
+[4]: https://www.owasp.org/index.php/Top_10-2017_Top_10
+[5]: https://nxlog.co/products/nxlog-community-edition/download
diff --git a/sources/talk/20190227 Let your engineers choose the license- A guide.md b/sources/talk/20190227 Let your engineers choose the license- A guide.md
new file mode 100644
index 0000000000..81a8dd8ee2
--- /dev/null
+++ b/sources/talk/20190227 Let your engineers choose the license- A guide.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Let your engineers choose the license: A guide)
+[#]: via: (https://opensource.com/article/19/2/choose-open-source-license-engineers)
+[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
+
+Let your engineers choose the license: A guide
+======
+Enabling engineers to make licensing decisions is wise and efficient.
+
+
+Imagine you are working for a company that will be starting a new open source community project. Great! You have taken a positive first step to give back and enable a virtuous cycle of innovation that the open source community-based development model provides.
+
+But what about choosing an open source license for your project? You ask your manager for guidance, and she provides some perspective but quickly realizes that there is no formal company policy or guidelines. As any wise manager would do, she asks you to develop formal corporate guidelines for choosing an open source license for such projects.
+
+Simple, right? You may be surprised to run into some unexpected challenges. This article describes some of the complexities you may encounter and offers some perspective based on my recent experience with a similar project at Red Hat.
+
+It may be useful to quickly review some of the more common forms of open source licensing. Open source licenses can generally be placed into two main buckets: copyleft and permissive.
+
+> Copyleft licenses, such as the GPL, allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. Copyleft additionally provides that essential software freedoms (run, study, change, and distribution) will be allowed and ensured for any recipients of that code. A copyleft license prohibits restrictions or limitations on these essential software freedoms.
+>
+> Permissive licenses, similar to copyleft, also generally allow access to source code, modifications to the source, and distribution of the source or binary versions in their original or modified forms. However, unlike copyleft licenses, additional restrictions may be included with these forms of licenses, including proprietary limitations such as prohibiting the creation of modified works or further distribution.
+
+Red Hat is one of the leading open source development companies, with thousands of open source developers continuously working upstream and contributing to an assortment of open source projects. When I joined Red Hat, I was very familiar with its flagship Red Hat Enterprise Linux offering, often referred to as RHEL. Although I fully expected that the company contributes under a wide assortment of licenses based on project requirements, I thought our preference and top recommendation for our engineers would be GPLv2 due to our significant involvement with Linux. In addition, GPL is a copyleft license, and copyleft ensures that the essential software freedoms (run, study, change, distribute) will be extended to any recipients of that code. What could be better for sustaining the open source ecosystem than a copyleft license?
+
+Fast forward on my journey to craft internal license-choice guidelines for Red Hat: the end result was not to have any license preference at all. Instead, we delegate that responsibility, to the maximum extent possible, to our engineers. Why? Because each open source project and community is unique, and there are social aspects to these communities that may have preferences towards various licensing philosophies (e.g., copyleft or permissive). Engineers working in those communities understand all these issues and are best equipped to choose the proper license based on that knowledge. Mandating certain licenses for code contributions often conflicts with these community norms and results in a reduction of, or an outright bar on, contributed content.
+
+For example, perhaps your organization believes that the latest GPL license (GPLv3) is the best for your company due to its updated provisions. If you mandated GPLv3 for all future contributions vs. GPLv2, you would be prohibited from contributing code to the Linux kernel, since that is a GPLv2 project and will likely remain that way for a very long time. Your engineers, being part of that open source community project, would know that and would automatically choose GPLv2 in the absence of such a mandate.
+
+Bottom line: Enabling engineers to make these decisions is wise and efficient.
+
+To the extent your organization may have to restrict the use of certain licenses (e.g., due to certain intellectual property concerns), this should naturally be part of your guidelines or policy. I believe it is much better to delegate to the maximum extent possible to those who understand all the nuances, politics, and licensing philosophies of these varied communities, and to restrict license choice only when absolutely necessary. Even having a preference for a certain license over another can be problematic. Open source engineers may have deeply rooted feelings about copyleft (either for or against), and forcing one license over the other (unless absolutely necessary for business reasons) may result in ill will and ostracize an engineer or engineering department within your organization.
+
+Red Hat's guidelines are very simple and are summarized below:
+
+ 1. We suggest choosing an open source license from a set of 10 different licenses that are very common and meet the needs of most new open source projects.
+
+ 2. We allow the use of other licenses, but we ask that a reason be provided to the open source legal team so we can collect and better understand some of the new and perhaps evolving needs of the open source communities that we serve. (As stated above, our engineers are on the front lines and are best equipped to deliver this type of information.)
+
+ 3. The open source legal team always has the right to override a decision, but this would be very rare and only would occur if we were aware of some community or legal concern regarding a specific license or project.
+
+ 4. Publishing source code without a license is never permitted.
+
+In summary, the advantages of these guidelines are enormous. They are very efficient and lead to a very low-friction development and approval system within our organization.
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/choose-open-source-license-engineers
+
+作者:[Jeffrey Robert Kaufman][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jkaufman
+[b]: https://github.com/lujun9972
diff --git a/sources/talk/20190228 A Brief History of FOSS Practices.md b/sources/talk/20190228 A Brief History of FOSS Practices.md
new file mode 100644
index 0000000000..58e90a8efa
--- /dev/null
+++ b/sources/talk/20190228 A Brief History of FOSS Practices.md
@@ -0,0 +1,113 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A Brief History of FOSS Practices)
+[#]: via: (https://itsfoss.com/history-of-foss)
+[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
+
+A Brief History of FOSS Practices
+======
+
+We focus a great deal on Linux and Free & Open Source Software at It’s FOSS. Ever wondered how old such FOSS practices are? How did this practice come about? What is the history behind this revolutionary concept?
+
+In this history and trivia article, let’s take a look back in time and note some interesting initiatives of the past that have grown huge today.
+
+### Origins of FOSS
+
+![History of FOSS][1]
+
+The origins of FOSS go back to the 1950s. When hardware was purchased, there was no additional charge for the bundled software, and the source code would also be available so that possible bugs in the software could be fixed.
+
+It was actually common practice back then for users to have the freedom to customize the code.
+
+At that time, it was mostly academics and industry researchers who collaborated to develop such software.
+
+The term “Open Source” was not around yet. Instead, the popular term at that time was “Public Domain Software”. Ideologically, the two are very much [different][2] in nature today, even though they may sound similar.
+
+
+
+Back in 1955, some users of the [IBM 701][3] computer system in Los Angeles voluntarily founded a group called SHARE. The “SHARE Program Library Agency” (SPLA) distributed information and software through magnetic tapes.
+
+The technical information shared covered programming languages, operating systems, database systems, and user experiences for enterprise users of small, medium, and large-scale IBM computers.
+
+The initiative, now more than 60 years old, continues to pursue its goals quite actively. SHARE’s next event is [SHARE Phoenix 2019][4], and you can download and check out its complete timeline [here][5].
+
+### The GNU Project
+
+Announced at MIT on September 27, 1983, by Richard Stallman, the GNU Project is what immensely empowers and supports the Free Software Community today.
+
+### Free Software Foundation
+
+Richard Stallman’s “Free Software Movement” established a new norm for developing Free Software.
+
+He founded the Free Software Foundation (FSF) on October 4, 1985, to support the free software movement. Software that ensures that end users have the freedom to use, study, share, and modify it came to be called Free Software.
+
+**Free as in Free Speech, Not Free Beer**
+
+
+
+The Free Software Movement laid down the following rules to establish the distinctiveness of the idea:
+
+ * The freedom to run the program as you wish, for any purpose (freedom 0).
+ * The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
+ * The freedom to redistribute copies so you can help your neighbor (freedom 2).
+ * The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
+
+### The Linux Kernel
+
+
+
+How could we miss this section at It’s FOSS! The Linux kernel was released as freely modifiable source code in 1991 by Linus Torvalds. At first, it was not Free Software, nor did it use an open source software license. In February 1992, Linux was relicensed under the GPL.
+
+### The Linux Foundation
+
+The Linux Foundation’s goal is to empower open source projects to accelerate technology development and commercial adoption. It grew out of an initiative taken in 2000, the [Open Source Development Labs][6] (OSDL), which later merged with the [Free Standards Group][7].
+
+Linus Torvalds works at The Linux Foundation, which provides him with full support so that he can work full-time on improving Linux.
+
+### Open Source
+
+When the source code of [Netscape][8] Communicator was released in 1998, the label “Open Source” was adopted by a group of individuals at a strategy session held on February 3rd, 1998 in Palo Alto, California. The idea grew from a visionary realization that the [Netscape announcement][9] had changed the way people looked at commercial software.
+
+This opened up a whole new world, creating a new perspective that revealed the superiority and advantage of an open development process that could be powered by collaboration.
+
+[Christine Peterson][10] was the member of that group who originally suggested the term “Open Source” as we know it today (as mentioned [earlier][11]).
+
+### Evolution of Business Models
+
+The concept of open source is a huge phenomenon right now, and several companies continue to adopt the open source approach to this day. [As of April 2015, 78% of companies used Open Source Software][12] with different [Open Source Licenses][13].
+
+Several organisations have adopted [different business models][14] for Open Source. Red Hat and Mozilla are two good examples.
+
+So this was a brief recap of some interesting facts from FOSS history. Do share your thoughts in the comments below.
+
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/history-of-foss
+
+作者:[Avimanyu Bandyopadhyay][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/avimanyu/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/history-of-foss.png?resize=800%2C450&ssl=1
+[2]: https://opensource.org/node/878
+[3]: https://en.wikipedia.org/wiki/IBM_701
+[4]: https://event.share.org/home
+[5]: https://www.share.org/d/do/11532
+[6]: https://en.wikipedia.org/wiki/Open_Source_Development_Labs
+[7]: https://en.wikipedia.org/wiki/Free_Standards_Group
+[8]: https://en.wikipedia.org/wiki/Netscape
+[9]: https://web.archive.org/web/20021001071727/http://wp.netscape.com:80/newsref/pr/newsrelease558.html
+[10]: https://en.wikipedia.org/wiki/Christine_Peterson
+[11]: https://itsfoss.com/nanotechnology-open-science-ai/
+[12]: https://www.zdnet.com/article/its-an-open-source-world-78-percent-of-companies-run-open-source-software/
+[13]: https://itsfoss.com/open-source-licenses-explained/
+[14]: https://opensource.com/article/17/12/open-source-business-models
diff --git a/sources/talk/20190228 IRC vs IRL- How to run a good IRC meeting.md b/sources/talk/20190228 IRC vs IRL- How to run a good IRC meeting.md
new file mode 100644
index 0000000000..99f0c5c465
--- /dev/null
+++ b/sources/talk/20190228 IRC vs IRL- How to run a good IRC meeting.md
@@ -0,0 +1,56 @@
+[#]: collector: (lujun9972)
+[#]: translator: (lujun9972)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (IRC vs IRL: How to run a good IRC meeting)
+[#]: via: (https://opensource.com/article/19/2/irc-vs-irl-meetings)
+[#]: author: (Ben Cotton https://opensource.com/users/bcotton)
+
+IRC vs IRL: How to run a good IRC meeting
+======
+Internet Relay Chat meetings can be a great way to move a project forward if you follow these best practices.
+
+
+There's an art to running a meeting in any format. Many people have learned to run in-person or telephone meetings, but [Internet Relay Chat][1] (IRC) meetings have unique characteristics that differ from "in real life" (IRL) meetings. This article will share the advantages and disadvantages of the IRC format as well as tips that will help you lead IRC meetings more effectively.
+
+Why IRC? Despite the wealth of real-time chat options available today, [IRC remains a cornerstone of open source projects][2]. If your project uses another communication method, don't worry. Most of this advice works for any synchronous text chat mechanism, perhaps with a few tweaks here and there.
+
+### Challenges of IRC meetings
+
+IRC meetings pose certain challenges compared to in-person meetings. You know that lag between when one person finishes talking and the next one begins? It's worse in IRC because people have to type what they're thinking. This is slower than talking and—unlike with talking—you can't tell when someone else is trying to compose a message. Moderators must remember to insert long pauses when asking for responses or moving to the next topic. And someone who wants to speak up should insert a brief message (e.g., a period) to let the moderator know.
+
+IRC meetings also lack the metadata you get from other methods. You can't read facial expressions or tone of voice in text. This means you have to be careful with your word choice and phrasing.
+
+And IRC meetings make it really easy to get distracted. At least when someone is looking at funny cat GIFs during an in-person meeting, you'll see them smile and hear them laugh at inopportune times. In IRC, unless they accidentally paste the wrong text, there's no peer pressure even to pretend to pay attention. With IRC, you can even be in multiple meetings at once. I've done this, but it's dangerous if you need to be an active participant.
+
+### Benefits of IRC meetings
+
+IRC meetings have some unique advantages, too. IRC is a very resource-light medium. It doesn't tax bandwidth or CPU. This lowers the barrier for participation, which is advantageous for both the underprivileged and people who are on the road. For volunteer contributors, it means they may be able to participate during their workday. And it means participants don't need to find a quiet space where they can talk without bothering those around them.
+
+With a meeting bot, IRC can produce meeting minutes instantly. In Fedora, we use Zodbot, an instance of Debian's [Meetbot][3], to log meetings and provide interaction. When a meeting ends, the minutes and full logs are immediately available to the community. This can reduce the administrative overhead of running the meeting.
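+
+To give a feel for how that looks in practice, a chaired meeting is driven by a handful of in-channel commands. The command names below come from Debian's MeetBot documentation; the meeting content is invented for illustration, and your bot may differ slightly:
+
+```
+#startmeeting Infrastructure weekly sync
+#topic Review of open tickets
+#info ticket 1234 is blocked waiting on an upstream review
+#action bcotton to follow up with the upstream maintainer
+#agreed defer the mirror migration until next week
+#endmeeting
+```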
+
+### It's like a normal meeting, but different
+
+Conducting a meeting via IRC or other text-based medium means thinking about the meeting in a slightly different way. Although it lacks some of the benefits of higher-bandwidth modes of communication, it has advantages, too. Running an IRC meeting provides the opportunity to develop discipline that can help you run any type of meeting.
+
+Like any meeting, IRC meetings are best when there's a defined agenda and purpose. A good meeting moderator knows when to let the conversation follow twists and turns and when it's time to reel it back in. There's no hard and fast rule here—it's very much an art. But IRC offers an advantage in this regard. By setting the channel topic to the meeting's current topic, people have a visible reminder of what they should be talking about.
+
+If your project doesn't already conduct synchronous meetings, you should give it some thought. For projects with a diverse set of time zones, finding a mutually agreeable time to hold a meeting is hard. You can't rely on meetings as your only source of coordination. But they can be a valuable part of how your project works.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/irc-vs-irl-meetings
+
+作者:[Ben Cotton][a]
+选题:[lujun9972][b]
+译者:[lujun9972](https://github.com/lujun9972)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bcotton
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
+[2]: https://opensource.com/article/16/6/getting-started-irc
+[3]: https://wiki.debian.org/MeetBot
diff --git a/sources/talk/20190228 Why CLAs aren-t good for open source.md b/sources/talk/20190228 Why CLAs aren-t good for open source.md
new file mode 100644
index 0000000000..ca39619762
--- /dev/null
+++ b/sources/talk/20190228 Why CLAs aren-t good for open source.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why CLAs aren't good for open source)
+[#]: via: (https://opensource.com/article/19/2/cla-problems)
+[#]: author: (Richard Fontana https://opensource.com/users/fontana)
+
+Why CLAs aren't good for open source
+======
+Few legal topics in open source are as controversial as contributor license agreements.
+
+
+Few legal topics in open source are as controversial as [contributor license agreements][1] (CLAs). Unless you count the special historical case of the [Fedora Project Contributor Agreement][2] (which I've always seen as an un-CLA), or, like [Karl Fogel][3], you classify the [DCO][4] as a [type of CLA][5], today Red Hat makes no use of CLAs for the projects it maintains.
+
+It wasn't always so. Red Hat's earliest projects followed the traditional practice I've called "inbound=outbound," in which contributions to a project are simply provided under the project's open source license with no execution of an external, non-FOSS contract required. But in the early 2000s, Red Hat began experimenting with the use of contributor agreements. Fedora started requiring contributors to sign a CLA based on the widely adapted [Apache ICLA][6], while a Free Software Foundation-derived copyright assignment agreement and a pair of bespoke CLAs were inherited from the Cygnus and JBoss acquisitions, respectively. We even took [a few steps][7] towards adopting an Apache-style CLA across the rapidly growing set of Red Hat-led projects.
+
+This came to an end, in large part because those of us on the Red Hat legal team heard and understood the concerns and objections raised by Red Hat engineers and the wider technical community. We went on to become de facto leaders of what some have called the anti-CLA movement, marked notably by our [opposition to Project Harmony][8] and our [efforts][9] to get OpenStack to replace its CLA with the DCO. (We [reluctantly][10] sign tolerable upstream project CLAs out of practical necessity.)
+
+### Why CLAs are problematic
+
+Our choice not to use CLAs is a reflection of our values as an authentic open source company with deep roots in the free software movement. Over the years, many in the open source community have explained why CLAs, and the very similar mechanism of copyright assignment, are a bad policy for open source.
+
+One reason is the red tape problem. Normally, open source development is characterized by frictionless contribution, which is enabled by inbound=outbound without imposition of further legal ceremony or process. This makes it relatively easy for new contributors to get involved in a project, allowing more effective growth of contributor communities and driving technical innovation upstream. Frictionless contribution is a key part of the advantage open source development holds over proprietary alternatives. But frictionless contribution is negated by CLAs. Having to sign an unusual legal agreement before a contribution can be accepted creates a bureaucratic hurdle that slows down development and discourages participation. This cost persists despite the growing use of automation by CLA-using projects.
+
+CLAs also give rise to an asymmetry of legal power among a project's participants, which also discourages the growth of strong contributor and user communities around a project. With Apache-style CLAs, the company or organization leading the project gets special rights that other contributors do not receive, while those other contributors must shoulder certain legal obligations (in addition to the red tape burden) from which the project leader is exempt. The problem of asymmetry is most severe in copyleft projects, but it is present even when the outbound license is permissive.
+
+When assessing the arguments for and against CLAs, bear in mind that today, as in the past, the vast majority of the open source code in any product originates in projects that follow the inbound=outbound practice. The use of CLAs by a relatively small number of projects causes collateral harm to all the others by signaling that, for some reason, open source licensing is insufficient to handle contributions flowing into a project.
+
+### The case for CLAs
+
+Since CLAs continue to be a minority practice and originate from outside open source community culture, I believe that CLA proponents should bear the burden of explaining why they are necessary or beneficial relative to their costs. I suspect that most companies using CLAs are merely emulating peer company behavior without critical examination. CLAs have an understandable, if superficial, appeal to risk-averse lawyers who are predisposed to favor greater formality, paper, and process regardless of the business costs. Still, some arguments in favor of CLAs are often advanced and deserve consideration.
+
+**Easy relicensing:** If administered appropriately, Apache-style CLAs give the project steward effectively unlimited power to sublicense contributions under terms of the steward's choice. This is sometimes seen as desirable because of the potential need to relicense a project under some other open source license. But the value of easy relicensing has been greatly exaggerated by pointing to a few historical cases involving major relicensing campaigns undertaken by projects with an unusually large number of past contributors (all of which were successful without the use of a CLA). There are benefits in relicensing being hard because it results in stable legal expectations around a project and encourages projects to consult their contributor communities before undertaking significant legal policy changes. In any case, most inbound=outbound open source projects never attempt to relicense during their lifetime, and for the small number that do, relicensing will be relatively painless because typically the number of past contributors to contact will not be large.
+
+**Provenance tracking:** It is sometimes claimed that CLAs enable a project to rigorously track the provenance of contributions, which purportedly has some legal benefit. It is unclear what is achieved by the use of CLAs in this regard that is not better handled through such non-CLA means as preserving Git commit history. And the DCO would seem to be much better suited to tracking contributions, given that it is normally used on a per-commit basis, while CLAs are signed once per contributor and are administratively separate from code contributions. Moreover, provenance tracking is often described as though it were a benefit for the public, yet I know of no case where a project provides transparent, ready public access to CLA acceptance records.
+
+**License revocation:** Some CLA advocates warn of the prospect that a contributor may someday attempt to revoke a past license grant. To the extent that the concern is about largely judgment-proof individual contributors with no corporate affiliation, it is not clear why an Apache-style CLA provides more meaningful protection against this outcome compared to the use of an open source license. And, as with so many of the legal risks raised in discussions of open source legal policy, this appears to be a phantom risk. I have heard of only a few purported attempts at license revocation over the years, all of which were resolved quickly when the contributor backed down in the face of community pressure.
+
+**Unauthorized employee contribution:** This is a special case of the license revocation issue and has recently become a point commonly raised by CLA advocates. When an employee contributes to an upstream project, normally the employer owns the copyrights and patents for which the project needs licenses, and only certain executives are authorized to grant such licenses. Suppose an employee contributed proprietary code to a project without approval from the employer, and the employer later discovers this and demands removal of the contribution or sues the project's users. This risk of unauthorized contributions is thought to be minimized by use of something like the [Apache CCLA][11] with its representations and signature requirement, coupled with some adequate review process to ascertain that the CCLA signer likely was authorized to sign (a step which I suspect is not meaningfully undertaken by most CLA-using companies).
+
+Based on common sense and common experience, I contend that in nearly all cases today, employee contributions are done with the actual or constructive knowledge and consent of the employer. If there were an atmosphere of high litigation risk surrounding open source software, perhaps this risk should be taken more seriously, but litigation arising out of open source projects remains remarkably uncommon.
+
+More to the point, I know of no case where an allegation of copyright or patent infringement against an inbound=outbound project, not stemming from an alleged open source license violation, would have been prevented by use of a CLA. Patent risk, in particular, is often cited by CLA proponents when pointing to the risk of unauthorized contributions, but the patent license grants in Apache-style CLAs are, by design, quite narrow in scope. Moreover, corporate contributions to an open source project will typically be few in number, small in size (and thus easily replaceable), and likely to be discarded as time goes on.
+
+### Alternatives
+
+If your company does not buy into the anti-CLA case and cannot get comfortable with the simple use of inbound=outbound, there are alternatives to resorting to an asymmetric and administratively burdensome Apache-style CLA requirement. The use of the DCO as a complement to inbound=outbound addresses at least some of the concerns of risk-averse CLA advocates. If you must use a true CLA, there is no need to use the Apache model (let alone a [monstrous derivative][10] of it). Consider the non-specification core of the [Eclipse Contributor Agreement][12]—essentially the DCO wrapped inside a CLA—or the Software Freedom Conservancy's [Selenium CLA][13], which merely ceremonializes an inbound=outbound contribution policy.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/cla-problems
+
+作者:[Richard Fontana][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/fontana
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/18/3/cla-vs-dco-whats-difference
+[2]: https://opensource.com/law/10/6/new-contributor-agreement-fedora
+[3]: https://www.red-bean.com/kfogel/
+[4]: https://developercertificate.org
+[5]: https://producingoss.com/en/contributor-agreements.html#developer-certificate-of-origin
+[6]: https://www.apache.org/licenses/icla.pdf
+[7]: https://www.freeipa.org/page/Why_CLA%3F
+[8]: https://opensource.com/law/11/7/trouble-harmony-part-1
+[9]: https://wiki.openstack.org/wiki/OpenStackAndItsCLA
+[10]: https://opensource.com/article/19/1/cla-proliferation
+[11]: https://www.apache.org/licenses/cla-corporate.txt
+[12]: https://www.eclipse.org/legal/ECA.php
+[13]: https://docs.google.com/forms/d/e/1FAIpQLSd2FsN12NzjCs450ZmJzkJNulmRC8r8l8NYwVW5KWNX7XDiUw/viewform?hl=en_US&formkey=dFFjXzBzM1VwekFlOWFWMjFFRjJMRFE6MQ#gid=0
diff --git a/sources/talk/20190301 What-s happening in the OpenStack community.md b/sources/talk/20190301 What-s happening in the OpenStack community.md
new file mode 100644
index 0000000000..28e3bf2fd3
--- /dev/null
+++ b/sources/talk/20190301 What-s happening in the OpenStack community.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What's happening in the OpenStack community?)
+[#]: via: (https://opensource.com/article/19/3/whats-happening-openstack)
+[#]: author: (Jonathan Bryce https://opensource.com/users/jonathan-bryce)
+
+What's happening in the OpenStack community?
+======
+
+In many ways, 2018 was a transformative year for the OpenStack Foundation (OSF).
+
+
+
+Since 2010, the OpenStack community has been building open source software to run cloud computing infrastructure. Initially, the focus was public and private clouds, but open infrastructure has been pulled into many new important use cases like telecoms, 5G, and manufacturing IoT.
+
+As OpenStack software matured and grew in scope to support new technologies like bare metal provisioning and container infrastructure, the community widened its thinking to embrace users who deploy and run the software in addition to the developers who build the software. Questions like, "What problems are users trying to solve?" "Which technologies are users trying to integrate?" and "What are the gaps?" began to drive the community's thinking and decision making.
+
+In response to those questions, the OSF reorganized its approach and created a new "open infrastructure" framework focused on use cases, including edge, container infrastructure, CI/CD, and private and hybrid cloud. And, for the first time, the OSF is hosting open source projects outside of the OpenStack project.
+
+Following are three highlights from the [OSF 2018 Annual Report][1]; I encourage you to read the entire report for more detailed information about what's new.
+
+### Pilot projects
+
+On the heels of launching [Kata Containers][2] in December 2017, the OSF launched three pilot projects in 2018—[Zuul][3], [StarlingX][4], and [Airship][5]—that help further our goals of taking our technology into additional relevant markets. Each project follows the tenets we consider key to the success of true open source, [the Four Opens][6]: open design, open collaboration, open development, and open source. While these efforts are still new, they have been extremely valuable in helping us learn how we should expand the scope of the OSF, as well as showing others the kind of approach we will take.
+
+While the OpenStack project remained at the core of the team's focus, pilot projects are helping expand usage of open infrastructure across markets and already benefiting the OpenStack community. This has attracted dozens of new developers to the open infrastructure community, which will ultimately benefit the OpenStack community and users.
+
+There is direct benefit from these contributors working upstream in OpenStack, such as through StarlingX, as well as indirect benefit from the relationships we've built with the Kubernetes community through the Kata Containers project. Airship is similarly bridging the gaps between the Kubernetes and OpenStack ecosystems. This shows users how the technologies work together. A rise in contributions to Zuul has provided the engine for OpenStack CI and keeps our development running smoothly.
+
+### Containers collaboration
+
+In addition to investing in new pilot projects, we continued efforts to work with key adjacent projects in 2018, and we made particularly good progress with Kubernetes. OSF staffer Chris Hoge helps lead the cloud provider special interest group, where he has helped standardize how Kubernetes deployments expect to run on top of various infrastructure. This has clarified OpenStack's place in the Kubernetes ecosystem and led to valuable integration points, like having OpenStack as part of the Kubernetes release testing process.
+
+Additionally, OpenStack Magnum was certified as a Kubernetes installer by the CNCF. Through the Kata Containers community, we have deepened these relationships into additional areas within the container ecosystem resulting in a number of companies getting involved for the first time.
+
+### Evolving events
+
+We knew heading into 2018 that the environment around our events was changing and we needed to respond. During the year, we held two successful project team gatherings (PTGs) in Dublin and Denver, reaching capacity for both events while also including new projects and OpenStack operators. We held OpenStack Summits in Vancouver and Berlin, both experiencing increases in attendance and project diversity since Sydney in 2017, with each Summit including more than 30 open source projects. Recognizing this broader audience and the OSF's evolving strategy, the OpenStack Summit was renamed the [Open Infrastructure Summit][7], beginning with the Denver event coming up in April.
+
+In 2018, we boosted investment in China, onboarding a China Community Manager based in Shanghai and hosting a strategy day in Beijing with 30+ attendees from Gold and Platinum Members in China. This effort will continue in 2019 as we host our first Summit in China: the [Open Infrastructure Summit Shanghai][8] in November.
+
+We also worked with the community in 2018 to define a new model for events to maximize participation while saving on travel and expenses for the individuals and companies who are increasingly stretched across multiple open source communities. We arrived at a plan that we will implement and iterate on in 2019 where we will collocate PTGs as standalone events adjacent to our Open Infrastructure Summits.
+
+### Looking ahead
+
+We've seen impressive progress, but the biggest accomplishment might be in establishing a framework for the future of the foundation itself. In 2018, we advanced the open infrastructure mission by establishing OSF as an effective place to collaborate for CI/CD, container infrastructure, and edge computing, in addition to the traditional public and private cloud use cases. The open infrastructure approach opened a lot of doors in 2018, from the initial release of software from each pilot project, to live 5G demos, to engagement with hyperscale public cloud providers.
+
+Ultimately, our value comes from the effectiveness of our communities and the software they produce. As 2019 unfolds, our community is excited to apply learnings from 2018 to the benefit of developers, users, and the commercial ecosystem across all our projects.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/whats-happening-openstack
+
+作者:[Jonathan Bryce][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jonathan-bryce
+[b]: https://github.com/lujun9972
+[1]: https://www.openstack.org/foundation/2018-openstack-foundation-annual-report
+[2]: https://katacontainers.io/
+[3]: https://zuul-ci.org/
+[4]: https://www.starlingx.io/
+[5]: https://www.airshipit.org/
+[6]: https://www.openstack.org/four-opens/
+[7]: https://www.openstack.org/summit/denver-2019/
+[8]: https://www.openstack.org/summit/shanghai-2019
diff --git a/sources/tech/20120205 Computer Laboratory - Raspberry Pi- Lesson 5 OK05.md b/sources/tech/20120205 Computer Laboratory - Raspberry Pi- Lesson 5 OK05.md
deleted file mode 100644
index 912d4c348c..0000000000
--- a/sources/tech/20120205 Computer Laboratory - Raspberry Pi- Lesson 5 OK05.md
+++ /dev/null
@@ -1,103 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (oska874)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 5 OK05)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/ok05.html)
-[#]: author: (Robert Mullins http://www.cl.cam.ac.uk/~rdm34)
-
-translating by ezio
-
-树莓派计算机实验室 课程 5 OK05
-======
-
-OK05 课程建立在 OK04 课程的基础上,使用更多的代码来闪烁并保存莫尔斯码的 SOS 序列(`...---...`)。这里假设你已经有了 [课程 4: OK04][1] 操作系统的代码作为基础。
-
-### 1 数据
-
-到目前为止,我们操作系统的全部内容都是要执行的指令。然而有时候,指令只是故事的一半,我们的操作系统可能还需要数据。
-
-> 一些早期的操作系统确实只允许特定文件中的特定类型的数据,但是这通常被认为太严格了。现代方法确实使程序变得复杂的多。
-
-通常,只是数据的值很重要。你可能经过训练,认为数据是指定类型,比如,一个文本文件包含文章,一个图像文件包含一幅图片,等等。说实话,这只是一个理想罢了。计算机上的全部数据都是二进制数字,重要的是我们选择用什么来解释这些数据。在这个例子中,我们会将一个闪灯序列作为数据保存下来。
-
-在 `main.s` 结束处复制下面的代码:
-
-```
-.section .data %定义 .data 段
-.align 2 %对齐
-pattern: %定义整形变量
-.int 0b11111111101010100010001000101010
-```
-
->`.align num` 保证下一行代码的地址是 `2^num` 的整数倍。
-
->`.int val` 输出数值 `val`。
-
-要区分数据和代码,我们将数据都放在 `.data` 区域。我已经将该区域包含在操作系统的内存布局图中,并选择将数据放到代码后面。将指令和数据分开保存的原因是,如果最后我们要在自己的操作系统上实现一些安全措施,就需要知道代码的哪些部分是可以执行的,而哪些部分是不行的。
-
-我在这里使用了两个新命令 `.align` 和 `.int`。`.align` 保证接下来的数据是按照 2 的乘方对齐的。在这个例子里,我使用 `.align 2`,意味着数据最终存放的地址是 `2^2=4` 的整数倍。这个操作是很重要的,因为我们用来读取内存的指令 `ldr` 要求内存地址是 4 的倍数。
-
-命令 `.int` 直接把它后面的常量复制到输出。这意味着 `11111111101010100010001000101010`(二进制数)会被存放到输出中,而标签 `pattern` 实际上就标记了这段数据。
-
-> 关于数据的一个挑战是寻找一个高效和有用的展示形式。这种保存一个开、关的时间单元的序列的方式,运行起来很容易,但是将很难编辑,因为摩尔斯的原理 `-` 或 `.` 丢失了。
-
-如我提到的,数据可以意味着你想要的任何东西。在这里我编码了摩尔斯码的 SOS 序列,对于不熟悉的人来说,就是 `...---...`。我用 0 表示一个时间单元的 LED 灭灯,用 1 表示一个时间单元的 LED 亮灯。这样,我们就可以编写代码来显示数据中的一个序列,而要显示不同的序列,只需要修改这段数据即可。下面是一个非常简单的例子,操作系统必须不断重复执行它,来解释并展示这段数据。
-
-复制下面几行到 `main.s` 中的标记 `loop$` 之前。
-
-```
-ptrn .req r4 %重命名 r4 为 ptrn
-ldr ptrn,=pattern %加载 pattern 的地址到 ptrn
-ldr ptrn,[ptrn] %加载地址 ptrn 所在内存的值
-seq .req r5 %重命名 r5 为 seq
-mov seq,#0 %seq 赋值为 0
-```
-
-这段代码将 `pattern` 的值加载到寄存器 `r4`,并将 0 加载到寄存器 `r5`。`r5` 将是我们的序列位置,这样我们就能追踪已经展示了 `pattern` 中的多少位。
-
-下面的代码会在且仅在 `pattern` 当前位置为 1 时,将一个非零值放入 `r1`。
-
-```
-mov r1,#1 %加载1到 r1
-lsl r1,seq %对r1 的值逻辑左移 seq 次
-and r1,ptrn %按位与
-```
-
-这段代码对你调用 `SetGpio` 很有用,它必须有一个非零值来关掉 LED,而一个0值会打开 LED。
-
-现在修改 `main.s` 中的全部代码,让每次循环根据当前的序列数设置 LED,等待 250000 微秒(或者其他合适的延时),然后增加序列数。当序列数到达 32 时,让它回到 0。看看你是否能实现这个功能;作为额外的挑战,也可以试着只用一条指令完成。
-
-### 2 当你玩得开心时,时间总是过得飞快
-
-现在你可以在树莓派上进行实验了。它应该闪烁出这样一个序列:3 个短脉冲、3 个长脉冲、再 3 个短脉冲。在一段延时之后,这个模式会不断重复。如果它不能正常工作,请查看我们的问题页。
-
-一旦它正常工作,恭喜你,你已经完成了 OK 系列教程。
-
-在这个系列中,我们学习了汇编代码、GPIO 控制器和系统定时器,学习了函数和 ABI,还学习了几个基础的操作系统原理,以及有关数据的知识。
-
-你现在已经准备好下面几个更高级的课程的某一个。
- * [Screen][2] 系列是接下来的,会教你如何通过汇编代码使用屏幕。
- * [Input][3] 系列教授你如何使用键盘和鼠标。
-
-到现在,你已经有了足够的知识来编写能以其它方式与 GPIO 交互的操作系统。如果你有任何机器人套件,你可能会想尝试编写一个通过 GPIO 管脚控制机器人的操作系统。
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/ok05.html
-
-作者:[Robert Mullins][a]
-选题:[lujun9972][b]
-译者:[ezio](https://github.com/oska874)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: http://www.cl.cam.ac.uk/~rdm34
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/ok04.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html
-[3]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 10 Input01.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 10 Input01.md
deleted file mode 100644
index beb7613b2f..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 10 Input01.md
+++ /dev/null
@@ -1,494 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 10 Input01)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 10 Input01
-======
-
-Welcome to the Input lesson series. In this series, you will learn how to receive inputs to the Raspberry Pi using the keyboard. We will start with just revealing the input, and then move to a more traditional text prompt.
-
-This first input lesson teaches some theory about drivers and linking, as well as about keyboards and ends up displaying text on the screen.
-
-### 1 Getting Started
-
-It is expected that you have completed the OK series, and it would be helpful to have completed the Screen series. Many of the files from that series will be called, without comment. If you do not have these files, or prefer to use a correct implementation, download the template for this lesson from the [Downloads][1] page. If you're using your own implementation, please remove everything after your call to SetGraphicsAddress.
-
-### 2 USB
-
-```
-The USB standard was designed to make simple hardware in exchange for complex software.
-```
-
-As you are no doubt aware, the Raspberry Pi model B has two USB ports, commonly used for connecting a mouse and keyboard. This was a very good design decision, USB is a very generic connector, and many different kinds of device use it. It's simple to build new devices for, simple to write device drivers for, and is highly extensible thanks to USB hubs. Could it get any better? Well, no, in fact for an Operating Systems developer this is our worst nightmare. The USB standard is huge. I really mean it this time, it is over 700 pages, before you've even thought about connecting a device.
-
-I spoke to a number of other hobbyist Operating Systems developers about this and they all say one thing: don't bother. "It will take too long to implement", "You won't be able to write a tutorial on it" and "It will be of little benefit". In many ways they are right, I'm not able to write a tutorial on the USB standard, as it would take weeks. I also can't teach how to write device drivers for all the different devices, so it is useless on its own. However, I can do the next best thing: Get a working USB driver, get a keyboard driver, and then teach how to use these in an Operating System. I set out searching for a free driver that would run in an operating system that doesn't even know what a file is yet, but I couldn't find one. They were all too high level. So, I attempted to write one. Everybody was right, this took weeks to do. However, I'm pleased to say I did get one that works with no external help from the Operating System, and can talk to a mouse and keyboard. It is by no means complete, efficient, or correct, but it does work. It has been written in C and the full source code can be found on the downloads page for those interested.
-
-So, this tutorial won't be a lesson on the USB standard (at all). Instead we'll look at how to work with other people's code.
-
-### 3 Linking
-
-```
-Linking allows us to make reusable code 'libraries' that anyone can use in their program.
-```
-
-Since we're about to incorporate external code into the Operating System, we need to talk about linking. Linking is a process which is applied to programs or Operating System to link in functions. What this means is that when a program is made, we don't necessarily code every function (almost certainly not in fact). Linking is what we do to make our program link to functions in other people's code. This has actually been going on all along in our Operating Systems, as the linker links together all of the different files, each of which is compiled separately.
-
-```
-Programs often just call libraries, which call other libraries and so on until eventually they call an Operating System library which we would write.
-```
-
-There are two types of linking: static and dynamic. Static linking is like what goes on when we make our Operating Systems. The linker finds all the addresses of the functions, and writes them into the code, before the program is finished. Dynamic linking is linking that occurs after the program is 'complete'. When it is loaded, the dynamic linker goes through the program and links any functions which are not in the program to libraries in the Operating System. This is one of the jobs our Operating System should eventually be capable of, but for now everything will be statically linked.
-
-The USB driver I have written is suitable for static linking. This means I give you the compiled code for each of my files, and then the linker finds functions in your code which are not defined in your code, and links them to functions in my code. On the [Downloads][1] page for this lesson is a makefile and my USB driver, which you will need to continue. Download them and replace the makefile in your code with this one, and also put the driver in the same folder as that makefile.
-
-### 4 Keyboards
-
-In order to get input into our Operating System, we need to understand at some level how keyboards actually work. Keyboards have two types of keys: Normal and Modifier keys. The normal keys are the letters, numbers, function keys, etc. They constitute almost every key on the keyboard. The modifiers are up to 8 special keys. These are left shift, right shift, left control, right control, left alt, right alt, left GUI and right GUI. The keyboard can detect any combination of the modifier keys being held, as well as up to 6 normal keys. Every time a key changes (i.e. is pushed or released), it reports this to the computer. Typically, keyboards also have three LEDs for Caps Lock, Num Lock and Scroll Lock, which are controlled by the computer, not the keyboard itself. Keyboards may have many more lights such as power, mute, etc.
-
-In order to help standardise USB keyboards, a table of values was produced, such that every keyboard key ever is given a unique number, as well as every conceivable LED. The table below lists the first 126 of values.
-
-Table 4.1 USB Keyboard Keys
-| Number | Description | Number | Description | Number | Description | Number | Description | |
-| ------ | ---------------- | ------- | ---------------------- | -------- | -------------- | --------------- | -------------------- | |
-| 4 | a and A | 5 | b and B | 6 | c and C | 7 | d and D | |
-| 8 | e and E | 9 | f and F | 10 | g and G | 11 | h and H | |
-| 12 | i and I | 13 | j and J | 14 | k and K | 15 | l and L | |
-| 16 | m and M | 17 | n and N | 18 | o and O | 19 | p and P | |
-| 20 | q and Q | 21 | r and R | 22 | s and S | 23 | t and T | |
-| 24 | u and U | 25 | v and V | 26 | w and W | 27 | x and X | |
-| 28 | y and Y | 29 | z and Z | 30 | 1 and ! | 31 | 2 and @ | |
-| 32 | 3 and # | 33 | 4 and $ | 34 | 5 and % | 35 | 6 and ^ | |
-| 36 | 7 and & | 37 | 8 and * | 38 | 9 and ( | 39 | 0 and ) | |
-| 40 | Return (Enter) | 41 | Escape | 42 | Delete (Backspace) | 43 | Tab | |
-| 44 | Spacebar | 45 | - and _ | 46 | = and + | 47 | [ and { | |
-| 48 | ] and } | 49 | \ and \| | 50 | # and ~ | 51 | ; and : | |
-| 52 | ' and " | 53 | ` and ~ | 54 | , and < | 55 | . and > | |
-| 56 | / and ? | 57 | Caps Lock | 58 | F1 | 59 | F2 | |
-| 60 | F3 | 61 | F4 | 62 | F5 | 63 | F6 | |
-| 64 | F7 | 65 | F8 | 66 | F9 | 67 | F10 | |
-| 68 | F11 | 69 | F12 | 70 | Print Screen | 71 | Scroll Lock | |
-| 72 | Pause | 73 | Insert | 74 | Home | 75 | Page Up | |
-| 76 | Delete forward | 77 | End | 78 | Page Down | 79 | Right Arrow | |
-| 80 | Left Arrow | 81 | Down Arrow | 82 | Up Arrow | 83 | Num Lock | |
-| 84 | Keypad / | 85 | Keypad * | 86 | Keypad - | 87 | Keypad + | |
-| 88 | Keypad Enter | 89 | Keypad 1 and End | 90 | Keypad 2 and Down Arrow | 91 | Keypad 3 and Page Down | |
-| 92 | Keypad 4 and Left Arrow | 93 | Keypad 5 | 94 | Keypad 6 and Right Arrow | 95 | Keypad 7 and Home | |
-| 96 | Keypad 8 and Up Arrow | 97 | Keypad 9 and Page Up | 98 | Keypad 0 and Insert | 99 | Keypad . and Delete | |
-| 100 | \ and \| | 101 | Application | 102 | Power | 103 | Keypad = | |
-| 104 | F13 | 105 | F14 | 106 | F15 | 107 | F16 | |
-| 108 | F17 | 109 | F18 | 110 | F19 | 111 | F20 | |
-| 112 | F21 | 113 | F22 | 114 | F23 | 115 | F24 | |
-| 116 | Execute | 117 | Help | 118 | Menu | 119 | Select | |
-| 120 | Stop | 121 | Again | 122 | Undo | 123 | Cut | |
-| 124 | Copy | 125 | Paste | 126 | Find | 127 | Mute | |
-| 128 | Volume Up | 129 | Volume Down | | | | | |
-
-The full list can be found in section 10, page 53 of [HID Usage Tables 1.12][2].
-
-### 5 The Nut Behind the Wheel
-
-```
-These summaries and the code they describe form an API - Application Programming Interface.
-```
-
-Normally, when you work with someone else's code, they provide a summary of their methods, what they do and roughly how they work, as well as how they can go wrong. Here is a table of the relevant instructions required to use my USB driver.
-
-Table 5.1 Keyboard related functions in CSUD
-| Function | Arguments | Returns | Description |
-| ----------------------- | ----------------------- | ----------------------- | ----------------------- |
-| UsbInitialise | None | r0 is result code | This method is the all-in-one method that loads the USB driver, enumerates all devices and attempts to communicate with them. This method generally takes about a second to execute, though with a few USB hubs plugged in this can be significantly longer. After this method is completed methods in the keyboard driver become available, regardless of whether or not a keyboard is indeed inserted. Result code explained below. |
-| UsbCheckForChange | None | None | Essentially provides the same effect as UsbInitialise, but does not provide the same one time initialisation. This method checks every port on every connected hub recursively, and adds new devices if they have been added. This should be very quick if there are no changes, but can take up to a few seconds if a hub with many devices is attached. |
-| KeyboardCount | None | r0 is count | Returns the number of keyboards currently connected and detected. UsbCheckForChange may update this. Up to 4 keyboards are supported by default. Up to this many keyboards may be accessed through this driver. |
-| KeyboardGetAddress | r0 is index | r0 is address | Retrieves the address of a given keyboard. All other functions take a keyboard address in order to know which keyboard to access. Thus, to communicate with a keyboard, first check the count, then retrieve the address, then use other methods. Note, the order of keyboards that this method returns may change after calls to UsbCheckForChange. |
-| KeyboardPoll | r0 is address | r0 is result code | Reads in the current key state from the keyboard. This operates via polling the device directly, contrary to the best practice. This means that if this method is not called frequently enough, a key press could be missed. All reading methods simply return the value as of the last poll. |
-| KeyboardGetModifiers | r0 is address | r0 is modifier state | Retrieves the status of the modifier keys as of the last poll. These are the shift, alt, control and GUI keys on both sides. This is returned as a bit field, such that a 1 in bit 0 means left control is held, bit 1 means left shift, bit 2 means left alt, bit 3 means left GUI and bits 4 to 7 mean the right versions of those previous. If there is a problem r0 contains 0. |
-| KeyboardGetKeyDownCount | r0 is address | r0 is count | Retrieves the number of keys currently held down on the keyboard. This excludes modifier keys. Normally, this cannot go above 6. If there is an error this method returns 0. |
-| KeyboardGetKeyDown | r0 is address, r1 is key number | r0 is scan code | Retrieves the scan code (see Table 4.1) of a particular held down key. Normally, to work out which keys are down, call KeyboardGetKeyDownCount and then call KeyboardGetKeyDown up to that many times with increasing values of r1 to determine which keys are down. Returns 0 if there is a problem. It is safe (but not recommended) to call this method without calling KeyboardGetKeyDownCount and interpret 0s as keys not held. Note, the order of scan codes can change randomly (some keyboards sort numerically, some sort temporally, no guarantees are made). |
-| KeyboardGetKeyIsDown | r0 is address, r1 is scan code | r0 is status | Alternative to KeyboardGetKeyDown, checks if a particular scan code is among the held down keys. Returns 0 if not, or a non-zero value if so. Faster when detecting particular scan codes (e.g. looking for ctrl+c). On error, returns 0. |
-| KeyboardGetLedSupport | r0 is address | r0 is LEDs | Checks which LEDs a particular keyboard supports. Bit 0 being 1 represents Number Lock, bit 1 represents Caps Lock, bit 2 represents Scroll Lock, bit 3 represents Compose, bit 4 represents Kana, bit 5 represents Power, bit 6 represents Mute and bit 7 represents Compose. As per the USB standard, none of these LEDs update automatically (e.g. Caps Lock must be set manually when the Caps Lock scan code is detected). |
-| KeyboardSetLeds | r0 is address, r1 is LEDs | r0 is result code | Attempts to turn on/off the specified LEDs on the keyboard. See below for result code values. See KeyboardGetLedSupport for LEDs' values. |
-
-```
-Result codes are an easy way to handle errors, but often more elegant solutions exist in higher level code.
-```
-
-Several methods return 'result codes'. These are commonplace in C code, and are just numbers which represent what happened in a method call. By convention, 0 always indicates success. The following result codes are used by this driver.
-
-Table 5.2 - CSUD Result Codes
-| Code | Description |
-| ---- | ----------------------------------------------------------------------- |
-| 0 | Method completed successfully. |
-| -2 | Argument: A method was called with an invalid argument. |
-| -4 | Device: The device did not respond correctly to the request. |
-| -5 | Incompatible: The driver is not compatible with this request or device. |
-| -6 | Compiler: The driver was compiled incorrectly, and is broken. |
-| -7 | Memory: The driver ran out of memory. |
-| -8 | Timeout: The device did not respond in the expected time. |
-| -9 | Disconnect: The device requested has disconnected, and cannot be used. |
-
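-As a minimal, hedged sketch of how a caller might react to these codes (the error label below is hypothetical, not part of the driver), you could check the result of UsbInitialise before using any keyboard methods:
-
-```
-bl UsbInitialise
-teq r0,#0            @ 0 means the driver initialised successfully
-bne usbFailed$       @ hypothetical label: report or handle the error however you wish
-```
-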
-The general usage of the driver is as follows:
-
- 1. Call UsbInitialise
- 2. Call UsbCheckForChange
- 3. Call KeyboardCount
- 4. If this is 0, go to 2.
- 5. For each keyboard you support:
- 1. Call KeyboardGetAddress
-     2. Call KeyboardGetKeyDownCount
- 3. For each key down:
- 1. Check whether or not it has just been pushed
- 2. Store that the key is down
- 4. For each key stored:
- 1. Check whether or not key is released
- 2. Remove key if released
- 6. Perform actions based on keys pushed/released
- 7. Go to 2.
-
-
-
-Ultimately, you may do whatever you wish to with the keyboard, and these methods should allow you to access all of its functionality. Over the next 2 lessons, we shall look at completing the input side of a text terminal, similar to most command line computers, and interpreting the commands. In order to do this, we're going to need to have keyboard inputs in a more useful form. You may notice that my driver is (deliberately) unhelpful, because it doesn't have methods to deduce whether or not a key has just been pushed down or released; it only has methods about what is currently held down. This means we'll need to write such methods ourselves.
-
-### 6 Updates Available
-
-Repeatedly checking for updates is called 'polling'. This is in contrast to interrupt driven IO, where the device sends a signal when data is ready.
-
-First of all, let's implement a method KeyboardUpdate which detects the first keyboard and uses its poll method to get the current input, as well as saving the last inputs for comparison. We can then use this data with other methods to translate scan codes to keys. The method should do precisely the following:
-
- 1. Retrieve a stored keyboard address (initially 0).
- 2. If this is not 0, go to 9.
- 3. Call UsbCheckForChange to detect new keyboards.
- 4. Call KeyboardCount to detect how many keyboards are present.
- 5. If this is 0 store the address as 0 and return; we can't do anything with no keyboard.
- 6. Call KeyboardGetAddress with parameter 0 to get the first keyboard's address.
- 7. Store this address.
- 8. If this is 0, return; there is some problem.
- 9. Call KeyboardGetKeyDown 6 times to get each key currently down and store them
- 10. Call KeyboardPoll
- 11. If the result is non-zero go to 3. There is some problem (such as disconnected keyboard).
-
-
-
-To store the values mentioned above, we will need the following values in the .data section.
-
-```
-.section .data
-.align 2
-KeyboardAddress:
-.int 0
-KeyboardOldDown:
-.rept 6
-.hword 0
-.endr
-```
-
-```
-.hword num inserts the half word constant num into the file directly.
-```
-
-```
-.rept num [commands] .endr copies the commands commands to the output num times.
-```
-
-Try to implement the method yourself. My implementation for this is as follows:
-
-1.
-```
-.section .text
-.globl KeyboardUpdate
-KeyboardUpdate:
-push {r4,r5,lr}
-
-kbd .req r4
-ldr r0,=KeyboardAddress
-ldr kbd,[r0]
-```
-We load in the keyboard address.
-2.
-```
-teq kbd,#0
-bne haveKeyboard$
-```
-If the address is non-zero, we have a keyboard. Calling UsbCheckForChange is slow, and so if everything works we avoid it.
-3.
-```
-getKeyboard$:
-bl UsbCheckForChange
-```
-If we don't have a keyboard, we have to check for new devices.
-4.
-```
-bl KeyboardCount
-```
-Now we see if a new keyboard has been added.
-5.
-```
-teq r0,#0
-ldreq r1,=KeyboardAddress
-streq r0,[r1]
-beq return$
-```
-There are no keyboards, so we have no keyboard address.
-6.
-```
-mov r0,#0
-bl KeyboardGetAddress
-```
-Let's just get the address of the first keyboard. You may want to allow more.
-7.
-```
-ldr r1,=KeyboardAddress
-str r0,[r1]
-```
-Store the keyboard's address.
-8.
-```
-teq r0,#0
-beq return$
-mov kbd,r0
-```
-If we have no address, there is nothing more to do.
-9.
-```
-haveKeyboard$:
-mov r5,#0
-
-saveKeys$:
- mov r0,kbd
- mov r1,r5
- bl KeyboardGetKeyDown
-
- ldr r1,=KeyboardOldDown
- add r1,r5,lsl #1
- strh r0,[r1]
- add r5,#1
- cmp r5,#6
- blt saveKeys$
-```
-At haveKeyboard$ we set the key index in r5 to 0, then loop through all the keys, storing them in KeyboardOldDown. If we ask for too many, KeyboardGetKeyDown returns 0, which is fine.
-
-10.
-```
-mov r0,kbd
-bl KeyboardPoll
-```
-Now we get the new keys.
-
-11.
-```
-teq r0,#0
-bne getKeyboard$
-
-return$:
-pop {r4,r5,pc}
-.unreq kbd
-```
-Finally we check if KeyboardPoll worked. If not, we probably disconnected.
-
-
-With our new KeyboardUpdate method, checking for inputs becomes as simple as calling this method at regular intervals, and it will even check for disconnections etc. This is a useful method to have, as our actual key processing may differ based on the situation, and so being able to get the current input in its raw form with one method call is generally applicable. The next method we ideally want is KeyboardGetChar, a method that simply returns the next key pressed as an ASCII character, or returns 0 if no key has just been pressed. This could be extended to support typing a key multiple times if it is held for a certain duration, and to support the 'lock' keys as well as modifiers.
-
-To build this method, it is useful to have a helper method, KeyWasDown, which simply returns 0 if a given scan code is not in the KeyboardOldDown values, and a non-zero value otherwise. Have a go at implementing this yourself. As always, a solution can be found on the downloads page.
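-
-For reference, here is one possible sketch (not necessarily the same as the downloads page solution), assuming the KeyboardOldDown layout defined in the .data section above: it scans the 6 stored half words and reports whether the scan code in r0 is among them.
-
-```
-.globl KeyWasDown
-KeyWasDown:
-ldr r1,=KeyboardOldDown
-mov r2,#0
-keyWasDownLoop$:
-ldrh r3,[r1]            @ load one stored scan code (half word)
-teq r3,r0               @ is it the one we were asked about?
-moveq r0,#1             @ yes: return non-zero
-moveq pc,lr
-add r1,#2               @ move to the next stored scan code
-add r2,#1
-cmp r2,#6               @ only 6 keys are stored
-blt keyWasDownLoop$
-mov r0,#0               @ not found: the key was not down at the last poll
-mov pc,lr
-```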
-
-### 7 Look Up Tables
-
-```
-In many areas of programming, the larger the program, the faster it is. Look up tables are large, but are very fast. Some problems can be solved by a mixture of look up tables and normal functions.
-```
-
-The KeyboardGetChar method could be quite complex if we write it poorly. There are hundreds of scan codes, each with different effects depending on the presence or absence of the shift key or other modifiers. Not all of the keys can be translated to a character. For some characters, multiple keys can produce the same character. A useful trick in situations with such vast arrays of possibilities is look up tables. A look up table, much like in the physical sense, is a table of values and their results. For some limited functions, the simplest way to deduce the answer is to precompute every answer, and just return the correct one by retrieving it. In this case, we could build up a sequence of values in memory such that the nth value into the sequence is the ASCII character code for the scan code n. This means our method would simply have to detect if a key was pressed, and then retrieve its value from the table. Further, we could have a separate table for the values when shift is held, so that the shift key simply changes which table we're working with.
-
-After the .section .data command, copy the following tables:
-
-```
-.align 3
-KeysNormal:
- .byte 0x0, 0x0, 0x0, 0x0, 'a', 'b', 'c', 'd'
- .byte 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l'
- .byte 'm', 'n', 'o', 'p', 'q', 'r', 's', 't'
- .byte 'u', 'v', 'w', 'x', 'y', 'z', '1', '2'
- .byte '3', '4', '5', '6', '7', '8', '9', '0'
- .byte '\n', 0x0, '\b', '\t', ' ', '-', '=', '['
-  .byte ']', '\\', '#', ';', '\'', '`', ',', '.'
- .byte '/', 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, '/', '*', '-', '+'
- .byte '\n', '1', '2', '3', '4', '5', '6', '7'
-  .byte '8', '9', '0', '.', '\\', 0x0, 0x0, '='
-
-.align 3
-KeysShift:
- .byte 0x0, 0x0, 0x0, 0x0, 'A', 'B', 'C', 'D'
- .byte 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L'
- .byte 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T'
- .byte 'U', 'V', 'W', 'X', 'Y', 'Z', '!', '"'
- .byte '£', '$', '%', '^', '&', '*', '(', ')'
- .byte '\n', 0x0, '\b', '\t', ' ', '_', '+', '{'
- .byte '}', '|', '~', ':', '@', '¬', '<', '>'
- .byte '?', 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0
- .byte 0x0, 0x0, 0x0, 0x0, '/', '*', '-', '+'
- .byte '\n', '1', '2', '3', '4', '5', '6', '7'
- .byte '8', '9', '0', '.', '|', 0x0, 0x0, '='
-```
-
-```
-.byte num inserts the byte constant num into the file directly.
-```
-
-```
-Most assemblers and compilers recognise escape sequences; character sequences such as \t which insert special characters instead.
-```
-
-These tables directly map the first 104 scan codes onto ASCII characters, as a table of bytes. We also have a separate table describing the effects of the shift key on those scan codes. I've used the ASCII null character (0) for all keys without direct mappings in ASCII (such as the function keys). Backspace is mapped to the ASCII backspace character (8, denoted \b), enter is mapped to the ASCII new line character (10, denoted \n) and tab is mapped to the ASCII horizontal tab character (9, denoted \t).
-
-The KeyboardGetChar method will need to do the following:
-
- 1. Check if KeyboardAddress is 0. If so, return 0.
- 2. Call KeyboardGetKeyDown up to 6 times. Each time:
- 1. If key is 0, exit loop.
- 2. Call KeyWasDown. If it was, go to the next key.
- 3. If the scan code is more than 103, go to the next key.
- 4. Call KeyboardGetModifiers
- 5. If shift is held, load the address of KeysShift. Otherwise load KeysNormal.
- 6. Read the ASCII value from the table.
- 7. If it is 0, go to the next key otherwise return this ASCII code and exit.
- 3. Return 0.
-
-
-
-Try to implement this yourself. My implementation is presented below:
-
-1.
-```
-.globl KeyboardGetChar
-KeyboardGetChar:
-ldr r0,=KeyboardAddress
-ldr r1,[r0]
-teq r1,#0
-moveq r0,#0
-moveq pc,lr
-```
-Simple check to see if we have a keyboard.
-
-2.
-```
-push {r4,r5,r6,lr}
-kbd .req r4
-key .req r6
-mov r4,r1
-mov r5,#0
-keyLoop$:
- mov r0,kbd
- mov r1,r5
- bl KeyboardGetKeyDown
-```
-r5 will hold the index of the key, r4 holds the keyboard address.
-
- 1.
- ```
- teq r0,#0
- beq keyLoopBreak$
- ```
- If a scan code is 0, it either means there is an error, or there are no more keys.
-
- 2.
- ```
- mov key,r0
- bl KeyWasDown
- teq r0,#0
- bne keyLoopContinue$
- ```
-    If a key was already down it is uninteresting; we only want to know about key presses.
-
- 3.
- ```
- cmp key,#104
- bge keyLoopContinue$
- ```
-    If a key has a scan code of 104 or above, it will be outside our table, and so is not relevant.
-
- 4.
- ```
- mov r0,kbd
- bl KeyboardGetModifiers
- ```
- We need to know about the modifier keys in order to deduce the character.
-
- 5.
- ```
- tst r0,#0b00100010
- ldreq r0,=KeysNormal
- ldrne r0,=KeysShift
- ```
- We detect both a left and right shift key as changing the characters to their shift variants. Remember, a tst instruction computes the logical AND and then compares it to zero, so it will be equal to 0 if and only if both of the shift bits are zero.
-
- 6.
- ```
- ldrb r0,[r0,key]
- ```
- Now we can load in the key from the look up table.
-
- 7.
- ```
- teq r0,#0
- bne keyboardGetCharReturn$
- keyLoopContinue$:
- add r5,#1
- cmp r5,#6
- blt keyLoop$
- ```
- If the look up code contains a zero, we must continue. To continue, we increment the index, and check if we've reached 6.
-
-3.
-```
-keyLoopBreak$:
-mov r0,#0
-keyboardGetCharReturn$:
-pop {r4,r5,r6,pc}
-.unreq kbd
-.unreq key
-```
-We return our key here. If we reach keyLoopBreak$, then we know there is no key held, so we return 0.
-
-
-
-
-### 8 Notepad OS
-
-Now we have our KeyboardGetChar method, we can make an operating system that just types what the user writes to the screen. For simplicity we'll ignore all the unusual keys. In 'main.s' delete all code after bl SetGraphicsAddress. Call UsbInitialise, set r4 and r5 to 0, then loop forever over the following commands:
-
- 1. Call KeyboardUpdate
- 2. Call KeyboardGetChar
-  3. If it is 0, go to 1
- 4. Copy r4 and r5 to r1 and r2 then call DrawCharacter
- 5. Add r0 to r4
- 6. If r4 is 1024, add r1 to r5 and set r4 to 0
- 7. If r5 is 768 set r5 to 0
- 8. Go to 1
-
-
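-A minimal sketch of that loop is given below. It assumes, as in the Screen series, that DrawCharacter returns the width of the character drawn in r0 and its height in r1; your own version may differ, and the label name is just illustrative.
-
-```
-bl UsbInitialise
-mov r4,#0               @ x position on the screen
-mov r5,#0               @ y position on the screen
-writeLoop$:
-bl KeyboardUpdate
-bl KeyboardGetChar
-teq r0,#0               @ 0 means no new key press
-beq writeLoop$
-
-mov r1,r4
-mov r2,r5
-bl DrawCharacter        @ draws the character in r0 at (r1,r2)
-
-add r4,r0               @ advance x by the character width
-teq r4,#1024
-addeq r5,r1             @ at the end of a row, move down by the character height
-moveq r4,#0
-teq r5,#768
-moveq r5,#0             @ wrap back to the top of the screen
-b writeLoop$
-```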
-
-Now compile this and test it on the Pi. You should almost immediately be able to start typing text to the screen when the Pi starts. If not, please see our troubleshooting page.
-
-When it works, congratulations, you've achieved an interface with the computer. You should now begin to realise that you've almost got a primitive operating system together. You can now interface with the computer, issuing it commands, and receive feedback on screen. In the next tutorial, [Input02][3] we will look at producing a full text terminal, in which the user types commands, and the computer executes them.
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/downloads.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/downloads/hut1_12v2.pdf
-[3]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
deleted file mode 100644
index 9040162a97..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md
+++ /dev/null
@@ -1,911 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 11 Input02)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 11 Input02
-======
-
-The Input02 lesson builds on Input01 by constructing a simple command line interface where the user can type commands and the computer interprets and displays them. It is assumed you have the code for the [Lesson 11: Input01][1] operating system as a basis.
-
-### 1 Terminal 1
-
-```
-In the early days of computing, there would usually be one large computer in a building, and many 'terminals' which sent commands to it. The computer would take it in turns to execute different incoming commands.
-```
-
-Almost every operating system starts life out as a text terminal. This is typically a black screen with white writing, where you type commands on the keyboard for the computer to execute, and it explains how you've mistyped them, or very occasionally, does what you want. This approach has two main advantages: it provides a simple, robust control mechanism for the computer using only a keyboard and monitor, and it is done by almost every operating system, so is widely understood by system administrators.
-
-Let's analyse what we want to do precisely:
-
- 1. Computer turns on, displays some sort of welcome message
-  2. Computer indicates it's ready for input
- 3. User types a command, with parameters, on the keyboard
- 4. User presses return or enter to commit the command
- 5. Computer interprets command and performs actions if command is acceptable
- 6. Computer displays messages to indicate if command was successful, and also what happened
- 7. Loop back to 2
-
-
-
-One defining feature of such terminals is that they are unified for both input and output. The same screen is used to enter inputs as is used to print outputs. This means it is useful to build an abstraction of a character based display. In a character based display, the smallest unit is a character, not a pixel. The screen is divided into a fixed number of characters which have varying colours. We can build this on top of our existing screen code, by storing the characters and their colours, and then using the DrawCharacter method to push them to the screen. Once we have a character based display, drawing text becomes a matter of drawing a line of characters.
-
-In a new file called terminal.s copy the following code:
-```
-.section .data
-.align 4
-terminalStart:
-.int terminalBuffer
-terminalStop:
-.int terminalBuffer
-terminalView:
-.int terminalBuffer
-terminalColour:
-.byte 0xf
-.align 8
-terminalBuffer:
-.rept 128*128
-.byte 0x7f
-.byte 0x0
-.endr
-terminalScreen:
-.rept 1024/8 * 768/16
-.byte 0x7f
-.byte 0x0
-.endr
-```
-This sets up the data we need for the text terminal. We have two main storages: terminalBuffer and terminalScreen. terminalBuffer is storage for all of the text we have displayed. It stores up to 128 lines of text (each containing 128 characters). Each character consists of an ASCII character code and a colour, all of which are initially set to 0x7f (ASCII delete) and 0 (black on a black background). terminalScreen stores the characters that are currently displayed on the screen. It is 128 by 48 characters, similarly initialised. You may think that we only need this terminalScreen, not the terminalBuffer, but storing the buffer has 2 main advantages:
-
- 1. We can easily see which characters are different, so we only have to draw those.
- 2. We can 'scroll' back through the terminal's history because it is stored (to a limit).
-
-
-
-You should always try to design systems that do the minimum amount of work, as they run much faster for things which don't often change.
-
-The differing trick is really common on low power Operating Systems. Drawing the screen is a slow operation, and so we only want to draw things that we absolutely have to. In this system, we can freely alter the terminalBuffer, and then call a method which copies the bits that change to the screen. This means we don't have to draw each character as we go along, which may save time in the long run on very long sections of text that span many lines.
-
-The other values in the .data section are as follows:
-
- * terminalStart
- The first character which has been written in terminalBuffer.
- * terminalStop
- The last character which has been written in terminalBuffer.
- * terminalView
- The first character on the screen at present. We can use this to scroll the screen.
-  * terminalColour
- The colour to draw new characters with.
-
-
-
-```
-Circular buffers are an example of a **data structure**. These are just ideas we have for organising data, that we sometimes implement in software.
-```
-
-![Diagram showing hello world being inserted into a circular buffer of size 5.][2]
-The reason why terminalStart needs to be stored is because terminalBuffer should be a circular buffer. This means that when the buffer is completely full, the end 'wraps' round to the start, and so the character after the very last one is the first one. Thus, we need to advance terminalStart so we know that we've done this. When working with the buffer this can easily be implemented by checking if the index goes beyond the end of the buffer, and setting it back to the beginning if it does. Circular buffers are a common and clever way of storing a lot of data, where only the most recent data is important. It allows us to keep writing indefinitely, while always being sure there is a certain amount of recent data available. They're often used in signal processing or compression algorithms. In this case, it allows us to store a 128 line history of the terminal, without any penalties for writing over 128 lines. If we didn't have this, we would have to copy 127 lines back a line every time we went beyond the 128th line, wasting valuable time.
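-
-As a small sketch of the idea (mirroring the checks used later in TerminalDisplay and Print), advancing a position through terminalBuffer and wrapping it looks something like this, where ptr and taddr stand for registers (via .req aliases) holding the current position and the end-of-buffer address:
-
-```
-add ptr,#2              @ move to the next character (2 bytes: character + colour)
-teq ptr,taddr           @ have we reached the end of terminalBuffer?
-subeq ptr,#128*128*2    @ if so, wrap back round to the start
-```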
-
-I've mentioned the terminalColour here a few times. You can implement this however you wish; however, there is something of a standard on text terminals to have only 16 colours for foreground, and 16 colours for background (meaning there are 16 × 16 = 256 combinations). The colours on a CGA terminal are defined as follows:
-
-Table 1.1 - CGA Colour Codes
-| Number | Colour (R, G, B) |
-| ------ | ------------------------|
-| 0 | Black (0, 0, 0) |
-| 1 | Blue (0, 0, ⅔) |
-| 2 | Green (0, ⅔, 0) |
-| 3 | Cyan (0, ⅔, ⅔) |
-| 4 | Red (⅔, 0, 0) |
-| 5 | Magenta (⅔, 0, ⅔) |
-| 6 | Brown (⅔, ⅓, 0) |
-| 7 | Light Grey (⅔, ⅔, ⅔) |
-| 8 | Grey (⅓, ⅓, ⅓) |
-| 9 | Light Blue (⅓, ⅓, 1) |
-| 10 | Light Green (⅓, 1, ⅓) |
-| 11 | Light Cyan (⅓, 1, 1) |
-| 12 | Light Red (1, ⅓, ⅓) |
-| 13 | Light Magenta (1, ⅓, 1) |
-| 14 | Yellow (1, 1, ⅓) |
-| 15 | White (1, 1, 1) |
-
-```
-Brown was used as the alternative (dark yellow) was unappealing and not useful.
-```
-
-We store the colour of each character by storing the fore colour in the low nibble of the colour byte, and the background colour in the high nibble. Apart from brown, all of these colours follow a pattern such that in binary, the top bit represents adding ⅓ to each component, and the other bits represent adding ⅔ to individual components. This makes it easy to convert to RGB colour values.
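-
-As a tiny sketch of that convention (anticipating how TerminalDisplay uses it later), splitting a stored colour byte into its two halves could look like this, where col, fore and back stand for registers:
-
-```
-and fore,col,#0xf       @ foreground colour is the low nibble
-lsr back,col,#4         @ background colour is the high nibble
-```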
-
-We need a method, TerminalColour, to read these 4 bit colour codes, and then call SetForeColour with the 16 bit equivalent. Try to implement this on your own. If you get stuck, or have not completed the Screen series, my implementation is given below:
-
-```
-.section .text
-TerminalColour:
-teq r0,#6
-ldreq r0,=0x02B5
-beq SetForeColour
-
-tst r0,#0b1000
-ldrne r1,=0x52AA
-moveq r1,#0
-tst r0,#0b0100
-addne r1,#0x15
-tst r0,#0b0010
-addne r1,#0x540
-tst r0,#0b0001
-addne r1,#0xA800
-mov r0,r1
-b SetForeColour
-```
-### 2 Showing the Text
-
-The first method we really need for our terminal is TerminalDisplay, one that copies the current data from terminalBuffer to terminalScreen and the actual screen. As mentioned, this method should do a minimal amount of work, because we need to be able to call it often. It should compare the text in terminalBuffer with that in terminalScreen, and copy it across if they're different. Remember, terminalBuffer is a circular buffer running, in this case, from terminalView to terminalStop or 128*48 characters, whichever comes sooner. If we hit terminalStop, we'll assume all characters after that point are 0x7f (ASCII delete), and have colour 0 (black on a black background).
-
-Let's look at what we have to do:
-
- 1. Load in terminalView, terminalStop and the address of terminalDisplay.
- 2. For each row:
- 1. For each column:
- 1. If view is not equal to stop, load the current character and colour from view
- 2. Otherwise load the character as 0x7f and the colour as 0
- 3. Load the current character from terminalDisplay
- 4. If the character and colour are equal, go to 10
- 5. Store the character and colour to terminalDisplay
- 6. Call TerminalColour with the background colour in r0
- 7. Call DrawCharacter with r0 = 0x7f (ASCII delete, a block), r1 = x, r2 = y
- 8. Call TerminalColour with the foreground colour in r0
- 9. Call DrawCharacter with r0 = character, r1 = x, r2 = y
- 10. Increment the position in terminalDisplay by 2
- 11. If view and stop are not equal, increment the view position by 2
- 12. If the view position is at the end of textBuffer, set it to the start
- 13. Increment the x co-ordinate by 8
- 2. Increment the y co-ordinate by 16
-
-
-
-Try to implement this yourself. If you get stuck, my solution is given below:
-
-1.
-```
-.globl TerminalDisplay
-TerminalDisplay:
-push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
-x .req r4
-y .req r5
-char .req r6
-col .req r7
-screen .req r8
-taddr .req r9
-view .req r10
-stop .req r11
-
-ldr taddr,=terminalStart
-ldr view,[taddr,#terminalView - terminalStart]
-ldr stop,[taddr,#terminalStop - terminalStart]
-add taddr,#terminalBuffer - terminalStart
-add taddr,#128*128*2
-mov screen,taddr
-```
-
-I go a little wild with variables here. I'm using taddr to store the location of the end of the terminalBuffer for ease.
-
-2.
-```
-mov y,#0
-yLoop$:
-```
-Start off the y loop.
-
- 1.
- ```
- mov x,#0
- xLoop$:
- ```
- Start off the x loop.
-
- 1.
- ```
- teq view,stop
- ldrneh char,[view]
- ```
- I load both the character and the colour into char simultaneously for ease.
-
- 2.
- ```
- moveq char,#0x7f
- ```
- This line complements the one above by acting as though a black delete character was read.
-
- 3.
- ```
- ldrh col,[screen]
- ```
- For simplicity I load both the character and colour into col simultaneously.
-
- 4.
- ```
- teq col,char
- beq xLoopContinue$
- ```
- Now we can check if anything has changed with a teq.
-
- 5.
- ```
- strh char,[screen]
- ```
- We can also easily save the current value.
-
- 6.
- ```
- lsr col,char,#8
- and char,#0x7f
- lsr r0,col,#4
- bl TerminalColour
- ```
- I split up char into the colour in col and the character in char with a bitshift and an and, then use a bitshift to get the background colour to call TerminalColour.
-
- 7.
- ```
- mov r0,#0x7f
- mov r1,x
- mov r2,y
- bl DrawCharacter
- ```
- Write out a delete character which is a coloured block.
-
- 8.
- ```
- and r0,col,#0xf
- bl TerminalColour
- ```
- Use an and to get the low nibble of col then call TerminalColour.
-
- 9.
- ```
- mov r0,char
- mov r1,x
- mov r2,y
- bl DrawCharacter
- ```
- Write out the character we're supposed to write.
-
- 10.
- ```
- xLoopContinue$:
- add screen,#2
- ```
- Increment the screen pointer.
-
- 11.
- ```
- teq view,stop
- addne view,#2
- ```
- Increment the view pointer if necessary.
-
- 12.
- ```
- teq view,taddr
- subeq view,#128*128*2
- ```
- It's easy to check for view going past the end of the buffer because the end of the buffer's address is stored in taddr.
-
- 13.
- ```
- add x,#8
- teq x,#1024
- bne xLoop$
- ```
- We increment x and then loop back if there are more characters to go.
-
- 2.
- ```
- add y,#16
- teq y,#768
- bne yLoop$
- ```
- We increment y and then loop back if there are more characters to go.
-
-```
-pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
-.unreq x
-.unreq y
-.unreq char
-.unreq col
-.unreq screen
-.unreq taddr
-.unreq view
-.unreq stop
-```
-Don't forget to clean up at the end!
-
-
-### 3 Printing Lines
-
-Now we have our TerminalDisplay method, which will automatically display the contents of terminalBuffer to terminalScreen, so theoretically we can draw text. However, we don't actually have any drawing routines that work on a character based display. A quick method that will come in handy first of all is TerminalClear, which completely clears the terminal. This can actually very easily be achieved with no loops. Try to deduce why the following method suffices:
-
-```
-.globl TerminalClear
-TerminalClear:
-ldr r0,=terminalStart
-add r1,r0,#terminalBuffer-terminalStart
-str r1,[r0]
-str r1,[r0,#terminalStop-terminalStart]
-str r1,[r0,#terminalView-terminalStart]
-mov pc,lr
-```
-
-Now we need to make a basic method for character based displays; the Print function. This takes in a string address in r0, and a length in r1, and simply writes it to the current location at the screen. There are a few special characters to be wary of, as well as special behaviour to ensure that terminalView is kept up to date. Let's analyse what it has to do:
-
- 1. Check if string length is 0, if so return
- 2. Load in terminalStop and terminalView
- 3. Deduce the x-coordinate of terminalStop
- 4. For each character:
- 1. Check if the character is a new line
- 2. If so, increment bufferStop to the end of the line storing a black on black delete character.
- 3. Otherwise, copy the character in the current terminalColour
- 4. Check if we're at the end of a line
- 5. If so, check if the number of characters between terminalView and terminalStop is more than one screen
- 6. If so, increment terminalView by one line
- 7. Check if terminalView is at the end of the buffer, replace it with the start if so
- 8. Check if terminalStop is at the end of the buffer, replace it with the start if so
- 9. Check if terminalStop equals terminalStart, increment terminalStart by one line if so
- 10. Check if terminalStart is at the end of the buffer, replace it with the start if so
- 5. Store back terminalStop and terminalView.
-
-
-
-See if you can implement this yourself. My solution is provided below:
-
-1.
-```
-.globl Print
-Print:
-teq r1,#0
-moveq pc,lr
-```
-This quick check at the beginning makes a call to Print with a string of length 0 almost instant.
-
-2.
-```
-push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
-bufferStart .req r4
-taddr .req r5
-x .req r6
-string .req r7
-length .req r8
-char .req r9
-bufferStop .req r10
-view .req r11
-
-mov string,r0
-mov length,r1
-
-ldr taddr,=terminalStart
-ldr bufferStop,[taddr,#terminalStop-terminalStart]
-ldr view,[taddr,#terminalView-terminalStart]
-ldr bufferStart,[taddr]
-add taddr,#terminalBuffer-terminalStart
-add taddr,#128*128*2
-```
-I do a lot of setup here. bufferStart contains terminalStart, bufferStop contains terminalStop, view contains terminalView, taddr is the address of the end of terminalBuffer.
-
-3.
-```
-and x,bufferStop,#0xfe
-lsr x,#1
-```
-As per usual, a sneaky alignment trick makes everything easier. Because of the alignment of terminalBuffer, the x-coordinate of any character address is simply the last 8 bits divided by 2.
-
- 4.
- 1.
- ```
- charLoop$:
- ldrb char,[string]
- and char,#0x7f
- teq char,#'\n'
- bne charNormal$
- ```
- We need to check for new lines.
-
- 2.
- ```
- mov r0,#0x7f
- clearLine$:
- strh r0,[bufferStop]
- add bufferStop,#2
- add x,#1
-    cmp x,#128
-    blt clearLine$
-
- b charLoopContinue$
- ```
- Loop until the end of the line, writing out 0x7f; a delete character in black on a black background.
-
- 3.
- ```
- charNormal$:
- strb char,[bufferStop]
- ldr r0,=terminalColour
- ldrb r0,[r0]
- strb r0,[bufferStop,#1]
- add bufferStop,#2
- add x,#1
- ```
- Store the current character in the string and the terminalColour to the end of the terminalBuffer and then increment it and x.
-
- 4.
- ```
- charLoopContinue$:
- cmp x,#128
- blt noScroll$
- ```
- Check if x is at the end of a line; 128.
-
- 5.
- ```
- mov x,#0
- subs r0,bufferStop,view
- addlt r0,#128*128*2
- cmp r0,#128*(768/16)*2
- ```
- Set x back to 0 and check if we're currently showing more than one screen. Remember, we're using a circular buffer, so if the difference between bufferStop and view is negative, we're actually wrapping around the buffer.
-
- 6.
- ```
- addge view,#128*2
- ```
-    Add one line's worth of bytes to the view address.
-
- 7.
- ```
- teq view,taddr
- subeq view,taddr,#128*128*2
- ```
- If the view address is at the end of the buffer we subtract the buffer length from it to move it back to the start. I set taddr to the address of the end of the buffer at the beginning.
-
- 8.
- ```
- noScroll$:
- teq bufferStop,taddr
- subeq bufferStop,taddr,#128*128*2
- ```
- If the stop address is at the end of the buffer we subtract the buffer length from it to move it back to the start. I set taddr to the address of the end of the buffer at the beginning.
-
- 9.
- ```
- teq bufferStop,bufferStart
- addeq bufferStart,#128*2
- ```
- Check if bufferStop equals bufferStart. If so, add one line to bufferStart.
-
- 10.
- ```
- teq bufferStart,taddr
- subeq bufferStart,taddr,#128*128*2
- ```
- If the start address is at the end of the buffer we subtract the buffer length from it to move it back to the start. I set taddr to the address of the end of the buffer at the beginning.
-
-```
-subs length,#1
-add string,#1
-bgt charLoop$
-```
-Loop until the string is done.
-
-5.
-```
-charLoopBreak$:
-sub taddr,#128*128*2
-sub taddr,#terminalBuffer-terminalStart
-str bufferStop,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-str bufferStart,[taddr]
-
-pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
-.unreq bufferStart
-.unreq taddr
-.unreq x
-.unreq string
-.unreq length
-.unreq char
-.unreq bufferStop
-.unreq view
-```
-Store back the variables and return.
-
-
-This method allows us to print arbitrary text to the screen. Throughout, I've been using the colour variable, but nowhere have we actually set it. Normally, terminals use special combinations of characters to change the colour. For example, ASCII Escape (0x1b) followed by a number 0 to f in hexadecimal could set the foreground colour to that CGA colour number. You can try implementing this yourself; my version is in the further examples section on the download page.
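-
-As a rough, hypothetical sketch of that idea (not the author's version), the following could be inserted near the top of Print's character loop, just after the character is loaded and masked. It consumes an escape character and the hexadecimal digit that follows it, updates the foreground nibble of terminalColour, and draws nothing:
-
-```
-teq char,#0x1b              @ is this an escape character?
-bne notEscape$
-subs length,#1              @ consume the digit that should follow
-ble charLoopBreak$          @ nothing follows the escape; stop printing
-add string,#1
-ldrb char,[string]          @ read the colour digit ('0'-'9' or 'a'-'f')
-cmp char,#'9'
-subls char,#'0'             @ '0'-'9' become 0-9
-subhi char,#'a'-10          @ 'a'-'f' become 10-15
-ldr r0,=terminalColour
-ldrb r1,[r0]
-and r1,#0xf0                @ keep the background nibble
-orr r1,char
-strb r1,[r0]                @ store the new foreground colour
-b charLoopContinue$         @ nothing is drawn for an escape sequence
-notEscape$:
-```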
-
-### 4 Standard Input
-
-```
-By convention, in many programming languages, every program has access to stdin and stdout, which are an input and an output stream linked to the terminal. This is still true for graphical programs, though many don't use it.
-```
-
-Now we have an output terminal that in theory can print out text and display it. That is only half the story however; we want input. We want to implement a method, ReadLine, which stores the next line of text a user types to a location given in r0, up to a maximum length given in r1, and returns the length of the string read in r0. The tricky thing is, the user annoyingly wants to see what they're typing as they type it, they want to use backspace to delete mistakes and they want to use return to submit commands. They probably even want a flashing underscore character to indicate the computer would like input! These perfectly reasonable requests make this method a real challenge. One way to achieve all of this is to store the text they type in memory somewhere along with its length, and then, after every character, move the terminalStop address back to where it started when ReadLine was called and call Print. This means we only have to be able to manipulate a string in memory, and then make use of our Print function.
-
-Let's have a look at what ReadLine will do:
-
- 1. If the maximum length is 0, return 0
- 2. Retrieve the current values of terminalStop and terminalView
- 3. If the maximum length is bigger than half the buffer size, set it to half the buffer size
- 4. Subtract one from maximum length to ensure it can store our flashing underscore or a null terminator
- 5. Write an underscore to the string
- 6. Write the stored terminalView and terminalStop addresses back to the memory
- 7. Call Print on the current string
- 8. Call TerminalDisplay
- 9. Call KeyboardUpdate
- 10. Call KeyboardGetChar
- 11. If it is a new line character go to 16
- 12. If it is a backspace character, subtract 1 from the length of the string (if it is > 0)
- 13. If it is an ordinary character, write it to the string (if the length < maximum length)
- 14. If the string ends in an underscore, write a space, otherwise write an underscore
- 15. Go to 6
- 16. Write a new line character to the end of the string
- 17. Call Print and TerminalDisplay
- 18. Replace the new line with a null terminator
- 19. Return the length of the string
-
-
-
-Convince yourself that this will work, and then try to implement it yourself. My implementation is given below:
-
-1.
-```
-.globl ReadLine
-ReadLine:
-teq r1,#0
-moveq r0,#0
-moveq pc,lr
-```
-Quick special handling for the zero case, which is otherwise difficult.
-
-2.
-```
-string .req r4
-maxLength .req r5
-input .req r6
-taddr .req r7
-length .req r8
-view .req r9
-
-push {r4,r5,r6,r7,r8,r9,lr}
-
-mov string,r0
-mov maxLength,r1
-ldr taddr,=terminalStart
-ldr input,[taddr,#terminalStop-terminalStart]
-ldr view,[taddr,#terminalView-terminalStart]
-mov length,#0
-```
-As per the general theme, I do a lot of initialisations early. input contains the value of terminalStop and view contains terminalView. Length starts at 0.
-
-3.
-```
-cmp maxLength,#128*64
-movhi maxLength,#128*64
-```
-We have to check for unusually large reads, as we can't process them beyond the size of the terminalBuffer (I suppose we CAN, but it would be very buggy, as terminalStart could move past the stored terminalStop).
-
-4.
-```
-sub maxLength,#1
-```
-Since the user wants a flashing cursor, and we ideally want to put a null terminator on this string, we need 1 spare character.
-
-5.
-```
-mov r0,#'_'
-strb r0,[string,length]
-```
-Write out the underscore to let the user know they can input.
-
-6.
-```
-readLoop$:
-str input,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-```
-Save the stored terminalStop and terminalView. This is important to reset the terminal after each call to Print, which changes these variables. Strictly speaking it can change terminalStart too, but this is irreversible.
-
-7.
-```
-mov r0,string
-mov r1,length
-add r1,#1
-bl Print
-```
-Write the current input. We add 1 to the length for the underscore.
-
-8.
-```
-bl TerminalDisplay
-```
-Copy the new text to the screen.
-
-9.
-```
-bl KeyboardUpdate
-```
-Fetch the latest keyboard input.
-
-10.
-```
-bl KeyboardGetChar
-```
-Retrieve the key pressed.
-
-11.
-```
-teq r0,#'\n'
-beq readLoopBreak$
-teq r0,#0
-beq cursor$
-teq r0,#'\b'
-bne standard$
-```
-
-Break out of the loop if we have an enter key, and jump straight to the cursor update if no key was pressed. If the key is a backspace we fall through to the delete code below; anything else is treated as an ordinary character.
-
-12.
-```
-delete$:
-cmp length,#0
-subgt length,#1
-b cursor$
-```
-Remove one from the length to delete a character.
-
-13.
-```
-standard$:
-cmp length,maxLength
-bge cursor$
-strb r0,[string,length]
-add length,#1
-```
-Write out an ordinary character where possible.
-
-14.
-```
-cursor$:
-ldrb r0,[string,length]
-teq r0,#'_'
-moveq r0,#' '
-movne r0,#'_'
-strb r0,[string,length]
-```
-Load in the last character, and change it to an underscore if it isn't one, and a space if it is.
-
-15.
-```
-b readLoop$
-readLoopBreak$:
-```
-Loop until the user presses enter.
-
-16.
-```
-mov r0,#'\n'
-strb r0,[string,length]
-```
-Store a new line at the end of the string.
-
-17.
-```
-str input,[taddr,#terminalStop-terminalStart]
-str view,[taddr,#terminalView-terminalStart]
-mov r0,string
-mov r1,length
-add r1,#1
-bl Print
-bl TerminalDisplay
-```
-Reset the terminalView and terminalStop and then Print and TerminalDisplay the final input.
-
-18.
-```
-mov r0,#0
-strb r0,[string,length]
-```
-Write out the null terminator.
-
-19.
-```
-mov r0,length
-pop {r4,r5,r6,r7,r8,r9,pc}
-.unreq string
-.unreq maxLength
-.unreq input
-.unreq taddr
-.unreq length
-.unreq view
-```
-Return the length.
-
-
-
-
-### 5 The Terminal: Rise of the Machine
-
-So, now we can theoretically interact with the user on the terminal. The most obvious thing to do is to put this to the test! In 'main.s' delete everything after bl UsbInitialise and copy in the following code:
-
-```
-reset$:
- mov sp,#0x8000
- bl TerminalClear
-
- ldr r0,=welcome
- mov r1,#welcomeEnd-welcome
- bl Print
-
-loop$:
- ldr r0,=prompt
- mov r1,#promptEnd-prompt
- bl Print
-
- ldr r0,=command
- mov r1,#commandEnd-command
- bl ReadLine
-
- teq r0,#0
- beq loopContinue$
-
- mov r4,r0
-
- ldr r5,=command
- ldr r6,=commandTable
-
- ldr r7,[r6,#0]
- ldr r9,[r6,#4]
- commandLoop$:
- ldr r8,[r6,#8]
- sub r1,r8,r7
-
- cmp r1,r4
- bgt commandLoopContinue$
-
- mov r0,#0
- commandName$:
- ldrb r2,[r5,r0]
- ldrb r3,[r7,r0]
- teq r2,r3
- bne commandLoopContinue$
- add r0,#1
- teq r0,r1
- bne commandName$
-
- ldrb r2,[r5,r0]
- teq r2,#0
- teqne r2,#' '
- bne commandLoopContinue$
-
- mov r0,r5
- mov r1,r4
- mov lr,pc
- mov pc,r9
- b loopContinue$
-
- commandLoopContinue$:
- add r6,#8
- mov r7,r8
- ldr r9,[r6,#4]
- teq r9,#0
- bne commandLoop$
-
- ldr r0,=commandUnknown
- mov r1,#commandUnknownEnd-commandUnknown
- ldr r2,=formatBuffer
- ldr r3,=command
- bl FormatString
-
- mov r1,r0
- ldr r0,=formatBuffer
- bl Print
-
-loopContinue$:
- bl TerminalDisplay
- b loop$
-
-echo:
- cmp r1,#5
- movle pc,lr
-
- add r0,#5
- sub r1,#5
- b Print
-
-ok:
- teq r1,#5
- beq okOn$
- teq r1,#6
- beq okOff$
- mov pc,lr
-
- okOn$:
- ldrb r2,[r0,#3]
- teq r2,#'o'
- ldreqb r2,[r0,#4]
- teqeq r2,#'n'
- movne pc,lr
- mov r1,#0
- b okAct$
-
- okOff$:
- ldrb r2,[r0,#3]
- teq r2,#'o'
- ldreqb r2,[r0,#4]
- teqeq r2,#'f'
- ldreqb r2,[r0,#5]
- teqeq r2,#'f'
- movne pc,lr
- mov r1,#1
-
- okAct$:
-
- mov r0,#16
- b SetGpio
-
-.section .data
-.align 2
-welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
-welcomeEnd:
-.align 2
-prompt: .ascii "\n> "
-promptEnd:
-.align 2
-command:
- .rept 128
- .byte 0
- .endr
-commandEnd:
-.byte 0
-.align 2
-commandUnknown: .ascii "Command `%s' was not recognised.\n"
-commandUnknownEnd:
-.align 2
-formatBuffer:
- .rept 256
- .byte 0
- .endr
-formatEnd:
-
-.align 2
-commandStringEcho: .ascii "echo"
-commandStringReset: .ascii "reset"
-commandStringOk: .ascii "ok"
-commandStringCls: .ascii "cls"
-commandStringEnd:
-
-.align 2
-commandTable:
-.int commandStringEcho, echo
-.int commandStringReset, reset$
-.int commandStringOk, ok
-.int commandStringCls, TerminalClear
-.int commandStringEnd, 0
-```
-This code brings everything together into a simple command line operating system. The commands available are echo, reset, ok and cls. echo copies any text after it back to the terminal; reset resets the operating system if things go wrong; ok has two forms: ok on turns the OK LED on and ok off turns it off; and cls clears the terminal using TerminalClear.
-
-Have a go with this code on the Raspberry Pi. If it doesn't work, please see our troubleshooting page.
-
-When it works, congratulations, you've built a basic terminal Operating System and completed the input series. Unfortunately, this is as far as these tutorials go at the moment, but I hope to make more in the future. Please send feedback to awc32@cam.ac.uk.
-
-You're now in a position to start building some simple terminal Operating Systems. My code above builds up a table of available commands in commandTable. Each entry in the table is an int for the address of the string, and an int for the address of the code to run. The last entry has to be commandStringEnd, 0. Try implementing some of your own commands, using our existing functions, or making new ones; a sketch of one possible extra command is given below. The parameters for the functions to run are r0 is the address of the command the user typed, and r1 is the length. You can use this to pass inputs to your commands. Maybe you could make a calculator program, perhaps a drawing program or a chess program. Whatever ideas you've got, give them a go!
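-
-For example, here is a hypothetical 'hello' command (all names below are my own, not part of the lesson's code). The handler follows the same calling convention as echo above: r0 is the address of the typed command and r1 is its length, and it tail-calls Print just like echo does.
-
-```
-hello:
-    ldr r0,=helloMessage
-    mov r1,#helloMessageEnd-helloMessage
-    b Print
-```
-
-helloMessage would be defined in the .data section like welcome above (helloMessage: .ascii "\nHello from my new command!" followed by helloMessageEnd:). To register the command, add commandStringHello: .ascii "hello" immediately before commandStringEnd, and add .int commandStringHello, hello to commandTable just before the terminating .int commandStringEnd, 0 entry; the name lengths are worked out from the gap between consecutive strings, so the order of the strings matters.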
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 6 Screen01.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 6 Screen01.md
deleted file mode 100644
index 0b3cc3940c..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 6 Screen01.md
+++ /dev/null
@@ -1,503 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 6 Screen01)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 6 Screen01
-======
-
-Welcome to the Screen lesson series. In this series, you will learn how to control the screen using the Raspberry Pi in assembly code, starting at just displaying random data, then moving up to displaying a fixed image, displaying text and then formatting numbers into text. It is assumed that you have already completed the OK series, and so things covered in this series will not be repeated here.
-
-This first screen lesson teaches some basic theory about graphics, and then applies it to display a gradient pattern to the screen or TV.
-
-### 1 Getting Started
-
-It is expected that you have completed the OK series, and so functions in the 'gpio.s' file and 'systemTimer.s' file from that series will be called. If you do not have these files, or prefer to use a correct implementation, download the solution to OK05.s. The 'main.s' file from here will also be useful, up to and including mov sp,#0x8000. Please delete anything after that line.
-
-### 2 Computer Graphics
-
-There are a few systems for representing colours as numbers. Here we focus on RGB systems, but HSL is another common system used.
-
-As you're hopefully beginning to appreciate, at a fundamental level, computers are very stupid. They have a limited number of instructions, almost exclusively to do with maths, and yet somehow they are capable of doing many things. The thing we currently wish to understand is how a computer could possibly put an image on the screen. How would we translate this problem into binary? The answer is relatively straightforward; we devise some system of numbering each colour, and then we store one number for every pixel on the screen. A pixel is a small dot on your screen. If you move very close, you will probably be able to make out individual pixels on your screen, and be able to see that every image is just made out of these pixels in combination.
-
-As the computer age advanced, people wanted more and more complicated graphics, and so the concept of a graphics card was invented. The graphics card is a secondary processor on your computer which only exists to draw images to the screen. It has the job of turning the pixel value information into light intensity levels to be transmitted to the screen. On modern computers, graphics cards can also do a lot more than that, such as drawing 3D graphics. In this tutorial however, we will just concentrate on the first use of graphics cards; getting pixel colours from memory out to the screen.
-
-One issue that is raised immediately by all this is the system we use for numbering colours. There are several choices, each producing outputs of different quality. I will outline a few here for completeness.
-
-Although some images here have few colours they use a technique called spatial dithering. This allows them to still show a good representation of the image, with very few colours. Many early Operating Systems used this technique.
-
-| Name | Unique Colours | Description | Examples |
-| ----------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------- |
-| Monochrome | 2 | Use 1 bit to store each pixel, with a 1 being white, and a 0 being black. | ![Monochrome image of a bird][1] |
-| Greyscale | 256 | Use 1 byte to store each pixel, with 255 representing white, 0 representing black, and all values in between representing a linear combination of the two. | ![Greyscale image of a bird][2] |
-| 8 Colour | 8 | Use 3 bits to store each pixel, the first bit representing the presence of a red channel, the second representing a green channel and the third a blue channel. | ![8 colour image of a bird][3] |
-| Low Colour | 256 | Use 8 bits to store each pixel, the first 3 bits representing the intensity of the red channel, the next 3 bits representing the intensity of the green channel and the final 2 bits representing the intensity of the blue channel. | ![Low colour image of a bird][4] |
-| High Colour | 65,536 | Use 16 bits to store each pixel, the first 5 bits representing the intensity of the red channel, the next 6 bits representing the intensity of the green channel and the final 5 bits representing the intensity of the blue channel. | ![High colour image of a bird][5] |
-| True Colour | 16,777,216 | Use 24 bits to store each pixel, the first 8 bits representing the intensity of the red channel, the second 8 representing the green channel and the final 8 bits the blue channel. | ![True colour image of a bird][6] |
-| RGBA32 | 16,777,216 with 256 transparency levels | Use 32 bits to store each pixel, the first 8 bits representing the intensity of the red channel, the second 8 representing the green channel, the third 8 bits the blue channel, and the final 8 bits a transparency channel. The transparency channel is only considered when drawing one image on top of another and is stored such that a value of 0 indicates the image behind's colour, a value of 255 represents this image's colour, and all values between represent a mix. | |
-
-
-In this tutorial we shall use High Colour initially. As you can see from the image, it produces clear, good quality images, but it doesn't take up as much space as True Colour. That said, for quite a small display of 800x600 pixels, it would still take just under 1 megabyte of space (800 × 600 pixels × 2 bytes per pixel = 960,000 bytes). It also has the advantage that the size is a multiple of a power of 2, which greatly reduces the complexity of getting information compared with True Colour.
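-
-As a small, hedged sketch of what a High Colour value looks like (assuming red sits in the top 5 bits, green in the middle 6 and blue in the low 5), packing three 8 bit intensities held in r0, r1 and r2 into one 16 bit pixel value could be written as:
-
-```
-lsr r0,#3               @ keep the top 5 bits of red
-lsr r1,#2               @ keep the top 6 bits of green
-lsr r2,#3               @ keep the top 5 bits of blue
-lsl r0,#11              @ red into bits 11-15
-orr r0,r1,lsl #5        @ green into bits 5-10
-orr r0,r2               @ blue into bits 0-4
-```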
-
-```
-Storing the frame buffer places a heavy memory burden on a computer. For this reason, early computers often cheated by, for example, storing a screen's worth of text, and just drawing each letter to the screen separately every time it is refreshed.
-```
-
-The Raspberry Pi has a very special and rather odd relationship with its graphics processor. On the Raspberry Pi, the graphics processor actually runs first, and is responsible for starting up the main processor. This is very unusual. Ultimately it doesn't make too much difference, but in many interactions, it often feels like the processor is secondary, and the graphics processor is the most important. The two communicate on the Raspberry Pi by what is called the 'mailbox'. Each can deposit mail for the other, which will be collected at some future point and then dealt with. We shall use the mailbox to ask the graphics processor for an address. The address will be a location to which we can write the pixel colour information for the screen, called a frame buffer, and the graphics card will regularly check this location, and update the pixels on the screen appropriately.
-
-### 3 Programming the Postman
-
-```
-Message passing is quite a common way for components to communicate. Some Operating Systems use virtual message passing to allow programs to communicate.
-```
-
-The first thing we are going to need to program is a 'postman'. This is just two methods: MailboxRead, reading one message from the mailbox channel in r0, and MailboxWrite, writing the value in the top 28 bits of r0 to the mailbox channel in r1. The Raspberry Pi has 7 mailbox channels for communication with the graphics processor, only the first of which is useful to us, as it is for negotiating the frame buffer.
-
-The following table and diagrams describe the operation of the mailbox.
-
-Table 3.1 Mailbox Addresses
-| Address | Size / Bytes | Name | Description | Read / Write |
-| ------- | ------------ | ---- | ----------- | ------------ |
-| 2000B880 | 4 | Read | Receiving mail. | R |
-| 2000B890 | 4 | Poll | Receive without retrieving. | R |
-| 2000B894 | 4 | Sender | Sender information. | R |
-| 2000B898 | 4 | Status | Information. | R |
-| 2000B89C | 4 | Configuration | Settings. | RW |
-| 2000B8A0 | 4 | Write | Sending mail. | W |
-
-In order to send a message to a particular mailbox:
-
- 1. The sender waits until the Status field has a 0 in the top bit.
- 2. The sender writes to Write such that the lowest 4 bits are the mailbox to write to, and the upper 28 bits are the message to write.
-
-
-
-In order to read a message:
-
- 1. The receiver waits until the Status field has a 0 in the 30th bit.
- 2. The receiver reads from Read.
- 3. The receiver confirms the message is for the correct mailbox, and tries again if not.
-
-
-
-If you're feeling particularly confident, you now have enough information to write the two methods we need. If not, read on.
-
-As always the first method I recommend you implement is one to get the address of the mailbox region.
-
-```
-.globl GetMailboxBase
-GetMailboxBase:
-ldr r0,=0x2000B880
-mov pc,lr
-```
-
-The sending procedure is least complicated, so we shall implement this first. As your methods become more and more complicated, you will need to start planning them in advance. A good way to do this might be to write out a simple list of the steps that need to be done, in a fair amount of detail, like below.
-
-  1. Our input will be what to write (r0), and what mailbox to write it to (r1). We must validate this by checking it is a real mailbox, and that the low 4 bits of the value are 0. Never forget to validate inputs.
- 2. Use GetMailboxBase to retrieve the address.
- 3. Read from the Status field.
- 4. Check the top bit is 0. If not, go back to 3.
- 5. Combine the value to write and the channel.
-  6. Write to the Write field.
-
-
-
-Let's handle each of these in order.
-
-1.
-```
-.globl MailboxWrite
-MailboxWrite:
-tst r0,#0b1111
-movne pc,lr
-cmp r1,#15
-movhi pc,lr
-```
-
-```
-tst reg,#val computes and reg,#val and compares the result with 0.
-```
-
-This achieves our validation on r0 and r1. tst is an instruction that compares two numbers by computing their logical and, and then comparing the result with 0. In this case it checks that the lowest 4 bits of the input in r0 are all 0.
-
-2.
-```
-channel .req r1
-value .req r2
-mov value,r0
-push {lr}
-bl GetMailboxBase
-mailbox .req r0
-```
-
-This code ensures we will not overwrite our value or the link register, and calls GetMailboxBase.
-
-3.
-```
-wait1$:
-status .req r3
-ldr status,[mailbox,#0x18]
-```
-
-This code loads in the current status.
-
-4.
-```
-tst status,#0x80000000
-.unreq status
-bne wait1$
-```
-
-This code checks that the top bit of the status field is 0, and loops back to 3. if it is not.
-
-5.
-```
-add value,channel
-.unreq channel
-```
-
-This code combines the channel and value together.
-
-6.
-```
-str value,[mailbox,#0x20]
-.unreq value
-.unreq mailbox
-pop {pc}
-```
-
-This code stores the result to the write field.
-
-
-
-
-The code for MailboxRead is quite similar.
-
- 1. Our input will be what mailbox to read from (r0). We must validate this by checking that it is a real mailbox. Never forget to validate inputs.
- 2. Use GetMailboxBase to retrieve the address.
- 3. Read from the Status field.
- 4. Check the 30th bit is 0. If not, go back to 3.
- 5. Read from the Read field.
- 6. Check the mailbox is the one we want, if not go back to 3.
- 7. Return the result.
-
-
-
-Let's handle each of these in order.
-
-1.
-```
-.globl MailboxRead
-MailboxRead:
-cmp r0,#15
-movhi pc,lr
-```
-
-This achieves our validation on r0.
-
-2.
-```
-channel .req r1
-mov channel,r0
-push {lr}
-bl GetMailboxBase
-mailbox .req r0
-```
-
-This code ensures we will not overwrite our channel or the link register, and calls GetMailboxBase.
-
-3.
-```
-rightmail$:
-wait2$:
-status .req r2
-ldr status,[mailbox,#0x18]
-```
-
-This code loads in the current status.
-
-4.
-```
-tst status,#0x40000000
-.unreq status
-bne wait2$
-```
-
-This code checks that the 30th bit of the status field is 0, and loops back to 3. if it is not.
-
-5.
-```
-mail .req r2
-ldr mail,[mailbox,#0]
-```
-
-This code reads the next item from the mailbox.
-
-6.
-```
-inchan .req r3
-and inchan,mail,#0b1111
-teq inchan,channel
-.unreq inchan
-bne rightmail$
-.unreq mailbox
-.unreq channel
-```
-
-This code checks that the channel of the mail we just read is the one we were supplied. If not it loops back to 3.
-
-7.
-```
-and r0,mail,#0xfffffff0
-.unreq mail
-pop {pc}
-```
-
-This code moves the answer (the top 28 bits of mail) to r0.
-
-
-
-
-### 4 My Dearest Graphics Processor
-
-Through our new postman, we now have the ability to send a message to the graphics card. What should we send though? This was certainly a difficult question for me to find the answer to, as it isn't in any online manual that I have found. Nevertheless, by looking at the GNU/Linux source code for the Raspberry Pi, we are able to work out what we need to send.
-
-```
-Since the RAM is shared between the graphics processor and the processor on the Pi, we can just send where to find our message. This is called DMA, many complicated devices use this to speed up access times.
-```
-
-The message is very simple. We describe the frame buffer we would like, and the graphics card either agrees to our request, in which case it sends us back a 0 and fills in a small questionnaire we make, or it sends back a non-zero number, in which case we know it is unhappy. Unfortunately, I have no idea what any of the other numbers it can send back are, nor what they mean; all we know is that it is only happy when it sends back a zero. Fortunately it always seems to send a zero for sensible inputs, so we don't need to worry too much.
-
-For simplicity we shall design our request in advance, and store it in the .data section. In a file called 'framebuffer.s' place the following code:
-
-```
-.section .data
-.align 4
-.globl FrameBufferInfo
-FrameBufferInfo:
-.int 1024 /* #0 Physical Width */
-.int 768  /* #4 Physical Height */
-.int 1024 /* #8 Virtual Width */
-.int 768  /* #12 Virtual Height */
-.int 0    /* #16 GPU - Pitch */
-.int 16   /* #20 Bit Depth */
-.int 0    /* #24 X */
-.int 0    /* #28 Y */
-.int 0    /* #32 GPU - Pointer */
-.int 0    /* #36 GPU - Size */
-```
-
-This is the format of our messages to the graphics processor. The first two words describe the physical width and height. The second pair is the virtual width and height. The framebuffer's width and height are the virtual width and height, and the GPU scales the framebuffer as needed to fit the physical screen. The next word is one of the ones the GPU will fill in if it grants our request. It will be the number of bytes on each row of the frame buffer, in this case 2 × 1024 = 2048. The next word is how many bits to allocate to each pixel. Using a value of 16 means that the graphics processor uses the High Colour mode described above. A value of 24 would use True Colour, and 32 would use RGBA32. The next two words are x and y offsets, which mean the number of pixels to skip in the top left corner of the screen when copying the framebuffer to the screen. Finally, the last two words are filled in by the graphics processor, the first of which is the actual pointer to the frame buffer, and the second is the size of the frame buffer in bytes.
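-
-To make that layout concrete, here is a minimal sketch (not part of the tutorial's own code) of how the address of the pixel at (x,y) could be computed for this 16 bit frame buffer, assuming the pitch is exactly the virtual width × 2 bytes, as it is for the request above. The helper name and register choices are mine.
-
-```
-/* hypothetical helper, for illustration only: r0 = x, r1 = y,
-   returns the address of that pixel in r0. */
-GetPixelAddress:
-px .req r0
-py .req r1
-fbInfoAddr .req r2
-ldr fbInfoAddr,=FrameBufferInfo
-width .req r3
-ldr width,[fbInfoAddr,#8]      /* virtual width */
-mla px,py,width,px             /* px = y × width + x */
-.unreq width
-.unreq py
-fbPointer .req r3
-ldr fbPointer,[fbInfoAddr,#32] /* frame buffer pointer filled in by the GPU */
-add px,fbPointer,px,lsl #1     /* 2 bytes per pixel in High Colour */
-.unreq fbPointer
-.unreq fbInfoAddr
-.unreq px
-mov pc,lr
-```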
-
-```
-When working with devices using DMA, alignment constraints become very important. The GPU expects the message to be 16 byte aligned.
-```
-
-I was very careful to include a .align 4 here. As discussed before, this ensures the lowest 4 bits of the address of the next line are 0. Thus, we know for sure that FrameBufferInfo will be placed at an address we can send to the graphics processor, as our mailbox only sends values with the low 4 bits all 0.
-
-So, now that we have our message, we can write code to send it. The communication will go as follows:
-
- 1. Write the address of FrameBufferInfo + 0x40000000 to mailbox 1.
- 2. Read the result from mailbox 1. If it is not zero, we didn't ask for a proper frame buffer.
- 3. Copy our images to the pointer, and they will appear on screen!
-
-
-
-I've said something that I've not mentioned before in step 1. We have to add 0x40000000 to the address of FrameBufferInfo before sending it. This is actually a special signal to the GPU of how it should write to the structure. If we just send the address, the GPU will write its response, but will not make sure we can see it by flushing its cache. The cache is a piece of memory where a processor stores values it is working on before sending them to the RAM. By adding 0x40000000, we tell the GPU not to use its cache for these writes, which ensures we will be able to see the change.
-
-Since there is quite a lot going on there, it would be best to implement this as a function, rather than just putting the code into main.s. We shall write a function InitialiseFrameBuffer which does all this negotiation and returns the pointer to the frame buffer info data above, once it has a pointer in it. For ease, we should also make it so that the width, height and bit depth of the frame buffer are inputs to this method, so that it is easy to change in main.s without having to get into the details of the negotiation.
-
-Once again, let's write down in detail the steps we will have to take. If you're feeling confident, try writing the function straight away.
-
- 1. Validate our inputs.
- 2. Write the inputs into the frame buffer.
- 3. Send the address of the frame buffer + 0x40000000 to the mailbox.
- 4. Receive the reply from the mailbox.
- 5. If the reply is not 0, the method has failed. We should return 0 to indicate failure.
- 6. Return a pointer to the frame buffer info.
-
-
-
-Now we're getting into much bigger methods than before. Below is one implementation of the above.
-
-1.
-```
-.section .text
-.globl InitialiseFrameBuffer
-InitialiseFrameBuffer:
-width .req r0
-height .req r1
-bitDepth .req r2
-cmp width,#4096
-cmpls height,#4096
-cmpls bitDepth,#32
-result .req r0
-movhi result,#0
-movhi pc,lr
-```
-
-This code checks that the width and height are less than or equal to 4096, and that the bit depth is less than or equal to 32. This is once again using a trick with conditional execution. Convince yourself that this works.
-
-2.
-```
-fbInfoAddr .req r3
-push {lr}
-ldr fbInfoAddr,=FrameBufferInfo
-str width,[fbInfoAddr,#0]
-str height,[fbInfoAddr,#4]
-str width,[fbInfoAddr,#8]
-str height,[fbInfoAddr,#12]
-str bitDepth,[fbInfoAddr,#20]
-.unreq width
-.unreq height
-.unreq bitDepth
-```
-
-This code simply writes into our frame buffer structure defined above. I also take the opportunity to push the link register onto the stack.
-
-3.
-```
-mov r0,fbInfoAddr
-add r0,#0x40000000
-mov r1,#1
-bl MailboxWrite
-```
-
-The inputs to the MailboxWrite method are the value to write in r0, and the channel to write to in r1.
-
-4.
-```
-mov r0,#1
-bl MailboxRead
-```
-
-The input to the MailboxRead method is the channel to read from in r0, and the output is the value read.
-
-5.
-```
-teq result,#0
-movne result,#0
-popne {pc}
-```
-
-This code checks if the result of the MailboxRead method is 0, and returns 0 if not.
-
-6.
-```
-mov result,fbInfoAddr
-pop {pc}
-.unreq result
-.unreq fbInfoAddr
-```
-
-This code finishes off and returns the frame buffer info address.
-
-
-
-
-### 5 A Pixel Within a Row Within a Frame
-
-So, we've now created our methods to communicate with the graphics processor. It should now be capable of giving us the pointer to a frame buffer we can draw graphics to. Let's draw something now.
-
-In this first example, we'll just draw consecutive colours to the screen. It won't look pretty, but at least it will be working. How we will do this is by setting each pixel in the framebuffer to a consecutive number, and continually doing so.
-
-Copy the following code to 'main.s' after mov sp,#0x8000
-
-```
-mov r0,#1024
-mov r1,#768
-mov r2,#16
-bl InitialiseFrameBuffer
-```
-
-This code simply uses our InitialiseFrameBuffer method to create a frame buffer with width 1024, height 768, and bit depth 16. You can try different values in here if you wish, as long as you are consistent throughout the code. Since it's possible that this method can return 0 if the graphics processor did not give us a frame buffer, we had better check for this, and turn the OK LED on if it happens.
-
-```
-teq r0,#0
-bne noError$
-
-mov r0,#16
-mov r1,#1
-bl SetGpioFunction
-mov r0,#16
-mov r1,#0
-bl SetGpio
-
-error$:
-b error$
-
-noError$:
-fbInfoAddr .req r4
-mov fbInfoAddr,r0
-```
-
-Now that we have the frame buffer info address, we need to get the frame buffer pointer from it, and start drawing to the screen. We will do this using two loops, one going down the rows, and one going along the columns. On the Raspberry Pi, indeed in most applications, pictures are stored left to right then top to bottom, so we have to do the loops in the order I have said.
-
-
-```
-render$:
-
- fbAddr .req r3
- ldr fbAddr,[fbInfoAddr,#32]
-
- colour .req r0
- y .req r1
- mov y,#768
- drawRow$:
-
- x .req r2
- mov x,#1024
- drawPixel$:
-
- strh colour,[fbAddr]
- add fbAddr,#2
- sub x,#1
- teq x,#0
- bne drawPixel$
-
- sub y,#1
- add colour,#1
- teq y,#0
- bne drawRow$
-
- b render$
-
-.unreq fbAddr
-.unreq fbInfoAddr
-```
-
-```
-strh reg,[dest] stores the low half word number in reg at the address given by dest.
-```
-
-This is quite a large chunk of code, and has a loop within a loop within a loop. To help get your head around the looping, I've indented the code which is looped, depending on which loop it is in. This is quite common in most high level programming languages, and the assembler simply ignores the tabs. We see here that I load in the frame buffer address from the frame buffer information structure, and then loop over every row, then every pixel on the row. At each pixel, I use an strh (store half word) command to store the current colour, then increment the address we're writing to. After drawing each row, we increment the colour that we are drawing. After drawing the full screen, we branch back to the beginning.
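-
-The loop above just writes an ever increasing number as the colour, which is what produces the gradient. If you would rather draw one specific colour, you have to pack its red, green and blue components into the 16 bit value yourself. The snippet below is a small sketch of mine, assuming High Colour uses the common 5:6:5 red:green:blue layout; the component values are just an example.
-
-```
-red .req r0
-green .req r1
-blue .req r2
-mov red,#31                /* red component, 0 to 31 */
-mov green,#40              /* green component, 0 to 63 */
-mov blue,#0                /* blue component, 0 to 31 */
-lsl red,#11                /* red occupies bits 11 to 15 */
-orr red,red,green,lsl #5   /* green occupies bits 5 to 10 */
-orr red,red,blue           /* blue occupies bits 0 to 4 */
-/* r0 now holds the packed colour, ready to strh into the frame buffer */
-.unreq red
-.unreq green
-.unreq blue
-```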
-
-### 6 Seeing the Light
-
-Now you're ready to test this code on the Raspberry Pi. You should see a changing gradient pattern. Be careful: until the first message is sent to the mailbox, the Raspberry Pi displays a still gradient pattern between the four corners. If it doesn't work, please see our troubleshooting page.
-
-If it does work, congratulations! You can now control the screen! Feel free to alter this code to draw whatever pattern you like. You can do some very nice gradient patterns, and can compute the value of each pixel directly, since y contains a y-coordinate for the pixel, and x contains an x-coordinate. In the next lesson, [Lesson 7: Screen 02][7], we will look at one of the most common drawing tasks, lines.
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour1bImage.png
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour8gImage.png
-[3]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour3bImage.png
-[4]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour8bImage.png
-[5]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour16bImage.png
-[6]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/colour24bImage.png
-[7]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen02.html
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 7 Screen02.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 7 Screen02.md
deleted file mode 100644
index 3a8fe60f6f..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 7 Screen02.md
+++ /dev/null
@@ -1,449 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 7 Screen02)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen02.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 7 Screen02
-======
-
-The Screen02 lesson builds on Screen01, by teaching how to draw lines and also a small feature on generating pseudo random numbers. It is assumed you have the code for the [Lesson 6: Screen01][1] operating system as a basis.
-
-### 1 Dots
-
-Now that we've got the screen working, it is only natural to start wanting to create sensible images. It would be very nice indeed if we were able to actually draw something. One of the most basic components in all drawings is a line. If we were able to draw a line between any two points on the screen, we could start creating more complicated drawings just using combinations of these lines.
-
-```
-To allow complex drawing, some systems use a colouring function rather than just one colour to draw things. Each pixel calls the colouring function to determine what colour to draw there.
-```
-
-We will attempt to implement this in assembly code, but first we could really use some other functions to help. We need a function I will call SetPixel that changes the colour of a particular pixel, supplied as inputs in r0 and r1. It will be helpful in the future if we write code that can draw to any memory, not just the screen, so first of all, we need some system to control where we are actually going to draw to. I think that the best way to do this would be to have a piece of memory which stores where we are going to draw to. What we should end up with is a stored address which normally points to the frame buffer structure from last time. We will use this at all times in our drawing method. That way, if we want to draw to a different image in another part of our operating system, we could make this value the address of a different structure, and use the exact same code. For simplicity we will use another piece of data to control the colour of our drawings.
-
-Copy the following code to a new file called 'drawing.s'.
-
-```
-.section .data
-.align 1
-foreColour:
-.hword 0xFFFF
-
-.align 2
-graphicsAddress:
-.int 0
-
-.section .text
-.globl SetForeColour
-SetForeColour:
-cmp r0,#0x10000
-movhs pc,lr
-ldr r1,=foreColour
-strh r0,[r1]
-mov pc,lr
-
-.globl SetGraphicsAddress
-SetGraphicsAddress:
-ldr r1,=graphicsAddress
-str r0,[r1]
-mov pc,lr
-```
-
-This is just the pair of functions that I described above, along with their data. We will use them in 'main.s' before drawing anything to control where and what we are drawing.
-
-```
-Building generic methods like SetPixel which we can build other methods on top of is a useful idea. We have to make sure the method is fast though, since we will use it a lot.
-```
-
-Our next task is to implement a SetPixel method. This needs to take two parameters, the x and y co-ordinate of a pixel, and it should use the graphicsAddress and foreColour we have just defined to control exactly what and where it is drawing. If you think you can implement this immediately, do, if not I shall outline the steps to be taken, and then give an example implementation.
-
- 1. Load in the graphicsAddress.
- 2. Check that the x and y co-ordinates of the pixel are less than the width and height.
- 3. Compute the address of the pixel to write. (hint: frameBufferAddress + (x + y * width) * pixel size)
- 4. Load in the foreColour.
- 5. Store it at the address.
-
-
-
-An implementation of the above follows.
-
-1.
-```
-.globl DrawPixel
-DrawPixel:
-px .req r0
-py .req r1
-addr .req r2
-ldr addr,=graphicsAddress
-ldr addr,[addr]
-```
-
-2.
-```
-height .req r3
-ldr height,[addr,#4]
-sub height,#1
-cmp py,height
-movhi pc,lr
-.unreq height
-
-width .req r3
-ldr width,[addr,#0]
-sub width,#1
-cmp px,width
-movhi pc,lr
-```
-
-Remember that the width and height are stored at offsets of 0 and 4 into the frame buffer description respectively. You can refer back to 'frameBuffer.s' if necessary.
-
-3.
-```
-ldr addr,[addr,#32]
-add width,#1
-mla px,py,width,px
-.unreq width
-.unreq py
-add addr, px,lsl #1
-.unreq px
-```
-
-```
-mla dst,reg1,reg2,reg3 multiplies the values from reg1 and reg2, adds the value from reg3 and places the least significant 32 bits of the result in dst.
-```
-
-Admittedly, this code is specific to high colour frame buffers, as I use a bit shift directly to compute this address. You may wish to code a version of this function without the specific requirement to use high colour frame buffers, remembering to update the SetForeColour code. It may be significantly more complicated to implement.
-
-4.
-```
-fore .req r3
-ldr fore,=foreColour
-ldrh fore,[fore]
-```
-
-As above, this is high colour specific.
-
-5.
-```
-strh fore,[addr]
-.unreq fore
-.unreq addr
-mov pc,lr
-```
-
-As above, this is high colour specific.
-
-
-
-
-### 2 Lines
-
-The trouble is, line drawing isn't quite as simple as you may expect. By now you must realise that when making an operating system, we have to do almost everything ourselves, and line drawing is no exception. I suggest for a few minutes you have a think about how you would draw a line between any two points.
-
-```
-When programming normally, we tend to be lazy with things like division. Operating Systems must be incredibly efficient, and so we must focus on doing things as best as possible.
-```
-
-I expect the central idea of most strategies will involve computing the gradient of the line, and stepping along it. This sounds perfectly reasonable, but is actually a terrible idea. The problem with it is it involves division, which is something that we know can't easily be done in assembly, and also keeping track of decimal numbers, which is again difficult. There is, in fact, an algorithm called Bresenham's Algorithm, which is perfect for assembly code because it only involves addition, subtraction and bit shifts.
-```
-Let's start off by defining a reasonably straightforward line drawing algorithm as follows:
-
-if x1 > x0 then
-
-set deltax to x1 - x0
-set stepx to +1
-
-otherwise
-
-set deltax to x0 - x1
-set stepx to -1
-
-end if
-
-if y1 > y0 then
-
-set deltay to y1 - y0
-set stepy to +1
-
-otherwise
-
-set deltay to y0 - y1
-set stepy to -1
-
-end if
-
-if deltax > deltay then
-
-set error to 0
-until x0 = x1 + stepx
-
-setPixel(x0, y0)
-set error to error + deltax ÷ deltay
-if error ≥ 0.5 then
-
-set y0 to y0 + stepy
-set error to error - 1
-
-end if
-set x0 to x0 + stepx
-
-repeat
-
-otherwise
-
-set error to 0
-until y0 = y1 + stepy
-
-setPixel(x0, y0)
-set error to error + deltay ÷ deltax
-if error ≥ 0.5 then
-
-set x0 to x0 + stepx
-set error to error - 1
-
-end if
-set y0 to y0 + stepy
-
-repeat
-
-end if
-
-This algorithm is a representation of the sort of thing you may have imagined. The variable error keeps track of how far away from the actual line we are. Every step we take along the x axis increases this error, and every time we move down the y axis, the error decreases by 1 unit again. The error is measured as a distance along the y axis.
-
-While this algorithm works, it suffers a major problem in that we clearly have to use decimal numbers to store error, and also we have to do a division. An immediate optimisation would therefore be to change the units of error. There is no need to store it in any particular units, as long as we scale every use of it by the same amount. Therefore, we could rewrite the algorithm simply by multiplying all equations involving error by deltay, and simplifying the result. Just showing the main loop:
-
-set error to 0 × deltay
-until x0 = x1 + stepx
-
-setPixel(x0, y0)
-set error to error + deltax ÷ deltay × deltay
-if error ≥ 0.5 × deltay then
-
-set y0 to y0 + stepy
-set error to error - 1 × deltay
-
-end if
-set x0 to x0 + stepx
-
-repeat
-
-Which simplifies to:
-
-set error to 0
-until x0 = x1 + stepx
-
-setPixel(x0, y0)
-set error to error + deltax
-if error × 2 ≥ deltay then
-
-set y0 to y0 + stepy
-set error to error - deltay
-
-end if
-set x0 to x0 + stepx
-
-repeat
-
-Suddenly we have a much better algorithm. We see now that we've eliminated the need for division altogether. Better still, the only multiplication is by 2, which we know is just a bit shift left by 1! This is now very close to Bresenham's Algorithm, but one further optimisation can be made. At the moment, we have an if statement which leads to two very similar blocks of code, one for lines with larger x differences, and one for lines with larger y differences. It is worth checking if the code could be converted to a single statement for both types of line.
-
-The difficulty arises somewhat in that in the first case, error is to do with y, and in the second case error is to do with x. The solution is to track the error in both variables simultaneously, using negative error to represent an error in x, and positive error in y.
-
-set error to deltax - deltay
-until x0 = x1 + stepx or y0 = y1 + stepy
-
-setPixel(x0, y0)
-if error × 2 > -deltay then
-
-set x0 to x0 + stepx
-set error to error - deltay
-
-end if
-if error × 2 < deltax then
-
-set y0 to y0 + stepy
-set error to error + deltax
-
-end if
-
-repeat
-
-It may take some time to convince yourself that this actually works. At each step, we consider if it would be correct to move in x or y. We do this by checking if the magnitude of the error would be lower if we moved in the x or y co-ordinates, and then moving if so.
-```
-
-```
-Bresenham's Line Algorithm was developed in 1962 by Jack Elton Bresenham, 24 at the time, whilst studying for a PhD.
-```
-
-Bresenham's Algorithm for drawing a line can be described by the following pseudo code. Pseudo code is just text which looks like computer instructions, but is actually intended for programmers to understand algorithms, rather than being machine readable.
-
-```
-/* We wish to draw a line from (x0,y0) to (x1,y1), using only a function setPixel(x,y) which draws a dot in the pixel given by (x,y). */
-if x1 > x0 then
- set deltax to x1 - x0
- set stepx to +1
-otherwise
- set deltax to x0 - x1
- set stepx to -1
-end if
-
-if y1 > y0 then
- set deltay to y1 - y0
- set stepy to +1
-otherwise
- set deltay to y0 - y1
- set stepy to -1
-end if
-
-set error to deltax - deltay
-until x0 = x1 + stepx or y0 = y1 + stepy
- setPixel(x0, y0)
- if error × 2 ≥ -deltay then
- set x0 to x0 + stepx
- set error to error - deltay
- end if
- if error × 2 ≤ deltax then
- set y0 to y0 + stepy
- set error to error + deltax
- end if
-repeat
-```
-
-Rather than numbered lists as I have used so far, this representation of an algorithm is far more common. See if you can implement this yourself. For reference, I have provided my implementation below.
-
-```
-.globl DrawLine
-DrawLine:
-push {r4,r5,r6,r7,r8,r9,r10,r11,r12,lr}
-x0 .req r9
-x1 .req r10
-y0 .req r11
-y1 .req r12
-
-mov x0,r0
-mov x1,r2
-mov y0,r1
-mov y1,r3
-
-dx .req r4
-dyn .req r5 /* Note that we only ever use -deltay, so I store its negative for speed. (hence dyn) */
-sx .req r6
-sy .req r7
-err .req r8
-
-cmp x0,x1
-subgt dx,x0,x1
-movgt sx,#-1
-suble dx,x1,x0
-movle sx,#1
-
-cmp y0,y1
-subgt dyn,y1,y0
-movgt sy,#-1
-suble dyn,y0,y1
-movle sy,#1
-
-add err,dx,dyn
-add x1,sx
-add y1,sy
-
-pixelLoop$:
-
- teq x0,x1
- teqne y0,y1
- popeq {r4,r5,r6,r7,r8,r9,r10,r11,r12,pc}
-
- mov r0,x0
- mov r1,y0
- bl DrawPixel
-
- cmp dyn, err,lsl #1
- addle err,dyn
- addle x0,sx
-
- cmp dx, err,lsl #1
- addge err,dx
- addge y0,sy
-
- b pixelLoop$
-
-.unreq x0
-.unreq x1
-.unreq y0
-.unreq y1
-.unreq dx
-.unreq dyn
-.unreq sx
-.unreq sy
-.unreq err
-```
-
-### 3 Randomness
-
-So, now we can draw lines. Although we could use this to draw pictures and whatnot (feel free to do so!), I thought I would take the opportunity to introduce the idea of computer randomness. What we will do is select a pair of random co-ordinates, and then draw a line from the last pair to that point in steadily incrementing colours. I do this purely because it looks quite pretty.
-
-```
-Hardware random number generators are occasionally used in security, where the predictability of a sequence of random numbers may affect the security of some encryption.
-```
-
-So, now it comes down to it, how do we be random? Unfortunately for us there isn't some device which generates random numbers (such devices are very rare). So somehow using only the operations we've learned so far we need to invent 'random numbers'. It shouldn't take you long to realise this is impossible. The operations always have well defined results, executing the same sequence of instructions with the same registers yields the same answer. What we instead do is deduce a sequence that is pseudo random. This means numbers that, to the outside observer, look random, but in fact were completely determined. So, we need a formula to generate random numbers. One might be tempted to just spam mathematical operators out for example: 4x2! / 64, but in actuality this generally produces low quality random numbers. In this case for example, if x were 0, the answer would be 0. Stupid though it sounds, we need a very careful choice of equation to produce high quality random numbers.
-
-```
-This sort of discussion often begs the question what do we mean by a random number? We generally mean statistical randomness: A sequence of numbers that has no obvious patterns or properties that could be used to generalise it.
-```
-
-The method I'm going to teach you is called the quadratic congruence generator. This is a good choice because it can be implemented in 5 instructions, and yet generates a seemingly random order of the numbers from 0 to 2^32 - 1.
-
-The reason why the generator can create such a long sequence with so few instructions is unfortunately a little beyond the scope of this course, but I encourage the interested to research it. It all centres on the following quadratic formula, where x_n is the nth random number generated.
-
-x_(n+1) = ax_(n)^2 + bx_(n) + c mod 2^32
-
-Subject to the following constraints:
-
- 1. a is even
-
- 2. b = a + 1 mod 4
-
- 3. c is odd
-
-
-
-
-If you've not seen mod before, it means the remainder of a division by the number after it. For example b = a + 1 mod 4 means that b is the remainder of dividing a + 1 by 4, so if a were 12 say, b would be 1 as a + 1 is 13, and 13 divided by 4 is 3 remainder 1.
-
-Copy the following code into a file called 'random.s'.
-
-```
-.globl Random
-Random:
-xnm .req r0
-a .req r1
-
-mov a,#0xef00
-mul a,xnm
-mul a,xnm
-add a,xnm
-.unreq xnm
-add r0,a,#73
-
-.unreq a
-mov pc,lr
-```
-
-This is an implementation of the random function, with an input of the last value generated in r0, and an output of the next number. In my case, I've used a = 0xEF00, b = 1, c = 73. This choice was arbitrary but meets the requirements above. Feel free to use any numbers you wish instead, as long as they obey the rules.
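-
-As a quick usage sketch (not part of the lesson's files), this is how a loop might use Random, feeding each result back in as the next input. Starting from a seed of 0, the first value produced by the parameters above is simply c, that is 73.
-
-```
-mov r4,#0          /* the 'last' random number, i.e. the seed */
-randomLoop$:
-mov r0,r4
-bl Random          /* r0 = the next pseudo random number */
-mov r4,r0          /* feed it back in next time round */
-/* ... use the value in r0 here ... */
-b randomLoop$
-```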
-
-### 4 Pi-casso
-
-OK, now we have all the functions we're going to need, let's try it out. Alter main to do the following, after getting the frame buffer info address:
-
- 1. Call SetGraphicsAddress with r0 containing the frame buffer info address.
- 2. Set four registers to 0. One will be the last random number, one will be the colour, one will be the last x co-ordinate and one will be the last y co-ordinate.
- 3. Call random to generate the next x-coordinate, using the last random number as the input.
- 4. Call random again to generate the next y-coordinate, using the x-coordinate you generated as an input.
- 5. Update the last random number to contain the y-coordinate.
- 6. Call SetForeColour with the colour, then increment the colour. If it goes above 0xFFFF, make sure it goes back to 0.
- 7. The x and y coordinates we have generated are between 0 and 0xFFFFFFFF. Convert them to a number between 0 and 1023 by using a logical shift right of 22.
- 8. Check the y coordinate is on the screen. Valid y coordinates are between 0 and 767. If not, go back to 3.
- 9. Draw a line from the last x and y coordinates to the current x and y coordinates.
- 10. Update the last x and y coordinates to contain the current ones.
- 11. Go back to 3.
-
-
-
-As always, a solution for this can be found on the downloads page.
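-
-If you would like a starting point before peeking at that solution, here is a rough, unofficial sketch of the main loop described above. It assumes step 1 (calling SetGraphicsAddress) has already been done and that a 1024×768 frame buffer is in use; the register choices are mine, not the author's.
-
-```
-mov r4,#0            /* last random number */
-mov r5,#0            /* colour */
-mov r6,#0            /* last x */
-mov r7,#0            /* last y */
-renderLines$:
-mov r0,r4
-bl Random            /* new x co-ordinate, still 32 bits */
-mov r8,r0
-bl Random            /* new y co-ordinate, generated from the x in r0 */
-mov r9,r0
-mov r4,r9            /* the y value becomes the last random number */
-mov r0,r5
-bl SetForeColour
-add r5,#1
-lsl r5,#16
-lsr r5,#16           /* wrap the colour back to 0 once it passes 0xFFFF */
-lsr r8,#22           /* scale x down to 0-1023 */
-lsr r9,#22           /* scale y down to 0-1023 */
-cmp r9,#768
-bhs renderLines$     /* y is off the screen, so try again */
-mov r0,r6
-mov r1,r7
-mov r2,r8
-mov r3,r9
-bl DrawLine          /* draw from the last point to the new one */
-mov r6,r8
-mov r7,r9
-b renderLines$
-```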
-
-Once you've finished, test it on the Raspberry Pi. You should see a very fast sequence of random lines being drawn on the screen, in steadily incrementing colours. This should never stop. If it doesn't work, please see our troubleshooting page.
-
-When you have it working, congratulations! We've now learned about meaningful graphics, and also about random numbers. I encourage you to play with line drawing, as it can be used to render almost anything you want. You may also want to explore more complicated shapes. Most can be made out of lines, but is this necessarily the best strategy? If you like the line program, try experimenting with the SetPixel function. What happens if instead of just setting the value of the pixel, you increase it by a small amount? What other patterns can you make? In the next lesson, [Lesson 8: Screen 03][2], we will look at the invaluable skill of drawing text.
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen02.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md
deleted file mode 100644
index 08803fd50f..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md
+++ /dev/null
@@ -1,485 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 8 Screen03)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 8 Screen03
-======
-
-The Screen03 lesson builds on Screen02, by teaching how to draw text, and also a small feature on the command line arguments of the operating system. It is assumed you have the code for the [Lesson 7: Screen02][1] operating system as a basis.
-
-### 1 String Theory
-
-So, our task for this operating system is to draw text. We have several problems to address, the most pressing of which is probably about storing text. Unbelievably text has been one of the biggest flaws on computers to date. What should have been a straightforward type of data has brought down operating systems, crippled otherwise wonderful encryption, and caused many problems for users of different alphabets. Nevertheless, it is an incredibly important type of data, as it is an excellent link between the computer and the user. Text can be sufficiently structured that the operating system understands it, as well as sufficiently readable that humans can use it.
-
-```
-Variable data types such as text require much more complex handling.
-```
-
-So how exactly is text stored? Simply enough, we have some system by which we give each letter a unique number, and then store a sequence of such numbers. Sounds easy. The problem is that the number of numbers is not fixed. Some pieces of text are longer than others. With storing ordinary numbers, we have some fixed limit, e.g. 32 bits, and then we can't go beyond that, we write methods that use numbers of that length, etc. In terms of text, or strings as we often call them, we want to write functions that work on variable length strings, otherwise we would need a lot of functions! This is not a problem for numbers normally, as there are only a few common number formats (byte, word, half, double).
-
-```
-Buffer overrun attacks have plagued computers for years. Recently, the Wii, Xbox and Playstation 2 all suffered buffer overrun attacks, as well as large systems like Microsoft's Web and Database servers.
-```
-
-So, how do we determine how long the string is? I think the obvious answer is just to store how long the string is, and then to store the characters that make it up. This is called length prefixing, as the length comes before the string. Unfortunately, the pioneers of computer science did not agree. They felt it made more sense to have a special character called the null terminator (denoted \0) which represents when a string ends. This does indeed simplify many string algorithms, as you just keep working until the null terminator. Unfortunately this is the source of many security issues. What if a malicious user gives you a very long string? What if you didn't have enough space to store it? You might run a string copying function that copies until the null terminator, but because the string is so long, it overwrites your program. This may sound far-fetched, but nevertheless, such buffer overrun attacks are incredibly common. Length prefixing mitigates this problem as it is easy to deduce the size of the buffer required to store the string. As an operating system developer, I leave it to you to decide how best to store text.
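-
-As a small illustration of the two storage strategies discussed above (not something the tutorial itself uses), here is the same word stored both ways as assembler data:
-
-```
-nullTerminated:
-.asciz "Hello"    /* 6 bytes: the 5 characters followed by a terminating 0 byte */
-
-lengthPrefixed:
-.byte 5           /* the length comes first... */
-.ascii "Hello"    /* ...followed by exactly 5 characters and no terminator */
-```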
-
-The next thing we need to establish is how best to map characters to numbers. Fortunately, this is reasonably well standardised, so you have two major choices, Unicode and ASCII. Unicode maps almost every single useful symbol that can be written to a number, in exchange for having a lot more numbers, and a more complicated encoding system. ASCII uses one byte per character, and so only stores the Latin alphabet, numbers, a few symbols and a few special characters. Thus, ASCII is very easy to implement, compared to Unicode, in which not every character takes the same space, making string algorithms tricky. Normally operating systems use ASCII for strings which will not be displayed to end users (but perhaps to developers or experts), and Unicode for displaying messages to users, as Unicode can support things like Japanese characters, and so could be localised.
-
-Fortunately for us, this decision is irrelevant at the moment, as the first 128 characters of both are exactly the same, and are encoded exactly the same.
-
-Table 1.1 ASCII/Unicode symbols 0-127
-
-| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | d | e | f | |
-|----| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ----|
-| 00 | NUL | SOH | STX | ETX | EOT | ENQ | ACK | BEL | BS | HT | LF | VT | FF | CR | SO | SI | |
-| 10 | DLE | DC1 | DC2 | DC3 | DC4 | NAK | SYN | ETB | CAN | EM | SUB | ESC | FS | GS | RS | US | |
-| 20 |   | ! | " | # | $ | % | & | ' | ( | ) | * | + | , | - | . | / | |
-| 30 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | : | ; | < | = | > | ? | |
-| 40 | @ | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | |
-| 50 | P | Q | R | S | T | U | V | W | X | Y | Z | [ | \ | ] | ^ | _ | |
-| 60 | ` | a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | |
-| 70 | p | q | r | s | t | u | v | w | x | y | z | { | \| | } | ~ | DEL | |
-
-The table shows the first 128 symbols. The hexadecimal representation of the number for a symbol is the row value added to the column value, for example 'A' is 0x41. What you may find surprising is the first two rows, and the very last value. These 33 special characters are not printed at all. In fact, these days, many are ignored. They exist because ASCII was originally intended as a system for transmitting data over computer networks, and so a lot more information than just the symbols had to be sent. The key special symbols that you should learn are NUL, the null terminator character I mentioned before; HT, the horizontal tab, which is what we normally refer to as a tab; and LF, the line feed character, which is used to make a new line. You may wish to research and use the other characters for special meanings in your operating system.
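-
-As a small, hypothetical sketch built only on what the table tells us (it is not part of the tutorial's code), here is how you could test whether the character in r0 is one of the printable symbols rather than one of those 33 special characters:
-
-```
-/* returns r0 = 1 if r0 held a printable ASCII symbol (0x20 to 0x7E), else r0 = 0 */
-IsPrintable:
-cmp r0,#0x7F       /* DEL, and anything above it, is not printable */
-movhs r0,#0
-movhs pc,lr
-cmp r0,#0x20       /* everything below the space character is a control character */
-movlo r0,#0
-movhs r0,#1
-mov pc,lr
-```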
-
-### 2 Characters
-
-So, now that we know a bit about strings, we can start to think about how they're displayed. The fundamental thing we need to do in order to be able to display a string is to be able to display a character. Our first task will be making a DrawCharacter function which takes in a character to draw and a location, and then draws the character.
-
-```
-The true type font format used in many Operating Systems is so powerful, it has its own assembly language built in to ensure letters look correct at any resolution.
-```
-
-Naturally, this leads to a discussion about fonts. We already know there are many ways to display any given letter in accordance with font choice. So how does a font work? In the very early days of computer science, a font was just a series of little pictures of all the letters, called a bitmap font, and all the draw character method would do is copy one of the pictures to the screen. The trouble with this is when people want to resize the text. Sometimes we need big letters, and sometimes small. Although we could keep drawing new pictures for every font at every size with every character, this would get tedious. Thus, vector fonts were invented. In vector fonts, rather than containing an image of the font, the font file contains a description of how to draw it, e.g. an 'o' could be a circle with radius half that of the maximum letter height. Modern operating systems use such fonts almost exclusively, as they are perfect at any resolution.
-
-Unfortunately, much though I would love to include an implementation of one of the vector font formats, it would take up the remainder of this website. Thus, we will implement a bitmap font, though if you wish to make a decent graphical operating system, a vector font would be useful.
-```
-00000000
-00000000
-00000000
-00010000
-00101000
-00101000
-00101000
-01000100
-01000100
-01111100
-11000110
-10000010
-00000000
-00000000
-00000000
-00000000
-```
-
-On the downloads page, I have included several '.bin' files in the font section. These are just raw binary data files for a few fonts. For this tutorial, pick your favourite from the monospace, monochrome, 8x16 section. Download it and store it in the 'source' directory as 'font.bin'. These files are just monochrome images of each of the letters in turn, with each letter being exactly 8 by 16 pixels. Thus, each takes 16 bytes, the first byte being the top row, the second the next, etc.
-
-The diagram shows the 'A' character in the monospace, monochrome, 8x16 font Bitstream Vera Sans Mono. In the file, we would find this starting at byte 0x41 × 0x10 = 0x410 as the following sequence in hexadecimal:
-
-00, 00, 00, 10, 28, 28, 28, 44, 44, 7C, C6, 82, 00, 00, 00, 00
-
-We're going to use a monospace font here, because in a monospace font every character is the same size. Unfortunately, yet another complication with most fonts is that character widths vary, leading to more complex display code. I've included a few other fonts on the downloads page, as well as an explanation of the format I've stored them all in.
-
-So let's get down to business. Copy the following to 'drawing.s' after the .int 0 of graphicsAddress.
-
-```
-.align 4
-font:
-.incbin "font.bin"
-```
-
-```
-.incbin "file" inserts the binary data from the file file.
-```
-
-This code copies the font data from the file to the address labelled font. We've used an .align 4 here to ensure each character starts on a multiple of 16 bytes, which can be used for a speed trick later.
-
-Now we want to write the draw character method. I'll give the pseudo code for this, so you can try to implement it yourself if you want to. Conventionally >> means logical shift right.
-
-```
-function drawCharacter(r0 is character, r1 is x, r2 is y)
- if character > 127 then exit
- set charAddress to font + character × 16
- for row = 0 to 15
- set bits to readByte(charAddress + row)
- for bit = 0 to 7
- if test(bits >> bit, 0x1)
- then setPixel(x + bit, y + row)
- next
- next
- return r0 = 8, r1 = 16
-end function
-
-```
-If implemented directly, this is deliberately not very efficient. With things like drawing characters, efficiency is a top priority, as we will do it a lot. Let's explore some improvements that bring this closer to optimal assembly code. Firstly, we have a × 16, which by now you should spot is the same as a logical shift left by 4 places. Next we have a variable row, which is only ever added to charAddress and to y. Thus, we can eliminate it by increasing these variables instead. The only issue now is how to tell when we've finished. This is where the .align 4 comes in handy. We know that charAddress will start with the low nibble containing 0. This means we can see how far into the character data we are by checking that low nibble.
-
-Though we can eliminate the need for bits, we must introduce a new variable to do so, so it is best left in. The only other improvement that can be made is to remove the nested bits >> bit.
-
-```
-function drawCharacter(r0 is character, r1 is x, r2 is y)
- if character > 127 then exit
- set charAddress to font + character << 4
- loop
- set bits to readByte(charAddress)
- set bit to 8
- loop
- set bits to bits << 1
- set bit to bit - 1
- if test(bits, 0x100)
- then setPixel(x + bit, y)
- until bit = 0
- set y to y + 1
- set charAddress to charAddress + 1
- until charAddress AND 0b1111 = 0
- return r0 = 8, r1 = 16
-end function
-```
-
-Now we've got code that is much closer to assembly code, and is near optimal. Below is the assembly code version of the above.
-
-```
-.globl DrawCharacter
-DrawCharacter:
-cmp r0,#127
-movhi r0,#0
-movhi r1,#0
-movhi pc,lr
-
-push {r4,r5,r6,r7,r8,lr}
-x .req r4
-y .req r5
-charAddr .req r6
-mov x,r1
-mov y,r2
-ldr charAddr,=font
-add charAddr, r0,lsl #4
-
-lineLoop$:
-
- bits .req r7
- bit .req r8
- ldrb bits,[charAddr]
- mov bit,#8
-
- charPixelLoop$:
-
- subs bit,#1
- blt charPixelLoopEnd$
- lsl bits,#1
- tst bits,#0x100
- beq charPixelLoop$
-
- add r0,x,bit
- mov r1,y
- bl DrawPixel
-
- teq bit,#0
- bne charPixelLoop$
-
- charPixelLoopEnd$:
- .unreq bit
- .unreq bits
- add y,#1
- add charAddr,#1
- tst charAddr,#0b1111
- bne lineLoop$
-
-.unreq x
-.unreq y
-.unreq charAddr
-
-width .req r0
-height .req r1
-mov width,#8
-mov height,#16
-
-pop {r4,r5,r6,r7,r8,pc}
-.unreq width
-.unreq height
-```
-
-### 3 Strings
-
-Now that we can draw characters, we can draw text. We need to make a method that, for a given string, draws each character in turn, at incrementing positions. To be nice, we shall also implement new lines and tabs. It's decision time as far as null terminators are concerned, and if you want to make your operating system use them, feel free by changing the code below. To avoid the issue, I will have the length of the string passed as an argument to the DrawString function, along with the address of the string, and the x and y coordinates.
-
-```
-function drawString(r0 is string, r1 is length, r2 is x, r3 is y)
- set x0 to x
- for pos = 0 to length - 1
- set char to loadByte(string + pos)
- set (cwidth, cheight) to DrawCharacter(char, x, y)
- if char = '\n' then
- set x to x0
- set y to y + cheight
- otherwise if char = '\t' then
- set x1 to x
- until x1 > x0
- set x1 to x1 + 5 × cwidth
- loop
- set x to x1
- otherwise
- set x to x + cwidth
- end if
- next
-end function
-```
-
-Once again, this function isn't that close to assembly code. Feel free to try to implement it either directly or by simplifying it. I will give the simplification and then the assembly code below.
-
-Clearly the person who wrote this function wasn't being very efficient (me in case you were wondering). Once again we have a pos variable that just increments and is added to something else, which is completely unnecessary. We can remove it, and instead simultaneously decrement length until it is 0, saving the need for one register. The rest of the function is probably fine, except for that annoying multiplication by five. A key thing to do here would be to move the multiplication outside the loop; multiplication is slow even with bit shifts, and since we're always adding the same constant multiplied by 5, there is no need to recompute this. It can in fact be implemented in one operation using the argument shifting in assembly code, so I shall rephrase it like that.
-
-```
-function drawString(r0 is string, r1 is length, r2 is x, r3 is y)
- set x0 to x
- until length = 0
- set length to length - 1
- set char to loadByte(string)
- set (cwidth, cheight) to DrawCharacter(char, x, y)
- if char = '\n' then
- set x to x0
- set y to y + cheight
- otherwise if char = '\t' then
- set x1 to x
- set cwidth to cwidth + cwidth << 2
- until x1 > x0
- set x1 to x1 + cwidth
- loop
- set x to x1
- otherwise
- set x to x + cwidth
- end if
- set string to string + 1
- loop
-end function
-```
-
-In assembly code:
-
-```
-.globl DrawString
-DrawString:
-x .req r4
-y .req r5
-x0 .req r6
-string .req r7
-length .req r8
-char .req r9
-push {r4,r5,r6,r7,r8,r9,lr}
-
-mov string,r0
-mov x,r2
-mov x0,x
-mov y,r3
-mov length,r1
-
-stringLoop$:
- subs length,#1
- blt stringLoopEnd$
-
- ldrb char,[string]
- add string,#1
-
- mov r0,char
- mov r1,x
- mov r2,y
- bl DrawCharacter
- cwidth .req r0
- cheight .req r1
-
- teq char,#'\n'
- moveq x,x0
- addeq y,cheight
- beq stringLoop$
-
- teq char,#'\t'
- addne x,cwidth
- bne stringLoop$
-
- add cwidth, cwidth,lsl #2
- x1 .req r1
- mov x1,x0
-
- stringLoopTab$:
- add x1,cwidth
- cmp x,x1
- bge stringLoopTab$
- mov x,x1
- .unreq x1
- b stringLoop$
-stringLoopEnd$:
-.unreq cwidth
-.unreq cheight
-
-pop {r4,r5,r6,r7,r8,r9,pc}
-.unreq x
-.unreq y
-.unreq x0
-.unreq string
-.unreq length
-```
-
-```
-subs reg,#val subtracts val from the register reg and compares the result with 0.
-```
-
-This code makes clever use of a new operation, subs which subtracts one number from another, stores the result and then compares it with 0. In truth, all comparisons are implemented as a subtraction and then comparison with 0, but the result is normally discarded. This means that this operation is as fast as cmp.
-
-### 4 Your Wish is My Command Line
-
-Now that we can print strings, the challenge is to find an interesting one to draw. Normally in tutorials such as this, people just draw "Hello World!", but after all we've done so far, I feel that is a little patronising (feel free to do so if it helps). Instead we're going to draw our command line.
-
-A convention has been made for computers running ARM. When they boot, it is important they are given certain information about what they have available to them. Almost all processors have some way of ascertaining this information, and on ARM this is by data left at the address 0x100. The format of the data is as follows (a small, hypothetical example laid out as assembler data follows the list):
-
- 1. The data is broken down into a series of 'tags'.
- 2. There are nine types of tag: 'core', 'mem', 'videotext', 'ramdisk', 'initrd2', 'serial', 'revision', 'videolfb', 'cmdline'.
- 3. Each can appear at most once, and all of them except the 'core' tag are optional.
- 4. The tags are placed from 0x100 in order one after the other.
- 5. The end of the list of tags always contains 2 words which are 0.
- 6. Every tag's size in bytes is a multiple of 4.
- 7. Each tag starts with the size of that tag in words, including this number.
- 8. This is followed by a half word containing the tag's number. These are numbered from 1 in the order above ('core' is 1, 'cmdline' is 9).
- 9. This is followed by a half word containing 0x5441.
- 10. After this comes the data of the tag, which varies depending on the tag. The size of the data in words + 2 is always the same as the length mentioned above.
- 11. A 'core' tag is either 2 or 5 words in length. If it is 2, there is no data, if it is 5, it has 3 words.
- 12. A 'mem' tag is always 4 words in length. The data is the first address in a block of memory, and the length of that block.
- 13. A 'cmdline' tag contains a null terminated string which is the parameters of the kernel.
-
-
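-Laid out as assembler data following the description above, a purely hypothetical tag list could look like this (all of the values are invented for illustration):
-
-```
-.int 5                /* 'core' tag: 5 words long, including this word */
-.hword 1              /* tag number 1 ('core') */
-.hword 0x5441
-.int 0,0,0            /* the 3 words of core data */
-.int 4                /* 'mem' tag: 4 words long */
-.hword 2              /* tag number 2 ('mem') */
-.hword 0x5441
-.int 0                /* first address of the memory block */
-.int 0x10000000       /* length of the block */
-.int 5                /* 'cmdline' tag: 2 word header plus 3 words of string */
-.hword 9              /* tag number 9 ('cmdline') */
-.hword 0x5441
-.asciz "abc=1 def=2"  /* null terminated parameter string, exactly 12 bytes */
-.int 0,0              /* two 0 words mark the end of the list */
-```
-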
-```
-Almost all Operating Systems support the notion of programs having a 'command line'. The idea is to provide a common mechanism for choosing the desired behaviour of the program.
-```
-
-On the current version of the Raspberry Pi, only the 'core', 'mem' and 'cmdline' tags are present. You may find these useful later, and a more complete reference for these is on our Raspberry Pi reference page. The one we're interested in at the moment is the 'cmdline' tag, because it contains a string. We're going to write some code to search for the command line tag, and, if found, to print it out with each item on a new line. The command line is just a list of things that either the graphics processor or the user thought it might be nice for the Operating System to know. On the Raspberry Pi, this includes the MAC Address, serial number and screen resolution. The string itself is just a list of expressions such as 'key.subkey=value' separated by spaces.
-
-Let's start by finding the 'cmdline' tag. In a new file called 'tags.s' copy the following code.
-
-```
-.section .data
-tag_core: .int 0
-tag_mem: .int 0
-tag_videotext: .int 0
-tag_ramdisk: .int 0
-tag_initrd2: .int 0
-tag_serial: .int 0
-tag_revision: .int 0
-tag_videolfb: .int 0
-tag_cmdline: .int 0
-```
-
-Looking through the list of tags will be a slow operation, as it involves a lot of memory access. Therefore, we only want to have to do it once. This code creates some data which will store the memory address of the first tag of each of the types. Then, to find a tag the following pseudo code will suffice.
-
-```
-function FindTag(r0 is tag)
- if tag > 9 or tag = 0 then return 0
- set tagAddr to loadWord(tag_core + (tag - 1) × 4)
- if not tagAddr = 0 then return tagAddr
- if not readWord(tag_core) = 0 then return 0
- set tagAddr to 0x100
- loop forever
- set tagIndex to readHalfWord(tagAddr + 4)
- if tagIndex = 0 then return FindTag(tag)
- if readWord(tag_core+(tagIndex-1)×4) = 0
- then storeWord(tagAddr, tag_core+(tagIndex-1)×4)
- set tagAddr to tagAddr + loadWord(tagAddr) × 4
- end loop
-end function
-```
-This code is already quite well optimised and close to assembly. It is optimistic in that the first thing it tries is loading the tag directly, as all but the first time this should be the case. If that fails, it checks if the core tag has an address. Since there must always be a core tag, the only reason it would not have an address in our directory is that the directory has not been filled in yet. If it does have an address, then the tag we were looking for simply doesn't exist, so we return 0. If it doesn't, we need to find the addresses of all the tags. It does this by reading the number of each tag. If that number is zero, it must mean we are at the end of the list, and that we've now filled in all the tags in our directory. Therefore if we run our function again, it will now be able to produce an answer. If the tag number is not zero, we check to see if this tag type already has an address. If not, we store the address of this tag in our directory. We then add the length of this tag in bytes to the tag address to find the next tag.
-
-Have a go at implementing this code in assembly. You will need to simplify it. If you get stuck, my answer is below. Don't forget the .section .text!
-
-```
-.section .text
-.globl FindTag
-FindTag:
-tag .req r0
-tagList .req r1
-tagAddr .req r2
-
-sub tag,#1
-cmp tag,#8
-movhi tag,#0
-movhi pc,lr
-
-ldr tagList,=tag_core
-tagReturn$:
-add tagAddr,tagList, tag,lsl #2
-ldr tagAddr,[tagAddr]
-
-teq tagAddr,#0
-movne r0,tagAddr
-movne pc,lr
-
-ldr tagAddr,[tagList]
-teq tagAddr,#0
-movne r0,#0
-movne pc,lr
-
-mov tagAddr,#0x100
-push {r4}
-tagIndex .req r3
-oldAddr .req r4
-tagLoop$:
-ldrh tagIndex,[tagAddr,#4]
-subs tagIndex,#1
-poplt {r4}
-blt tagReturn$
-
-add tagIndex,tagList, tagIndex,lsl #2
-ldr oldAddr,[tagIndex]
-teq oldAddr,#0
-.unreq oldAddr
-streq tagAddr,[tagIndex]
-
-ldr tagIndex,[tagAddr]
-add tagAddr, tagIndex,lsl #2
-b tagLoop$
-
-.unreq tag
-.unreq tagList
-.unreq tagAddr
-.unreq tagIndex
-```
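-
-Before moving on, here is a hypothetical usage sketch (not part of the lesson): looking up the 'mem' tag, tag number 2, and loading its two data words, which the list above describes as the first address of a memory block and the length of that block.
-
-```
-mov r0,#2
-bl FindTag           /* r0 = address of the 'mem' tag, or 0 if it wasn't found */
-teq r0,#0
-beq noMemTag$
-ldr r1,[r0,#8]       /* first data word, just past the 8 byte tag header */
-ldr r2,[r0,#12]      /* second data word */
-noMemTag$:
-```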
-
-### 5 Hello World
-
-Now that we have everything we need, we can draw our first string. In 'main.s' delete everything after bl SetGraphicsAddress, and replace it with the following:
-
-```
-mov r0,#9            /* the 'cmdline' tag is tag number 9 */
-bl FindTag           /* r0 = address of the cmdline tag */
-ldr r1,[r0]          /* load the tag's size in words */
-lsl r1,#2            /* convert it to bytes */
-sub r1,#8            /* remove the 8 byte tag header, leaving the string length */
-add r0,#8            /* skip the header so r0 points at the string itself */
-mov r2,#0
-mov r3,#0            /* draw at x = 0, y = 0 */
-bl DrawString
-loop$:
-b loop$
-```
-
-This code simply uses our FindTag method to find the 9th tag (cmdline), calculates the length of its string, and then passes the command line and the length to the DrawString method, telling it to draw the string at (0,0). Now test this on the Raspberry Pi. You should see a line of text on the screen. If not please see our troubleshooting page.
-
-Once it works, congratulations you've now got the ability to draw text. But there is still room for improvement. What if we wanted to write out a number, or a section of the memory or manipulate our command line? In [Lesson 9: Screen04][2], we will look at manipulating text and displaying useful numbers and information.
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen02.html
-[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html
diff --git a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md b/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md
deleted file mode 100644
index 1ca2a16609..0000000000
--- a/sources/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md
+++ /dev/null
@@ -1,540 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 9 Screen04)
-[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html)
-[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
-
-Computer Laboratory – Raspberry Pi: Lesson 9 Screen04
-======
-
-The Screen04 lesson builds on Screen03 by teaching how to manipulate text. It is assumed you have the code for the [Lesson 8: Screen03][1] operating system as a basis.
-
-### 1 String Manipulation
-
-```
-Variadic functions look much less intuitive in assembly code. Nevertheless, they are useful and powerful concepts.
-```
-
-Being able to draw text is lovely, but unfortunately at the moment you can only draw strings which are already prepared. This is fine for displaying something like the command line, but ideally we would like to be able to display any text we desire. As per usual, if we put the effort in and make an excellent function that does all the string manipulation we could ever want, we get much easier code later on in return. One such complicated function in C programming is sprintf. This function generates a string based on a description given as another string and additional arguments. What is interesting about this function is that it is variadic. This means that it takes a variable number of parameters. The number of parameters depends on the exact format string, and so cannot be determined in advance.
-
-The full function has many options, and I list a few here. I've highlighted the ones which we will implement in this tutorial, though you can try to implement more.
-
-The function works by reading the format string, and then interpreting it using the table below. Once an argument is used, it is not considered again. The return value of the function is the number of characters written. If the method fails, a negative number is returned.
-
-Table 1.1 sprintf formatting rules
-| Sequence | Meaning |
-| ---------------------- | ---------------------- |
-| Any character except % | Copies the character to the output. |
-| %% | Writes a % character to the output. |
-| %c | Writes the next argument as a character. |
-| %d or %i | Writes the next argument as a base 10 signed integer. |
-| %e | Writes the next argument in scientific notation using eN to mean ×10^N. |
-| %E | Writes the next argument in scientific notation using EN to mean ×10^N. |
-| %f | Writes the next argument as a decimal IEEE 754 floating point number. |
-| %g | Same as the shorter of %e and %f. |
-| %G | Same as the shorter of %E and %f. |
-| %o | Writes the next argument as a base 8 unsigned integer. |
-| %s | Writes the next argument as if it were a pointer to a null terminated string. |
-| %u | Writes the next argument as a base 10 unsigned integer. |
-| %x | Writes the next argument as a base 16 unsigned integer, with lowercase a,b,c,d,e and f. |
-| %X | Writes the next argument as a base 16 unsigned integer, with uppercase A,B,C,D,E and F. |
-| %p | Writes the next argument as a pointer address. |
-| %n | Writes nothing. Copies instead the number of characters written so far to the location addressed by the next argument. |
-
-Further to the above, many additional tweaks exist to the sequences, such as specifying minimum length, signs, etc. More information can be found at [sprintf - C++ Reference][2].
-
-Here are a few examples of calls to the method and their results to illustrate its use.
-
-Table 1.2 sprintf example calls
-| Format String | Arguments | Result |
-| ---------------------- | ---------------------- | ---------------------- |
-| "%d" | 13 | "13" |
-| "+%d degrees" | 12 | "+12 degrees" |
-| "+%x degrees" | 24 | "+1c degrees" |
-| "'%c' = 0%o" | 65, 65 | "'A' = 0101" |
-| "%d * %d%% = %d" | 200, 40, 80 | "200 * 40% = 80" |
-| "+%d degrees" | -5 | "+-5 degrees" |
-| "+%u degrees" | -5 | "+4294967291 degrees" |
-
-Hopefully you can already begin to see the usefulness of the function. It does take a fair amount of work to program, but our reward is a very general function we can use for all sorts of purposes.
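-
-If you have a C compiler to hand, the rows of Table 1.2 can be reproduced with the real sprintf to confirm the expected results. This little test program is only for experimenting on your PC; it is not part of the operating system code:
-
-```
-#include <stdio.h>
-
-int main(void)
-{
-    char buf[64];
-
-    /* Each call mirrors a row of Table 1.2. */
-    sprintf(buf, "%d", 13);                      puts(buf);  /* 13                  */
-    sprintf(buf, "+%d degrees", 12);             puts(buf);  /* +12 degrees         */
-    sprintf(buf, "+%x degrees", 24);             puts(buf);  /* +18 degrees         */
-    sprintf(buf, "'%c' = 0%o", 65, 65);          puts(buf);  /* 'A' = 0101          */
-    sprintf(buf, "%d * %d%% = %d", 200, 40, 80); puts(buf);  /* 200 * 40% = 80      */
-    sprintf(buf, "+%d degrees", -5);             puts(buf);  /* +-5 degrees         */
-    /* The cast keeps the compiler happy; -5 wraps to 4294967291 as a 32-bit unsigned value. */
-    sprintf(buf, "+%u degrees", (unsigned)-5);   puts(buf);  /* +4294967291 degrees */
-
-    return 0;
-}
-```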
-
-### 2 Division
-
-```
-Division is the slowest and most complicated of the basic mathematical operators. It is not implemented directly in ARM assembly code because it takes so long to deduce the answer, and so isn't a 'simple' operation.
-```
-
-While this function does look very powerful, it also looks very complicated. The easiest way to deal with its many cases is probably to write functions to deal with some common tasks it has. What would be useful would be a function to generate the string for a signed and an unsigned number in any base. So, how can we go about doing that? Try to devise an algorithm quickly before reading on.
-
-The easiest way is probably the exact way I mentioned in [Lesson 1: OK01][3], which is the division remainder method. The idea is the following:
-
- 1. Divide the current value by the base you're working in.
- 2. Store the remainder.
- 3. If the new value is not 0, go to 1.
- 4. Reverse the order of the remainders. This is the answer.
-
-
-
-For example:
-
-Table 2.1 Example base 2 conversion
-| Value | New Value | Remainder |
-| ----- | --------- | --------- |
-| 137 | 68 | 1 |
-| 68 | 34 | 0 |
-| 34 | 17 | 0 |
-| 17 | 8 | 1 |
-| 8 | 4 | 0 |
-| 4 | 2 | 0 |
-| 2 | 1 | 0 |
-| 1 | 0 | 1 |
-
-So the answer is 10001001 in base 2.
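-
-The same procedure is easy to try out in C before worrying about assembly. The sketch below is purely illustrative: it converts 137 to base 2 using the four steps above, storing the remainders and then reading them out in reverse. Note that it leans on the division and modulo operators, which is exactly the problem we deal with next.
-
-```
-#include <stdio.h>
-
-/* Division remainder method: fills digits[] with the base-`base` digits of
- * value (least significant first) and returns how many digits were produced. */
-static int ToDigits(unsigned value, unsigned base, unsigned digits[32])
-{
-    int length = 0;
-    do {
-        digits[length++] = value % base;   /* step 2: store the remainder          */
-        value /= base;                     /* step 1: divide by the base           */
-    } while (value != 0);                  /* step 3: repeat while non-zero        */
-    return length;                         /* step 4: caller reads them in reverse */
-}
-
-int main(void)
-{
-    unsigned digits[32];
-    int n = ToDigits(137, 2, digits);
-    for (int i = n - 1; i >= 0; i--)       /* most significant digit first */
-        printf("%u", digits[i]);
-    printf("\n");                          /* prints 10001001              */
-    return 0;
-}
-```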
-
-The unfortunate part about this procedure is that it unavoidably uses division. Therefore, we must first contemplate division in binary.
-
-For a refresher on long division, see the box below.
-
-```
-Let's suppose we wish to divide 4135 by 17.
-
- 0243 r 4
-17)4135
- 0 0 × 17 = 0000
- 4135 4135 - 0 = 4135
- 34 200 × 17 = 3400
- 735 4135 - 3400 = 735
- 68 40 × 17 = 680
- 55 735 - 680 = 55
- 51 3 × 17 = 51
- 4 55 - 51 = 4
- Answer: 243 remainder 4
-
-First of all we would look at the top digit of the dividend. We see that the largest multiple of the divisor which is less than or equal to it is 0. We output a 0 to the result.
-
-Next we look at the second to top digit of the dividend and all higher digits. We see that the largest multiple of the divisor which is less than or equal to this is 34. We output a 2 and subtract 3400.
-
-Next we look at the third digit of the dividend and all higher digits. The largest multiple of the divisor that is less than or equal to this is 68. We output a 4 and subtract 680.
-
-Finally we look at all remaining digits. We see that the largest multiple of the divisor that is less than or equal to the remaining digits is 51. We output a 3 and subtract 51. The result of the subtraction is our remainder.
-```
-
-To implement division in assembly code, we will implement binary long division. We do this because the numbers are stored in binary, which gives us easy access to the all-important bit shift operations, and because division in binary is simpler than in any higher base due to the much lower number of cases.
-
-```
- 1011 r 1
-1010)1101111
- 1010
- 11111
- 1010
- 1011
- 1010
- 1
-This example shows how binary long division works. You line the divisor up with the top bits of the dividend, shifting it as far as possible without exceeding the dividend, output a 1 at that position and subtract the shifted divisor. Whatever remains is the remainder. In this case we show 1101111 ÷ 1010 = 1011 remainder 1 in binary. In decimal, 111 ÷ 10 = 11 remainder 1.
-```
-
-
-Try to implement long division yourself now. You should write a function, DivideU32 which divides r0 by r1, returning the result in r0, and the remainder in r1. Below, we will go through a very efficient implementation.
-
-```
-function DivideU32(r0 is dividend, r1 is divisor)
- set shift to 31
- set result to 0
- while shift ≥ 0
- set result to result << 1
- if dividend ≥ (divisor << shift) then
- set dividend to dividend - (divisor << shift)
- set result to result + 1
- end if
- set shift to shift - 1
- loop
- return (result, dividend)
-end function
-```
-
-This code does achieve what we need, but would not work as assembly code. Our problem comes from the fact that our registers only hold 32 bits, and so the result of divisor << shift may not fit in a register (we call this overflow). This is a real problem. Did your solution have overflow?
-
-Fortunately, an instruction exists called clz or count leading zeros, which counts the number of zeros in the binary representation of a number starting at the top bit. Conveniently, this is exactly the number of times we can shift the register left before overflow occurs. Another optimisation you may spot is that we compute divisor << shift twice each loop. We could improve upon this by shifting the divisor at the beginning, then shifting it down at the end of each loop to avoid any need to shift it elsewhere.
-
-Let's have a look at the assembly code to make further improvements.
-
-```
-.globl DivideU32
-DivideU32:
-result .req r0
-remainder .req r1
-shift .req r2
-current .req r3
-
-clz shift,r1
-lsl current,r1,shift
-mov remainder,r0
-mov result,#0
-
-divideU32Loop$:
- cmp shift,#0
- blt divideU32Return$
- lsl result,#1
- cmp remainder,current
-
- addge result,result,#1
- subge remainder,current
- sub shift,#1
- lsr current,#1
- b divideU32Loop$
-divideU32Return$:
-.unreq current
-mov pc,lr
-
-.unreq result
-.unreq remainder
-.unreq shift
-```
-
-```
-clz dest,src counts the number of zeros from the top bit down to the first one in register src, and stores that count in register dest
-```
-
-You may, quite rightly, think that this looks quite efficient. It is pretty good, but division is a very expensive operation, and one we may wish to do quite often, so it would be good if we could improve the speed in any way. When looking to optimise code with a loop in it, it is always important to consider how many times the loop must run. In this case, the loop will run a maximum of 31 times for an input of 1. Without making special cases, this could often be improved easily. For example when dividing 1 by 1, no shift is required, yet we shift the divisor to each of the positions above it. This could be improved by simply using the new clz command on the dividend and subtracting this from the shift. In the case of 1 ÷ 1, this means shift would be set to 0, rightly indicating no shift is required. If this causes the shift to be negative, the divisor is bigger than the dividend and so we know the result is 0 remainder the dividend. Another quick check we could make is if the current value is ever 0, then we have a perfect division and can stop looping.
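-
-For reference, here is roughly the same optimised algorithm written out in C. This is only a sketch to make the register-level version easier to follow: the GCC/Clang builtin __builtin_clz stands in for the clz instruction, and the explicit checks for a zero dividend or divisor are an addition of the sketch (the builtin is undefined for 0, whereas the assembly simply assumes sensible inputs).
-
-```
-#include <stdint.h>
-
-/* Returns the quotient of dividend / divisor and stores the remainder in *rem. */
-uint32_t DivideU32(uint32_t dividend, uint32_t divisor, uint32_t *rem)
-{
-    uint32_t result = 0;
-    uint32_t remainder = dividend;
-
-    if (dividend == 0 || divisor == 0) {   /* __builtin_clz is undefined for 0   */
-        *rem = dividend;
-        return 0;
-    }
-
-    int shift = __builtin_clz(divisor) - __builtin_clz(dividend);
-    if (shift < 0) {                       /* divisor > dividend: result is 0    */
-        *rem = dividend;
-        return 0;
-    }
-
-    uint32_t current = divisor << shift;   /* divisor shifted up to the dividend */
-
-    while (shift >= 0) {
-        if (remainder >= current) {
-            result += 1;
-            remainder -= current;
-            if (remainder == 0) {          /* perfect division: stop early       */
-                result <<= shift;
-                break;
-            }
-        }
-        shift -= 1;
-        if (shift >= 0) {
-            current >>= 1;                 /* shift the divisor down one place   */
-            result <<= 1;                  /* make room for the next result bit  */
-        }
-    }
-
-    *rem = remainder;
-    return result;
-}
-```
-
-The improved assembly version is below.
-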
-
-```
-.globl DivideU32
-DivideU32:
-result .req r0
-remainder .req r1
-shift .req r2
-current .req r3
-
-clz shift,r1
-clz r3,r0
-subs shift,r3
-lsl current,r1,shift
-mov remainder,r0
-mov result,#0
-blt divideU32Return$
-
-divideU32Loop$:
- cmp remainder,current
- blt divideU32LoopContinue$
-
- add result,result,#1
- subs remainder,current
- lsleq result,shift
- beq divideU32Return$
-divideU32LoopContinue$:
- subs shift,#1
- lsrge current,#1
- lslge result,#1
- bge divideU32Loop$
-
-divideU32Return$:
-.unreq current
-mov pc,lr
-
-.unreq result
-.unreq remainder
-.unreq shift
-```
-
-Copy the code above to a file called 'maths.s'.
-
-### 3 Number Strings
-
-Now that we can do division, let's have another look at implementing number to string conversion. The following is pseudo code to convert numbers from registers into strings in up to base 36. By convention, a % b means the remainder of dividing a by b.
-
-```
-function SignedString(r0 is value, r1 is dest, r2 is base)
- if value ≥ 0
- then return UnsignedString(value, dest, base)
- otherwise
- if dest > 0 then
- setByte(dest, '-')
- set dest to dest + 1
- end if
- return UnsignedString(-value, dest, base) + 1
- end if
-end function
-
-function UnsignedString(r0 is value, r1 is dest, r2 is base)
- set length to 0
- do
-
- set (value, rem) to DivideU32(value, base)
- if rem < 10
- then set rem to rem + '0'
- otherwise set rem to rem - 10 + 'a'
- if dest > 0
- then setByte(dest + length, rem)
- set length to length + 1
-
- while value > 0
- if dest > 0
- then ReverseString(dest, length)
- return length
-end function
-
-function ReverseString(r0 is string, r1 is length)
- set start to string
- set end to string + length - 1
- while end > start
- set temp1 to readByte(start)
- set temp2 to readByte(end)
- setByte(start, temp2)
- setByte(end, temp1)
- set start to start + 1
- set end to end - 1
- end while
-end function
-```
-
-In a file called 'text.s' implement the above. Remember that if you get stuck, a full solution can be found on the downloads page.
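-
-If you would like to check your logic before writing the assembly, the pseudo code translates very directly into C. The version below is only a sketch for testing on your PC (it uses C's / and % in place of DivideU32, and ordinary pointers in place of setByte); it is not the tutorial's solution.
-
-```
-#include <stdio.h>
-#include <stdint.h>
-
-void ReverseString(char *string, uint32_t length)
-{
-    char *start = string;
-    char *end = string + length - 1;
-    while (end > start) {                  /* swap characters from both ends */
-        char temp = *start;
-        *start = *end;
-        *end = temp;
-        start++;
-        end--;
-    }
-}
-
-uint32_t UnsignedString(uint32_t value, char *dest, uint32_t base)
-{
-    uint32_t length = 0;
-    do {
-        uint32_t rem = value % base;       /* DivideU32 gives quotient and remainder */
-        value /= base;
-        char digit = (rem < 10) ? (char)('0' + rem) : (char)('a' + rem - 10);
-        if (dest)
-            dest[length] = digit;          /* dest = 0 means "length only"   */
-        length++;
-    } while (value > 0);
-    if (dest)
-        ReverseString(dest, length);
-    return length;
-}
-
-uint32_t SignedString(int32_t value, char *dest, uint32_t base)
-{
-    if (value >= 0)
-        return UnsignedString((uint32_t)value, dest, base);
-    if (dest) {
-        *dest = '-';                       /* write the sign, then the magnitude */
-        dest++;
-    }
-    /* Negate in unsigned arithmetic so INT32_MIN does not overflow. */
-    return UnsignedString(0u - (uint32_t)value, dest, base) + 1;
-}
-
-int main(void)
-{
-    char buf[36];
-    uint32_t len = SignedString(-137, buf, 2);
-    buf[len] = '\0';
-    printf("%s (%u characters)\n", buf, (unsigned)len);   /* -10001001 (9 characters) */
-    return 0;
-}
-```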
-
-### 4 Format Strings
-
-Let's get back to our string formatting method. Since we're programming our own operating system, we can add or change formatting rules as we please. We may find it useful to add a %b operation that outputs a number in binary, and if you're not using null terminated strings, you may wish to alter the behaviour of %s to take the length of the string from another argument, or from a length prefix. I will use a null terminator in the example below.
-
-One of the main obstacles to implementing this function is that the number of arguments varies. According to the ABI, additional arguments are pushed onto the stack in reverse order before calling the method. So, for example, if we wish to call our method with 8 parameters: 1, 2, 3, 4, 5, 6, 7 and 8, we would do the following:
-
- 1. Set r0 = 5, r1 = 6, r2 = 7, r3 = 8
- 2. Push {r0,r1,r2,r3}
- 3. Set r0 = 1, r1 = 2, r2 = 3, r3 = 4
- 4. Call the function
- 5. Add sp,#4*4
-
-
-
-Now we must decide what arguments our function actually needs. In my case, I used the format string address in r0, the length of the format string in r1, the destination string address in r2, followed by the list of arguments required, starting in r3 and continuing on the stack as above. If you wish to use a null terminated format string, the parameter in r1 can be removed. If you wish to have a maximum buffer length, you could store this in r3. As an additional modification, I think it is useful to alter the function so that if the destination string address is 0, no string is written, but an accurate length is still returned, so that the length of a formatted string can be determined in advance.
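-
-Purely as a mental model, the interface just described is nothing more than an ordinary variadic function. The prototype below is illustrative only (the names are made up for this example), with the compiler performing the register and stack placement from the steps above automatically:
-
-```
-/* Hypothetical C view of the interface described above:
- * format -> r0, formatLength -> r1, dest -> r2, first extra argument -> r3,
- * any further arguments -> pushed onto the stack by the caller. */
-unsigned FormatString(const char *format, unsigned formatLength, char *dest, ...);
-
-void Example(void)
-{
-    char buffer[64];
-
-    /* 512 travels in r3; 1024 would be the first stack argument. */
-    unsigned length = FormatString("%u and %u", 9, buffer, 512u, 1024u);
-    (void)length;   /* with dest set to 0 the call would only measure the length */
-}
-```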
-
-If you wish to attempt the implementation on your own, try it now. If not, I will first construct the pseudo code for the method, then give the assembly code implementation.
-
-```
-function StringFormat(r0 is format, r1 is formatLength, r2 is dest, ...)
- set index to 0
- set length to 0
- while index < formatLength
- if readByte(format + index) = '%' then
- set index to index + 1
- if readByte(format + index) = '%' then
- if dest > 0
- then setByte(dest + length, '%')
- set length to length + 1
- otherwise if readByte(format + index) = 'c' then
- if dest > 0
- then setByte(dest + length, nextArg)
- set length to length + 1
- otherwise if readByte(format + index) = 'd' or 'i' then
- set length to length + SignedString(nextArg, dest, 10)
- otherwise if readByte(format + index) = 'o' then
- set length to length + UnsignedString(nextArg, dest, 8)
- otherwise if readByte(format + index) = 'u' then
- set length to length + UnsignedString(nextArg, dest, 10)
- otherwise if readByte(format + index) = 'b' then
- set length to length + UnsignedString(nextArg, dest, 2)
- otherwise if readByte(format + index) = 'x' then
- set length to length + UnsignedString(nextArg, dest, 16)
- otherwise if readByte(format + index) = 's' then
- set str to nextArg
- while getByte(str) != '\0'
- if dest > 0
- then setByte(dest + length, getByte(str))
- set length to length + 1
- set str to str + 1
- loop
- otherwise if readByte(format + index) = 'n' then
- setWord(nextArg, length)
- end if
- otherwise
- if dest > 0
- then setByte(dest + length, readByte(format + index))
- set length to length + 1
- end if
- set index to index + 1
- loop
- return length
-end function
-```
-
-Although this function is massive, it is quite straightforward. Most of the code goes into checking all the various conditions; the code for each one is simple. Further, all the various unsigned integer cases differ only in the base, and so can share one piece of assembly. This is given below.
-
-```
-.globl FormatString
-FormatString:
-format .req r4
-formatLength .req r5
-dest .req r6
-nextArg .req r7
-argList .req r8
-length .req r9
-
-push {r4,r5,r6,r7,r8,r9,lr}
-mov format,r0
-mov formatLength,r1
-mov dest,r2
-mov nextArg,r3
-add argList,sp,#7*4
-mov length,#0
-
-formatLoop$:
- subs formatLength,#1
- movlt r0,length
- poplt {r4,r5,r6,r7,r8,r9,pc}
-
- ldrb r0,[format]
- add format,#1
- teq r0,#'%'
- beq formatArg$
-
-formatChar$:
- teq dest,#0
- strneb r0,[dest]
- addne dest,#1
- add length,#1
- b formatLoop$
-
-formatArg$:
- subs formatLength,#1
- movlt r0,length
- poplt {r4,r5,r6,r7,r8,r9,pc}
-
- ldrb r0,[format]
- add format,#1
- teq r0,#'%'
- beq formatChar$
-
- teq r0,#'c'
- moveq r0,nextArg
- ldreq nextArg,[argList]
- addeq argList,#4
- beq formatChar$
-
- teq r0,#'s'
- beq formatString$
-
- teq r0,#'d'
- beq formatSigned$
-
- teq r0,#'u'
- teqne r0,#'x'
- teqne r0,#'b'
- teqne r0,#'o'
- beq formatUnsigned$
-
- b formatLoop$
-
-formatString$:
- ldrb r0,[nextArg]
- teq r0,#0x0
- ldreq nextArg,[argList]
- addeq argList,#4
- beq formatLoop$
- add length,#1
- teq dest,#0
- strneb r0,[dest]
- addne dest,#1
- add nextArg,#1
- b formatString$
-
-formatSigned$:
- mov r0,nextArg
- ldr nextArg,[argList]
- add argList,#4
- mov r1,dest
- mov r2,#10
- bl SignedString
- teq dest,#0
- addne dest,r0
- add length,r0
- b formatLoop$
-
-formatUnsigned$:
- teq r0,#'u'
- moveq r2,#10
- teq r0,#'x'
- moveq r2,#16
- teq r0,#'b'
- moveq r2,#2
- teq r0,#'o'
- moveq r2,#8
-
- mov r0,nextArg
- ldr nextArg,[argList]
- add argList,#4
- mov r1,dest
- bl UnsignedString
- teq dest,#0
- addne dest,r0
- add length,r0
- b formatLoop$
-```
-
-### 5 Convert OS
-
-Feel free to try using this method however you wish. As an example, here is the code to generate a conversion chart from base 10 to binary to hexadecimal to octal and to ASCII.
-
-Delete all code after bl SetGraphicsAddress in 'main.s' and replace it with the following:
-
-```
-mov r4,#0
-loop$:
-ldr r0,=format
-mov r1,#formatEnd-format
-ldr r2,=formatEnd
-lsr r3,r4,#4
-push {r3}
-push {r3}
-push {r3}
-push {r3}
-bl FormatString
-add sp,#16
-
-mov r1,r0
-ldr r0,=formatEnd
-mov r2,#0
-mov r3,r4
-
-cmp r3,#768-16
-subhi r3,#768
-addhi r2,#256
-cmp r3,#768-16
-subhi r3,#768
-addhi r2,#256
-cmp r3,#768-16
-subhi r3,#768
-addhi r2,#256
-
-bl DrawString
-
-add r4,#16
-b loop$
-
-.section .data
-format:
-.ascii "%d=0b%b=0x%x=0%o='%c'"
-formatEnd:
-```
-
-Can you work out what will happen before testing? Particularly what happens for r3 ≥ 128? Try it on the Raspberry Pi to see if you're right. If it doesn't work, please see our troubleshooting page.
-
-When it does work, congratulations: you've completed the Screen04 tutorial and reached the end of the screen series! We've learned about pixels and frame buffers, and how these apply to the Raspberry Pi. We've learned how to draw simple lines, and also how to draw characters, as well as the invaluable skill of formatting numbers into text. You now have everything you would need to make graphical output in an operating system. Can you make some more drawing methods? What about 3D graphics? Can you implement a 24-bit frame buffer? What about reading the size of the frame buffer in from the command line?
-
-The next series is the [Input][4] series, which teaches how to use the keyboard and mouse to really get towards a traditional console computer.
-
---------------------------------------------------------------------------------
-
-via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html
-
-作者:[Alex Chadwick][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.cl.cam.ac.uk
-[b]: https://github.com/lujun9972
-[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html
-[2]: http://www.cplusplus.com/reference/clibrary/cstdio/sprintf/
-[3]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/ok01.html
-[4]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
diff --git a/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md b/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md
deleted file mode 100644
index 8362f15d9b..0000000000
--- a/sources/tech/20160922 Annoying Experiences Every Linux Gamer Never Wanted.md
+++ /dev/null
@@ -1,159 +0,0 @@
-Annoying Experiences Every Linux Gamer Never Wanted!
-============================================================
-
-
- [][10]
-
-[Gaming on Linux][12] has come a long way. There are dedicated [Linux gaming distributions][13] now. But this doesn’t mean that the gaming experience on Linux is as smooth as on Windows.
-
-What are the obstacles that should be thought about to ensure that we enjoy games as much as Windows users do?
-
-[Wine][14], [PlayOnLinux][15] and other similar tools are not always able to play every popular Windows game. In this article, I would like to discuss various factors that must be dealt with in order to have the best possible Linux gaming experience.
-
-### #1 SteamOS is Open Source, Steam for Linux is NOT
-
-As stated on the [SteamOS page][16], even though SteamOS is open source, Steam for Linux continues to be proprietary. Had it also been open source, the amount of support from the open source community would have been tremendous! Since it is not, [the birth of Project Ascension was inevitable][17]:
-
-[video](https://youtu.be/07UiS5iAknA)
-
-Project Ascension is an open source game launcher designed to launch games that have been bought and downloaded from anywhere – they can be Steam games, [Origin games][18], Uplay games, games downloaded directly from game developer websites or from DVD/CD-ROMs.
-
-Here is how it all began: [Sharing The Idea][19] resulted in a very interesting discussion, with readers from all over the gaming community pitching in their own opinions and suggestions.
-
-### #2 Performance compared to Windows
-
-Getting Windows games to run on Linux is not always an easy task. But thanks to a feature called [CSMT][20] (command stream multi-threading), PlayOnLinux is now better equipped to deal with these performance issues, though it still has a long way to go to match Windows-level results.
-
-Native Linux support for games has not been so good for past releases.
-
-Last year, it was reported that SteamOS performed [significantly worse][21] than Windows. Tomb Raider was released on SteamOS/Steam for Linux last year. However, benchmark results were [not on par][22] with performance on Windows.
-
-[video](https://youtu.be/nkWUBRacBNE)
-
-This was quite obviously due to the fact that the game had been developed with [DirectX][23] in mind and not [OpenGL][24].
-
-Tomb Raider is the [first Linux game that uses TressFX][25]. This video includes TressFX comparisons:
-
-[video](https://youtu.be/-IeY5ZS-LlA)
-
-Here is another interesting comparison which shows Wine+CSMT performing much better than the native Linux version itself on Steam! This is the power of Open Source!
-
-
-[video](https://youtu.be/sCJkC6oJ08A)
-
-TressFX has been turned off in this case to avoid FPS loss.
-
-Here is another Linux vs Windows comparison for the recently released “[Life is Strange][27]” on Linux:
-
-[video](https://youtu.be/Vlflu-pIgIY)
-
-It’s good to know that [_Steam for Linux_][28] has begun to show improved performance for this new Linux game.
-
-Before launching any game for Linux, developers should consider optimizing it, especially if it’s a DirectX game that requires OpenGL translation. We really do hope that [Deus Ex: Mankind Divided on Linux][29] gets benchmarked well upon release. As it’s a DirectX game, we hope it’s being ported well for Linux. Here’s [what the Executive Game Director had to say][30].
-
-### #3 Proprietary NVIDIA Drivers
-
-[AMD’s support for Open Source][31] is definitely commendable when compared to [NVIDIA][32]. Though [AMD][33] driver support is [pretty good on Linux][34] now due to its better open source driver, NVIDIA graphics card owners will still have to use the proprietary NVIDIA drivers because of the limited capabilities of Nouveau, the open-source version of NVIDIA’s graphics driver.
-
-In the past, the legendary Linus Torvalds also shared his thoughts on NVIDIA’s Linux support, calling it totally unacceptable:
-
-[video](https://youtu.be/O0r6Pr_mdio)
-
-You can watch the complete talk [here][35]. Although NVIDIA responded with [a commitment for better Linux support][36], the open source graphics driver continues to be as weak as before.
-
-### #4 Need for Uplay and Origin DRM support on Linux
-
-[video](https://youtu.be/rc96NFwyxWU)
-
-The above video describes how to install the [Uplay][37] DRM on Linux. The uploader also suggests that using Wine as the main way to run games and applications on Linux is not recommended; native applications should be preferred instead.
-
-The following video is a guide about installing the [Origin][38] DRM on Linux:
-
-[video](https://youtu.be/ga2lNM72-Kw)
-
-Digital Rights Management (DRM) software adds another layer to game execution, and hence adds to the already challenging task of making a Windows game run well on Linux. So in addition to making the game run, Wine has to take care of running the DRM software such as Uplay or Origin as well. It would have been great if, like Steam, Linux could have got its own native versions of Uplay and Origin.
-
-
-### #5 DirectX 11 support for Linux
-
-Even though we have tools on Linux to run Windows applications, every game comes with its own set of tweak requirements for it to be playable on Linux. Though there was an announcement about [DirectX 11 support for Linux][40] last year via Code Weavers, it’s still a long way to go to make playing newly launched titles on Linux a possibility.
-
-Currently, you can [buy Crossover from Codeweavers][41] to get the best DirectX 11 support available. This [thread][42] on the Arch Linux forums clearly shows how much more effort is required to make this dream a possibility. Here is an interesting [find][43] from a [Reddit thread][44], which mentions Wine getting [DirectX 11 patches from Codeweavers][45]. Now that’s definitely some good news.
-
-### #6 100% of Steam games are not available for Linux
-
-This is an important point to ponder as Linux gamers continue to miss out on every major game release since most of them land up on Windows. Here is a guide to [install Steam for Windows on Linux][46].
-
-### #7 Better Support from video game publishers for OpenGL
-
-Currently, developers and publishers focus primarily on DirectX for video game development rather than OpenGL. Now as Steam is officially here for Linux, developers should start considering development in OpenGL as well.
-
-[Direct3D][47] is made solely for the Windows platform. The OpenGL API is an open standard, and implementations exist for not only Windows but a wide variety of other platforms.
-
-Though quite an old article, [this valuable resource][48] shares a lot of thoughtful information on the realities of OpenGL and DirectX. The points made are very sensible and enlighten the reader about the facts, based on actual chronological events.
-
-Publishers who are launching their titles on Linux should definitely not leave out the fact that developing the game on OpenGL would be a much better deal than translating it from DirectX to OpenGL. If conversion has to be done, the translations must be well optimized and carefully looked into. There might be a delay in releasing the games but still it would definitely be worth the wait.
-
-Have more annoyances to share? Do let us know in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/linux-gaming-problems/
-
-作者:[Avimanyu Bandyopadhyay ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/avimanyu/
-[1]:https://itsfoss.com/author/avimanyu/
-[2]:https://itsfoss.com/linux-gaming-problems/#comments
-[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[4]:https://twitter.com/share?original_referer=/&text=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21&url=https://itsfoss.com/linux-gaming-problems/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=itsfoss2
-[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
-[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
-[8]:https://www.reddit.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
-[9]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
-[10]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
-[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg&url=https://itsfoss.com/linux-gaming-problems/&is_video=false&description=Linux%20gamer%27s%20problem
-[12]:https://itsfoss.com/linux-gaming-guide/
-[13]:https://itsfoss.com/linux-gaming-distributions/
-[14]:https://itsfoss.com/use-windows-applications-linux/
-[15]:https://www.playonlinux.com/en/
-[16]:http://store.steampowered.com/steamos/
-[17]:http://www.ibtimes.co.uk/reddit-users-want-replace-steam-open-source-game-launcher-project-ascension-1498999
-[18]:https://www.origin.com/
-[19]:https://www.reddit.com/r/pcmasterrace/comments/33xcvm/we_hate_valves_monopoly_over_pc_gaming_why/
-[20]:https://github.com/wine-compholio/wine-staging/wiki/CSMT
-[21]:http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/
-[22]:https://www.gamingonlinux.com/articles/tomb-raider-benchmark-video-comparison-linux-vs-windows-10.7138
-[23]:https://en.wikipedia.org/wiki/DirectX
-[24]:https://en.wikipedia.org/wiki/OpenGL
-[25]:https://www.gamingonlinux.com/articles/tomb-raider-released-for-linux-video-thoughts-port-report-included-the-first-linux-game-to-use-tresfx.7124
-[26]:https://itsfoss.com/osu-new-linux/
-[27]:http://lifeisstrange.com/
-[28]:https://itsfoss.com/install-steam-ubuntu-linux/
-[29]:https://itsfoss.com/deus-ex-mankind-divided-linux/
-[30]:http://wccftech.com/deus-ex-mankind-divided-director-console-ports-on-pc-is-disrespectful/
-[31]:http://developer.amd.com/tools-and-sdks/open-source/
-[32]:http://nvidia.com/
-[33]:http://amd.com/
-[34]:http://www.makeuseof.com/tag/open-source-amd-graphics-now-awesome-heres-get/
-[35]:https://youtu.be/MShbP3OpASA
-[36]:https://itsfoss.com/nvidia-optimus-support-linux/
-[37]:http://uplay.com/
-[38]:http://origin.com/
-[39]:https://itsfoss.com/linux-foundation-head-uses-macos/
-[40]:http://www.pcworld.com/article/2940470/hey-gamers-directx-11-is-coming-to-linux-thanks-to-codeweavers-and-wine.html
-[41]:https://itsfoss.com/deal-run-windows-software-and-games-on-linux-with-crossover-15-66-off/
-[42]:https://bbs.archlinux.org/viewtopic.php?id=214771
-[43]:https://ghostbin.com/paste/sy3e2
-[44]:https://www.reddit.com/r/linux_gaming/comments/3ap3uu/directx_11_support_coming_to_codeweavers/
-[45]:https://www.codeweavers.com/about/blogs/caron/2015/12/10/directx-11-really-james-didnt-lie
-[46]:https://itsfoss.com/linux-gaming-guide/
-[47]:https://en.wikipedia.org/wiki/Direct3D
-[48]:http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX
diff --git a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
index 20c14074c6..3c61f6dd8f 100644
--- a/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
+++ b/sources/tech/20171010 In Device We Trust Measure Twice Compute Once with Xen Linux TPM 2.0 and TXT.md
@@ -1,3 +1,6 @@
+ezio is translating
+
+
In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================
diff --git a/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
index ad3528f60b..883834d7e7 100644
--- a/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
+++ b/sources/tech/20171212 Toplip – A Very Strong File Encryption And Decryption CLI Utility.md
@@ -1,3 +1,4 @@
+tomjlw is translatting
Toplip – A Very Strong File Encryption And Decryption CLI Utility
======
There are numerous file encryption tools available on the market to protect
@@ -260,7 +261,7 @@ Cheers!
via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20171215 Top 5 Linux Music Players.md b/sources/tech/20171215 Top 5 Linux Music Players.md
deleted file mode 100644
index a57ad29c52..0000000000
--- a/sources/tech/20171215 Top 5 Linux Music Players.md
+++ /dev/null
@@ -1,145 +0,0 @@
-tomjlw is translating
-Top 5 Linux Music Players
-======
-
-
->Jack Wallen rounds up his five favorite Linux music players. Creative Commons Zero
->Pixabay
-
-No matter what you do, chances are you enjoy a bit of music playing in the background. Whether you're a coder, system administrator, or typical desktop user, enjoying good music might be at the top of your list of things you do on the desktop. And, with the holidays upon us, you might wind up with some gift cards that allow you to purchase some new music. If your music format of choice is of a digital nature (mine happens to be vinyl) and your platform is Linux, you're going to want a good GUI player to enjoy that music.
-
-Fortunately, Linux has no lack of digital music players. In fact, there are quite a few, most of which are open source and available for free. Let's take a look at a few such players, to see which one might suit your needs.
-
-### Clementine
-
-I wanted to start out with the player that has served as my default for years. [Clementine][1] offers probably the single best ratio of ease-of-use to flexibility you'll find in any player. Clementine is a fork of the now-defunct [Amarok][2] music player, but isn't limited to Linux; Clementine is also available for Mac OS and Windows platforms. The feature set is seriously impressive and includes the likes of:
-
- * Built-in equalizer
-
- * Customizable interface (display current album cover as background -- Figure 1)
-
- * Play local music or from Spotify, Last.fm, and more
-
- * Sidebar for easy library navigation
-
- * Built-in audio transcoding (into MP3, OGG, Flac, and more)
-
- * Remote control using [Android app][3]
-
- * Handy search function
-
- * Tabbed playlists
-
- * Easy creation of regular and smart playlists
-
- * CUE sheet support
-
- * Tag support
-
-
-
-
-![Clementine][5]
-
-
-Figure 1: The Clementine interface might be a bit old-school, but it's incredibly user-friendly and flexible.
-
-[Used with permission][6]
-
-Of all the music players I have used, Clementine is by far the most feature-rich and easy to use. It also includes one of the finest equalizers you'll find on a Linux music player (with 10 bands to adjust). Although it may not enjoy a very modern interface, it is absolutely unmatched for its ability to create and manipulate playlists. If your music collection is large, and you want total control over it, this is the player you want.
-
-Clementine can be found in the standard repositories and installed from either your distribution's software center or the command line.
-
-### Rhythmbox
-
-[Rhythmbox][7] is the default player for the GNOME desktop, but it does function well on other desktops. The Rhythmbox interface is slightly more modern than Clementine and takes a minimal approach to design. That doesn't mean the app is bereft of features. Quite the opposite. Rhythmbox offers gapless playback, Soundcloud support, album cover display, audio scrobbling from Last.fm and Libre.fm, Jamendo support, podcast subscription (from [Apple iTunes][8]), web remote control, and more.
-
-One very nice feature found in Rhythmbox is plugin support, which allows you to enable features like DAAP Music Sharing, FM Radio, Cover art search, notifications, ReplayGain, Song Lyrics, and more.
-
-The Rhythmbox playlist feature isn't quite as powerful as that found in Clementine, but it still makes it fairly easy to organize your music into quick playlists for any mood. Although Rhythmbox does offer a slightly more modern interface than Clementine (Figure 2), it's not quite as flexible.
-
-![Rhythmbox][10]
-
-
-Figure 2: The Rhythmbox interface is simple and straightforward.
-
-[Used with permission][6]
-
-### VLC Media Player
-
-For some, [VLC][11] cannot be beat for playing videos. However, VLC isn't limited to the playback of video. In fact, VLC does a great job of playing audio files. For [KDE Neon][12] users, VLC serves as your default for both music and video playback. Although VLC is one of the finest video players on the Linux market (it's my default), it does suffer from some minor limitations with audio--namely the lack of playlists and the inability to connect to remote directories on your network. But if you're looking for an incredibly simple and reliable means to play local files or network mms/rtsp streams, VLC is a quality tool.
-
-VLC does include an equalizer (Figure 3), a compressor, and a spatializer as well as the ability to record from a capture device.
-
-![VLC][14]
-
-
-Figure 3: The VLC equalizer in action.
-
-[Used with permission][6]
-
-### Audacious
-
-If you're looking for a lightweight music player, Audacious perfectly fits that bill. This particular music player is fairly single minded, but it does include an equalizer and a small selection of effects that will please many an audiophile (e.g., Echo, Silence removal, Speed and Pitch, Voice Removal, and more--Figure 4).
-
-![Audacious ][16]
-
-
-Figure 4: The Audacious EQ and plugins.
-
-[Used with permission][6]
-
-Audacious also includes a really handy alarm feature that allows you to set an alarm that will start playing your currently selected track at a user-specified time and duration.
-
-### Spotify
-
-I must confess, I use Spotify daily. I'm a subscriber and use it to find new music to purchase--which means I am constantly searching and discovering. Fortunately, there is a desktop client for Spotify (Figure 5) that can be easily installed using the [official Spotify Linux installation instructions][17]. Outside of listening to vinyl, I probably make use of Spotify more than any other music player. It also helps that I can seamlessly jump between the desktop client and the [Android app][18], so I never miss out on the music I enjoy.
-
-![Spotify][20]
-
-
-Figure 5: The official Spotify client on Linux.
-
-[Used with permission][6]
-
-The Spotify interface is very easy to use and, in fact, it beats the web player by leaps and bounds. Do not settle for the [Spotify Web Player][21] on Linux, as the desktop client makes it much easier to create and manage your playlists. If you're a Spotify power user, don't even bother with the built-in support for the streaming client in the other desktop apps--once you've used the Spotify Desktop Client, the other apps pale in comparison.
-
-### The choice is yours
-
-Other options are available (check your desktop software center), but these five clients (in my opinion) are the best of the best. For me, the one-two punch of Clementine and Spotify gives me the best of all possible worlds. Try them out and see which one best meets your needs.
-
-Learn more about Linux through the free ["Introduction to Linux"][22] course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2017/12/top-5-linux-music-players
-
-作者:[][a]
-译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com
-[1]:https://www.clementine-player.org/
-[2]:https://en.wikipedia.org/wiki/Amarok_(software)
-[3]:https://play.google.com/store/apps/details?id=de.qspool.clementineremote
-[4]:https://www.linux.com/files/images/clementinejpg
-[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clementine.jpg?itok=_k13MtM3 (Clementine)
-[6]:https://www.linux.com/licenses/category/used-permission
-[7]:https://wiki.gnome.org/Apps/Rhythmbox
-[8]:https://www.apple.com/itunes/
-[9]:https://www.linux.com/files/images/rhythmboxjpg
-[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/rhythmbox.jpg?itok=GOjs9vTv (Rhythmbox)
-[11]:https://www.videolan.org/vlc/index.html
-[12]:https://neon.kde.org/
-[13]:https://www.linux.com/files/images/vlcjpg
-[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/vlc.jpg?itok=hn7iKkmK (VLC)
-[15]:https://www.linux.com/files/images/audaciousjpg
-[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/audacious.jpg?itok=9YALPzOx (Audacious )
-[17]:https://www.spotify.com/us/download/linux/
-[18]:https://play.google.com/store/apps/details?id=com.spotify.music
-[19]:https://www.linux.com/files/images/spotifyjpg
-[20]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/spotify.jpg?itok=P3FLfcYt (Spotify)
-[21]:https://open.spotify.com/browse/featured
-[22]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md b/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md
deleted file mode 100644
index 4335a7a98d..0000000000
--- a/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md
+++ /dev/null
@@ -1,97 +0,0 @@
-8 KDE Plasma Tips and Tricks to Improve Your Productivity
-======
-
-[#] leon-shi is translating
-
-
-KDE's Plasma is easily one of the most powerful desktop environments available for Linux. It's highly configurable, and it looks pretty good, too. That doesn't amount to a whole lot unless you can actually get things done.
-
-You can easily configure Plasma and make use of a lot of its convenient and time-saving features to boost your productivity and have a desktop that empowers you, rather than getting in your way.
-
-These tips aren't in any particular order, so you don't need to prioritize. Pick the ones that best fit your workflow.
-
- **Related** : [10 of the Best KDE Plasma Applications You Should Try][1]
-
-### 1. Multimedia Controls
-
-This isn't so much of a tip as it is something that's good to keep in mind. Plasma keeps multimedia controls everywhere. You don't need to open your media player every time you need to pause, resume, or skip a song; you can mouse over the minimized window or even control it via the lock screen. There's no need to scramble to log in to change a song or because you forgot to pause one.
-
-### 2. KRunner
-
-![KDE Plasma KRunner][2]
-
-KRunner is an often under-appreciated feature of the Plasma desktop. Most people are used to digging through the application launcher menu to find the program that they're looking to launch. That's not necessary with KRunner.
-
-To use KRunner, make sure that your focus is on the desktop itself. (Click on it instead of a window.) Then, start typing the name of the program that you want. KRunner will automatically drop down from the top of your screen with suggestions. Click or press Enter on the one you're looking for. It's much faster than remembering which category your program is under.
-
-### 3. Jump Lists
-
-![KDE Plasma Jump Lists][3]
-
-Jump lists are a fairly recent addition to the Plasma desktop. They allow you to launch an application directly to a specific section or feature.
-
-So if you have a launcher on a menu bar, you can right-click and get a list of places to jump to. Select where you want to go, and you're off.
-
-### 4. KDE Connect
-
-![KDE Connect Menu Android][4]
-
-[KDE Connect][5] is a massive help if you have an Android phone. It connects the phone to your desktop so you can share things seamlessly between the devices.
-
-With KDE Connect, you can see your [Android device's notification][6] on your desktop in real time. It also enables you to send and receive text messages from Plasma without ever picking up your phone.
-
-KDE Connect also lets you send files and share web pages between your phone and your computer. You can easily move from one device to the other without a lot of hassle or losing your train of thought.
-
-### 5. Plasma Vaults
-
-![KDE Plasma Vault][7]
-
-Plasma Vaults are another new addition to the Plasma desktop. They are KDE's simple solution to encrypted files and folders. If you don't work with encrypted files, this one won't really save you any time. If you do, though, vaults are a much simpler approach.
-
-Plasma Vaults let you create encrypted directories as a regular user without root and manage them from your task bar. You can mount and unmount the directories on the fly without the need for external programs or additional privileges.
-
-### 6. Pager Widget
-
-![KDE Plasma Pager][8]
-
-Configure your desktop with the pager widget. It allows you to easily access three additional workspaces for even more screen room.
-
-Add the widget to your menu bar, and you can slide between multiple workspaces. These are all the size of your screen, so you gain multiple times the total screen space. That lets you lay out more windows without getting confused by a minimized mess or disorganization.
-
-### 7. Create a Dock
-
-![KDE Plasma Dock][9]
-
-Plasma is known for its flexibility and the room it allows for configuration. Use that to your advantage. If you have programs that you're always using, consider setting up an OS X style dock with your most used applications. You'll be able to get them with a single click rather than going through a menu or typing in their name.
-
-### 8. Add a File Tree to Dolphin
-
-![Plasma Dolphin Directory][10]
-
-It's much easier to navigate folders in a directory tree. Dolphin, Plasma's default file manager, has built-in functionality to display a directory listing in the form of a tree on the side of the folder window.
-
-To enable the directory tree, click on the "Control" tab, then "Configure Dolphin," "View Modes," and "Details." Finally, select "Expandable Folders."
-
-Remember that these tips are just tips. Don't try to force yourself to do something that's getting in your way. You may hate using file trees in Dolphin. You may never use Pager. That's alright. There may even be something that you personally like that's not listed here. Do what works for you. That said, at least a few of these should shave some serious time out of your work day.
-
---------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/kde-plasma-tips-tricks-improve-productivity/
-
-作者:[Nick Congleton][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/nickcongleton/
-[1]:https://www.maketecheasier.com/10-best-kde-plasma-applications/ (10 of the Best KDE Plasma Applications You Should Try)
-[2]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-krunner.jpg (KDE Plasma KRunner)
-[3]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-jumplist.jpg (KDE Plasma Jump Lists)
-[4]:https://www.maketecheasier.com/assets/uploads/2017/05/kde-connect-menu-e1494899929112.jpg (KDE Connect Menu Android)
-[5]:https://www.maketecheasier.com/send-receive-sms-linux-kde-connect/
-[6]:https://www.maketecheasier.com/android-notifications-ubuntu-kde-connect/
-[7]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-vault.jpg (KDE Plasma Vault)
-[8]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-pager.jpg (KDE Plasma Pager)
-[9]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dock.jpg (KDE Plasma Dock)
-[10]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dolphin.jpg (Plasma Dolphin Directory)
diff --git a/sources/tech/20180122 Ick- a continuous integration system.md b/sources/tech/20180122 Ick- a continuous integration system.md
deleted file mode 100644
index 4620e2c036..0000000000
--- a/sources/tech/20180122 Ick- a continuous integration system.md
+++ /dev/null
@@ -1,75 +0,0 @@
-Ick: a continuous integration system
-======
-**TL;DR:** Ick is a continuous integration or CI system. See for more information.
-
-More verbose version follows.
-
-### First public version released
-
-The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.
-
-My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at , and the [download][1] page has links to the source code and .deb packages and an Ansible playbook for installing it.
-
-I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.
-
-### Invitation to contribute
-
-Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the [governance][2] page for the constitution, the [getting started][3] page for tips on how to start contributing, and the [contact][4] page for how to get in touch.
-
-### Architecture
-
-Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the [architecture][5] page for details.
-
-### Manifesto
-
-Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.
-
-A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not be a lot of effort to set up, require a lot of hardware just for the CI, need frequent attention for it to keep working, and developers should never have to wonder why something isn't working.
-
-A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.
-
-Also, like all software, CI should be fully and completely free software and your instance should be under your control.
-
-(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)
-
-### Dreams of the future
-
-In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.
-
- * A build may be triggered by a variety of events. Time is an obvious event, as is source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.
-
- * Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.
-
- * Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)
-
- * Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller.
-
- * Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)
-
- * Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.
-
- * Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."
-
-
-
-
-### Please give feedback
-
-If you try ick, or even if you've just read this far, please share your thoughts on it. See the [contact][4] page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.
-
---------------------------------------------------------------------------------
-
-via: https://blog.liw.fi/posts/2018/01/22/ick_a_continuous_integration_system/
-
-作者:[Lars Wirzenius][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://blog.liw.fi/
-[1]:http://ick.liw.fi/download/
-[2]:http://ick.liw.fi/governance/
-[3]:http://ick.liw.fi/getting-started/
-[4]:http://ick.liw.fi/contact/
-[5]:http://ick.liw.fi/architecture/
diff --git a/sources/tech/20180128 Get started with Org mode without Emacs.md b/sources/tech/20180128 Get started with Org mode without Emacs.md
deleted file mode 100644
index 75a5bcb092..0000000000
--- a/sources/tech/20180128 Get started with Org mode without Emacs.md
+++ /dev/null
@@ -1,78 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with Org mode without Emacs)
-[#]: via: (https://opensource.com/article/19/1/productivity-tool-org-mode)
-[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
-
-Get started with Org mode without Emacs
-======
-No, you don't need Emacs to use Org, the 16th in our series on open source tools that will make you more productive in 2019.
-
-
-
-There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
-
-Here's the 16th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
-
-### Org (without Emacs)
-
-[Org mode][1] (or just Org) is not in the least bit new, but there are still many people who have never used it. They would love to try it out to get a feel for how Org can help them be productive. But the biggest barrier is that Org is associated with Emacs, and many people think one requires the other. Not so! Org can be used with a variety of other tools and editors once you understand the basics.
-
-
-
-Org, at its very heart, is a structured text file. It has headers, subheaders, and keywords that allow other tools to parse files into agendas and to-do lists. Org files can be edited with any flat-text editor (e.g., [Vim][2], [Atom][3], or [Visual Studio Code][4]), and many have plugins that help create and manage Org files.
-
-A basic Org file looks something like this:
-
-```
-* Task List
-** TODO Write Article for Day 16 - Org w/out emacs
- DEADLINE: <2019-01-25 12:00>
-*** DONE Write sample org snippet for article
- - Include at least one TODO and one DONE item
- - Show notes
- - Show SCHEDULED and DEADLINE
-*** TODO Take Screenshots
-** Dentist Appointment
- SCHEDULED: <2019-01-31 13:30-14:30>
-```
-
-Org uses an outline format with asterisks (`*`) as bullets to indicate an item's level. Any item that begins with the word TODO (yes, in all caps) is just that: a to-do item. The word DONE indicates it is completed. SCHEDULED and DEADLINE indicate dates and times relevant to the item. If there's no time in either field, the item is considered an all-day event.
-
-With the right plugins, your favorite text editor becomes a powerhouse of productivity and organization. For example, the [vim-orgmode][5] plugin's features include functions to create Org files, syntax highlighting, and key commands to generate agendas and comprehensive to-do lists across files.
-
-
-
-The Atom [Organized][6] plugin adds a sidebar on the right side of the screen that shows the agenda and to-do items in Org files. It can read from multiple files, with the default path set in the configuration options. The Todo sidebar allows you to click on a to-do item to mark it done, then automatically updates the source Org file.
-
-
-
-There are also a whole host of tools that "speak Org" to help keep you productive. With libraries in Python, Perl, PHP, NodeJS, and more, you can develop your own scripts and tools. And, of course, there is also [Emacs][7], which has Org support within the core distribution.
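-Because an Org file is plain structured text, even simple shell tools can pull useful views out of it without any Org-aware library. As a quick, hypothetical sketch (the filename `tasks.org` is an assumption), you could list open and completed items straight from the command line:
-
-```
-# List open TODO headlines (a headline is one or more leading asterisks)
-grep -nE '^\*+ TODO' tasks.org
-
-# Count completed items
-grep -cE '^\*+ DONE' tasks.org
-```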
-
-
-
-Org mode is one of the best tools for keeping on track with what needs to be done and when. And, contrary to myth, it doesn't need Emacs, just a text editor.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/productivity-tool-org-mode
-
-作者:[Kevin Sonney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksonney (Kevin Sonney)
-[b]: https://github.com/lujun9972
-[1]: https://orgmode.org/
-[2]: https://www.vim.org/
-[3]: https://atom.io/
-[4]: https://code.visualstudio.com/
-[5]: https://github.com/jceb/vim-orgmode
-[6]: https://atom.io/packages/organized
-[7]: https://www.gnu.org/software/emacs/
diff --git a/sources/tech/20180129 The 5 Best Linux Distributions for Development.md b/sources/tech/20180129 The 5 Best Linux Distributions for Development.md
deleted file mode 100644
index cc11407ff3..0000000000
--- a/sources/tech/20180129 The 5 Best Linux Distributions for Development.md
+++ /dev/null
@@ -1,158 +0,0 @@
-The 5 Best Linux Distributions for Development
-============================================================
-
-
-Jack Wallen looks at some of the best Linux distributions for development efforts.[Creative Commons Zero][6]
-
-When considering Linux, there are so many variables to take into account. What package manager do you wish to use? Do you prefer a modern or old-standard desktop interface? Is ease of use your priority? How flexible do you want your distribution? What task will the distribution serve?
-
-It is that last question which should often be considered first. Is the distribution going to work as a desktop or a server? Will you be doing network or system audits? Or will you be developing? If you’ve spent much time considering Linux, you know that for every task there are several well-suited distributions. This certainly holds true for developers. Even though Linux, by design, is an ideal platform for developers, there are certain distributions that rise above the rest and serve as great operating systems for developers.
-
-I want to share what I consider to be some of the best distributions for your development efforts. Although each of these five distributions can be used for general purpose development (with maybe one exception), they each serve a specific purpose. You may or may not be surprised by the selections.
-
-With that said, let’s get to the choices.
-
-### Debian
-
-The [Debian][14] distribution winds up on the top of many a Linux list, with good reason. Debian is the distribution on which so many others are based, and it is for this reason that many developers choose Debian. When you develop a piece of software on Debian, chances are very good that package will also work on [Ubuntu][15], [Linux Mint][16], [Elementary OS][17], and a vast collection of other distributions.
-
-Beyond that obvious answer, Debian also has a very large number of applications available, by way of the default repositories (Figure 1).
-
-
-Figure 1: Available applications from the standard Debian repositories.[Used with permission][1]
-
-To make matters even more programmer-friendly, those applications (and their dependencies) are simple to install. Take, for instance, the build-essential package (which can be installed on any distribution derived from Debian). This package includes the likes of dpkg-dev, g++, gcc, hurd-dev, libc-dev, and make, all tools necessary for the development process. The build-essential package can be installed with the command `sudo apt install build-essential`.
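-For example, here is what that looks like in practice (a minimal sketch; the exact dependency list varies by release):
-
-```
-sudo apt update
-sudo apt install build-essential      # pulls in gcc, g++, make, dpkg-dev, libc-dev, ...
-apt-cache depends build-essential     # inspect what the metapackage brings in
-```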
-
-There are hundreds of other developer-specific applications available from the standard repositories, tools such as:
-
-* Autoconf—configure script builder
-
-* Autoproject—creates a source package for a new program
-
-* Bison—general purpose parser generator
-
-* Bluefish—powerful GUI editor, targeted towards programmers
-
-* Geany—lightweight IDE
-
-* Kate—powerful text editor
-
-* Eclipse—helps builders independently develop tools that integrate with other people’s tools
-
-The list goes on and on.
-
-Debian is also as rock-solid a distribution as you’ll find, so there’s very little concern you’ll lose precious work, by way of the desktop crashing. As a bonus, all programs included with Debian have met the [Debian Free Software Guidelines][18], which adheres to the following “social contract”:
-
-* Debian will remain 100% free.
-
-* We will give back to the free software community.
-
-* We will not hide problems.
-
-* Our priorities are our users and free software
-
-* Works that do not meet our free software standards are included in a non-free archive.
-
-Also, if you’re new to developing on Linux, Debian has a handy [Programming section in their user manual][19].
-
-### openSUSE Tumbleweed
-
-If you’re looking to develop with a cutting-edge, rolling release distribution, [openSUSE][20] offers one of the best in [Tumbleweed][21]. Not only will you be developing with the most up-to-date software available, you’ll be doing so with the help of openSUSE’s amazing administrator tools, which include YaST. If you’re not familiar with YaST (Yet another Setup Tool), it’s an incredibly powerful piece of software that allows you to manage the whole of the platform from one convenient location. From within YaST, you can also install using RPM Groups. Open YaST, click on RPM Groups (software grouped together by purpose), and scroll down to the Development section to see the large number of groups available for installation (Figure 2).
-
-
-
-Figure 2: Installing package groups in openSUSE Tumbleweed.[Creative Commons Zero][2]
-
-openSUSE also allows you to quickly install all the necessary devtools with the simple click of a weblink. Head over to the [rpmdevtools install site][22] and click the link for Tumbleweed. This will automatically add the necessary repository and install rpmdevtools.
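-If you prefer the terminal to the one-click installer, roughly the same result can be had with zypper (a sketch; the package may require the devel:tools repository, which the one-click link adds for you):
-
-```
-sudo zypper refresh
-sudo zypper install rpmdevtools
-```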
-
-By developing with a rolling release distribution, you know you’re working with the most recent releases of installed software.
-
-### CentOS
-
-Let’s face it, [Red Hat Enterprise Linux][23] (RHEL) is the de facto standard for enterprise businesses. If you’re looking to develop for that particular platform, and you can’t afford a RHEL license, you cannot go wrong with [CentOS][24]—which is, effectively, a community version of RHEL. You will find many of the packages found on CentOS to be the same as in RHEL—so once you’re familiar with developing on one, you’ll be fine on the other.
-
-If you’re serious about developing on an enterprise-grade platform, you cannot go wrong starting with CentOS. And because CentOS is a server-specific distribution, you can more easily develop for a web-centric platform. Instead of developing your work and then migrating it to a server (hosted on a different machine), you can easily have CentOS set up to serve as an ideal host for both developing and testing.
-
-Looking for software to meet your development needs? You only need to open up the CentOS Application Installer, where you’ll find a Developer section that includes a dedicated sub-section for Integrated Development Environments (IDEs - Figure 3).
-
-
-Figure 3: Installing a powerful IDE is simple in CentOS.[Used with permission][3]
-
-CentOS also includes Security Enhanced Linux (SELinux), which makes it easier for you to test your software’s ability to integrate with the same security platform found in RHEL. SELinux can often cause headaches for poorly designed software, so having it at the ready can be a real boon for ensuring your applications work on the likes of RHEL. If you’re not sure where to start with developing on CentOS 7, you can read through the [RHEL 7 Developer Guide][25].
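-While testing against SELinux, a few standard commands are enough to see how it is treating your application (a sketch using the stock SELinux and audit tooling):
-
-```
-getenforce                        # Enforcing, Permissive, or Disabled
-sestatus                          # fuller report: mode, loaded policy, mount point
-sudo ausearch -m AVC -ts recent   # any recent SELinux denials hitting your app
-```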
-
-### Raspbian
-
-Let’s face it, embedded systems are all the rage. One easy means of working with such systems is via the Raspberry Pi—a tiny footprint computer that has become incredibly powerful and flexible. In fact, the Raspberry Pi has become the hardware used by DIYers all over the planet. Powering those devices is the [Raspbian][26] operating system. Raspbian includes tools like [BlueJ][27], [Geany][28], [Greenfoot][29], [Sense HAT Emulator][30], [Sonic Pi][31], [Thonny Python IDE][32], [Python][33], and [Scratch][34], so you won’t want for the necessary development software. Raspbian also includes a user-friendly desktop UI (Figure 4), to make things even easier.
-
-
-Figure 4: The Raspbian main menu, showing pre-installed developer software.[Used with permission][4]
-
-For anyone looking to develop for the Raspberry Pi platform, Raspbian is a must have. If you’d like to give Raspbian a go, without the Raspberry Pi hardware, you can always install it as a VirtualBox virtual machine, by way of the ISO image found [here][35].
-
-### Pop!_OS
-
-Don’t let the name fool you: [System76][36]’s [Pop!_OS][37] entry into the world of operating systems is serious. And although what System76 has done to this Ubuntu derivative may not be readily obvious, it is something special.
-
-The goal of System76 is to create an operating system specific to the developer, maker, and computer science professional. With a newly-designed GNOME theme, Pop!_OS is beautiful (Figure 5) and as highly functional as you would expect from both the hardware maker and desktop designers.
-
-
-
-Figure 5: The Pop!_OS Desktop.[Used with permission][5]
-
-But what makes Pop!_OS special is the fact that it is being developed by a company dedicated to Linux hardware. This means, when you purchase a System76 laptop, desktop, or server, you know the operating system will work seamlessly with the hardware—on a level no other company can offer. I would predict that, with Pop!_OS, System76 will become the Apple of Linux.
-
-### Time for work
-
-In their own way, each of these distributions serves a specific purpose. You have a stable desktop (Debian), a cutting-edge desktop (openSUSE Tumbleweed), a server (CentOS), an embedded platform (Raspbian), and a distribution that seamlessly melds with its hardware (Pop!_OS). With the exception of Raspbian, any one of these distributions would serve as an outstanding development platform. Get one installed and start working on your next project with confidence.
-
- _Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/5-best-linux-distributions-development
-
-作者:[JACK WALLEN ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/jlwallen
-[1]:https://www.linux.com/licenses/category/used-permission
-[2]:https://www.linux.com/licenses/category/creative-commons-zero
-[3]:https://www.linux.com/licenses/category/used-permission
-[4]:https://www.linux.com/licenses/category/used-permission
-[5]:https://www.linux.com/licenses/category/used-permission
-[6]:https://www.linux.com/licenses/category/creative-commons-zero
-[7]:https://www.linux.com/files/images/devel1jpg
-[8]:https://www.linux.com/files/images/devel2jpg
-[9]:https://www.linux.com/files/images/devel3jpg
-[10]:https://www.linux.com/files/images/devel4jpg
-[11]:https://www.linux.com/files/images/devel5jpg
-[12]:https://www.linux.com/files/images/king-penguins1920jpg
-[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
-[14]:https://www.debian.org/
-[15]:https://www.ubuntu.com/
-[16]:https://linuxmint.com/
-[17]:https://elementary.io/
-[18]:https://www.debian.org/social_contract
-[19]:https://www.debian.org/doc/manuals/debian-reference/ch12.en.html
-[20]:https://www.opensuse.org/
-[21]:https://en.opensuse.org/Portal:Tumbleweed
-[22]:https://software.opensuse.org/download.html?project=devel%3Atools&package=rpmdevtools
-[23]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
-[24]:https://www.centos.org/
-[25]:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/pdf/developer_guide/Red_Hat_Enterprise_Linux-7-Developer_Guide-en-US.pdf
-[26]:https://www.raspberrypi.org/downloads/raspbian/
-[27]:https://www.bluej.org/
-[28]:https://www.geany.org/
-[29]:https://www.greenfoot.org/
-[30]:https://www.raspberrypi.org/blog/sense-hat-emulator/
-[31]:http://sonic-pi.net/
-[32]:http://thonny.org/
-[33]:https://www.python.org/
-[34]:https://scratch.mit.edu/
-[35]:http://rpf.io/x86iso
-[36]:https://system76.com/
-[37]:https://system76.com/pop
diff --git a/sources/tech/20180202 Tips for success when getting started with Ansible.md b/sources/tech/20180202 Tips for success when getting started with Ansible.md
deleted file mode 100644
index 539db2ac86..0000000000
--- a/sources/tech/20180202 Tips for success when getting started with Ansible.md
+++ /dev/null
@@ -1,73 +0,0 @@
-Tips for success when getting started with Ansible
-======
-
-
-
-Ansible is an open source automation tool used to configure servers, install software, and perform a wide variety of IT tasks from one central location. It is a one-to-many agentless mechanism where all instructions are run from a control machine that communicates with remote clients over SSH, although other protocols are also supported.
-
-While targeted for system administrators with privileged access who routinely perform tasks such as installing and configuring applications, Ansible can also be used by non-privileged users. For example, a database administrator using the `mysql` login ID could use Ansible to create databases, add users, and define access-level controls.
-
-Let's go over a very simple example where a system administrator provisions 100 servers each day and must run a series of Bash commands on each one before handing it off to users.
-
-
-
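-(The original article showed the playbook as an image. What follows is a minimal, hypothetical sketch of what such a playbook might look like; the inventory group `newservers` and the specific tasks are assumptions, not the author's original.)
-
-```
-# Save a small playbook, then run it against the freshly provisioned servers.
-cat > provision.yml <<'EOF'
----
-- hosts: newservers
-  become: yes
-  tasks:
-    - name: Ensure the sysman user exists
-      user:
-        name: sysman
-        state: present
-    - name: Install the basic tooling users expect
-      yum:
-        name: [vim, wget]
-        state: present
-EOF
-
-ansible-playbook provision.yml --check   # dry run first
-ansible-playbook provision.yml
-```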
-This is a simple example, but it should illustrate how easily commands can be specified in YAML files and executed on remote servers. In a heterogeneous environment, conditional statements can be added so that certain commands are only executed on certain servers (e.g., "only execute `yum` commands on systems that are not Ubuntu or Debian").
-
-One important feature of Ansible is that a playbook describes a desired state of a computer system, so a playbook can be run multiple times against a server without breaking anything: if a certain task has already been implemented (e.g., "user `sysman` already exists"), then Ansible simply ignores it and moves on.
-
-### Definitions
-
- * **Tasks:** A task is the smallest unit of work. It can be an action like "Install a database," "Install a web server," "Create a firewall rule," or "Copy this configuration file to that server."
- * **Plays:** A play is made up of tasks. For example, the play "Prepare a database to be used by a web server" is made up of tasks: 1) Install the database package; 2) Set a password for the database administrator; 3) Create a database; and 4) Set access to the database.
- * **Playbook:** A playbook is made up of plays. A playbook could be "Prepare my website with a database backend," and the plays would be 1) Set up the database server; and 2) Set up the web server.
- * **Roles:** Roles are used to save and organize playbooks and allow sharing and reuse of playbooks. Following the previous examples, if you need to fully configure a web server, you can use a role that others have written and shared to do just that. Since roles are highly configurable (if written correctly), they can be easily reused to suit any given deployment requirements.
- * **Ansible Galaxy:** Ansible [Galaxy][1] is an online repository where roles are uploaded so they can be shared with others. It is integrated with GitHub, so roles can be organized into Git repositories and then shared via Ansible Galaxy.
-
-
-
-These definitions and their relationships are depicted here:
-
-
-
-Please note this is just one way to organize the tasks that need to be executed. We could have split up the installation of the database and the web server into separate playbooks and into different roles. Most roles in Ansible Galaxy install and configure individual applications. You can see examples for installing [mysql][2] and installing [httpd][3].
-
-### Tips for writing playbooks
-
-The best source for learning Ansible is the official [documentation][4] site. And, as usual, online search is your friend. I recommend starting with simple tasks, like installing applications or creating users. Once you are ready, follow these guidelines:
-
- * When testing, use a small subset of servers so that your plays execute faster. If they are successful in one server, they will be successful in others.
- * Always do a dry run to make sure all commands are working (run in check mode with the `--check` flag).
- * Test as often as you need to without fear of breaking things. Tasks describe a desired state, so if a desired state is already achieved, it will simply be ignored.
- * Be sure all host names defined in `/etc/ansible/hosts` are resolvable.
- * Because communication to remote hosts is done using SSH, keys have to be accepted by the control machine, so either 1) exchange keys with remote hosts prior to starting; or 2) be ready to type in "Yes" to accept SSH key exchange requests for each remote host you want to manage.
- * Although you can combine tasks for different Linux distributions in one playbook, it's cleaner to write a separate playbook for each distro.
-
-
-
-### In the final analysis
-
-Ansible is a great choice for implementing automation in your data center:
-
- * It's agentless, so it is simpler to install than other automation tools.
- * Instructions are in YAML (though JSON is also supported) so it's easier than writing shell scripts.
- * It's open source software, so contribute back to it and make it even better!
-
-
-
-How have you used Ansible to automate your data center? Share your experience in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/tips-success-when-getting-started-ansible
-
-作者:[Jose Delarosa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jdelaros1
-[1]:https://galaxy.ansible.com/
-[2]:https://galaxy.ansible.com/bennojoy/mysql/
-[3]:https://galaxy.ansible.com/xcezx/httpd/
-[4]:http://docs.ansible.com/
diff --git a/sources/tech/20180307 3 open source tools for scientific publishing.md b/sources/tech/20180307 3 open source tools for scientific publishing.md
deleted file mode 100644
index 0bbc3578e9..0000000000
--- a/sources/tech/20180307 3 open source tools for scientific publishing.md
+++ /dev/null
@@ -1,78 +0,0 @@
-3 open source tools for scientific publishing
-======
-
-
-One industry that lags behind others in the adoption of digital or open source tools is the competitive and lucrative world of scientific publishing. Worth over £19B ($26B) annually, according to figures published by Stephen Buranyi in [The Guardian][1] last year, the system for selecting, publishing, and sharing even the most important scientific research today still bears many of the constraints of print media. New digital-era technologies present a huge opportunity to accelerate discovery, make science collaborative instead of competitive, and redirect investments from infrastructure development into research that benefits society.
-
-The non-profit [eLife initiative][2] was established by the funders of research, in part to encourage the use of these technologies to this end. In addition to publishing an open-access journal for important advances in life science and biomedical research, eLife has made itself into a platform for experimentation and showcasing innovation in research communication—with most of this experimentation based around the open source ethos.
-
-Working on open publishing infrastructure projects gives us the opportunity to accelerate the reach and adoption of the types of technology and user experience (UX) best practices that we consider important to the advancement of the academic publishing industry. Speaking very generally, the UX of open source products is often left undeveloped, which can in some cases dissuade people from using them. As part of our investment in OSS development, we place a strong emphasis on UX in order to encourage users to adopt these products.
-
-All of our code is open source, and we actively encourage community involvement in our projects, which to us means faster iteration, more experimentation, greater transparency, and increased reach for our work.
-
-The projects that we are involved in, such as the development of Libero (formerly known as [eLife Continuum][3]) and the [Reproducible Document Stack][4], along with our recent collaboration with [Hypothesis][5], show how OSS can be used to bring about positive changes in the assessment, publication, and communication of new discoveries.
-
-### Libero
-
-Libero is a suite of services and applications available to publishers that includes a post-production publishing system, a full front-end user interface pattern suite, Libero's Lens Reader, an open API, and search and recommendation engines.
-
-Last year, we took a user-driven approach to redesigning the front end of Libero, resulting in less distracting site “furniture” and a greater focus on research articles. We tested and iterated all the key functional areas of the site with members of the eLife community to ensure the best possible reading experience for everyone. The site’s new API also provides simpler access to content for machine readability, including text mining, machine learning, and online application development.
-
-The content on our website and the patterns that drive the new design are all open source to encourage future product development for both eLife and other publishers that wish to use it.
-
-### The Reproducible Document Stack
-
-In collaboration with [Substance][6] and [Stencila][7], eLife is also engaged in a project to create a Reproducible Document Stack (RDS)—an open stack of tools for authoring, compiling, and publishing computationally reproducible manuscripts online.
-
-Today, an increasing number of researchers are able to document their computational experiments through languages such as [R Markdown][8] and [Python][9]. These can serve as important parts of the experimental record, and while they can be shared independently from or alongside the resulting research article, traditional publishing workflows tend to relegate these assets as a secondary class of content. To publish papers, researchers using these languages often have little option but to submit their computational results as “flattened” outputs in the form of figures, losing much of the value and reusability of the code and data references used in the computation. And while electronic notebook solutions such as [Jupyter][10] can enable researchers to publish their code in an easily reusable and executable form, that’s still in addition to, rather than as an integral part of, the published manuscript.
-
-The [Reproducible Document Stack][11] project aims to address these challenges through development and publication of a working prototype of a reproducible manuscript that treats code and data as integral parts of the document, demonstrating a complete end-to-end technology stack from authoring through to publication. It will ultimately allow authors to submit their manuscripts in a format that includes embedded code blocks and computed outputs (statistical results, tables, or graphs), and have those assets remain both visible and executable throughout the publication process. Publishers will then be able to preserve these assets directly as integral parts of the published online article.
-
-### Open annotation with Hypothesis
-
-Most recently, we introduced open annotation in collaboration with [Hypothesis][12] to enable users of our website to make comments, highlight important sections of articles, and engage with the reading public online.
-
-Through this collaboration, the open source Hypothesis software was customized with new moderation features, single sign-on authentication, and user-interface customization options, giving publishers more control over its implementation on their sites. These enhancements are already driving higher-quality discussions around published scholarly content.
-
-The tool can be integrated seamlessly into publishers’ websites, with the scholarly publishing platform [PubFactory][13] and content solutions provider [Ingenta][14] already taking advantage of its improved feature set. [HighWire][15] and [Silverchair][16] are also offering their publishers the opportunity to implement the service.
-
-### Other industries and open source
-
-Over time, we hope to see more publishers adopt Hypothesis, Libero, and other projects to help them foster the discovery and reuse of important scientific research. But the opportunities for innovation eLife has been able to leverage because of these and other OSS technologies are also prevalent in other industries.
-
-The world of data science would be nowhere without the high-quality, well-supported open source software and the communities built around it; [TensorFlow][17] is a leading example of this. Thanks to OSS and its communities, all areas of AI and machine learning have seen rapid acceleration and advancement compared to other areas of computing. Similar is the explosion in usage of Linux as a cloud web host, followed by containerization with Docker, and now the growth of Kubernetes, one of the most popular open source projects on GitHub.
-
-All of these technologies enable organizations to do more with less and focus on innovation instead of reinventing the wheel. And in the end, that’s the real benefit of OSS: It lets us all learn from each other’s failures while building on each other's successes.
-
-We are always on the lookout for opportunities to engage with the best emerging talent and ideas at the interface of research and technology. Find out more about some of these engagements on [eLife Labs][18], or contact [innovation@elifesciences.org][19] for more information.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/scientific-publishing-software
-
-作者:[Paul Shanno][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/pshannon
-[1]:https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
-[2]:https://elifesciences.org/about
-[3]:https://elifesciences.org/inside-elife/33e4127f/elife-introduces-continuum-a-new-open-source-tool-for-publishing
-[4]:https://elifesciences.org/for-the-press/e6038800/elife-supports-development-of-open-technology-stack-for-publishing-reproducible-manuscripts-online
-[5]:https://elifesciences.org/for-the-press/81d42f7d/elife-enhances-open-annotation-with-hypothesis-to-promote-scientific-discussion-online
-[6]:https://github.com/substance
-[7]:https://github.com/stencila/stencila
-[8]:https://rmarkdown.rstudio.com/
-[9]:https://www.python.org/
-[10]:http://jupyter.org/
-[11]:https://elifesciences.org/labs/7dbeb390/reproducible-document-stack-supporting-the-next-generation-research-article
-[12]:https://github.com/hypothesis
-[13]:http://www.pubfactory.com/
-[14]:http://www.ingenta.com/
-[15]:https://github.com/highwire
-[16]:https://www.silverchair.com/community/silverchair-universe/hypothesis/
-[17]:https://www.tensorflow.org/
-[18]:https://elifesciences.org/labs
-[19]:mailto:innovation@elifesciences.org
diff --git a/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md b/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md
new file mode 100644
index 0000000000..0b5fb0415b
--- /dev/null
+++ b/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Modularity in Fedora 28 Server Edition)
+[#]: via: (https://fedoramagazine.org/wp-content/uploads/2018/05/f28-server-modularity-816x345.jpg)
+[#]: author: (Stephen Gallagher https://fedoramagazine.org/author/sgallagh/)
+
+Modularity in Fedora 28 Server Edition
+======
+
+
+
+### What is Modularity?
+
+A classic conundrum that all open-source distributions have faced is the “too fast/too slow” problem. Users install an operating system in order to enable the use of their applications. A comprehensive distribution like Fedora has both an advantage and a disadvantage in its large amount of available software. While the package the user wants may be available, it might not be available in the version needed. Here’s how Modularity can help solve that problem.
+
+Fedora sometimes moves too fast for some users. Its rapid release cycle and desire to carry the latest stable software can result in breaking compatibility with applications. If a user can’t run a web application because Fedora upgraded a web framework to an incompatible version, it can be very frustrating. The classic answer to the “too fast” problem has been “Fedora should have an LTS release.” However, this approach only solves half the problem and makes the flip side of this conundrum worse.
+
+There are also times when Fedora moves too slowly for some of its users. For example, a Fedora release may be poorly-timed alongside the release of other desirable software. Once a Fedora release is declared stable, packagers must abide by the [Stable Updates Policy][1] and not introduce incompatible changes into the system.
+
+Fedora Modularity addresses both sides of this problem. Fedora will still ship a standard release under its traditional policy. However, it will also ship a set of modules that define alternative versions of popular software. Those in the “too fast” camp still have the benefit of Fedora’s newer kernel and other general platform enhancements. In addition, they still have access to older frameworks or toolchains that support their applications.
+
+In addition, those users who like to live closer to the edge can access newer software than was available at release time.
+
+### What is Modularity not?
+
+Modularity is not a drop-in replacement for [Software Collections][2]. These two technologies try to solve many of the same problems, but have distinct differences.
+
+Software Collections install different versions of packages in parallel on the system. However, their downside is that each installation exists in its own namespaced portion of the filesystem. Furthermore, each application that relies on them needs to be told where to find them.
+
+With Modularity, only one version of a package exists on the system, but the user can choose which one. The advantage is that this version lives in a standard location on the system. The package requires no special changes to applications that rely on it. Feedback from user studies shows most users don’t actually rely on parallel installation. Containerization and virtualization solve that problem.
+
+### Why not just use containers?
+
+This is another common question. Why would a user want modules when they could just use containers? The answer is, someone still has to maintain the software in the containers. Modules provide pre-packaged content for those containers that users don’t need to maintain, update and patch on their own. This is how Fedora takes the traditional value of a distribution and moves it into the new, containerized world.
+
+Here’s an example of how Modularity solves problems for users of Node.js and Review Board.
+
+### Node.js
+
+Many readers may be familiar with Node.js, a popular server-side JavaScript runtime. Node.js has an even/odd release policy. Its community supports even-numbered releases (6.x, 8.x, 10.x, etc.) for around 30 months. Meanwhile, they support odd-numbered releases that are essentially developer previews for 9 months.
+
+Due to this cycle, Fedora carried only the most recent even-numbered version of Node.js in its stable repositories. It avoided the odd-numbered versions entirely since their lifecycle was shorter than Fedora’s, and generally not aligned with a Fedora release. This didn’t sit well with some Fedora users, who wanted access to the latest and greatest enhancements.
+
+Thanks to Modularity, Fedora 28 shipped with not just one, but three versions of Node.js to satisfy both developers and stable deployments. Fedora 28’s traditional repository shipped with Node.js 8.x. This version was the most recent long-term stable version at release time. The Modular repositories (available by default on Fedora 28 Server edition) also made the older Node.js 6.x release and the newer Node.js 9.x development release available.
+
+Additionally, Node.js released 10.x upstream just days after Fedora 28. In the past, users who wanted to deploy that version had to wait until Fedora 29, or use sources from outside Fedora. However, thanks again to Modularity, Node.js 10.x is already [available][3] in the Modular Updates-Testing repository for Fedora 28.
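+On a Fedora 28 Server system, those streams can be inspected and enabled with dnf’s module subcommands. A hypothetical session (the stream names follow the versions discussed above) might look like this:
+
+```
+sudo dnf module list nodejs          # shows the available streams, e.g. 6, 8, 10
+sudo dnf module install nodejs:10    # install the Node.js 10.x stream
+node --version
+```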
+
+### Review Board
+
+Review Board is a popular Django application for performing code reviews. Fedora included Review Board from Fedora 13 all the way until Fedora 21. At that point, Fedora moved to Django 1.7. Review Board was unable to keep up, due to backwards-incompatible changes in Django’s database support. It remained alive in EPEL for RHEL/CentOS 7, simply because those releases had fortunately frozen on Django 1.6. Nevertheless, its time in Fedora was apparently over.
+
+However, with the advent of Modularity, Fedora could again ship the older Django as a non-default module stream. As a result, Review Board has been restored to Fedora as a module. Fedora carries both supported releases from upstream: 2.5.x and 3.0.x.
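+Enabling a non-default stream like this should follow the same pattern; the module and stream names below are assumptions for illustration rather than verified values:
+
+```
+sudo dnf module list reviewboard           # see which streams Fedora offers
+sudo dnf module install reviewboard:3.0    # pick the 3.0.x stream
+```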
+
+### Putting the pieces together
+
+Fedora has always provided users with a wide range of software to use. Fedora Modularity now provides them with deeper choices for which versions of the software they need. The next few years will be very exciting for Fedora, as developers and users start putting together their software in new and exciting (or old and exciting) ways.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/wp-content/uploads/2018/05/f28-server-modularity-816x345.jpg
+
+作者:[Stephen Gallagher][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/sgallagh/
+[b]: https://github.com/lujun9972
+[1]: https://fedoraproject.org/wiki/Updates_Policy#Stable_Releases
+[2]: https://www.softwarecollections.org
+[3]: https://bodhi.fedoraproject.org/updates/FEDORA-MODULAR-2018-2b0846cb86
diff --git a/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md
index 03a6fa6494..a16e604774 100644
--- a/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md
+++ b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (Flowsnow)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (An introduction to the Pyramid web framework for Python)
diff --git a/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md b/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md
index 3ca8fba288..52b46c1adb 100644
--- a/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md
+++ b/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md
@@ -1,3 +1,5 @@
+Translating by cycoe
+Cycoe 翻译中
What's a hero without a villain? How to add one to your Python game
======

diff --git a/sources/tech/20180530 Introduction to the Pony programming language.md b/sources/tech/20180530 Introduction to the Pony programming language.md
deleted file mode 100644
index 2292c65fc2..0000000000
--- a/sources/tech/20180530 Introduction to the Pony programming language.md
+++ /dev/null
@@ -1,95 +0,0 @@
-beamrolling is translating.
-Introduction to the Pony programming language
-======
-
-
-
-At [Wallaroo Labs][1], where I'm the VP of engineering, we're building a [high-performance, distributed stream processor][2] written in the [Pony][3] programming language. Most people haven't heard of Pony, but it has been an excellent choice for Wallaroo, and it might be an excellent choice for your next project, too.
-
-> "A programming language is just another tool. It's not about syntax. It's not about expressiveness. It's not about paradigms or models. It's about managing hard problems." —Sylvan Clebsch, creator of Pony
-
-I'm a contributor to the Pony project, but here I'll touch on why Pony is a good choice for applications like Wallaroo and share ways I've used Pony. If you are interested in a more in-depth look at why we use Pony to write Wallaroo, we have a [blog post][4] about that.
-
-### What is Pony?
-
-You can think of Pony as something like "Rust meets Erlang." Pony sports buzzworthy features. It is:
-
- * Type-safe
- * Memory-safe
- * Exception-safe
- * Data-race-free
- * Deadlock-free
-
-
-
-Additionally, it's compiled to efficient native code, and it's developed in the open and available under a two-clause BSD license.
-
-That's a lot of features, but here I'll focus on the few that were key to my company adopting Pony.
-
-### Why Pony?
-
-Writing fast, safe, efficient, highly concurrent programs is not easy with most of our existing tools. "Fast, efficient, and highly concurrent" is an achievable goal, but throw in "safe," and things get a lot harder. With Wallaroo, we wanted to accomplish all four, and Pony has made it easy to achieve.
-
-#### Highly concurrent
-
-Pony makes concurrency easy. Part of the way it does that is by providing an opinionated concurrency story. In Pony, all concurrency happens via the [actor model][5].
-
-The actor model is most famous for its implementations in Erlang and Akka. It has been around since the 1970s, and details vary widely from implementation to implementation. What doesn't vary is that all computation is executed by actors that communicate via asynchronous messaging.
-
-Think of the actor model this way: objects in object-oriented programming are state + synchronous methods and actors are state + asynchronous methods.
-
-When an actor receives a message, it executes a corresponding method. That method might operate on a state that is accessible by only that actor. The actor model allows us to use a mutable state in a concurrency-safe manner. Every actor is single-threaded. Two methods within an actor are never run concurrently. This means that, within a given actor, data updates cannot cause data races or other problems commonly associated with threads and mutable states.
-
-#### Fast and efficient
-
-Pony actors are scheduled with an efficient work-stealing scheduler. There's a single Pony scheduler per available CPU. The thread-per-core concurrency model is part of Pony's attempt to work in concert with the CPU to operate as efficiently as possible. The Pony runtime attempts to be as CPU-cache friendly as possible. The less your code thrashes the cache, the better it will run. Pony aims to help your code play nice with CPU caches.
-
-The Pony runtime also features per-actor heaps so that, during garbage collection, there's no "stop the world" garbage collection step. This means your program is always doing at least some work. As a result, Pony programs end up with very consistent performance and predictable latencies.
-
-#### Safe
-
-The Pony type system introduces a novel concept: reference capabilities, which make data safety part of the type system. The type of every variable in Pony includes information about how the data can be shared between actors. The Pony compiler uses the information to verify, at compile time, that your code is data-race- and deadlock-free.
-
-If this sounds a bit like Rust, it's because it is. Pony's reference capabilities and Rust's borrow checker both provide data safety; they just approach it in different ways and have different tradeoffs.
-
-### Is Pony right for you?
-
-Deciding whether to use a new programming language for a non-hobby project is hard. You must weigh the appropriateness of the tool against its immaturity compared to other solutions. So, what about Pony and you?
-
-Pony might be the right solution if you have a hard concurrency problem to solve. Concurrent applications are Pony's raison d'être. If you can accomplish what you want in a single-threaded Python script, you probably don't need Pony. If you have a hard concurrency problem, you should consider Pony and its powerful data-race-free, concurrency-aware type system.
-
-You'll get a compiler that will prevent you from introducing many concurrency-related bugs and a runtime that will give you excellent performance characteristics.
-
-### Getting started with Pony
-
-If you're ready to get started with Pony, your first visit should be the [Learn section][6] of the Pony website. There you will find directions for installing the Pony compiler and resources for learning the language.
-
-If you'd like to contribute to the language you are using, we have some [beginner-friendly issues][7] waiting for you on GitHub.
-
-Also, I can't wait to start talking with you on [our IRC channel][8] and the [Pony mailing list][9].
-
-To learn more about Pony, attend Sean Allen's talk, [Pony: How I learned to stop worrying and embrace an unproven technology][10], at the [20th OSCON][11], July 16-19, 2018, in Portland, Ore.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/5/pony
-
-作者:[Sean T Allen][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/seantallen
-[1]:http://www.wallaroolabs.com/
-[2]:https://github.com/wallaroolabs/wallaroo
-[3]:https://www.ponylang.org/
-[4]:https://blog.wallaroolabs.com/2017/10/why-we-used-pony-to-write-wallaroo/
-[5]:https://en.wikipedia.org/wiki/Actor_model
-[6]:https://www.ponylang.org/learn/
-[7]:https://github.com/ponylang/ponyc/issues?q=is%3Aissue+is%3Aopen+label%3A%22complexity%3A+beginner+friendly%22
-[8]:https://webchat.freenode.net/?channels=%23ponylang
-[9]:https://pony.groups.io/g/user
-[10]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/213590
-[11]:https://conferences.oreilly.com/oscon/oscon-or
diff --git a/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md b/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md
deleted file mode 100644
index 6cbea61308..0000000000
--- a/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md
+++ /dev/null
@@ -1,125 +0,0 @@
-Qalculate! – The Best Calculator Application in The Entire Universe
-======
-I have been a GNU/Linux user and a [Debian][1] user for more than a decade. As I started using the desktop more and more, it seemed to me that apart from a few web-based services, most of my needs were being met with [desktop applications][2] within Debian itself.
-
-One such need was an application that could convert between different units of measurement. While there are, and were, many web services that can do the same, I wanted something that could do all this and more on my desktop, both for privacy reasons and to avoid hunting for a web service to do one thing or another. My search ended when I found Qalculate!.
-
-### Qalculate! The most versatile calculator application
-
-![Qalculator is the best calculator app][3]
-
-This is what aptitude says about [Qalculate!][4] and I cannot put it in better terms:
-
-> Powerful and easy to use desktop calculator – GTK+ version
->
-> Qalculate! is small and simple to use but with much power and versatility underneath. Features include customizable functions, units, arbitrary precision, plotting, and a graphical interface that uses a one-line fault-tolerant expression entry (although it supports optional traditional buttons).
-
-It also had a KDE interface in its previous avatar, but at least in Debian testing, only the GTK+ version is offered, as can also be seen from the GitHub [repo][5].
-
-Needless to say, Qalculate! is available in the Debian repository and hence can easily be installed [using the apt command][6] or through the software center in Debian-based distributions like Ubuntu. It is also available for Windows and macOS.
-
-#### Features of Qalculate!
-
-Now, while it would take too long to go through the whole list of functionality it offers, allow me to list some of the functional areas, followed by a few screenshots of just a couple of the functions Qalculate! provides. The idea is basically to familiarize you with a couple of basic methods and then leave it up to you to enjoy exploring what all Qalculate! can do.
-
- * Algebra
- * Calculus
- * Combinatorics
- * Complex Numbers
- * Data Sets
- * Date & Time
- * Economics
- * Exponents & Logarithms
- * Geometry
- * Logical
- * Matrices & Vectors
- * Miscellaneous
- * Number Theory
- * Statistics
- * Trigonometry
-
-
-
-#### Using Qalculate!
-
-Using Qalculate! is not complicated. You can even type expressions in simple natural language. However, I recommend [reading the manual][7] to utilize the full potential of Qalculate!
-
-![qalculate byte to gibibyte conversion ][8]
-
-![conversion from celcius degrees to fahreneit][9]
-
-#### qalc is the command line version of Qalculate!
-
-You can achieve the same results as in Qalculate! with its command-line sibling, qalc:
-```
-$ qalc 62499836 byte to gibibyte
-62499836 * byte = approx. 0.058207508 gibibyte
-
-$ qalc 40 degree celsius to fahrenheit
-(40 * degree) * celsius = 104 deg*oF
-
-```
-
-I shared the command-line interface so that people who don’t like GUI interfaces and prefer the command line (CLI), or who have headless nodes (no GUI), which are pretty common in server environments, could also use qalc.
-
-If you want to use it in scripts, libqalculate is probably the way to go, and seeing how both qalculate-gtk and qalc depend on it, it should be good enough.
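-For quick shell scripts, though, calling qalc directly is often enough. Here is a small sketch (the terse `-t` flag, assumed here to print only the result, is worth checking against `qalc --help` on your system):
-
-```
-#!/bin/bash
-# Convert a byte count given on the command line into gibibytes.
-bytes="$1"
-gib=$(qalc -t "$bytes byte to gibibyte")
-echo "$bytes bytes = $gib"
-```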
-
-Just to share, you could also explore how to plot series data, but I will leave that and other uses to you. Don’t forget to check /usr/share/doc/qalculate/index.html to see all the different functionalities that Qalculate! has.
-
-Note: Debian prefers [gnuplot][10] for showcasing the pretty graphs that can come out of it.
-
-#### Bonus Tip: You can thank the developer via command line in Debian
-
-If you use Debian and like any package, you can quickly thank the Debian developer or maintainer of that package using:
-```
-reportbug --kudos $PACKAGENAME
-
-```
-
-Since I liked Qalculate!, I would like to give a big shout-out to the Debian developer and maintainer Vincent Legout for the fantastic work he has done.
-```
-reportbug --kudos qalculate
-
-```
-
-I would also suggest reading my detailed article on using reportbug tool for [bug reporting in Debian][11].
-
-#### The opinion of a Polymer Chemist on Qalculate!
-
-Through my fellow author [Philip Prado][12], we contacted Mr. Timothy Meyers, currently a student working in a polymer lab as a polymer chemist.
-
-His professional opinion on Qalculate! is:
-
-> This looks like almost any scientist to use as any type of data calculations statistics could use this program issue would be do you know the commands and such to make it function
->
-> I feel like there’s some Physics constants that are missing but off the top of my head I can’t think of what they are but I feel like there’s not very many [fluid dynamics][13] stuff in there and also some different like [light absorption][14] coefficients for different compounds but that’s just a chemist in me I don’t know if those are super necessary. [Free energy][15] might be one
-
-In the end, I just want to share that this is a mere introduction to what Qalculate! can do; it is limited only by what you want to get done and your imagination. I hope you like Qalculate!
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/qalculate/
-
-作者:[Shirish][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://itsfoss.com/author/shirish/
-[1]:https://www.debian.org/
-[2]:https://itsfoss.com/essential-linux-applications/
-[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/qalculate-app-featured-1-800x450.jpeg
-[4]:https://qalculate.github.io/
-[5]:https://github.com/Qalculate
-[6]:https://itsfoss.com/apt-command-guide/
-[7]:https://qalculate.github.io/manual/index.html
-[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/qalculate-byte-conversion.png
-[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/qalculate-gtk-weather-conversion.png
-[10]:http://www.gnuplot.info/
-[11]:https://itsfoss.com/bug-report-debian/
-[12]:https://itsfoss.com/author/phillip/
-[13]:https://en.wikipedia.org/wiki/Fluid_dynamics
-[14]:https://en.wikipedia.org/wiki/Absorption_(electromagnetic_radiation)
-[15]:https://en.wikipedia.org/wiki/Gibbs_free_energy
diff --git a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md
index c489b0f0f1..664c054913 100644
--- a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md
+++ b/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md
@@ -1,5 +1,3 @@
-scoutydren is translating
-
3 open source alternatives to Adobe Lightroom
======
diff --git a/sources/tech/20180612 7 open source tools to make literature reviews easy.md b/sources/tech/20180612 7 open source tools to make literature reviews easy.md
index 96edb68eff..674f4ea44b 100644
--- a/sources/tech/20180612 7 open source tools to make literature reviews easy.md
+++ b/sources/tech/20180612 7 open source tools to make literature reviews easy.md
@@ -1,3 +1,4 @@
+translated by lixinyuxx
7 open source tools to make literature reviews easy
======
diff --git a/sources/tech/20180621 How to connect to a remote desktop from Linux.md b/sources/tech/20180621 How to connect to a remote desktop from Linux.md
deleted file mode 100644
index ced3b233dc..0000000000
--- a/sources/tech/20180621 How to connect to a remote desktop from Linux.md
+++ /dev/null
@@ -1,136 +0,0 @@
-How to connect to a remote desktop from Linux
-======
-
-
-
-A [remote desktop][1], according to Wikipedia, is "a software or operating system feature that allows a personal computer's desktop environment to be run remotely on one system (usually a PC, but the concept applies equally to a server), while being displayed on a separate client device."
-
-In other words, a remote desktop is used to access an environment running on another computer. For example, the [ManageIQ/Integration tests][2] repository's pull request (PR) testing system exposes a Virtual Network Computing (VNC) connection port so I can remotely view my PRs being tested in real time. Remote desktops are also used to help customers solve computer problems: with the customer's permission, you can establish a VNC or Remote Desktop Protocol (RDP) connection to see or interactively access the computer to troubleshoot or repair the problem.
-
-These connections are made using remote desktop connection software, and there are many options available. I use [Remmina][3] because I like its minimal, easy-to-use user interface (UI). It's written in GTK+ and is open source under the GNU GPL license.
-
-In this article, I'll explain how to use the Remmina client to connect remotely from a Linux computer to a Windows 10 system and a Red Hat Enterprise Linux 7 system.
-
-### Install Remmina on Linux
-
-First, you need to install Remmina on the computer you'll use to access the other computer(s) remotely. If you're using Fedora, you can run the following command to install Remmina:
-```
-sudo dnf install -y remmina
-
-```
-
-If you want to install Remmina on a different Linux platform, follow these [installation instructions][4]. You should then find Remmina with your other apps (Remmina is selected in this image).
-
-
-
-Launch Remmina by clicking on the icon. You should see a screen that resembles this:
-
-
-
-Remmina offers several types of connections, including RDP, which is used to connect to Windows-based computers, and VNC, which is used to connect to Linux machines. As you can see in the top-left corner above, Remmina's default setting is RDP.
-
-### Connecting to Windows 10
-
-Before you can connect to a Windows 10 computer through RDP, you must change some permissions to allow remote desktop sharing and connections through your firewall.
-
-[Note: Windows 10 Home has no RDP feature listed.][5]
-
-To enable remote desktop sharing, in **File Explorer** right-click on **My Computer → Properties → Remote Settings** and, in the pop-up that opens, check **Allow remote connections to this computer** , then select **Apply**.
-
-
-
-Next, enable remote desktop connections through your firewall. First, search for **firewall settings** in the **Start** menu and select **Allow an app through Windows Firewall**.
-
-
-
-In the window that opens, look for **Remote Desktop** under **Allowed apps and features**. Check the box(es) in the **Private** and/or **Public** columns, depending on the type of network(s) you will use to access this desktop. Click **OK**.
-
-
-
-Go to the Linux computer you use to remotely access the Windows PC and launch Remmina. Enter the IP address of your Windows computer and hit the Enter key. (How do I locate my IP address [in Linux][6] and [Windows 10][7]?) When prompted, enter your username and password and click OK.
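-
-As a quick reference, you can list a Linux machine's IP addresses with standard tools (shown below); on the Windows side, running `ipconfig` in a command prompt serves the same purpose:
-```
-ip addr show     # detailed interface information
-hostname -I      # just the assigned IP address(es)
-
-```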
-
-
-
-If you're asked to accept the certificate, select OK.
-
-
-
-You should be able to see your Windows 10 computer's desktop.
-
-
-
-### Connecting to Red Hat Enterprise Linux 7
-
-To set permissions to enable remote access on your RHEL7 computer, open **All Settings** on the Linux desktop.
-
-
-
-Click on the Sharing icon, and this window will open:
-
-
-
-If **Screen Sharing** is off, click on it. A window will open, where you can slide it into the **On** position. If you want to allow remote connections to control the desktop, set **Allow Remote Control** to **On**. You can also select between two access options: one that prompts the computer's primary user to accept or deny the connection request, and another that allows connection authentication with a password. At the bottom of the window, select the network interface where connections are allowed, then close the window.
-
-Next, open **Firewall Settings** from **Applications Menu → Sundry → Firewall**.
-
-
-
-Check the box next to vnc-server (as shown above) and close the window. Then, head to Remmina on your remote computer, enter the IP address of the Linux desktop you want to connect with, select **VNC** as the protocol, and hit the **Enter** key.
-
-
-
-If you previously chose the authentication option **New connections must ask for access** , the RHEL system's user will see a prompt like this:
-
-
-
-Select **Accept** for the remote connection to succeed.
-
-If you chose the option to authenticate the connection with a password, Remmina will prompt you for the password.
-
-
-
-Enter the password and hit **OK** , and you should be connected to the remote computer.
-
-
-
-### Using Remmina
-
-Remmina offers a tabbed UI, much like a web browser. In the top-left corner, as shown in the screenshot above, you can see two tabs: one for the previously established Windows 10 connection and a new one for the RHEL connection.
-
-On the left-hand side of the window, there is a toolbar with options such as **Resize Window**, **Full-Screen Mode**, **Preferences**, **Screenshot**, **Disconnect**, and more. Explore them and see which ones work best for you.
-
-You can also create saved connections in Remmina by clicking on the **+** (plus) sign in the top-left corner. Fill in the form with details specific to your connection and click **Save**. Here is an example Windows 10 RDP connection:
-
-
-
-The next time you open Remmina, the connection will be available.
-
-
-
-Just click on it, and your connection will be established without re-entering the details.
-
-### Additional info
-
-When you use remote desktop software, all the operations you perform take place on the remote desktop and use its resources; Remmina (or similar software) is just a way to interact with that desktop. You can also access a computer remotely through SSH, but SSH usually limits you to a text-only terminal session on that computer.
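-
-For comparison, a text-only SSH session is a single command; the user name and address below are just placeholders:
-```
-ssh myuser@192.168.1.25
-
-```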
-
-You should also note that enabling remote connections with your computer could cause serious damage if an attacker uses this method to gain access to your computer. Therefore, it is wise to disallow remote desktop connections and block related services in your firewall when you are not actively using Remote Desktop.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/6/linux-remote-desktop
-
-作者:[Kedar Vijay Kulkarni][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/kkulkarn
-[1]:https://en.wikipedia.org/wiki/Remote_desktop_software
-[2]:https://github.com/ManageIQ/integration_tests
-[3]:https://www.remmina.org/wp/
-[4]:https://www.tecmint.com/remmina-remote-desktop-sharing-and-ssh-client/
-[5]:https://superuser.com/questions/1019203/remote-desktop-settings-missing#1019212
-[6]:https://opensource.com/article/18/5/how-find-ip-address-linux
-[7]:https://www.groovypost.com/howto/find-windows-10-device-ip-address/
diff --git a/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md b/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md
deleted file mode 100644
index a9d540adae..0000000000
--- a/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md
+++ /dev/null
@@ -1,113 +0,0 @@
-How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers
-======
-**Some applications and games built with OpenGL support and packaged as Flatpak fail to start with proprietary Nvidia drivers. This article explains how to get such Flatpak applications or games to start, without installing the open source drivers (Nouveau).**
-
-Here's an example. I'm using the proprietary Nvidia drivers on my Ubuntu 18.04 desktop (`nvidia-driver-390`) and when I try to launch the latest Krita (packaged as a Flatpak), it fails to start with the following errors:
-```
-$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=krita --file-forwarding org.kde.krita
-Gtk-Message: Failed to load module "canberra-gtk-module"
-Gtk-Message: Failed to load module "canberra-gtk-module"
-libGL error: No matching fbConfigs or visuals found
-libGL error: failed to load driver: swrast
-Could not initialize GLX
-
-```
-
-To fix Flatpak games and applications not starting when using OpenGL with proprietary Nvidia graphics drivers, you'll need to install a runtime for your currently installed proprietary Nvidia drivers. Here's how to do this.
-
-**1\. Add the FlatHub repository if you haven't already. You can find exact instructions for your Linux distribution [here][1].**
-
-**2\. Now you'll need to figure out the exact version of the proprietary Nvidia drivers installed on your system.**
-
-_This step is dependent on the Linux distribution you're using, and I can't cover all cases. The instructions below are oriented toward Ubuntu (and Ubuntu flavors), but hopefully you can figure out the Nvidia driver version installed on your system by yourself._
-
-To do this in Ubuntu, open `Software & Updates` , switch to the `Additional Drivers` tab and note the name of the Nvidia driver package.
-
-As an example, this is `nvidia-driver-390` in my case, as you can see here:
-
-
-
-That's not all. We've only found the major version of the Nvidia driver, but we'll also need to know the minor version. To get the exact Nvidia driver version, which we'll need for the next step, run this command (it should work in any Debian-based Linux distribution, like Ubuntu, Linux Mint and so on):
-```
-apt-cache policy NVIDIA-PACKAGE-NAME
-
-```
-
-Where NVIDIA-PACKAGE-NAME is the Nvidia driver package name listed in `Software & Updates`. For example, to see the exact installed version of the `nvidia-driver-390` package, run this command:
-```
-$ apt-cache policy nvidia-driver-390
-nvidia-driver-390:
- Installed: 390.48-0ubuntu3
- Candidate: 390.48-0ubuntu3
- Version table:
- * 390.48-0ubuntu3 500
- 500 http://ro.archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages
- 100 /var/lib/dpkg/status
-
-```
-
-In this command's output, look for the `Installed` section and note the version numbers (excluding `-0ubuntu3` and anything similar). Now we know the exact version of the installed Nvidia drivers (`390.48` in my example). Remember this because we'll need it for the next step.
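-
-If you prefer to pull just that version string directly, a `dpkg` query works too on Debian-based systems (an optional shortcut, not part of the original steps; it mirrors the `apt-cache` result above):
-```
-$ dpkg -s nvidia-driver-390 | grep '^Version'
-Version: 390.48-0ubuntu3
-
-```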
-
-**3\. And finally, you can install the Nvidia runtime for your installed proprietary Nvidia graphics drivers from FlatHub.**
-
-To list all the Nvidia runtime packages available on FlatHub, you can use this command:
-```
-flatpak remote-ls flathub | grep nvidia
-
-```
-
-Hopefully the runtime for your installed Nvidia drivers is available on FlatHub. You can now proceed to install the runtime by using this command:
-
- * For 64bit systems:
-
-
-```
-flatpak install flathub org.freedesktop.Platform.GL.nvidia-MAJORVERSION-MINORVERSION
-
-```
-
-Replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
-
-For example, to install the runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
-```
-flatpak install flathub org.freedesktop.Platform.GL.nvidia-390-48
-
-```
-
- * For 32bit systems (or to be able to run 32bit applications or games on 64bit), install the 32bit runtime using:
-
-
-```
-flatpak install flathub org.freedesktop.Platform.GL32.nvidia-MAJORVERSION-MINORVERSION
-
-```
-
-Once again, replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
-
-For example, to install the 32bit runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
-```
-flatpak install flathub org.freedesktop.Platform.GL32.nvidia-390-48
-
-```
-
-That is all you need to do to get applications or games packaged as Flatpak and built with OpenGL to run with the proprietary Nvidia drivers.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxuprising.com/2018/06/how-to-get-flatpak-apps-and-games-built.html
-
-作者:[Logix][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://plus.google.com/118280394805678839070
-[1]:https://flatpak.org/setup/
-[2]:https://www.linuxuprising.com/2018/06/free-painting-software-krita-410.html
-[3]:https://www.linuxuprising.com/2018/06/winepak-is-flatpak-repository-for.html
-[4]:https://github.com/winepak/applications/issues/23
-[5]:https://github.com/flatpak/flatpak/issues/138
diff --git a/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md b/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md
deleted file mode 100644
index 3f804ffe9e..0000000000
--- a/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md
+++ /dev/null
@@ -1,265 +0,0 @@
-How To Mount Google Drive Locally As Virtual File System In Linux
-======
-
-
-
-[**Google Drive**][1] is one of the most popular cloud storage providers on the planet. As of 2017, over 800 million users were actively using this service worldwide. Even though the number of users has increased dramatically, Google hasn’t released an official Google Drive client for Linux yet. But that didn’t stop the Linux community. Every now and then, developers have brought out a few Google Drive clients for the Linux operating system. In this guide, we will look at three unofficial Google Drive clients for Linux. Using these clients, you can mount Google Drive locally as a virtual file system and access your Drive files on your Linux box. Read on.
-
-### 1. Google-drive-ocamlfuse
-
-**google-drive-ocamlfuse** is a FUSE filesystem for Google Drive, written in OCaml. For those wondering, FUSE, which stands for **F**ilesystem in **Use**rspace, is a project that allows users to create virtual file systems at the user level. **google-drive-ocamlfuse** allows you to mount your Google Drive on a Linux system. It features read/write access to ordinary files and folders, read-only access to Google Docs, Sheets, and Slides, support for multiple Google Drive accounts, duplicate file handling, access to your Drive trash directory, and more.
-
-#### Installing google-drive-ocamlfuse
-
-google-drive-ocamlfuse is available in the [**AUR**][2], so you can install it using any AUR helper program, for example [**Yay**][3].
-```
-$ yay -S google-drive-ocamlfuse
-
-```
-
-On Ubuntu:
-```
-$ sudo add-apt-repository ppa:alessandro-strada/ppa
-$ sudo apt-get update
-$ sudo apt-get install google-drive-ocamlfuse
-
-```
-
-To install the latest beta version, do:
-```
-$ sudo add-apt-repository ppa:alessandro-strada/google-drive-ocamlfuse-beta
-$ sudo apt-get update
-$ sudo apt-get install google-drive-ocamlfuse
-
-```
-
-#### Usage
-
-Once installed, run the following command to launch the **google-drive-ocamlfuse** utility from your Terminal:
-```
-$ google-drive-ocamlfuse
-
-```
-
-When you run this for the first time, the utility will open your web browser and ask for permission to access your Google Drive files. Once you grant authorization, all the necessary config files and folders it needs to mount your Google Drive will be created automatically.
-
-![][5]
-
-After successful authentication, you will see the following message in your Terminal.
-```
-Access token retrieved correctly.
-
-```
-
-You’re good to go now. Close the web browser and then create a mount point to mount your google drive files.
-```
-$ mkdir ~/mygoogledrive
-
-```
-
-Finally, mount your Google Drive using the command:
-```
-$ google-drive-ocamlfuse ~/mygoogledrive
-
-```
-
-Congratulations! You can now access your files either from the Terminal or from your file manager.
-
-From **Terminal** :
-```
-$ ls ~/mygoogledrive
-
-```
-
-From **File manager** :
-
-![][6]
-
-If you have more than one account, use the **-label** option to distinguish between accounts, like below.
-```
-$ google-drive-ocamlfuse -label label [mountpoint]
-
-```
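-
-For example, mounting two accounts side by side might look like this (the label names and mount points here are only illustrative):
-```
-$ google-drive-ocamlfuse -label personal ~/drive-personal
-$ google-drive-ocamlfuse -label work ~/drive-work
-
-```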
-
-Once you’re done, unmount the FUSE filesystem using the command:
-```
-$ fusermount -u ~/mygoogledrive
-
-```
-
-For more details, refer to the man pages.
-```
-$ google-drive-ocamlfuse --help
-
-```
-
-Also, do check the [**official wiki**][7] and the [**project GitHub repository**][8] for more details.
-
-### 2. GCSF
-
-**GCSF** is a FUSE filesystem based on Google Drive, written in the **Rust** programming language. The name GCSF comes from the Romanian phrase “**G**oogle **C**onduce **S**istem de **F**ișiere”, which means “Google Drive Filesystem” in English. Using GCSF, you can mount your Google Drive as a local virtual file system and access the contents from the Terminal or file manager. You might wonder how it differs from other Google Drive FUSE projects, for example **google-drive-ocamlfuse**. The developer of GCSF replied to a similar [comment on Reddit][9]: “GCSF tends to be faster in several cases (listing files recursively, reading large files from Drive). The caching strategy it uses also leads to very fast reads (x4-7 improvement compared to google-drive-ocamlfuse) for files that have been cached, at the cost of using more RAM”.
-
-#### Installing GCSF
-
-GCSF is available in the [**AUR**][10], so Arch Linux users can install it using any AUR helper, for example [**Yay**][3].
-```
-$ yay -S gcsf-git
-
-```
-
-For other distributions, do the following.
-
-Make sure you have installed Rust on your system.
-
-Make sure the **pkg-config** and **fuse** packages are installed. They are available in the default repositories of most Linux distributions. For example, on Ubuntu and derivatives, you can install them using the command:
-```
-$ sudo apt-get install -y libfuse-dev pkg-config
-
-```
-
-Once all the dependencies are installed, run the following command to install GCSF:
-```
-$ cargo install gcsf
-
-```
-
-#### Usage
-
-First, we need to authorize our google drive. To do so, simply run:
-```
-$ gcsf login ostechnix
-
-```
-
-You must specify a session name. Replace **ostechnix** with your own session name. You will see output like the example below, with a URL to authorize access to your Google Drive account.
-
-![][11]
-
-Just copy the above URL, open it in your browser, and click **allow** to give GCSF permission to access your Google Drive contents. Once you have authenticated, you will see output like the following.
-```
-Successfully logged in. Credentials saved to "/home/sk/.config/gcsf/ostechnix".
-
-```
-
-GCSF will create a configuration file in **$XDG_CONFIG_HOME/gcsf/gcsf.toml**, which is usually defined as **$HOME/.config/gcsf/gcsf.toml**. Credentials are stored in the same directory.
-
-Next, create a directory to mount your google drive contents.
-```
-$ mkdir ~/mygoogledrive
-
-```
-
-Then, edit **/etc/fuse.conf** file:
-```
-$ sudo vi /etc/fuse.conf
-
-```
-
-Uncomment the following line to allow non-root users to specify the allow_other or allow_root mount options.
-```
-user_allow_other
-
-```
-
-Save and close the file.
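-
-If you would rather not open an editor, the same change can be made with a one-line `sed` command (this assumes the option is already present in the file, just commented out with a leading `#`):
-```
-$ sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
-
-```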
-
-Finally, mount your google drive using command:
-```
-$ gcsf mount ~/mygoogledrive -s ostechnix
-
-```
-
-Sample output:
-```
-INFO gcsf > Creating and populating file system...
-INFO gcsf > File sytem created.
-INFO gcsf > Mounting to /home/sk/mygoogledrive
-INFO gcsf > Mounted to /home/sk/mygoogledrive
-INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
-INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
-
-```
-
-Again, replace **ostechnix** with your session name. You can view the existing sessions using the command:
-```
-$ gcsf list
-Sessions:
-- ostechnix
-
-```
-
-You can now access your google drive contents either from the Terminal or from File manager.
-
-From **Terminal** :
-```
-$ ls ~/mygoogledrive
-
-```
-
-From **File manager** :
-
-![][12]
-
-If you don’t know where your Google Drive is mounted, use the **df** or **mount** command as shown below.
-```
-$ df -h
-Filesystem Size Used Avail Use% Mounted on
-udev 968M 0 968M 0% /dev
-tmpfs 200M 1.6M 198M 1% /run
-/dev/sda1 20G 7.5G 12G 41% /
-tmpfs 997M 0 997M 0% /dev/shm
-tmpfs 5.0M 4.0K 5.0M 1% /run/lock
-tmpfs 997M 0 997M 0% /sys/fs/cgroup
-tmpfs 200M 40K 200M 1% /run/user/1000
-GCSF 15G 857M 15G 6% /home/sk/mygoogledrive
-
-$ mount | grep GCSF
-GCSF on /home/sk/mygoogledrive type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
-
-```
-
-Once done, unmount the Google Drive using the command:
-```
-$ fusermount -u ~/mygoogledrive
-
-```
-
-Check the [**GCSF GitHub repository**][13] for more details.
-
-### 3. Tuxdrive
-
-**Tuxdrive** is yet another unofficial Google Drive client for Linux. We wrote a detailed guide about Tuxdrive a while ago; please check the following link.
-
-Of course, there were a few other unofficial Google Drive clients available in the past, such as Grive2 and Syncdrive, but it seems they have been discontinued. I will keep updating this list as I come across more active Google Drive clients.
-
-And, that’s all for now, folks. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.google.com/drive/
-[2]:https://aur.archlinux.org/packages/google-drive-ocamlfuse/
-[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive.png
-[6]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-2.png
-[7]:https://github.com/astrada/google-drive-ocamlfuse/wiki/Configuration
-[8]:https://github.com/astrada/google-drive-ocamlfuse
-[9]:https://www.reddit.com/r/DataHoarder/comments/8vlb2v/google_drive_as_a_file_system/e1oh9q9/
-[10]:https://aur.archlinux.org/packages/gcsf-git/
-[11]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-3.png
-[12]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-4.png
-[13]:https://github.com/harababurel/gcsf
diff --git a/sources/tech/20180906 What a shell dotfile can do for you.md b/sources/tech/20180906 What a shell dotfile can do for you.md
index 35593e1e32..16ee0936e3 100644
--- a/sources/tech/20180906 What a shell dotfile can do for you.md
+++ b/sources/tech/20180906 What a shell dotfile can do for you.md
@@ -1,3 +1,11 @@
+[#]: collector: (lujun9972)
+[#]: translator: (runningwater)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What a shell dotfile can do for you)
+[#]: via: (https://opensource.com/article/18/9/shell-dotfile)
+[#]: author: (H.Waldo Grunenwald https://opensource.com/users/gwaldo)
What a shell dotfile can do for you
======
@@ -223,7 +231,7 @@ via: https://opensource.com/article/18/9/shell-dotfile
作者:[H.Waldo Grunenwald][a]
选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
+译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20180926 HTTP- Brief History of HTTP.md b/sources/tech/20180926 HTTP- Brief History of HTTP.md
new file mode 100644
index 0000000000..64e2abfd6b
--- /dev/null
+++ b/sources/tech/20180926 HTTP- Brief History of HTTP.md
@@ -0,0 +1,286 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (HTTP: Brief History of HTTP)
+[#]: via: (https://hpbn.co/brief-history-of-http/#http-09-the-one-line-protocol)
+[#]: author: (Ilya Grigorik https://www.igvita.com/)
+
+HTTP: Brief History of HTTP
+======
+
+### Introduction
+
+The Hypertext Transfer Protocol (HTTP) is one of the most ubiquitous and widely adopted application protocols on the Internet: it is the common language between clients and servers, enabling the modern web. From its simple beginnings as a single keyword and document path, it has become the protocol of choice not just for browsers, but for virtually every Internet-connected software and hardware application.
+
+In this chapter, we will take a brief historical tour of the evolution of the HTTP protocol. A full discussion of the varying HTTP semantics is outside the scope of this book, but an understanding of the key design changes of HTTP, and the motivations behind each, will give us the necessary background for our discussions on HTTP performance, especially in the context of the many upcoming improvements in HTTP/2.
+
+### §HTTP 0.9: The One-Line Protocol
+
+The original HTTP proposal by Tim Berners-Lee was designed with simplicity in mind, so as to help with the adoption of his other nascent idea: the World Wide Web. The strategy appears to have worked: aspiring protocol designers, take note.
+
+In 1991, Berners-Lee outlined the motivation for the new protocol and listed several high-level design goals: file transfer functionality, ability to request an index search of a hypertext archive, format negotiation, and an ability to refer the client to another server. To prove the theory in action, a simple prototype was built, which implemented a small subset of the proposed functionality:
+
+ * Client request is a single ASCII character string.
+
+ * Client request is terminated by a carriage return (CRLF).
+
+ * Server response is an ASCII character stream.
+
+
+
+ * Server response is a hypertext markup language (HTML).
+
+ * Connection is terminated after the document transfer is complete.
+
+
+
+
+However, even that sounds a lot more complicated than it really is. What these rules enable is an extremely simple, Telnet-friendly protocol, which some web servers support to this very day:
+
+```
+$> telnet google.com 80
+
+Connected to 74.125.xxx.xxx
+
+GET /about/
+
+(hypertext response)
+(connection closed)
+```
+
+The request consists of a single line: `GET` method and the path of the requested document. The response is a single hypertext document—no headers or any other metadata, just the HTML. It really couldn’t get any simpler. Further, since the previous interaction is a subset of the intended protocol, it unofficially acquired the HTTP 0.9 label. The rest, as they say, is history.
+
+From these humble beginnings in 1991, HTTP took on a life of its own and evolved rapidly over the coming years. Let us quickly recap the features of HTTP 0.9:
+
+ * Client-server, request-response protocol.
+
+ * ASCII protocol, running over a TCP/IP link.
+
+ * Designed to transfer hypertext documents (HTML).
+
+ * The connection between server and client is closed after every request.
+
+
+```
+Popular web servers, such as Apache and Nginx, still support the HTTP 0.9 protocol—in part, because there is not much to it! If you are curious, open up a Telnet session and try accessing google.com, or your own favorite site, via HTTP 0.9 and inspect the behavior and the limitations of this early protocol.
+```
+
+### §HTTP/1.0: Rapid Growth and Informational RFC
+
+The period from 1991 to 1995 is one of rapid coevolution of the HTML specification, a new breed of software known as a "web browser," and the emergence and quick growth of the consumer-oriented public Internet infrastructure.
+
+```
+##### §The Perfect Storm: Internet Boom of the Early 1990s
+
+Building on Tim Berners-Lee’s initial browser prototype, a team at the National Center for Supercomputing Applications (NCSA) decided to implement their own version. With that, the first popular browser was born: NCSA Mosaic. One of the programmers on the NCSA team, Marc Andreessen, partnered with Jim Clark to found Mosaic Communications in October 1994. The company was later renamed Netscape, and it shipped Netscape Navigator 1.0 in December 1994. By this point, it was already clear that the World Wide Web was bound to be much more than just an academic curiosity.
+
+In fact, that same year the first World Wide Web conference was organized in Geneva, Switzerland, which led to the creation of the World Wide Web Consortium (W3C) to help guide the evolution of HTML. Similarly, a parallel HTTP Working Group (HTTP-WG) was established within the IETF to focus on improving the HTTP protocol. Both of these groups continue to be instrumental to the evolution of the Web.
+
+Finally, to create the perfect storm, CompuServe, AOL, and Prodigy began providing dial-up Internet access to the public within the same 1994–1995 time frame. Riding on this wave of rapid adoption, Netscape made history with a wildly successful IPO on August 9, 1995—the Internet boom had arrived, and everyone wanted a piece of it!
+```
+
+The growing list of desired capabilities of the nascent Web and their use cases on the public Web quickly exposed many of the fundamental limitations of HTTP 0.9: we needed a protocol that could serve more than just hypertext documents, provide richer metadata about the request and the response, enable content negotiation, and more. In turn, the nascent community of web developers responded by producing a large number of experimental HTTP server and client implementations through an ad hoc process: implement, deploy, and see if other people adopt it.
+
+From this period of rapid experimentation, a set of best practices and common patterns began to emerge, and in May 1996 the HTTP Working Group (HTTP-WG) published RFC 1945, which documented the "common usage" of the many HTTP/1.0 implementations found in the wild. Note that this was only an informational RFC: HTTP/1.0 as we know it is not a formal specification or an Internet standard!
+
+Having said that, an example HTTP/1.0 request should look very familiar:
+
+```
+$> telnet website.org 80
+
+Connected to xxx.xxx.xxx.xxx
+
+GET /rfc/rfc1945.txt HTTP/1.0
+User-Agent: CERN-LineMode/2.15 libwww/2.17b3
+Accept: */*
+
+HTTP/1.0 200 OK
+Content-Type: text/plain
+Content-Length: 137582
+Expires: Thu, 01 Dec 1997 16:00:00 GMT
+Last-Modified: Wed, 1 May 1996 12:45:26 GMT
+Server: Apache 0.84
+
+(plain-text response)
+(connection closed)
+```
+
+ 1. Request line with HTTP version number, followed by request headers
+
+ 2. Response status, followed by response headers
+
+
+
+
+The preceding exchange is not an exhaustive list of HTTP/1.0 capabilities, but it does illustrate some of the key protocol changes:
+
+ * Request may consist of multiple newline separated header fields.
+
+ * Response object is prefixed with a response status line.
+
+ * Response object has its own set of newline separated header fields.
+
+ * Response object is not limited to hypertext.
+
+ * The connection between server and client is closed after every request.
+
+
+
+
+Both the request and response headers were kept as ASCII encoded, but the response object itself could be of any type: an HTML file, a plain text file, an image, or any other content type. Hence, the "hypertext transfer" part of HTTP became a misnomer not long after its introduction. In reality, HTTP has quickly evolved to become a hypermedia transport, but the original name stuck.
+
+In addition to media type negotiation, the RFC also documented a number of other commonly implemented capabilities: content encoding, character set support, multi-part types, authorization, caching, proxy behaviors, date formats, and more.
+
+```
+Almost every server on the Web today can and will still speak HTTP/1.0. Except that, by now, you should know better! Requiring a new TCP connection per request imposes a significant performance penalty on HTTP/1.0; see [Three-Way Handshake][1], followed by [Slow-Start][2].
+```
+
+### §HTTP/1.1: Internet Standard
+
+The work on turning HTTP into an official IETF Internet standard proceeded in parallel with the documentation effort around HTTP/1.0 and happened over a period of roughly four years: between 1995 and 1999. In fact, the first official HTTP/1.1 standard is defined in RFC 2068, which was officially released in January 1997, roughly six months after the publication of HTTP/1.0. Then, two and a half years later, in June of 1999, a number of improvements and updates were incorporated into the standard and were released as RFC 2616.
+
+The HTTP/1.1 standard resolved a lot of the protocol ambiguities found in earlier versions and introduced a number of critical performance optimizations: keepalive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, transfer encodings, and request pipelining.
+
+With these capabilities in place, we can now inspect a typical HTTP/1.1 session as performed by any modern HTTP browser and client:
+
+```
+$> telnet website.org 80
+Connected to xxx.xxx.xxx.xxx
+
+GET /index.html HTTP/1.1
+Host: website.org
+User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4)... (snip)
+Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
+Accept-Encoding: gzip,deflate,sdch
+Accept-Language: en-US,en;q=0.8
+Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
+Cookie: __qca=P0-800083390... (snip)
+
+HTTP/1.1 200 OK
+Server: nginx/1.0.11
+Connection: keep-alive
+Content-Type: text/html; charset=utf-8
+Via: HTTP/1.1 GWA
+Date: Wed, 25 Jul 2012 20:23:35 GMT
+Expires: Wed, 25 Jul 2012 20:23:35 GMT
+Cache-Control: max-age=0, no-cache
+Transfer-Encoding: chunked
+
+100
+
+(snip)
+
+100
+(snip)
+
+0
+
+GET /favicon.ico HTTP/1.1
+Host: www.website.org
+User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4)... (snip)
+Accept: */*
+Referer: http://website.org/
+Connection: close
+Accept-Encoding: gzip,deflate,sdch
+Accept-Language: en-US,en;q=0.8
+Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
+Cookie: __qca=P0-800083390... (snip)
+
+HTTP/1.1 200 OK
+Server: nginx/1.0.11
+Content-Type: image/x-icon
+Content-Length: 3638
+Connection: close
+Last-Modified: Thu, 19 Jul 2012 17:51:44 GMT
+Cache-Control: max-age=315360000
+Accept-Ranges: bytes
+Via: HTTP/1.1 GWA
+Date: Sat, 21 Jul 2012 21:35:22 GMT
+Expires: Thu, 31 Dec 2037 23:55:55 GMT
+Etag: W/PSA-GAu26oXbDi
+
+(icon data)
+(connection closed)
+```
+
+ 1. Request for HTML file, with encoding, charset, and cookie metadata
+
+ 2. Chunked response for original HTML request
+
+ 3. Number of octets in the chunk expressed as an ASCII hexadecimal number (256 bytes)
+
+ 4. End of chunked stream response
+
+ 5. Request for an icon file made on same TCP connection
+
+ 6. Inform server that the connection will not be reused
+
+ 7. Icon response, followed by connection close
+
+
+
+
+Phew, there is a lot going on in there! The first and most obvious difference is that we have two object requests, one for an HTML page and one for an image, both delivered over a single connection. This is connection keepalive in action, which allows us to reuse the existing TCP connection for multiple requests to the same host and deliver a much faster end-user experience; see [Optimizing for TCP][3].
+
+To terminate the persistent connection, notice that the second client request sends an explicit `close` token to the server via the `Connection` header. Similarly, the server can notify the client of the intent to close the current TCP connection once the response is transferred. Technically, either side can terminate the TCP connection without such signal at any point, but clients and servers should provide it whenever possible to enable better connection reuse strategies on both sides.
+
+```
+HTTP/1.1 changed the semantics of the HTTP protocol to use connection keepalive by default. Meaning, unless told otherwise (via `Connection: close` header), the server should keep the connection open by default.
+
+However, this same functionality was also backported to HTTP/1.0 and enabled via the `Connection: Keep-Alive` header. Hence, if you are using HTTP/1.1, technically you don’t need the `Connection: Keep-Alive` header, but many clients choose to provide it nonetheless.
+```
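+
+As an illustration (my own sketch, not one of the book’s examples), an HTTP/1.0 request that wants the connection kept open must say so explicitly with the `Connection: Keep-Alive` header, while an equivalent HTTP/1.1 request is persistent by default:
+
+```
+GET /index.html HTTP/1.0
+Host: website.org
+Connection: Keep-Alive
+
+GET /index.html HTTP/1.1
+Host: website.org
+```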
+
+Additionally, the HTTP/1.1 protocol added content, encoding, character set, and even language negotiation, transfer encoding, caching directives, client cookies, plus a dozen other capabilities that can be negotiated on each request.
+
+We are not going to dwell on the semantics of every HTTP/1.1 feature. This is a subject for a dedicated book, and many great ones have been written already. Instead, the previous example serves as a good illustration of both the quick progress and evolution of HTTP, as well as the intricate and complicated dance of every client-server exchange. There is a lot going on in there!
+
+```
+For a good reference on all the inner workings of the HTTP protocol, check out O’Reilly’s HTTP: The Definitive Guide by David Gourley and Brian Totty.
+```
+
+### §HTTP/2: Improving Transport Performance
+
+Since its publication, RFC 2616 has served as a foundation for the unprecedented growth of the Internet: billions of devices of all shapes and sizes, from desktop computers to the tiny web devices in our pockets, speak HTTP every day to deliver news, video, and millions of other web applications we have all come to depend on in our lives.
+
+What began as a simple, one-line protocol for retrieving hypertext quickly evolved into a generic hypermedia transport, and now a decade later can be used to power just about any use case you can imagine. Both the ubiquity of servers that can speak the protocol and the wide availability of clients to consume it means that many applications are now designed and deployed exclusively on top of HTTP.
+
+Need a protocol to control your coffee pot? RFC 2324 has you covered with the Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)—originally an April Fools’ Day joke by IETF, and increasingly anything but a joke in our new hyper-connected world.
+
+> The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol that can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.
+>
+> RFC 2616: HTTP/1.1, June 1999
+
+The simplicity of the HTTP protocol is what enabled its original adoption and rapid growth. In fact, it is now not unusual to find embedded devices—sensors, actuators, and coffee pots alike—using HTTP as their primary control and data protocols. But under the weight of its own success and as we increasingly continue to migrate our everyday interactions to the Web—social, email, news, and video, and increasingly our entire personal and job workspaces—it has also begun to show signs of stress. Users and web developers alike are now demanding near real-time responsiveness and protocol performance from HTTP/1.1, which it simply cannot meet without some modifications.
+
+To meet these new challenges, HTTP must continue to evolve, and hence the HTTPbis working group announced a new initiative for HTTP/2 in early 2012:
+
+> There is emerging implementation experience and interest in a protocol that retains the semantics of HTTP without the legacy of HTTP/1.x message framing and syntax, which have been identified as hampering performance and encouraging misuse of the underlying transport.
+>
+> The working group will produce a specification of a new expression of HTTP’s current semantics in ordered, bi-directional streams. As with HTTP/1.x, the primary target transport is TCP, but it should be possible to use other transports.
+>
+> HTTP/2 charter, January 2012
+
+The primary focus of HTTP/2 is on improving transport performance and enabling both lower latency and higher throughput. The major version increment sounds like a big step, which it is and will be as far as performance is concerned, but it is important to note that none of the high-level protocol semantics are affected: all HTTP headers, values, and use cases are the same.
+
+Any existing website or application can and will be delivered over HTTP/2 without modification: you do not need to modify your application markup to take advantage of HTTP/2. The HTTP servers will have to speak HTTP/2, but that should be a transparent upgrade for the majority of users. The only difference, if the working group meets its goal, should be that our applications are delivered with lower latency and better utilization of the network link!
+
+Having said that, let’s not get ahead of ourselves. Before we get to the new HTTP/2 protocol features, it is worth taking a step back and examining our existing deployment and performance best practices for HTTP/1.1. The HTTP/2 working group is making fast progress on the new specification, but even if the final standard was already done and ready, we would still have to support older HTTP/1.1 clients for the foreseeable future—realistically, a decade or more.
+
+--------------------------------------------------------------------------------
+
+via: https://hpbn.co/brief-history-of-http/#http-09-the-one-line-protocol
+
+作者:[Ilya Grigorik][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.igvita.com/
+[b]: https://github.com/lujun9972
+[1]: https://hpbn.co/building-blocks-of-tcp/#three-way-handshake
+[2]: https://hpbn.co/building-blocks-of-tcp/#slow-start
+[3]: https://hpbn.co/building-blocks-of-tcp/#optimizing-for-tcp
diff --git a/sources/tech/20181123 Three SSH GUI Tools for Linux.md b/sources/tech/20181123 Three SSH GUI Tools for Linux.md
deleted file mode 100644
index 9691a737ca..0000000000
--- a/sources/tech/20181123 Three SSH GUI Tools for Linux.md
+++ /dev/null
@@ -1,176 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: subject: (Three SSH GUI Tools for Linux)
-[#]: via: (https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux)
-[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
-[#]: url: ( )
-
-Three SSH GUI Tools for Linux
-======
-
-
-
-At some point in your career as a Linux administrator, you’re going to use Secure Shell (SSH) to remote into a Linux server or desktop. Chances are, you already have. In some instances, you’ll be SSH’ing into multiple Linux servers at once. In fact, Secure Shell might well be one of the most-used tools in your Linux toolbox. Because of this, you’ll want to make the experience as efficient as possible. For many admins, nothing is as efficient as the command line. However, there are users out there who do prefer a GUI tool, especially when working from a desktop machine to remote into and work on a server.
-
-If you happen to prefer a good GUI tool, you’ll be happy to know there are a couple of outstanding graphical tools for SSH on Linux. Couple that with a unique terminal window that allows you to remote into multiple machines from the same window, and you have everything you need to work efficiently. Let’s take a look at these three tools and find out if one (or more) of them is perfectly apt to meet your needs.
-
-I’ll be demonstrating these tools on [Elementary OS][1], but they are all available for most major distributions.
-
-### PuTTY
-
-Anyone that’s been around long enough knows about [PuTTY][2]. In fact, PuTTY is the de facto standard tool for connecting, via SSH, to Linux servers from the Windows environment. But PuTTY isn’t just for Windows: it can also be installed on Linux from within the standard repositories. PuTTY’s feature list includes:
-
- * Saved sessions.
-
- * Connect via IP address or hostname.
-
- * Define alternative SSH port.
-
- * Connection type definition.
-
- * Logging.
-
- * Options for keyboard, bell, appearance, connection, and more.
-
- * Local and remote tunnel configuration
-
- * Proxy support
-
- * X11 tunneling support
-
-
-
-
-The PuTTY GUI is mostly a way to save SSH sessions, so it’s easier to manage all of those various Linux servers and desktops you need to constantly remote into and out of. Once you’ve connected from PuTTY to the Linux server, you will have a terminal window in which to work. At this point, you may be asking yourself, why not just work from the terminal window? For some, the convenience of saving sessions does make PuTTY worth using.
-
-Installing PuTTY on Linux is simple. For example, you could issue the command on a Debian-based distribution:
-
-```
-sudo apt-get install -y putty
-```
-
-Once installed, you can either run the PuTTY GUI from your desktop menu or issue the command `putty`. In the PuTTY Configuration window (Figure 1), type the hostname or IP address in the HostName (or IP address) section, configure the port (if not the default 22), select SSH from the connection type, and click Open.
-
-![PuTTY Connection][4]
-
-Figure 1: The PuTTY Connection Configuration Window.
-
-[Used with permission][5]
-
-Once the connection is made, you’ll then be prompted for the user credentials on the remote server (Figure 2).
-
-![log in][7]
-
-Figure 2: Logging into a remote server with PuTTY.
-
-[Used with permission][5]
-
-To save a session (so you don’t have to always type the remote server information), fill out the IP address (or hostname), configure the port and connection type, and then (before you click Open) type a name for the connection in the top text area of the Saved Sessions section, and click Save. This will save the configuration for the session. To connect to a saved session, select it from the saved sessions window, click Load, and then click Open. You should then be prompted for your credentials on the remote server.
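-
-One more note: besides the GUI, PuTTY also accepts connection details as command-line arguments, which can be handy for scripts or desktop launchers (a brief sketch; the user and host below are placeholders):
-
-```
-putty -ssh -l jack -P 22 192.168.1.30
-```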
-
-### EasySSH
-
-Although [EasySSH][8] doesn’t offer the amount of configuration options found in PuTTY, it’s (as the name implies) incredibly easy to use. One of the best features of EasySSH is that it offers a tabbed interface, so you can have multiple SSH connections open and quickly switch between them. Other EasySSH features include:
-
- * Groups (so you can group tabs for an even more efficient experience).
-
- * Username/password save.
-
- * Appearance options.
-
- * Local and remote tunnel support.
-
-
-
-
-Installing EasySSH on a Linux desktop is simple, as the app can be installed via Flatpak (which does mean you must have Flatpak installed on your system). Once Flatpak is installed, add EasySSH with the commands:
-
-```
-sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
-
-sudo flatpak install flathub com.github.muriloventuroso.easyssh
-```
-
-Run EasySSH with the command:
-
-```
-flatpak run com.github.muriloventuroso.easyssh
-```
-
-The EasySSH app will open, where you can click the + button in the upper left corner. In the resulting window (Figure 3), configure your SSH connection as required.
-
-![Adding a connection][10]
-
-Figure 3: Adding a connection in EasySSH is simple.
-
-[Used with permission][5]
-
-Once you’ve added the connection, it will appear in the left navigation of the main window (Figure 4).
-
-![EasySSH][12]
-
-Figure 4: The EasySSH main window.
-
-[Used with permission][5]
-
-To connect to a remote server in EasySSH, select it from the left navigation and then click the Connect button (Figure 5).
-
-![Connecting][14]
-
-Figure 5: Connecting to a remote server with EasySSH.
-
-[Used with permission][5]
-
-The one caveat with EasySSH is that you must save the username and password in the connection configuration (otherwise the connection will fail). This means anyone with access to the desktop running EasySSH can remote into your servers without knowing the passwords. Because of this, you must always remember to lock your desktop screen any time you are away (and make sure to use a strong password). The last thing you want is to have a server vulnerable to unwanted logins.
-
-### Terminator
-
-Terminator is not actually an SSH GUI. Instead, Terminator functions as a single window that allows you to run multiple terminals (and even groups of terminals) at once. Effectively you can open Terminator, split the window vertical and horizontally (until you have all the terminals you want), and then connect to all of your remote Linux servers by way of the standard SSH command (Figure 6).
-
-![Terminator][16]
-
-Figure 6: Terminator split into three different windows, each connecting to a different Linux server.
-
-[Used with permission][5]
-
-To install Terminator, issue a command like:
-
-```
-sudo apt-get install -y terminator
-```
-
-Once installed, open the tool either from your desktop menu or with the command `terminator`. With the window open, you can right-click inside Terminator and select either Split Horizontally or Split Vertically. Continue splitting the terminal until you have exactly the number of terminals you need, and then start remoting into those servers.
-
-The caveat to using Terminator is that it is not a standard SSH GUI tool, in that it won’t save your sessions or give you quick access to those servers. In other words, you will always have to manually log into your remote Linux servers. However, being able to see your remote Secure Shell sessions side by side does make administering multiple remote machines quite a bit easier.
-
-### Few (But Worthwhile) Options
-
-There aren’t a lot of SSH GUI tools available for Linux. Why? Because most administrators prefer to simply open a terminal window and use the standard command-line tools to remotely access their servers. However, if you have a need for a GUI tool, you have two solid options and one terminal that makes logging into multiple machines slightly easier. Although there are only a few options for those looking for an SSH GUI tool, those that are available are certainly worth your time. Give one of these a try and see for yourself.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux
-
-作者:[Jack Wallen][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/jlwallen
-[b]: https://github.com/lujun9972
-[1]: https://elementary.io/
-[2]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
-[3]: https://www.linux.com/files/images/sshguis1jpg
-[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_1.jpg?itok=DiNTz_wO (PuTTY Connection)
-[5]: https://www.linux.com/licenses/category/used-permission
-[6]: https://www.linux.com/files/images/sshguis2jpg
-[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_2.jpg?itok=4ORsJlz3 (log in)
-[8]: https://github.com/muriloventuroso/easyssh
-[9]: https://www.linux.com/files/images/sshguis3jpg
-[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_3.jpg?itok=bHC2zlda (Adding a connection)
-[11]: https://www.linux.com/files/images/sshguis4jpg
-[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_4.jpg?itok=hhJzhRIg (EasySSH)
-[13]: https://www.linux.com/files/images/sshguis5jpg
-[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_5.jpg?itok=piFEFYTQ (Connecting)
-[15]: https://www.linux.com/files/images/sshguis6jpg
-[16]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_6.jpg?itok=-kYl6iSE (Terminator)
diff --git a/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md b/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md
deleted file mode 100644
index 094467698b..0000000000
--- a/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md
+++ /dev/null
@@ -1,335 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: subject: (14 Best ASCII Games for Linux That are Insanely Good)
-[#]: via: (https://itsfoss.com/best-ascii-games/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-[#]: url: ( )
-
-14 Best ASCII Games for Linux That are Insanely Good
-======
-
-Text-based or, should I say, [terminal-based games][1] were very popular a decade back – when you didn’t have visual masterpieces like God Of War, Red Dead Redemption 2, or Spiderman.
-
-Of course, the Linux platform has its share of good games – but not always the “latest and greatest”. But there are some ASCII games out there that you can never turn your back on.
-
-I’m not sure if you’d believe me, but some of these ASCII games proved to be very addictive (so it might take a while for me to resume work on the next article – or I might just get fired? Help me!)
-
-Jokes apart, let us take a look at the best ASCII games.
-
-**Note:** Installing ASCII games could be time-consuming (some might ask you to install additional dependencies or simply won’t work). You might even encounter some ASCII games that require you to build from source. So, we’ve filtered out only the ones that are easy to install/run – without breaking a sweat.
-
-### Things to do before Running or Installing an ASCII Game
-
-Some of the ASCII games might require you to install [Simple DirectMedia Layer][2], unless you already have it installed. So, just in case, you should install it first before trying to run any of the games mentioned in this article.
-
-For that, you just need to type in these commands:
-
-```
-sudo apt install libsdl2-2.0
-```
-
-```
-sudo apt install libsdl2_mixer-2.0
-```
-
-
-### Best ASCII Games for Linux
-
-![Best Ascii games for Linux][3]
-
-The games listed are in no particular order of ranking.
-
-#### 1. [Curse of War][4]
-
-![Curse of War ascii games][5]
-
-Curse of War is an interesting strategy game. You might find it a bit confusing at first, but once you get to know it – you’ll love it. I’d recommend taking a look at the rules of the game on its [homepage][4] before launching the game.
-
-You will be building infrastructure, securing resources, and directing your army to fight. All you have to do is place your flag in a good position to let your army take care of the rest. It’s not just about attacking – you also need to manage and secure resources to help win the fight.
-
-If you’ve never played any ASCII game before, be patient and spend some time learning it – to experience it to its fullest potential.
-
-##### How to install Curse of War?
-
-You will find it in the official repository. So, type in the following command to install it:
-
-```
-sudo apt install curseofwar
-```
-#### 2. ASCII Sector
-
-![ascii sector][6]
-
-Hate strategy games? Fret not – ASCII Sector is a game with a space setting that lets you explore a lot.
-
-The game isn’t just limited to exploration, either. Need some action? You’ve got that here as well. Of course, it’s not the best combat experience – but it is fun. It gets even more exciting when you see the variety of bases, missions, and quests. You’ll also encounter a leveling system in this tiny game, where you have to earn enough money or trade in order to upgrade your spaceship.
-
-The best part about this game is that you can create your own quests or play others’.
-
-##### How to install ASCII Sector?
-
-You need to first download and unpack the archived package from the [official site][7]. Once that’s done, open up your terminal and type these commands (replace the **Downloads** folder with the location where the unpacked folder exists; skip the first command if the folder resides directly in your home directory):
-
-```
-cd Downloads
-cd asciisec
-chmod +x asciisec
-./asciisec
-```
-
-#### 3. DoomRL
-
-![doom ascii game][8]
-
-You must know the classic game “Doom”. So, if you want a scaled-down, rogue-like take on it, DoomRL is for you. It is an ASCII-based game, in case you couldn’t tell.
-
-It’s a very tiny game with a lot of gameplay hours to have fun with.
-
-##### How to install DoomRL?
-
-Similar to what you did for ASCII Sector, you need to download the official archive from their [download page][9] and then extract it to a folder.
-
-After extracting it, type in these commands:
-
-```
-cd Downloads   # navigate to the location where the unpacked folder exists
-```
-
-```
-cd doomrl-linux-x64-0997
-chmod +x doomrl
-./doomrl
-```
-#### 4. Pyramid Builder
-
-![Pyramid Builder ascii game for Linux][10]
-
-Pyramid Builder is an innovative take on ASCII games, where you get to improve your civilization by helping build pyramids.
-
-You need to direct the workers to farm, unload the cargo, and move the gigantic stones to successfully build the pyramid.
-
-It is indeed a beautiful ASCII game to download.
-
-##### How to install Pyramid Builder?
-
-Simply head to its official site and download the package to unpack it. After extraction, navigate to the folder and run the executable file.
-
-```
-cd Downloads
-cd pyramid_builder_linux
-chmod +x pyramid_builder_linux.x86_64
-./pyramid_builder_linux.x86_64
-```
-#### 5. DiabloRL
-
-![Diablo ascii RPG game][11]
-
-If you’re an avid gamer, you must have heard about Blizzard’s Diablo 1. It is undoubtedly a good game.
-
-You get the chance to play a unique rendition of the game – as an ASCII game. DiabloRL is a turn-based rogue-like that is insanely good. You get to choose from a variety of classes (Warrior, Sorcerer, or Rogue). Every class results in a different gameplay experience with a different set of stats.
-
-Of course, personal preference will differ – but it’s a decent “unmake” of Diablo. What do you think?
-
-#### 6. Ninvaders
-
-![Ninvaders terminal game for Linux][12]
-
-Ninvaders is one of the best ASCII games simply because it’s so easy to pick up – an arcade game to kill time with.
-
-You have to defend against a horde of invaders – just finish them off before they get to you. It sounds very simple, but it is a challenging game.
-
-##### How to install Ninvaders?
-
-Similar to Curse of War, you can find this in the official repository. So, just type in this command to install it:
-
-```
-sudo apt install ninvaders
-```
-#### 7. Empire
-
-![Empire terminal game][13]
-
-Empire is a real-time strategy game for which you will need an active Internet connection. I’m personally not a fan of real-time strategy games, but if you are, you should really check out the [guide][14] to playing this game, because it can be very challenging to learn.
-
-The rectangular map contains cities, land, and water. You need to expand your city with an army, ships, planes, and other resources. By expanding quickly, you will be able to capture other cities by destroying them before they make a move.
-
-##### How to install Empire?
-
-Installing it is very simple, just type in the following command:
-
-```
-sudo apt install empire
-```
-
-#### 8. Nudoku
-
-![Nudoku is a terminal version game of Sudoku][15]
-
-Love Sudoku? Well, you have Nudoku – a clone of it. A perfect time-killing ASCII game while you relax.
-
-It presents you with three difficulty levels – easy, normal, and hard. If you want a challenge against the computer, the hard difficulty will be perfect! If you just want to chill, go for the easy one.
-
-##### How to install Nudoku?
-
-It’s very easy to get it installed, just type in the following command in the terminal:
-
-```
-sudo apt install nudoku
-```
-
-#### 9. Nethack
-
-A dungeons-and-dragons-style ASCII game which is one of the best out there. I believe it’s already one of your favorites if you knew about ASCII games for Linux in general.
-
-It features a lot of different levels (about 45) and comes packed in with a bunch of weapons, scrolls, potions, armor, rings, and gems. You can also choose permadeath as your mode to play it.
-
-It’s not just about killing here – you got a lot to explore.
-
-##### How to install Nethack?
-
-Simply follow the command below to install it:
-
-```
-sudo apt install nethack
-```
-
-#### 10. ASCII Jump
-
-![ascii jump game][16]
-
-ASCII Jump is a dead simple game where you have to slide along a variety of tracks – jumping, changing position, and moving as long as you can to cover the maximum distance.
-
-It’s really amazing to see what this ASCII game looks like (visually) even though it seems so simple. You can start with the training mode and then proceed to the world cup. You also get to choose your competitors and the hills on which you want to start the game.
-
-##### How to install Ascii Jump?
-
-To install the game, just type the following command:
-
-```
-sudo apt install asciijump
-```
-
-#### 11. Bastet
-
-![Bastet is tetris game in ascii form][17]
-
-Let’s just not pay any attention to the name – it’s actually a fun clone of the Tetris game.
-
-You shouldn’t expect it to be just another ordinary Tetris game – it will present you with the worst possible bricks to play with. Have fun!
-
-##### How to install Bastet?
-
-Open the terminal and type in the following command:
-
-```
-sudo apt install bastet
-```
-
-#### 12. Bombardier
-
-![Bombardier game in ascii form][18]
-
-Bombardier is yet another simple ASCII game which will keep you hooked on to it.
-
-Here, you have a helicopter (or whatever you’d like to call your aircraft) which drops lower with every cycle, and you need to throw bombs in order to destroy the blocks/buildings under you. The game also adds a pinch of humor with the messages it displays when you destroy a block. It is fun.
-
-##### How to install Bombardier?
-
-Bombardier is available in the official repository, so just type in the following in the terminal to install it:
-
-```
-sudo apt install bombardier
-```
-
-#### 13. Angband
-
-![Angband ascii game][19]
-
-A cool dungeon exploration game with a neat interface. You can see all the vital information on a single screen while you explore the game.
-
-It lets you pick your character from different races. You can be an Elf, Hobbit, Dwarf, or something else – there are nearly a dozen to choose from. Remember that you need to defeat the lord of darkness at the end – so upgrade your weapon as much as possible and get ready.
-
-##### How to install Angband?
-
-Simply type in the following command:
-
-```
-sudo apt install angband
-```
-
-#### 14. GNU Chess
-
-![GNU Chess is a chess game that you can play in Linux terminal][20]
-
-How can you not play chess? It is my favorite strategy game!
-
-But, GNU Chess can be tough to play unless you know algebraic notation to describe your next move. Being an ASCII game, it isn’t quite possible to interact with the board directly – so you type your move in that notation and it displays the output (while the computer thinks about its next move).
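-
-As a rough illustration (the exact prompts and output format vary between GNU Chess versions, so treat this as a sketch), a session goes something like this – you type a move such as e4 and the engine answers with its own:
-
-```
-gnuchess
-White (1) : e4
-My move is : e7e5
-```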
-
-##### How to install GNU Chess?
-
-If you’re aware of the algebraic notations of Chess, enter the following command to install it from the terminal:
-
-```
-sudo apt install gnuchess
-```
-
-#### Some Honorable Mentions
-
-As I mentioned earlier, we’ve tried to recommend the best ASCII games (and also the ones that are the easiest to install on your Linux machine).
-
-However, there are some iconic ASCII games which deserve attention and require a tad more effort to install (you will get the source code and need to build/install it yourself).
-
-Some of those games are:
-
-+ [Cataclysm: Dark Days Ahead][22]
-+ [Brogue][23]
-+ [Dwarf Fortress][24]
-
-You should follow our [guide to install software from source code][21].
-
-### Wrapping Up
-
-Which of the ASCII games mentioned seem perfect for you? Did we miss any of your favorites?
-
-Let us know your thoughts in the comments below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/best-ascii-games/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/best-command-line-games-linux/
-[2]: https://www.libsdl.org/
-[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/best-ascii-games-featured.png?resize=800%2C450&ssl=1
-[4]: http://a-nikolaev.github.io/curseofwar/
-[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/curseofwar-ascii-game.jpg?fit=800%2C479&ssl=1
-[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-sector-game.jpg?fit=800%2C424&ssl=1
-[7]: http://www.asciisector.net/download/
-[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/doom-rl-ascii-game.jpg?ssl=1
-[9]: https://drl.chaosforge.org/downloads
-[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/pyramid-builder-ascii-game.jpg?fit=800%2C509&ssl=1
-[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/diablo-rl-ascii-game.jpg?ssl=1
-[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ninvaders-ascii-game.jpg?fit=800%2C426&ssl=1
-[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/empire-ascii-game.jpg?fit=800%2C570&ssl=1
-[14]: http://www.wolfpackempire.com/infopages/Guide.html
-[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/nudoku-ascii-game.jpg?fit=800%2C434&ssl=1
-[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-jump.jpg?fit=800%2C566&ssl=1
-[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/bastet-tetris-clone-ascii.jpg?fit=800%2C465&ssl=1
-[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/bombardier.jpg?fit=800%2C571&ssl=1
-[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/angband-ascii-game.jpg?ssl=1
-[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/gnuchess-ascii-game.jpg?ssl=1
-[21]: https://itsfoss.com/install-software-from-source-code/
-[22]: https://github.com/CleverRaven/Cataclysm-DDA
-[23]: https://sites.google.com/site/broguegame/
-[24]: http://www.bay12games.com/dwarves/index.html
-
diff --git a/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md b/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md
deleted file mode 100644
index 04110b670e..0000000000
--- a/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md
+++ /dev/null
@@ -1,169 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 Unique Terminal Emulators for Linux)
-[#]: via: (https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux)
-[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
-
-4 Unique Terminal Emulators for Linux
-======
-
-Let’s face it, if you’re a Linux administrator, you’re going to work with the command line. To do that, you’ll be using a terminal emulator. Most likely, your distribution of choice came pre-installed with a default terminal emulator that gets the job done. But this is Linux, so you have a wealth of choices to pick from, and that ideology holds true for terminal emulators as well. In fact, if you open up your distribution’s GUI package manager (or search from the command line), you’ll find a trove of possible options. Of those, many are pretty straightforward tools; however, some are truly unique.
-
-In this article, I’ll highlight four such terminal emulators that will not only get the job done but do so while making the job a bit more interesting or fun. So, let’s take a look at these terminals.
-
-### Tilda
-
-[Tilda][1] is designed for Gtk and is a member of the cool drop-down family of terminals. That means the terminal is always running in the background, ready to drop down from the top of your monitor (such as Guake and Yakuake). What makes Tilda rise above many of the others is the number of configuration options available for the terminal (Figure 1).
-
-
-Tilda can be installed from the standard repositories. On an Ubuntu- (or Debian-) based distribution, the installation is as simple as:
-
-```
-sudo apt-get install tilda -y
-```
-
-Once installed, open Tilda from your desktop menu, which will also open the configuration window. Configure the app to suit your taste and then close the configuration window. You can then open and close Tilda by hitting the F1 hotkey. One caveat to using Tilda is that, after the first run, you won’t find any indication as to how to reach the configuration wizard. No worries. If you run the command tilda -C it will open the configuration window, while still retaining the options you’ve previously set.
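-
-For quick reference, that configuration command is simply:
-
-```
-tilda -C
-```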
-
-Available options include:
-
- * Terminal size and location
-
- * Font and color configurations
-
- * Auto Hide
-
- * Title
-
- * Custom commands
-
- * URL Handling
-
- * Transparency
-
- * Animation
-
- * Scrolling
-
- * And more
-
-
-
-
-What I like about these types of terminals is that they easily get out of the way when you don’t need them and are just a button click away when you do. For those that hop in and out of the terminal, a tool like Tilda is ideal.
-
-### Aterm
-
-Aterm holds a special place in my heart, as it was one of the first terminals I used that made me realize how flexible Linux was. This was back when AfterStep was my window manager of choice (which dates me a bit) and I was new to the command line. What Aterm offered was a terminal emulator that was highly customizable, while helping me learn the ins and outs of using the terminal (how to add options and switches to a command). “How?” you ask. Because Aterm never had a GUI for customization; to run Aterm with any special options, they had to be passed on the command line. For example, say you want to open Aterm with transparency enabled, green text, white highlights, and no scroll bar. To do this, issue the command:
-
-```
-aterm -tr -fg green -bg white +sb
-```
-
-The end result (with the top command running for illustration) would look like that shown in Figure 2.
-
-![Aterm][3]
-
-Figure 2: Aterm with a few custom options.
-
-[Used with permission][4]
-
-Of course, you must first install Aterm. Fortunately, the application is still found in the standard repositories, so installing on the likes of Ubuntu is as simple as:
-
-```
-sudo apt-get install aterm -y
-```
-
-If you want to always open Aterm with those options, your best bet is to create an alias in your ~/.bashrc file like so:
-
-```
-alias aterm='aterm -tr -fg green -bg white +sb'
-```
-
-Save that file and, when you issue the command aterm, it will always open with those options. For more about creating aliases, check out [this tutorial][5].
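-
-One small caveat (standard bash behavior, nothing specific to Aterm): shells that are already open won’t pick up the new alias until you reload the file:
-
-```
-source ~/.bashrc
-```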
-
-### Eterm
-
-Eterm is the second terminal that really showed me how much fun the Linux command line could be. Eterm is the default terminal emulator for the Enlightenment desktop. When I eventually migrated from AfterStep to Enlightenment (back in the early 2000s), I was afraid I’d lose out on all those cool aesthetic options. That turned out not to be the case. In fact, Eterm offered plenty of unique options, while making the task easier with a terminal toolbar. With Eterm, you can easily choose from a large number of background images (should you want one – Figure 3) via the Background > Pixmap menu entry.
-
-![Eterm][7]
-
-Figure 3: Selecting from one of the many background images for Eterm.
-
-[Used with permission][4]
-
-There are a number of other options to configure (such as font size, map alerts, toggle scrollbar, brightness, contrast, and gamma of background images, and more). The one thing you want to make sure of is that, after you’ve configured Eterm to suit your tastes, you click Eterm > Save User Settings (otherwise, all settings will be lost when you close the app).
-
-Eterm can be installed from the standard repositories, with a command such as:
-
-```
-sudo apt-get install eterm
-```
-
-### Extraterm
-
-[Extraterm][8] should probably win a few awards for coolest feature set of any terminal window project available today. The most unique feature of Extraterm is the ability to wrap commands in color-coded frames (blue for successful commands and red for failed commands - Figure 4).
-
-![Extraterm][10]
-
-Figure 4: Extraterm showing two failed command frames.
-
-[Used with permission][4]
-
-When you run a command, Extraterm will wrap the command in an isolated frame. If the command succeeds, the frame will be outlined in blue. Should the command fail, the frame will be outlined in red.
-
-Extraterm cannot be installed via the standard repositories. In fact, the only way to run Extraterm on Linux (at the moment) is to [download the precompiled binary][11] from the project’s GitHub page, extract the file, change into the newly created directory, and issue the command ./extraterm.
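-
-In practice, that boils down to something like the following sketch (the archive name is illustrative – substitute the actual release file you downloaded):
-
-```
-unzip extraterm-*.zip   # assumed archive name; use the file you actually downloaded
-cd extraterm-*/
-./extraterm
-```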
-
-Once the app is running, to enable frames you must first enable bash integration. To do that, open Extraterm and then right-click anywhere in the window to reveal the popup menu. Scroll until you see the entry for Inject Bash shell Integration (Figure 5). Select that entry and you can then begin using the frames option.
-
-![Extraterm][13]
-
-Figure 5: Injecting Bash integration for Extraterm.
-
-[Used with permission][4]
-
-If you run a command and don’t see a frame appear, you probably have to create a new frame rule for that command (as Extraterm only ships with a few default ones). To do that, click on the Extraterm menu button (three horizontal lines in the top right corner of the window), select Settings, and then click the Frames tab. In this window, scroll down and click the New Rule button. You can then add a command you want to work with the frames option (Figure 6).
-
-![frames][15]
-
-Figure 6: Adding a new rule for frames.
-
-[Used with permission][4]
-
-If, after this, you still don’t see frames appearing, download the extraterm-commands file from the [Download page][11], extract the file, change into the newly created directory, and issue the command sh setup_extraterm_bash.sh. That should enable frames for Extraterm.
-
-There are plenty more options available for Extraterm. I’m convinced that once you start playing around with this new take on the terminal window, you won’t want to go back to the standard terminal. Hopefully the developer will make this app available in the standard repositories soon (as it could easily become one of the most popular terminal emulators in use).
-
-### And Many More
-
-As you probably expected, there are quite a lot of terminals available for Linux. These four represent (at least for me) four unique takes on the task, each of which does a great job of helping you run the commands every Linux admin needs to run. If you aren’t satisfied with one of these, give your package manager a look to see what’s available. You are sure to find something that works perfectly for you.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux
-
-作者:[Jack Wallen][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/jlwallen
-[b]: https://github.com/lujun9972
-[1]: http://tilda.sourceforge.net/tildadoc.php
-[2]: https://www.linux.com/files/images/terminals2jpg
-[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_2.jpg?itok=gBkRLwDI (Aterm)
-[4]: https://www.linux.com/licenses/category/used-permission
-[5]: https://www.linux.com/blog/learn/2018/12/aliases-diy-shell-commands
-[6]: https://www.linux.com/files/images/terminals3jpg
-[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_3.jpg?itok=RVPTJAtK (Eterm)
-[8]: http://extraterm.org
-[9]: https://www.linux.com/files/images/terminals4jpg
-[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_4.jpg?itok=2n01qdwO (Extraterm)
-[11]: https://github.com/sedwards2009/extraterm/releases
-[12]: https://www.linux.com/files/images/terminals5jpg
-[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_5.jpg?itok=FdaE1Mpf (Extraterm)
-[14]: https://www.linux.com/files/images/terminals6jpg
-[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_6.jpg?itok=lQ1Zv5wq (frames)
diff --git a/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md b/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md
deleted file mode 100644
index 6d72cda348..0000000000
--- a/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md
+++ /dev/null
@@ -1,62 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Schedule a visit with the Emacs psychiatrist)
-[#]: via: (https://opensource.com/article/18/12/linux-toy-eliza)
-[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
-
-Schedule a visit with the Emacs psychiatrist
-======
-Eliza is a natural language processing chatbot hidden inside of one of Linux's most popular text editors.
-
-
-Welcome to another day of the 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
-
-Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
-
-Today's selection is a hidden gem inside of Emacs: Eliza, the Rogerian psychotherapist, a terminal toy ready to listen to everything you have to say.
-
-A brief aside: While this toy is amusing, your health is no laughing matter. Please take care of yourself this holiday season, physically and mentally, and if stress and anxiety from the holidays are having a negative impact on your wellbeing, please consider seeing a professional for guidance. It really can help.
-
-To launch [Eliza][1], first, you'll need to launch Emacs. There's a good chance Emacs is already installed on your system, but if it's not, it's almost certainly in your default repositories.
-
-Since I've been pretty fastidious about keeping this series in the terminal, launch Emacs with the **-nw** flag to keep it within your terminal emulator.
-
-```
-$ emacs -nw
-```
-
-Inside of Emacs, type M-x doctor to launch Eliza. For those of you like me from a Vim background who have no idea what this means, just hit escape, type x and then type doctor. Then, share all of your holiday frustrations.
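-
-If you would rather jump straight into the session from your shell, Emacs can also call the function at startup; assuming a standard Emacs build, something like this should work:
-
-```
-$ emacs -nw -f doctor
-```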
-
-Eliza goes way back, all the way to the mid-1960s at the MIT Artificial Intelligence Lab. [Wikipedia][2] has a rather fascinating look at her history.
-
-Eliza isn't the only amusement inside of Emacs. Check out the [manual][3] for a whole list of fun toys.
-
-
-![Linux toy: eliza animated][5]
-
-Do you have a favorite command-line toy that you think I ought to profile? We're running out of time, but I'd still love to hear your suggestions. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
-
-Be sure to check out yesterday's toy, [Head to the arcade in your Linux terminal with this Pac-man clone][6], and come back tomorrow for another!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/12/linux-toy-eliza
-
-作者:[Jason Baker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jason-baker
-[b]: https://github.com/lujun9972
-[1]: https://www.emacswiki.org/emacs/EmacsDoctor
-[2]: https://en.wikipedia.org/wiki/ELIZA
-[3]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Amusements.html
-[4]: /file/417326
-[5]: https://opensource.com/sites/default/files/uploads/linux-toy-eliza-animated.gif (Linux toy: eliza animated)
-[6]: https://opensource.com/article/18/12/linux-toy-myman
diff --git a/sources/tech/20181221 Large files with Git- LFS and git-annex.md b/sources/tech/20181221 Large files with Git- LFS and git-annex.md
index 2e7b9a9b74..29a76f810f 100644
--- a/sources/tech/20181221 Large files with Git- LFS and git-annex.md
+++ b/sources/tech/20181221 Large files with Git- LFS and git-annex.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (runningwater)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -96,7 +96,7 @@ via: https://anarc.at/blog/2018-12-21-large-files-with-git/
作者:[Anarc.at][a]
选题:[lujun9972][b]
-译者:[runningwater](https://github.com/runningwater)
+译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20181222 How to detect automatically generated emails.md b/sources/tech/20181222 How to detect automatically generated emails.md
index 2ccaeddeee..23b509a77b 100644
--- a/sources/tech/20181222 How to detect automatically generated emails.md
+++ b/sources/tech/20181222 How to detect automatically generated emails.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (wyxplus)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20181224 Go on an adventure in your Linux terminal.md b/sources/tech/20181224 Go on an adventure in your Linux terminal.md
deleted file mode 100644
index f1b46340bb..0000000000
--- a/sources/tech/20181224 Go on an adventure in your Linux terminal.md
+++ /dev/null
@@ -1,54 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Go on an adventure in your Linux terminal)
-[#]: via: (https://opensource.com/article/18/12/linux-toy-adventure)
-[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
-
-Go on an adventure in your Linux terminal
-======
-
-Our final day of the Linux command-line toys advent calendar ends with the beginning of a grand adventure.
-
-
-
-Today is the final day of our 24-day-long Linux command-line toys advent calendar. Hopefully, you've been following along, but if not, start back at [the beginning][1] and work your way through. You'll find plenty of games, diversions, and oddities for your Linux terminal.
-
-And while you may have seen some toys from our calendar before, we hope there’s at least one new thing for everyone.
-
-Today's toy was suggested by Opensource.com moderator [Joshua Allen Holm][2]:
-
-"If the last day of your advent calendar is not ESR's [Eric S. Raymond's] [open source release of Adventure][3], which retains use of the classic 'advent' command (Adventure in the BSD Games package uses 'adventure), I will be very, very, very disappointed. ;-)"
-
-What a perfect way to end our series.
-
-Colossal Cave Adventure (often just called Adventure) is a text-based game from the 1970s that gave rise to the entire adventure game genre. Despite its age, Adventure is still an easy way to lose hours as you explore a fantasy world, much like a Dungeons and Dragons dungeon master might lead you through an imaginary place.
-
-Rather than take you through the history of Adventure here, I encourage you to go read Joshua's [history of the game][4] itself and why it was resurrected and re-ported a few years ago. Then, go [clone the source][5] and follow the [installation instructions][6] to launch the game with **advent** on your system. Or, as Joshua mentions, another version of the game can be obtained from the **bsd-games** package, which is probably available from your default repositories in your distribution of choice.
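-
-If you take the source route, the clone step uses the repository linked above; the build command below is an assumption on my part, so defer to the INSTALL.adoc in the repository if it differs:
-
-```
-$ git clone https://gitlab.com/esr/open-adventure.git
-$ cd open-adventure
-$ make        # assumed build command – follow the repo's installation instructions if this differs
-```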
-
-Do you have a favorite command-line toy that we should have included? Our series concludes today, but we'd still love to feature some cool command-line toys in the new year. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
-
-Be sure to check out yesterday's toy, [The Linux command line can fetch fun from afar][7], and I'll see you next year!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/12/linux-toy-adventure
-
-作者:[Jason Baker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jason-baker
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/article/18/12/linux-toy-boxes
-[2]: https://opensource.com/users/holmja
-[3]: https://gitlab.com/esr/open-adventure (https://gitlab.com/esr/open-adventure)
-[4]: https://opensource.com/article/17/6/revisit-colossal-cave-adventure-open-adventure
-[5]: https://gitlab.com/esr/open-adventure
-[6]: https://gitlab.com/esr/open-adventure/blob/master/INSTALL.adoc
-[7]: https://opensource.com/article/18/12/linux-toy-remote
diff --git a/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md b/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md
deleted file mode 100644
index 3c4105e13f..0000000000
--- a/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md
+++ /dev/null
@@ -1,186 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( WangYueScream)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How To Display Thumbnail Images In Terminal)
-[#]: via: (https://www.ostechnix.com/how-to-display-thumbnail-images-in-terminal/)
-[#]: author: (SK https://www.ostechnix.com/author/sk/)
-
-How To Display Thumbnail Images In Terminal
-======
-
-
-A while ago, we discussed [**Fim**][1], a lightweight CLI image viewer application used to display various types of images, such as bmp, gif, jpeg, png, etc., from the command line. Today, I stumbled upon a similar utility named **‘lsix’**. It is like the ‘ls’ command in Unix-like systems, but for images only. lsix is a simple CLI utility designed to display thumbnail images in the Terminal using **Sixel** graphics. For those wondering, Sixel, short for six pixels, is a type of bitmap graphics format. It uses **ImageMagick**, so almost all file formats supported by ImageMagick will work fine.
-
-### Features
-
-Concerning the features of lsix, we can list the following:
-
- * Automatically detects if your Terminal supports Sixel graphics or not. If your Terminal doesn’t support Sixel, it will notify you to enable it.
- * Automatically detects the terminal background color. It uses terminal escape sequences to try to figure out the foreground and background colors of your Terminal application and will display the thumbnails clearly.
- * If there are more images in the directory, usually >21, lsix will display those images one row at a time, so you need not wait for the entire montage to be created.
- * Works well over SSH, so you can manipulate images stored on your remote web server without much hassle.
- * It supports non-bitmap graphics, such as .svg, .eps, .pdf, .xcf, etc.
- * Written in BASH, so works on almost all Linux distros.
-
-
-
-### Installing lsix
-
-Since lsix uses ImageMagick, make sure you have installed it. It is available in the default repositories of most Linux distributions. For example, on Arch Linux and its variants like Antergos and Manjaro Linux, ImageMagick can be installed using the command:
-
-```
-$ sudo pacman -S imagemagick
-```
-
-On Debian, Ubuntu, Linux Mint:
-
-```
-$ sudo apt-get install imagemagick
-```
-
-lsix doesn’t require any installation as it is just a BASH script. Just download it and move it to your $PATH. It’s that simple.
-
-Download the latest lsix version from the project’s GitHub page. I am going to download the lsix archive file using the command:
-
-```
-$ wget https://github.com/hackerb9/lsix/archive/master.zip
-```
-
-Extract the downloaded zip file:
-
-```
-$ unzip master.zip
-```
-
-This command will extract all contents into a folder named ‘lsix-master’. Copy the lsix binary from this directory to your $PATH, for example /usr/local/bin/.
-
-```
-$ sudo cp lsix-master/lsix /usr/local/bin/
-```
-
-Finally, make the lsix binary executable:
-
-```
-$ sudo chmod +x /usr/local/bin/lsix
-```
-
-That’s it. Now is the time to display thumbnails in the terminal itself.
-
-Before start using lsix, **make sure your Terminal supports Sixel graphics**.
-
-The developer developed lsix on an Xterm in **vt340 emulation mode**. However, he claims that lsix should work on any Sixel-compatible Terminal.
-
-Xterm supports Sixel graphics, but it isn’t enabled by default.
-
-You can launch Xterm with Sixel mode enabled using command (from another Terminal):
-
-```
-$ xterm -ti vt340
-```
-
-Alternatively, you can make vt340 the default terminal type for Xterm as described below.
-
-Edit the **.Xresources** file (if it is not available, just create it):
-
-```
-$ vi .Xresources
-```
-
-Add the following line:
-
-```
-xterm*decTerminalID : vt340
-```
-
-Press **ESC** and type **:wq** to save and close the file.
-
-Finally, run the following command to apply the changes:
-
-```
-$ xrdb -merge .Xresources
-```
-
-Now Xterm will start with Sixel mode enabled at every launch by default.
-
-### Display Thumbnail Images In Terminal
-
-Launch Xterm (don’t forget to start it in vt340 mode). Here is how Xterm looks on my system.
-
-
-Like I already stated, lsix is a very simple utility. It doesn’t have any command line flags or configuration files. All you have to do is just pass the path of your file as an argument like below.
-
-```
-$ lsix ostechnix/logo.png
-```
-
-
-
-If you run it without a path, it will display the thumbnail images in your current working directory. I have a few files in a directory named **ostechnix**.
-
-To display the thumbnails in this directory, just run:
-
-```
-$ lsix
-```
-
-
-
-See? The thumbnails of all files are displayed in the terminal itself.
-
-If you use the ‘ls’ command, you would just see the filenames, not thumbnails.
-
-![][3]
-
-You can also display a specific image or group of images of a specific type using wildcards.
-
-For example, to display a single image, just mention the full path of the image like below.
-
-```
-$ lsix girl.jpg
-```
-
-
-
-To display all images of a specific type, say PNG, use the wildcard character like below.
-
-```
-$ lsix *.png
-```
-
-![][4]
-
-For JPEG type images, the command would be:
-
-```
-$ lsix *jpg
-```
-
-The thumbnail image quality is surprisingly good. I thought lsix would just display blurry thumbnails. I was wrong. The thumbnails are clearly visible, just like in graphical image viewers.
-
-And, that’s all for now. As you can see, lsix is very similar to the ‘ls’ command, but it is only for displaying thumbnails. If you deal with a lot of images at work, lsix might be quite handy. Give it a try and let us know your thoughts on this utility in the comment section below. If you know of any similar tools, please suggest them as well. I will check and update this guide.
-
-More good stuff to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-display-thumbnail-images-in-terminal/
-
-作者:[SK][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[b]: https://github.com/lujun9972
-[1]: https://www.ostechnix.com/how-to-display-images-in-the-terminal/
-[3]: http://www.ostechnix.com/wp-content/uploads/2019/01/ls-command-1.png
-[4]: http://www.ostechnix.com/wp-content/uploads/2019/01/lsix-3.png
diff --git a/sources/tech/20190103 How to use Magit to manage Git projects.md b/sources/tech/20190103 How to use Magit to manage Git projects.md
deleted file mode 100644
index dbcb63d736..0000000000
--- a/sources/tech/20190103 How to use Magit to manage Git projects.md
+++ /dev/null
@@ -1,93 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to use Magit to manage Git projects)
-[#]: via: (https://opensource.com/article/19/1/how-use-magit)
-[#]: author: (Sachin Patil https://opensource.com/users/psachin)
-
-How to use Magit to manage Git projects
-======
-Emacs' Magit extension makes it easy to get started with Git version control.
-
-
-[Git][1] is an excellent [version control][2] tool for managing projects, but it can be hard for novices to learn. It's difficult to work from the Git command line unless you're familiar with the flags and options and the appropriate situations to use them. This can be discouraging and cause people to be stuck with very limited usage.
-
-Fortunately, most of today's integrated development environments (IDEs) include Git extensions that make using it a lot easier. One such Git extension available in Emacs is called [Magit][3].
-
-The Magit project has been around for 10 years and defines itself as "a Git porcelain inside Emacs." In other words, it's an interface where every action can be managed by pressing a key. This article walks you through the Magit interface and explains how to use it to manage a Git project.
-
-If you haven't already, [install Emacs][4], then [install Magit][5] before you continue with this tutorial.
-
-### Magit's interface
-
-Start by visiting a project directory in Emacs' [Dired mode][6]. For example, all my Emacs configurations are stored in the **~/.emacs.d/** directory, which is managed by Git.
-
-
-
-If you were working from the command line, you would enter **git status** to find a project's current status. Magit has a similar function: **magit-status**. You can call this function using **M-x magit-status** (short for the keystroke **Alt+x magit-status**). Your result will look something like this:
-
-
-
-Magit shows much more information than you would get from the **git status** command. It shows a list of untracked files, files that aren't staged, and staged files. It also shows the stash list and the most recent commits—all in a single window.
-
-If you want to know what has changed, use the Tab key. For example, if I move my cursor over the unstaged file **custom_functions.org** and press the Tab key, Magit will display the changes:
-
-
-
-This is similar to using the command **git diff custom_functions.org**. Staging a file is even easier. Simply move the cursor over a file and press the **s** key. The file will be quickly moved to the staged file list:
-
-
-
-To unstage a file, use the **u** key. It is quicker and more fun to use **s** and **u** instead of entering **git add -u** and **git reset HEAD** on the command line.
-
-### Commit changes
-
-In the same Magit window, pressing the **c** key will display a commit window that provides flags like **\--all** to stage all files or **\--signoff** to add a signoff line to a commit message.
-
-
-
-Move your cursor to the line where you want to enable a signoff flag and press Enter. This will highlight the **\--signoff** text, which indicates that the flag is enabled.
-
-
-
-Pressing **c** again will display the window to write the commit message.
-
-
-
-Finally, use **C-c C-c** (short form of the keys Ctrl+c Ctrl+c) to commit the changes.
-
-
-
-### Push changes
-
-Once the changes are committed, the commit line will appear in the **Recent commits** section.
-
-
-
-Place the cursor on that commit and press **p** to push the changes.
-
-I've uploaded a [demonstration][7] on YouTube if you want to get a feel for using Magit. I have just scratched the surface in this article. It has many cool features to help you with Git branches, rebasing, and more. You can find [documentation, support, and more][8] linked from Magit's homepage.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/1/how-use-magit
-
-作者:[Sachin Patil][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/psachin
-[b]: https://github.com/lujun9972
-[1]: https://git-scm.com
-[2]: https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control
-[3]: https://magit.vc
-[4]: https://www.gnu.org/software/emacs/download.html
-[5]: https://magit.vc/manual/magit/Installing-from-Melpa.html#Installing-from-Melpa
-[6]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Dired-Enter.html#Dired-Enter
-[7]: https://youtu.be/Vvw75Pqp7Mc
-[8]: https://magit.vc/
diff --git a/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md b/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md
deleted file mode 100644
index a2e31daf6c..0000000000
--- a/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md
+++ /dev/null
@@ -1,110 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Midori: A Lightweight Open Source Web Browser)
-[#]: via: (https://itsfoss.com/midori-browser)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-Midori: A Lightweight Open Source Web Browser
-======
-
-**Here’s a quick review of the lightweight, fast, open source web browser Midori, which has returned from the dead.**
-
-If you are looking for a lightweight [alternative web browser][1], try Midori.
-
-[Midori][2] is an open source web browser that focuses more on being lightweight than on providing a ton of features.
-
-If you have never heard of Midori, you might think that it is a new application but Midori was first released in 2007.
-
-Because it focused on speed, Midori soon gathered a niche following and became the default browser in lightweight Linux distributions like Bodhi Linux, SliTaz, etc.
-
-Other distributions like [elementary OS][3] have also used Midori as their default browser. But the development of Midori stalled around 2016 and its fans started wondering if Midori was dead already. elementary OS dropped it from its latest release, I believe, for this reason.
-
-The good news is that Midori is not dead. After almost two years of inactivity, the development resumed in the last quarter of 2018. A few extensions including an ad-blocker were added in the later releases.
-
-### Features of Midori web browser
-
-![Midori web browser][4]
-
-Here are some of the main features of the Midori browser
-
- * Written in Vala with GTK+3 and WebKit rendering engine.
- * Tabs, windows and session management
- * Speed dial
- * Saves tab for the next session by default
- * Uses DuckDuckGo as a default search engine. It can be changed to Google or Yahoo.
- * Bookmark management
- * Customizable and extensible interface
- * Extension modules can be written in C and Vala
- * Supports HTML5
- * An extremely limited set of extensions include an ad-blocker, colorful tabs etc. No third-party extensions.
- * Form history
- * Private browsing
- * Available for Linux and Windows
-
-
-
-Trivia: Midori is a Japanese word that means green. The Midori developer is not Japanese if you were guessing something along that line.
-
-### Experiencing Midori
-
-![Midori web browser in Ubuntu 18.04][5]
-
-I have been using Midori for the past few days. The experience is mostly fine. It supports HTML5 and renders the websites quickly. The ad-blocker is okay. The browsing experience is more or less smooth as you would expect in any standard web browser.
-
-The lack of extensions has always been a weak point of Midori so I am not going to talk about that.
-
-What I did notice is that it doesn’t support international languages. I couldn’t find a way to add new language support. It could not render the Hindi fonts at all and I am guessing it’s the same with many other non-[Romance languages][6].
-
-I also had my fair share of trouble with YouTube videos. Some videos would throw a playback error while others would run just fine.
-
-Midori didn’t eat my RAM like Chrome so that’s a big plus here.
-
-If you want to try out Midori, let’s see how you can get your hands on it.
-
-### Install Midori on Linux
-
-Midori is no longer available in the Ubuntu 18.04 repository. However, the newer versions of Midori can be easily installed using the [Snap packages][7].
-
-If you are using Ubuntu, you can find Midori (Snap version) in the Software Center and install it from there.
-
-![Midori browser is available in Ubuntu Software Center][8]Midori browser is available in Ubuntu Software Center
-
-For other Linux distributions, make sure that you have [Snap support enabled][9] and then you can install Midori using the command below:
-
-```
-sudo snap install midori
-```
-
-You always have the option to compile from the source code. You can download the source code of Midori from its website.
-
-If you like Midori and want to help this open source project, please donate to them or [buy Midori merchandise from their shop][10].
-
-Do you use Midori or have you ever tried it? How’s your experience with it? What other web browser do you prefer to use? Please share your views in the comment section below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/midori-browser
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/open-source-browsers-linux/
-[2]: https://www.midori-browser.org/
-[3]: https://itsfoss.com/elementary-os-juno-features/
-[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Midori-web-browser.jpeg?resize=800%2C450&ssl=1
-[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/midori-browser-linux.jpeg?resize=800%2C491&ssl=1
-[6]: https://en.wikipedia.org/wiki/Romance_languages
-[7]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
-[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/midori-ubuntu-software-center.jpeg?ssl=1
-[9]: https://itsfoss.com/install-snap-linux/
-[10]: https://www.midori-browser.org/shop
-[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Midori-web-browser.jpeg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md
new file mode 100644
index 0000000000..6619cfe65a
--- /dev/null
+++ b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md
@@ -0,0 +1,254 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems)
+[#]: via: (https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/)
+[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)
+
+Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems
+======
+
+Have you forgotten the root password for your RHEL 7 or CentOS 7 system and want to reset it?
+
+If so, don’t worry – we are here to help you out with this.
+
+Navigate to the following link if you want to **[reset a forgotten root password on RHEL 6/CentOS 6][1]**.
+
+This generally happens when you use different passwords in a vast environment or when you are not maintaining a proper inventory.
+
+Whatever the reason, no worries – we will help you through it in this article.
+
+It can be done in many ways, but we are going to show you the three best methods, which we have tried many times for our clients.
+
+On Linux servers, three different types of users are available: normal users, system users, and the superuser.
+
+As everyone knows, the root user is known as the superuser in Linux, just as the Administrator is in Windows.
+
+We can’t perform any major activity without the root password, so make sure you have the right root password at hand when you perform any major tasks.
+
+If you don’t know it or don’t have it, try to reset it using one of the methods below.
+
+ * Reset Forgotten Root Password By Booting into Single User Mode using `rd.break`
+ * Reset Forgotten Root Password By Booting into Single User Mode using `init=/bin/bash`
+ * Reset Forgotten Root Password By Booting into Rescue Mode
+
+
+
+### Method-1: Reset Forgotten Root Password By Booting into Single User Mode using `rd.break`
+
+Just follow the procedure below to reset a forgotten root password on RHEL 7/CentOS 7 systems.
+
+To do so, reboot your system and follow the instructions carefully.
+
+**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting the **`e`** key to modify the kernel arguments.
+![][3]
+
+**`Step-2:`** In the GRUB options, find the line starting with `linux16` and add `rd.break` at the end of that line, then press `Ctrl+x` or `F10` to boot into single user mode.
+![][4]
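+
+Purely as an illustration (the kernel version, root device, and other arguments on your system will differ), the edited line ends up looking something like this:
+
+```
+# illustrative example only – your kernel version and root device will differ
+linux16 /vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rhgb quiet rd.break
+```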
+
+**`Step-3:`** At this point, your root filesystem will be mounted read-only (RO) on /sysroot. Run the command below to confirm this.
+
+```
+# mount | grep root
+```
+
+![][5]
+
+**`Step-4:`** Based on the above output, I can say that I’m in single user mode and my root file system is mounted read-only.
+
+You won’t be able to make any changes to your system until you remount the root filesystem read-write (RW) on /sysroot. To do so, use the following command.
+
+```
+# mount -o remount,rw /sysroot
+```
+
+![][6]
+
+**`Step-5:`** Currently, your file system is mounted as a temporary partition and your command prompt shows **switch_root:/#**.
+
+Run the following command to get into a chroot jail so that /sysroot is used as the root of the file system.
+
+```
+# chroot /sysroot
+```
+
+![][7]
+
+**`Step-6:`** Now you can reset the root password with the help of the `passwd` command.
+
+```
+# echo "CentOS7$#123" | passwd --stdin root
+```
+
+![][8]
+
+**`Step-7:`** By default, CentOS 7/RHEL 7 use SELinux in enforcing mode, so create the following hidden file, which will automatically relabel all files on the next boot.
+
+This allows us to fix the SELinux context of the **/etc/shadow** file.
+
+```
+# touch /.autorelabel
+```
+
+![][9]
+
+**`Step-8:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system.
+![][10]
+
+**`Step-9:`** Now you can log in to your system with your new password.
+![][11]
+
+### Method-2: Reset Forgotten Root Password By Booting into Single User Mode using `init=/bin/bash`
+
+Alternatively, we can use the procedure below to reset a forgotten root password on RHEL 7/CentOS 7 systems.
+
+**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting the **`e`** key to modify the kernel arguments.
+![][3]
+
+**`Step-2:`** In the GRUB options, find the words `rhgb quiet` and replace them with `init=/bin/bash` or `init=/bin/sh`, then press `Ctrl+x` or `F10` to boot into single user mode.
+
+Screenshot for **`init=/bin/bash`**.
+![][12]
+
+Screenshot for **`init=/bin/sh`**.
+![][13]
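+
+Again purely as an illustration (your kernel line will differ), the replacement leaves you with something like this:
+
+```
+# illustrative example only – rhgb quiet has been replaced with init=/bin/bash
+linux16 /vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto init=/bin/bash
+```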
+
+**`Step-3:`** At this point, your root filesystem will be mounted read-only (RO) on /. Run the command below to confirm this.
+
+```
+# mount | grep root
+```
+
+![][14]
+
+**`Step-4:`** Based on the above output, I can say that I’m in single user mode and my root file system is mounted read-only (RO).
+
+You won’t be able to make any changes to your system until you remount the root file system read-write (RW). To do so, use the following command.
+
+```
+# mount -o remount,rw /
+```
+
+![][15]
+
+**`Step-5:`** Now you can reset the root password with the help of the `passwd` command.
+
+```
+# echo "RHEL7$#123" | passwd --stdin root
+```
+
+![][16]
+
+**`Step-6:`** By default, CentOS 7/RHEL 7 use SELinux in enforcing mode, so create the following hidden file, which will automatically relabel all files on the next boot.
+
+This allows us to fix the SELinux context of the **/etc/shadow** file.
+
+```
+# touch /.autorelabel
+```
+
+![][17]
+
+**`Step-7:`** Finally `Reboot` the system.
+
+```
+# exec /sbin/init 6
+```
+
+![][18]
+
+**`Step-8:`** Now you can log in to your system with your new password.
+![][11]
+
+### Method-3: Reset Forgotten Root Password By Booting into Rescue Mode
+
+Alternatively, we can reset a forgotten root password on RHEL 7 and CentOS 7 systems using Rescue mode.
+
+**`Step-1:`** Insert the bootable media (USB or DVD drive, whichever suits you) and reboot your system. It will take you to the screen below.
+
+Hit `Troubleshooting` to launch the `Rescue` mode.
+![][19]
+
+**`Step-2:`** Choose `Rescue a CentOS system` and hit `Enter` button.
+![][20]
+
+**`Step-3:`** Here, choose `1`, and the rescue environment will attempt to find your Linux installation and mount it under the directory `/mnt/sysimage`.
+![][21]
+
+**`Step-4:`** Simply hit `Enter` to get a shell.
+![][22]
+
+**`Step-5:`** Run the following command to get into a chroot jail so that /mnt/sysimage is used as the root of the file system.
+
+```
+# chroot /mnt/sysimage
+```
+
+![][23]
+
+**`Step-6:`** Now you can reset the root password with the help of the **passwd** command.
+
+```
+# echo "RHEL7$#123" | passwd --stdin root
+```
+
+![][24]
+
+**`Step-7:`** By default, CentOS 7/RHEL 7 use SELinux in enforcing mode, so create the following hidden file, which will automatically relabel all files on the next boot.
+This allows us to fix the SELinux context of the /etc/shadow file.
+
+```
+# touch /.autorelabel
+```
+
+![][25]
+
+**`Step-8:`** Remove the bootable media then initiate the reboot.
+
+**`Step-9:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system.
+![][26]
+
+**`Step-10:`** Now you can log in to your system with your new password.
+![][11]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/
+
+作者:[Prakash Subramanian][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/prakash/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-6-centos-6/
+[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-2.png
+[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-3.png
+[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-5.png
+[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-6.png
+[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-8.png
+[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10.png
+[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10a.png
+[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-11.png
+[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-12.png
+[12]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1.png
+[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1a.png
+[14]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-3.png
+[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-4.png
+[16]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-5.png
+[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-6.png
+[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-7.png
+[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-1.png
+[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-2.png
+[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-3.png
+[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-4.png
+[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-5.png
+[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-6.png
+[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-7.png
+[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-8.png
diff --git a/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md b/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md
deleted file mode 100644
index 41d4e47acc..0000000000
--- a/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md
+++ /dev/null
@@ -1,133 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How ASLR protects Linux systems from buffer overflow attacks)
-[#]: via: (https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-How ASLR protects Linux systems from buffer overflow attacks
-======
-
-
-
-Address Space Layout Randomization (ASLR) is a memory-protection process for operating systems that guards against buffer-overflow attacks. It helps to ensure that the memory addresses associated with running processes on systems are not predictable, thus flaws or vulnerabilities associated with these processes will be more difficult to exploit.
-
-ASLR is used today on Linux, Windows, and macOS systems. It was first implemented on Linux in 2005. In 2007, the technique was deployed on Microsoft Windows and macOS. While ASLR provides the same function on each of these operating systems, it is implemented differently on each one.
-
-The effectiveness of ASLR depends on the entirety of the address space layout remaining unknown to the attacker. In addition, only executables that are compiled as Position Independent Executable (PIE) programs can claim the maximum protection from the technique, because all sections of their code are loaded at random locations. PIE machine code will execute properly regardless of its absolute address.
-
-**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]**
-
-### ASLR limitations
-
-In spite of ASLR making exploitation of system vulnerabilities more difficult, its role in protecting systems is limited. It's important to understand that ASLR:
-
- * Doesn't _resolve_ vulnerabilities, but makes exploiting them more of a challenge
- * Doesn't track or report vulnerabilities
- * Doesn't offer any protection for binaries that are not built with ASLR support
- * Isn't immune to circumvention
-
-
-
-### How ASLR works
-
-ASLR increases the control-flow integrity of a system: by randomizing the offsets it uses in memory layouts, it makes it far more difficult for an attacker to execute a successful buffer-overflow attack.
-
-ASLR works considerably better on 64-bit systems, as these systems provide much greater entropy (randomization potential).
-
-### Is ASLR working on your Linux system?
-
-Either of the two commands shown below will tell you whether ASLR is enabled on your system.
-
-```
-$ cat /proc/sys/kernel/randomize_va_space
-2
-$ sysctl -a --pattern randomize
-kernel.randomize_va_space = 2
-```
-
-The value (2) shown in the output above indicates that ASLR is working in full randomization mode. The value will be one of the following:
-
-```
-0 = Disabled
-1 = Conservative Randomization
-2 = Full Randomization
-```
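-
-If you would rather check this from code, a minimal Go sketch along these lines reads `/proc/sys/kernel/randomize_va_space` and reports the current mode (assuming a Linux system where that file exists):
-
-```
-package main
-
-import (
-    "fmt"
-    "os"
-    "strings"
-)
-
-func main() {
-    // /proc/sys/kernel/randomize_va_space holds the current ASLR mode.
-    data, err := os.ReadFile("/proc/sys/kernel/randomize_va_space")
-    if err != nil {
-        fmt.Fprintln(os.Stderr, "cannot read ASLR setting:", err)
-        os.Exit(1)
-    }
-
-    modes := map[string]string{
-        "0": "Disabled",
-        "1": "Conservative Randomization",
-        "2": "Full Randomization",
-    }
-
-    value := strings.TrimSpace(string(data))
-    desc, ok := modes[value]
-    if !ok {
-        desc = "unknown"
-    }
-    fmt.Printf("kernel.randomize_va_space = %s (%s)\n", value, desc)
-}
-```
-
-Running it should print the same value that the commands above return.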
-
-If you disable ASLR and run the **ldd** command shown below a couple of times, you should notice that the addresses in the successive outputs are all the same. The **ldd** command works by loading the shared objects and showing where they end up in memory.
-
-```
-$ sudo sysctl -w kernel.randomize_va_space=0 <== disable
-[sudo] password for shs:
-kernel.randomize_va_space = 0
-$ ldd /bin/bash
- linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses
- libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000)
- libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000)
- libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000)
- /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000)
-$ ldd /bin/bash
- linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses
- libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000)
- libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000)
- libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000)
- /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000)
-```
-
-If the value is set back to **2** to enable ASLR, you will see that the addresses will change each time you run the command.
-
-```
-$ sudo sysctl -w kernel.randomize_va_space=2 <== enable
-[sudo] password for shs:
-kernel.randomize_va_space = 2
-$ ldd /bin/bash
- linux-vdso.so.1 (0x00007fff47d0e000) <== first set of addresses
- libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f1cb7ce0000)
- libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f1cb7cda000)
- libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1cb7af0000)
- /lib64/ld-linux-x86-64.so.2 (0x00007f1cb8045000)
-$ ldd /bin/bash
- linux-vdso.so.1 (0x00007ffe1cbd7000) <== second set of addresses
- libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fed59742000)
- libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fed5973c000)
- libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fed59552000)
- /lib64/ld-linux-x86-64.so.2 (0x00007fed59aa7000)
-```
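-
-You can observe the same effect from inside a running program. A short Go sketch along these lines prints its own stack and vDSO mappings from `/proc/self/maps`; with ASLR enabled the base addresses should change on every run, and with ASLR disabled they should stay the same:
-
-```
-package main
-
-import (
-    "bufio"
-    "fmt"
-    "os"
-    "strings"
-)
-
-func main() {
-    // /proc/self/maps lists this process's memory mappings.
-    f, err := os.Open("/proc/self/maps")
-    if err != nil {
-        fmt.Fprintln(os.Stderr, err)
-        os.Exit(1)
-    }
-    defer f.Close()
-
-    scanner := bufio.NewScanner(f)
-    for scanner.Scan() {
-        line := scanner.Text()
-        // The stack and the vDSO are placed at randomized addresses when
-        // ASLR is active, so these lines differ from run to run.
-        if strings.Contains(line, "[stack]") || strings.Contains(line, "[vdso]") {
-            fmt.Println(line)
-        }
-    }
-}
-```
-
-Build it once with `go build` and run the resulting binary a few times to compare the addresses.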
-
-### Attempting to bypass ASLR
-
-In spite of its advantages, attempts to bypass ASLR are not uncommon and seem to fall into several categories:
-
- * Using address leaks
- * Gaining access to data relative to particular addresses
- * Exploiting implementation weaknesses that allow attackers to guess addresses when entropy is low or when the ASLR implementation is faulty
- * Using side channels of hardware operation
-
-
-
-### Wrap-up
-
-ASLR is of great value, especially when run on 64-bit systems and implemented properly. While not immune to circumvention attempts, it does make the exploitation of system vulnerabilities considerably more difficult. Here is a reference that provides a lot more detail [on the Effectiveness of Full-ASLR on 64-bit Linux][2], and here is a paper on one circumvention effort to [bypass ASLR][3] using branch predictors.
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
-[2]: https://cybersecurity.upv.es/attacks/offset2lib/offset2lib-paper.pdf
-[3]: http://www.cs.ucr.edu/~nael/pubs/micro16.pdf
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190110 5 useful Vim plugins for developers.md b/sources/tech/20190110 5 useful Vim plugins for developers.md
deleted file mode 100644
index 2b5b9421d4..0000000000
--- a/sources/tech/20190110 5 useful Vim plugins for developers.md
+++ /dev/null
@@ -1,369 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (pityonline)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 useful Vim plugins for developers)
-[#]: via: (https://opensource.com/article/19/1/vim-plugins-developers)
-[#]: author: (Ricardo Gerardi https://opensource.com/users/rgerardi)
-
-5 useful Vim plugins for developers
-======
-Expand Vim's capabilities and improve your workflow with these five plugins for writing code.
-
-
-I have used [Vim][1] as a text editor for over 20 years, but about two years ago I decided to make it my primary text editor. I use Vim to write code, configuration files, blog articles, and pretty much everything I can do in plaintext. Vim has many great features and, once you get used to it, you become very productive.
-
-I tend to use Vim's robust native capabilities for most of what I do, but there are a number of plugins developed by the open source community that extend Vim's capabilities, improve your workflow, and make you even more productive.
-
-Following are five plugins that are useful when using Vim to write code in any programming language.
-
-### 1. Auto Pairs
-
-The [Auto Pairs][2] plugin helps insert and delete pairs of characters, such as brackets, parentheses, or quotation marks. This is very useful for writing code, since most programming languages use pairs of characters in their syntax—such as parentheses for function calls or quotation marks for string definitions.
-
-In its most basic functionality, Auto Pairs inserts the corresponding closing character when you type an opening character. For example, if you enter a bracket **[**, Auto Pairs automatically inserts the closing bracket **]**. Conversely, if you use the Backspace key to delete the opening bracket, Auto Pairs deletes the corresponding closing bracket.
-
-If you have automatic indentation on, Auto Pairs inserts the paired character in the proper indented position when you press Return/Enter, saving you from finding the correct position and typing the required spaces or tabs.
-
-Consider this Go code block for instance:
-
-```
-package main
-
-import "fmt"
-
-func main() {
- x := true
- items := []string{"tv", "pc", "tablet"}
-
- if x {
- for _, i := range items
- }
-}
-```
-
-Inserting an opening curly brace **{** after **items** and pressing Return/Enter produces this result:
-
-```
-package main
-
-import "fmt"
-
-func main() {
- x := true
- items := []string{"tv", "pc", "tablet"}
-
- if x {
- for _, i := range items {
- | (cursor here)
- }
- }
-}
-```
-
-Auto Pairs offers many other options (which you can read about on [GitHub][3]), but even these basic features will save time.
-
-### 2. NERD Commenter
-
-The [NERD Commenter][4] plugin adds code-commenting functions to Vim, similar to the ones found in an integrated development environment (IDE). With this plugin installed, you can select one or several lines of code and change them to comments with the press of a button.
-
-NERD Commenter integrates with the standard Vim [filetype][5] plugin, so it understands several programming languages and uses the appropriate commenting characters for single or multi-line comments.
-
-The easiest way to get started is by pressing **Leader+c+Space** to toggle the current line between commented and uncommented. The standard Vim Leader key is the **\** character.
-
-In Visual mode, you can select multiple lines and toggle their status at the same time. NERD Commenter also understands counts, so you can provide a count n followed by the command to change n lines together.
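-
-As a rough illustration (the exact alignment depends on your NERD Commenter settings), selecting the `for` loop from the earlier example in Visual mode and pressing **Leader+c+Space** would turn it into something like this, since NERD Commenter knows that Go uses `//` for single-line comments:
-
-```
-    // for _, i := range items {
-    //     fmt.Println(i)
-    // }
-```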
-
-Another useful feature is the "Sexy Comment," triggered by **Leader+cs**, which creates a fancy comment block using the multi-line comment characters. For example, consider this block of code:
-
-```
-package main
-
-import "fmt"
-
-func main() {
- x := true
- items := []string{"tv", "pc", "tablet"}
-
- if x {
- for _, i := range items {
- fmt.Println(i)
- }
- }
-}
-```
-
-Selecting all the lines in **function main** and pressing **Leader+cs** results in the following comment block:
-
-```
-package main
-
-import "fmt"
-
-func main() {
-/*
- * x := true
- * items := []string{"tv", "pc", "tablet"}
- *
- * if x {
- * for _, i := range items {
- * fmt.Println(i)
- * }
- * }
- */
-}
-```
-
-Since all the lines are commented in one block, you can uncomment the entire block by toggling any line of the block with **Leader+c+Space**.
-
-NERD Commenter is a must-have for any developer using Vim to write code.
-
-### 3. VIM Surround
-
-The [Vim Surround][6] plugin helps you "surround" existing text with pairs of characters (such as parentheses or quotation marks) or tags (such as HTML or XML tags). It's similar to Auto Pairs but, instead of working while you're inserting text, it's more useful when you're editing text.
-
-For example, if you have the following sentence:
-
-```
-"Vim plugins are awesome !"
-```
-
-You can remove the quotation marks around the sentence by pressing the combination **ds"** while your cursor is anywhere between the quotation marks:
-
-```
-Vim plugins are awesome !
-```
-
-You can also change the double quotation marks to single quotation marks with the command **cs"'** :
-
-```
-'Vim plugins are awesome !'
-```
-
-Or replace them with brackets by pressing **cs'[** :
-
-```
-[ Vim plugins are awesome ! ]
-```
-
-While it's a great help for text objects, this plugin really shines when working with HTML or XML tags. Consider the following HTML line:
-
-```
-<p>Vim plugins are awesome !</p>
-```
-
-You can emphasize the word "awesome" by pressing the combination **ysiw<em>** while the cursor is anywhere on that word:
-
-```
-<p>Vim plugins are <em>awesome</em> !</p>
-```
-
-Notice that the plugin is smart enough to use the proper closing tag **</em>**.
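-
-Along the same lines, the **cst** command changes only the surrounding tag. With the cursor inside the emphasized word, typing **cst<strong>** should swap the **em** tag for a **strong** tag, producing something like:
-
-```
-<p>Vim plugins are <strong>awesome</strong> !</p>
-```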
-
-Vim Surround can also indent text and add tags in their own lines using **ySS**. For example, if you have:
-
-```
-<p>Vim plugins are awesome !</p>
-```
-
-Add a **div** tag with this combination: **ySS<div>**, and notice that the paragraph line is indented automatically.
-
-```
-<div>