From 0c1913f827e864488a47450b2a88e6017396b77f Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sun, 15 Apr 2018 10:57:26 +0800
Subject: [PATCH 001/220] Update 20180226 5 keys to building open hardware.md
---
sources/talk/20180226 5 keys to building open hardware.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
index fbeb1c0460..690acae04c 100644
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ b/sources/talk/20180226 5 keys to building open hardware.md
@@ -1,11 +1,13 @@
translating by kennethXia
5 keys to building open hardware
+构建开源硬件的5个关键点
======

-
+科学社区正在加速拥抱自由和开源硬件([FOSH][1])。研究人员正忙于[改进他们自己的装备][2],并创造数以百计的基于分布式数字制造模型的设备来推动他们的科学研究。
The science community is increasingly embracing free and open source hardware ([FOSH][1]). Researchers have been busy [hacking their own equipment][2] and creating hundreds of devices based on the distributed digital manufacturing model to advance their scientific experiments.
+热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以节约90%到99%的花费。基于开源硬件商业模式的科学 FOSH 的商业化已经快速地推动了开发科学 FOSH 的工程领域。
A major reason for all this interest in distributed digital manufacturing of scientific FOSH is money: Research indicates that FOSH [slashes costs by 90% to 99%][3] compared to proprietary tools. Commercializing scientific FOSH with [open hardware business models][4] has supported the rapid growth of an engineering subfield to develop FOSH for science, which comes together annually at the [Gathering for Open Science Hardware][5].
Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8] ) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers).
From 90dad96e8262fd4ffeaaf06dfa03c9084b6455a9 Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sun, 15 Apr 2018 16:39:47 +0800
Subject: [PATCH 002/220] Update 20180226 5 keys to building open hardware.md
---
sources/talk/20180226 5 keys to building open hardware.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
index 690acae04c..2319954e94 100644
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ b/sources/talk/20180226 5 keys to building open hardware.md
@@ -7,11 +7,13 @@ translating by kennethXia
 科学社区正在加速拥抱自由和开源硬件([FOSH][1])。研究人员正忙于[改进他们自己的装备][2],并创造数以百计的基于分布式数字制造模型的设备来推动他们的科学研究。
The science community is increasingly embracing free and open source hardware ([FOSH][1]). Researchers have been busy [hacking their own equipment][2] and creating hundreds of devices based on the distributed digital manufacturing model to advance their scientific experiments.
-热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以节约90%到99%的花费。基于开源硬件商业模式的科学 FOSH 的商业化已经快速地推动了开发科学 FOSH 的工程领域。
+热衷于 FOSH 的主要原因还是钱:有研究表明,和专用设备相比,FOSH 可以[节约 90% 到 99% 的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 商业化,已经推动其快速发展为一个新的工程子领域,并为此每年[举行年会][5]。
A major reason for all this interest in distributed digital manufacturing of scientific FOSH is money: Research indicates that FOSH [slashes costs by 90% to 99%][3] compared to proprietary tools. Commercializing scientific FOSH with [open hardware business models][4] has supported the rapid growth of an engineering subfield to develop FOSH for science, which comes together annually at the [Gathering for Open Science Hardware][5].
+值得注意的是,有不止一本,而是[两本新的学术期刊][6]专注于这个主题:[Journal of Open Hardware][7](由 Ubiquity Press 出版,这是一家新的开放获取出版商,同时还出版 [Journal of Open Research Software][8])以及 [HardwareX][9](由世界上最大的学术出版商之一 Elsevier 出版的一份[开放获取期刊][10])。
Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8] ) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers).
+由于学术社区的支持,科学 FOSH 的开发者在设计开源硬件获取乐趣并快速推进科学发展的同时获得学术成就。
Because of the academic community's support, scientific FOSH developers can get academic credit while having fun designing open hardware and pushing science forward faster.
### 5 steps for scientific FOSH
From af15566127fc97d2eb9d8330f2bc735891a6947b Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sun, 15 Apr 2018 18:17:35 +0800
Subject: [PATCH 003/220] Update 20180226 5 keys to building open hardware.md
---
.../20180226 5 keys to building open hardware.md | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
index 2319954e94..f9d3c0a6de 100644
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ b/sources/talk/20180226 5 keys to building open hardware.md
@@ -13,27 +13,36 @@ A major reason for all this interest in distributed digital manufacturing of sci
 值得注意的是,有不止一本,而是[两本新的学术期刊][6]专注于这个主题:[Journal of Open Hardware][7](由 Ubiquity Press 出版,这是一家新的开放获取出版商,同时还出版 [Journal of Open Research Software][8])以及 [HardwareX][9](由世界上最大的学术出版商之一 Elsevier 出版的一份[开放获取期刊][10])。
Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8] ) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers).
-由于学术社区的支持,科学 FOSH 的开发者在设计开源硬件获取乐趣并快速推进科学发展的同时获得学术成就。
+由于学术社区的支持,科学 FOSH 的开发者可以在享受设计开源硬件的乐趣、更快地推进科学发展的同时,获得学术声誉。
Because of the academic community's support, scientific FOSH developers can get academic credit while having fun designing open hardware and pushing science forward faster.
### 5 steps for scientific FOSH
+### 科学 FOSH 的5个步骤
+协恩(Shane Oberloier)和我在名为 Designs 的开放获取工程设计期刊上共同发表了一篇关于设计 FOSH 科学设备原则的[文章][11]。我们以载玻片烘干机为例,其制造成本不足 20 美元,价格仅为同类专用设备的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。
Shane Oberloier and I co-authored a new [article][11] published in Designs, an open access engineering design journal, about the principles of designing FOSH scientific equipment. We used the example of a slide dryer, fabricated for under $20, which costs up to 300 times less than proprietary equivalents. [Scientific][1] and [medical][12] equipment tends to be complex with huge payoffs for developing FOSH alternatives.
+我总结了这 5 个步骤(包括 6 条设计原则),它们在协恩和我发表的文章里有详细阐述。这些设计原则也可以推广到非科学设备上,而且设计或设备越复杂,潜在的节约就越大。
I've summarized the five steps (including six design principles) that Shane and I detail in our Designs article. These design principles can be generalized to non-scientific devices, although the more complex the design or equipment, the larger the potential savings.
+如果你对科学项目的开源硬件设计感兴趣,这些步骤将使你的项目的影响最大化。
If you are interested in designing open hardware for scientific projects, these steps will maximize your project's impact.
+ 1. 评估类似的现有工具的功能,但你的 FOSH 设计应着眼于复现它们的实际效果,而不是照搬已有的设计(译者注:作者的意思应该是不要被现有设计缚住手脚)。必要时,先进行概念验证。
1. Evaluate similar existing tools for their functions but base your FOSH design on replicating their physical effects, not pre-existing designs. If necessary, evaluate a proof of concept.
-
+ 2. 使用下列设计原则:
2. Use the following design principles:
-
+ * 在设备制造中,仅使用自由和开源的软件工具链(比如开源的 CAD 工具,例如 [OpenSCAD][13]、[FreeCAD][14] 或 [Blender][15])和开源硬件。
* Use only free and open source software toolchains (e.g., open source CAD packages such as [OpenSCAD][13], [FreeCAD][14], or [Blender][15]) and open hardware for device fabrication.
+ * 尝试减少部件的数量和类型,并降低工具的复杂度。
* Attempt to minimize the number and type of parts and the complexity of the tools.
+ * 减少材料的数量和制造成本。
* Minimize the amount of material and the cost of production.
+ * 尽量多使用可以通过普及易得的工具(比如开源的 [RepRap 3D 打印机][16])进行分布式制造或数字化制造的部件。
* Maximize the use of components that can be distributed or digitally manufactured by using widespread and accessible tools such as the open source [RepRap 3D printer][16].
+ *
* Create [parametric designs][17] with predesigned components, which enable others to customize your design. By making parametric designs rather than solving a specific case, all future cases can also be solved while enabling future users to alter the core variables to make the device useful for them.
* All components that are not easily and economically fabricated with existing open hardware equipment in a distributed fashion should be chosen from off-the-shelf parts that are readily available throughout the world.
From e885e0099eb00eb2d43a0d0aea42f96db66b2fdb Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sun, 15 Apr 2018 19:20:13 +0800
Subject: [PATCH 004/220] Update 20180226 5 keys to building open hardware.md
---
.../20180226 5 keys to building open hardware.md | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
index f9d3c0a6de..25a5248090 100644
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ b/sources/talk/20180226 5 keys to building open hardware.md
@@ -42,22 +42,28 @@ If you are interested in designing open hardware for scientific projects, these
* Minimize the amount of material and the cost of production.
 * 尽量多使用可以通过普及易得的工具(比如开源的 [RepRap 3D 打印机][16])进行分布式制造或数字化制造的部件。
* Maximize the use of components that can be distributed or digitally manufactured by using widespread and accessible tools such as the open source [RepRap 3D printer][16].
- *
+ * 使用预先设计好的组件进行[参数化设计][17],使他人能够定制你的设计。相比只解决特定问题的设计,参数化设计能够同时解决未来的同类问题,未来的使用者只需修改核心变量,就能让设备满足自己的需求。
* Create [parametric designs][17] with predesigned components, which enable others to customize your design. By making parametric designs rather than solving a specific case, all future cases can also be solved while enabling future users to alter the core variables to make the device useful for them.
+ * 所有无法用现有开源硬件设备以分布式方式轻松且经济地制造的部件,都应选用在世界各地都容易买到的现货产品。
* All components that are not easily and economically fabricated with existing open hardware equipment in a distributed fashion should be chosen from off-the-shelf parts that are readily available throughout the world.
-
+ 3. 针对目标功能验证设计。
3. Validate the design for the targeted function(s).
-
+ 4. 提供关于设计、制造、装配、校准和操作的详尽文档,其中应包括原始设计文件,而不仅仅是用于生产的输出文件。开源硬件协会对于如何正确地文档化和发布开源设计有详尽的[指南][18],总结如下:
4. Meticulously document the design, manufacture, assembly, calibration, and operation of the device. This should include the raw source of the design, not just the files used for production. The Open Source Hardware Association has extensive [guidelines][18] for properly documenting and releasing open source designs, which can be summarized as follows:
-
+ * 以通用的文件格式分享设计文件。
* Share design files in a universal type.
+ * 提供详尽的材料清单,包括价格和采购信息。
* Include a fully detailed bill of materials, including prices and sourcing information.
+ * 如果包含软件,确保代码对大众来说清晰易懂。
* If software is involved, make sure the code is clear and understandable to the general public.
+ * 作为生产时的参考,必须提供足够的照片,以确保没有任何被遮挡的部分。
* Include many photos so that nothing is obscured, and they can be used as a reference while manufacturing.
+ * 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。
* In the methods section, the entire manufacturing process must be detailed to act as instructions for users to replicate the design.
+ * 在线上分享并指定许可证。
* Share online and specify a license. This gives users information on what constitutes fair use of the design.
From f3b820f8aad00224dd0ef1e78e33d60bc3a2870a Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sun, 15 Apr 2018 21:39:31 +0800
Subject: [PATCH 005/220] Update 20180226 5 keys to building open hardware.md
---
.../talk/20180226 5 keys to building open hardware.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
index 25a5248090..33ba7ab32c 100644
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ b/sources/talk/20180226 5 keys to building open hardware.md
@@ -63,14 +63,14 @@ If you are interested in designing open hardware for scientific projects, these
* Include many photos so that nothing is obscured, and they can be used as a reference while manufacturing.
* 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。
* In the methods section, the entire manufacturing process must be detailed to act as instructions for users to replicate the design.
- * 在线上分享并指定许可证。
+ * 在线上分享并指定许可证。这为用户提供了合理使用设计的信息。
* Share online and specify a license. This gives users information on what constitutes fair use of the design.
-
- 5. Share aggressively! For FOSH to proliferate, designs must be shared widely, frequently, and noticeably to raise awareness of their existence. All documentation should be published in the open access literature and shared with appropriate communities. One nice universal repository to consider is the [Open Science Framework][19], hosted by the Center for Open Science, which is set up to take any type of file and handle large datasets.
-
+ 5. 积极分享!为了让 FOSH 遍地开花,设计必须被广泛、频繁、显眼地分享,以提高人们对它们的认知。所有文档都应该在开放获取文献中发表,并与相应的社区共享。[开放科学框架][19]是一个值得考虑的优秀通用存储库,它由开放科学中心托管,可以接收任何类型的文件,并能处理大型数据集。
+ 5. Share aggressively! For FOSH to proliferate, designs must be shared widely, frequently, and noticeably to raise awareness of their existence. All documentation should be published in the open access literature and shared with appropriate communities. One nice universal repository to consider is the [Open Science Framework][19], hosted by the Center for Open Science, which is set up to take any type of file and handle large datasets.
+这篇文章得到了 [Fulbright Finland][20] 的支持,该机构赞助了 Joshua Pearce 以 Fulbright-Aalto 大学特聘讲席的身份在芬兰进行的开源科学硬件研究。
This article was supported by [Fulbright Finland][20], which is sponsoring Joshua Pearce's research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.
--------------------------------------------------------------------------------
@@ -78,7 +78,7 @@ This article was supported by [Fulbright Finland][20], which is sponsoring Joshu
via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware
作者:[Joshua Pearce][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[kennethXia](https://github.com/kennethXia)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b1506ff1d1baa6227e12f7e979b79cd4548a6f9c Mon Sep 17 00:00:00 2001
From: leemeans <1808577072@qq.com>
Date: Mon, 16 Apr 2018 11:04:51 +0800
Subject: [PATCH 006/220] Translated
Done
---
...oductivity Tool For Tracking Work Hours.md | 79 +++++++++----------
1 file changed, 39 insertions(+), 40 deletions(-)
diff --git a/sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md b/sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
index 24cbe8e1fa..35dc204c71 100644
--- a/sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
+++ b/sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
@@ -1,31 +1,30 @@
-leemeans translating
-A Command Line Productivity Tool For Tracking Work Hours
+一个用于追踪工作时间的命令行生产力工具
======

-Keeping track of your work hours will give you an insight about the amount of work you get done in a specific time frame. There are plenty of GUI-based productivity tools available on the Internet for tracking work hours. However, I couldn’t find a good CLI-based tool. Today, I stumbled upon a a simple, yet useful tool named **“Moro”** for tracking work hours. Moro is a Finnish word which means “Hello”. Using Moro, you can find how much time you take to complete a specific task. It is free, open source and written using **NodeJS**.
+追踪你的工作时长,可以让你了解自己在特定时间段内完成的工作量。网络上有大量基于 GUI 的生产力工具可以用来追踪工作时长,但我一直没能找到一个好用的基于 CLI 的工具。今天,我偶然发现了一个简单而实用的工作时长追踪工具 **“Moro”**。Moro 是一个芬兰语词汇,意为 “Hello”。通过使用 Moro,你可以知道完成某项特定任务花费了多少时间。它免费、开源,使用 **NodeJS** 编写。
-### Moro – A Command Line Productivity Tool For Tracking Work Hours
+### Moro - 一个追踪工作时间的命令行生产力工具
-Since Moro is written using NodeJS, make sure you have installed it on your system. If you haven’t installed it already, follow the link given below to install NodeJS and NPM in your Linux box.
+由于 Moro 是使用 NodeJS 编写的,请确保你的系统上已经安装了 NodeJS。如果尚未安装,请参照下面的链接在你的 Linux 机器上安装 NodeJS 和 NPM。
-Once NodeJS and Npm installed, run the following command to install Moro.
+装好 NodeJS 和 NPM 之后,运行下面的命令来安装 Moro:
```
$ npm install -g moro
```
-### Usage
+### 用法
-Moro’s working concept is very simple. It saves your work staring time, ending time and the break time in your system. At the end of each day, it will tell you how many hours you have worked!
+Moro 的工作方式非常简单。它将你的上班时间、下班时间和休息时间保存在你的系统中。每天结束时,它会告诉你总共工作了多少小时。
-When you reached to office, just type:
+当你到达办公室时,只需键入:
```
$ moro
```
-Sample output:
+示例输出:
```
💙 Moro \o/
@@ -33,15 +32,15 @@ Sample output:
```
-Moro will register this time as your starting time.
+Moro 会将这个时间记录为你的上班时间。
-When you leave the office, again type:
+当你离开办公室时,再次键入:
```
$ moro
```
-Sample output:
+示例输出:
```
💙 Moro \o/
@@ -64,21 +63,21 @@ Sample output:
```
-Moro will registers that time as your ending time.
+Moro 会将这个时间记录为你的下班时间。
-Now, More will subtract the starting time from ending time and then subtracts another 30 minutes for break time from the total and gives you the total working hours on that day. Sorry I am really terrible at explaining Math calculations. Let us say you came to work at 10 am in the morning and left 17.30 in the evening. So, the total hours you spent on the office is 7.30 hours (i.e 17.30-10). Then subtract the break time (default is 30 minutes) from the total. Hence, your total working time is 7 hours. Understood? Great!
+现在,Moro 会用下班时间减去上班时间,再从中减去 30 分钟的休息时间,得出你当天的总工作时长。抱歉,我实在不擅长解释数学计算。假设你早上 10:00 上班,晚上 17:30 离开,那么你总共在办公室待了 7 小时 30 分钟(即 17.30 - 10);再减去休息时间(默认为 30 分钟),你的总工作时间就是 7 小时。明白了吗?很好!
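+
+为了让这个计算更直观,下面用一小段 bash 演示同样的算术(时间值沿用上面的假设;这只是示意,并非 Moro 的实际实现):
+
+```
+# 示意:用分钟数复现 Moro 的默认计算
+start_min=$((10 * 60))         # 上班时间 10:00,折合 600 分钟
+end_min=$((17 * 60 + 30))      # 下班时间 17:30,折合 1050 分钟
+break_min=30                   # Moro 默认扣除的休息时间
+echo "$(( (end_min - start_min - break_min) / 60 )) 小时"   # 输出:7 小时
+```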
-**Note:** Don’t confuse “moro” with “more” command like I did while writing this guide.
+**注意:** 不要像我写这篇指南时那样,把 “moro” 和 “more” 命令弄混了。
-To see all your registered hours, run:
+要查看你记录的所有工时,运行:
```
$ moro report --all
```
-Just in case, you forgot to register the start time or end time, you can specify that later on the same.
+万一你忘记记录上班或下班时间,也可以稍后再补录。
-For example, to register 10 am as start time, run:
+例如,要将上午 10 点记录为上班时间,运行:
```
$ moro hi 10:00
@@ -90,7 +89,7 @@ $ moro hi 10:00
```
-To register 17.30 as end time:
+要将 17:30 记录为下班时间:
```
$ moro bye 17:30
@@ -115,15 +114,15 @@ $ moro bye 17:30
```
-You already know Moro will subtract 30 minutes for break time, by default. If you wanted to set a custom break time, you could simply set it using command:
+你已经知道,Moro 默认会减去 30 分钟作为休息时间。如果你想自定义休息时长,只需使用以下命令:
```
$ moro break 45
```
-Now, the break time is 45 minutes.
+现在,休息时间就是 45 分钟了。
-To clear all data:
+若要清除所有的数据:
```
$ moro clear --yes
@@ -133,60 +132,60 @@ $ moro clear --yes
```
-**Add notes**
+**添加笔记**
-Sometimes, you may want to add note while working. Don’t look for a separate note taking application. Moro will help you to add notes. To add a note, just run:
+有时候,你可能想在工作时添加笔记。不必另找一个独立的笔记应用,Moro 可以帮你记笔记。要添加笔记,只需运行:
```
$ moro note mynotes
```
-To search for the registered notes at later time, simply do:
+要在之后搜索记录过的笔记,只需运行:
```
$ moro search mynotes
```
-**Change default settings**
+**修改默认设置**
-The default full work day is 7.5 hours. Since the developer is from Finland, it’s the official work hours. You can, however, change this settings as per your country’s work hours.
+默认的全天工作时长是 7.5 小时,因为开发者来自芬兰,这是当地的法定工作时长。不过,你也可以按照你所在国家的工作时长修改这个设置。
-Say for example, to set it 7 hours, run:
+举个例子,要将其设置为 7 小时,运行:
```
$ moro config --day 7
```
-Also the default break time can be changed from 30 minutes like below:
+同样地,默认 30 分钟的休息时间也可以像下面这样修改:
```
$ moro config --break 45
```
-**Backup your data**
+**备份你的数据**
-Like I already said, Moro stores the tracking time data in your home directory, and the file name is **.moro-data.db**.
+正如前面所说,Moro 将时间追踪数据存储在你的家目录下,文件名是 **.moro-data.db**。
+
+不过,你可以把数据库文件备份到其他位置。为此,像下面这样将 **.moro-data.db** 文件移动到你选择的位置,然后告知 Moro 使用那个数据库文件。
-You can can, however, save the backup database file to different location. To do so, move the **.more-data.db** file to a different location of your choice and tell Moro to use that database file like below.
```
$ moro config --database-path /home/sk/personal/moro-data.db
```
-As per above command, I have assigned the default database file’s location to **/home/sk/personal** directory.
+通过上面的命令,我把默认数据库文件的位置指定为 **/home/sk/personal** 目录。
-For help, run:
+需要帮助的话,运行:
```
$ moro --help
```
-As you can see, Moro is very simple, yet useful to track how much time you’ve spent to get your work done. It is will be useful for freelancers and also anyone who must get things done under a limited time frame.
+正如你所见,Moro 非常简单,却能有效地追踪你完成工作所花费的时间。对于自由职业者以及任何必须在限定时间内完成任务的人来说,它都会很有用。
-And, that’s all for today. Hope this helps. More good stuffs to come. Stay tuned!
-
-Cheers!
+今天就到这里。希望这些内容能够有所帮助。更多的好东西即将到来,敬请关注!
+干杯!
--------------------------------------------------------------------------------
@@ -194,7 +193,7 @@ Cheers!
via: https://www.ostechnix.com/moro-a-command-line-productivity-tool-for-tracking-work-hours/
作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[leemeans](https://github.com/leemeans)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 55c26d420c1e0783984f79bba5af7ad77965ed7c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 20 Apr 2018 23:03:44 +0800
Subject: [PATCH 007/220] PRF:20180123 Migrating to Linux- The Command Line.md
@CYLeft
---
...23 Migrating to Linux- The Command Line.md | 85 ++++++++++++-------
1 file changed, 55 insertions(+), 30 deletions(-)
diff --git a/translated/tech/20180123 Migrating to Linux- The Command Line.md b/translated/tech/20180123 Migrating to Linux- The Command Line.md
index 4d9d98f0f9..f99b49f980 100644
--- a/translated/tech/20180123 Migrating to Linux- The Command Line.md
+++ b/translated/tech/20180123 Migrating to Linux- The Command Line.md
@@ -1,37 +1,43 @@
迁徙到 Linux:命令行环境
======
+> 刚接触 Linux?在这篇教程中将学习如何轻松地在命令行列出、移动和编辑文件。
+

-这是关于迁徙到 Linux 系列的第四篇文章了。如果您错过了之前的内容,可以回顾我们之前谈到的内容 [新手之 Linux][1]、[文件和文件系统][2]、和 [图形环境][3]。Linux 无处不在,它可以运行在大部分的网络服务器,如 web、email 和其他服务器;它同样可以在您的手机、汽车控制台和其他很多设备上使用。现在,您可能会开始好奇 Linux 系统,并对学习 Linux 的工作原理萌发兴趣。
+这是关于迁徙到 Linux 系列的第四篇文章了。如果您错过了之前的内容,可以回顾我们之前谈到的内容 [新手之 Linux][1]、[文件和文件系统][2]、和 [图形环境][3]。Linux 无处不在,它可以用于运行大部分的网络服务器,如 Web、email 和其他服务器;它同样可以在您的手机、汽车控制台和其他很多设备上使用。现在,您可能会开始好奇 Linux 系统,并对学习 Linux 的工作原理萌发兴趣。
-在 Linux 下,命令行非常实用。Linux 的桌面系统中,尽管命令行只是可选操作,但是您依旧能看见很多朋友开着一个命令行窗口和其他应用窗口并肩作战。在运行 Linux 系统的网络服务器中,命令行通常是唯一能直接与操作系统交互的工具。因此,命令行是有必要了解的,至少应当涉猎一些基础命令。
+在 Linux 下,命令行非常实用。Linux 的桌面系统中,尽管命令行只是可选操作,但是您依旧能看见很多朋友开着一个命令行窗口和其他应用窗口并肩作战。在互联网服务器上和在设备中运行 Linux 时(LCTT 译注:指 IoT),命令行通常是唯一能直接与操作系统交互的工具。因此,命令行是有必要了解的,至少应当涉猎一些基础命令。
在命令行(通常称之为 Linux shell)中,所有操作都是通过键入命令完成。您可以执行查看文件列表、移动文件位置、显示文件内容、编辑文件内容等一系列操作,通过命令行,您甚至可以查看网页中的内容。
-如果您在 Windows(CMD 或者 PowerShell) 上已经熟悉关于命令行的使用,您是否想跳转到了解 Windows 命令行的章节上去?先了阅读这些内容吧。
+如果您在 Windows(CMD 或者 PowerShell) 上已经熟悉关于命令行的使用,您是否想跳转到“Windows 命令行用户”的章节上去?先阅读这些内容吧。
-### 导语
+### 导航
+
+在命令行中,这里有一个当前工作目录(文件夹和目录是同义词,在 Linux 中它们通常都被称为目录)的概念。如果没有特别指定目录,许多命令的执行会在当前目录下生效。比如,键入 `ls` 列出文件目录,当前工作目录的文件将会被列举出来。看一个例子:
-在命令行中,这里有一个当前工作目录(文件夹和目录是同义词,在 Linux 中它们通常都被称为目录)的概念。如果没有特别指定目录,许多命令的执行会在当前目录下生效。比如,键入 ls 列出文件目录,当前工作目录的文件将会被列举出来。看一个例子:
```
$ ls
Desktop Documents Downloads Music Pictures README.txt Videos
```
`ls Documents` 这条命令将会列出 `Documents` 目录下的文件:
+
```
$ ls Documents
report.txt todo.txt EmailHowTo.pdf
```
通过 `pwd` 命令可以显示当前您的工作目录。比如:
+
```
$ pwd
/home/student
```
您可以通过 `cd` 命令改变当前目录并切换到您想要抵达的目录。比如:
+
```
$ pwd
/home/student
@@ -40,11 +46,12 @@ $ pwd
/home/student/Downloads
```
-路径中的目录由 `/`(左斜杠)字符分隔。路径中有一个隐含的层次关系,比如 `/home/student` 目录中,home 是顶层目录,而 student 是 home 的子目录。
+路径中的目录由 `/`(左斜杠)字符分隔。路径中有一个隐含的层次关系,比如 `/home/student` 目录中,home 是顶层目录,而 `student` 是 `home` 的子目录。
路径要么是绝对路径,要么是相对路径。绝对路径由一个 `/` 字符打头。
-相对路径由 `.` 或者 `..` 开始。在一个路径中,一个 `.` 意味着当前目录,`..` 意味着当前目录的上级目录。比如,`ls ../Documents` 意味着在此寻找当前目录的上级名为 `Documets` 的目录:
+相对路径由 `.` 或者 `..` 开始。在一个路径中,一个 `.` 意味着当前目录,`..` 意味着当前目录的上级目录。比如,`ls ../Documents` 意味着在此寻找当前目录的上级名为 `Documents` 的目录:
+
```
$ pwd
/home/student
@@ -57,9 +64,10 @@ $ ls ../Documents
report.txt todo.txt EmailHowTo.pdf
```
-当您第一次打开命令行窗口时,您当前的工作目录被设置为您的家目录,通常为 `/home/<您的登录名>`。家目录专用于登陆之后存储您的专属文件。
+当您第一次打开命令行窗口时,您当前的工作目录被设置为您的家目录,通常为 `/home/<您的登录名>`。家目录专用于登录之后存储您的专属文件。
+
+环境变量 `$HOME` 会展开为您的家目录,比如:
-设置环境变量 `$HOME` 到您的家目录,比如:
```
$ echo $HOME
/home/student
@@ -67,59 +75,67 @@ $ echo $HOME
下表显示了用于目录导航和管理简单的文本文件的一些命令摘要。
-
+
### 搜索
有时我们会遗忘文件的位置,或者忘记了我要寻找的文件名。Linux 命令行有几个命令可以帮助您搜索到文件。
-第一个命令是 `find`。您可以使用 `find` 命令通过文件名或其他属性搜索文件和目录。举个例子,当您遗忘了 todo.txt 文件的位置,我们可以执行下面的代码:
+第一个命令是 `find`。您可以使用 `find` 命令通过文件名或其他属性搜索文件和目录。举个例子,当您遗忘了 `todo.txt` 文件的位置,我们可以执行下面的代码:
+
```
$ find $HOME -name todo.txt
/home/student/Documents/todo.txt
```
`find` 程序有很多功能和选项。一个简单的例子:
+
```
find <要寻找的目录> -name <文件名>
```
-如果这里有 `todo.txt` 文件且不止一个,它将向我们列出拥有这个名字的所有文件的所有所在位置。`find` 命令有很多便于搜索的选项比如类型(文件或是目录等等)、时间、大小和其他一些选项。更多内容您可以同通过:`man find` 获取关于如何使用 `find` 命令的帮助。
+如果这里有 `todo.txt` 文件且不止一个,它将向我们列出拥有这个名字的所有文件的所在位置。`find` 命令有很多便于搜索的选项,比如类型(文件或是目录等等)、时间、大小等。您可以通过 `man find` 获取关于如何使用 `find` 命令的更多帮助。
+
+您还可以使用 `grep` 命令搜索文件的特定内容,比如:
-您还可以使用 `grep` 命令搜索文件的特殊内容,比如:
```
grep "01/02/2018" todo.txt
```
+
这将为您展示 `todo` 文件中 `01/02/2018` 所在行。
### 获取帮助
-Linux 有很多命令,这里,我们没有办法一一列举。授人以鱼不如授人以渔,所以下一步我们将向您介绍帮助命令。
+Linux 有很多命令,这里,我们没有办法一一列举。授人以鱼不如授人以渔,所以下一步我们将向您介绍帮助命令。
+
+`apropos` 命令可以帮助您查找需要使用的命令。也许您想要查找能够操作目录或是获得文件列表的所有命令,但是您不知道该运行哪个命令。您可以这样尝试:
-`apropos` 命令可以帮助您查找需要使用的命令。也许您想要查找能够操作目录或是获得文件列表的所有命令,但是您并不希望让这些命令执行。您可以这样尝试:
```
apropos directory
```
 要根据帮助文档中的某个短语得到相关命令列表,您可以这样操作:
+
```
apropos "list open files"
```
-这将提供一个 `lsof` 命令给您,帮助您打开文件列表。
+这将提供一个 `lsof` 命令给您,帮助您列出打开文件的列表。
+
+当您明确知道您要使用的命令,但是不确定应该使用什么选项完成预期工作,您可以使用 `man` 命令,它是 manual 的缩写。您可以这样使用:
-当您明确您要使用的命令,但是不确定应该使用什么选项完成预期工作,您可以使用 man 命令,它是 manual 的缩写。您可以这样使用:
```
man ls
```
您可以在自己的设备上尝试这个命令。它会提供给您关于使用这个命令的完整信息。
-通常,很多命令都会有能够给 `help` 选项(比如说,`ls --help`),列出命令使用的提示。`man` 页面的内容通常太繁琐,`--help` 选项可能更适合快速浏览。
+通常,很多命令都能够接受 `help` 选项(比如说,`ls --help`),列出命令使用的提示。`man` 页面的内容通常太繁琐,`--help` 选项可能更适合快速浏览。
### 脚本
-Linux 命令行中最贴心的功能是能够运行脚本文件,并且能重复运行。Linux 命令可以存储在文本文件中,您可以在文件的开头写入 `#!/bin/sh`,之后追加命令。之后,一旦文件被存储为可执行文件,您就可以像执行命令一样运行脚本文件,比如,
+Linux 命令行中最贴心的功能之一是能够运行脚本文件,并且能重复运行。Linux 命令可以存储在文本文件中,您可以在文件的开头写入 `#!/bin/sh`,后面的行是命令。之后,一旦文件被存储为可执行文件,您就可以像执行命令一样运行脚本文件,比如,
+
```
--- contents of get_todays_todos.sh ---
#!/bin/sh
@@ -127,44 +143,53 @@ todays_date=`date +"%m/%d/%y"`
grep $todays_date $HOME/todos.txt
```
-在一个确定的工作中脚本可以帮助自动化重复执行命令。如果需要的话,脚本也可以很复杂,能够使用循环、判断语句等。限于篇幅,这里不细述,但是您可以在网上查询到相关信息。
+脚本可以以一套可重复的步骤自动化执行特定命令。如果需要的话,脚本也可以很复杂,能够使用循环、判断语句等。限于篇幅,这里不细述,但是您可以在网上查询到相关信息。
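+
+作为示意,这里给出一个用到循环和判断语句的小脚本(文件名和输出文字均为虚构的例子):
+
+```
+#!/bin/sh
+# 示意脚本:遍历当前目录下的所有 .txt 文件,报告其中的空文件
+for f in *.txt; do
+    if [ ! -s "$f" ]; then
+        echo "$f 是空文件"
+    fi
+done
+```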
-您是否已经熟悉了 Windows 命令行?
+### Windows 命令行用户
如果您对 Windows CMD 或者 PowerShell 程序很熟悉,在命令行输入命令应该是轻车熟路的。然而,它们之间有很多差异,如果您没有理解它们之间的差异可能会为之困扰。
-首先,在 Linux 下的 PATH 环境于 Windows 不同。在 Windows 中,当前目录被认为是路径中的第一个文件夹,尽管该目录没有在环境变量中列出。而在 Linux 下,当前目录不会在路径中显示表示。Linux 下设置环境变量会被认为是风险操作。在 Linux 的当前目录执行程序,您需要使用 ./(代表当前目录的相对目录表示方式) 前缀。这可能会干扰很多 CMD 用户。比如:
+首先,在 Linux 下的 `PATH` 环境与 Windows 不同。在 Windows 中,当前目录被认为是该搜索路径(`PATH`)中的第一个文件夹,尽管该目录没有在环境变量中列出。而在 Linux 下,当前目录不会明确的放在搜索路径中。Linux 下设置环境变量会被认为是风险操作。在 Linux 的当前目录执行程序,您需要使用 `./`(代表当前目录的相对目录表示方式) 前缀。这可能会搞糊涂很多 CMD 用户。比如:
+
```
./my_program
```
而不是
+
```
my_program
```
 另外,在 Windows 中,环境变量的路径是以 `;`(分号)分隔的;而在 Linux 中,环境变量由 `:` 分隔。同样,在 Linux 中路径由 `/` 字符分隔,而在 Windows 中路径由 `\` 字符分隔。因此 Windows 中典型的环境变量会像这样:
+
```
PATH="C:\Program Files;C:\Program Files\Firefox;"
-while on Linux it might look like:
+```
+而在 Linux 中看起来像这样:
+
+```
PATH="/usr/bin:/opt/mozilla/firefox"
```
-还要注意,在 Linux 中环境变量由 `$` 拓展,而在 Windows 中您需要使用百分号(就是这样: %PATH%)。
+还要注意,在 Linux 中环境变量由 `$` 扩展,而在 Windows 中您需要使用百分号(就是这样:`%PATH%`)。
在 Linux 中,通过 `-` 使用命令选项,而在 Windows 中,使用选项要通过 `/` 字符。所以,在 Linux 中您应该:
+
```
a_prog -h
```
而不是
+
```
a_prog /h
```
-在 Linux 下,文件拓展名并没有意义。例如,将 `myscript` 重命名为 `myscript.bat` 并不会因此而可执行,需要设置文件的执行权限。文件执行权限会在下次的内容中覆盖到。
+在 Linux 下,文件扩展名并没有意义。例如,将 `myscript` 重命名为 `myscript.bat` 并不会因此而变得可执行,而是需要设置文件的执行权限。文件执行权限会在下次的内容中介绍。
+
+在 Linux 中,如果文件或者目录名以 `.` 字符开头,意味着它们是隐藏的。比如,如果您要编辑 `.bashrc` 文件,您可能在家目录中看不到它,但它多半确实存在,只不过被隐藏了。在命令行中,您可以通过 `ls` 命令的 `-a` 选项查看隐藏文件,比如:
-在 Linux 中,如果文件或者目录名以 `.` 字符开头,意味着它们是隐藏文件。比如,如果您申请编辑 `.bashrc` 文件,您不能在 `home` 目录中找到它,但是它可能真的存在,只不过它是隐藏文件。在命令行中,您可以通过 `ls` 命令的 `-a` 选项查看隐藏文件,比如:
```
ls -a
```
@@ -179,11 +204,11 @@ via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
作者:[John Bonesio][a]
译者:[CYLeft](https://github.com/CYLeft)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/johnbonesio
-[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
-[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
-[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
+[1]:https://linux.cn/article-9212-1.html
+[2]:https://linux.cn/article-9213-1.html
+[3]:https://linux.cn/article-9293-1.html
From f1240dd11ebb0c10799bb982fdd7d7d12ddc854c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 20 Apr 2018 23:04:17 +0800
Subject: [PATCH 008/220] PUB:20180123 Migrating to Linux- The Command Line.md
@CYLeft https://linux.cn/article-9565-1.html
---
.../20180123 Migrating to Linux- The Command Line.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180123 Migrating to Linux- The Command Line.md (100%)
diff --git a/translated/tech/20180123 Migrating to Linux- The Command Line.md b/published/20180123 Migrating to Linux- The Command Line.md
similarity index 100%
rename from translated/tech/20180123 Migrating to Linux- The Command Line.md
rename to published/20180123 Migrating to Linux- The Command Line.md
From c7887e12dbc89661b4ab5df64714a5516666483f Mon Sep 17 00:00:00 2001
From: songshunqiang
Date: Sat, 21 Apr 2018 09:06:11 +0800
Subject: [PATCH 009/220] submit tech/20180320 Dry - An Interactive CLI Manager
For Docker Containers.md
---
...ctive CLI Manager For Docker Containers.md | 155 ------------------
...ctive CLI Manager For Docker Containers.md | 154 +++++++++++++++++
2 files changed, 154 insertions(+), 155 deletions(-)
delete mode 100644 sources/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
create mode 100644 translated/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
diff --git a/sources/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md b/sources/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
deleted file mode 100644
index bb6fb546a5..0000000000
--- a/sources/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
+++ /dev/null
@@ -1,155 +0,0 @@
-pinewall translating
-
-Dry – An Interactive CLI Manager For Docker Containers
-======
-Docker is a software that allows operating-system-level virtualization also known as containerization.
-
-It uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and others to allows independent containers to run within a single Linux instance.
-
-Docker provides a way to run applications securely isolated in a container, packaged with all its dependencies and libraries.
-
-### What Is Dry
-
-[Dry][1] is a command line utility to manage & monitor Docker containers and images.
-
-It shows information about Containers, Images, Name of the containers, Networks, Running commands in the containers, and status, and, if running a Docker Swarm, it also shows all kinds of information about the state of the Swarm cluster.
-
-It can connect to both local or remote Docker daemons. Docker host shows `unix:///var/run/docker.sock` if the local Docker connected.
-
-Docker host shows `tcp://IP Address:Port Number` or `tcp://Host Name:Port Number` if the remote Docker connected.
-
-It could provide you the similar output metrics like `docker ps` but it has more verbose and colored output than “docker ps”.
-
-It also has an additional NAME column which comes handy at times when you have many containers you are not a memory champion.
-
-**Suggested Read :**
-**(#)** [Portainer – A Simple Docker Management GUI][2]
-**(#)** [Rancher – A Complete Container Management Platform For Production Environment][3]
-**(#)** [cTop – A Command-Line Tool For Container Monitoring And Management In Linux][4]
-
-### How To Install Dry On Linux
-
-The latest dry utility can be installed through single shell script on Linux. It does not require external libraries. Most of the Docker commands are available in dry with the same behavior.
-```
-$ curl -sSf https://moncho.github.io/dry/dryup.sh | sudo sh
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
-100 10 100 10 0 0 35 0 --:--:-- --:--:-- --:--:-- 35
-dryup: downloading dry binary
-######################################################################## 100.0%
-dryup: Moving dry binary to its destination
-dryup: dry binary was copied to /usr/local/bin, now you should 'sudo chmod 755 /usr/local/bin/dry'
-
-```
-
-Change the file permission to `755` using the below command.
-```
-$ sudo chmod 755 /usr/local/bin/dry
-
-```
-
-For Arch Linux users can install from AUR repository with help of **[Packer][5]** or **[Yaourt][6]** package manager.
-```
-$ yaourt -S dry-bin
-or
-$ packer -S dry-bin
-
-```
-
-If you wants to run dry as a Docker container, run the following command. Make sure Docker should be installed on your system for this as a prerequisites.
-
-**Suggested Read :**
-**(#)** [How to install Docker in Linux][7]
-**(#)** [How to play with Docker images on Linux][8]
-**(#)** [How to play with Docker containers on Linux][9]
-**(#)** [How to Install, Run Applications inside Docker Containers][10]
-```
-$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock moncho/dry
-
-```
-
-### How To Launch & Use Dry
-
-Simply run the `dry` command from your terminal to launch the utility. The default output for dry is similar to below.
-```
-$ dry
-
-```
-
-![][12]
-
-### How To Monitor Docker Using Dry
-
-You can open the monitor mode in dry by pressing `m` key.
-![][13]
-
-### How To Manage Container Using Dry
-
-To monitor any containers, just hit `Enter` on that. Dry allows you to perform activity such as logs, inspect, kill, remove container, stop, start, restart, stats, and image history.
-![][14]
-
-### How To Monitor Container Resource Utilization
-
-Dry allows users to monitor particular container resource utilization using `Stats+Top` option.
-
-We can achieve this by navigating to container management page (Follow the above steps and hit the `Stats+Top` option). Alternatively we can press `s` key to open container resource utilization page.
-
-![][15]
-
-### How To Check Container, Image, & Local Volume Disk Usage
-
-We can check disk usage of containers, images, and local volumes using `F8` key.
-
-This will clearly display the total number of containers, images, and volumes, and how many are active, and total disk usage and reclaimable size details.
-![][16]
-
-### How To Check Downloaded Images
-
-Press `2` key to list all the downloaded images.
-![][17]
-
-### How To Show Network List
-
-Press `3` key to list all the networks and it’s gateway.
-![][18]
-
-### How To List All Docker Containers
-
-Press `F2` key to list all the containers (This output includes Running & Stopped containers).
-![][19]
-
-### Dry Keybinds
-
-To view keybinds, navigate to help page or [dry github][1] page.
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/dry-an-interactive-cli-manager-for-docker-containers/
-
-作者:[Magesh Maruthamuthu][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.2daygeek.com/author/magesh/
-[1]:https://github.com/moncho/dry
-[2]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
-[3]:https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
-[4]:https://www.2daygeek.com/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux/
-[5]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
-[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
-[7]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
-[8]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
-[9]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
-[10]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
-[11]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[12]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-1.png
-[13]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-2.png
-[14]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-3.png
-[15]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-4.png
-[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-5.png
-[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-6.png
-[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-7.png
-[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-8.png
diff --git a/translated/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md b/translated/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
new file mode 100644
index 0000000000..a38910e274
--- /dev/null
+++ b/translated/tech/20180320 Dry - An Interactive CLI Manager For Docker Containers.md
@@ -0,0 +1,154 @@
+Dry – 一个命令行交互式 Docker 容器管理器
+======
+Docker 是一种实现操作系统级别虚拟化或容器化的软件。
+
+基于 Linux 内核的 cgroups 和 namespaces 等资源隔离特性,Docker 可以在单个 Linux 实例中运行多个独立的容器。
+
+通过将应用依赖和相关库打包进容器,Docker 使得应用可以在容器中安全隔离地运行。
+
+### Dry 是什么
+
+[Dry][1] 是一个管理并监控 Docker 容器和镜像的命令行工具。
+
+Dry 给出容器相关的信息,包括对应镜像、容器名称、网络、容器中运行的命令及容器状态;如果运行在 Docker Swarm 中,工具还会给出 Swarm 集群的各种状态信息。
+
+Dry 可以连接至本地或远程的 Docker 守护进程。如果连接的是本地 Docker,Docker 主机显示为 `unix:///var/run/docker.sock`。
+
+如果连接远程 Docker,Docker 主机显示为 `tcp://IP Address:Port Number` 或 `tcp://Host Name:Port Number`。
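+
+举一个假设性的例子(IP 地址和端口均为虚构值;这里假定 dry 沿用 Docker 客户端标准的 `DOCKER_HOST` 环境变量来定位远程守护进程):
+
+```
+# 通过 DOCKER_HOST 环境变量让 dry 连接远程 Docker 守护进程
+DOCKER_HOST=tcp://192.168.1.10:2375 dry
+```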
+
+Dry 可以提供类似 `docker ps` 的指标输出,但比 `docker ps` 的输出更详实,也更富有色彩。
+
+Dry 还多了一个 NAME 列,当你的容器很多、记不过来的时候,这一列会非常方便。
+
+**推荐阅读:**
+
+**(#)** [Portainer – 用于 Docker 管理的简明 GUI][2]
+**(#)** [Rancher – 适用于生产环境的完备容器管理平台][3]
+**(#)** [cTop – Linux 环境下容器管理与监控的命令行工具][4]
+
+### 如何在 Linux 中安装 Dry
+
+在 Linux 中,可以通过一个简单的 shell 脚本安装最新版本的 dry 工具,它不依赖外部库。大多数 Docker 命令在 dry 中都可用,且行为一致。
+```
+$ curl -sSf https://moncho.github.io/dry/dryup.sh | sudo sh
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 10 100 10 0 0 35 0 --:--:-- --:--:-- --:--:-- 35
+dryup: downloading dry binary
+######################################################################## 100.0%
+dryup: Moving dry binary to its destination
+dryup: dry binary was copied to /usr/local/bin, now you should 'sudo chmod 755 /usr/local/bin/dry'
+
+```
+
+使用如下命令将文件权限变更为 `755`:
+```
+$ sudo chmod 755 /usr/local/bin/dry
+
+```
+
+对于使用 Arch Linux 的用户,可以使用 **[Packer][5]** 或 **[Yaourt][6]** 包管理器,从 AUR 源安装该工具。
+```
+$ yaourt -S dry-bin
+或者
+$ packer -S dry-bin
+
+```
+
+如果希望在 Docker 容器中运行 dry,可以运行如下命令。前提条件是已确认在操作系统中安装了 Docker。
+
+**推荐阅读:**
+**(#)** [如何在 Linux 中安装 Docker][7]
+**(#)** [如何在 Linux 中玩转 Docker 镜像][8]
+**(#)** [如何在 Linux 中玩转 Docker 容器][9]
+**(#)** [如何在 Docker 容器中安装并运行应用程序][10]
+```
+$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock moncho/dry
+
+```
+
+### 如何启动并运行 Dry
+
+在控制台运行 `dry` 命令即可启动该工具,其默认输出如下:
+```
+$ dry
+
+```
+
+![][12]
+
+### 如何使用 Dry 监控 Docker
+
+你可以在 dry 的界面中按下 `m` 键打开监控模式。
+![][13]
+
+### 如何使用 Dry 管理容器
+
+在选中的容器上按下 `Enter` 键,即可管理该容器。Dry 提供如下操作:查看日志,查看、杀死、删除容器,停止、启动、重启容器,查看容器状态及镜像历史记录等。
+![][14]
+
+### 如何监控容器资源利用率
+
+用户可以使用 `Stats+Top` 选项查看指定容器的资源利用率。
+
+该操作需要在容器管理界面完成(接着上一步,选中 `Stats+Top` 选项)。另外,也可以按下 `s` 键打开容器资源利用率界面。
+
+![][15]
+
+### 如何查看容器、镜像及本地卷的磁盘使用情况
+
+可以使用 `F8` 键查看容器、镜像及本地卷的磁盘使用情况。
+
+该界面明确地给出容器、镜像和卷的总数,哪些处于使用状态,以及整体磁盘使用情况、可回收空间大小的详细信息。
+![][16]
+
+### 如何查看已下载的镜像
+
+按下 `2` 键即可列出全部的已下载镜像。
+![][17]
+
+### 如何查看网络列表
+
+按下 `3` 键即可查看全部网络及网关。
+![][18]
+
+### 如何查看全部 Docker 容器
+
+按下 `F2` 键即可列出全部容器,包括运行中和已停止的容器。
+![][19]
+
+### Dry 快捷键
+
+查看帮助页面或 [dry github][1] 即可查看全部快捷键。
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/dry-an-interactive-cli-manager-for-docker-containers/
+
+作者:[Magesh Maruthamuthu][a]
+译者:[pinewall](https://github.com/pinewall)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/magesh/
+[1]:https://github.com/moncho/dry
+[2]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
+[3]:https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
+[4]:https://www.2daygeek.com/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux/
+[5]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
+[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
+[7]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
+[8]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
+[9]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
+[10]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
+[11]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[12]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-1.png
+[13]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-2.png
+[14]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-3.png
+[15]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-4.png
+[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-5.png
+[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-6.png
+[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-7.png
+[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/dry-an-interactive-cli-manager-for-docker-containers-8.png
From 3fd61b40823191c69f644cdb843545535a533cab Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:40:49 +0800
Subject: [PATCH 010/220] =?UTF-8?q?20180421-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...5 The Best Linux Distributions for 2018.md | 140 ++++++++++++++++++
1 file changed, 140 insertions(+)
create mode 100644 sources/tech/20180105 The Best Linux Distributions for 2018.md
diff --git a/sources/tech/20180105 The Best Linux Distributions for 2018.md b/sources/tech/20180105 The Best Linux Distributions for 2018.md
new file mode 100644
index 0000000000..3be92638c5
--- /dev/null
+++ b/sources/tech/20180105 The Best Linux Distributions for 2018.md
@@ -0,0 +1,140 @@
+The Best Linux Distributions for 2018
+============================================================
+
+
+Jack Wallen shares his picks for the best Linux distributions for 2018. [Creative Commons Zero][6] Pixabay
+
+It’s a new year and the landscape of possibility is limitless for Linux. Whereas 2017 brought about some big changes to a number of Linux distributions, I believe 2018 will bring serious stability and market share growth—for both the server and the desktop.
+
+For those who might be looking to migrate to the open source platform (or those looking to switch it up), what are the best choices for the coming year? If you hop over to [Distrowatch][14], you’ll find a dizzying array of possibilities, some of which are on the rise, and some that are seeing quite the opposite effect.
+
+So, which Linux distributions will 2018 favor? I have my thoughts. In fact, I’m going to share them with you now.
+
+Similar to what I did for [last year’s list][15], I’m going to make this task easier and break down the list, as follows: sysadmin, lightweight distribution, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of any type of Linux user.
+
+With that said, let’s get to the list of best Linux distributions for 2018.
+
+### Best distribution for sysadmins
+
+[Debian][16] isn’t often seen on “best of” lists. It should be. Why? If you consider that Debian is the foundation for Ubuntu (which is, in turn, the foundation for so many distributions), it’s pretty easy to understand why this distribution should find its way on many a list. But why for administrators? I’ve considered this for two very important reasons:
+
+* Ease of use
+
+* Extreme stability
+
+Because Debian uses the dpkg and apt package managers, it makes for an incredibly easy-to-use environment. And because Debian offers one of the most stable Linux platforms, it makes for an ideal environment for so many things: desktops, servers, testing, development. Although Debian may not include the plethora of applications found in last year’s winner (for this category), [Parrot Linux][17], it is very easy to add any/all the necessary applications you need to get the job done. And because Debian can be installed with your choice of desktop (Cinnamon, GNOME, KDE, LXDE, Mate, or Xfce), you can be sure the interface will meet your needs.
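+
+As a quick illustration of that ease of use, here is what the everyday apt workflow looks like (the package name is an arbitrary example):
+
+```
+# refresh the package index, then install and inspect a package
+sudo apt update
+sudo apt install htop
+apt show htop
+```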
+
+
+
+Figure 1: The GNOME desktop running on top of Debian 9.3. [Used with permission][1]
+
+At the moment, Debian is listed at #2 on Distrowatch. Download it, install it, and then make it serve a specific purpose. It may not be flashy, but Debian is a sysadmin dream come true.
+
+### Best lightweight distribution
+
+Lightweight distributions serve a very specific purpose—giving new life to older, lesser-powered machines. But that doesn’t mean these particular distributions should only be considered for your older hardware. If speed is your ultimate need, you might want to see just how fast this category of distribution will run on your modern machine.
+
+Topping the list of lightweight distributions for 2018 is [Lubuntu][18]. Although there are plenty of options in this category, few come even close to the next-to-zero learning curve found on this distribution. And although Lubuntu’s footprint isn’t quite as small as Puppy Linux, thanks to it being a member of the Ubuntu family, the ease of use gained with this distribution makes up for it. But fear not, Lubuntu won’t bog down your older hardware. The requirements are:
+
+* CPU: Pentium 4 or Pentium M or AMD K8
+
+* For local applications, Lubuntu can function with 512MB of RAM. For online usage (Youtube, Google+, Google Drive, and Facebook), 1GB of RAM is recommended.
+
+Lubuntu makes use of the LXDE desktop (Figure 2), which means users new to Linux won’t have the slightest problem working with this distribution. The short list of included apps (such as Abiword, Gnumeric, and Firefox) are all lightning fast and user-friendly.
+
+### [lubuntu.jpg][8]
+
+
+Figure 2: The Lubuntu LXDE desktop in action. [Used with permission][2]
+
+Lubuntu can make short and easy work of breathing life into hardware that is up to ten years old.
+
+### Best desktop distribution
+
+For the second year in a row, [Elementary OS][19] tops my list of best Desktop distribution. For many, the leader on the Desktop is [Linux Mint][20] (which is a very fine flavor). However, for my money, it’s hard to beat the ease of use and stability of Elementary OS. Case in point, I was certain the release of [Ubuntu][21] 17.10 would have me migrating back to Canonical’s distribution. Very soon after migrating to the new GNOME-Friendly Ubuntu, I found myself missing the look, feel, and reliability of Elementary OS (Figure 3). After two weeks with Ubuntu, I was back to Elementary OS.
+
+### [elementaros.jpg][9]
+
+
+Figure 3: The Pantheon desktop is a work of art as a desktop. [Used with permission][3]
+
+Anyone that has given Elementary OS a go immediately feels right at home. The Pantheon desktop is a perfect combination of slickness and user-friendliness. And with each update, it only gets better.
+
+Although Elementary OS stands at #6 on the Distrowatch page hit ranking, I predict it will find itself climbing to at least the third spot by the end of 2018. The Elementary developers are very much in tune with what users want. They listen and they evolve. However, the current state of this distribution is so good, it seems all they could do to better it is a bit of polish here and there. For anyone looking for a desktop that offers a unified look and feel throughout the UI, Elementary OS is hard to beat. If you need a desktop that offers an outstanding ratio of reliability and ease of use, Elementary OS is your distribution.
+
+### Best distro for those with something to prove
+
+For the longest time [Gentoo][22] sat on top of the “show us your skills” distribution list. However, I think it’s time Gentoo took a backseat to the true leader of “something to prove”: [Linux From Scratch][23]. You may not think this fair, as LFS isn’t actually a distribution, but a project that helps users create their own Linux distribution. But, seriously, if you want to go a very long way to proving your Linux knowledge, what better way than to create your own distribution? From the LFS project, you can build a custom Linux system, from the ground up... entirely from source code. So, if you really have something to prove, download the [Linux From Scratch Book][24] and start building.
+
+### Best distribution for IoT
+
+For the second year in a row [Ubuntu Core][25] wins, hands down. Ubuntu Core is a tiny, transactional version of Ubuntu, built specifically for embedded and IoT devices. What makes Ubuntu Core so perfect for IoT is that it places the focus on snap packages—universal packages that can be installed onto a platform, without interfering with the base system. These snap packages contain everything they need to run (including dependencies), so there is no worry the installation will break the operating system (or any other installed software). Also, snaps are very easy to upgrade and run in an isolated sandbox, making them a great solution for IoT.
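+
+For a feel of how self-contained snaps behave in practice, here is a minimal, hypothetical session (hello-world is the stock demo snap; any other snap follows the same pattern):
+
+```
+# install a snap, list what is installed, then refresh it in place
+sudo snap install hello-world
+snap list
+sudo snap refresh hello-world
+```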
+
+Another area of security built into Ubuntu Core is the login mechanism. Ubuntu Core works with Ubuntu One ssh keys, such that the only way to log into the system is via ssh keys uploaded to an [Ubuntu One account][26] (Figure 4). This makes for heightened security for your IoT devices.
+
+### [ubuntucore.jpg][10]
+
+
+Figure 4: The Ubuntu Core screen indicating remote access enabled via an Ubuntu One user. [Used with permission][4]
+
+### Best server distribution
+
+This is where things get a bit confusing. The primary reason is support. If you need commercial support, your best choice might be, at first blush, [Red Hat Enterprise Linux][27]. Red Hat has proved itself, year after year, to be not only one of the strongest enterprise server platforms on the planet, but also the single most profitable open source business (with over $2 billion in annual revenue).
+
+However, Red Hat is far from the only server distribution. In fact, Red Hat doesn’t even dominate every aspect of Enterprise server computing. If you look at cloud statistics on Amazon’s Elastic Compute Cloud alone, Ubuntu blows away Red Hat Enterprise Linux. According to [The Cloud Market][28], EC2 statistics show RHEL at under 100k deployments, whereas Ubuntu is over 200k deployments. That’s significant.
+
+The end result is that Ubuntu has pretty much taken over as the leader in the cloud. And if you combine that with Ubuntu’s ease of working with and managing containers, it starts to become clear that Ubuntu Server is the clear winner for the Server category. And, if you need commercial support, Canonical has you covered, with [Ubuntu Advantage][29].
+
+The one caveat to Ubuntu Server is that it defaults to a text-only interface (Figure 5). You can install a GUI, if needed, but working with the Ubuntu Server command line is pretty straightforward (and something every Linux administrator should know).
+
+### [ubuntuserver.jpg][11]
+
+
+Figure 5: The Ubuntu server login, informing of updates. [Used with permission][5]
+
+### The choice is yours
+
+As I said before, these choices are all very subjective … but if you’re looking for a great place to start, give these distributions a try. Each one can serve a very specific purpose and do it better than most. Although you may not agree with my particular picks, chances are you’ll agree that Linux offers amazing possibilities on every front. And, stay tuned for more “best distro” picks next week.
+
+ _Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/best-linux-distributions-2018
+
+作者:[JACK WALLEN ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/licenses/category/used-permission
+[6]:https://www.linux.com/licenses/category/creative-commons-zero
+[7]:https://www.linux.com/files/images/debianjpg
+[8]:https://www.linux.com/files/images/lubuntujpg-2
+[9]:https://www.linux.com/files/images/elementarosjpg
+[10]:https://www.linux.com/files/images/ubuntucorejpg
+[11]:https://www.linux.com/files/images/ubuntuserverjpg-1
+[12]:https://www.linux.com/files/images/linux-distros-2018jpg
+[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[14]:https://distrowatch.com/
+[15]:https://www.linux.com/news/learn/sysadmin/best-linux-distributions-2017
+[16]:https://www.debian.org/
+[17]:https://www.parrotsec.org/
+[18]:http://lubuntu.me/
+[19]:https://elementary.io/
+[20]:https://linuxmint.com/
+[21]:https://www.ubuntu.com/
+[22]:https://www.gentoo.org/
+[23]:http://www.linuxfromscratch.org/
+[24]:http://www.linuxfromscratch.org/lfs/download.html
+[25]:https://www.ubuntu.com/core
+[26]:https://login.ubuntu.com/
+[27]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
+[28]:http://thecloudmarket.com/stats#/by_platform_definition
+[29]:https://buy.ubuntu.com/?_ga=2.177313893.113132429.1514825043-1939188204.1510782993
From 4734b72885b25fd83eeb35a151c1b2b490612388 Mon Sep 17 00:00:00 2001
From: Means Lee
Date: Sat, 21 Apr 2018 09:41:04 +0800
Subject: [PATCH 011/220] move translated file to translated directory
---
...19 A Command Line Productivity Tool For Tracking Work Hours.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {sources => translated}/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md (100%)
diff --git a/sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md b/translated/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
similarity index 100%
rename from sources/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
rename to translated/tech/20180319 A Command Line Productivity Tool For Tracking Work Hours.md
From c10c307ddef23516f40435aeb28e7dee4b0f9ea8 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:44:02 +0800
Subject: [PATCH 012/220] =?UTF-8?q?20180421-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... contribution to an open source project.md | 134 ++++++++++++++++++
1 file changed, 134 insertions(+)
create mode 100644 sources/tech/20180411 Make your first contribution to an open source project.md
diff --git a/sources/tech/20180411 Make your first contribution to an open source project.md b/sources/tech/20180411 Make your first contribution to an open source project.md
new file mode 100644
index 0000000000..431c9da7d1
--- /dev/null
+++ b/sources/tech/20180411 Make your first contribution to an open source project.md
@@ -0,0 +1,134 @@
+Make your first contribution to an open source project
+============================================================
+
+> There's a first time for everything.
+
+
+Image by: [WOCinTech Chat][16]. Modified by Opensource.com. [CC BY-SA 4.0][17]
+
+It's a common misconception that contributing to open source is difficult. You might think, "Sometimes I can't even understand my own code; how am I supposed to understand someone else's?"
+
+Relax. Until last year, I thought the same thing. Reading and understanding someone else's code and writing your own on top of that can be a daunting task, but with the right resources, it isn't as hard as you might think.
+
+The first step is to choose a project. This decision can be instrumental in turning a newbie programmer into a seasoned open sourcer.
+
+Many amateur programmers interested in open source are advised to check out [Git][18], but that is not the best way to start. Git is maintained by uber-geeks with years of software development experience. It is a good place to find an open source project to contribute to, but it's not beginner-friendly. Most devs contributing to Git have enough experience that they do not need resources or detailed documentation. In this article, I'll provide a checklist of beginner-friendly features and some tips to make your first open source contribution easy.
+
+### Understand the product
+
+Before contributing to a project, you should understand how it works. To understand it, you need to try it for yourself. If you find the product interesting and useful, it is worth contributing to.
+
+Too often, beginners try to contribute to a project without first using the software. They then get frustrated and give up. If you don't use the software, you can't understand how it works. If you don't know how it works, how can you fix a bug or write a new feature?
+
+Remember: Try it, then hack it.
+
+### Check the project's status
+
+How active is the project?
+
+If you send a pull request to an unmaintained or dormant project, your pull request (or PR) may never be reviewed or merged. Look for projects with lots of activity; that way you will get immediate feedback on your code and your contributions will not go to waste.
+
+Here's how to tell if a project is active (a quick API sketch follows this list):
+
+* **Number of contributors:** A growing number of contributors indicates the developer community is interested and willing to accept new contributors.
+
+* **Frequency of commits:** Check the most recent commit date. If it was within the last week, or even month or two, the project is being maintained.
+
+* **Number of maintainers:** A higher number of maintainers means more potential mentors to guide you.
+
+* **Activity level in the chat room/IRC:** A busy chat room means quick replies to your queries.
+
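+You can check the commit-related signals from the command line. Here's a rough, hedged sketch using the public GitHub REST API; `coala/coala` stands in as an example repository, and `curl` and `jq` are assumed to be installed:
+
+```
+# Date of the most recent push to the repository
+curl -s https://api.github.com/repos/coala/coala | jq -r .pushed_at
+
+# Date of the latest commit on the default branch
+curl -s "https://api.github.com/repos/coala/coala/commits?per_page=1" | jq -r '.[0].commit.author.date'
+```
+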
+### Resources for beginners
+
+coala is an example of an open source project that offers its own tutorials and documentation, where you can also access its API (every class and method). The site also features an attractive UI that makes you want to read more.
+
+**Documentation:** Developers of all levels need reliable, well-maintained documentation to understand the details of a project. Look for projects that offer solid documentation on [GitHub][19] (or wherever it is hosted) and on a separate site like [Read the Docs][20], with lots of examples that will help you dive deep into the code.
+
+### [coala-newcomers_guide.png][2]
+
+
+
+**Tutorials:** Tutorials that explain how to add features to a project are helpful for beginners (however, you may not find them for all projects). For example, coala offers [tutorials for writing _bears_][21] (Python wrappers for linting tools to perform code analysis).
+
+### [coala_ui.png][3]
+
+
+
+**Labeled issues:** For beginners who are just figuring out how to choose their first project, selecting an issue can be an even tougher task. Issues labeled "difficulty/low," "difficulty/newcomer," "good first issue," and "low-hanging fruit" can be perfect for newbies.
+
+### [coala_labeled_issues.png][4]
+
+
+
+### Miscellaneous factors
+
+### [ci_logs.png][5]
+
+
+
+* **Maintainers' attitudes toward new contributors:** In my experience, most open sourcers are eager to help newcomers onboard their projects. However, you may also encounter some who are less welcoming (maybe even a bit rude) when you ask for help. Don't let them discourage you. Just because someone has more experience doesn't give them the right to be rude. There are plenty of others out there who want to help.
+
+* **Review process/structure:** Your PR will go through a number of reviews and changes by experienced developers and your peers—that's how you learn the most about software development. A project with a stringent review process enables you to grow as a developer by writing production-grade code.
+
+* **A robust CI pipeline:** Open source projects introduce beginners to continuous integration and deployment services. A robust CI pipeline will help you learn how to read and make sense of CI logs. It will also give you experience dealing with failing test cases and code coverage issues.
+
+* **Participation in coding programs (e.g., [Google Summer of Code][1]):** Participating organizations demonstrate a willingness to commit to the long-term development of a project. They also provide an opportunity for newcomers to gain real-world development experience and get paid for it. Most of the organizations that participate in such programs welcome newbies.
+
+### 7 beginner-friendly organizations
+
+* [coala (Python)][7]
+
+* [oppia (Python, Django)][8]
+
+* [DuckDuckGo (Perl, JavaScript)][9]
+
+* [OpenGenus (JavaScript)][10]
+
+* [Kinto (Python, JavaScript)][11]
+
+* [FOSSASIA (Python, JavaScript)][12]
+
+* [Kubernetes (Go)][13]
+
+
+### About the author
+
+ [Palash Nigam][22] - I'm a computer science undergrad from India who loves hacking on open source software; I spend most of my time on GitHub. My current interests include backend web development, blockchains, and all things Python. [More about me][14]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/get-started-open-source-project
+
+作者:[ Palash Nigam ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/palash25
+[1]:https://en.wikipedia.org/wiki/Google_Summer_of_Code
+[2]:https://opensource.com/file/391211
+[3]:https://opensource.com/file/391216
+[4]:https://opensource.com/file/391226
+[5]:https://opensource.com/file/391221
+[6]:https://opensource.com/article/18/4/get-started-open-source-project?rate=i_d2neWpbOIJIAEjQKFExhe0U_sC6SiQgkm3c7ck8IM
+[7]:https://github.com/coala/coala
+[8]:https://github.com/oppia/oppia
+[9]:https://github.com/duckduckgo/
+[10]:https://github.com/OpenGenus/
+[11]:https://github.com/kinto
+[12]:https://github.com/fossasia/
+[13]:https://github.com/kubernetes
+[14]:https://opensource.com/users/palash25
+[15]:https://opensource.com/user/212436/feed
+[16]:https://www.flickr.com/photos/wocintechchat/25171528213/
+[17]:https://creativecommons.org/licenses/by/4.0/
+[18]:https://git-scm.com/
+[19]:https://github.com/
+[20]:https://readthedocs.org/
+[21]:http://api.coala.io/en/latest/Developers/Writing_Linter_Bears.html
+[22]:https://opensource.com/users/palash25
+[23]:https://opensource.com/users/palash25
+[24]:https://opensource.com/users/palash25
+[25]:https://opensource.com/article/18/4/get-started-open-source-project#comments
+[26]:https://opensource.com/tags/web-development
From 47563644257c3ae6bfeef861651e10559c3d36fa Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:45:38 +0800
Subject: [PATCH 013/220] =?UTF-8?q?20180421-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...erve Scientific and Medical Communities.md | 170 ++++++++++++++++++
1 file changed, 170 insertions(+)
create mode 100644 sources/tech/20171201 Linux Distros That Serve Scientific and Medical Communities.md
diff --git a/sources/tech/20171201 Linux Distros That Serve Scientific and Medical Communities.md b/sources/tech/20171201 Linux Distros That Serve Scientific and Medical Communities.md
new file mode 100644
index 0000000000..59c6297157
--- /dev/null
+++ b/sources/tech/20171201 Linux Distros That Serve Scientific and Medical Communities.md
@@ -0,0 +1,170 @@
+Linux Distros That Serve Scientific and Medical Communities
+============================================================
+
+
+Jack Wallen looks at a few Linux distributions that specialize in serving the scientific and medical communities. [Creative Commons Zero][5]
+
+Linux serves — of that there is no doubt — literally and figuratively. The open source platform serves up websites across the globe, it serves educational systems in numerous ways, and it also serves the medical and scientific communities and has done so for quite some time.
+
+I remember, back in my early days of Linux usage (I first adopted Linux as my OS of choice in 1997), how every Linux distribution included so many tools I would personally never use. Tools used for plotting and calculating on levels I’d not even heard of before. I cannot remember the names of those tools, but I know they were opened once and never again. I didn’t understand their purpose. Why? Because I wasn’t knee-deep in studying such science.
+
+Modern Linux is a far cry from those early days. Not only is it much more user-friendly, it doesn’t include that plethora of science-centric tools. There are, however, still Linux distributions for that very purpose — serving the scientific and medical communities.
+
+Let’s take a look at a few of these distributions. Maybe one of them will suit your needs.
+
+### Scientific Linux
+
+You can’t start a listing of science-specific Linux distributions without first mentioning [Scientific Linux][12]. This particular take on Linux was developed by [Fermilab][13]. Based on Red Hat Enterprise Linux, Scientific Linux aims to offer a common Linux distribution for various labs and universities around the world, in order to reduce duplication of effort. The goal of Scientific Linux is to have a distribution that is compatible with Red Hat Enterprise Linux and that:
+
+* Provides a stable, scalable, and extensible operating system for scientific computing.
+
+* Supports scientific research by providing the necessary methods and procedures to enable the integration of scientific applications with the operating environment.
+
+* Uses the free exchange of ideas, designs, and implementations in order to prepare a computing platform for the next generation of scientific computing.
+
+* Includes all the necessary tools to enable users to create their own Scientific Linux spins.
+
+Because Scientific Linux is based on Red Hat Enterprise Linux, you can select a Security Policy for the platform during installation (Figure 1).
+
+
+
+Figure 1: Selecting a security policy for Scientific Linux during installation. [Used with permission][1]
+
+Two famous experiments that work with Scientific Linux are:
+
+* Collider Detector at Fermilab — experimental collaboration that studies high energy particle collisions at the [Tevatron][6] (a circular particle accelerator)
+
+* DØ experiment — a worldwide collaboration of scientists that conducts research on the fundamental nature of matter.
+
+What you might find interesting about Scientific Linux is that it doesn’t actually include all the science-y goodness you might expect. There is no Matlab equivalent, or other such tool, pre-installed. The good news is that there are plenty of repositories available that allow you to install everything you need to create a distribution that perfectly suits your needs.
+
+Scientific Linux is available to use for free and can be downloaded from the [official download page][14].
+
+### Bio-Linux
+
+Now we’re venturing into territory that should make at least one cross section of scientists very happy. Bio-Linux is a distribution aimed specifically at bioinformatics (the science of collecting and analyzing complex biological data such as genetic codes). This very green-looking take on Linux (Figure 2) was developed at the [Environmental Omics Synthesis Centre][15] and the [Centre for Ecology & Hydrology][16] and includes hundreds of bioinformatics tools, including:
+
+* abyss — de novo, parallel, sequence assembler for short reads
+
+* Artemis — DNA sequence viewer and annotation tool
+
+* bamtools — toolkit for manipulating BAM (genome alignment) files
+
+* Big-blast — The big-blast script for annotation of long sequences
+
+* Galaxy — browser-based biomedical research platform
+
+* Fasta — tool for searching DNA and protein databases
+
+* Mesquite — used for evolutionary biology
+
+* njplot — tool for drawing phylogenetic trees
+
+* Rasmol — tool for visualizing macromolecules
+
+
+
+Figure 2: The Bio-Linux desktop. [Used with permission][2]
+
+There are plenty of command line and graphical tools to be found in this niche platform. For a complete list, check out the included software page [here][17].
+
+Bio-Linux is based on Ubuntu and is available for free download.
+
+### Poseidon Linux
+
+This particular Ubuntu-based Linux distribution originally started as a desktop, based on open source software, aimed at the international scientific community. Back in 2010, the platform switched directions to focus solely on bathymetry (the measurement of depth of water in oceans, seas, or lakes), seafloor mapping, GIS, and 3D visualization.
+
+
+
+Figure 3: Poseidon Linux with menus (Image: Wikipedia). [Used with permission][3]
+
+Poseidon Linux (Figure 3) is, effectively, Ubuntu 16.04 (complete with Ubuntu Unity, at the moment) with the addition of [GMT][25] (a collection of about 80 command-line tools for manipulating geographic and Cartesian data sets), [PROJ][26] (a standard UNIX filter function which converts geographic longitude and latitude coordinates into Cartesian coordinates), and [MB System][27] (seafloor mapping software).
+
+Yes, Poseidon Linux is a very niche distribution, but if you need to measure the depth of water in oceans, seas, and lakes, you’ll be glad it’s available.
+
+Download Poseidon Linux for free from the [official download site][18].
+
+### NHSbuntu
+
+A group of British IT specialists took on the task of tailoring Ubuntu Linux to be used as a desktop distribution by the [UK National Health Service][19]. [NHSbuntu][20] was first released, as an alpha, on April 27, 2017. The goal was to create a PC operating system that could deliver security, speed, and cost-effectiveness and to create a desktop distribution that would conform to the needs of the NHS — not insist the NHS conform to the needs of the software. NHSbuntu was set up for full disk encryption to safeguard the privacy of sensitive data.
+
+NHSbuntu includes LibreOffice, NHSMail2 (a version of the Evolution groupware suite, capable of connecting to NHSmail2 and Trust email), and Chat (a messenger app able to work with NHSmail2). This spin on Ubuntu can:
+
+* Perform as a Clinical OS
+
+* Serve as an office desktop OS
+
+* Be used in kiosk mode
+
+* Function as a real-time dashboard
+
+
+
+Figure 4: NHSbuntu main screen. [Used with permission][4]
+
+The specific customizations of NHSbuntu are:
+
+* NHSbuntu wallpaper (Figure 4)
+
+* A look and feel similar to a well-known desktop
+
+* NHSmail2 compatibility
+
+* Email, calendar, address book
+
+* Messenger, with file sharing
+
+* N3 VPN compatibility
+
+* RSA token support
+
+* Removal of games
+
+* Inclusion of Remmina (Remote Desktop client for VDI)
+
+NHSbuntu can be [downloaded][21], for free, for either 32- or 64-bit hardware.
+
+### The tip of the scientific iceberg
+
+Even if you cannot find a Linux distribution geared toward your specific branch of science or medicine, chances are you will find software perfectly capable of serving your needs. There are even organizations (such as the [Open Science Project][22] and [NeuroDebian][23]) dedicated to writing and releasing open source software for the scientific community.
+
+ _Learn more about Linux through the free ["Introduction to Linux"][24] course from The Linux Foundation and edX._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/9/linux-serves-scientific-and-medical-communities
+
+作者:[JACK WALLEN ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/licenses/category/creative-commons-zero
+[6]:https://en.wikipedia.org/wiki/Tevatron
+[7]:https://www.linux.com/files/images/scientificlinux1jpg
+[8]:https://www.linux.com/files/images/biolinuxjpg
+[9]:https://www.linux.com/files/images/poseidon4-menupng
+[10]:https://www.linux.com/files/images/nshbuntujpg
+[11]:https://www.linux.com/files/images/linux-sciencejpg
+[12]:http://www.scientificlinux.org/
+[13]:http://www.fnal.gov/
+[14]:http://www.scientificlinux.org/downloads/
+[15]:http://environmentalomics.org/omics-synthesis-centre/
+[16]:https://www.environmental-research.ox.ac.uk/partners/centre-for-ecology-hydrology/
+[17]:http://environmentalomics.org/bio-linux-software-list/
+[18]:https://sites.google.com/site/poseidonlinux/download
+[19]:http://www.nhs.uk/pages/home.aspx
+[20]:https://www.nhsbuntu.org/
+[21]:https://www.nhsbuntu.org/
+[22]:http://openscience.org/
+[23]:http://neuro.debian.net/
+[24]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[25]:http://gmt.soest.hawaii.edu/
+[26]:http://proj4.org/
+[27]:http://svn.ilab.ldeo.columbia.edu/listing.php?repname=MB-System
From 9449f0782fce97c021c6a8dde78735e8f9da593d Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:48:08 +0800
Subject: [PATCH 014/220] =?UTF-8?q?20180421-4=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...est Linux Distributions for Development.md | 158 ++++++++++++++++++
1 file changed, 158 insertions(+)
create mode 100644 sources/tech/20180129 The 5 Best Linux Distributions for Development.md
diff --git a/sources/tech/20180129 The 5 Best Linux Distributions for Development.md b/sources/tech/20180129 The 5 Best Linux Distributions for Development.md
new file mode 100644
index 0000000000..cc11407ff3
--- /dev/null
+++ b/sources/tech/20180129 The 5 Best Linux Distributions for Development.md
@@ -0,0 +1,158 @@
+The 5 Best Linux Distributions for Development
+============================================================
+
+
+Jack Wallen looks at some of the best Linux distributions for development efforts. [Creative Commons Zero][6]
+
+When considering Linux, there are so many variables to take into account. What package manager do you wish to use? Do you prefer a modern or old-standard desktop interface? Is ease of use your priority? How flexible do you want your distribution? What task will the distribution serve?
+
+It is that last question which should often be considered first. Is the distribution going to work as a desktop or a server? Will you be doing network or system audits? Or will you be developing? If you’ve spent much time considering Linux, you know that for every task there are several well-suited distributions. This certainly holds true for developers. Even though Linux, by design, is an ideal platform for developers, there are certain distributions that rise above the rest as great operating systems for developers.
+
+I want to share what I consider to be some of the best distributions for your development efforts. Although each of these five distributions can be used for general purpose development (with maybe one exception), they each serve a specific purpose. You may or may not be surprised by the selections.
+
+With that said, let’s get to the choices.
+
+### Debian
+
+The [Debian][14] distribution winds up on the top of many a Linux list. With good reason. Debian is the distribution on which so many others are based, and it is for this reason that many developers choose it. When you develop a piece of software on Debian, chances are very good that the package will also work on [Ubuntu][15], [Linux Mint][16], [Elementary OS][17], and a vast collection of other distributions.
+
+Beyond that obvious answer, Debian also has a very large number of applications available by way of the default repositories (Figure 1).
+
+
+Figure 1: Available applications from the standard Debian repositories. [Used with permission][1]
+
+To make matters even more programmer-friendly, those applications (and their dependencies) are simple to install. Take, for instance, the build-essential package (which can be installed on any distribution derived from Debian). This package includes the likes of dpkg-dev, g++, gcc, hurd-dev, libc-dev, and make—all tools necessary for the development process. The build-essential package can be installed with the command `sudo apt install build-essential`.
+
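+As a minimal sketch of that workflow on a Debian-based system (assuming sudo privileges and the standard repositories):
+
+```
+# Install the core compiler toolchain in one shot
+sudo apt install build-essential
+# Verify the key tools landed on the PATH
+gcc --version
+make --version
+```
+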
+There are hundreds of other developer-specific applications available from the standard repositories, tools such as:
+
+* Autoconf—configure script builder
+
+* Autoproject—creates a source package for a new program
+
+* Bison—general purpose parser generator
+
+* Bluefish—powerful GUI editor, targeted towards programmers
+
+* Geany—lightweight IDE
+
+* Kate—powerful text editor
+
+* Eclipse—helps builders independently develop tools that integrate with other people’s tools
+
+The list goes on and on.
+
+Debian is also as rock-solid a distribution as you’ll find, so there’s very little concern you’ll lose precious work by way of the desktop crashing. As a bonus, all programs included with Debian have met the [Debian Free Software Guidelines][18], which adhere to the following “social contract”:
+
+* Debian will remain 100% free.
+
+* We will give back to the free software community.
+
+* We will not hide problems.
+
+* Our priorities are our users and free software.
+
+* Works that do not meet our free software standards are included in a non-free archive.
+
+Also, if you’re new to developing on Linux, Debian has a handy [Programming section in their user manual][19].
+
+### openSUSE Tumbleweed
+
+If you’re looking to develop with a cutting-edge, rolling release distribution, [openSUSE][20] offers one of the best in [Tumbleweed][21]. Not only will you be developing with the most up-to-date software available, you’ll be doing so with the help of openSUSE’s amazing administrator tools … which include YaST. If you’re not familiar with YaST (Yet another Setup Tool), it’s an incredibly powerful piece of software that allows you to manage the whole of the platform from one convenient location. From within YaST, you can also install using RPM Groups. Open YaST, click on RPM Groups (software grouped together by purpose), and scroll down to the Development section to see the large number of groups available for installation (Figure 2).
+
+
+
+Figure 2: Installing package groups in openSUSE Tumbleweed. [Creative Commons Zero][2]
+
+openSUSE also allows you to quickly install all the necessary devtools with the simple click of a weblink. Head over to the [rpmdevtools install site][22] and click the link for Tumbleweed. This will automatically add the necessary repository and install rpmdevtools.
+
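+If you prefer the command line to YaST for this, a hedged equivalent on a stock Tumbleweed install is to use zypper's pattern support:
+
+```
+# List the available development-related patterns
+zypper search -t pattern devel
+# Install the basic development pattern (compilers, make, headers, and so on)
+sudo zypper install -t pattern devel_basis
+```
+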
+By developing with a rolling release distribution, you know you’re working with the most recent releases of installed software.
+
+### CentOS
+
+Let’s face it, [Red Hat Enterprise Linux][23] (RHEL) is the de facto standard for enterprise businesses. If you’re looking to develop for that particular platform, and you can’t afford a RHEL license, you cannot go wrong with [CentOS][24]—which is, effectively, a community version of RHEL. You will find many of the packages found on CentOS to be the same as in RHEL—so once you’re familiar with developing on one, you’ll be fine on the other.
+
+If you’re serious about developing on an enterprise-grade platform, you cannot go wrong starting with CentOS. And because CentOS is a server-specific distribution, you can more easily develop for a web-centric platform. Instead of developing your work and then migrating it to a server (hosted on a different machine), you can easily have CentOS set up to serve as an ideal host for both developing and testing.
+
+Looking for software to meet your development needs? You need only open the CentOS Application Installer, where you’ll find a Developer section that includes a dedicated sub-section for Integrated Development Environments (IDEs; Figure 3).
+
+
+Figure 3: Installing a powerful IDE is simple in CentOS. [Used with permission][3]
+
+CentOS also includes Security Enhanced Linux (SELinux), which makes it easier for you to test your software’s ability to integrate with the same security platform found in RHEL. SELinux can often cause headaches for poorly designed software, so having it at the ready can be a real boon for ensuring your applications work on the likes of RHEL. If you’re not sure where to start with developing on CentOS 7, you can read through the [RHEL 7 Developer Guide][25].
+
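+A short sketch of poking at SELinux while you test, assuming the standard CentOS audit tooling (the `audit` package) is installed:
+
+```
+# Show the current SELinux mode (Enforcing, Permissive, or Disabled)
+getenforce
+# Temporarily switch to permissive mode to check whether SELinux is
+# what's blocking your application (reverts to Enforcing on reboot)
+sudo setenforce 0
+# Review recent SELinux denials
+sudo ausearch -m avc -ts recent
+```
+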
+### Raspbian
+
+Let’s face it, embedded systems are all the rage. One easy means of working with such systems is via the Raspberry Pi—a tiny footprint computer that has become incredibly powerful and flexible. In fact, the Raspberry Pi has become the hardware used by DIYers all over the planet. Powering those devices is the [Raspbian][26] operating system. Raspbian includes tools like [BlueJ][27], [Geany][28], [Greenfoot][29], [Sense HAT Emulator][30], [Sonic Pi][31], [Thonny Python IDE][32], [Python][33], and [Scratch][34], so you won’t want for the necessary development software. Raspbian also includes a user-friendly desktop UI (Figure 4), to make things even easier.
+
+
+Figure 4: The Raspbian main menu, showing pre-installed developer software. [Used with permission][4]
+
+For anyone looking to develop for the Raspberry Pi platform, Raspbian is a must have. If you’d like to give Raspbian a go, without the Raspberry Pi hardware, you can always install it as a VirtualBox virtual machine, by way of the ISO image found [here][35].
+
+### Pop!_OS
+
+Don’t let the name fool you: [System76][36]’s [Pop!_OS][37] entry into the world of operating systems is serious. And although what System76 has done to this Ubuntu derivative may not be readily obvious, it is something special.
+
+The goal of System76 is to create an operating system specific to the developer, maker, and computer science professional. With a newly-designed GNOME theme, Pop!_OS is beautiful (Figure 5) and as highly functional as you would expect from both the hardware maker and desktop designers.
+
+### [devel_5.jpg][11]
+
+
+Figure 5: The Pop!_OS Desktop. [Used with permission][5]
+
+But what makes Pop!_OS special is the fact that it is being developed by a company dedicated to Linux hardware. This means, when you purchase a System76 laptop, desktop, or server, you know the operating system will work seamlessly with the hardware—on a level no other company can offer. I would predict that, with Pop!_OS, System76 will become the Apple of Linux.
+
+### Time for work
+
+In their own way, each of these distributions serves a specific purpose. You have a stable desktop (Debian), a cutting-edge desktop (openSUSE Tumbleweed), a server (CentOS), an embedded platform (Raspbian), and a distribution to seamlessly meld with hardware (Pop!_OS). With the exception of Raspbian, any one of these distributions would serve as an outstanding development platform. Get one installed and start working on your next project with confidence.
+
+ _Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX._
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/5-best-linux-distributions-development
+
+作者:[JACK WALLEN ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/creative-commons-zero
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/licenses/category/used-permission
+[6]:https://www.linux.com/licenses/category/creative-commons-zero
+[7]:https://www.linux.com/files/images/devel1jpg
+[8]:https://www.linux.com/files/images/devel2jpg
+[9]:https://www.linux.com/files/images/devel3jpg
+[10]:https://www.linux.com/files/images/devel4jpg
+[11]:https://www.linux.com/files/images/devel5jpg
+[12]:https://www.linux.com/files/images/king-penguins1920jpg
+[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[14]:https://www.debian.org/
+[15]:https://www.ubuntu.com/
+[16]:https://linuxmint.com/
+[17]:https://elementary.io/
+[18]:https://www.debian.org/social_contract
+[19]:https://www.debian.org/doc/manuals/debian-reference/ch12.en.html
+[20]:https://www.opensuse.org/
+[21]:https://en.opensuse.org/Portal:Tumbleweed
+[22]:https://software.opensuse.org/download.html?project=devel%3Atools&package=rpmdevtools
+[23]:https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
+[24]:https://www.centos.org/
+[25]:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/pdf/developer_guide/Red_Hat_Enterprise_Linux-7-Developer_Guide-en-US.pdf
+[26]:https://www.raspberrypi.org/downloads/raspbian/
+[27]:https://www.bluej.org/
+[28]:https://www.geany.org/
+[29]:https://www.greenfoot.org/
+[30]:https://www.raspberrypi.org/blog/sense-hat-emulator/
+[31]:http://sonic-pi.net/
+[32]:http://thonny.org/
+[33]:https://www.python.org/
+[34]:https://scratch.mit.edu/
+[35]:http://rpf.io/x86iso
+[36]:https://system76.com/
+[37]:https://system76.com/pop
From 6123303c391fc5467ae2aecd95609daa37f56d75 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:52:27 +0800
Subject: [PATCH 015/220] =?UTF-8?q?20180421-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...sh Scripts Way More Robust And Reliable.md | 175 ++++++++++++++++++
1 file changed, 175 insertions(+)
create mode 100644 sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
diff --git a/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md b/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
new file mode 100644
index 0000000000..61013f03f4
--- /dev/null
+++ b/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
@@ -0,0 +1,175 @@
+How "Exit Traps" Can Make Your Bash Scripts Way More Robust And Reliable
+============================================================
+
+There is a simple, useful idiom to make your bash scripts more robust - ensuring they always perform necessary cleanup operations, even when something unexpected goes wrong. The secret sauce is a pseudo-signal provided by bash, called EXIT, that you can [trap][1]; commands or functions trapped on it will execute when the script exits for any reason. Let's see how this works.
+
+The basic code structure is like this:
+
+```
+#!/bin/bash
+function finish {
+ # Your cleanup code here
+}
+trap finish EXIT
+```
+
+You place any code that you want to be certain to run in this "finish" function. A good common example: creating a temporary scratch directory, then deleting it after.
+
+```
+#!/bin/bash
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+function finish {
+ rm -rf "$scratch"
+}
+trap finish EXIT
+```
+
+You can then download, generate, slice and dice intermediate or temporary files to the `$scratch` directory to your heart's content. [[1]][2]
+
+```
+# Download every linux kernel ever.... FOR SCIENCE!
+for major in {1..4}; do
+ for minor in {0..99}; do
+ for patchlevel in {0..99}; do
+ tarball="linux-${major}-${minor}-${patchlevel}.tar.bz2"
+ curl -q "http://kernel.org/path/to/$tarball" -o "$scratch/$tarball" || true
+ if [ -f "$scratch/$tarball" ]; then
+ tar jxf "$scratch/$tarball"
+ fi
+ done
+ done
+done
+# magically merge them into some frankenstein kernel ...
+# That done, copy it to a destination
+cp "$scratch/frankenstein-linux.tar.bz2" "$1"
+# Here at script end, the scratch directory is erased automatically
+```
+
+Compare this to how you'd remove the scratch directory without the trap:
+
+```
+#!/bin/bash
+# DON'T DO THIS!
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+
+# Insert dozens or hundreds of lines of code here...
+
+# All done, now remove the directory before we exit
+rm -rf "$scratch"
+```
+
+What's wrong with this? Plenty:
+
+* If some error causes the script to exit prematurely, the scratch directory and its contents don't get deleted. This is a resource leak, and may have security implications too.
+
+* If the script is designed to exit before the end, you must manually copy 'n paste the rm command at each exit point.
+
+* There are maintainability problems as well. If you later add a new in-script exit, it's easy to forget to include the removal - potentially creating mysterious heisenleaks.
+
+### Keeping Services Up, No Matter What
+
+Another scenario: Imagine you are automating some system administration task, requiring you to temporarily stop a server... and you want to be dead certain it starts again at the end, even if there is some runtime error. Then the pattern is:
+
+```
+function finish {
+ # re-start service
+ sudo /etc/init.d/something start
+}
+trap finish EXIT
+sudo /etc/init.d/something stop
+# Do the work...
+
+# Allow the script to end and the trapped finish function to start the
+# daemon back up.
+```
+
+A concrete example: suppose you have MongoDB running on an Ubuntu server, and want a cronned script to temporarily stop the process for some regular maintenance task. The way to handle it is:
+
+```
+function finish {
+ # re-start service
+ sudo service mongodb start
+}
+trap finish EXIT
+# Stop the mongod instance
+sudo service mongodb stop
+# (If mongod is configured to fork, e.g. as part of a replica set, you
+# may instead need to do "sudo killall --wait /usr/bin/mongod".)
+```
+
+### Capping Expensive Resources
+
+There is another situation where the exit trap is very useful: if your script initiates an expensive resource, needed only while the script is executing, and you want to make certain it releases that resource once it's done. For example, suppose you are working with Amazon Web Services (AWS), and want a script that creates a new image.
+
+(If you're not familiar with this: Servers running on the Amazon cloud are called "[instances][3]". Instances are launched from Amazon Machine Images, a.k.a. "AMIs" or "images". AMIs are kind of like a snapshot of a server at a specific moment in time.)
+
+A common pattern for creating custom AMIs looks like:
+
+1. Run an instance (i.e. start a server) from some base AMI.
+
+2. Make some modifications to it, perhaps by copying a script over and then executing it.
+
+3. Create a new image from this now-modified instance.
+
+4. Terminate the running instance, which you no longer need.
+
+That last step is **really important**. If your script fails to terminate the instance, it will keep running and accruing charges to your account. (In the worst case, you won't notice until the end of the month, when your bill is way higher than you expect. Believe me, that's no fun!)
+
+If our AMI-creation is encapsulated in a script, we can set an exit trap to destroy the instance. Let's rely on the EC2 command line tools:
+
+```
+#!/bin/bash
+# define the base AMI ID somehow
+ami=$1
+# Store the temporary instance ID here
+instance=''
+# While we are at it, let me show you another use for a scratch directory.
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+function finish {
+ if [ -n "$instance" ]; then
+ ec2-terminate-instances "$instance"
+ fi
+ rm -rf "$scratch"
+}
+trap finish EXIT
+# This line runs the instance, and stores the program output (which
+# shows the instance ID) in a file in the scratch directory.
+ec2-run-instances "$ami" > "$scratch/run-instance"
+# Now extract the instance ID.
+instance=$(grep '^INSTANCE' "$scratch/run-instance" | cut -f 2)
+```
+
+At this point in the script, the instance (EC2 server) is running [[2]][4]. You can do whatever you like: install software on the instance, modify its configuration programmatically, et cetera, finally creating an image from the final version. The instance will be terminated for you when the script exits - even if some uncaught error causes it to exit early. (Just make sure to block until the image creation process finishes.)
+
+### Plenty Of Uses
+
+I believe what I've covered in this article only scratches the surface; having used this bash pattern for years, I still find new interesting and fun ways to apply it. You will probably discover your own situations where it will help make your bash scripts more reliable.
+
+### Footnotes
+
+1. The -t option to mktemp is optional on Linux, but needed on OS X. Make your scripts using this idiom more portable by including this option.
+
+2. When getting the instance ID, instead of using the scratch file, we could just say: `instance=$(ec2-run-instances "$ami" | grep '^INSTANCE' | cut -f 2)`. But using the scratch file makes the code a bit more readable, leaves us with better logging for debugging, and makes it easy to capture other info from ec2-run-instances's output if we wish.
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Writer, software engineer, and entrepreneur in San Francisco, CA, USA.
+
+Author of [Powerful Python][5] and its [blog][6].
+
+via: http://redsymbol.net/articles/bash-exit-traps/
+
+作者:[aaron maxwell ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://redsymbol.net/
+[1]:http://www.gnu.org/software/bash/manual/bashref.html#index-trap
+[2]:http://redsymbol.net/articles/bash-exit-traps/#footnote-1
+[3]:http://aws.amazon.com/ec2/
+[4]:http://redsymbol.net/articles/bash-exit-traps/#footnote-2
+[5]:https://www.amazon.com/d/0692878971
+[6]:https://powerfulpython.com/blog/
From 29ad5fc5f5c1277528a63c2d0c726a68877787af Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 09:59:18 +0800
Subject: [PATCH 016/220] =?UTF-8?q?20180421-6=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20180416 Cgo and Python.md | 351 ++++++++++++++++++++++++
1 file changed, 351 insertions(+)
create mode 100644 sources/tech/20180416 Cgo and Python.md
diff --git a/sources/tech/20180416 Cgo and Python.md b/sources/tech/20180416 Cgo and Python.md
new file mode 100644
index 0000000000..e5688d43c8
--- /dev/null
+++ b/sources/tech/20180416 Cgo and Python.md
@@ -0,0 +1,351 @@
+Cgo and Python
+============================================================
+
+
+
+
+
+If you look at the [new Datadog Agent][8], you might notice most of the codebase is written in Go, although the checks we use to gather metrics are still written in Python. This is possible because the Datadog Agent, a regular Go binary, [embeds][9] a CPython interpreter that can be called whenever it needs to execute Python code. This process can be made transparent using an abstraction layer so that you can still write idiomatic Go code even when there’s Python running under the hood.
+
+[video](https://youtu.be/yrEi5ezq2-c)
+
+There are a number of reasons why you might want to embed Python in a Go application:
+
+* It is useful during a port; gradually moving portions of an existing Python project to the new language without losing any functionality during the process.
+
+* You can reuse existing Python software or libraries without re-implementing them in the new language.
+
+* You can dynamically extend your software by loading and executing regular Python scripts, even at runtime.
+
+The list could go on, but for the Datadog Agent the last point is crucial: we want you to be able to execute custom checks or change existing ones without forcing you to recompile the Agent, or in general, to compile anything.
+
+Embedding CPython is quite easy and well documented. The interpreter itself is written in C and a C API is provided to programmatically perform operations at a very low level, like creating objects, importing modules, and calling functions.
+
+In this article we’ll show some code examples, and we’ll focus on keeping the Go code idiomatic while interacting with Python at the same time, but before we proceed we need to address a small gap: the embedding API is C but our main application is Go. How can this possibly work?
+
+
+
+### Introducing cgo
+
+There are [a number of good reasons][10] why you might not want to introduce cgo in your stack, but embedding CPython is one of those cases where you must. [Cgo][11] is not a language nor a compiler. It’s a [Foreign Function Interface][12] (FFI), a mechanism we can use in Go to invoke functions and services written in a different language, specifically C.
+
+When we say “cgo” we’re actually referring to a set of tools, libraries, functions, and types that are used by the go toolchain under the hood so we can keep doing `go build` to get our Go binaries. An absolutely minimal example of a program using cgo looks like this:
+
+```
+package main
+
+// #include <float.h>
+import "C"
+import "fmt"
+
+func main() {
+ fmt.Println("Max float value of float is", C.FLT_MAX)
+}
+
+```
+
+The comment block right above the `import "C"` instruction is called a “preamble” and can contain actual C code, in this case a header inclusion. Once imported, the “C” pseudo-package lets us “jump” to the foreign code, accessing the `FLT_MAX` constant. You can build the example by invoking `go build`, the same as if it were plain Go.
+
+If you want to have a look at all the work cgo does under the hood, run `go build -x`. You’ll see the “cgo” tool will be invoked to generate some C and Go modules, then the C and Go compilers will be invoked to build the object modules and finally the linker will put everything together.
+
+You can read more about cgo on the [Go blog][13]. The article contains more examples and a few useful links to get further into details.
+
+Now that we have an idea of what cgo can do for us, let’s see how we can run some Python code using this mechanism.
+
+
+### Embedding CPython: a primer
+
+A Go program that, technically speaking, embeds CPython is not as complicated as you might expect. In fact, at the bare minimum, all we have to do is initialize the interpreter before running any Python code and finalize it when we’re done. Please note that we’re going to use Python 2.x throughout all the examples but everything we’ll see can be applied to Python 3.x as well with very little adaptation. Let’s look at an example:
+
+```
+package main
+
+// #cgo pkg-config: python-2.7
+// #include <Python.h>
+import "C"
+import "fmt"
+
+func main() {
+ C.Py_Initialize()
+ fmt.Println(C.GoString(C.Py_GetVersion()))
+ C.Py_Finalize()
+}
+
+```
+
+The example above does exactly what the following Python code would do:
+
+```
+import sys
+print(sys.version)
+
+```
+
+You can see we put a `#cgo` directive in the preamble; those directives are passed to the toolchain to let you change the build workflow. In this case, we tell cgo to invoke “pkg-config” to gather the flags needed to build and link against a library called “python-2.7” and pass those flags to the C compiler. If you have the CPython development libraries installed in your system along with pkg-config, this would let you keep using a plain `go build` to compile the example above.
+
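+As a quick sanity check (assuming the CPython development package and pkg-config are installed), you can print the exact flags the `#cgo` directive resolves to:
+
+```
+# Compiler and linker flags cgo will pass along for python-2.7
+pkg-config --cflags --libs python-2.7
+```
+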
+Back to the code, we use `Py_Initialize()` and `Py_Finalize()` to set up and shut down the interpreter and the `Py_GetVersion` C function to retrieve the string containing the version information for the embedded interpreter.
+
+If you’re wondering, all the cgo bits we need to put together to invoke the C Python API are boilerplate code. This is why the Datadog Agent relies on [go-python][14] for all the embedding operations; the library provides a thin, Go-friendly wrapper around the C API and hides the cgo details. This is another basic embedding example, this time using go-python:
+
+```
+package main
+
+import (
+ python "github.com/sbinet/go-python"
+)
+
+func main() {
+ python.Initialize()
+ python.PyRun_SimpleString("print 'hello, world!'")
+ python.Finalize()
+}
+
+```
+
+This looks closer to regular Go code: no more cgo exposed, and we can use Go strings back and forth while accessing the Python API. Embedding looks powerful and developer friendly. Time to put the interpreter to good use: let’s try to load a Python module from disk.
+
+We don’t need anything complex on the Python side; the ubiquitous “hello world” will serve the purpose:
+
+```
+# foo.py
+def hello():
+ """
+ Print hello world for fun and profit.
+ """
+ print "hello, world!"
+
+```
+
+The Go code is slightly more complex but still readable:
+
+```
+// main.go
+package main
+
+import "github.com/sbinet/go-python"
+
+func main() {
+ python.Initialize()
+ defer python.Finalize()
+
+ fooModule := python.PyImport_ImportModule("foo")
+ if fooModule == nil {
+ panic("Error importing module")
+ }
+
+ helloFunc := fooModule.GetAttrString("hello")
+ if helloFunc == nil {
+ panic("Error importing function")
+ }
+
+ // The Python function takes no params but when using the C api
+ // we're required to send (empty) *args and **kwargs anyways.
+ helloFunc.Call(python.PyTuple_New(0), python.PyDict_New())
+}
+
+```
+
+Once built, we need to set the `PYTHONPATH` environment variable to the current working dir so that the import statement will be able to find the `foo.py` module. From a shell, the command would look like this:
+
+```
+$ go build main.go && PYTHONPATH=. ./main
+hello, world!
+
+```
+
+
+### The dreadful Global Interpreter Lock
+
+Having to bring in cgo in order to embed Python is a tradeoff: builds will be slower, the Garbage Collector won’t help us manage memory used by the foreign system, and cross compilation will be non-trivial. Whether or not these are concerns for a specific project can be debated, but there’s something I deem not negotiable: the Go concurrency model. If we couldn’t run Python from a goroutine, using Go altogether would make very little sense.
+
+Before playing with concurrency, Python, and cgo, there’s something we need to know: it’s the Global Interpreter Lock, also known as the GIL. The GIL is a mechanism widely adopted in language interpreters (CPython is one of those) preventing more than one thread from running at the same time. This means that no Python program executed by CPython will ever be able to run in parallel within the same process. Concurrency is still possible and in the end, the lock is a good tradeoff between speed, security, and implementation simplicity. So why should this pose a problem when it comes to embedding?
+
+When a regular, non-embedded Python program starts, there’s no GIL involved to avoid useless overhead in locking operations; the GIL starts the first time some Python code requests to spawn a thread. For each thread, the interpreter creates a data structure to store information about the current state and locks the GIL. When the thread has finished, the state is restored and the GIL unlocked, ready to be used by other threads.
+
+When we run Python from a Go program, none of the above happens automatically. Without the GIL, multiple Python threads could be created by our Go program. This could cause a race condition leading to fatal runtime errors, and most likely a segmentation fault bringing down the whole Go application.
+
+The solution to this problem is to explicitly invoke the GIL whenever we run multithreaded code from Go; the code is not complex because the C API provides all the tools we need. To better expose the problem, we need to do something CPU-bound from Python. Let’s add these functions to our foo.py module from the previous example:
+
+```
+# foo.py
+import sys
+
+def print_odds(limit=10):
+ """
+ Print odd numbers < limit
+ """
+ for i in range(limit):
+ if i%2:
+ sys.stderr.write("{}\n".format(i))
+
+def print_even(limit=10):
+ """
+ Print even numbers < limit
+ """
+ for i in range(limit):
+ if i%2 == 0:
+ sys.stderr.write("{}\n".format(i))
+
+```
+
+We’ll try to print odd and even numbers concurrently from Go, using two different goroutines (thus involving threads):
+
+```
+package main
+
+import (
+ "sync"
+
+ "github.com/sbinet/go-python"
+)
+
+func main() {
+ // The following will also create the GIL explicitly
+ // by calling PyEval_InitThreads(), without waiting
+ // for the interpreter to do that
+ python.Initialize()
+
+ var wg sync.WaitGroup
+ wg.Add(2)
+
+ fooModule := python.PyImport_ImportModule("foo")
+ odds := fooModule.GetAttrString("print_odds")
+ even := fooModule.GetAttrString("print_even")
+
+ // Initialize() has locked the GIL but at this point we don't need it
+ // anymore. We save the current state and release the lock
+ // so that goroutines can acquire it
+ state := python.PyEval_SaveThread()
+
+ go func() {
+ _gstate := python.PyGILState_Ensure()
+ odds.Call(python.PyTuple_New(0), python.PyDict_New())
+ python.PyGILState_Release(_gstate)
+
+ wg.Done()
+ }()
+
+ go func() {
+ _gstate := python.PyGILState_Ensure()
+ even.Call(python.PyTuple_New(0), python.PyDict_New())
+ python.PyGILState_Release(_gstate)
+
+ wg.Done()
+ }()
+
+ wg.Wait()
+
+ // At this point we know we won't need Python anymore in this
+ // program, we can restore the state and lock the GIL to perform
+ // the final operations before exiting.
+ python.PyEval_RestoreThread(state)
+ python.Finalize()
+}
+
+```
+
+While reading the example you might note a pattern, the pattern that will become our mantra to run embedded Python code:
+
+1. Save the state and lock the GIL.
+
+2. Do Python.
+
+3. Restore the state and unlock the GIL.
+
+The code should be straightforward but there’s a subtle detail we want to point out: notice that despite following the GIL mantra, in one case we operate on the GIL by calling `PyEval_SaveThread()` and `PyEval_RestoreThread()`, while in another (look inside the goroutines) we do the same with `PyGILState_Ensure()` and `PyGILState_Release()`.
+
+We said that when multithreading is initiated from Python, the interpreter takes care of creating the data structure needed to store the current state, but when the same happens from the C API, we’re responsible for that.
+
+When we initialize the interpreter with go-python, we’re operating in a Python context. So when `PyEval_InitThreads()` is called it initializes the data structure and locks the GIL. We can use `PyEval_SaveThread()` and `PyEval_RestoreThread()` to operate on already existing state.
+
+Inside the goroutines, we’re operating from a Go context and we need to explicitly create the state and remove it when done, which is what `PyGILState_Ensure()` and `PyGILState_Release()` do for us.
+
+
+### Unleash the Gopher
+
+At this point we know how to deal with multithreading Go code executing Python in an embedded interpreter but after the GIL, another challenge is right around the corner: the Go scheduler.
+
+When a goroutine starts, it’s scheduled for execution on one of the `GOMAXPROCS` threads available—[see here][15] for more details on the topic. If a goroutine happens to perform a syscall or call C code, the current thread hands over the other goroutines waiting in the thread queue to another thread so they have a better chance to run; the current goroutine is paused, waiting for the syscall or the C function to return. When this happens, the thread tries to resume the paused goroutine, but if this is not possible, it asks the Go runtime to find another thread to complete the goroutine and goes to sleep. The goroutine is finally scheduled to another thread and it finishes.
+
+With this in mind, let’s see what can happen to a goroutine running some Python code when it’s moved to a new thread:
+
+1. Our goroutine starts, performs a C call, and pauses. The GIL is locked.
+
+2. When the C call returns, the current thread tries to resume the goroutine, but it fails.
+
+3. The current thread tells the Go runtime to find another thread to resume our goroutine.
+
+4. The Go scheduler finds an available thread and the goroutine is resumed.
+
+5. The goroutine is almost done and tries to unlock the GIL before returning.
+
+6. The thread ID stored in the current state is from the original thread and is different from the ID of the current thread.
+
+7. Panic!
+
+Luckily for us, we can force the Go runtime to always keep our goroutine running on the same thread by calling the LockOSThread function from the runtime package from within a goroutine:
+
+```
+go func() {
+ runtime.LockOSThread()
+
+ _gstate := python.PyGILState_Ensure()
+ odds.Call(python.PyTuple_New(0), python.PyDict_New())
+ python.PyGILState_Release(_gstate)
+ wg.Done()
+}()
+
+```
+
+This will interfere with the scheduler and might introduce some overhead, but it’s a price that we’re willing to pay to avoid random panics.
+
+### Conclusions
+
+In order to embed Python, the Datadog Agent has to accept a few tradeoffs:
+
+* The overhead introduced by cgo.
+
+* The task of manually handling the GIL.
+
+* The limitation of binding goroutines to the same thread during execution.
+
+We’re happy to accept each of these for the convenience of running Python checks in Go. But by being conscious of the tradeoffs, we’re able to minimize their effect. Regarding other limitations introduced to support Python, we have a few countermeasures to contain potential issues:
+
+* The build is automated and configurable so that devs still have something very similar to `go build`.
+
+* A lightweight version of the agent can be built by stripping out Python support entirely, simply by using Go build tags.
+
+* Such a version only relies on core checks hardcoded in the agent itself (system and network checks mostly) but is cgo-free and can be cross-compiled.
+
+We’ll re-evaluate our options in the future and decide whether keeping around cgo is still worth it; we could even reconsider whether Python as a whole is still worth it, waiting for the [Go plugin package][16] to be mature enough to support our use case. But for now the embedded Python is working well and transitioning from the old Agent to the new one couldn’t be easier.
+
+Are you a polyglot who loves mixing different programming languages? Do you love learning about the inner workings of languages to make your code more performant? [Join us at Datadog!][17]
+
+--------------------------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/engineering/cgo-and-python/
+
+作者:[ Massimiliano Pippi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://github.com/masci
+[1]:http://twitter.com/share?url=https://www.datadoghq.com/blog/engineering/cgo-and-python/
+[2]:http://www.reddit.com/submit?url=https://www.datadoghq.com/blog/engineering/cgo-and-python/
+[3]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.datadoghq.com/blog/engineering/cgo-and-python/
+[4]:https://www.datadoghq.com/blog/category/under-the-hood
+[5]:https://www.datadoghq.com/blog/tag/agent
+[6]:https://www.datadoghq.com/blog/tag/golang
+[7]:https://www.datadoghq.com/blog/tag/python
+[8]:https://github.com/DataDog/datadog-agent/
+[9]:https://docs.python.org/2/extending/embedding.html
+[10]:https://dave.cheney.net/2016/01/18/cgo-is-not-go
+[11]:https://golang.org/cmd/cgo/
+[12]:https://en.wikipedia.org/wiki/Foreign_function_interface
+[13]:https://blog.golang.org/c-go-cgo
+[14]:https://github.com/sbinet/go-python
+[15]:https://morsmachine.dk/go-scheduler
+[16]:https://golang.org/pkg/plugin/
+[17]:https://www.datadoghq.com/careers/
From a58d40d197326f3ce7d29b01e75ba07fb1e627a0 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:01:53 +0800
Subject: [PATCH 017/220] =?UTF-8?q?20180421-7=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Unit 1.0 An App Server That Supports Go.md | 108 ++++++++++++++++++
1 file changed, 108 insertions(+)
create mode 100644 sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md
diff --git a/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md b/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md
new file mode 100644
index 0000000000..2f86fe2f01
--- /dev/null
+++ b/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md
@@ -0,0 +1,108 @@
+Announcing NGINX Unit 1.0
+============================================================
+
+Today, April 12, marks a significant milestone in the development of [NGINX Unit][8], our dynamic web and application server. Approximately six months after its [first public release][9], we’re now happy to announce that NGINX Unit is generally available and production‑ready. NGINX Unit is our new open source initiative led by Igor Sysoev, creator of the original NGINX Open Source software, which is now used by more than [409 million websites][10].
+
+“I set out to make an application server which will be remotely and dynamically configured, and able to switch dynamically from one language or application version to another,” explains Igor. “Dynamic configuration and switching I saw as being certainly the main problem. People want to reconfigure servers without interrupting client processing.”
+
+NGINX Unit is dynamically configured using a REST API; there is no static configuration file. All configuration changes happen directly in memory. Configuration changes take effect without requiring process reloads or service interruptions.
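+
+To give a rough sense of what this looks like (a hypothetical configuration; the [documentation][16] is authoritative), the entire configuration is a single JSON object uploaded through Unit’s control socket:
+
+```
+{
+    "listeners": {
+        "*:8300": {
+            "application": "blogs"
+        }
+    },
+    "applications": {
+        "blogs": {
+            "type": "php",
+            "root": "/www/blogs/scripts"
+        }
+    }
+}
+
+```
+
+Applying it is a single `PUT` request to the control API, e.g. `curl -X PUT -d @config.json --unix-socket /path/to/control.unit.sock http://localhost/config` (the socket path is a placeholder); the change takes effect in memory, without a restart.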
+
+
+NGINX Unit runs multiple languages simultaneously
+
+“The dynamic switching requires that we can run different languages and language versions in one server,” continues Igor.
+
+As of Release 1.0, NGINX Unit supports Go, Perl, PHP, Python, and Ruby on the same server. Multiple language versions are also supported, so you can, for instance, run applications written for PHP 5 and PHP 7 on the same server. Support for additional languages, including Java, is planned for future NGINX Unit releases.
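+
+For Go specifically, the integration (as described in the Unit documentation of this era; treat the import path and function below as assumptions to verify there) amounts to swapping `http.ListenAndServe` for Unit’s equivalent, so a minimal app looks roughly like this:
+
+```
+package main
+
+import (
+    "fmt"
+    "log"
+    "net/http"
+
+    "nginx/unit" // Unit's Go package; import path per the 1.0-era docs (assumption).
+)
+
+func main() {
+    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+        fmt.Fprintln(w, "Hello from NGINX Unit!")
+    })
+    // Drop-in replacement for http.ListenAndServe; Unit manages the process.
+    log.Fatal(unit.ListenAndServe(":8080", nil))
+}
+
+```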
+
+Note: We have an additional blog post on [how to configure NGINX, NGINX Unit, and WordPress][11] to work together.
+
+Igor studied at Moscow State Technical University, which was a pioneer in the Russian space program, and April 12 has a special significance. “This is the anniversary of the first manned spaceflight in history, made by [Yuri Gagarin][12]. The first public version of NGINX (0.1.0) was released on [October 4, 2004][7], the anniversary of the [Sputnik][13] launch, and NGINX 1.0 was launched on April 12, 2011.”
+
+### What Is NGINX Unit?
+
+NGINX Unit is a dynamic web and application server, suitable for both stand‑alone applications and distributed, microservices application architectures. It launches and scales application processes on demand, executing each application instance in its own secure sandbox.
+
+NGINX Unit manages and routes all incoming network transactions to the application through a separate “router” process, so it can rapidly implement configuration changes without interrupting service.
+
+“The configuration is in JSON format, so users can edit it manually, and it’s very suitable for scripting. We hope to add capabilities to [NGINX Controller][14] and [NGINX Amplify][15] to work with Unit configuration too,” explains Igor.
+
+The NGINX Unit configuration process is described thoroughly in the [documentation][16].
+
+“Now Unit can run Python, PHP, Ruby, Perl and Go – five languages. For example, during our beta, one of our users used Unit to run a number of different PHP platform versions on a single host,” says Igor.
+
+NGINX Unit’s ability to run multiple language runtimes is based on its internal separation between the router process, which terminates incoming HTTP requests, and groups of application processes, which implement the application runtime and execute application code.
+
+
+NGINX Unit architecture
+
+The router process is persistent – it never restarts – meaning that configuration updates can be implemented seamlessly, without any interruption in service. Each application process is deployed in its own sandbox (with support for [Linux control groups][17], or cgroups, under active development), so that NGINX Unit provides secure isolation for user code.
+
+### What’s Next for NGINX Unit?
+
+The next milestones for the NGINX Unit engineering team after Release 1.0 are concerned with HTTP maturity, serving static content, and additional language support.
+
+“We plan to add SSL and HTTP/2 capabilities in Unit,” says Igor. “Also, we plan to support routing in configurations; currently, we have direct mapping from one listen port to one application. We plan to add routing using URIs and hostnames, etc.”
+
+“In addition, we want to add more language support to Unit. We are completing the Ruby implementation, and next we will consider Node.js and Java. Java will be added in a Tomcat‑compatible fashion.”
+
+The end goal for NGINX Unit is to create an open source platform for distributed, polyglot applications which can run application code securely, reliably, and with the best possible performance. The platform will self‑manage, with capabilities such as autoscaling to meet SLAs within resource constraints, and service discovery and internal load balancing to make it easy to create a [service mesh][18].
+
+### NGINX Unit and the NGINX Application Platform
+
+An NGINX Unit platform will typically be delivered with a front‑end tier of NGINX Open Source or NGINX Plus reverse proxies to provide ingress control, edge load balancing, and security. The joint platform (NGINX Unit and NGINX or NGINX Plus) can then be managed fully using NGINX Controller to monitor, configure, and control the entire platform.
+
+
+The NGINX Application Platform is our vision for building microservices
+
+Together, these three components – NGINX Plus, NGINX Unit, and NGINX Controller – make up the [NGINX Application Platform][19]. The NGINX Application Platform is a product suite that delivers load balancing, caching, API management, a WAF, and application serving, with rich management and control planes that simplify the tasks of operating monolithic, microservices, and transitional applications.
+
+### Getting Started with NGINX Unit
+
+NGINX Unit is free and open source. Please see the [installation instructions][20] to get started. We have prebuilt packages for most operating systems, including Ubuntu and Red Hat Enterprise Linux. We also make a [Docker container][21] available on Docker Hub.
+
+The source code is available in our [Mercurial repository][22] and [mirrored on GitHub][23]. The code is available under the Apache 2.0 license. You can compile NGINX Unit yourself on most popular Linux and Unix systems.
+
+If you have any questions, please use the [GitHub issues board][24] or the [NGINX Unit mailing list][25]. We’d love to hear how you are using NGINX Unit, and we welcome [code contributions][26] too.
+
+We’re also happy to extend technical support for NGINX Unit to NGINX Plus customers with Professional or Enterprise support contracts. Please refer to our [Support page][27] for details of the support services we can offer.
+
+--------------------------------------------------------------------------------
+
+via: https://www.nginx.com/blog/nginx-unit-1-0-released/
+
+作者:[www.nginx.com][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:www.nginx.com
+[1]:https://twitter.com/intent/tweet?text=Announcing+NGINX+Unit+1.0+by+%40nginx+https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
+[2]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&summary=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D
+[3]:https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&t=Announcing%20NGINX%20Unit%201.0&text=Today,%20April%2012,%20marks%20a%20significant%20milestone%20in%20the%20development%20of%20NGINX%C2%A0Unit,%20our%20dynamic%20web%20and%20application%20server.%20Approximately%20six%20months%20after%20its%20first%20public%20release,%20we%E2%80%99re%20now%20happy%20to%20announce%20that%20NGINX%C2%A0Unit%20is%20generally%20available%20and%20production%E2%80%91ready.%20NGINX%C2%A0Unit%20is%20our%20new%20open%20source%20initiative%20led%20by%20Igor%C2%A0Sysoev,%20creator%20of%20the%20original%20NGINX%20Open%20Source%20[%E2%80%A6]
+[4]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
+[5]:https://plus.google.com/share?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
+[6]:http://www.reddit.com/submit?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&text=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D
+[7]:http://nginx.org/en/CHANGES
+[8]:https://www.nginx.com/products/nginx-unit/
+[9]:https://www.nginx.com/blog/introducing-nginx-unit/
+[10]:https://news.netcraft.com/archives/2018/03/27/march-2018-web-server-survey.html
+[11]:https://www.nginx.com/blog/installing-wordpress-with-nginx-unit/
+[12]:https://en.wikipedia.org/wiki/Yuri_Gagarin
+[13]:https://en.wikipedia.org/wiki/Sputnik_1
+[14]:https://www.nginx.com/products/nginx-controller/
+[15]:https://www.nginx.com/products/nginx-amplify/
+[16]:http://unit.nginx.org/configuration/
+[17]:https://en.wikipedia.org/wiki/Cgroups
+[18]:https://www.nginx.com/blog/what-is-a-service-mesh/
+[19]:https://www.nginx.com/products
+[20]:http://unit.nginx.org/installation/
+[21]:https://hub.docker.com/r/nginx/unit/
+[22]:http://hg.nginx.org/unit
+[23]:https://github.com/nginx/unit
+[24]:https://github.com/nginx/unit/issues
+[25]:http://mailman.nginx.org/mailman/listinfo/unit
+[26]:https://unit.nginx.org/contribution/
+[27]:https://www.nginx.com/support
+[28]:https://www.nginx.com/blog/tag/releases/
+[29]:https://www.nginx.com/blog/tag/nginx-unit/
From 094dbe7accf567513969e713550eedea5109ebc5 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:04:32 +0800
Subject: [PATCH 018/220] =?UTF-8?q?20180421-8=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...180330 Go on very small hardware Part 1.md | 506 ++++++++++++++++++
1 file changed, 506 insertions(+)
create mode 100644 sources/tech/20180330 Go on very small hardware Part 1.md
diff --git a/sources/tech/20180330 Go on very small hardware Part 1.md b/sources/tech/20180330 Go on very small hardware Part 1.md
new file mode 100644
index 0000000000..3ca498ada3
--- /dev/null
+++ b/sources/tech/20180330 Go on very small hardware Part 1.md
@@ -0,0 +1,506 @@
+Go on very small hardware (Part 1)
+============================================================
+
+
+How low can we _Go_ and still do something useful?
+
+I recently bought this ridiculously cheap board:
+
+*(photo of the board)*
+
+I bought it for three reasons. First, I have never dealt (as a programmer) with the STM32F0 series. Second, the STM32F10x series is getting old; MCUs belonging to the STM32F0 family are just as cheap, if not cheaper, and have newer peripherals, with many improvements and bugs fixed. Third, I chose the smallest member of the family for the purpose of this article, to make the whole thing a little more intriguing.
+
+### The Hardware
+
+The [STM32F030F4P6][3] is an impressive piece of hardware:
+
+* CPU: [Cortex M0][1] 48 MHz (only 12000 logic gates, in minimal configuration),
+
+* RAM: 4 KB,
+
+* Flash: 16 KB,
+
+* ADC, SPI, I2C, USART and a couple of timers,
+
+all enclosed in a TSSOP20 package. As you can see, it is a very small 32-bit system.
+
+### The Software
+
+If you hoped to see how to use [genuine Go][4] to program this board, you need to read the hardware specification one more time. You must face the truth: there is a negligible chance that anyone will ever add support for Cortex-M0 to the Go compiler, and even that would be just the beginning of the work.
+
+I’ll use [Emgo][5], but don’t worry, you will see that it gives you as much Go as it can on such a small system.
+
+There was no support for any F0 MCU in [stm32/hal][6] before this board arrived. After a brief study of the [reference manual][7], the STM32F0 series appeared to be a stripped-down STM32F3 series, which made the work on the new port easier.
+
+If you want to follow the subsequent steps of this post, you need to install Emgo:
+
+```
+cd $HOME
+git clone https://github.com/ziutek/emgo/
+cd emgo/egc
+go install
+
+```
+
+and set a couple of environment variables:
+
+```
+export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc
+export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld
+export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar
+
+export EGROOT=$HOME/emgo/egroot
+export EGPATH=$HOME/emgo/egpath
+
+export EGARCH=cortexm0
+export EGOS=noos
+export EGTARGET=f030x6
+
+```
+
+A more detailed description can be found on the [Emgo website][8].
+
+Ensure that egc is on your PATH. You can use `go build` instead of `go install` and copy egc to your _$HOME/bin_ or _/usr/local/bin_ .
+
+Now create a new directory for your first Emgo program and copy the example linker script there:
+
+```
+mkdir $HOME/firstemgo
+cd $HOME/firstemgo
+cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld .
+
+```
+
+### Minimal program
+
+Let’s create a minimal program in the _main.go_ file:
+
+```
+package main
+
+func main() {
+}
+
+```
+
+It’s actually minimal and compiles without any problem:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 7452 172 104 7728 1e30 cortexm0.elf
+
+```
+
+The first compilation can take some time. The resulting binary takes 7624 bytes of Flash (text+data), quite a lot for a program that does nothing. There are 8760 free bytes left to do something useful.
+
+What about the traditional _Hello, World!_ code:
+
+```
+package main
+
+import "fmt"
+
+func main() {
+ fmt.Println("Hello, World!")
+}
+
+```
+
+Unfortunately, this time things went worse:
+
+```
+$ egc
+/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash'
+/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes
+exit status 1
+
+```
+
+ _Hello, World!_ requires at least an STM32F030x6, with its 32 KB of Flash.
+
+The _fmt_ package forces the inclusion of the whole _strconv_ and _reflect_ packages. All three are pretty big, even as the slimmed-down versions in Emgo. We must forget about it. There are many applications that don’t require fancy formatted text output. Often one or more LEDs or a seven-segment display are enough. However, in Part 2, I’ll try to use the _strconv_ package to format and print some numbers and text over the UART.
+
+### Blinky
+
+Our board has one LED connected between PA4 pin and VCC. This time we need a bit more code:
+
+```
+package main
+
+import (
+ "delay"
+
+ "stm32/hal/gpio"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+)
+
+var led gpio.Pin
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(false)
+ led = gpio.A.Pin(4)
+
+ cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
+ led.Setup(cfg)
+}
+
+func main() {
+ for {
+ led.Clear()
+ delay.Millisec(100)
+ led.Set()
+ delay.Millisec(900)
+ }
+}
+
+```
+
+By convention, the _init_ function is used to initialize the basic things and configure peripherals.
+
+`system.SetupPLL(8, 1, 48/8)` configures the RCC to use the PLL with an external 8 MHz oscillator as the system clock source. The PLL divider is set to 1, the multiplier to 48/8 = 6, which gives a 48 MHz system clock.
+
+`systick.Setup(2e6)` sets up the Cortex-M SYSTICK timer as the system timer, which runs the scheduler every 2e6 nanoseconds (500 times per second).
+
+`gpio.A.EnableClock(false)` enables the clock for GPIO port A. _False_ means that this clock should be disabled in low-power mode, but this is not implemented in the STM32F0 series.
+
+`led.Setup(cfg)` sets up the PA4 pin as an open-drain output.
+
+`led.Clear()` sets the PA4 pin low, which, in the open-drain configuration, turns the LED on.
+
+`led.Set()` sets the PA4 pin to the high-impedance state, which turns the LED off.
+
+Let’s compile this code:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 9772 172 168 10112 2780 cortexm0.elf
+
+```
+
+As you can see, blinky takes 2320 bytes more than the minimal program. There are still 6440 bytes left for more code.
+
+Let’s see if it works:
+
+```
+$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
+Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
+Licensed under GNU GPL v2
+For bug reports, read
+ http://openocd.org/doc/doxygen/bugs.html
+debug_level: 0
+adapter speed: 1000 kHz
+adapter_nsrst_delay: 100
+none separate
+adapter speed: 950 kHz
+target halted due to debug-request, current mode: Thread
+xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0
+adapter speed: 4000 kHz
+** Programming Started **
+auto erase enabled
+target halted due to breakpoint, current mode: Thread
+xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0
+wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s)
+** Programming Finished **
+adapter speed: 950 kHz
+
+```
+
+For this article, for the first time in my life, I converted a short video to an [animated PNG][9] sequence. I’m impressed; goodbye YouTube and sorry, IE users. See [apngasm][10] for more info. I should study an HTML5-based alternative, but for now, APNG is my preferred way for short looped videos.
+
+
+
+### More Go
+
+If you aren’t a Go programmer but you’ve heard something about the Go language, you can say: “This syntax is nice, but not a significant improvement over C. Show me the _Go language_, give me _channels_ and _goroutines!_”.
+
+Here you are:
+
+```
+package main
+
+import (
+ "delay"
+
+ "stm32/hal/gpio"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+)
+
+var led1, led2 gpio.Pin
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(false)
+ led1 = gpio.A.Pin(4)
+ led2 = gpio.A.Pin(5)
+
+ cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
+ led1.Setup(cfg)
+ led2.Setup(cfg)
+}
+
+func blinky(led gpio.Pin, period int) {
+ for {
+ led.Clear()
+ delay.Millisec(100)
+ led.Set()
+ delay.Millisec(period - 100)
+ }
+}
+
+func main() {
+ go blinky(led1, 500)
+ blinky(led2, 1000)
+}
+
+```
+
+The code changes are minor: a second LED was added, and the previous _main_ function was renamed to _blinky_ and now requires two parameters. _Main_ starts the first _blinky_ in a new goroutine, so both LEDs are handled _concurrently_ . It is worth mentioning that the _gpio.Pin_ type supports concurrent access to different pins of the same GPIO port.
+
+Emgo still has several shortcomings. One of them is that you have to specify the maximum number of goroutines (tasks) in advance. It’s time to edit _script.ld_ :
+
+```
+ISRStack = 1024;
+MainStack = 1024;
+TaskStack = 1024;
+MaxTasks = 2;
+
+INCLUDE stm32/f030x4
+INCLUDE stm32/loadflash
+INCLUDE noos-cortexm
+
+```
+
+The sizes of the stacks were set by guesswork, and we won’t care about them for the moment.
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 10020 172 172 10364 287c cortexm0.elf
+
+```
+
+Another LED and a goroutine cost 248 bytes of Flash.
+
+
+
+### Channels
+
+Channels are the [preferred way][11] in Go to communicate between goroutines. Emgo goes even further and allows _buffered_ channels to be used by _interrupt handlers_ . The next example actually shows such a case.
+
+```
+package main
+
+import (
+ "delay"
+ "rtos"
+
+ "stm32/hal/gpio"
+ "stm32/hal/irq"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+ "stm32/hal/tim"
+)
+
+var (
+ leds [3]gpio.Pin
+ timer *tim.Periph
+ ch = make(chan int, 1)
+)
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(false)
+ leds[0] = gpio.A.Pin(4)
+ leds[1] = gpio.A.Pin(5)
+ leds[2] = gpio.A.Pin(9)
+
+ cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
+ for _, led := range leds {
+ led.Set()
+ led.Setup(cfg)
+ }
+
+ timer = tim.TIM3
+ pclk := timer.Bus().Clock()
+ if pclk < system.AHB.Clock() {
+ pclk *= 2
+ }
+ freq := uint(1e3) // Hz
+ timer.EnableClock(true)
+ timer.PSC.Store(tim.PSC(pclk/freq - 1))
+ timer.ARR.Store(700) // ms
+ timer.DIER.Store(tim.UIE)
+ timer.CR1.Store(tim.CEN)
+
+ rtos.IRQ(irq.TIM3).Enable()
+}
+
+func blinky(led gpio.Pin, period int) {
+ for range ch {
+ led.Clear()
+ delay.Millisec(100)
+ led.Set()
+ delay.Millisec(period - 100)
+ }
+}
+
+func main() {
+ go blinky(leds[1], 500)
+ blinky(leds[2], 500)
+}
+
+func timerISR() {
+ timer.SR.Store(0)
+ leds[0].Set()
+ select {
+ case ch <- 0:
+ // Success
+ default:
+ leds[0].Clear()
+ }
+}
+
+//c:__attribute__((section(".ISRs")))
+var ISRs = [...]func(){
+ irq.TIM3: timerISR,
+}
+
+```
+
+Changes compared to the previous example:
+
+1. A third LED was added and connected to the PA9 pin (the TXD pin on the UART header).
+
+2. The timer (TIM3) has been introduced as a source of interrupts.
+
+3. The new _timerISR_ function handles _irq.TIM3_ interrupt.
+
+4. The new buffered channel with capacity 1 is intended for communication between the _timerISR_ and the _blinky_ goroutines.
+
+5. The _ISRs_ array acts as the _interrupt vector table_ , a part of the bigger _exception vector table_ .
+
+6. The _blinky’s for statement_ was replaced with a _range statement_ .
+
+For convenience, all LEDs, or rather their pins, have been collected in the _leds_ array. Additionally, all pins have been set to a known initial state (high), just before they were configured as outputs.
+
+In this case, we want the timer to tick at 1 kHz. To configure the TIM3 prescaler, we need to know its input clock frequency. According to the reference manual, the input clock frequency is equal to APBCLK when APBCLK = AHBCLK; otherwise, it is equal to 2 × APBCLK. For example, with a 48 MHz timer input clock and freq = 1 kHz, `pclk/freq - 1` gives a prescaler value of 47999, so the CNT register is incremented exactly once per millisecond.
+
+If the CNT register is incremented at 1 kHz, then the value of the ARR register corresponds to the period of the counter’s _update event_ (reload event) expressed in milliseconds. To make the update event generate interrupts, the UIE bit in the DIER register must be set. The CEN bit enables the timer.
+
+The timer peripheral should stay enabled in low-power mode, to keep ticking when the CPU is put to sleep: `timer.EnableClock(true)`. This doesn’t matter in the case of the STM32F0, but it’s important for code portability.
+
+The _timerISR_ function handles _irq.TIM3_ interrupt requests. `timer.SR.Store(0)` clears all event flags in the SR register to deassert the IRQ to the [NVIC][12]. The rule of thumb is to clear the interrupt flags immediately at the beginning of their handler, because of the IRQ deassert latency. This prevents calling the handler again without justification. For absolute certainty, a clear-read sequence should be performed, but in our case, just clearing is enough.
+
+The following code:
+
+```
+select {
+case ch <- 0:
+ // Success
+default:
+ leds[0].Clear()
+}
+
+```
+
+is the Go way to send on a channel without blocking. No interrupt handler can afford to wait for free space in the channel. If the channel is full, the default case is taken, and the onboard LED is turned on until the next interrupt.
+
+The _ISRs_ array contains the interrupt vectors. The `//c:__attribute__((section(".ISRs")))` directive causes the linker to insert it into the .ISRs section.
+
+The new form of _blinky’s for_ loop:
+
+```
+for range ch {
+ led.Clear()
+ delay.Millisec(100)
+ led.Set()
+ delay.Millisec(period - 100)
+}
+
+```
+
+is the equivalent of:
+
+```
+for {
+ _, ok := <-ch
+ if !ok {
+ break // Channel closed.
+ }
+ led.Clear()
+ delay.Millisec(100)
+ led.Set()
+ delay.Millisec(period - 100)
+}
+
+```
+
+Note that in this case we aren’t interested in the value received from the channel. We’re interested only in the fact that there is something to receive. We can express this by declaring the channel’s element type as the empty struct `struct{}` instead of _int_ and sending `struct{}{}` values instead of 0, but it can look strange to a newcomer’s eyes — see the sketch below.
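+
+As an aside, here is a minimal sketch of that `struct{}` variant, reusing `timer` and `leds` from the example above (only the changed pieces are shown):
+
+```
+var ch = make(chan struct{}, 1) // The empty struct carries no data, only the signal.
+
+func timerISR() {
+    timer.SR.Store(0)
+    leds[0].Set()
+    select {
+    case ch <- struct{}{}: // Send a zero-size value without blocking.
+        // Success
+    default:
+        leds[0].Clear() // Channel full: signal the overrun.
+    }
+}
+
+```
+
+The receiving side, `for range ch`, doesn’t change at all.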
+
+Let’s get back to the original example and compile it:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 11096 228 188 11512 2cf8 cortexm0.elf
+
+```
+
+This new example takes 11324 bytes of Flash, 1132 bytes more than the previous one.
+
+With the current timings, both _blinky_ goroutines consume from the channel much faster than the _timerISR_ sends to it. So they both wait for new data simultaneously and you can observe the randomness of _select_ , required by the [Go specification][13].
+
+
+
+The onboard LED is always off, so the channel overrun never occurs.
+
+Let’s speed up sending, by changing `timer.ARR.Store(700)` to `timer.ARR.Store(200)`. Now the _timerISR_ sends 5 messages per second but both recipients together can receive only 4 messages per second.
+
+
+
+As you can see, the _timerISR_ lights the yellow LED, which means there is no space in the channel.
+
+This is where I finish the first part of this article. You should know that this part didn’t show you the most important thing in the Go language: _interfaces_ .
+
+Goroutines and channels are only nice and convenient syntax. You can replace them with your own code (not easy, but feasible). Interfaces are the essence of Go, and that’s what I will start with in the [second part][14] of this article.
+
+We still have some free space on Flash.
+
+--------------------------------------------------------------------------------
+
+via: https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
+
+作者:[Michał Derkacz][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ziutek.github.io/
+[1]:https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0
+[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
+[3]:http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html
+[4]:https://golang.org/
+[5]:https://github.com/ziutek/emgo
+[6]:https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal
+[7]:http://www.st.com/resource/en/reference_manual/dm00091010.pdf
+[8]:https://github.com/ziutek/emgo
+[9]:https://en.wikipedia.org/wiki/APNG
+[10]:http://apngasm.sourceforge.net/
+[11]:https://blog.golang.org/share-memory-by-communicating
+[12]:http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html
+[13]:https://golang.org/ref/spec#Select_statements
+[14]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html
From 40070a18b62dbd123df2f718d5946b21e020f740 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:06:10 +0800
Subject: [PATCH 019/220] =?UTF-8?q?20180421-9=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...180414 Go on very small hardware Part 2.md | 969 ++++++++++++++++++
1 file changed, 969 insertions(+)
create mode 100644 sources/tech/20180414 Go on very small hardware Part 2.md
diff --git a/sources/tech/20180414 Go on very small hardware Part 2.md b/sources/tech/20180414 Go on very small hardware Part 2.md
new file mode 100644
index 0000000000..8ebfb263f1
--- /dev/null
+++ b/sources/tech/20180414 Go on very small hardware Part 2.md
@@ -0,0 +1,969 @@
+Go on very small hardware (Part 2)
+============================================================
+
+
+At the end of the [first part][2] of this article I promised to write something about _interfaces_ . I don’t want to write a complete, or even brief, lecture about interfaces here. Instead, I’ll show a simple example of how to define and use an interface, and then how to take advantage of the ubiquitous _io.Writer_ interface. There will also be a few words about _reflection_ and _semihosting_ .
+
+Interfaces are a crucial part of the Go language. If you want to learn more about them, I suggest reading [Effective Go][3] and [Russ Cox’s article][4].
+
+### Concurrent Blinky – revisited
+
+When you read the code of the previous examples, you probably noticed the counterintuitive way of turning the LED on or off. The _Set_ method was used to turn the LED off and the _Clear_ method was used to turn the LED on. This is due to driving the LEDs in the open-drain configuration. What can we do to make the code less confusing? Let’s define the _LED_ type with _On_ and _Off_ methods:
+
+```
+type LED struct {
+ pin gpio.Pin
+}
+
+func (led LED) On() {
+ led.pin.Clear()
+}
+
+func (led LED) Off() {
+ led.pin.Set()
+}
+
+```
+
+Now we can simply call `led.On()` and `led.Off()`, which no longer raise any doubts.
+
+In all previous examples, I tried to use the same open-drain configuration so as not to complicate the code. But in the last example, it would be easier for me to connect the third LED between the GND and PA3 pins and configure PA3 in push-pull mode. The next example will use an LED connected this way.
+
+But our new _LED_ type doesn’t support the push-pull configuration. In fact, we should call it _OpenDrainLED_ and define another type, _PushPullLED_ :
+
+```
+type PushPullLED struct {
+ pin gpio.Pin
+}
+
+func (led PushPullLED) On() {
+ led.pin.Set()
+}
+
+func (led PushPullLED) Off() {
+ led.pin.Clear()
+}
+
+```
+
+Note that both types have the same methods, which work the same way from the caller’s point of view. It would be nice if the code that operates on LEDs could use both types, without paying attention to which one it uses at the moment. The _interface type_ comes to the rescue:
+
+```
+package main
+
+import (
+ "delay"
+
+ "stm32/hal/gpio"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+)
+
+type LED interface {
+ On()
+ Off()
+}
+
+type PushPullLED struct{ pin gpio.Pin }
+
+func (led PushPullLED) On() {
+ led.pin.Set()
+}
+
+func (led PushPullLED) Off() {
+ led.pin.Clear()
+}
+
+func MakePushPullLED(pin gpio.Pin) PushPullLED {
+ pin.Setup(&gpio.Config{Mode: gpio.Out, Driver: gpio.PushPull})
+ return PushPullLED{pin}
+}
+
+type OpenDrainLED struct{ pin gpio.Pin }
+
+func (led OpenDrainLED) On() {
+ led.pin.Clear()
+}
+
+func (led OpenDrainLED) Off() {
+ led.pin.Set()
+}
+
+func MakeOpenDrainLED(pin gpio.Pin) OpenDrainLED {
+ pin.Setup(&gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain})
+ return OpenDrainLED{pin}
+}
+
+var led1, led2 LED
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(false)
+ led1 = MakeOpenDrainLED(gpio.A.Pin(4))
+ led2 = MakePushPullLED(gpio.A.Pin(3))
+}
+
+func blinky(led LED, period int) {
+ for {
+ led.On()
+ delay.Millisec(100)
+ led.Off()
+ delay.Millisec(period - 100)
+ }
+}
+
+func main() {
+ go blinky(led1, 500)
+ blinky(led2, 1000)
+}
+
+```
+
+We’ve defined the _LED_ interface, which has two methods: _On_ and _Off_ . The _PushPullLED_ and _OpenDrainLED_ types represent two ways of driving LEDs. We also defined two _Make*LED_ functions, which act as constructors. Both types implement the _LED_ interface, so values of these types can be assigned to variables of type _LED_ :
+
+```
+led1 = MakeOpenDrainLED(gpio.A.Pin(4))
+led2 = MakePushPullLED(gpio.A.Pin(3))
+
+```
+
+In this case, assignability is checked at compile time. After the assignment, the _led1_ variable contains `OpenDrainLED{gpio.A.Pin(4)}` and a pointer to the method set of the _OpenDrainLED_ type. The `led1.On()` call roughly corresponds to the following C code:
+
+```
+led1.methods->On(led1.value)
+
+```
+
+As you can see, this is a quite inexpensive abstraction, if we consider only the function call overhead.
+
+But any assignment to an interface causes a lot of information about the assigned type to be included in the binary. There can be a lot of information in the case of a complex type that consists of many other types:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 10356 196 212 10764 2a0c cortexm0.elf
+
+```
+
+If we don’t use [reflection][5], we can save some bytes by avoiding the inclusion of the names of types and struct fields:
+
+```
+$ egc -nf -nt
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 10312 196 212 10720 29e0 cortexm0.elf
+
+```
+
+The resulting binary still contains some necessary information about types and full information about all exported methods (with names). This information is needed for checking assignability at runtime, mainly when you assign a value stored in an interface variable to a variable of some other type.
+
+We can also remove type and field names from imported packages by recompiling them all:
+
+```
+$ cd $HOME/emgo
+$ ./clean.sh
+$ cd $HOME/firstemgo
+$ egc -nf -nt
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 10272 196 212 10680 29b8 cortexm0.elf
+
+```
+
+Let’s load this program to see whether it works as expected. This time we’ll use the [st-flash][6] command:
+
+```
+$ arm-none-eabi-objcopy -O binary cortexm0.elf cortexm0.bin
+$ st-flash write cortexm0.bin 0x8000000
+st-flash 1.4.0-33-gd76e3c7
+2018-04-10T22:04:34 INFO usb.c: -- exit_dfu_mode
+2018-04-10T22:04:34 INFO common.c: Loading device parameters....
+2018-04-10T22:04:34 INFO common.c: Device connected is: F0 small device, id 0x10006444
+2018-04-10T22:04:34 INFO common.c: SRAM size: 0x1000 bytes (4 KiB), Flash: 0x4000 bytes (16 KiB) in pages of 1024 bytes
+2018-04-10T22:04:34 INFO common.c: Attempting to write 10468 (0x28e4) bytes to stm32 address: 134217728 (0x8000000)
+Flash page at addr: 0x08002800 erased
+2018-04-10T22:04:34 INFO common.c: Finished erasing 11 pages of 1024 (0x400) bytes
+2018-04-10T22:04:34 INFO common.c: Starting Flash write for VL/F0/F3/F1_XL core id
+2018-04-10T22:04:34 INFO flash_loader.c: Successfully loaded flash loader in sram
+ 11/11 pages written
+2018-04-10T22:04:35 INFO common.c: Starting verification of write complete
+2018-04-10T22:04:35 INFO common.c: Flash written and verified! jolly good!
+
+```
+
+I didn’t connect the NRST signal to the programmer, so the _--reset_ option couldn’t be used and the reset button had to be pressed to run the program.
+
+
+
+It seems that _st-flash_ works a bit unreliably with this board (it often requires resetting the ST-LINK dongle). Additionally, the current version doesn’t issue the reset command over SWD (it uses only the NRST signal). The software reset isn’t reliable; however, it usually works, and its absence is an inconvenience. For this board-programmer pair, _OpenOCD_ works much better.
+
+### UART
+
+UART (Universal Asynchronous Receiver-Transmitter) is still one of the most important peripherals of today’s microcontrollers. Its advantage is a unique combination of the following properties:
+
+* relatively high speed,
+
+* only two signal lines (even one in case of half-duplex communication),
+
+* symmetry of roles,
+
+* synchronous in-band signaling about new data (start bit),
+
+* accurate timing inside transmitted word.
+
+This means that the UART, originally intended to transmit asynchronous messages consisting of 7-9 bit words, can also be used to efficiently implement various other physical protocols, such as those used by [WS28xx LEDs][7] or [1-Wire][8] devices.
+
+However, we will use the UART in its usual role: printing text messages from our program.
+
+```
+package main
+
+import (
+ "io"
+ "rtos"
+
+ "stm32/hal/dma"
+ "stm32/hal/gpio"
+ "stm32/hal/irq"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+ "stm32/hal/usart"
+)
+
+var tts *usart.Driver
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(true)
+ tx := gpio.A.Pin(9)
+
+ tx.Setup(&gpio.Config{Mode: gpio.Alt})
+ tx.SetAltFunc(gpio.USART1_AF1)
+ d := dma.DMA1
+ d.EnableClock(true)
+ tts = usart.NewDriver(usart.USART1, d.Channel(2, 0), nil, nil)
+ tts.Periph().EnableClock(true)
+ tts.Periph().SetBaudRate(115200)
+ tts.Periph().Enable()
+ tts.EnableTx()
+
+ rtos.IRQ(irq.USART1).Enable()
+ rtos.IRQ(irq.DMA1_Channel2_3).Enable()
+}
+
+func main() {
+	tts.WriteString("Hello, World!\r\n")
+}
+
+func ttsISR() {
+ tts.ISR()
+}
+
+func ttsDMAISR() {
+ tts.TxDMAISR()
+}
+
+//c:__attribute__((section(".ISRs")))
+var ISRs = [...]func(){
+ irq.USART1: ttsISR,
+ irq.DMA1_Channel2_3: ttsDMAISR,
+}
+
+```
+
+You may find this code slightly complicated, but for now there is no simpler UART driver in the STM32 HAL (a simple polling driver will probably be useful in some cases). The _usart.Driver_ is an efficient driver that uses DMA and interrupts to offload the CPU.
+
+The STM32 USART peripheral provides a traditional UART and its synchronous counterpart. To use it as output, we have to connect its Tx signal to the right GPIO pin:
+
+```
+tx.Setup(&gpio.Config{Mode: gpio.Alt})
+tx.SetAltFunc(gpio.USART1_AF1)
+
+```
+
+The _usart.Driver_ is configured in Tx-only mode (rxdma and rxbuf are set to nil):
+
+```
+tts = usart.NewDriver(usart.USART1, d.Channel(2, 0), nil, nil)
+
+```
+
+We use its _WriteString_ method to print the famous sentence. Let’s clean everything and compile this program:
+
+```
+$ cd $HOME/emgo
+$ ./clean.sh
+$ cd $HOME/firstemgo
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 12728 236 176 13140 3354 cortexm0.elf
+
+```
+
+To see anything, you need a UART peripheral on the PC side.
+
+**Do not use RS232 port or USB to RS232 converter!**
+
+The STM32 family uses 3.3 V logic, but RS232 can produce voltages from -15 V to +15 V, which will probably damage your MCU. You need a USB-to-UART converter that uses 3.3 V logic. Popular converters are based on FT232 or CP2102 chips.
+
+
+
+You also need some terminal emulator program (I prefer [picocom][9]). Flash the new image, run the terminal emulator and press the reset button a few times:
+
+```
+$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
+Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
+Licensed under GNU GPL v2
+For bug reports, read
+ http://openocd.org/doc/doxygen/bugs.html
+debug_level: 0
+adapter speed: 1000 kHz
+adapter_nsrst_delay: 100
+none separate
+adapter speed: 950 kHz
+target halted due to debug-request, current mode: Thread
+xPSR: 0xc1000000 pc: 0x080016f4 msp: 0x20000a20
+adapter speed: 4000 kHz
+** Programming Started **
+auto erase enabled
+target halted due to breakpoint, current mode: Thread
+xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000a20
+wrote 13312 bytes from file cortexm0.elf in 1.020185s (12.743 KiB/s)
+** Programming Finished **
+adapter speed: 950 kHz
+$
+$ picocom -b 115200 /dev/ttyUSB0
+picocom v3.1
+
+port is : /dev/ttyUSB0
+flowcontrol : none
+baudrate is : 115200
+parity is : none
+databits are : 8
+stopbits are : 1
+escape is : C-a
+local echo is : no
+noinit is : no
+noreset is : no
+hangup is : no
+nolock is : no
+send_cmd is : sz -vv
+receive_cmd is : rz -vv -E
+imap is :
+omap is :
+emap is : crcrlf,delbs,
+logfile is : none
+initstring : none
+exit_after is : not set
+exit is : no
+
+Type [C-a] [C-h] to see available commands
+Terminal ready
+Hello, World!
+Hello, World!
+Hello, World!
+
+```
+
+Every press of the reset button produces a new “Hello, World!” line. Everything works as expected.
+
+To see bidirectional UART code for this MCU, check out [this example][10].
+
+### io.Writer
+
+The _io.Writer_ interface is probably the second most commonly used interface type in Go, right after the _error_ interface. Its definition looks like this:
+
+```
+type Writer interface {
+ Write(p []byte) (n int, err error)
+}
+
+```
+
+ _usart.Driver_ implements _io.Writer_ so we can replace:
+
+```
+tts.WriteString("Hello, World!\r\n")
+
+```
+
+with
+
+```
+io.WriteString(tts, "Hello, World!\r\n")
+
+```
+
+Additionally, you need to add the _io_ package to the _import_ section.
+
+The declaration of _io.WriteString_ function looks as follows:
+
+```
+func WriteString(w Writer, s string) (n int, err error)
+
+```
+
+As you can see, _io.WriteString_ allows writing strings to any type that implements the _io.Writer_ interface. Internally, it checks whether the underlying type has a _WriteString_ method and uses it instead of _Write_ if available.
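+
+As a rough illustration of how that check works (a simplified sketch in the spirit of the standard library, not Emgo’s exact source), given the _Writer_ definition above:
+
+```
+// stringWriter is the optional fast-path interface checked by WriteString.
+type stringWriter interface {
+    WriteString(s string) (n int, err error)
+}
+
+func WriteString(w Writer, s string) (n int, err error) {
+    if sw, ok := w.(stringWriter); ok {
+        return sw.WriteString(s) // Fast path: avoids the []byte conversion.
+    }
+    return w.Write([]byte(s))
+}
+
+```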
+
+Let’s compile the modified program:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 15456 320 248 16024 3e98 cortexm0.elf
+
+```
+
+As you can see, _io.WriteString_ causes a significant increase in the size of the binary: 15776 - 12964 = 2812 bytes. There isn’t too much space left on the Flash. What caused such a drastic increase in size?
+
+Using the command:
+
+```
+arm-none-eabi-nm --print-size --size-sort --radix=d cortexm0.elf
+
+```
+
+we can print all symbols for both cases, ordered by size. By filtering and analyzing the obtained data (awk, diff), we can find about 80 new symbols. The largest of them are:
+
+```
+> 00000062 T stm32$hal$usart$Driver$DisableRx
+> 00000072 T stm32$hal$usart$Driver$RxDMAISR
+> 00000076 T internal$Type$Implements
+> 00000080 T stm32$hal$usart$Driver$EnableRx
+> 00000084 t errors$New
+> 00000096 R $8$stm32$hal$usart$Driver$$
+> 00000100 T stm32$hal$usart$Error$Error
+> 00000360 T io$WriteString
+> 00000660 T stm32$hal$usart$Driver$Read
+
+```
+
+So, even though we don’t use the _usart.Driver.Read_ method, it was compiled in, the same as _DisableRx_ , _RxDMAISR_ , _EnableRx_ , and others not mentioned above. Unfortunately, if you assign something to an interface, its full method set is required (with all dependencies). This isn’t a problem for large programs that use most of the methods anyway. But for our simple one, it’s a huge burden.
+
+We’re already close to the limits of our MCU, but let’s try to print some numbers (you need to replace the _io_ package with _strconv_ in the _import_ section):
+
+```
+func main() {
+ a := 12
+ b := -123
+
+ tts.WriteString("a = ")
+ strconv.WriteInt(tts, a, 10, 0, 0)
+ tts.WriteString("\r\n")
+ tts.WriteString("b = ")
+ strconv.WriteInt(tts, b, 10, 0, 0)
+ tts.WriteString("\r\n")
+
+ tts.WriteString("hex(a) = ")
+ strconv.WriteInt(tts, a, 16, 0, 0)
+ tts.WriteString("\r\n")
+ tts.WriteString("hex(b) = ")
+ strconv.WriteInt(tts, b, 16, 0, 0)
+ tts.WriteString("\r\n")
+}
+
+```
+
+As in the case of the _io.WriteString_ function, the first argument of _strconv.WriteInt_ is of type _io.Writer_ .
+
+```
+$ egc
+/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/firstemgo/cortexm0.elf section `.rodata' will not fit in region `Flash'
+/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 692 bytes
+exit status 1
+
+```
+
+This time we’ve run out of space. Let’s try to slim down the information about types:
+
+```
+$ cd $HOME/emgo
+$ ./clean.sh
+$ cd $HOME/firstemgo
+$ egc -nf -nt
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 15876 316 320 16512 4080 cortexm0.elf
+
+```
+
+It was close, but we fit. Let’s load and run this code:
+
+```
+a = 12
+b = -123
+hex(a) = c
+hex(b) = -7b
+
+```
+
+The _strconv_ package in Emgo is quite different from its archetype in Go. It is intended for direct use in writing formatted numbers, and in many cases it can replace the heavy _fmt_ package. That’s why the function names start with _Write_ instead of _Format_ and have two additional parameters. Below is an example of their use:
+
+```
+func main() {
+ b := -123
+ strconv.WriteInt(tts, b, 10, 0, 0)
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, 6, ' ')
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, 6, '0')
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, 6, '.')
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, -6, ' ')
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, -6, '0')
+ tts.WriteString("\r\n")
+ strconv.WriteInt(tts, b, 10, -6, '.')
+ tts.WriteString("\r\n")
+}
+
+```
+
+Here is its output:
+
+```
+-123
+ -123
+-00123
+..-123
+-123
+-123
+-123..
+
+```
+
+### Unix streams and Morse code
+
+Thanks to the fact that most functions that write something use _io.Writer_ instead of a concrete type (e.g. _FILE_ in C), we get functionality similar to _Unix streams_ . In Unix, we can easily combine simple commands to perform bigger tasks. For example, we can write text to a file this way:
+
+```
+echo "Hello, World!" > file.txt
+
+```
+
+The `>` operator writes the output stream of the preceding command to a file. There is also the `|` operator, which connects the output and input streams of adjacent commands.
+
+Thanks to streams, we can easily convert/filter the output of any command. For example, to convert all letters to uppercase, we can filter echo’s output through the _tr_ command:
+
+```
+echo "Hello, World!" | tr a-z A-Z > file.txt
+
+```
+
+To show the analogy between _io.Writer_ and Unix streams, let’s write our:
+
+```
+io.WriteString(tts, "Hello, World!\r\n")
+
+```
+
+in the following pseudo-Unix form:
+
+```
+io.WriteString "Hello, World!" | usart.Driver usart.USART1
+
+```
+
+The next example will show how to do this:
+
+```
+io.WriteString "Hello, World!" | MorseWriter | usart.Driver usart.USART1
+
+```
+
+Let’s create a simple encoder that encodes the text written to it as Morse code:
+
+```
+type MorseWriter struct {
+ W io.Writer
+}
+
+func (w *MorseWriter) Write(s []byte) (int, error) {
+ var buf [8]byte
+ for n, c := range s {
+ switch {
+ case c == '\n':
+ c = ' ' // Replace new lines with spaces.
+ case 'a' <= c && c <= 'z':
+ c -= 'a' - 'A' // Convert to upper case.
+ }
+ if c < ' ' || 'Z' < c {
+ continue // c is outside ASCII [' ', 'Z']
+ }
+ var symbol morseSymbol
+ if c == ' ' {
+ symbol.length = 1
+ buf[0] = ' '
+ } else {
+ symbol = morseSymbols[c-'!']
+ for i := uint(0); i < uint(symbol.length); i++ {
+ if (symbol.code>>i)&1 != 0 {
+ buf[i] = '-'
+ } else {
+ buf[i] = '.'
+ }
+ }
+ }
+ buf[symbol.length] = ' '
+ if _, err := w.W.Write(buf[:symbol.length+1]); err != nil {
+ return n, err
+ }
+ }
+ return len(s), nil
+}
+
+type morseSymbol struct {
+ code, length byte
+}
+
+//emgo:const
+var morseSymbols = [...]morseSymbol{
+ {1<<0 | 1<<1 | 1<<2, 4}, // ! ---.
+ {1<<1 | 1<<4, 6}, // " .-..-.
+ {}, // #
+ {1<<3 | 1<<6, 7}, // $ ...-..-
+
+ // Some code omitted...
+
+ {1<<0 | 1<<3, 4}, // X -..-
+ {1<<0 | 1<<2 | 1<<3, 4}, // Y -.--
+ {1<<0 | 1<<1, 4}, // Z --..
+}
+
+```
+
+You can find the full _morseSymbols_ array [here][11]. The `//emgo:const` directive ensures that the _morseSymbols_ array won’t be copied to RAM.
+
+Now we can print our sentence in two ways:
+
+```
+func main() {
+ s := "Hello, World!\r\n"
+ mw := &MorseWriter{tts}
+
+ io.WriteString(tts, s)
+ io.WriteString(mw, s)
+}
+
+```
+
+We use a pointer to the _MorseWriter_ , `&MorseWriter{tts}`, instead of a simple `MorseWriter{tts}` value, because the _MorseWriter_ is too big to fit into an interface variable.
+
+Emgo, unlike Go, doesn’t dynamically allocate memory for values stored in interface variables. The interface type has a limited size, equal to the size of three pointers (to fit a _slice_ ) or two _float64_ (to fit a _complex128_ ), whichever is bigger. It can directly store values of all basic types and small structs/arrays, but for bigger values you must use pointers.
+
+Let’s compile this code and see its output:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 15152 324 248 15724 3d6c cortexm0.elf
+
+```
+
+```
+Hello, World!
+.... . .-.. .-.. --- --..-- .-- --- .-. .-.. -.. ---.
+
+```
+
+### The Ultimate Blinky
+
+The _Blinky_ is the hardware equivalent of the _Hello, World!_ program. Once we have a Morse encoder, we can easily combine both to obtain the _Ultimate Blinky_ program:
+
+```
+package main
+
+import (
+ "delay"
+ "io"
+
+ "stm32/hal/gpio"
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+)
+
+var led gpio.Pin
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ gpio.A.EnableClock(false)
+ led = gpio.A.Pin(4)
+
+ cfg := gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain, Speed: gpio.Low}
+ led.Setup(&cfg)
+}
+
+type Telegraph struct {
+ Pin gpio.Pin
+ Dotms int // Dot length [ms]
+}
+
+func (t Telegraph) Write(s []byte) (int, error) {
+ for _, c := range s {
+ switch c {
+ case '.':
+ t.Pin.Clear()
+ delay.Millisec(t.Dotms)
+ t.Pin.Set()
+ delay.Millisec(t.Dotms)
+ case '-':
+ t.Pin.Clear()
+ delay.Millisec(3 * t.Dotms)
+ t.Pin.Set()
+ delay.Millisec(t.Dotms)
+ case ' ':
+ delay.Millisec(3 * t.Dotms)
+ }
+ }
+ return len(s), nil
+}
+
+func main() {
+ telegraph := &MorseWriter{Telegraph{led, 100}}
+ for {
+ io.WriteString(telegraph, "Hello, World! ")
+ }
+}
+
+// Some code omitted...
+
+```
+
+In the above example, I omitted the definition of the _MorseWriter_ type because it was shown earlier. The full version is available [here][12]. Let’s compile and run it:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 11772 244 244 12260 2fe4 cortexm0.elf
+
+```
+
+
+
+### Reflection
+
+Yes, Emgo supports [reflection][13]. The _reflect_ package isn’t complete yet, but what is done is enough to implement the _fmt.Print_ family of functions. Let’s see what we can do on our small MCU.
+
+To reduce memory usage, we will use [semihosting][14] as the standard output. For convenience, we’ll also write a simple _println_ function that to some extent mimics _fmt.Println_ .
+
+```
+package main
+
+import (
+ "debug/semihosting"
+ "reflect"
+ "strconv"
+
+ "stm32/hal/system"
+ "stm32/hal/system/timer/systick"
+)
+
+var stdout semihosting.File
+
+func init() {
+ system.SetupPLL(8, 1, 48/8)
+ systick.Setup(2e6)
+
+ var err error
+ stdout, err = semihosting.OpenFile(":tt", semihosting.W)
+ for err != nil {
+ }
+}
+
+type stringer interface {
+ String() string
+}
+
+func println(args ...interface{}) {
+ for i, a := range args {
+ if i > 0 {
+ stdout.WriteString(" ")
+ }
+ switch v := a.(type) {
+ case string:
+ stdout.WriteString(v)
+ case int:
+ strconv.WriteInt(stdout, v, 10, 0, 0)
+ case bool:
+ strconv.WriteBool(stdout, v, 't', 0, 0)
+ case stringer:
+ stdout.WriteString(v.String())
+ default:
+ stdout.WriteString("%unknown")
+ }
+ }
+ stdout.WriteString("\r\n")
+}
+
+type S struct {
+ A int
+ B bool
+}
+
+func main() {
+ p := &S{-123, true}
+
+ v := reflect.ValueOf(p)
+
+ println("kind(p) =", v.Kind())
+ println("kind(*p) =", v.Elem().Kind())
+ println("type(*p) =", v.Elem().Type())
+
+ v = v.Elem()
+
+ println("*p = {")
+ for i := 0; i < v.NumField(); i++ {
+ ft := v.Type().Field(i)
+ fv := v.Field(i)
+ println(" ", ft.Name(), ":", fv.Interface())
+ }
+ println("}")
+}
+
+```
+
+The _semihosting.OpenFile_ function allows a file to be opened/created on the host side. The special path _:tt_ corresponds to the host’s standard output.
+
+The _println_ function accepts an arbitrary number of arguments, each of an arbitrary type:
+
+```
+func println(args ...interface{})
+
+```
+
+It’s possible because any type implements the empty interface _interface{}_ . The _println_ function uses a [type switch][15] to print strings, integers, and booleans:
+
+```
+switch v := a.(type) {
+case string:
+ stdout.WriteString(v)
+case int:
+ strconv.WriteInt(stdout, v, 10, 0, 0)
+case bool:
+ strconv.WriteBool(stdout, v, 't', 0, 0)
+case stringer:
+ stdout.WriteString(v.String())
+default:
+ stdout.WriteString("%unknown")
+}
+
+```
+
+Additionally, it supports any type that implements the _stringer_ interface, that is, any type that has a _String()_ method. In each _case_ clause, the _v_ variable has the right type, the same as listed after the _case_ keyword.
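+
+For example (plain Go, independent of Emgo specifics), any type with a suitable _String_ method can be passed to _println_ and is printed through the _stringer_ case:
+
+```
+type Direction int
+
+// String makes Direction satisfy the stringer interface used by println.
+func (d Direction) String() string {
+    if d == 0 {
+        return "north"
+    }
+    return "south"
+}
+
+// println("heading:", Direction(0)) prints: heading: north
+
+```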
+
+The `reflect.ValueOf(p)` call returns _p_ in a form that allows its type and content to be analyzed programmatically. As you can see, we can even dereference pointers using `v.Elem()` and print all struct fields with their names.
+
+Let’s try to compile this code. For now, let’s see what comes out if it’s compiled without type and field names:
+
+```
+$ egc -nt -nf
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 16028 216 312 16556 40ac cortexm0.elf
+
+```
+
+Only 140 free bytes left on the Flash. Let’s load it using OpenOCD with semihosting enabled:
+
+```
+$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; arm semihosting enable; reset run'
+Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
+Licensed under GNU GPL v2
+For bug reports, read
+ http://openocd.org/doc/doxygen/bugs.html
+debug_level: 0
+adapter speed: 1000 kHz
+adapter_nsrst_delay: 100
+none separate
+adapter speed: 950 kHz
+target halted due to debug-request, current mode: Thread
+xPSR: 0xc1000000 pc: 0x08002338 msp: 0x20000a20
+adapter speed: 4000 kHz
+** Programming Started **
+auto erase enabled
+target halted due to breakpoint, current mode: Thread
+xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000a20
+wrote 16384 bytes from file cortexm0.elf in 0.700133s (22.853 KiB/s)
+** Programming Finished **
+semihosting is enabled
+adapter speed: 950 kHz
+kind(p) = ptr
+kind(*p) = struct
+type(*p) =
+*p = {
+ X. : -123
+ X. : true
+}
+
+```
+
+If you’ve actually run this code, you’ve noticed that semihosting is slow, especially if you write byte after byte (buffering helps).
+
+As you can see, there is no type name for `*p` , and all struct fields have the same _X._ name. Let’s compile this program again, this time without the _-nt -nf_ options:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+ text data bss dec hex filename
+ 16052 216 312 16580 40c4 cortexm0.elf
+
+```
+
+Now the type and field names have been included, but only those defined in the ~~_main.go_ file~~ _main_ package. The output of our program looks as follows:
+
+```
+kind(p) = ptr
+kind(*p) = struct
+type(*p) = S
+*p = {
+ A : -123
+ B : true
+}
+
+```
+
+Reflection is a crucial part of any easy-to-use serialization library, and serialization formats like [JSON][16] gain in importance in the IoT era.
+
+This is where I finish the second part of this article. I think there is a chance for a third, more entertaining part, where we connect various interesting devices to this board. If this board can’t handle them, we’ll replace it with something a little bigger.
+
+--------------------------------------------------------------------------------
+
+via: https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html
+
+作者:[Michał Derkacz][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ziutek.github.io/
+[1]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html
+[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
+[3]:https://golang.org/doc/effective_go.html#interfaces
+[4]:https://research.swtch.com/interfaces
+[5]:https://blog.golang.org/laws-of-reflection
+[6]:https://github.com/texane/stlink
+[7]:http://www.world-semi.com/solution/list-4-1.html
+[8]:https://en.wikipedia.org/wiki/1-Wire
+[9]:https://github.com/npat-efault/picocom
+[10]:https://github.com/ziutek/emgo/blob/master/egpath/src/stm32/examples/f030-demo-board/usart/main.go
+[11]:https://github.com/ziutek/emgo/blob/master/egpath/src/stm32/examples/f030-demo-board/morseuart/main.go
+[12]:https://github.com/ziutek/emgo/blob/master/egpath/src/stm32/examples/f030-demo-board/morseled/main.go
+[13]:https://blog.golang.org/laws-of-reflection
+[14]:http://infocenter.arm.com/help/topic/com.arm.doc.dui0471g/Bgbjjgij.html
+[15]:https://golang.org/doc/effective_go.html#type_switch
+[16]:https://en.wikipedia.org/wiki/JSON
From 710c134ef76b6c38ac9d4c458a5ad6d5c53a6942 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:15:24 +0800
Subject: [PATCH 020/220] =?UTF-8?q?20180421-10=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...oyment with sample Face Recognition App.md | 1109 +++++++++++++++++
1 file changed, 1109 insertions(+)
create mode 100644 sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md
diff --git a/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md b/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md
new file mode 100644
index 0000000000..6971261590
--- /dev/null
+++ b/sources/tech/20180315 Kubernetes distributed application deployment with sample Face Recognition App.md
@@ -0,0 +1,1109 @@
+Kubernetes distributed application deployment with sample Face Recognition App
+============================================================
+
+# Intro
+
+
+Alright folks. Settle in. This is going to be a long, but hopefully, fun ride.
+
+I’m going to deploy a distributed application with [Kubernetes][5]. I tried to write an app that is as close to real-world as possible, but obviously I cut some corners because of time and energy constraints.
+
+My focus will be on Kubernetes and deployment.
+
+Shall we?
+
+# The Application
+
+### TL;DR
+
+
+
+The application itself consists of six parts. The repository can be found here: [Kube Cluster Sample][6].
+
+It is a face recognition service which identifies images of people, comparing them to known individuals. A simple frontend displays a table of these images and whom they belong to. This happens by sending a request to a [receiver][7]. The request contains a path to an image. The image could be located anywhere. The receiver stores this path in the DB (MySQL) and sends a processing request to a queue. The queue uses [NSQ][8]. The request contains the ID of the saved image.
+
+An [Image Processing][9] service constantly monitors the queue for jobs to do. The processing consists of the following steps: taking the ID, loading the image, and sending the path of the image to a [face recognition][10] backend, written in Python, via [gRPC][11]. If the identification is successful, the backend returns the name of the image corresponding to that person. The image_processor then updates the image record with the person ID and marks the image as processed successfully. If identification is unsuccessful, the image is left as pending. If there was a failure during identification, the image is flagged as failed.
+
+Failed images can be re-tried with a cron job, for example.
+
+So how does this all work? Let’s dive in.
+
+### Receiver
+
+The receiver service is the starting point of the process. It’s an API which receives a request in the following format:
+
+```
+curl -d '{"path":"/unknown_images/unknown0001.jpg"}' http://127.0.0.1:8000/image/post
+
+```
+
+At this point, the receiver stores the path using a shared database cluster, and the entity then receives an ID from the database service. This application is based on the model where unique identification for entity objects is provided by the persistence layer. Once the ID is acquired, the receiver sends a message to NSQ. The receiver’s job is done at this point.
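+
+The real code lives in the [receiver][7] repository; purely as an illustration, a hypothetical, condensed sketch of this flow (persist the path, take the auto-generated ID, publish it to the local nsqd) could look like the following. The table and column names are made up, and the go-nsq and go-sql-driver client libraries are assumed:
+
+```
+package main
+
+import (
+    "database/sql"
+    "log"
+    "strconv"
+
+    _ "github.com/go-sql-driver/mysql"
+    nsq "github.com/nsqio/go-nsq"
+)
+
+func main() {
+    db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/kube")
+    if err != nil {
+        log.Fatal(err)
+    }
+    // The persistence layer hands out the unique ID for the entity.
+    res, err := db.Exec("INSERT INTO images (path, status) VALUES (?, ?)",
+        "/unknown_people/unknown0001.jpg", 0)
+    if err != nil {
+        log.Fatal(err)
+    }
+    id, err := res.LastInsertId()
+    if err != nil {
+        log.Fatal(err)
+    }
+    // The NSQ daemon runs next to the receiver, so we publish to localhost.
+    producer, err := nsq.NewProducer("127.0.0.1:4150", nsq.NewConfig())
+    if err != nil {
+        log.Fatal(err)
+    }
+    if err := producer.Publish("images", []byte(strconv.FormatInt(id, 10))); err != nil {
+        log.Fatal(err)
+    }
+}
+
+```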
+
+### Image Processor
+
+Here is where the fun begins. When the Image Processor first runs, it creates two goroutines. These are…
+
+### Consume
+
+This is an NSQ consumer. It has three jobs. First, it listens for messages on the queue. Second, when there is a message, it appends the received ID to a thread-safe slice of IDs that the second routine processes. Lastly, it signals the second routine that there is work to do. It does that through [sync.Cond][12].
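+
+A minimal sketch of that signaling pattern (with hypothetical names, not the actual repository code; the waiting side shown here is what the next routine does) could look like this:
+
+```
+package processor
+
+import "sync"
+
+// workQueue is shared by the two goroutines: the consumer appends IDs
+// and signals; the processing routine waits on the condition variable
+// and drains the slice.
+type workQueue struct {
+    mu   sync.Mutex
+    cond *sync.Cond
+    ids  []int
+}
+
+func newWorkQueue() *workQueue {
+    q := &workQueue{}
+    q.cond = sync.NewCond(&q.mu)
+    return q
+}
+
+// add is called from the NSQ consumer callback.
+func (q *workQueue) add(id int) {
+    q.mu.Lock()
+    q.ids = append(q.ids, id)
+    q.mu.Unlock()
+    q.cond.Signal() // wake up the processing routine
+}
+
+// drain is run by the processing goroutine; it suspends in Wait
+// whenever there is nothing to do.
+func (q *workQueue) drain(process func(id int)) {
+    for {
+        q.mu.Lock()
+        for len(q.ids) == 0 {
+            q.cond.Wait()
+        }
+        id := q.ids[0]
+        q.ids = q.ids[1:]
+        q.mu.Unlock()
+        process(id)
+    }
+}
+
+```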
+
+### ProcessImages
+
+This routine processes the slice of IDs until the slice is drained completely. Once the slice is drained, the routine suspends instead of sleep-waiting on a channel. Processing a single ID goes through the following steps, in order:
+
+* Establish a gRPC connection to the Face Recognition service (explained under Face Recognition)
+
+* Retrieve the image record from the database
+
+* Set up two functions for the [Circuit Breaker][1]
+ * Function 1: The main function which does the RPC method call
+
+ * Function 2: A health check for the Ping of the circuit breaker
+
+* Call Function 1 which sends the path of the image to the face recognition service. This path should be accessible by the face recognition service. Preferably something shared, like an NFS
+
+* If this call fails, update the image record as FAILEDPROCESSING
+
+* If it succeeds, an image name should come back which corresponds to a person in the db. It runs a joined SQL query which gets the corresponding person’s id
+
+* Update the Image record in the database with PROCESSED status and the ID of the person that image was identified as
+
+This service can be replicated, meaning more than one instance can run at the same time.
+
+### Circuit Breaker
+
+In a system where replicating resources requires little to no effort, there can still be cases where, for example, the network goes down or there are communication problems of some kind between two services. I implemented a little circuit breaker around the gRPC calls, mostly for fun.
+
+This is how it works:
+
+
+
+Once there are five unsuccessful calls to the service, the circuit breaker activates and doesn’t allow any more calls to go through. After a configured amount of time, it sends a health check to the service to see if it’s back up. If that still errors out, it increases the timeout. If not, it closes the circuit again and allows traffic to proceed.
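+
+A rough sketch of such a breaker (just the idea, not the actual implementation from the repository) could look like this:
+
+```
+package breaker
+
+import (
+    "errors"
+    "sync"
+    "time"
+)
+
+var ErrOpen = errors.New("circuit open")
+
+// Breaker blocks calls after 5 consecutive failures, then periodically
+// health-checks the service, backing off while the check keeps failing.
+type Breaker struct {
+    mu       sync.Mutex
+    failures int
+    open     bool
+    timeout  time.Duration
+    call     func() error // Function 1: the real RPC
+    ping     func() error // Function 2: the health check
+}
+
+func New(call, ping func() error) *Breaker {
+    return &Breaker{call: call, ping: ping, timeout: time.Second}
+}
+
+func (b *Breaker) Do() error {
+    b.mu.Lock()
+    if b.open {
+        b.mu.Unlock()
+        return ErrOpen
+    }
+    b.mu.Unlock()
+    err := b.call()
+    b.mu.Lock()
+    defer b.mu.Unlock()
+    if err == nil {
+        b.failures = 0
+        return nil
+    }
+    b.failures++
+    if b.failures >= 5 && !b.open {
+        b.open = true
+        go b.recover()
+    }
+    return err
+}
+
+func (b *Breaker) recover() {
+    for {
+        time.Sleep(b.timeout)
+        if b.ping() == nil { // service is back: close the circuit again
+            b.mu.Lock()
+            b.open, b.failures = false, 0
+            b.mu.Unlock()
+            return
+        }
+        b.timeout *= 2 // still failing: increase the timeout
+    }
+}
+
+```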
+
+### Front-End
+
+This is only a simplistic table view, using Go’s own html/template to render a list of images.
+
+### Face Recognition
+
+Here is where the identification magic happens. I decided to make this a gRPC-based service for the sole purpose of flexibility. I started writing it in Go but decided that a Python implementation would be much shorter. In fact, not counting the gRPC code, the recognition part is about seven lines of code. I’m using the fantastic [Face Recognition][13] library, which comes with all the needed C bindings. Having an API contract here means that I can change the implementation at any time, as long as it adheres to the contract.
+
+Note that there is a great Go library that I was about to use, but it has yet to ship the needed C bindings. It’s called [GoCV][14]. Go check it out. It has some pretty amazing features, like real-time camera feed processing with only a couple of lines of code.
+
+How the Python library works is simple in nature. Have a set of images of people you know and keep a record of them. In this case, I have a folder with a couple of images named `hannibal_1.jpg, hannibal_2.jpg, gergely_1.jpg, john_doe.jpg`. In the database, I have two tables named `person` and `person_images`. They look like this:
+
+```
++----+----------+
+| id | name |
++----+----------+
+| 1 | Gergely |
+| 2 | John Doe |
+| 3 | Hannibal |
++----+----------+
++----+----------------+-----------+
+| id | image_name | person_id |
++----+----------------+-----------+
+| 1 | hannibal_1.jpg | 3 |
+| 2 | hannibal_2.jpg | 3 |
++----+----------------+-----------+
+
+```
+
+The face recognition library returns the name of the image that the unknown image matches. After that, a simple joined query like this one returns the person in question:
+
+```
+select person.name, person.id from person inner join person_images as pi on person.id = pi.person_id where image_name = 'hannibal_2.jpg';
+
+```
+
+The gRPC call returns the ID of the person, which is then used to update the image’s `person` column.
+
+### NSQ
+
+NSQ is a little Go-based queue. It scales well and has a minimal footprint on the system. It has a lookup service that consumers use to receive messages, and a daemon that senders use to send messages.
+
+NSQ’s philosophy is that the daemon should run with the sender application. That way, the sender sends to localhost only. But the daemon is connected to the lookup service and that’s how they achieve a global queue.
+
+This means that there are as many NSQ daemons deployed as there are senders. Because the daemon has a minuscule resource requirement, it won’t interfere with the requirements of the main application.
+
+### Configuration
+
+In order to be as flexible as possible, and to make use of Kubernetes’ ConfigMaps, I’m using .env files in development to store configuration such as the location of the database service or NSQ’s lookup address. In production, meaning the Kubernetes environment, I’ll use environment properties.
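+
+In Go, that usually boils down to a tiny helper along these lines (a hypothetical example; the property name is only illustrative):
+
+```
+package config
+
+import "os"
+
+// getenv returns the environment property if it is set (the Kubernetes
+// case) and falls back to a development default otherwise.
+func getenv(key, fallback string) string {
+    if v := os.Getenv(key); v != "" {
+        return v
+    }
+    return fallback
+}
+
+// Usage: dbHost := getenv("MYSQL_CONNECTION", "127.0.0.1")
+
+```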
+
+### Conclusion for the Application
+
+And that’s all there is to the architecture of the application we are about to deploy. All of its components are changeable and only coupled through the database, a queue and gRPC. This is imperative when deploying a distributed application because of how updating mechanics work. I will cover that part in the Deployment section.
+
+# Deployment with Kubernetes
+
+### Basics
+
+What is Kubernetes?
+
+I’ll cover some basics here, although I won’t go too much into details as that would require a whole book like this one: [Kubernetes Up And Running][15]. Also, you can look at the documentation if you are daring enough: [Kubernetes Documentation][16].
+
+Kubernetes is a containerized service and application manager. It scales easily and employs a swarm of containers; more importantly, it’s highly configurable via YAML-based template files. People compare Kubernetes to Docker Swarm, but Kubernetes does way more than that. For example, it’s container agnostic: you could use LXC with Kubernetes, and it would work the same way as with Docker. It provides a layer above managing a cluster of deployed services and applications. How? Let’s take a quick look at the building blocks of Kubernetes.
+
+In Kubernetes, you describe a desired state of the application, and Kubernetes will do what it can to reach that state. States can be something like deployed, paused, replicated twice, and so on.
+
+One of the basics of Kubernetes is that it uses Labels and Annotations for all of its components. Services, Deployments, ReplicaSets, DaemonSets: everything is labelled. Consider the following scenario: in order to identify which pod belongs to which application, a label is used, say `app: myapp`. Let’s assume you have two containers of this application deployed. If you removed the label `app` from one of them, Kubernetes would only detect one, and thus would launch a new instance of `myapp`.
+
+### Kubernetes Cluster
+
+For Kubernetes to work, a Kubernetes cluster needs to be present. Setting one up might be a bit painful, but luckily help is available. Minikube sets up a single-node cluster for us locally, and AWS has a beta offering in the form of a managed Kubernetes cluster where the only thing you need to do is request nodes and define your deployments. The Kubernetes cluster components are documented here: [Kubernetes Cluster Components][17].
+
+### Nodes
+
+A Node is a worker machine. It can be anything from a VM to a physical machine, including all sorts of cloud-provided VMs.
+
+### Pods
+
+Pods are a logically grouped collection of containers, meaning one Pod can potentially house a multitude of containers. A Pod gets its own DNS and virtual IP address after it has been created, so Kubernetes can load balance traffic to it. You rarely have to deal with containers directly. Even when debugging (like looking at logs), you usually invoke `kubectl logs deployment/your-app -f` instead of looking at a specific container, although that is possible with `-c container_name`. The `-f` does a tail on the log.
+
+### Deployments
+
+When creating any kind of resource in Kubernetes, it will use a Deployment in the background. A Deployment describes a desired state of the current application. It’s an object you can use to update Pods or a Service to be in a different state: do an update, or roll out a new version of your app. You don’t directly control a ReplicaSet (described later); you control the Deployment object, which creates and manages a ReplicaSet.
+
+### Services
+
+By default, a Pod gets an IP address. However, since Pods are a volatile thing in Kubernetes, you need something more permanent. A queue, mysql, an internal API, a frontend: these need to be long-running and sit behind a static, unchanging IP, or preferably a DNS record.
+
+For this purpose, Kubernetes has Services for which you can define modes of accessibility. Load Balanced, simple IP or internal DNS.
+
+How does Kubernetes know if a service is running correctly? You can configure health checks and availability checks. A health check verifies that a container is running, but that doesn’t mean your service is running. For that, you have the availability check, which pings a different endpoint in your application.
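+
+In Go, the two endpoints could be sketched like this (the paths and the dependency check are illustrative; Kubernetes only needs some endpoint to probe):
+
+```
+package main
+
+import (
+    "log"
+    "net/http"
+)
+
+// pingDatabase stands in for a real dependency check.
+func pingDatabase() error { return nil }
+
+func main() {
+    // Liveness: the process is up.
+    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
+        w.WriteHeader(http.StatusOK)
+    })
+    // Readiness: the service can actually take traffic.
+    http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
+        if err := pingDatabase(); err != nil {
+            http.Error(w, "not ready", http.StatusServiceUnavailable)
+            return
+        }
+        w.WriteHeader(http.StatusOK)
+    })
+    log.Fatal(http.ListenAndServe(":8000", nil))
+}
+
+```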
+
+Since Services are pretty important, I recommend that you read up on them later here: [Services][18]. Fair warning, this is quite dense: 24 A4 pages of networking, services, and discovery. But it’s essential to understand if you want to use Kubernetes seriously in production.
+
+### DNS / Service Discovery
+
+If you create a service in the cluster, that service gets a DNS record in Kubernetes, provided by the special Kubernetes deployments kube-proxy and kube-dns. These two provide service discovery inside a cluster. If you have a mysql service running and set `clusterIP: None`, then everyone in the cluster can reach that service by pinging `mysql.default.svc.cluster.local`. Here:
+
+* `mysql` – is the name of the service
+
+* `default` – is the namespace name
+
+* `svc` – is services
+
+* `cluster.local` – is a local cluster domain
+
+The domain can be changed by using a custom definition. To access a service outside the cluster, a DNS provider has to be used and Nginx (for example) to bind an IP address to a record. The public IP address of a service can be queried with the following commands:
+
+* NodePort – `kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services mysql`
+
+* LoadBalancer – `kubectl get -o jsonpath="{.spec.ports[0].LoadBalancer}" services mysql`
+
+### Template Files
+
+Like Docker Compose, Terraform, and other service management tools, Kubernetes provides infrastructure-describing templates, which means you rarely have to do anything by hand.
+
+For example consider the following yaml template which describes an nginx Deployment:
+
+```
+apiVersion: apps/v1
+kind: Deployment #(1)
+metadata: #(2)
+ name: nginx-deployment
+ labels: #(3)
+ app: nginx
+spec: #(4)
+ replicas: 3 #(5)
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers: #(6)
+ - name: nginx
+ image: nginx:1.7.9
+ ports:
+ - containerPort: 80
+
+```
+
+This is a simple deployment where we do the following:
+
+* (1) Define the type of the template with kind
+
+* (2) Add metadata that will identify this deployment and every resource that it would create with a label (3)
+
+* (4) Then comes the spec which describes the desired state
+ * (5) For the nginx app have 3 replicas
+
+ * (6) This is the template definition for the containers that this Pod will contain
+
+ * nginx named container
+
+ * nginx:1.7.9 image (docker in this case)
+
+ * exposed ports
+
+### ReplicaSet
+
+A ReplicaSet is a low-level replication manager. It ensures that the correct number of replicas is running for an application. However, Deployments sit at a higher level and should always manage ReplicaSets. You rarely have to use ReplicaSets directly, unless you have a fringe case where you want to control the specifics of replication.
+
+### DaemonSet
+
+Remember how I said Kubernetes uses Labels all the time? A DaemonSet is a controller that ensures that a daemonized application is always running on a node with a certain label.
+
+For example, say you want all the nodes labelled `logger` or `mission_critical` to run a logger/auditing service daemon. You create a DaemonSet and give it a node selector called `logger` or `mission_critical`. Kubernetes will look for nodes that have that label and always ensure that an instance of that daemon is running on them. Thus everything running on such a node has local access to that daemon.
+
+In the case of my application, the NSQ daemon could be a DaemonSet. I would ensure it’s up on the nodes which have the receiver component running, by labelling a node with `receiver` and specifying a DaemonSet with the `receiver` application selector.
+
+The DaemonSet has all the benefits of the ReplicaSet: it’s scalable and managed by Kubernetes, which means all lifecycle events are handled by Kube, ensuring it never dies, and if it does die, it is immediately replaced.
+
+### Scaling
+
+In Kubernetes it’s trivial to scale. The ReplicaSets take care of the number of instances of a Pod to run, as you saw in the nginx deployment with the setting `replicas: 3`. It’s up to us to write our application in a way that allows Kubernetes to run multiple copies of it.
+
+Of course the settings are vast. You can specify that the replicates must run on different Nodes, or various waiting times on how long to wait for an instance to come up. You can read up more on this subject here: [Horizontal Scaling][19] and here: [Interactive Scaling with Kubernetes][20] and of course the details of a [ReplicaSet][21] which controls all the scaling made possible in Kubernetes.
+
+### Conclusion for Kubernetes
+
+It’s a convenient tool for handling container orchestration. Its unit of work is the Pod, and it has a layered architecture. The top-level layer is Deployments, through which you handle all other resources. It’s highly configurable. It provides an API for all the calls you make, so potentially, instead of running `kubectl`, you can write your own logic to send information to the Kubernetes API.
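+
+For example, with `kubectl proxy` running (it serves the API on 127.0.0.1:8001 by default and handles authentication for you), the same information `kubectl get pods` shows can be fetched over plain HTTP. A small sketch:
+
+```
+package main
+
+import (
+    "io"
+    "log"
+    "net/http"
+    "os"
+)
+
+func main() {
+    // kubectl proxy exposes the API server locally; this lists the
+    // pods in the default namespace as raw JSON.
+    resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
+    if err != nil {
+        log.Fatal(err)
+    }
+    defer resp.Body.Close()
+    if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
+        log.Fatal(err)
+    }
+}
+
+```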
+
+By now, it provides native support for all major cloud providers, and it’s completely open source. Feel free to contribute, and check the code if you would like a deeper understanding of how it works: [Kubernetes on Github][22].
+
+### Minikube
+
+I’m going to use [Minikube][23]. Minikube is a local Kubernetes cluster simulator. It’s not great at simulating multiple nodes, but for starting out and playing locally at no cost, it’s great. It uses a VM that can be fine-tuned if necessary using VirtualBox and the like.
+
+All the kube template files that I’ll be using are located here: [Kube files][24].
+
+NOTE: If, later on, you would like to play with scaling but notice that the replicas are always in `Pending` state, remember that Minikube employs a single node only. It might not allow multiple replicas on the same node, or it might have plainly run out of resources. You can check available resources with the following command:
+
+```
+kubectl get nodes -o yaml
+
+```
+
+### Building the containers
+
+Kubernetes supports most container runtimes out there. I’m going to use Docker. For all the services I’ve built, there is a Dockerfile included in the repository. I encourage you to study them; most of them are simple. For the Go services I’m using the recently introduced multi-stage build. The Go services are Alpine Linux based, the Face Recognition service is Python, and NSQ and MySQL use their own containers.
+
+### Context
+
+Kubernetes uses namespaces. If you don’t specify one, it will use the `default` namespace. I’m going to permanently set a context to avoid polluting the default namespace. You do that like this:
+
+```
+❯ kubectl config set-context kube-face-cluster --namespace=face
+Context "kube-face-cluster" created.
+
+```
+
+You also have to start using the context once it’s created, like so:
+
+```
+❯ kubectl config use-context kube-face-cluster
+Switched to context "kube-face-cluster".
+
+```
+
+After this, all `kubectl` commands will use the namespace `face`.
+
+### Deploying the Application
+
+Overview of Pods and Services:
+
+
+
+### MySQL
+
+The first Service I’m going to deploy is my database.
+
+I’m using the Kubernetes example located here, [Kube MySQL][25], which fits my needs. Note that this file uses a plain password for MYSQL_PASSWORD; I’m going to employ a vault instead, as described here: [Kubernetes Secrets][26].
+
+I’ve created a secret locally as described in that document using a secret yaml:
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+ name: kube-face-secret
+type: Opaque
+data:
+ mysql_password: base64codehere
+
+```
+
+The base64 code was created with the following command:
+
+```
+echo -n "ubersecurepassword" | base64
+
+```
+
+And this is what you’ll see in my deployment yaml file:
+
+```
+...
+- name: MYSQL_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: kube-face-secret
+ key: mysql_password
+...
+
+```
+
+One other thing worth mentioning: it uses a volume to persist the database. The volume definition is as follows:
+
+```
+...
+ volumeMounts:
+ - name: mysql-persistent-storage
+ mountPath: /var/lib/mysql
+...
+ volumes:
+ - name: mysql-persistent-storage
+ persistentVolumeClaim:
+ claimName: mysql-pv-claim
+...
+
+```
+
+`persistentVolumeClaim` is the key here. It tells Kubernetes that this resource requires a persistent volume. How the volume is provided is abstracted away from the user. You can be sure that Kubernetes will provide a volume that will always be there, similar to Pods. To read up on the details, check out this document: [Kubernetes Persistent Volumes][27].
+
+Deploying the mysql Service is done with the following command:
+
+```
+kubectl apply -f mysql.yaml
+
+```
+
+`apply` vs `create`: in short, `apply` is considered a declarative object configuration command, while `create` is imperative. What that means for now is that `create` is usually for a one-off task, like running something or creating a deployment. With `apply`, the user doesn’t define the action to be taken; that is decided by Kubernetes based on the current status of the cluster. Thus, when there is no service called `mysql` and I call `apply -f mysql.yaml`, it creates the service. When run again, Kubernetes does nothing. But if I ran `create` again, it would throw an error saying the service has already been created.
+
+For more information, check out the following docs: [Kubernetes Object Management][28], [Imperative Configuration][29], [Declarative Configuration][30].
+
+To see progress information, run:
+
+```
+# Describes the whole process
+kubectl describe deployment mysql
+# Shows only the pod
+kubectl get pods -l app=mysql
+
+```
+
+Output should be similar to this:
+
+```
+...
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+OldReplicaSets:
+NewReplicaSet: mysql-55cd6b9f47 (1/1 replicas created)
+...
+
+```
+
+Or in case of `get pods`:
+
+```
+NAME READY STATUS RESTARTS AGE
+mysql-78dbbd9c49-k6sdv 1/1 Running 0 18s
+
+```
+
+To test the instance, run the following snippet:
+
+```
+kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -pyourpasswordhere
+
+```
+
+GOTCHA: If you change the password now, it’s not enough to re-apply your yaml file to update the container. Since the DB is persisted, the password will not be changed. You have to delete the whole deployment with `kubectl delete -f mysql.yaml`.
+
+You should see the following when running `show databases`:
+
+```
+If you don't see a command prompt, try pressing enter.
+mysql>
+mysql>
+mysql> show databases;
++--------------------+
+| Database |
++--------------------+
+| information_schema |
+| kube |
+| mysql |
+| performance_schema |
++--------------------+
+4 rows in set (0.00 sec)
+
+mysql> exit
+Bye
+
+```
+
+You’ll notice that I also mounted a file, located here: [Database Setup SQL][31], into the container. The MySQL container automatically executes SQL files mounted into its entrypoint directory. That file bootstraps some data and the schema I’m going to use.
+
+The volume definition is as follows:
+
+```
+ volumeMounts:
+ - name: mysql-persistent-storage
+ mountPath: /var/lib/mysql
+ - name: bootstrap-script
+ mountPath: /docker-entrypoint-initdb.d/database_setup.sql
+volumes:
+- name: mysql-persistent-storage
+ persistentVolumeClaim:
+ claimName: mysql-pv-claim
+- name: bootstrap-script
+ hostPath:
+ path: /Users/hannibal/golang/src/github.com/Skarlso/kube-cluster-sample/database_setup.sql
+ type: File
+
+```
+
+To check whether the bootstrap script was successful, run this:
+
+```
+~/golang/src/github.com/Skarlso/kube-cluster-sample/kube_files master*
+❯ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -uroot -pyourpasswordhere kube
+If you don't see a command prompt, try pressing enter.
+
+mysql> show tables;
++----------------+
+| Tables_in_kube |
++----------------+
+| images |
+| person |
+| person_images |
++----------------+
+3 rows in set (0.00 sec)
+
+mysql>
+
+```
+
+This concludes the database service setup. Logs for this service can be viewed with the following command:
+
+```
+kubectl logs deployment/mysql -f
+
+```
+
+### NSQ Lookup
+
+NSQ Lookup will run as an internal service. It doesn’t need access from the outside, so I’m setting `clusterIP: None`, which tells Kubernetes that this is a headless service. This means it won’t be load-balanced and it won’t have a single service IP. Its DNS will be based on service selectors.
+
+Our NSQ Lookup selector is:
+
+```
+ selector:
+ matchLabels:
+ app: nsqlookup
+
+```
+
+Thus, the internal DNS will look like this: `nsqlookup.default.svc.cluster.local`.
+
+Headless services are described in detail here: [Headless Service][32].
+
+Basically, it’s the same as MySQL, just with slight modifications. As stated earlier, I’m using NSQ’s own Docker image, called `nsqio/nsq`. All NSQ commands are in there, so nsqd will also use this image, just with a different command. For nsqlookupd, the command is as follows:
+
+```
+command: ["/nsqlookupd"]
+args: ["--broadcast-address=nsqlookup.default.svc.cluster.local"]
+
+```
+
+What’s the `--broadcast-address` for, you might ask? By default, nsqlookupd uses the `hostname` as the broadcast address. That means when the consumer runs a callback, it would try to connect to something like `http://nsqlookup-234kf-asdf:4161/lookup?topic=images`, which of course won’t work. By setting the broadcast address to the internal DNS name, the callback becomes `http://nsqlookup.default.svc.cluster.local:4161/lookup?topic=images`, which works as expected.
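+
+To make the consumer side of this concrete, here is a hedged sketch using the go-nsq client library (the topic and channel names match the ones appearing in the logs later; the rest is illustrative):
+
+```
+package main
+
+import (
+    "log"
+
+    nsq "github.com/nsqio/go-nsq"
+)
+
+func main() {
+    consumer, err := nsq.NewConsumer("images", "ch", nsq.NewConfig())
+    if err != nil {
+        log.Fatal(err)
+    }
+    consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
+        log.Printf("got a message: %s", m.Body)
+        return nil
+    }))
+    // The lookup service replies with the nsqd broadcast addresses,
+    // which is why those must be resolvable cluster DNS names.
+    if err := consumer.ConnectToNSQLookupd("nsqlookup.default.svc.cluster.local:4161"); err != nil {
+        log.Fatal(err)
+    }
+    select {} // block forever
+}
+
+```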
+
+NSQ Lookup also requires two forwarded ports: one for broadcasting and one for the nsqd daemon callback. These are exposed in the Dockerfile and then utilized in the Kubernetes template, like this:
+
+In the container template:
+
+```
+ ports:
+ - containerPort: 4160
+ hostPort: 4160
+ - containerPort: 4161
+ hostPort: 4161
+
+```
+
+In the service template:
+
+```
+spec:
+ ports:
+ - name: tcp
+ protocol: TCP
+ port: 4160
+ targetPort: 4160
+ - name: http
+ protocol: TCP
+ port: 4161
+ targetPort: 4161
+
+```
+
+The names are required by Kubernetes to distinguish between the ports.
+
+To create this service, I’m using the following command as before:
+
+```
+kubectl apply -f nsqlookup.yaml
+
+```
+
+This concludes nsqlookupd. Two of the major players are in the sack.
+
+### Receiver
+
+This is a more complex one. The receiver will do three things.
+
+* It will create some deployments
+
+* It will create the nsq daemon
+
+* It will be public facing
+
+#### Deployments
+
+The first deployment it creates is its own. The receiver container is `skarlso/kube-receiver-alpine`.
+
+#### NSQ Daemon
+
+The receiver starts an NSQ daemon. As said earlier, the receiver runs an NSQ alongside itself, so that talking to it can happen locally rather than over the network. By having the receiver start it, the daemon ends up on the same node as the receiver.
+
+NSQ daemon also needs some adjustments and parameters.
+
+```
+ ports:
+ - containerPort: 4150
+ hostPort: 4150
+ - containerPort: 4151
+ hostPort: 4151
+ env:
+ - name: NSQLOOKUP_ADDRESS
+ value: nsqlookup.default.svc.cluster.local
+ - name: NSQ_BROADCAST_ADDRESS
+ value: nsqd.default.svc.cluster.local
+ command: ["/nsqd"]
+ args: ["--lookupd-tcp-address=$(NSQLOOKUP_ADDRESS):4160", "--broadcast-address=$(NSQ_BROADCAST_ADDRESS)"]
+
+```
+
+You can see that the lookupd TCP address and the broadcast address are set. The lookupd TCP address is the DNS name of the nsqlookupd service, and the broadcast address is necessary, just like with nsqlookupd, so that the callbacks work properly.
+
+#### Public facing
+
+Now, this is the first time I’m deploying a public facing service. There are two options. I could use a LoadBalancer, because this API will be under heavy load; if this were deployed anywhere in production, it should use one.
+
+I’m doing this locally, though, with one node, so something called a `NodePort` is enough. A `NodePort` exposes a service on each node’s IP at a static port. If not specified, it will assign a random port on the host between 30000-32767. But it can also be configured to be a specific port, using `nodePort` in the yaml. To reach this service, I will have to use `<NodeIP>:<NodePort>`. If more than one node is configured, a LoadBalancer can multiplex them to a single IP.
+
+For further information check out this document: [Publishing Services][33].
+
+Putting this all together, we’ll get a receiver-service for which the template is as follows:
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+ name: receiver-service
+spec:
+ ports:
+ - protocol: TCP
+ port: 8000
+ targetPort: 8000
+ selector:
+ app: receiver
+ type: NodePort
+
+```
+
+For a fixed nodePort of 8000, a `nodePort` definition must be provided as follows:
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+ name: receiver-service
+spec:
+  ports:
+  - protocol: TCP
+    port: 8000
+    targetPort: 8000
+    nodePort: 8000
+  selector:
+    app: receiver
+  type: NodePort
+
+```
+
+### Image processor
+
+The Image Processor is where I handle passing off images to be identified. It needs access to nsqlookupd, mysql, and the gRPC endpoint of the face recognition service deployed later. This is actually a boring service; in fact, it’s not a service at all. It doesn’t expose anything, and thus it’s the first deployment-only component. For brevity, here is the whole template:
+
+```
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: image-processor-deployment
+spec:
+ selector:
+ matchLabels:
+ app: image-processor
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: image-processor
+ spec:
+ containers:
+ - name: image-processor
+ image: skarlso/kube-processor-alpine:latest
+ env:
+ - name: MYSQL_CONNECTION
+ value: "mysql.default.svc.cluster.local"
+ - name: MYSQL_USERPASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: kube-face-secret
+ key: mysql_userpassword
+ - name: MYSQL_PORT
+ # TIL: If this is 3306 without " kubectl throws an error.
+ value: "3306"
+ - name: MYSQL_DBNAME
+ value: kube
+ - name: NSQ_LOOKUP_ADDRESS
+ value: "nsqlookup.default.svc.cluster.local:4161"
+ - name: GRPC_ADDRESS
+ value: "face-recog.default.svc.cluster.local:50051"
+
+```
+
+The only interesting points in this file are the multitude of environment properties used to configure the application. Note the nsqlookupd address and the gRPC address.
+
+To create this deployment, run:
+
+```
+kubectl apply -f image_processor.yaml
+
+```
+
+### Face Recognition
+
+The face recognition service does have a Service definition. It’s a simple one, needed only by image-processor. Its template is as follows:
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+ name: face-recog
+spec:
+ ports:
+ - protocol: TCP
+ port: 50051
+ targetPort: 50051
+ selector:
+ app: face-recog
+ clusterIP: None
+
+```
+
+The more interesting part is that it requires two volumes: `known_people` and `unknown_people`. Can you guess what they will contain? Yes, images. The `known_people` volume contains all the images associated with the known people in the database. The `unknown_people` volume will contain all the new images. And that’s the path we need to use when sending images from the receiver; that is, wherever the mount points to, which in my case is `/unknown_people`. Basically, the path needs to be one that the face recognition service can access.
+
+Now, with Kubernetes and Docker this is easy. It could be a mounted S3 bucket, some kind of NFS, or a local mount from host to guest. The possibilities are endless (around a dozen or so). I’m going to use a local mount for the sake of simplicity.
+
+Mounting a volume has two parts. First, the Dockerfile has to specify volumes:
+
+```
+VOLUME [ "/unknown_people", "/known_people" ]
+
+```
+
+Second, the Kubernetes template as seen earlier with MySQL; the difference being `hostPath` instead of a claimed volume:
+
+```
+ volumeMounts:
+ - name: known-people-storage
+ mountPath: /known_people
+ - name: unknown-people-storage
+ mountPath: /unknown_people
+ volumes:
+ - name: known-people-storage
+ hostPath:
+ path: /Users/hannibal/Temp/known_people
+ type: Directory
+ - name: unknown-people-storage
+ hostPath:
+ path: /Users/hannibal/Temp/
+ type: Directory
+
+```
+
+We also have to set the `known_people` folder config setting for the face recognition service. This is done via an environment property, of course:
+
+```
+ env:
+ - name: KNOWN_PEOPLE
+ value: "/known_people"
+
+```
+
+Then the Python code will look up images like this:
+
+```
+ known_people = os.getenv('KNOWN_PEOPLE', 'known_people')
+ print("Known people images location is: %s" % known_people)
+ images = self.image_files_in_folder(known_people)
+
+```
+
+Where `image_files_in_folder` is:
+
+```
+ def image_files_in_folder(self, folder):
+ return [os.path.join(folder, f) for f in os.listdir(folder) if re.match(r'.*\.(jpg|jpeg|png)', f, flags=re.I)]
+
+```
+
+Neat.
+
+Now, if the receiver receives a request (and sends it off further down the line) similar to the one below…
+
+```
+curl -d '{"path":"/unknown_people/unknown220.jpg"}' http://192.168.99.100:30251/image/post
+
+```
+
+…it will look for an image called unknown220.jpg under `/unknown_people`, locate an image in the `known_people` folder that corresponds to the person in the unknown image, and return the name of the image that matched.
+
+Looking at logs you should see something like this:
+
+```
+# Receiver
+❯ curl -d '{"path":"/unknown_people/unknown219.jpg"}' http://192.168.99.100:30251/image/post
+got path: {Path:/unknown_people/unknown219.jpg}
+image saved with id: 4
+image sent to nsq
+
+# Image Processor
+2018/03/26 18:11:21 INF 1 [images/ch] querying nsqlookupd http://nsqlookup.default.svc.cluster.local:4161/lookup?topic=images
+2018/03/26 18:11:59 Got a message: 4
+2018/03/26 18:11:59 Processing image id: 4
+2018/03/26 18:12:00 got person: Hannibal
+2018/03/26 18:12:00 updating record with person id
+2018/03/26 18:12:00 done
+
+```
+
+And that concludes all of the services that we need to deploy with Kubernetes to get this application to work.
+
+### Frontend
+
+Last but not least, there is a small web app which displays the information in the DB, for convenience. This is also a public facing service, with the same parameters as the receiver’s service.
+
+It looks like this:
+
+
+
+### Recap
+
+So what is the situation so far? I’ve deployed a bunch of services all over the place. A recap of the commands I used:
+
+```
+kubectl apply -f mysql.yaml
+kubectl apply -f nsqlookup.yaml
+kubectl apply -f receiver.yaml
+kubectl apply -f image_processor.yaml
+kubectl apply -f face_recognition.yaml
+kubectl apply -f frontend.yaml
+
+```
+
+These can be run in any order, because the application does not allocate connections on start, except for image_processor’s NSQ consumer, and that one retries.
+
+Querying Kubernetes for running pods with `kubectl get pods` should show something like this:
+
+```
+❯ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+face-recog-6bf449c6f-qg5tr 1/1 Running 0 1m
+image-processor-deployment-6467468c9d-cvx6m 1/1 Running 0 31s
+mysql-7d667c75f4-bwghw 1/1 Running 0 36s
+nsqd-584954c44c-299dz 1/1 Running 0 26s
+nsqlookup-7f5bdfcb87-jkdl7 1/1 Running 0 11s
+receiver-deployment-5cb4797598-sf5ds 1/1 Running 0 26s
+
+```
+
+Running `minikube service list`:
+
+```
+❯ minikube service list
+|-------------|----------------------|-----------------------------|
+| NAMESPACE | NAME | URL |
+|-------------|----------------------|-----------------------------|
+| default | face-recog | No node port |
+| default | kubernetes | No node port |
+| default | mysql | No node port |
+| default | nsqd | No node port |
+| default | nsqlookup | No node port |
+| default | receiver-service | http://192.168.99.100:30251 |
+| kube-system | kube-dns | No node port |
+| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
+|-------------|----------------------|-----------------------------|
+
+```
+
+### Rolling update
+
+What happens during a rolling update?
+
+
+
+As happens during software development, changes are requested or needed in some parts of the application. What happens to our cluster if I want to change one of its components without breaking the others, whilst maintaining backwards compatibility with no disruption to the user experience? Thankfully, Kubernetes can help with that.
+
+What I don’t like is that the API only handles one image at a time. There is no option for bulk upload.
+
+#### Code
+
+Right now, we have the following code segment dealing with a single image:
+
+```
+// PostImage handles a post of an image. Saves it to the database
+// and sends it to NSQ for further processing.
+func PostImage(w http.ResponseWriter, r *http.Request) {
+...
+}
+
+func main() {
+ router := mux.NewRouter()
+ router.HandleFunc("/image/post", PostImage).Methods("POST")
+ log.Fatal(http.ListenAndServe(":8000", router))
+}
+
+```
+
+We have two options. Add a new endpoint with `/images/post` and make the client use that, or modify the existing one.
+
+The new client code has the advantage that it could fall back to submitting the old way if the new endpoint isn’t available. The old client code, though, doesn’t have this advantage, so we can’t change the way our code works right now. Consider this: you have 90 servers, and you do a slow-paced rolling update, taking out servers one at a time to update them. If an update lasts around a minute, the whole rollout takes around one and a half hours to complete (not counting any parallel updates).
+
+During that time, some of your servers will run the new code and some the old. Calls are load balanced, so you have no control over which server is hit. If a client tries a call the new way but hits an old server, the call fails. The client could try a fallback to the old endpoint, but if you eliminated that endpoint in the new version, the fallback will only succeed if it, by chance, hits a server still running the old code (assuming no sticky sessions are set).
+
+Also, once all your servers are updated, an old client will not be able to use your service any longer at all.
+
+Now, you could argue that you don’t want to keep old versions of your code around forever, and that is true in some sense. That’s why we are going to modify the old code to simply call the new code with some slight augmentation. This way, the old code is not kept around. Once all clients have been migrated, it can simply be deleted without any problems.
+
+#### New Endpoint
+
+Let’s add a new route method:
+
+```
+...
+router.HandleFunc("/images/post", PostImages).Methods("POST")
+...
+
+```
+
+And update the old one to call the new one with a modified body, like this:
+
+```
+// PostImage handles a post of an image. Saves it to the database
+// and sends it to NSQ for further processing.
+func PostImage(w http.ResponseWriter, r *http.Request) {
+ var p Path
+ err := json.NewDecoder(r.Body).Decode(&p)
+ if err != nil {
+ fmt.Fprintf(w, "got error while decoding body: %s", err)
+ return
+ }
+ fmt.Fprintf(w, "got path: %+v\n", p)
+ var ps Paths
+ paths := make([]Path, 0)
+ paths = append(paths, p)
+ ps.Paths = paths
+ var pathsJSON bytes.Buffer
+ err = json.NewEncoder(&pathsJSON).Encode(ps)
+ if err != nil {
+ fmt.Fprintf(w, "failed to encode paths: %s", err)
+ return
+ }
+ r.Body = ioutil.NopCloser(&pathsJSON)
+ r.ContentLength = int64(pathsJSON.Len())
+ PostImages(w, r)
+}
+
+```
+
+Well, the naming could be better, but you should get the basic idea. I’m modifying the incoming single path by wrapping it into the new format and sending it over to the new endpoint handler. And that’s it! There are a few more modifications; to check them out, take a look at this PR: [Rolling Update Bulk Image Path PR][34].
+
+Now, we can call the receiver in two ways:
+
+```
+# Single Path:
+curl -d '{"path":"unknown4456.jpg"}' http://127.0.0.1:8000/image/post
+
+# Multiple Paths:
+curl -d '{"paths":[{"path":"unknown4456.jpg"}]}' http://127.0.0.1:8000/images/post
+
+```
+
+Here, the client is curl. Normally, if the client were a service, I would modify it so that when the new endpoint throws a 404, it tries the old one next.
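+
+Such a fallback could be sketched like this (hypothetical client code, not part of the repository):
+
+```
+package main
+
+import (
+    "bytes"
+    "log"
+    "net/http"
+)
+
+// postImage tries the new bulk endpoint first; if the server doesn't
+// know it yet (404), it retries the old single-image endpoint.
+func postImage(base string, singleBody, bulkBody []byte) error {
+    resp, err := http.Post(base+"/images/post", "application/json", bytes.NewReader(bulkBody))
+    if err != nil {
+        return err
+    }
+    resp.Body.Close()
+    if resp.StatusCode != http.StatusNotFound {
+        return nil // the new endpoint handled it
+    }
+    resp, err = http.Post(base+"/image/post", "application/json", bytes.NewReader(singleBody))
+    if err != nil {
+        return err
+    }
+    resp.Body.Close()
+    return nil
+}
+
+func main() {
+    err := postImage("http://127.0.0.1:8000",
+        []byte(`{"path":"unknown4456.jpg"}`),
+        []byte(`{"paths":[{"path":"unknown4456.jpg"}]}`))
+    if err != nil {
+        log.Fatal(err)
+    }
+}
+
+```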
+
+For brevity, I’m not modifying NSQ and the other components to handle bulk image processing; they will still receive the images one by one. I’ll leave that up to you as homework. ;)
+
+#### New Image
+
+To perform a rolling update, I must first create a new image of the receiver service, with a new tag denoting version v1.1:
+
+```
+docker build -t skarlso/kube-receiver-alpine:v1.1 .
+
+```
+
+Once this is complete, we can begin rolling out the change.
+
+#### Rolling update
+
+In Kubernetes, you can configure your rolling update in multiple ways.
+
+##### Manual Update
+
+If, say, I were using a container version called `v1.0` in my config file, then doing an update is simply a matter of calling:
+
+```
+kubectl rolling-update receiver --image=skarlso/kube-receiver-alpine:v1.1
+
+```
+
+If there is a problem during the rollout, we can always roll back:
+
+```
+kubectl rolling-update receiver --rollback
+
+```
+
+It will restore the previous version; no fuss, no muss.
+
+##### Apply a new configuration file
+
+The problem with by-hand updates is that they aren’t in source control.
+
+Consider this: something changed, a couple of servers were updated by hand, but nobody recorded it. A new person comes along, makes a change to the template, and applies the template to the cluster. All the servers are updated, and then suddenly there is a service outage.
+
+Long story short, the servers which were updated by hand get clobbered, because the template didn’t reflect what had been done manually. That is bad. Don’t do that.
+
+The recommended way is to change the template to use the new version, and then apply the template with the `apply` command.
+
+Kubernetes recommends that a Deployment handle the rollout via ReplicaSets. This means, however, that at least two replicas must be present for a rolling update; otherwise the update won’t work (unless `maxUnavailable` is set to 1). I’m increasing the replica count in the yaml and setting the new image version for the receiver container:
+
+```
+ replicas: 2
+...
+ spec:
+ containers:
+ - name: receiver
+ image: skarlso/kube-receiver-alpine:v1.1
+...
+
+```
+
+Looking at the progress you should see something like this:
+
+```
+❯ kubectl rollout status deployment/receiver-deployment
+Waiting for rollout to finish: 1 out of 2 new replicas have been updated...
+
+```
+
+You can add additional rollout configuration settings by specifying the `strategy` part of the template, like this:
+
+```
+ strategy:
+ type: RollingUpdate
+ rollingUpdate:
+ maxSurge: 1
+ maxUnavailable: 0
+
+```
+
+Additional information on rolling update can be found in these documents: [Deployment Rolling Update][35], [Updating a Deployment][36], [Manage Deployments][37], [Rolling Update using ReplicaController][38].
+
+NOTE FOR MINIKUBE USERS: Since we are doing this on a local machine with one node and a single replica of the application, we have to set `maxUnavailable` to `1`. Otherwise, Kubernetes won’t allow the update to happen and the new version will stay in `Pending` state, since we would be disallowing any moment at which no `receiver` container is present.
+
+### Scaling
+
+Scaling is dead easy with Kubernetes. Since it manages the whole cluster, you basically just put the desired number of replicas into the template.
+
+This has been a great post so far, but it’s getting too long. I’m planning to write a follow-up where I will truly scale things up on AWS with multiple nodes and replicas. Stay tuned.
+
+### Cleanup
+
+```
+kubectl delete deployments --all
+kubectl delete services --all
+
+```
+
+# Final Words
+
+And that is it, ladies and gentlemen. We wrote, deployed, updated, and scaled (well, not really yet) a distributed application with Kubernetes.
+
+If you have any questions, feel free to chat in the comments below; I’m happy to answer.
+
+I hope you enjoyed reading this. I know it’s quite long, and I was thinking of splitting it up, but a cohesive, one-page guide is sometimes useful and makes it easy to find something, save it for later reading, or even print it as a PDF.
+
+Thank you for reading, Gergely.
+
+--------------------------------------------------------------------------------
+
+via: https://skarlso.github.io/2018/03/15/kubernetes-distributed-application/
+
+作者:[hannibal ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://github.com/Skarlso
+[1]:https://skarlso.github.io/2018/03/15/kubernetes-distributed-application/#circuit-breaker
+[2]:https://skarlso.github.io//categories/go
+[3]:https://skarlso.github.io//categories/kubernetes
+[4]:https://skarlso.github.io//categories/facerecognition
+[5]:https://kubernetes.io/
+[6]:https://github.com/Skarlso/kube-cluster-sample
+[7]:https://github.com/Skarlso/kube-cluster-sample/tree/master/receiver
+[8]:http://nsq.io/
+[9]:https://github.com/Skarlso/kube-cluster-sample/tree/master/image_processor
+[10]:https://github.com/Skarlso/kube-cluster-sample/tree/master/face_recognition
+[11]:https://grpc.io/
+[12]:https://golang.org/pkg/sync/#Cond
+[13]:https://github.com/ageitgey/face_recognition
+[14]:https://gocv.io/
+[15]:http://shop.oreilly.com/product/0636920043874.do
+[16]:https://kubernetes.io/docs/
+[17]:https://kubernetes.io/docs/concepts/overview/components/
+[18]:https://kubernetes.io/docs/concepts/services-networking/service/
+[19]:https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
+[20]:https://kubernetes.io/docs/tutorials/kubernetes-basics/scale-interactive/
+[21]:https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
+[22]:https://github.com/kubernetes/kubernetes
+[23]:https://github.com/kubernetes/minikube/
+[24]:https://github.com/Skarlso/kube-cluster-sample/tree/master/kube_files
+[25]:https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#deploy-mysql
+[26]:https://kubernetes.io/docs/concepts/configuration/secret/
+[27]:https://kubernetes.io/docs/concepts/storage/persistent-volumes
+[28]:https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/
+[29]:https://kubernetes.io/docs/concepts/overview/object-management-kubectl/imperative-config/
+[30]:https://kubernetes.io/docs/concepts/overview/object-management-kubectl/declarative-config/
+[31]:https://github.com/Skarlso/kube-cluster-sample/blob/master/database_setup.sql
+[32]:https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
+[33]:https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
+[34]:https://github.com/Skarlso/kube-cluster-sample/pull/1
+[35]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment
+[36]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
+[37]:https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#updating-your-application-without-a-service-outage
+[38]:https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/
+[39]:https://skarlso.github.io/2018/03/15/kubernetes-distributed-application/
From cc585a36f4ec38375a9f22568cde46a2f660146d Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:17:10 +0800
Subject: [PATCH 021/220] =?UTF-8?q?20180421-11=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...iting Advanced Web Applications with Go.md | 695 ++++++++++++++++++
1 file changed, 695 insertions(+)
create mode 100644 sources/tech/20180419 Writing Advanced Web Applications with Go.md
diff --git a/sources/tech/20180419 Writing Advanced Web Applications with Go.md b/sources/tech/20180419 Writing Advanced Web Applications with Go.md
new file mode 100644
index 0000000000..ecc696b470
--- /dev/null
+++ b/sources/tech/20180419 Writing Advanced Web Applications with Go.md
@@ -0,0 +1,695 @@
+# Writing Advanced Web Applications with Go
+
+Web development in many programming environments often requires subscribing to some full framework ethos. With [Ruby][6], it’s usually [Rails][7] but could be [Sinatra][8] or something else. With [Python][9], it’s often [Django][10] or [Flask][11]. With [Go][12], it’s…
+
+If you spend some time in Go communities like the [Go mailing list][13] or the [Go subreddit][14], you’ll find Go newcomers frequently wondering what web framework is best to use. [There][15] [are][16] [quite][17] [a][18] [few][19] [Go][20] [frameworks][21]([and][22] [then][23] [some][24]), so which one is best seems like a reasonable question. Without fail, though, the strong recommendation of the Go community is to [avoid web frameworks entirely][25] and just stick with the standard library as long as possible. Here’s [an example from the Go mailing list][26] and here’s [one from the subreddit][27].
+
+It’s not bad advice! The Go standard library is very rich and flexible, much more so than many other languages, and designing a web application in Go with just the standard library is definitely a good choice.
+
+Even when these Go frameworks call themselves minimalistic, they can’t seem to avoid using a request handler interface different from the standard library’s default [http.Handler][28], and I think this is the biggest source of angst about why frameworks should be avoided. If everyone standardized on [http.Handler][29], then dang, all sorts of things would be interoperable!
+
+Before Go 1.7, it made some sense to give in and use a different interface for handling HTTP requests. But now that [http.Request][30] has the [Context][31] and [WithContext][32] methods, there truly isn’t a good reason any longer.
+
+I’ve done a fair share of web development in Go and I’m here to share with you both some standard library development patterns I’ve learned and some code I’ve found myself frequently needing. The code I’m sharing is not for use instead of the standard library, but to augment it.
+
+Overall, if this blog post feels like it’s predominantly plugging various little standalone libraries from my [Webhelp non-framework][33], that’s because it is. It’s okay, they’re little standalone libraries. Only use the ones you want!
+
+If you’re new to Go web development, I suggest reading the Go documentation’s [Writing Web Applications][34] article first.
+
+### Middleware
+
+A frequent design pattern for server-side web development is the concept of _middleware_, where some portion of the request handler wraps some other portion of the request handler and does some preprocessing or routing or something. This is a big component of how [Express][35] is organized on [Node][36], and Express middleware and [Negroni][37] middleware are almost line-for-line identical in design.
+
+Good use cases for middleware are things such as:
+
+* making sure a user is logged in, redirecting if not,
+
+* making sure the request came over HTTPS,
+
+* making sure a session is set up and loaded from a session database,
+
+* making sure we logged information before and after the request was handled,
+
+* making sure the request was routed to the right handler,
+
+* and so on.
+
+Composing your web app as essentially a chain of middleware handlers is a very powerful and flexible approach. It allows you to avoid a lot of [cross-cutting concerns][38] and have your code factored in very elegant and easy-to-maintain ways. By wrapping a set of handlers with middleware that ensures a user is logged in prior to actually attempting to handle the request, the individual handlers no longer need mistake-prone copy-and-pasted code to ensure the same thing.
+
+So, middleware is good. However, if Negroni or other frameworks are any indication, you’d think the standard library’s `http.Handler` isn’t up to the challenge. Negroni adds its own `negroni.Handler` just for the sake of making middleware easier. There’s no reason for this.
+
+Here is a full middleware implementation for ensuring a user is logged in, assuming a `GetUser(*http.Request)` function but otherwise just using the standard library:
+
+```
+func RequireUser(h http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ user, err := GetUser(req)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if user == nil {
+ http.Error(w, "unauthorized", http.StatusUnauthorized)
+ return
+ }
+ h.ServeHTTP(w, req)
+ })
+}
+```
+
+Here’s how it’s used (just wrap another handler!):
+
+```
+func main() {
+ http.ListenAndServe(":8080", RequireUser(http.HandlerFunc(myHandler)))
+}
+```
+
+Express, Negroni, and other frameworks expect this kind of signature for a middleware-supporting handler:
+
+```
+type Handler interface {
+ // don't do this!
+ ServeHTTP(rw http.ResponseWriter, req *http.Request, next http.HandlerFunc)
+}
+```
+
+There’s really no reason for adding the `next` argument - it reduces cross-library compatibility. So I say, don’t use `negroni.Handler` (or similar). Just use `http.Handler`!
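+
+And because everything stays `http.Handler`, middleware composes by plain function wrapping. A quick illustrative sketch, reusing `RequireUser` from above together with a made-up logging wrapper (`myHandler` is assumed, as before):
+
+```
+// Logging is a hypothetical example wrapper; any http.Handler works.
+func Logging(h http.Handler) http.Handler {
+	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+		start := time.Now()
+		h.ServeHTTP(w, req)
+		log.Printf("%s %s took %v", req.Method, req.URL.Path, time.Since(start))
+	})
+}
+
+func main() {
+	// Middleware chains are just nested function calls.
+	http.ListenAndServe(":8080", Logging(RequireUser(http.HandlerFunc(myHandler))))
+}
+```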
+
+### Composability
+
+Hopefully I’ve sold you on middleware as a good design philosophy.
+
+Probably the most commonly-used type of middleware is request routing, or muxing (seems like we should call this demuxing, but what do I know). Some frameworks are almost solely focused on request routing. [gorilla/mux][39] seems more popular than any other part of the [Gorilla][40] library. I think the reason for this is that even though the Go standard library is completely full featured and has a good [ServeMux][41] implementation, it doesn’t make the right thing the default.
+
+So! Let’s talk about request routing and consider the following problem. You, web developer extraordinaire, want to serve some HTML from your web server at `/hello/` but also want to serve some static assets from `/static/`. Let’s take a quick stab.
+
+```
+package main
+
+import (
+ "net/http"
+)
+
+func hello(w http.ResponseWriter, req *http.Request) {
+ w.Write([]byte("hello, world!"))
+}
+
+func main() {
+ mux := http.NewServeMux()
+ mux.Handle("/hello/", http.HandlerFunc(hello))
+ mux.Handle("/static/", http.FileServer(http.Dir("./static-assets")))
+ http.ListenAndServe(":8080", mux)
+}
+```
+
+If you visit `http://localhost:8080/hello/`, you’ll be rewarded with a friendly “hello, world!” message.
+
+If you visit `http://localhost:8080/static/` on the other hand (assuming you have a folder of static assets in `./static-assets`), you’ll be surprised and frustrated. This code tries to find the source content for the request `/static/my-file` at `./static-assets/static/my-file`! There’s an extra `/static` in there!
+
+Okay, so this is why `http.StripPrefix` exists. Let’s fix it.
+
+```
+ mux.Handle("/static/", http.StripPrefix("/static",
+ http.FileServer(http.Dir("./static-assets"))))
+```
+
+`mux.Handle` combined with `http.StripPrefix` is such a common pattern that I think it should be the default. Whenever a request router processes a certain number of URL elements, it should strip them off the request so the wrapped `http.Handler` doesn’t need to know its absolute URL and only needs to be concerned with its relative one.
+
+In [Russ Cox][42]’s recent [TiddlyWeb backend][43], I would argue that every time `strings.TrimPrefix` is needed to remove the full URL from the handler’s incoming path arguments, it is an unnecessary cross-cutting concern, unfortunately imposed by `http.ServeMux`. (An example is [line 201 in tiddly.go][44].)
+
+I’d much rather have the default `mux` behavior work more like a directory of registered elements that by default strips off the ancestor directory before handing the request to the next middleware handler. It’s much more composable. To this end, I’ve written a simple muxer that works in this fashion called [whmux.Dir][45]. It is essentially `http.ServeMux` and `http.StripPrefix` combined. Here’s the previous example reworked to use it:
+
+```
+package main
+
+import (
+ "net/http"
+
+ "gopkg.in/webhelp.v1/whmux"
+)
+
+func hello(w http.ResponseWriter, req *http.Request) {
+ w.Write([]byte("hello, world!"))
+}
+
+func main() {
+ mux := whmux.Dir{
+ "hello": http.HandlerFunc(hello),
+ "static": http.FileServer(http.Dir("./static-assets")),
+ }
+ http.ListenAndServe(":8080", mux)
+}
+```
+
+There are other useful mux implementations inside the [whmux][46] package that demultiplex on various aspects of the request path, request method, request host, or pull arguments out of the request and place them into the context, such as a [whmux.IntArg][47] or [whmux.StringArg][48]. This brings us to [contexts][49].
+
+### Contexts
+
+Request contexts are a recent addition, though the idea of [contexts has been around since mid-2014][50]. As of Go 1.7, they are part of the standard library ([“context”][51]), and they are available for older Go releases in the original location ([“golang.org/x/net/context”][52]).
+
+First, here’s the definition of the `context.Context` type that `(*http.Request).Context()` returns:
+
+```
+type Context interface {
+ Done() <-chan struct{}
+ Err() error
+ Deadline() (deadline time.Time, ok bool)
+
+ Value(key interface{}) interface{}
+}
+```
+
+Talking about `Done()`, `Err()`, and `Deadline()` would fill an entirely different blog post, so I’m going to ignore them at least for now and focus on `Value(interface{})`.
+
+As a motivating problem, let’s say that the `GetUser(*http.Request)` method we assumed earlier is expensive, and we only want to call it once per request. We certainly don’t want to call it once to check that a user is logged in, and then again when we actually need the `*User` value. With `(*http.Request).WithContext` and `context.WithValue`, we can pass the `*User` down to the next middleware precomputed!
+
+Here’s the new middleware:
+
+```
+type userKey int
+
+func RequireUser(h http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ user, err := GetUser(req)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if user == nil {
+ http.Error(w, "unauthorized", http.StatusUnauthorized)
+ return
+ }
+		ctx := req.Context()
+ ctx = context.WithValue(ctx, userKey(0), user)
+ h.ServeHTTP(w, req.WithContext(ctx))
+ })
+}
+```
+
+Now, handlers that are protected by this `RequireUser` handler can load the previously computed `*User` value like this:
+
+```
+if user, ok := req.Context().Value(userKey(0)).(*User); ok {
+ // there's a valid user!
+}
+```
+
+Contexts allow us to pass optional values to handlers down the chain in a way that is relatively type-safe and flexible. None of the above context logic requires anything outside of the standard library.
+
+### Aside about context keys
+
+There was a curious piece of code in the above example. At the top, we defined a `type userKey int`, and then always used it as `userKey(0)`.
+
+One of the possible problems with contexts is that the `Value()` interface lends itself to a global namespace where you can stomp on other context users and use conflicting key names. Above, we used `type userKey` because it’s an unexported type in your package. It will never compare equal (without a cast) to any other type, including `int`, in Go. This gives us a way to namespace keys to your package, even though the `Value()` method is still a sort of global namespace.
+
+Because the need for this is so common, the `webhelp` package defines a [GenSym()][53] helper that will create a brand new, never-before-seen, unique value for use as a context key.
+
+If we used [GenSym()][54], then `type userKey int` would become `var userKey = webhelp.GenSym()` and `userKey(0)` would simply become `userKey`.
+
+### Back to whmux.StringArg
+
+Armed with this new context behavior, we can now present a `whmux.StringArg` example:
+
+```
+package main
+
+import (
+ "fmt"
+ "net/http"
+
+ "gopkg.in/webhelp.v1/whmux"
+)
+
+var (
+ pageName = whmux.NewStringArg()
+)
+
+func page(w http.ResponseWriter, req *http.Request) {
+ name := pageName.Get(req.Context())
+
+ fmt.Fprintf(w, "Welcome to %s", name)
+}
+
+func main() {
+ // pageName.Shift pulls the next /-delimited string out of the request's
+ // URL.Path and puts it into the context instead.
+ pageHandler := pageName.Shift(http.HandlerFunc(page))
+
+ http.ListenAndServe(":8080", whmux.Dir{
+ "wiki": pageHandler,
+ })
+}
+```
+
+### Pre-Go-1.7 support
+
+Contexts let you do some pretty cool things. But let’s say you’re stuck with something before Go 1.7 (for instance, App Engine is currently Go 1.6).
+
+That’s okay! I’ve backported all of the neat new context features to Go 1.6 and earlier in a forwards compatible way!
+
+With the [whcompat][55] package, `req.Context()` becomes `whcompat.Context(req)`, and `req.WithContext(ctx)` becomes `whcompat.WithContext(req, ctx)`. The `whcompat` versions work with all releases of Go. Yay!
+
+There’s a bit of unpleasantness behind the scenes to make this happen. Specifically, for pre-1.7 builds, a global map indexed by `req.URL` is kept, and a finalizer is installed on `req` to clean up. So don’t change what `req.URL` points to and this will work fine. In practice it’s not a problem.
+
+`whcompat` adds additional backwards-compatibility helpers. In Go 1.7 and on, the context’s `Done()` channel is closed (and `Err()` is set), whenever the request is done processing. If you want this behavior in Go 1.6 and earlier, just use the [whcompat.DoneNotify][56] middleware.
+
+In Go 1.8 and on, the context’s `Done()` channel is closed when the client goes away, even if the request hasn’t completed. If you want this behavior in Go 1.7 and earlier, just use the [whcompat.CloseNotify][57] middleware, though beware that it costs an extra goroutine.
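+
+Here’s a sketch of how these compose, assuming (as with the other middleware in this post) that both helpers simply take and return an `http.Handler`:
+
+```
+var handler http.Handler = http.HandlerFunc(myHandler)
+// close the context's Done() channel when the request finishes (Go 1.6 and earlier)
+handler = whcompat.DoneNotify(handler)
+// close the context's Done() channel when the client goes away (Go 1.7 and earlier),
+// at the cost of an extra goroutine
+handler = whcompat.CloseNotify(handler)
+http.ListenAndServe(":8080", handler)
+```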
+
+### Error handling
+
+How you handle errors can be another cross-cutting concern, but with good application of context and middleware, it too can be beautifully cleaned up so that the responsibilities lie in the correct place.
+
+Problem statement: your `RequireUser` middleware needs to handle an authentication error differently between your HTML endpoints and your JSON API endpoints. You want to use `RequireUser` for both types of endpoints, but with your HTML endpoints you want to return a user-friendly error page, and with your JSON API endpoints you want to return an appropriate JSON error state.
+
+In my opinion, the right thing to do is to have contextual error handlers, and luckily, we have a context for contextual information!
+
+First, we need an error handler interface.
+
+```
+type ErrHandler interface {
+ HandleError(w http.ResponseWriter, req *http.Request, err error)
+}
+```
+
+Next, let’s make a middleware that registers the error handler in the context:
+
+```
+var errHandler = webhelp.GenSym() // see the aside about context keys
+
+func HandleErrWith(eh ErrHandler, h http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ ctx := context.WithValue(whcompat.Context(req), errHandler, eh)
+ h.ServeHTTP(w, whcompat.WithContext(req, ctx))
+ })
+}
+```
+
+Last, let’s make a function that will use the registered error handler for errors:
+
+```
+func HandleErr(w http.ResponseWriter, req *http.Request, err error) {
+ if handler, ok := whcompat.Context(req).Value(errHandler).(ErrHandler); ok {
+ handler.HandleError(w, req, err)
+ return
+ }
+ log.Printf("error: %v", err)
+ http.Error(w, "internal server error", http.StatusInternalServerError)
+}
+```
+
+Now, as long as everything uses `HandleErr` to handle errors, our JSON API can handle errors with JSON responses, and our HTML endpoints can handle errors with HTML responses.
+
+Of course, the [wherr][58] package implements this all for you, and the [whjson][59] package even implements a friendly JSON API error handler.
+
+Here’s how you might use it:
+
+```
+var userKey = webhelp.GenSym()
+
+func RequireUser(h http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ user, err := GetUser(req)
+ if err != nil {
+ wherr.Handle(w, req, wherr.InternalServerError.New("failed to get user"))
+ return
+ }
+ if user == nil {
+ wherr.Handle(w, req, wherr.Unauthorized.New("no user found"))
+ return
+ }
+		ctx := req.Context()
+ ctx = context.WithValue(ctx, userKey, user)
+ h.ServeHTTP(w, req.WithContext(ctx))
+ })
+}
+
+func userpage(w http.ResponseWriter, req *http.Request) {
+ user := req.Context().Value(userKey).(*User)
+ w.Header().Set("Content-Type", "text/html")
+ userpageTmpl.Execute(w, user)
+}
+
+func username(w http.ResponseWriter, req *http.Request) {
+ user := req.Context().Value(userKey).(*User)
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(map[string]interface{}{"user": user})
+}
+
+func main() {
+ http.ListenAndServe(":8080", whmux.Dir{
+ "api": wherr.HandleWith(whjson.ErrHandler,
+ RequireUser(whmux.Dir{
+ "username": http.HandlerFunc(username),
+ })),
+ "user": RequireUser(http.HandlerFunc(userpage)),
+ })
+}
+```
+
+### Aside about the spacemonkeygo/errors package
+
+The default [wherr.Handle][60] implementation understands all of the [error classes defined in the wherr top level package][61].
+
+These error classes are implemented using the [spacemonkeygo/errors][62] library and the [spacemonkeygo/errors/errhttp][63] extensions. You don’t have to use this library or these errors, but the benefit is that your error instances can be extended to include HTTP status code messages and information, which once again, provides for a nice elimination of cross-cutting concerns in your error handling logic.
+
+See the [spacemonkeygo/errors][64] package for more details.
+
+ _Update 2018-04-19: After a few years of use, my friend condensed some lessons we learned and the best parts of `spacemonkeygo/errors` into a new, more concise, better library, over at [github.com/zeebo/errs][1]. Consider using that instead!_
+
+### Sessions
+
+Go’s standard library has great support for cookies, but cookies by themselves aren’t usually what a developer thinks of when she thinks about sessions. Cookies are unencrypted, unauthenticated, and readable by the user, and perhaps you don’t want that with your session data.
+
+Further, sessions can be stored in cookies, but could also be stored in a database to provide features like session revocation and querying. There’s lots of potential details about the implementation of sessions.
+
+Request handlers, however, probably don’t care too much about the implementation details of the session. Request handlers usually just want a bucket of keys and values they can store safely and securely.
+
+The [whsess][65] package implements middleware for registering an arbitrary session store (a default cookie-based session store is provided), and implements helpers for retrieving and saving new values into the session.
+
+The default cookie-based session store implements encryption and authentication via the excellent [nacl/secretbox][66] package.
+
+Usage is like this:
+
+```
+func handler(w http.ResponseWriter, req *http.Request) {
+ ctx := whcompat.Context(req)
+ sess, err := whsess.Load(ctx, "namespace")
+ if err != nil {
+ wherr.Handle(w, req, err)
+ return
+ }
+ if loggedIn, _ := sess.Values["logged_in"].(bool); loggedIn {
+ views, _ := sess.Values["views"].(int64)
+ sess.Values["views"] = views + 1
+ sess.Save(w)
+ }
+}
+
+func main() {
+ http.ListenAndServe(":8080", whsess.HandlerWithStore(
+ whsess.NewCookieStore(secret), http.HandlerFunc(handler)))
+}
+```
+
+### Logging
+
+The Go standard library by default doesn’t log incoming requests, outgoing responses, or even just what port the HTTP server is listening on.
+
+The [whlog][67] package implements all three. The [whlog.LogRequests][68] middleware will log requests as they start. The [whlog.LogResponses][69] middleware will log requests as they end, along with status code and timing information. [whlog.ListenAndServe][70] will log the address the server ultimately listens on (if you specify “:0” as your address, a port will be randomly chosen, and [whlog.ListenAndServe][71] will log it).
+
+[whlog.LogResponses][72] deserves special mention for how it does what it does. It uses the [whmon][73] package to instrument the outgoing `http.ResponseWriter` to keep track of response information.
+
+Usage is like this:
+
+```
+func main() {
+ whlog.ListenAndServe(":8080", whlog.LogResponses(whlog.Default, handler))
+}
+```
+
+### App engine logging
+
+App engine logging is unconventional crazytown. The standard library logger doesn’t work by default on App Engine, because App Engine logs _require_ the request context. This is unfortunate for libraries that don’t necessarily run on App Engine all the time, as their logging information doesn’t make it to the App Engine request-specific logger.
+
+Unbelievably, this is fixable with [whgls][74], which uses my terrible, terrible (but recently improved) [Goroutine-local storage library][75] to store the request context on the current stack, register a new log output, and fix logging so standard library logging works with App Engine again.
+
+### Template handling
+
+Go’s standard library [html/template][76] package is excellent, but you’ll be unsurprised to find there are a few tasks I do with it so commonly that I’ve written additional support code.
+
+The [whtmpl][77] package really does two things. First, it provides a number of useful helper methods for use within templates, and second, it takes some friction out of managing a large number of templates.
+
+When writing templates, one thing you can do is call out to other registered templates for small values. A good example might be some sort of list element. You can have a template that renders the list element, and then your template that renders your list can use the list element template in turn.
+
+Use of another template within a template might look like this:
+
+```
+  {{ range .List }}
+    {{ template "list_element" . }}
+  {{ end }}
+```
+
+You’re now rendering the `list_element` template with the list element from `.List`. But what if you want to also pass the current user `.User`? Unfortunately, you can only pass one argument from one template to another. If you have two arguments you want to pass to another template, with the standard library, you’re out of luck.
+
+The [whtmpl][78] package adds three helper functions to aid you here, `makepair`, `makemap`, and `makeslice` (more docs under the [whtmpl.Collection][79] type). `makepair` is the simplest. It takes two arguments and constructs a [whtmpl.Pair][80]. Fixing our example above would look like this now:
+
+```
+  {{ $user := .User }}
+  {{ range .List }}
+    {{ template "list_element" (makepair . $user) }}
+  {{ end }}
+```
+
+The second thing [whtmpl][81] does is make defining lots of templates easy, by (optionally) automatically naming templates after the file each template is defined in.
+
+For example, say you have three files.
+
+Here’s `pkg.go`:
+
+```
+package views
+
+import "gopkg.in/webhelp.v1/whtmpl"
+
+var Templates = whtmpl.NewCollection()
+```
+
+Here’s `landing.go`:
+
+```
+package views
+
+var _ = Templates.MustParse(`{{ template "header" . }}
+
+ Landing!
+`)
+```
+
+And here’s `header.go`:
+
+```
+package views
+
+var _ = Templates.MustParse(`My website!`)
+```
+
+Now, you can import your new `views` package and render the `landing` template this easily:
+
+```
+func handler(w http.ResponseWriter, req *http.Request) {
+ views.Templates.Render(w, req, "landing", map[string]interface{}{})
+}
+```
+
+### User authentication
+
+I’ve written two Webhelp-style authentication libraries that I end up using frequently.
+
+The first is an OAuth2 library, [whoauth2][82]. I’ve written up [an example application that authenticates with Google, Facebook, and Github][83].
+
+The second, [whgoth][84], is a wrapper around [markbates/goth][85]. My portion isn’t quite complete yet (some fixes are still necessary for optional App Engine support), but will support more non-OAuth2 authentication sources (like Twitter) when it is done.
+
+### Route listing
+
+Surprise! If you’ve used [webhelp][86] based handlers and middleware for your whole app, you automatically get route listing for free, via the [whroute][87] package.
+
+My web serving code’s `main` method often has a form like this:
+
+```
+switch flag.Arg(0) {
+case "serve":
+ panic(whlog.ListenAndServe(*listenAddr, routes))
+case "routes":
+ whroute.PrintRoutes(os.Stdout, routes)
+default:
+	fmt.Printf("Usage: %s <serve|routes>\n", os.Args[0])
+}
+```
+
+Here’s some example output:
+
+```
+GET /auth/_cb/
+GET /auth/login/
+GET /auth/logout/
+GET /
+GET /account/apikeys/
+POST /account/apikeys/
+GET /project//
+GET /project//control//
+POST /project//control//sample/
+GET /project//control/
+ Redirect: f(req)
+POST /project//control/
+POST /project//control_named//sample/
+GET /project//control_named/
+ Redirect: f(req)
+GET /project//sample//
+GET /project//sample//similar[/<*>]
+GET /project//sample/
+ Redirect: f(req)
+POST /project//search/
+GET /project/
+ Redirect: /
+POST /project/
+
+```
+
+### Other little things
+
+[webhelp][88] has a number of other subpackages:
+
+* [whparse][2] assists in parsing optional request arguments.
+
+* [whredir][3] provides some handlers and helper methods for doing redirects in various cases.
+
+* [whcache][4] creates request-specific mutable storage for caching various computations and database loaded data. Mutability helps helper functions that aren’t used as middleware share data.
+
+* [whfatal][5] uses panics to simplify early request handling termination. Probably avoid this package unless you want to anger other Go developers.
+
+### Summary
+
+Designing your web project as a collection of composable middlewares goes quite a long way to simplify your code design, eliminate cross-cutting concerns, and create a more flexible development environment. Use my [webhelp][89] package if it helps you.
+
+Or don’t! Whatever! It’s still a free country last I checked.
+
+### Update
+
+Peter Kieltyka points me to his [Chi framework][90], which actually does seem to do the right things with respect to middleware, handlers, and contexts - certainly much more so than all the other frameworks I’ve seen. So, shoutout to Peter and the team at Pressly!
+
+--------------------------------------------------------------------------------
+作者简介:
+
+Utahn, Software Engineer, Terrier lover, Potatoes – these are concepts that make little sense in the far future when the robots completely take over. Former affiliations like University of Minnesota, University of Utah, Space Monkey, Google, Instructure, or Mozy will be meaningless as the last chunk of Earth is plunged into an interstellar fuel reactor to power an ever-growing orb of computronium.
+
+In the meantime, it’s probably best to not worry about it.
+
+Here’s a list of all the ways to find me on the internet. Let’s do all we can together before we can’t!
+
+AngelList: jtolds
+Beeminder: jtolds
+Facebook: jtolds
+Flickr: jtolds
+GitHub: jtolds
+Google+: +jtolds
+Instagram: @jtolds
+Keybase: jtolds
+Last.fm: jtolds
+LinkedIn: jtolds
+Soundcloud: jtolds
+Spotify: jtolds
+Twitter: @jtolds @jtsmusic
+Youtube: jtolds
+(I briefly worried about powerful nation-states cross-correlating all of my accounts if I listed them here, but then I saw how different all my usernames are and thought, “nah.”)
+
+I have a separate page detailing what I’m currently involved in.
+
+Drop me a line if you simply can’t hold so many lines and they don’t all fit in your hands. I have a line tote bag you might be interested in.
+
+--------------------
+
+via: https://www.jtolio.com/2017/01/writing-advanced-web-applications-with-go/
+
+作者:[Utahn ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.jtolio.com/about/
+[1]:https://github.com/zeebo/errs
+[2]:https://godoc.org/gopkg.in/webhelp.v1/whparse
+[3]:https://godoc.org/gopkg.in/webhelp.v1/whredir
+[4]:https://godoc.org/gopkg.in/webhelp.v1/whcache
+[5]:https://godoc.org/gopkg.in/webhelp.v1/whfatal
+[6]:https://www.ruby-lang.org/
+[7]:http://rubyonrails.org/
+[8]:http://www.sinatrarb.com/
+[9]:https://www.python.org/
+[10]:https://www.djangoproject.com/
+[11]:http://flask.pocoo.org/
+[12]:https://golang.org/
+[13]:https://groups.google.com/d/forum/golang-nuts
+[14]:https://www.reddit.com/r/golang/
+[15]:https://revel.github.io/
+[16]:https://gin-gonic.github.io/gin/
+[17]:http://iris-go.com/
+[18]:https://beego.me/
+[19]:https://go-macaron.com/
+[20]:https://github.com/go-martini/martini
+[21]:https://github.com/gocraft/web
+[22]:https://github.com/urfave/negroni
+[23]:https://godoc.org/goji.io
+[24]:https://echo.labstack.com/
+[25]:https://medium.com/code-zen/why-i-don-t-use-go-web-frameworks-1087e1facfa4
+[26]:https://groups.google.com/forum/#!topic/golang-nuts/R_lqsTTBh6I
+[27]:https://www.reddit.com/r/golang/comments/1yh6gm/new_to_go_trying_to_select_web_framework/
+[28]:https://golang.org/pkg/net/http/#Handler
+[29]:https://golang.org/pkg/net/http/#Handler
+[30]:https://golang.org/pkg/net/http/#Request
+[31]:https://golang.org/pkg/net/http/#Request.Context
+[32]:https://golang.org/pkg/net/http/#Request.WithContext
+[33]:https://godoc.org/gopkg.in/webhelp.v1
+[34]:https://golang.org/doc/articles/wiki/
+[35]:http://expressjs.com/
+[36]:https://nodejs.org/en/
+[37]:https://github.com/urfave/negroni
+[38]:https://en.wikipedia.org/wiki/Cross-cutting_concern
+[39]:https://github.com/gorilla/mux
+[40]:https://github.com/gorilla/
+[41]:https://golang.org/pkg/net/http/#ServeMux
+[42]:https://swtch.com/~rsc/
+[43]:https://github.com/rsc/tiddly
+[44]:https://github.com/rsc/tiddly/blob/8f9145ac183e374eb95d90a73be4d5f38534ec47/tiddly.go#L201
+[45]:https://godoc.org/gopkg.in/webhelp.v1/whmux#Dir
+[46]:https://godoc.org/gopkg.in/webhelp.v1/whmux
+[47]:https://godoc.org/gopkg.in/webhelp.v1/whmux#IntArg
+[48]:https://godoc.org/gopkg.in/webhelp.v1/whmux#StringArg
+[49]:https://golang.org/pkg/context/
+[50]:https://blog.golang.org/context
+[51]:https://golang.org/pkg/context/
+[52]:https://godoc.org/golang.org/x/net/context
+[53]:https://godoc.org/gopkg.in/webhelp.v1#GenSym
+[54]:https://godoc.org/gopkg.in/webhelp.v1#GenSym
+[55]:https://godoc.org/gopkg.in/webhelp.v1/whcompat
+[56]:https://godoc.org/gopkg.in/webhelp.v1/whcompat#DoneNotify
+[57]:https://godoc.org/gopkg.in/webhelp.v1/whcompat#CloseNotify
+[58]:https://godoc.org/gopkg.in/webhelp.v1/wherr
+[59]:https://godoc.org/gopkg.in/webhelp.v1/wherr
+[60]:https://godoc.org/gopkg.in/webhelp.v1/wherr#Handle
+[61]:https://godoc.org/gopkg.in/webhelp.v1/wherr#pkg-variables
+[62]:https://godoc.org/github.com/spacemonkeygo/errors
+[63]:https://godoc.org/github.com/spacemonkeygo/errors/errhttp
+[64]:https://godoc.org/github.com/spacemonkeygo/errors
+[65]:https://godoc.org/gopkg.in/webhelp.v1/whsess
+[66]:https://godoc.org/golang.org/x/crypto/nacl/secretbox
+[67]:https://godoc.org/gopkg.in/webhelp.v1/whlog
+[68]:https://godoc.org/gopkg.in/webhelp.v1/whlog#LogRequests
+[69]:https://godoc.org/gopkg.in/webhelp.v1/whlog#LogResponses
+[70]:https://godoc.org/gopkg.in/webhelp.v1/whlog#ListenAndServe
+[71]:https://godoc.org/gopkg.in/webhelp.v1/whlog#ListenAndServe
+[72]:https://godoc.org/gopkg.in/webhelp.v1/whlog#LogResponses
+[73]:https://godoc.org/gopkg.in/webhelp.v1/whmon
+[74]:https://godoc.org/gopkg.in/webhelp.v1/whgls
+[75]:https://godoc.org/github.com/jtolds/gls
+[76]:https://golang.org/pkg/html/template/
+[77]:https://godoc.org/gopkg.in/webhelp.v1/whtmpl
+[78]:https://godoc.org/gopkg.in/webhelp.v1/whtmpl
+[79]:https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Collection
+[80]:https://godoc.org/gopkg.in/webhelp.v1/whtmpl#Pair
+[81]:https://godoc.org/gopkg.in/webhelp.v1/whtmpl
+[82]:https://godoc.org/gopkg.in/go-webhelp/whoauth2.v1
+[83]:https://github.com/go-webhelp/whoauth2/blob/v1/examples/group/main.go
+[84]:https://godoc.org/gopkg.in/go-webhelp/whgoth.v1
+[85]:https://github.com/markbates/goth
+[86]:https://godoc.org/gopkg.in/webhelp.v1
+[87]:https://godoc.org/gopkg.in/webhelp.v1/whroute
+[88]:https://godoc.org/gopkg.in/webhelp.v1
+[89]:https://godoc.org/gopkg.in/webhelp.v1
+[90]:https://github.com/pressly/chi
From cf2522296bf32545573a26fb0fdff8a19ce6bb4b Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:27:03 +0800
Subject: [PATCH 022/220] =?UTF-8?q?20180421-12=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20140607 Five things that make Go fast.md | 493 ++++++++++++++++++
1 file changed, 493 insertions(+)
create mode 100644 sources/tech/20140607 Five things that make Go fast.md
diff --git a/sources/tech/20140607 Five things that make Go fast.md b/sources/tech/20140607 Five things that make Go fast.md
new file mode 100644
index 0000000000..88db93011c
--- /dev/null
+++ b/sources/tech/20140607 Five things that make Go fast.md
@@ -0,0 +1,493 @@
+Five things that make Go fast
+============================================================
+
+ _Anthony Starks has remixed my original Google Present based slides using his fantastic Deck presentation tool. You can check out his remix over on his blog, [mindchunk.blogspot.com.au/2014/06/remixing-with-deck][5]._
+
+* * *
+
+I was recently invited to give a talk at Gocon, a fantastic Go conference held semi-annually in Tokyo, Japan. [Gocon 2014][6] was an entirely community-run one day event combining training and an afternoon of presentations surrounding the theme of Go in production.
+
+The following is the text of my presentation. The original text was structured to force me to speak slowly and clearly, so I have taken the liberty of editing it slightly to be more readable.
+
+I want to thank [Bill Kennedy][7], Minux Ma, and especially [Josh Bleecher Snyder][8], for their assistance in preparing this talk.
+
+* * *
+
+Good afternoon.
+
+My name is David.
+
+I am delighted to be here at Gocon today. I have wanted to come to this conference for two years and I am very grateful to the organisers for extending me the opportunity to present to you today.
+
+ [][9]
+I want to begin my talk with a question.
+
+Why are people choosing to use Go?
+
+When people talk about their decision to learn Go, or use it in their product, they have a variety of answers, but there are always three that are at the top of their list.
+
+ [][10]
+These are the top three.
+
+The first, Concurrency.
+
+Go’s concurrency primitives are attractive to programmers who come from single threaded scripting languages like Nodejs, Ruby, or Python, or from languages like C++ or Java with their heavyweight threading model.
+
+Ease of deployment.
+
+We have heard today from experienced Gophers who appreciate the simplicity of deploying Go applications.
+
+ [][11]
+
+This leaves Performance.
+
+I believe an important reason why people choose to use Go is because it is _fast_.
+
+ [][12]
+
+For my talk today I want to discuss five features that contribute to Go’s performance.
+
+I will also share with you the details of how Go implements these features.
+
+ [][13]
+
+The first feature I want to talk about is Go’s efficient treatment and storage of values.
+
+ [][14]
+
+This is an example of a value in Go. When compiled, `gocon` consumes exactly four bytes of memory.
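+
+The code itself is on the slide linked above; a minimal reconstruction consistent with the description (a value occupying exactly four bytes) would be:
+
+```
+var gocon int32 = 2014 // an int32 occupies exactly 4 bytes of memory
+```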
+
+Let’s compare Go with some other languages.
+
+ [][15]
+
+Due to the overhead of the way Python represents variables, storing the same value using Python consumes six times more memory.
+
+This extra memory is used by Python to track type information, do reference counting, etc.
+
+Let’s look at another example:
+
+ [][16]
+
+Similar to Go, the Java `int` type consumes 4 bytes of memory to store this value.
+
+However, to use this value in a collection like a `List` or `Map`, the compiler must convert it into an `Integer` object.
+
+ [][17]
+
+So an integer in Java frequently looks more like this and consumes between 16 and 24 bytes of memory.
+
+Why is this important? Memory is cheap and plentiful, so why should this overhead matter?
+
+ [][18]
+
+This is a graph showing CPU clock speed vs memory bus speed.
+
+Notice how the gap between CPU clock speed and memory bus speed continues to widen.
+
+The difference between the two is effectively how much time the CPU spends waiting for memory.
+
+ [][19]
+
+Since the late 1960’s CPU designers have understood this problem.
+
+Their solution is a cache, an area of smaller, faster memory which is inserted between the CPU and main memory.
+
+ [][20]
+
+This is a `Location` type which holds the location of some object in three dimensional space. It is written in Go, so each `Location` consumes exactly 24 bytes of storage.
+
+We can use this type to construct an array type of 1,000 `Location`s, which consumes exactly 24,000 bytes of memory.
+
+Inside the array, the `Location` structures are stored sequentially, rather than as pointers to 1,000 Location structures stored randomly.
+
+This is important because now all 1,000 `Location` structures are in the cache in sequence, packed tightly together.
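+
+The slide’s code isn’t reproduced in this text; a sketch matching the description (24 bytes per value, stored sequentially) would be:
+
+```
+// three float64 fields: 3 × 8 bytes = 24 bytes per Location
+type Location struct {
+	X, Y, Z float64
+}
+
+// an array of 1,000 Locations: exactly 24,000 bytes, packed contiguously
+var locations [1000]Location
+```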
+
+ [][21]
+
+Go lets you create compact data structures, avoiding unnecessary indirection.
+
+Compact data structures utilise the cache better.
+
+Better cache utilisation leads to better performance.
+
+ [][22]
+
+Function calls are not free.
+
+ [][23]
+
+Three things happen when a function is called.
+
+A new stack frame is created, and the details of the caller recorded.
+
+Any registers which may be overwritten during the function call are saved to the stack.
+
+The processor computes the address of the function and executes a branch to that new address.
+
+ [][24]
+
+Because function calls are very common operations, CPU designers have worked hard to optimise this procedure, but they cannot eliminate the overhead.
+
+Depending on what the function does, this overhead may be trivial or significant.
+
+A solution to reducing function call overhead is an optimisation technique called Inlining.
+
+ [][25]
+
+The Go compiler inlines a function by treating the body of the function as if it were part of the caller.
+
+Inlining has a cost; it increases binary size.
+
+It only makes sense to inline when the overhead of calling a function is large relative to the work the function does, so only simple functions are candidates for inlining.
+
+Complicated functions are usually not dominated by the overhead of calling them and are therefore not inlined.
+
+ [][26]
+
+This example shows the function `Double` calling `util.Max`.
+
+To reduce the overhead of the call to `util.Max`, the compiler can inline `util.Max` into `Double`, resulting in something like this:
+
+ [][27]
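+
+Since the slide code isn’t reproduced in this text, here is a sketch of the before and after (only the names `Double` and `util.Max` come from the description; the bodies are assumed, and `util.Max` is written as a local `Max` so the sketch stays self-contained):
+
+```
+// a small function, a good candidate for inlining
+func Max(a, b int) int {
+	if a > b {
+		return a
+	}
+	return b
+}
+
+// before inlining: Double pays the overhead of a call to Max
+func Double(a, b int) int {
+	return Max(a, b) * 2
+}
+
+// after inlining, the compiler effectively rewrites Double into
+// something like this: no call remains, and behaviour is unchanged
+func doubleInlined(a, b int) int {
+	max := a
+	if b > a {
+		max = b
+	}
+	return max * 2
+}
+```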
+
+After inlining there is no longer a call to `util.Max`, but the behaviour of `Double` is unchanged.
+
+Inlining isn’t exclusive to Go. Almost every compiled or JITed language performs this optimisation. But how does inlining in Go work?
+
+The Go implementation is very simple. When a package is compiled, any small function that is suitable for inlining is marked and then compiled as usual.
+
+Then both the source of the function and the compiled version are stored.
+
+ [][28]
+
+This slide shows the contents of util.a. The source has been transformed a little to make it easier for the compiler to process quickly.
+
+When the compiler compiles `Double`, it sees that `util.Max` is inlinable, and the source of `util.Max` is available.
+
+Rather than insert a call to the compiled version of `util.Max`, it can substitute the source of the original function.
+
+Having the source of the function enables other optimizations.
+
+ [][29]
+
+In this example, although the function `Test` always returns false, `Expensive` cannot know that without executing it.
+
+When `Test` is inlined, we get something like this:
+
+ [][30]
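+
+A sketch of what the slides describe (only the names `Test` and `Expensive` come from the text; the bodies and the hypothetical `expensiveWork` helper are assumed):
+
+```
+func Test() bool {
+	return false
+}
+
+func Expensive() {
+	if Test() { // without inlining, the compiler cannot know this is always false
+		expensiveWork()
+	}
+}
+
+// with Test inlined, Expensive effectively becomes:
+func expensiveInlined() {
+	if false { // provably unreachable; the whole branch can be eliminated
+		expensiveWork()
+	}
+}
+```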
+
+The compiler now knows that the expensive code is unreachable.
+
+Not only does this save the cost of calling Test, it saves compiling or running any of the expensive code that is now unreachable.
+
+The Go compiler can automatically inline functions across files and even across packages. This includes code that calls inlinable functions from the standard library.
+
+ [][31]
+
+Mandatory garbage collection makes Go a simpler and safer language.
+
+This does not imply that garbage collection makes Go slow, or that garbage collection is the ultimate arbiter of the speed of your program.
+
+What it does mean is that memory allocated on the heap comes at a cost. It is a debt that costs CPU time every time the GC runs, until that memory is freed.
+
+ [][32]
+
+There is however another place to allocate memory, and that is the stack.
+
+Unlike C, which forces you to choose if a value will be stored on the heap, via `malloc`, or on the stack, by declaring it inside the scope of the function, Go implements an optimisation called _escape analysis_.
+
+ [][33]
+
+Escape analysis determines whether any references to a value escape the function in which the value is declared.
+
+If no references escape, the value may be safely stored on the stack.
+
+Values stored on the stack do not need to be allocated or freed.
+
+Let’s look at some examples.
+
+ [][34]
+
+`Sum` adds the numbers between 1 and 100 and returns the result. This is a rather unusual way to do this, but it illustrates how Escape Analysis works.
+
+Because the numbers slice is only referenced inside `Sum`, the compiler will arrange to store the 100 integers for that slice on the stack, rather than the heap.
+
+There is no need to garbage collect `numbers`; it is automatically freed when `Sum` returns.
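+
+A reconstruction of the `Sum` the slides describe (the exact body is assumed):
+
+```
+func Sum() int {
+	numbers := make([]int, 100) // only referenced inside Sum,
+	for i := range numbers {    // so the backing array can live on the stack
+		numbers[i] = i + 1
+	}
+	sum := 0
+	for _, n := range numbers {
+		sum += n
+	}
+	return sum
+}
+```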
+
+ [][35]
+
+This second example is also a little contrived. In `CenterCursor` we create a new `Cursor` and store a pointer to it in `c`.
+
+Then we pass `c` to the `Center()` function which moves the `Cursor` to the center of the screen.
+
+Then finally we print the X and Y locations of that `Cursor`.
+
+Even though `c` was allocated with the `new` function, it will not be stored on the heap, because no reference to `c` escapes the `CenterCursor` function.
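+
+In code, this might look like the following sketch (the names come from the description above; the `Cursor` fields and screen dimensions are assumed):
+
+```
+import "fmt"
+
+type Cursor struct {
+	X, Y int
+}
+
+func Center(c *Cursor) {
+	c.X = 500 // assumed half the screen width
+	c.Y = 300 // assumed half the screen height
+}
+
+func CenterCursor() {
+	c := new(Cursor) // allocated with new, yet not heap-allocated:
+	Center(c)        // no reference to c escapes CenterCursor
+	fmt.Println(c.X, c.Y)
+}
+```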
+
+ [][36]
+
+Go’s optimisations are always enabled by default. You can see the compiler’s escape analysis and inlining decisions with the `-gcflags=-m` switch.
+
+Because escape analysis is performed at compile time, not run time, stack allocation will always be faster than heap allocation, no matter how efficient your garbage collector is.
+
+I will talk more about the stack in the remaining sections of this talk.
+
+ [][37]
+
+Go has goroutines. These are the foundations for concurrency in Go.
+
+I want to step back for a moment and explore the history that leads us to goroutines.
+
+In the beginning computers ran one process at a time. Then in the 1960s the idea of multiprocessing, or time sharing, became popular.
+
+In a time-sharing system the operating system must constantly switch the attention of the CPU between these processes by recording the state of the current process, then restoring the state of another.
+
+This is called _process switching_.
+
+ [][38]
+
+There are three main costs of a process switch.
+
+First, the kernel needs to store the contents of all the CPU registers for that process, then restore the values for another process.
+
+The kernel also needs to flush the CPU’s mappings from virtual memory to physical memory as these are only valid for the current process.
+
+Finally there is the cost of the operating system context switch, and the overhead of the scheduler function to choose the next process to occupy the CPU.
+
+ [][39]
+
+There are a surprising number of registers in a modern processor. I have difficulty fitting them on one slide, which should give you a clue how much time it takes to save and restore them.
+
+Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use.
+
+ [][40]
+
+This led to the development of threads, which are conceptually the same as processes, but share the same memory space.
+
+As threads share address space, they are lighter than processes, so they are faster to create and faster to switch between.
+
+ [][41]
+
+Goroutines take the idea of threads a step further.
+
+Goroutines are cooperatively scheduled, rather than relying on the kernel to manage their time sharing.
+
+The switch between goroutines only happens at well defined points, when an explicit call is made to the Go runtime scheduler.
+
+The compiler knows the registers which are in use and saves them automatically.
+
+ [][42]
+
+While goroutines are cooperatively scheduled, this scheduling is handled for you by the runtime.
+
+Places where Goroutines may yield to others are:
+
+* Channel send and receive operations, if those operations would block.
+
+* The `go` statement, although there is no guarantee that the new goroutine will be scheduled immediately.
+
+* Blocking syscalls like file and network operations.
+
+* After being stopped for a garbage collection cycle.
+
+ [][43]
+
+This is an example to illustrate some of the scheduling points described in the previous slide.
+
+The thread, depicted by the arrow, starts on the left in the `ReadFile` function. It encounters `os.Open`, which blocks the thread while waiting for the file operation to complete, so the scheduler switches the thread to the goroutine on the right hand side.
+
+Execution continues until the read from the `c` chan blocks, and by this time the `os.Open` call has completed, so the scheduler switches the thread back to the left hand side and continues to the `file.Read` function, which again blocks on file IO.
+
+The scheduler switches the thread back to the right hand side for another channel operation, which has unblocked during the time the left hand side was running, but it blocks again on the channel send.
+
+Finally the thread switches back to the left hand side as the `Read` operation has completed and data is available.
+
+ [][44]
+
+This slide shows the low level `runtime.Syscall` function which is the base for all functions in the os package.
+
+Any time your code results in a call to the operating system, it will go through this function.
+
+The call to `entersyscall` informs the runtime that this thread is about to block.
+
+This allows the runtime to spin up a new thread which will service other goroutines while the current thread is blocked.
+
+This results in relatively few operating system threads per Go process, with the Go runtime taking care of assigning a runnable Goroutine to a free operating system thread.
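+
+The real implementation lives in the runtime and in assembly, but the pattern the slide describes looks roughly like this simplified sketch (the `rawSyscall` helper is hypothetical):
+
+```
+func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) {
+	entersyscall() // this thread is about to block; the runtime may
+	               // start another thread to service other goroutines
+	r1, r2, err = rawSyscall(trap, a1, a2, a3) // the actual kernel call
+	exitsyscall() // this thread is runnable again
+	return
+}
+```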
+
+ [][45]
+
+In the previous section I discussed how goroutines reduce the overhead of managing many, sometimes hundreds of thousands of concurrent threads of execution.
+
+There is another side to the goroutine story, and that is stack management, which leads me to my final topic.
+
+ [][46]
+
+This is a diagram of the memory layout of a process. The key things we are interested in are the locations of the heap and the stack.
+
+Traditionally inside the address space of a process, the heap is at the bottom of memory, just above the program (text) and grows upwards.
+
+The stack is located at the top of the virtual address space, and grows downwards.
+
+ [][47]
+
+Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they do collide, the program will abort.
+
+This is called a guard page, and effectively limits the stack size of a process, usually in the order of several megabytes.
+
+ [][48]
+
+We’ve discussed that threads share the same address space, so each thread must have its own stack.
+
+Because it is hard to predict the stack requirements of a particular thread, a large amount of memory is reserved for each thread’s stack along with a guard page.
+
+The hope is that this is more than will ever be needed and the guard page will never be hit.
+
+The downside is that as the number of threads in your program increases, the amount of available address space is reduced.
+
+ [][49]
+
+We’ve seen that the Go runtime schedules a large number of goroutines onto a small number of threads, but what about the stack requirements of those goroutines?
+
+Instead of using guard pages, the Go compiler inserts a check as part of every function call to test if there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space.
+
+Because of this check, a goroutine’s initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
+
+ [][50]
+
+This is a slide that shows how stacks are managed in Go 1.2.
+
+When `G` calls `H`, there is not enough space for `H` to run, so the runtime allocates a new stack frame from the heap, then runs `H` on that new stack segment. When `H` returns, the stack area is returned to the heap before returning to `G`.
+
+ [][51]
+
+This method of managing the stack works well in general, but for certain types of code, usually recursive code, it can cause the inner loop of your program to straddle one of these stack boundaries.
+
+For example, in the inner loop of your program, function `G` may call `H` many times in a loop.
+
+Each time this will cause a stack split. This is known as the hot split problem.
+
+ [][52]
+
+To solve hot splits, Go 1.3 has adopted a new stack management method.
+
+Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger, stack will be allocated.
+
+The old stack’s contents are copied to the new stack, then the goroutine continues with its new larger stack.
+
+After the first call to `H` the stack will be large enough that the check for available stack space will always succeed.
+
+This resolves the hot split problem.
+
+ [][53]
+
+Values, Inlining, Escape Analysis, Goroutines, and segmented/copying stacks.
+
+These are the five features that I chose to speak about today, but they are by no means the only things that make Go a fast programming language, just as there are more than three reasons that people cite as their reason to learn Go.
+
+As powerful as these five features are individually, they do not exist in isolation.
+
+For example, the way the runtime multiplexes goroutines onto threads would not be nearly as efficient without growable stacks.
+
+Inlining reduces the cost of the stack size check by combining smaller functions into larger ones.
+
+Escape analysis reduces the pressure on the garbage collector by automatically moving allocations from the heap to the stack.
+
+Escape analysis also provides better cache locality.
+
+Without growable stacks, escape analysis might place too much pressure on the stack.
+
+ [][54]
+
+* Thank you to the Gocon organisers for permitting me to speak today
+* twitter / web / email details
+* thanks to @offbymany, @billkennedy_go, and Minux for their assistance in preparing this talk.
+
+### Related Posts:
+
+1. [Hear me speak about Go performance at OSCON][1]
+
+2. [Why is a Goroutine’s stack infinite ?][2]
+
+3. [A whirlwind tour of Go’s runtime environment variables][3]
+
+4. [Performance without the event loop][4]
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+David is a programmer and author from Sydney Australia.
+
+Go contributor since February 2011, committer since April 2012.
+
+Contact information
+
+* dave@cheney.net
+* twitter: @davecheney
+
+----------------------
+
+via: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast
+
+作者:[Dave Cheney ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://dave.cheney.net/
+[1]:https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon
+[2]:https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite
+[3]:https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables
+[4]:https://dave.cheney.net/2015/08/08/performance-without-the-event-loop
+[5]:http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html
+[6]:http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350
+[7]:http://www.goinggo.net/
+[8]:https://twitter.com/offbymany
+[9]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg
+[10]:https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2
+[11]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg
+[12]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg
+[13]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg
+[14]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg
+[15]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg
+[16]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg
+[17]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg
+[18]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg
+[19]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg
+[20]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg
+[21]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg
+[22]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg
+[23]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg
+[24]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg
+[25]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg
+[26]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg
+[27]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg
+[28]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg
+[29]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg
+[30]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg
+[31]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg
+[32]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg
+[33]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg
+[34]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg
+[35]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg
+[36]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg
+[37]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg
+[38]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg
+[39]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg
+[40]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg
+[41]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg
+[42]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg
+[43]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg
+[44]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg
+[45]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg
+[46]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg
+[47]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg
+[48]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg
+[49]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg
+[50]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg
+[51]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg
+[52]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg
+[53]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg
+[54]:https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg
From 1f92c135b2295d09d46276a65f67838a66ad5730 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:34:37 +0800
Subject: [PATCH 023/220] =?UTF-8?q?20180421-13=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20180418 Passwordless Auth Server.md | 809 ++++++++++++++++++
1 file changed, 809 insertions(+)
create mode 100644 sources/tech/20180418 Passwordless Auth Server.md
diff --git a/sources/tech/20180418 Passwordless Auth Server.md b/sources/tech/20180418 Passwordless Auth Server.md
new file mode 100644
index 0000000000..001b55c27c
--- /dev/null
+++ b/sources/tech/20180418 Passwordless Auth Server.md
@@ -0,0 +1,809 @@
+Passwordless Auth: Server
+============================================================
+
+Passwordless authentication allows logging in without a password, just an email. It’s a more secure way of logging in than the classic email/password combination.
+
+I’ll show you how to code an HTTP API in [Go][6] that provides this service.
+
+### Flow
+
+* User inputs his email.
+
+* Server creates a temporary one-time-use code associated with the user (like a temporary password) and mails it to the user in the form of a “magic link”.
+
+* User clicks the magic link.
+
+* Server extracts the code from the magic link, fetches the associated user, and redirects to the client with a new JWT.
+
+* Client will use the JWT in every new request to authenticate the user.
+
+### Requisites
+
+* Database: We’ll use an SQL database called [CockroachDB][1] for this. It’s much like Postgres, but written in Go.
+
+* SMTP Server: To send mails we’ll use a third party mailing service. For development we’ll use [mailtrap][2]. Mailtrap sends all the mails to its inbox, so you don’t have to create multiple fake email accounts to test it.
+
+Install Go from [its page][7] and check that your installation went OK with `go version` (1.10.1 at the time of writing).
+
+Download CockroachDB from [its page][8], extract it and add it to your `PATH`. Check that all went OK with `cockroach version` (2.0 at the time of writing).
+
+### Database Schema
+
+Now, create a new directory for the project inside `GOPATH` and start a new CockroachDB node with `cockroach start`:
+
+```
+cockroach start --insecure --host 127.0.0.1
+
+```
+
+It will print some things, but check the SQL address line; it should say something like `postgresql://root@127.0.0.1:26257?sslmode=disable`. We’ll use this address to connect to the database later.
+
+Create a `schema.sql` file with the following content.
+
+```
+DROP DATABASE IF EXISTS passwordless_demo CASCADE;
+CREATE DATABASE IF NOT EXISTS passwordless_demo;
+SET DATABASE = passwordless_demo;
+
+CREATE TABLE IF NOT EXISTS users (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ email STRING UNIQUE,
+ username STRING UNIQUE
+);
+
+CREATE TABLE IF NOT EXISTS verification_codes (
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+ user_id UUID NOT NULL REFERENCES users ON DELETE CASCADE,
+ created_at TIMESTAMPTZ NOT NULL DEFAULT now()
+);
+
+INSERT INTO users (email, username) VALUES
+ ('john@passwordless.local', 'john_doe');
+
+```
+
+This script creates a database `passwordless_demo`, two tables: `users` and `verification_codes`, and inserts a fake user just to test it later. Each verification code is associated with a user and stores the creation date, useful to check if the code is expired or not.
+
+To execute this script, use `cockroach sql` in another terminal:
+
+```
+cat schema.sql | cockroach sql --insecure
+
+```
+
+### Environment Configuration
+
+I want you to set two environment variables: `SMTP_USERNAME` and `SMTP_PASSWORD`, which you can get from your mailtrap account. These two will be required by our program.
+
+### Go Dependencies
+
+For Go we’ll need the following packages:
+
+* [github.com/lib/pq][3]: Postgres driver which CockroachDB uses.
+
+* [github.com/matryer/way][4]: Router.
+
+* [github.com/dgrijalva/jwt-go][5]: JWT implementation.
+
+```
+go get -u github.com/lib/pq
+go get -u github.com/matryer/way
+go get -u github.com/dgrijalva/jwt-go
+
+```
+
+### Coding
+
+### Init Function
+
+Create `main.go` and start by getting some configuration from the environment inside the `init` function.
+
+```
+var config struct {
+ port int
+ appURL *url.URL
+ databaseURL string
+ jwtKey []byte
+ smtpAddr string
+ smtpAuth smtp.Auth
+}
+
+func init() {
+ config.port, _ = strconv.Atoi(env("PORT", "80"))
+ config.appURL, _ = url.Parse(env("APP_URL", "http://localhost:"+strconv.Itoa(config.port)+"/"))
+ config.databaseURL = env("DATABASE_URL", "postgresql://root@127.0.0.1:26257/passwordless_demo?sslmode=disable")
+ config.jwtKey = []byte(env("JWT_KEY", "super-duper-secret-key"))
+ smtpHost := env("SMTP_HOST", "smtp.mailtrap.io")
+ config.smtpAddr = net.JoinHostPort(smtpHost, env("SMTP_PORT", "25"))
+ smtpUsername, ok := os.LookupEnv("SMTP_USERNAME")
+ if !ok {
+ log.Fatalln("could not find SMTP_USERNAME on environment variables")
+ }
+ smtpPassword, ok := os.LookupEnv("SMTP_PASSWORD")
+ if !ok {
+ log.Fatalln("could not find SMTP_PASSWORD on environment variables")
+ }
+ config.smtpAuth = smtp.PlainAuth("", smtpUsername, smtpPassword, smtpHost)
+}
+
+func env(key, fallbackValue string) string {
+ v, ok := os.LookupEnv(key)
+ if !ok {
+ return fallbackValue
+ }
+ return v
+}
+
+```
+
+* `appURL` will allow us to build the “magic link”.
+
+* `port` is the port the HTTP server will listen on.
+
+* `databaseURL` is the CockroachDB address; I added `/passwordless_demo` to the previous address to indicate the database name.
+
+* `jwtKey` used to sign JWTs.
+
+* `smtpAddr` is a join of `SMTP_HOST` + `SMTP_PORT`; we’ll use it to send mails.
+
+* `smtpUsername` and `smtpPassword` are the two required vars.
+
+* `smtpAuth` is also used to send mails.
+
+The `env` function allows us to get an environment variable, with a fallback value in case it doesn’t exist.
+
+### Main Function
+
+```
+var db *sql.DB
+
+func main() {
+ var err error
+ if db, err = sql.Open("postgres", config.databaseURL); err != nil {
+ log.Fatalf("could not open database connection: %v\n", err)
+ }
+ defer db.Close()
+ if err = db.Ping(); err != nil {
+ log.Fatalf("could not ping to database: %v\n", err)
+ }
+
+ router := way.NewRouter()
+ router.HandleFunc("POST", "/api/users", jsonRequired(createUser))
+ router.HandleFunc("POST", "/api/passwordless/start", jsonRequired(passwordlessStart))
+ router.HandleFunc("GET", "/api/passwordless/verify_redirect", passwordlessVerifyRedirect)
+ router.Handle("GET", "/api/auth_user", authRequired(getAuthUser))
+
+ addr := fmt.Sprintf(":%d", config.port)
+ log.Printf("starting server at %s 🚀\n", config.appURL)
+ log.Fatalf("could not start server: %v\n", http.ListenAndServe(addr, router))
+}
+
+```
+
+First, it opens a database connection. Remember to load the driver.
+
+```
+import (
+ _ "github.com/lib/pq"
+)
+
+```
+
+Then, we create the router and define some endpoints. For the passwordless flow we use two endpoints: `/api/passwordless/start` mails the magic link and `/api/passwordless/verify_redirect` responds with the JWT.
+
+Finally, we start the server.
+
+You can create empty handlers and middlewares to test that the server starts.
+
+```
+func createUser(w http.ResponseWriter, r *http.Request) {
+ http.Error(w, http.StatusText(http.StatusNotImplemented), http.StatusNotImplemented)
+}
+
+func passwordlessStart(w http.ResponseWriter, r *http.Request) {
+ http.Error(w, http.StatusText(http.StatusNotImplemented), http.StatusNotImplemented)
+}
+
+func passwordlessVerifyRedirect(w http.ResponseWriter, r *http.Request) {
+ http.Error(w, http.StatusText(http.StatusNotImplemented), http.StatusNotImplemented)
+}
+
+func getAuthUser(w http.ResponseWriter, r *http.Request) {
+ http.Error(w, http.StatusText(http.StatusNotImplemented), http.StatusNotImplemented)
+}
+
+func jsonRequired(next http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ next(w, r)
+ }
+}
+
+func authRequired(next http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ next(w, r)
+ }
+}
+
+```
+
+Now:
+
+```
+go build
+./passwordless-demo
+
+```
+
+I’m in a directory called “passwordless-demo”, but if yours is named differently, `go build` will create an executable with that name. If you didn’t shut down the previous CockroachDB node and you set the `SMTP_USERNAME` and `SMTP_PASSWORD` vars correctly, you should see `starting server at http://localhost/ 🚀` with no errors.
+
+### JSON Required Middleware
+
+Endpoints that need to decode JSON from the request body need to make sure the request is of type `application/json`. Since that’s a common requirement, I decoupled it into a middleware.
+
+```
+func jsonRequired(next http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ ct := r.Header.Get("Content-Type")
+ isJSON := strings.HasPrefix(ct, "application/json")
+ if !isJSON {
+ respondJSON(w, "JSON body required", http.StatusUnsupportedMediaType)
+ return
+ }
+ next(w, r)
+ }
+}
+
+```
+
+As easy as that. First it gets the request content type from the headers, then checks whether it starts with “application/json”; otherwise, it returns early with `415 Unsupported Media Type`.
+
+### Respond JSON Function
+
+Responding with JSON is also a common thing, so I extracted it into a function.
+
+```
+func respondJSON(w http.ResponseWriter, payload interface{}, code int) {
+ switch value := payload.(type) {
+ case string:
+ payload = map[string]string{"message": value}
+ case int:
+ payload = map[string]int{"value": value}
+ case bool:
+ payload = map[string]bool{"result": value}
+ }
+ b, err := json.Marshal(payload)
+ if err != nil {
+ respondInternalError(w, fmt.Errorf("could not marshal response payload: %v", err))
+ return
+ }
+ w.Header().Set("Content-Type", "application/json; charset=utf-8")
+ w.WriteHeader(code)
+ w.Write(b)
+}
+
+```
+
+First, it does a type switch on primitive types to wrap them in a `map`. Then it marshals the payload to JSON, sets the response content type and status code, and writes the JSON. If the JSON marshaling fails, it responds with an internal error.
+
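+As a quick illustration, a hypothetical `ping` handler (my own example, not part of the demo) shows the wrapping in action:
+
+```
+func ping(w http.ResponseWriter, r *http.Request) {
+    // The plain string payload gets wrapped as {"message": "pong"}.
+    respondJSON(w, "pong", http.StatusOK)
+}
+
+```
+
+Calling it would write `{"message":"pong"}` with a `200 OK` status and the JSON content type.
+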
+### Respond Internal Error Function
+
+`respondInternalError` is a function that responds with `500 Internal Server Error` and also logs the error to the console.
+
+```
+func respondInternalError(w http.ResponseWriter, err error) {
+ log.Println(err)
+ respondJSON(w,
+ http.StatusText(http.StatusInternalServerError),
+ http.StatusInternalServerError)
+}
+
+```
+
+### Create User Handler
+
+I’ll start by coding the `createUser` handler because it’s the easiest and most REST-like.
+
+```
+type User struct {
+ ID string `json:"id"`
+ Email string `json:"email"`
+ Username string `json:"username"`
+}
+
+```
+
+The `User` type is just like the `users` table.
+
+```
+var (
+ rxEmail = regexp.MustCompile("^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$")
+ rxUsername = regexp.MustCompile("^[a-zA-Z][\\w|-]{1,17}$")
+)
+
+```
+
+These regular expressions validate the email and username respectively. They are very basic; feel free to adapt them as you need.
+
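+If you want a quick sanity check on these patterns, a small table-driven test in a `main_test.go` could look like this; the sample inputs are my own, not from the original post:
+
+```
+package main
+
+import "testing"
+
+func TestValidationRegexps(t *testing.T) {
+    cases := []struct {
+        in    string
+        valid bool
+    }{
+        {"john@example.com", true},
+        {"not-an-email", false},
+    }
+    for _, c := range cases {
+        if got := rxEmail.MatchString(c.in); got != c.valid {
+            t.Errorf("rxEmail.MatchString(%q) = %v, want %v", c.in, got, c.valid)
+        }
+    }
+    // Usernames must start with a letter and be 2 to 18 characters long.
+    if !rxUsername.MatchString("john_doe") || rxUsername.MatchString("0john") {
+        t.Error("rxUsername did not behave as expected")
+    }
+}
+
+```
+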
+Now, **inside** `createUser` function we’ll start by decoding the request body.
+
+```
+var user User
+if err := json.NewDecoder(r.Body).Decode(&user); err != nil {
+ respondJSON(w, err.Error(), http.StatusBadRequest)
+ return
+}
+defer r.Body.Close()
+
+```
+
+We create a JSON decoder over the request body and decode into a `User` value. In case of error, we return with a `400 Bad Request`. Don’t forget to close the body reader.
+
+```
+errs := make(map[string]string)
+if user.Email == "" {
+ errs["email"] = "Email required"
+} else if !rxEmail.MatchString(user.Email) {
+ errs["email"] = "Invalid email"
+}
+if user.Username == "" {
+ errs["username"] = "Username required"
+} else if !rxUsername.MatchString(user.Username) {
+ errs["username"] = "Invalid username"
+}
+if len(errs) != 0 {
+ respondJSON(w, errs, http.StatusUnprocessableEntity)
+ return
+}
+
+```
+
+This is how I do validation: a simple `map` of field errors, plus a `len(errs) != 0` check to return with `422 Unprocessable Entity`. An invalid email, for instance, produces a body like `{"email": "Invalid email"}`.
+
+```
+err := db.QueryRowContext(r.Context(), `
+ INSERT INTO users (email, username) VALUES ($1, $2)
+ RETURNING id
+`, user.Email, user.Username).Scan(&user.ID)
+
+if errPq, ok := err.(*pq.Error); ok && errPq.Code.Name() == "unique_violation" {
+ if strings.Contains(errPq.Error(), "email") {
+ errs["email"] = "Email taken"
+ } else {
+ errs["username"] = "Username taken"
+ }
+ respondJSON(w, errs, http.StatusForbidden)
+ return
+} else if err != nil {
+ respondInternalError(w, fmt.Errorf("could not insert user: %v", err))
+ return
+}
+
+```
+
+This SQL query inserts a new user with the given email and username, and returns the auto-generated id. Each `$N` placeholder is replaced by the corresponding argument passed to `QueryRowContext`.
+
+Because the `users` table has unique constraints on the `email` and `username` fields, I check for the “unique_violation” error and return with `403 Forbidden` in that case, or with an internal error otherwise.
+
+```
+respondJSON(w, user, http.StatusCreated)
+
+```
+
+Finally I just respond with the created user.
+
+### Passwordless Start Handler
+
+```
+type PasswordlessStartRequest struct {
+ Email string `json:"email"`
+ RedirectURI string `json:"redirectUri"`
+}
+
+```
+
+This struct holds the `passwordlessStart` request body: the email of the user who wants to log in, and the redirect URI that comes from the client (the app that will use our API), e.g. `https://frontend.app/callback`.
+
+```
+var magicLinkTmpl = template.Must(template.ParseFiles("templates/magic-link.html"))
+
+```
+
+We’ll use the Go template engine to build the mail, so you’ll need to create a `magic-link.html` file inside a `templates` directory with content like this:
+
+```
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="utf-8">
+    <title>Magic Link</title>
+</head>
+<body>
+    <a href="{{ .MagicLink }}" target="_blank">Click here to login.</a>
+    <p>This link expires in 15 minutes and can only be used once.</p>
+</body>
+</html>
+
+```
+
+This template is the mail we’ll send to the user with the magic link. Feel free to style it how you want.
+
+Now, **inside** `passwordlessStart` function:
+
+```
+var input PasswordlessStartRequest
+if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ respondJSON(w, err.Error(), http.StatusBadRequest)
+ return
+}
+defer r.Body.Close()
+
+```
+
+First, we decode the request body like before.
+
+```
+errs := make(map[string]string)
+if input.Email == "" {
+ errs["email"] = "Email required"
+} else if !rxEmail.MatchString(input.Email) {
+ errs["email"] = "Invalid email"
+}
+if input.RedirectURI == "" {
+ errs["redirectUri"] = "Redirect URI required"
+} else if u, err := url.Parse(input.RedirectURI); err != nil || !u.IsAbs() {
+ errs["redirectUri"] = "Invalid redirect URI"
+}
+if len(errs) != 0 {
+ respondJSON(w, errs, http.StatusUnprocessableEntity)
+ return
+}
+
+```
+
+For the redirect URI validation, we use the Go URL parser and check that the URI is absolute.
+
+```
+var verificationCode string
+err := db.QueryRowContext(r.Context(), `
+ INSERT INTO verification_codes (user_id) VALUES
+ ((SELECT id FROM users WHERE email = $1))
+ RETURNING id
+`, input.Email).Scan(&verificationCode)
+if errPq, ok := err.(*pq.Error); ok && errPq.Code.Name() == "not_null_violation" {
+ respondJSON(w, "No user found with that email", http.StatusNotFound)
+ return
+} else if err != nil {
+ respondInternalError(w, fmt.Errorf("could not insert verification code: %v", err))
+ return
+}
+
+```
+
+This SQL query inserts a new verification code associated with the user with the given email and returns the auto-generated id. Because that user might not exist, the subquery can resolve to `NULL`, which fails the `NOT NULL` constraint on the `user_id` field, so I check for that error and return with `404 Not Found` in that case, or with an internal error otherwise.
+
+```
+q := make(url.Values)
+q.Set("verification_code", verificationCode)
+q.Set("redirect_uri", input.RedirectURI)
+magicLink := *config.appURL
+magicLink.Path = "/api/passwordless/verify_redirect"
+magicLink.RawQuery = q.Encode()
+
+```
+
+Now, I build the magic link and set the `verification_code` and `redirect_uri` inside the query string, e.g. `http://localhost/api/passwordless/verify_redirect?verification_code=some_code&redirect_uri=https://frontend.app/callback`.
+
+```
+var body bytes.Buffer
+data := map[string]string{"MagicLink": magicLink.String()}
+if err := magicLinkTmpl.Execute(&body, data); err != nil {
+ respondInternalError(w, fmt.Errorf("could not execute magic link template: %v", err))
+ return
+}
+
+```
+
+We execute the magic link template, saving the result into a buffer. In case of error, we return with an internal error.
+
+```
+to := mail.Address{Address: input.Email}
+if err := sendMail(to, "Magic Link", body.String()); err != nil {
+ respondInternalError(w, fmt.Errorf("could not mail magic link: %v", err))
+ return
+}
+
+```
+
+To mail the user, I make use of the `sendMail` function that I’ll code next. In case of error, I return with an internal error.
+
+```
+w.WriteHeader(http.StatusNoContent)
+
+```
+
+Finally, I just set the response status code to `204 No Content`. The client doesn’t need more data than a success status code.
+
+### Send Mail Function
+
+```
+func sendMail(to mail.Address, subject, body string) error {
+ from := mail.Address{
+ Name: "Passwordless Demo",
+ Address: "noreply@" + config.appURL.Host,
+ }
+ headers := map[string]string{
+ "From": from.String(),
+ "To": to.String(),
+ "Subject": subject,
+ "Content-Type": `text/html; charset="utf-8"`,
+ }
+ msg := ""
+ for k, v := range headers {
+ msg += fmt.Sprintf("%s: %s\r\n", k, v)
+ }
+ msg += "\r\n"
+ msg += body
+
+ return smtp.SendMail(
+ config.smtpAddr,
+ config.smtpAuth,
+ from.Address,
+ []string{to.Address},
+ []byte(msg))
+}
+
+```
+
+This function creates the structure of a basic HTML mail and sends it through the SMTP server. There are a lot of things you can customize in a mail, but I kept it simple.
+
+### Passwordless Verify Redirect Handler
+
+```
+var rxUUID = regexp.MustCompile("^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$")
+
+```
+
+First, this regular expression validates a UUID (the verification code).
+
+Now, **inside** `passwordlessVerifyRedirect` function:
+
+```
+q := r.URL.Query()
+verificationCode := q.Get("verification_code")
+redirectURI := q.Get("redirect_uri")
+
+```
+
+`/api/passwordless/verify_redirect` is a `GET` endpoint so we read data from the query string.
+
+```
+errs := make(map[string]string)
+if verificationCode == "" {
+ errs["verification_code"] = "Verification code required"
+} else if !rxUUID.MatchString(verificationCode) {
+ errs["verification_code"] = "Invalid verification code"
+}
+var callback *url.URL
+var err error
+if redirectURI == "" {
+ errs["redirect_uri"] = "Redirect URI required"
+} else if callback, err = url.Parse(redirectURI); err != nil || !callback.IsAbs() {
+ errs["redirect_uri"] = "Invalid redirect URI"
+}
+if len(errs) != 0 {
+ respondJSON(w, errs, http.StatusUnprocessableEntity)
+ return
+}
+
+```
+
+Pretty similar validation, but we store the parsed redirect URI in a `callback` variable.
+
+```
+var userID string
+if err := db.QueryRowContext(r.Context(), `
+ DELETE FROM verification_codes
+ WHERE id = $1
+ AND created_at >= now() - INTERVAL '15m'
+ RETURNING user_id
+`, verificationCode).Scan(&userID); err == sql.ErrNoRows {
+ respondJSON(w, "Link expired or already used", http.StatusBadRequest)
+ return
+} else if err != nil {
+ respondInternalError(w, fmt.Errorf("could not delete verification code: %v", err))
+ return
+}
+
+```
+
+This SQL query deletes the verification code with the given id, making sure it was created no more than 15 minutes ago, and returns the associated `user_id`. If no rows come back, the code either didn’t exist or had expired, so we respond with that; otherwise, we respond with an internal error.
+
+```
+expiresAt := time.Now().Add(time.Hour * 24 * 60)
+tokenString, err := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.StandardClaims{
+ Subject: userID,
+ ExpiresAt: expiresAt.Unix(),
+}).SignedString(config.jwtKey)
+if err != nil {
+ respondInternalError(w, fmt.Errorf("could not create JWT: %v", err))
+ return
+}
+
+```
+
+This is how the JWT is created. We set the expiration date of the JWT to 60 days from now. You could give it a shorter lifetime (~2 weeks) and add a new endpoint to refresh tokens, but I didn’t want to add more complexity; a rough sketch of such an endpoint follows.
+
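+If you do want refresh tokens, here is what that extra endpoint could look like. It is my own sketch rather than part of the demo, and it leans on the `authRequired` middleware coded later in this post:
+
+```
+func refreshToken(w http.ResponseWriter, r *http.Request) {
+    // authRequired already validated the old JWT and stored the
+    // user ID in the request context under keyAuthUserID.
+    authUserID := r.Context().Value(keyAuthUserID).(string)
+    expiresAt := time.Now().Add(time.Hour * 24 * 14) // ~2 weeks
+    tokenString, err := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.StandardClaims{
+        Subject:   authUserID,
+        ExpiresAt: expiresAt.Unix(),
+    }).SignedString(config.jwtKey)
+    if err != nil {
+        respondInternalError(w, fmt.Errorf("could not create JWT: %v", err))
+        return
+    }
+    respondJSON(w, tokenString, http.StatusOK)
+}
+
+```
+
+You would register it in `main` with something like `router.Handle("GET", "/api/refresh_token", authRequired(refreshToken))`. Back to the verify redirect handler:
+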
+```
+expiresAtB, err := expiresAt.MarshalText()
+if err != nil {
+ respondInternalError(w, fmt.Errorf("could not marshal expiration date: %v", err))
+ return
+}
+f := make(url.Values)
+f.Set("jwt", tokenString)
+f.Set("expires_at", string(expiresAtB))
+callback.Fragment = f.Encode()
+
+```
+
+We plan to redirect; you could use the query string to add the JWT, but I’ve seen that a hash fragment is more commonly used, e.g. `https://frontend.app/callback#jwt=token_here&expires_at=some_date`.
+
+The expiration date could be extracted from the JWT, but then the client would have to use a JWT library to decode it, so to make life easier I just added it to the fragment too.
+
+```
+http.Redirect(w, r, callback.String(), http.StatusFound)
+
+```
+
+Finally we just redirect with a `302 Found`.
+
+* * *
+
+The passwordless flow is complete. Now we just need to code the `getAuthUser` endpoint, which returns info about the currently authenticated user. If you remember, this endpoint makes use of the `authRequired` middleware.
+
+### With Auth Middleware
+
+Before coding the `authRequired` middleware, I’ll code one that doesn’t require authentication: if no JWT is passed, it just continues without authenticating the user.
+
+```
+type ContextKey int
+
+const (
+ keyAuthUserID ContextKey = iota
+)
+
+func jwtKeyFunc(*jwt.Token) (interface{}, error) {
+ return config.jwtKey, nil
+}
+
+func withAuth(next http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ a := r.Header.Get("Authorization")
+ hasToken := strings.HasPrefix(a, "Bearer ")
+ if !hasToken {
+ next(w, r)
+ return
+ }
+ tokenString := a[7:]
+
+ p := jwt.Parser{ValidMethods: []string{jwt.SigningMethodHS256.Name}}
+ token, err := p.ParseWithClaims(tokenString, &jwt.StandardClaims{}, jwtKeyFunc)
+ if err != nil {
+ respondJSON(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
+ return
+ }
+
+ claims, ok := token.Claims.(*jwt.StandardClaims)
+ if !ok || !token.Valid {
+ respondJSON(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
+ return
+ }
+
+ ctx := r.Context()
+ ctx = context.WithValue(ctx, keyAuthUserID, claims.Subject)
+
+ next(w, r.WithContext(ctx))
+ }
+}
+
+```
+
+The JWT comes in every request inside the “Authorization” header, in the form `Bearer <token>`. If no token is present, we just pass control to the next handler.
+
+We create a parser and parse the token. If that fails, we return with `401 Unauthorized`.
+
+Then we extract the claims inside the JWT and add the `Subject` (which is the user ID) to the request context.
+
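+On the client side, attaching the token is a one-liner. This fragment is only an illustration, assuming `tokenString` holds a JWT obtained from the magic link redirect:
+
+```
+req, err := http.NewRequest("GET", "http://localhost/api/auth_user", nil)
+if err != nil {
+    log.Fatalln(err)
+}
+// withAuth looks for exactly this "Bearer " prefix.
+req.Header.Set("Authorization", "Bearer "+tokenString)
+res, err := http.DefaultClient.Do(req)
+
+```
+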
+### Auth Required Middleware
+
+```
+func authRequired(next http.HandlerFunc) http.HandlerFunc {
+ return withAuth(func(w http.ResponseWriter, r *http.Request) {
+ _, ok := r.Context().Value(keyAuthUserID).(string)
+ if !ok {
+ respondJSON(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
+ return
+ }
+ next(w, r)
+ })
+}
+
+```
+
+Now, `authRequired` makes use of `withAuth` and tries to extract the authenticated user ID from the request context. If that fails, it returns with `401 Unauthorized`; otherwise, it continues.
+
+### Get Auth User
+
+**Inside** `getAuthUser` handler:
+
+```
+ctx := r.Context()
+authUserID := ctx.Value(keyAuthUserID).(string)
+
+user, err := fetchUser(ctx, authUserID)
+if err == sql.ErrNoRows {
+ respondJSON(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
+ return
+} else if err != nil {
+ respondInternalError(w, fmt.Errorf("could not query auth user: %v", err))
+ return
+}
+
+respondJSON(w, user, http.StatusOK)
+
+```
+
+First, we extract the ID of the authenticated user from the request context and use it to fetch the user. If no row is returned, we send a `418 I'm a teapot`, or an internal error otherwise. Lastly, we just respond with the user 😊
+
+### Fetch User Function
+
+You saw a `fetchUser` function there.
+
+```
+func fetchUser(ctx context.Context, id string) (User, error) {
+ user := User{ID: id}
+ err := db.QueryRowContext(ctx, `
+ SELECT email, username FROM users WHERE id = $1
+ `, id).Scan(&user.Email, &user.Username)
+ return user, err
+}
+
+```
+
+I decoupled it because fetching a user by ID is a common thing.
+
+* * *
+
+That’s all the code. Build it and test it yourself. You can try a live demo [here][9].
+
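+If you’d rather poke at the API programmatically, a throwaway program like this one (my own snippet, assuming the server listens on the default port 80) exercises the user creation endpoint:
+
+```
+package main
+
+import (
+    "fmt"
+    "log"
+    "net/http"
+    "strings"
+)
+
+func main() {
+    body := strings.NewReader(`{"email":"john@example.com","username":"john_doe"}`)
+    res, err := http.Post("http://localhost/api/users", "application/json", body)
+    if err != nil {
+        log.Fatalln(err)
+    }
+    defer res.Body.Close()
+    // Expect "201 Created" on success, or a validation error payload.
+    fmt.Println(res.Status)
+}
+
+```
+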
+If you hit `Blocked script execution because the document's frame is sandboxed and the 'allow-scripts' permission is not set` after clicking the magic link on Mailtrap, try a right click and “Open link in new tab”. This is a security measure: the mail content is [sandboxed][10]. I had this problem sometimes on `localhost`, but you should be fine once you deploy the server with `https://`.
+
+Please leave any issues on the [GitHub repo][11] or feel free to send PRs 👍
+
+I’ll write a second part for this post coding a client for the API.
+
+--------------------------------------------------------------------------------
+
+via: https://nicolasparada.netlify.com/posts/passwordless-auth-server/
+
+作者:[Nicolás Parada ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://nicolasparada.netlify.com/
+[1]:https://www.cockroachlabs.com/
+[2]:https://mailtrap.io/
+[3]:https://github.com/lib/pq
+[4]:https://github.com/matryer/way
+[5]:https://github.com/dgrijalva/jwt-go
+[6]:https://golang.org/
+[7]:https://golang.org/dl/
+[8]:https://www.cockroachlabs.com/docs/stable/install-cockroachdb.html
+[9]:https://go-passwordless-demo.herokuapp.com/
+[10]:https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox
+[11]:https://github.com/nicolasparada/go-passwordless-demo
From c943a96df6c370a8abdc50c8abb6931613a67286 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Sat, 21 Apr 2018 10:42:08 +0800
Subject: [PATCH 024/220] =?UTF-8?q?20180421-15=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...NG GO PROJECTS WITH DOCKER ON GITLAB CI.md | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
diff --git a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
new file mode 100644
index 0000000000..eb940ed8fe
--- /dev/null
+++ b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
@@ -0,0 +1,155 @@
+BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI
+===============================================
+
+### Intro
+
+This post is a summary of my research on building Go projects in a Docker container on CI (GitLab, specifically). I found solving private dependencies quite hard (coming from a Node/.NET background), so that is the main reason I wrote this up. Please feel free to reach out if there are any issues, or submit a pull request on the Docker image.
+
+### Dep
+
+As dep is the best option for managing Go dependencies right now, the build will need to run `dep ensure` before building.
+
+Note: I personally do not commit my `vendor/` folder into source control; if you do, I’m not sure whether this step can be skipped.
+
+The best way to do this with Docker builds is to use `dep ensure -vendor-only`. [See here][1].
+
+### Docker Build Image
+
+I first tried to use `golang:1.10` but this image doesn’t have:
+
+* curl
+
+* git
+
+* make
+
+* dep
+
+* golint
+
+I have created my own Docker image for builds ([github][2] / [dockerhub][3]), which I will keep up to date, but I offer no guarantees, so you should probably create and manage your own.
+
+### Internal Dependencies
+
+So far we can build any project that has publicly accessible dependencies. But what if your project depends on another private GitLab repository?
+
+Running `dep ensure` locally should work with your git setup, but on CI your local git configuration doesn’t apply, so builds will fail.
+
+### Gitlab Permissions Model
+
+This was [added in GitLab 8.12][4]; the feature we care about most is the `CI_JOB_TOKEN` environment variable made available during builds.
+
+This basically means we can clone [dependent repositories][5] like so:
+
+```
+git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo
+
+```
+
+However, we do want to make this a bit more user-friendly, as dep will not magically add credentials when trying to pull code.
+
+We will add this line to the `before_script` section of the `.gitlab-ci.yml`.
+
+```
+before_script:
+ - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
+
+```
+
+A `.netrc` file lets you specify which credentials to use for which server, so you avoid entering a username and password every time you pull from (or push to) Git. The password is stored in plaintext, so you shouldn’t do this on your own machine. The file is actually read by `cURL`, which Git uses behind the scenes. [Read more here][6].
+
+Project Files
+============================================================
+
+### Makefile
+
+While this is optional, I have found it makes things easier.
+
+Configuring the steps below means that in the CI script (and locally) we can run `make lint`, `make build`, etc. without repeating the steps each time.
+
+```
+GOFILES = $(shell find . -name '*.go' -not -path './vendor/*')
+GOPACKAGES = $(shell go list ./... | grep -v /vendor/)
+
+default: build
+
+workdir:
+ mkdir -p workdir
+
+build: workdir/scraper
+
+workdir/scraper: $(GOFILES)
+ GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o workdir/scraper .
+
+test: test-all
+
+test-all:
+ @go test -v $(GOPACKAGES)
+
+lint: lint-all
+
+lint-all:
+ @golint -set_exit_status $(GOPACKAGES)
+
+```
+
+### .gitlab-ci.yml
+
+This is where the Gitlab CI magic happens. You may want to swap out the image for your own.
+
+```
+image: sjdweb/go-docker-build:1.10
+
+stages:
+ - test
+ - build
+
+before_script:
+ - cd $GOPATH/src
+ - mkdir -p gitlab.com/$CI_PROJECT_NAMESPACE
+ - cd gitlab.com/$CI_PROJECT_NAMESPACE
+ - ln -s $CI_PROJECT_DIR
+ - cd $CI_PROJECT_NAME
+ - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
+ - dep ensure -vendor-only
+
+lint_code:
+ stage: test
+ script:
+ - make lint
+
+unit_tests:
+ stage: test
+ script:
+ - make test
+
+build:
+ stage: build
+ script:
+ - make
+
+```
+
+### What This Is Missing
+
+I would usually be building a Docker image with my binary and pushing that to the Gitlab Container Registry.
+
+You can see I’m just building the binary and exiting; you would at least want to store that binary somewhere (such as a build artifact).
+
+--------------------------------------------------------------------------------
+
+via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/
+
+作者:[ SEAN DRUMM][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://seandrumm.co.uk/
+[1]:https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-use-dep-with-docker
+[2]:https://github.com/sjdweb/go-docker-build/blob/master/Dockerfile
+[3]:https://hub.docker.com/r/sjdweb/go-docker-build/
+[4]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html
+[5]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html#dependent-repositories
+[6]:https://github.com/bagder/everything-curl/blob/master/usingcurl-netrc.md
From 467cfb7f0280dfa84cc294bfcf30ac5933472dcf Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sat, 21 Apr 2018 12:56:06 +0800
Subject: [PATCH 025/220] Delete 20180326 Working with calendars on Linux.md
---
...0180326 Working with calendars on Linux.md | 273 ------------------
1 file changed, 273 deletions(-)
delete mode 100644 sources/tech/20180326 Working with calendars on Linux.md
diff --git a/sources/tech/20180326 Working with calendars on Linux.md b/sources/tech/20180326 Working with calendars on Linux.md
deleted file mode 100644
index 110538e96f..0000000000
--- a/sources/tech/20180326 Working with calendars on Linux.md
+++ /dev/null
@@ -1,273 +0,0 @@
-Translating by MjSeven
-
-
-Working with calendars on Linux
-======
-
-
-Linux systems can provide more help with your schedule than just reminding you what day today is. You have a lot of options for displaying calendars — some that are likely to prove helpful and others that just might boggle your mind.
-
-### date
-
-To begin, you probably know that you can show the current date with the **date** command.
-```
-$ date
-Mon Mar 26 08:01:41 EDT 2018
-
-```
-
-### cal and ncal
-
-You can show the entire month with the **cal** command. With no arguments, cal displays the current month and, by default, highlights the current day by reversing the foreground and background colors.
-```
-$ cal
- March 2018
-Su Mo Tu We Th Fr Sa
- 1 2 3
- 4 5 6 7 8 9 10
-11 12 13 14 15 16 17
-18 19 20 21 22 23 24
-25 26 27 28 29 30 31
-
-```
-
-If you want to display the current month in a “sideways” format, you can use the **ncal** command.
-```
-$ ncal
- March 2018
-Su 4 11 18 25
-Mo 5 12 19 26
-Tu 6 13 20 27
-We 7 14 21 28
-Th 1 8 15 22 29
-Fr 2 9 16 23 30
-Sa 3 10 17 24 31
-
-```
-
-That command can be especially useful if, for example, you just want to see the dates for some particular day of the week.
-```
-$ ncal | grep Th
-Th 1 8 15 22 29
-
-```
-
-The ncal command can also display the entire year in the "sideways" format. Just provide the year along with the command.
-```
-$ ncal 2018
- 2018
- January February March April
-Su 7 14 21 28 4 11 18 25 4 11 18 25 1 8 15 22 29
-Mo 1 8 15 22 29 5 12 19 26 5 12 19 26 2 9 16 23 30
-Tu 2 9 16 23 30 6 13 20 27 6 13 20 27 3 10 17 24
-We 3 10 17 24 31 7 14 21 28 7 14 21 28 4 11 18 25
-Th 4 11 18 25 1 8 15 22 1 8 15 22 29 5 12 19 26
-Fr 5 12 19 26 2 9 16 23 2 9 16 23 30 6 13 20 27
-Sa 6 13 20 27 3 10 17 24 3 10 17 24 31 7 14 21 28
-...
-
-```
-
-You can also display the entire year with **cal**. Just remember that you need all four digits for the year. If you type "cal 18", you'll get a calendar year for 18 AD, not 2018.
-```
-$ cal 2018
- 2018
- January February March
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 1 2 3 1 2 3
- 7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
-14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
-21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
-28 29 30 31 25 26 27 28 25 26 27 28 29 30 31
-
-
- April May June
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 7 1 2 3 4 5 1 2
- 8 9 10 11 12 13 14 6 7 8 9 10 11 12 3 4 5 6 7 8 9
-15 16 17 18 19 20 21 13 14 15 16 17 18 19 10 11 12 13 14 15 16
-22 23 24 25 26 27 28 20 21 22 23 24 25 26 17 18 19 20 21 22 23
-29 30 27 28 29 30 31 24 25 26 27 28 29 30
-
-
- July August September
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 7 1 2 3 4 1
- 8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
-15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
-22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
-29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
- 30
-
- October November December
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 1 2 3 1
- 7 8 9 10 11 12 13 4 5 6 7 8 9 10 2 3 4 5 6 7 8
-14 15 16 17 18 19 20 11 12 13 14 15 16 17 9 10 11 12 13 14 15
-21 22 23 24 25 26 27 18 19 20 21 22 23 24 16 17 18 19 20 21 22
-28 29 30 31 25 26 27 28 29 30 23 24 25 26 27 28 29
- 30 31
-
-```
-
-For a particular year and month, use the -d option win a command like this.
-```
-$ cal -d 1949-03
- March 1949
-Su Mo Tu We Th Fr Sa
- 1 2 3 4 5
- 6 7 8 9 10 11 12
-13 14 15 16 17 18 19
-20 21 22 23 24 25 26
-27 28 29 30 31
-
-```
-
-Another potentially useful calendaring option is the **cal** command’s -j option. Let's take a look at what that shows you.
-```
-$ cal -j
- March 2018
- Su Mo Tu We Th Fr Sa
- 60 61 62
- 63 64 65 66 67 68 69
- 70 71 72 73 74 75 76
- 77 78 79 80 81 82 83
- 84 85 86 87 88 89 90
-
-```
-
-"What???" you might be asking. OK, that -j option is displaying Julian dates — the numeric day of the year that runs from 1 to 365 most years. So, 1 is January 1st and 32 is February 1st. The command **cal -j 2018** will show you the entire year, ending like this:
-```
-$ cal -j 2018 | tail -9
-
- November December
- Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 305 306 307 335
-308 309 310 311 312 313 314 336 337 338 339 340 341 342
-315 316 317 318 319 320 321 343 344 345 346 347 348 349
-322 323 324 325 326 327 328 350 351 352 353 354 355 356
-329 330 331 332 333 334 357 358 359 360 361 362 363
- 364 365
-
-```
-
-This kind of display might help remind you of how many days have gone by since you made that New Year's resolution that you haven't yet acted on.
-
-Run a similar command for 2020, and you’ll note that it’s a leap year.
-```
-$ cal -j 2020 | tail -9
-
- November December
- Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
-306 307 308 309 310 311 312 336 337 338 339 340
-313 314 315 316 317 318 319 341 342 343 344 345 346 347
-320 321 322 323 324 325 326 348 349 350 351 352 353 354
-327 328 329 330 331 332 333 355 356 357 358 359 360 361
-334 335 362 363 364 365 366
-
-```
-
-### calendar
-
-Another interesting and potentially overwhelming command can inform you about holidays. This command has a lot of options, but let’s just say that you’d like to see a list of upcoming holidays and noteworthy days. The calendar's **-l** option allows you to select how many days you want to see beyond today, so 0 means "today only".
-```
-$ calendar -l 0
-Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
-Mar 26 David Packard died, 1996; age of 83
-Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
-Mar 26 Independence Day in Bangladesh
-Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
-Mar 26* Seward's Day in Alaska (last Monday)
-Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
-Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
-Mar 26 Bonne fête aux Lara !
-Mar 26 Aujourd'hui, c'est la St(e) Ludger.
-Mar 26 N'oubliez pas les Larissa !
-Mar 26 Ludwig van Beethoven in Wien gestorben, 1827
-Mar 26 Emánuel
-
-```
-
-For most of us, that's a bit more celebrating than we can manage in a single day. If you're seeing something like this, you can blame it on your **calendar.all** file that's telling the system what international calendars you'd like to include. You can, of course, pare this down by removing some of the lines in this file that include other files. The lines look like these:
-```
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-```
-
-Say we cut our display down to world calendars only by removing all but the first #include line shown above. We'd then see this:
-```
-$ calendar -l 0
-Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
-Mar 26 David Packard died, 1996; age of 83
-Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
-Mar 26 Independence Day in Bangladesh
-Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
-Mar 26* Seward's Day in Alaska (last Monday)
-Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
-Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
-
-```
-
-Clearly, the world calendar's special days are quite numerous. A display like this could, however, keep you from forgetting the all-important Popeye statue unveiling day and its role in observing the "spinach capital of the world."
-
-A more useful calendaring choice might be to put work-related calendars in a special file and use that calendar in the calendar.all file to determine what events you will see when you run the command.
-```
-$ cat /usr/share/calendar/calendar.all
-/*
- * International and national calendar files
- *
- * This is the calendar master file. In the standard setup, it is
- * included by /etc/calendar/default, so you can make any system-wide
- * changes there and they will be kept when you upgrade. If you want
- * to edit this file, copy it into /etc/calendar/calendar.all and
- * edit it there.
- *
- */
-
-#ifndef _calendar_all_
-#define _calendar_all_
-
-#include
-#include <==
-
-#endif /bin /boot /dev /etc /home /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var !_calendar_all_ */
-
-```
-
-The format for calendar files is very simple — mm/dd for the date, a tab, and the event's description.
-```
-$ cat calendar.work
-03/26 Describe how the cal and calendar commands work
-03/27 Throw a party!
-
-```
-
-### notes and nostalgia
-
-Note that the calendar command might not be available for all Linux distributions. You might have to remember the Popeye statue unveiling day on your own.
-
-And in case you're wondering, you can display a calendar as far ahead as the year 9999 — even for the prophetic [2525][1].
-
-Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3265752/linux/working-with-calendars-on-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[1]:https://www.youtube.com/watch?v=izQB2-Kmiic
-[2]:https://www.facebook.com/NetworkWorld/
-[3]:https://www.linkedin.com/company/network-world
From 50174d3f1d17381dcb95a4c90b6195c7494bb707 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sat, 21 Apr 2018 12:57:00 +0800
Subject: [PATCH 026/220] Create 20180326 Working with calendars on Linux.md
---
...0180326 Working with calendars on Linux.md | 270 ++++++++++++++++++
1 file changed, 270 insertions(+)
create mode 100644 translated/tech/20180326 Working with calendars on Linux.md
diff --git a/translated/tech/20180326 Working with calendars on Linux.md b/translated/tech/20180326 Working with calendars on Linux.md
new file mode 100644
index 0000000000..d63a669df6
--- /dev/null
+++ b/translated/tech/20180326 Working with calendars on Linux.md
@@ -0,0 +1,270 @@
+在 Linux 上使用日历
+=====
+
+
+Linux 系统不仅能提醒你今天是星期几,还能为你的日程安排提供更多帮助。日历显示有很多选项:有些很实用,有些则可能让你大开眼界。
+
+### date
+
+首先,你可能知道可以使用 **date** 命令显示当前日期。
+```
+$ date
+Mon Mar 26 08:01:41 EDT 2018
+
+```
+
+### cal 和 ncal
+
+你可以使用 **cal** 命令显示整个月份。没有参数时,cal 显示当前月份,默认情况下,通过反转前景色和背景颜色来突出显示当天。
+```
+$ cal
+ March 2018
+Su Mo Tu We Th Fr Sa
+ 1 2 3
+ 4 5 6 7 8 9 10
+11 12 13 14 15 16 17
+18 19 20 21 22 23 24
+25 26 27 28 29 30 31
+
+```
+
+如果你想以“横向”格式显示当前月份,则可以使用 **ncal** 命令。
+```
+$ ncal
+ March 2018
+Su 4 11 18 25
+Mo 5 12 19 26
+Tu 6 13 20 27
+We 7 14 21 28
+Th 1 8 15 22 29
+Fr 2 9 16 23 30
+Sa 3 10 17 24 31
+
+```
+
+例如,如果你只想查看一周特定某天的日期,这个命令可能特别有用。
+```
+$ ncal | grep Th
+Th 1 8 15 22 29
+
+```
+
+ncal 命令还可以以“横向”格式显示一整年,只需在命令后提供年份。
+```
+$ ncal 2018
+ 2018
+ January February March April
+Su 7 14 21 28 4 11 18 25 4 11 18 25 1 8 15 22 29
+Mo 1 8 15 22 29 5 12 19 26 5 12 19 26 2 9 16 23 30
+Tu 2 9 16 23 30 6 13 20 27 6 13 20 27 3 10 17 24
+We 3 10 17 24 31 7 14 21 28 7 14 21 28 4 11 18 25
+Th 4 11 18 25 1 8 15 22 1 8 15 22 29 5 12 19 26
+Fr 5 12 19 26 2 9 16 23 2 9 16 23 30 6 13 20 27
+Sa 6 13 20 27 3 10 17 24 3 10 17 24 31 7 14 21 28
+...
+
+```
+
+你也可以使用 **cal** 命令显示一整年。请记住,年份需要输入四位数字。如果你输入 "cal 18",你得到的将是公元 18 年的年历,而不是 2018 年。
+```
+$ cal 2018
+ 2018
+ January February March
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 1 2 3 1 2 3
+ 7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
+14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
+21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
+28 29 30 31 25 26 27 28 25 26 27 28 29 30 31
+
+
+ April May June
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 7 1 2 3 4 5 1 2
+ 8 9 10 11 12 13 14 6 7 8 9 10 11 12 3 4 5 6 7 8 9
+15 16 17 18 19 20 21 13 14 15 16 17 18 19 10 11 12 13 14 15 16
+22 23 24 25 26 27 28 20 21 22 23 24 25 26 17 18 19 20 21 22 23
+29 30 27 28 29 30 31 24 25 26 27 28 29 30
+
+
+ July August September
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 7 1 2 3 4 1
+ 8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
+15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
+22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
+29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
+ 30
+
+ October November December
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 1 2 3 1
+ 7 8 9 10 11 12 13 4 5 6 7 8 9 10 2 3 4 5 6 7 8
+14 15 16 17 18 19 20 11 12 13 14 15 16 17 9 10 11 12 13 14 15
+21 22 23 24 25 26 27 18 19 20 21 22 23 24 16 17 18 19 20 21 22
+28 29 30 31 25 26 27 28 29 30 23 24 25 26 27 28 29
+ 30 31
+
+```
+
+对于特定的年份和月份,使用 -d 选项,如下所示:
+```
+$ cal -d 1949-03
+ March 1949
+Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5
+ 6 7 8 9 10 11 12
+13 14 15 16 17 18 19
+20 21 22 23 24 25 26
+27 28 29 30 31
+
+```
+
+另一个可能有用的日历选项是 **cal** 命令的 -j 选项。让我们来看看它显示的是什么。
+```
+$ cal -j
+ March 2018
+ Su Mo Tu We Th Fr Sa
+ 60 61 62
+ 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76
+ 77 78 79 80 81 82 83
+ 84 85 86 87 88 89 90
+
+```
+
+你可能会问:“什么???” OK,-j 选项显示的是 Julian 日期(儒略日),即一年中从 1 到 365(大多数年份)的序号。所以,1 是 1 月 1 日,32 是 2 月 1 日。命令 **cal -j 2018** 将显示一整年的序号,结尾像这样:
+```
+$ cal -j 2018 | tail -9
+
+ November December
+ Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 305 306 307 335
+308 309 310 311 312 313 314 336 337 338 339 340 341 342
+315 316 317 318 319 320 321 343 344 345 346 347 348 349
+322 323 324 325 326 327 328 350 351 352 353 354 355 356
+329 330 331 332 333 334 357 358 359 360 361 362 363
+ 364 365
+
+```
+
+这种显示可能有助于提醒你,自从你做了新年计划之后,你已经有多少天没有采取行动了。
+
+对 2020 年运行类似的命令,你会注意到那是一个闰年:
+```
+$ cal -j 2020 | tail -9
+
+ November December
+ Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+306 307 308 309 310 311 312 336 337 338 339 340
+313 314 315 316 317 318 319 341 342 343 344 345 346 347
+320 321 322 323 324 325 326 348 349 350 351 352 353 354
+327 328 329 330 331 332 333 355 356 357 358 359 360 361
+334 335 362 363 364 365 366
+
+```
+
+### calendar
+
+另一个有趣而且可能让你眼花缭乱的命令可以告诉你节假日的信息。这个命令有很多选项,这里只假设你想查看即将到来的节假日和值得纪念的日子。calendar 命令的 **-l** 选项允许你选择从今天起再往后查看多少天,因此 0 表示“仅限今天”。
+```
+$ calendar -l 0
+Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
+Mar 26 David Packard died, 1996; age of 83
+Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
+Mar 26 Independence Day in Bangladesh
+Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
+Mar 26* Seward's Day in Alaska (last Monday)
+Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
+Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
+Mar 26 Bonne fête aux Lara !
+Mar 26 Aujourd'hui, c'est la St(e) Ludger.
+Mar 26 N'oubliez pas les Larissa !
+Mar 26 Ludwig van Beethoven in Wien gestorben, 1827
+Mar 26 Emánuel
+
+```
+
+对于我们大多数人来说,这些庆祝活动比我们一天之内能应付的要多了点。如果你看到类似这样的内容,可以将其归咎于你的 **calendar.all** 文件,该文件告诉系统你希望包含哪些国际日历。当然,你可以删除此文件中引入其他日历文件的部分行来精简显示。这些行看起来像这样:
+```
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+```
+
+假设我们只保留上面显示的第一行 #include、移除其余各行,把显示范围缩小到世界日历,我们会看到:
+```
+$ calendar -l 0
+Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
+Mar 26 David Packard died, 1996; age of 83
+Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
+Mar 26 Independence Day in Bangladesh
+Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
+Mar 26* Seward's Day in Alaska (last Monday)
+Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
+Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
+
+```
+
+显然,世界日历中的特殊日子非常多。不过,这样的显示可以让你不会忘记最重要的“大力水手”雕像揭幕日,以及它在纪念“世界菠菜之都”中的作用。
+
+更有用的日历选择可能是将与工作相关的日历放入特殊文件中,并在 calendar.all 文件中使用该日历来确定在运行命令时将看到哪些事件。
+```
+$ cat /usr/share/calendar/calendar.all
+/*
+ * International and national calendar files
+ *
+ * This is the calendar master file. In the standard setup, it is
+ * included by /etc/calendar/default, so you can make any system-wide
+ * changes there and they will be kept when you upgrade. If you want
+ * to edit this file, copy it into /etc/calendar/calendar.all and
+ * edit it there.
+ *
+ */
+
+#ifndef _calendar_all_
+#define _calendar_all_
+
+#include
+#include <==
+
+#endif /bin /boot /dev /etc /home /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var !_calendar_all_ */
+
+```
+
+日历文件的格式非常简单:mm/dd 形式的日期、一个制表符,然后是事件描述。
+```
+$ cat calendar.work
+03/26 Describe how the cal and calendar commands work
+03/27 Throw a party!
+
+```
+
+### 注意事项与怀旧
+
+注意,calendar 命令可能并非在所有 Linux 发行版上都可用,那样你可能只好自己记住“大力水手”雕像揭幕日了。
+
+顺便一提,你最远可以显示到 9999 年的日历,甚至包括带有预言意味的 [2525][1] 年。
+
+欢迎加入 Network World 在 [Facebook][2] 和 [LinkedIn][3] 上的社区,就你关心的话题发表评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3265752/linux/working-with-calendars-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[1]:https://www.youtube.com/watch?v=izQB2-Kmiic
+[2]:https://www.facebook.com/NetworkWorld/
+[3]:https://www.linkedin.com/company/network-world
From e19c00d33b36122e0706972f1a90f331c8c3e50e Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sat, 21 Apr 2018 13:19:25 +0800
Subject: [PATCH 027/220] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...180226 5 keys to building open hardware.md | 106 ------------------
...180226 5 keys to building open hardware.md | 75 +++++++++++++
2 files changed, 75 insertions(+), 106 deletions(-)
delete mode 100644 sources/talk/20180226 5 keys to building open hardware.md
create mode 100644 translated/talk/20180226 5 keys to building open hardware.md
diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md
deleted file mode 100644
index 33ba7ab32c..0000000000
--- a/sources/talk/20180226 5 keys to building open hardware.md
+++ /dev/null
@@ -1,106 +0,0 @@
-translating by kennethXia
-
-5 keys to building open hardware
-构建开源硬件的5个关键点
-======
-
-科学社区正在加速拥抱自由和开源硬件([FOSH][1]). 研究员正忙于[改进他们自己的装备][2]并创造数以百计基于分布式数字制造模型的设备来推动他们的研究。
-The science community is increasingly embracing free and open source hardware ([FOSH][1]). Researchers have been busy [hacking their own equipment][2] and creating hundreds of devices based on the distributed digital manufacturing model to advance their scientific experiments.
-
-热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节约90%到99%的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行年会][5]。
-A major reason for all this interest in distributed digital manufacturing of scientific FOSH is money: Research indicates that FOSH [slashes costs by 90% to 99%][3] compared to proprietary tools. Commercializing scientific FOSH with [open hardware business models][4] has supported the rapid growth of an engineering subfield to develop FOSH for science, which comes together annually at the [Gathering for Open Science Hardware][5].
-
-特别的是,不止一本,而是两本关于这个主题的学术期刊:[Journal of Open Hardware] (由Ubiquity出版,一个新的开放访问出版商,同时出版了[Journal of Open Research Software][8])以及[HardwareX][9](由Elsevier出版的一种[自由访问期刊][10],它是世界上最大的学术出版商之一)。
-Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8] ) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers).
-
-由于学术社区的支持,科学 FOSH 的开发者在获取制作乐趣并推进科学快速发展的同时获得学术声望。
-Because of the academic community's support, scientific FOSH developers can get academic credit while having fun designing open hardware and pushing science forward faster.
-
-### 5 steps for scientific FOSH
-### 科学 FOSH 的5个步骤
-
-协恩 (Shane Oberloier)和我在名为Designes的自由问工程期刊上共同发表了一篇关于设计 FOSH 科学设备原则的文章。我们以滑动烘干机为例,制造成本低于20美元,仅是专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。
-Shane Oberloier and I co-authored a new [article][11] published in Designs, an open access engineering design journal, about the principles of designing FOSH scientific equipment. We used the example of a slide dryer, fabricated for under $20, which costs up to 300 times less than proprietary equivalents. [Scientific][1] and [medical][12] equipment tends to be complex with huge payoffs for developing FOSH alternatives.
-
-我总结了5个步骤(包括6条设计原则),它们在协恩和我发表的文章里有详细阐述。这些设计原则也推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。
-I've summarized the five steps (including six design principles) that Shane and I detail in our Designs article. These design principles can be generalized to non-scientific devices, although the more complex the design or equipment, the larger the potential savings.
-
-如果你对科学项目的开源硬件设计感兴趣,这些步骤将使你的项目的影响最大化。
-If you are interested in designing open hardware for scientific projects, these steps will maximize your project's impact.
-
- 1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(译者注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。
- 1. Evaluate similar existing tools for their functions but base your FOSH design on replicating their physical effects, not pre-existing designs. If necessary, evaluate a proof of concept.
-
- 2. 使用下列设计原则:
- 2. Use the following design principles:
-
- * 在设备生产中,仅适用自由和开源的软件工具链(比如,开源的CAD工具,例如[OpenSCAD][13], [FreeCAD][14], or [Blender][15])和开源硬件。
- * Use only free and open source software toolchains (e.g., open source CAD packages such as [OpenSCAD][13], [FreeCAD][14], or [Blender][15]) and open hardware for device fabrication.
- * 尝试减少部件的数量和类型并降低工具的复杂度
- * Attempt to minimize the number and type of parts and the complexity of the tools.
- * 减少材料的数量和制造成本。
- * Minimize the amount of material and the cost of production.
- * 尽量使用方便易得的工具(比如 RepRap 3D 打印机)进行部件的分布式或数字化生产。
- * Maximize the use of components that can be distributed or digitally manufactured by using widespread and accessible tools such as the open source [RepRap 3D printer][16].
- * 对部件进行[参数化设计][17],这使他人可以对你的设计进行个性化改动。相较于特例化设计,参数化设计会更有用。在未来的项目中,使用者可以通过修改核心参数来继续利用它们。
- * Create [parametric designs][17] with predesigned components, which enable others to customize your design. By making parametric designs rather than solving a specific case, all future cases can also be solved while enabling future users to alter the core variables to make the device useful for them.
- * 所有不能使用开源硬件进行分布制造的零件,必须选择现货产品以方便采购
- * All components that are not easily and economically fabricated with existing open hardware equipment in a distributed fashion should be chosen from off-the-shelf parts that are readily available throughout the world.
-
- 3. 验证功能设计
- 3. Validate the design for the targeted function(s).
-
- 4. 提供关于设计、生产、装配、校准和操作的详尽文档。包括原始设计文件而不仅仅是设计输出。开源硬件协会对于开源设计的发布和文档化有额外的指南,总结如下:
- 4. Meticulously document the design, manufacture, assembly, calibration, and operation of the device. This should include the raw source of the design, not just the files used for production. The Open Source Hardware Association has extensive [guidelines][18] for properly documenting and releasing open source designs, which can be summarized as follows:
-
- * 以通用的形式分享设计文件
- * Share design files in a universal type.
- * 提供详尽的材料清单,包括价格和采购信息
- * Include a fully detailed bill of materials, including prices and sourcing information.
- * 如果包含软件,确保代码对大众来说清晰易懂
- * If software is involved, make sure the code is clear and understandable to the general public.
- * 作为生产时的参考,必须提供足够的照片,以确保没有任何被遮挡的部分。
- * Include many photos so that nothing is obscured, and they can be used as a reference while manufacturing.
- * 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。
- * In the methods section, the entire manufacturing process must be detailed to act as instructions for users to replicate the design.
- * 在线上分享并指定许可证。这为用户提供了合理使用设计的信息。
- * Share online and specify a license. This gives users information on what constitutes fair use of the design.
-
- 5. 主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享以提升他们的存在感。所有的文档应该在开放存取文献中发表,并与适当的社区共享。[开源科学框架][19]是一个值得考虑的优雅的通用存储库,它由开源科学中心主办,该中心设置为接受任何类型的文件并处理大型数据集。
- 5. Share aggressively! For FOSH to proliferate, designs must be shared widely, frequently, and noticeably to raise awareness of their existence. All documentation should be published in the open access literature and shared with appropriate communities. One nice universal repository to consider is the [Open Science Framework][19], hosted by the Center for Open Science, which is set up to take any type of file and handle large datasets.
-
-
-这篇文章得到了 [Fulbright Finland][20] 的支持,该公司赞助了芬兰 Fulbright-Aalto 大学的特聘校席 Joshua Pearce 在开源科学硬件方面的研究工作。
-This article was supported by [Fulbright Finland][20], which is sponsoring Joshua Pearce's research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware
-
-作者:[Joshua Pearce][a]
-译者:[kennethXia](https://github.com/kennethXia)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/jmpearce
-[1]:https://opensource.com/business/16/4/how-calculate-open-source-hardware-return-investment
-[2]:https://opensource.com/node/16840
-[3]:http://www.appropedia.org/Open-source_Lab
-[4]:https://www.academia.edu/32004903/Emerging_Business_Models_for_Open_Source_Hardware
-[5]:http://openhardware.science/
-[6]:https://opensource.com/life/16/7/hardwarex-open-access-journal
-[7]:https://openhardware.metajnl.com/
-[8]:https://openresearchsoftware.metajnl.com/
-[9]:https://www.journals.elsevier.com/hardwarex
-[10]:https://opensource.com/node/30041
-[11]:https://www.academia.edu/35603319/General_Design_Procedure_for_Free_and_Open-Source_Hardware_for_Scientific_Equipment
-[12]:https://www.academia.edu/35382852/Maximizing_Returns_for_Public_Funding_of_Medical_Research_with_Open_source_Hardware
-[13]:http://www.openscad.org/
-[14]:https://www.freecadweb.org/
-[15]:https://www.blender.org/
-[16]:http://reprap.org/
-[17]:https://en.wikipedia.org/wiki/Parametric_design
-[18]:https://www.oshwa.org/sharing-best-practices/
-[19]:https://osf.io/
-[20]:http://www.fulbright.fi/en
diff --git a/translated/talk/20180226 5 keys to building open hardware.md b/translated/talk/20180226 5 keys to building open hardware.md
new file mode 100644
index 0000000000..c470820f5c
--- /dev/null
+++ b/translated/talk/20180226 5 keys to building open hardware.md
@@ -0,0 +1,75 @@
+构建开源硬件的5个关键点
+======
+
+科学社区正在加速拥抱自由和开源硬件([FOSH][1]). 研究员正忙于[改进他们自己的装备][2]并创造数以百计基于分布式数字制造模型的设备来推动他们的研究。
+
+热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节省90%到99%的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行年会][5]。
+
+值得注意的是,不止一本,而是[两本学术期刊][6]专注于这个主题:[Journal of Open Hardware][7](由 Ubiquity Press 出版,这是一家新的开放获取出版商,同时还出版 [Journal of Open Research Software][8])以及 [HardwareX][9](由世界上最大的学术出版商之一 Elsevier 出版的[开放获取期刊][10])。
+
+由于学术社区的支持,科学 FOSH 的开发者在获取制作乐趣并推进科学快速发展的同时获得学术声望。
+
+### 科学 FOSH 的5个步骤
+
+Shane Oberloier(协恩)和我在名为 Designs 的开放获取工程设计期刊上共同发表了一篇关于设计 FOSH 科学设备原则的[文章][11]。我们以玻片烘干机为例,其制造成本不到 20 美元,仅为同类专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。
+
+我总结了5个步骤(包括6条设计原则),它们在协恩和我发表的文章里有详细阐述。这些设计原则也推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。
+
+如果你对科学项目的开源硬件设计感兴趣,这些步骤将使你的项目的影响最大化。
+
+ 1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(译者注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。
+
+ 2. 使用下列设计原则:
+
+   * 在设备制造中,仅使用自由和开源的软件工具链(比如开源的 CAD 工具,例如 [OpenSCAD][13]、[FreeCAD][14] 或 [Blender][15])和开源硬件。
+   * 尝试减少部件的数量和类型,并降低工具的复杂度。
+ * 减少材料的数量和制造成本。
+ * 尽量使用方便易得的工具(比如 [RepRap 3D 打印机][16])进行部件的分布式或数字化生产。
+ * 对部件进行[参数化设计][17],这使他人可以对你的设计进行个性化改动。相较于特例化设计,参数化设计会更有用。在未来的项目中,使用者可以通过修改核心参数来继续利用它们。
+   * 所有无法用现有开源硬件设备以分布式方式方便且经济地制造的零件,都应选择世界各地都容易买到的现货部件。
+
+ 3. 针对目标功能验证设计。
+
+ 4. 提供关于设计、制造、装配、校准和操作的详尽文档,其中应包括原始设计文件,而不仅仅是用于生产的输出文件。开源硬件协会对于开源设计的正确发布和文档化有详尽的[指南][18],总结如下:
+
+ * 以通用的形式分享设计文件
+ * 提供详尽的材料清单,包括价格和采购信息
+ * 如果包含软件,确保代码对大众来说清晰易懂
+ * 作为生产时的参考,必须提供足够的照片,以确保没有任何被遮挡的部分。
+ * 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。
+ * 在线上分享并指定许可证。这为用户提供了合理使用设计的信息。
+
+ 5. 主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享,以提升它们的存在感。所有文档应该在开放获取文献中发表,并与相应的社区共享。[开源科学框架][19]是一个值得考虑的优雅的通用存储库,它由开源科学中心主办,可以接受任何类型的文件并能处理大型数据集。
+
+这篇文章得到了 [Fulbright Finland][20] 的支持,该机构赞助了 Joshua Pearce 作为 Fulbright-Aalto 大学特聘教席在芬兰进行的开源科学硬件研究。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware
+
+作者:[Joshua Pearce][a]
+译者:[kennethXia](https://github.com/kennethXia)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jmpearce
+[1]:https://opensource.com/business/16/4/how-calculate-open-source-hardware-return-investment
+[2]:https://opensource.com/node/16840
+[3]:http://www.appropedia.org/Open-source_Lab
+[4]:https://www.academia.edu/32004903/Emerging_Business_Models_for_Open_Source_Hardware
+[5]:http://openhardware.science/
+[6]:https://opensource.com/life/16/7/hardwarex-open-access-journal
+[7]:https://openhardware.metajnl.com/
+[8]:https://openresearchsoftware.metajnl.com/
+[9]:https://www.journals.elsevier.com/hardwarex
+[10]:https://opensource.com/node/30041
+[11]:https://www.academia.edu/35603319/General_Design_Procedure_for_Free_and_Open-Source_Hardware_for_Scientific_Equipment
+[12]:https://www.academia.edu/35382852/Maximizing_Returns_for_Public_Funding_of_Medical_Research_with_Open_source_Hardware
+[13]:http://www.openscad.org/
+[14]:https://www.freecadweb.org/
+[15]:https://www.blender.org/
+[16]:http://reprap.org/
+[17]:https://en.wikipedia.org/wiki/Parametric_design
+[18]:https://www.oshwa.org/sharing-best-practices/
+[19]:https://osf.io/
+[20]:http://www.fulbright.fi/en
From d74b7dfeb8fe8fb607fdc821bfe1c8417dfe1b3c Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Sat, 21 Apr 2018 13:47:26 +0800
Subject: [PATCH 028/220] translating by kennethXia
---
sources/tech/20180403 Why I love ARM and PowerPC.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20180403 Why I love ARM and PowerPC.md b/sources/tech/20180403 Why I love ARM and PowerPC.md
index 16dcd2e76c..1ea07ad6e4 100644
--- a/sources/tech/20180403 Why I love ARM and PowerPC.md
+++ b/sources/tech/20180403 Why I love ARM and PowerPC.md
@@ -1,3 +1,4 @@
+translating by kennethXia
Why I love ARM and PowerPC
======
From 61db34d236aeed48c78d061406718696ed153ab9 Mon Sep 17 00:00:00 2001
From: songshunqiang
Date: Sat, 21 Apr 2018 13:49:47 +0800
Subject: [PATCH 029/220] submit tech/20180329 Protect Your Websites with Let-s
Encrypt.md
---
...rotect Your Websites with Let-s Encrypt.md | 89 -------------------
...rotect Your Websites with Let-s Encrypt.md | 88 ++++++++++++++++++
2 files changed, 88 insertions(+), 89 deletions(-)
delete mode 100644 sources/tech/20180329 Protect Your Websites with Let-s Encrypt.md
create mode 100644 translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
diff --git a/sources/tech/20180329 Protect Your Websites with Let-s Encrypt.md b/sources/tech/20180329 Protect Your Websites with Let-s Encrypt.md
deleted file mode 100644
index d82ba5fc99..0000000000
--- a/sources/tech/20180329 Protect Your Websites with Let-s Encrypt.md
+++ /dev/null
@@ -1,89 +0,0 @@
-pinewall translating
-
-Protect Your Websites with Let's Encrypt
-======
-
-
-Back in the bad old days, setting up basic HTTPS with a certificate authority cost as much as several hundred dollars per year, and the process was difficult and error-prone to set up. Now we have [Let's Encrypt][1] for free, and the whole thing takes just a few minutes.
-
-### Why Encrypt?
-
-Why encrypt your sites? Because unencrypted HTTP sessions are wide open to multiple abuses:
-
- + Eavesdropping on your users
- + Capturing user logins
- + Injecting ads and "important" messages
- + Injecting spyware
- + Injecting SEO spam and links
- + Injecting cryptocurrency miners
-
-Internet service providers lead the code-injecting offenders. How to foil their nefarious desires? Your best defense is HTTPS. Let's review how HTTPS works.
-
-### Chain of Trust
-
-You could set up asymmetric encryption between your site and everyone who is allowed to access it. This is very strong protection: GPG (GNU Privacy Guard, see [How to Encrypt Email in Linux][2]), and OpenSSH are common tools for asymmetric encryption. These rely on public-private key pairs. You can freely share public keys, while your private keys must be protected and never shared. The public key encrypts, and the private key decrypts.
-
-This is a multi-step process that does not scale for random web-surfing, however, because it requires exchanging public keys before establishing a session, and you have to generate and manage key pairs. An HTTPS session automates public key distribution, and sensitive sites, such as shopping and banking, are verified by a third-party certificate authority (CA) such as Comodo, Verisign, or Thawte.
-
-When you visit an HTTPS site, it provides a digital certificate to your web browser. This certificate verifies that your session is strongly encrypted and supplies information about the site, such as organization's name, the organization that issued the certificate, and the name of the certificate authority. You can see all of this information, and the digital certificate, by clicking on the little padlock in your web browser's address bar (Figure 1).
-
-
-![page information][4]
-
-Figure 1: Click on the padlock in your web browser's address bar for information.
-
-[Used with permission][5]
-
-The major web browsers, including Opera, Firefox, Chromium, and Chrome, all rely on the certificate authority to verify the authenticity of the site's digital certificate. The little padlock gives the status at a glance; green = strong SSL encryption and verified identity. Web browsers also warn you about malicious sites, sites with incorrectly configured SSL certificates, and they treat self-signed certificates as untrusted.
-
-So how do web browsers know who to trust? Browsers include a root store, a batch of root certificates, which are stored in `/usr/share/ca-certificates/mozilla/`. Site certificates are verified against your root store. Your root store is maintained by your package manager, just like any other software on your Linux system. On Ubuntu, they are supplied by the `ca-certificates` package. The root store itself is [maintained by Mozilla][6] for Linux.
-
-As you can see, it takes a complex infrastructure to make all of this work. If you perform any sensitive online transactions, such as shopping or banking, you are trusting a whole lot of unknown people to protect you.
-
-### Encryption Everywhere
-
-Let's Encrypt is a global certificate authority, similar to the commercial CAs. Let's Encrypt was founded by the non-profit Internet Security Research Group (ISRG) to make it easier to secure Websites. I don't consider it sufficient for shopping and banking sites, for reasons which I will get to shortly, but it's great for securing blogs, news, and informational sites that don't have financial transactions.
-
-There are at least three ways to use Let's Encrypt. The best way is with the [Certbot client][7], which is maintained by the Electronic Frontier Foundation (EFF). This requires shell access to your site.
-
-If you are on shared hosting then you probably don't have shell access. The easiest method in this case is using a [host that supports Let's Encrypt][8].
-
-If your host does not support Let's Encrypt, but supports custom certificates, then you can [create and upload your certificate manually][8] with Certbot. It's a complex process, so you'll want to study the documentation thoroughly.
-
-When you have installed your certificate use [SSL Server Test][9] to test your site.
-
-Let's Encrypt digital certificates are good for 90 days. When you install Certbot it should also install a cron job for automatic renewal, and it includes a command to test that the automatic renewal works. You may use your existing private key or certificate signing request (CSR), and it supports wildcard certificates.
-
-### Limitations
-
-Let's Encrypt has some limitations: it performs only domain validation, that is, it issues a certificate to whoever controls the domain. This is basic SSL. It does not support Organization Validation (OV) or Extended Validation (EV) because it is not possible to automate identity validation. I would not trust a banking or shopping site that uses Let's Encrypt-- let 'em spend the bucks for a complete package that includes identity validation.
-
-As a free-of-cost service run by a non-profit organization there is no commercial support, but only documentation and community support, both of which are quite good.
-
-The Internet is full of malice. Everything should be encrypted. Start with [Let's Encrypt][10]to protect your site visitors.
-
-Learn more about Linux through the free ["Introduction to Linux" ][11]course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/3/protect-your-websites-lets-encrypt
-
-作者:[CARLA SCHRODER][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/cschroder
-[1]:https://letsencrypt.org
-[2]:https://www.linux.com/learn/how-encrypt-email-linux
-[3]:https://www.linux.com/files/images/fig-11png-0
-[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_1_1.png?itok=_PPiSNx6 (page information)
-[5]:https://www.linux.com/licenses/category/used-permission
-[6]:https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
-[7]:https://certbot.eff.org/
-[8]:https://community.letsencrypt.org/t/web-hosting-who-support-lets-encrypt/6920
-[9]:https://www.ssllabs.com/ssltest/
-[10]:https://letsencrypt.org/
-[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md b/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
new file mode 100644
index 0000000000..ed7a326b54
--- /dev/null
+++ b/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
@@ -0,0 +1,88 @@
+使用 Let's Encrypt 保护你的网站
+======
+
+
+曾几何时,通过证书授权机构搭建基本的 HTTPS 网站需要每年花费数百美元,而且搭建的过程复杂且容易出错。现在我们可以免费使用 [Let's Encrypt][1],而且整个搭建过程只需要几分钟。
+
+
+### 为何进行加密?
+
+为什么要加密网站呢?这是因为未经加密的 HTTP 会话可以被多种方式滥用:
+
+ + 窃听用户数据包
+ + 捕捉用户登录
+ + 注入广告和“重要”消息
+ + 注入间谍软件
+ + 注入 SEO 垃圾邮件和链接
+ + 注入挖矿脚本
+
+网络服务提供商就是最大的代码注入者。那么如何挫败它们的非法行径呢?你最好的防御手段就是 HTTPS。让我们回顾一下 HTTPS 的工作原理。
+
+### 信任链
+
+你可以在你的网站和每个授权访问用户之间建立非对称加密。这是一种非常强的保护:GPG(GNU Privacy Guard, 参考[如何在 Linux 中加密邮件][2])和 OpenSSH 是非对称加密的通用工具。它们依赖于公钥-私钥对,其中公钥可以任意共享,但私钥必须受到保护且不能共享。公钥用于加密,私钥用于解密。
+
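+( LCTT 译注:下面是一个简单的 GPG 非对称加密示例;其中的收件人地址 alice@example.com 和文件名仅为假设,请替换为实际值:)
+
+```
+# 交互式地生成一个公钥-私钥对
+gpg --full-generate-key
+
+# 用收件人的公钥加密文件,只有持有对应私钥的人才能解密
+gpg --encrypt --recipient alice@example.com secret.txt
+
+# 收件人用自己的私钥解密
+gpg --decrypt secret.txt.gpg
+```
+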
+但上述方法无法适用于随机的网页浏览,因为建立会话之前需要交换公钥,你还需要生成并管理密钥对。HTTPS 会话可以自动完成公钥分发,而且购物或银行之类的敏感网站还会由第三方证书颁发机构 (CA) 进行验证,例如 Comodo、Verisign 和 Thawte。
+
+当你访问一个 HTTPS 网站时,网站会向你的网页浏览器返回一个数字证书。这个证书证明你的会话经过了强加密,并提供了该网站的信息,包括组织名称、颁发证书的组织和证书颁发机构名称等。你可以点击网页浏览器地址栏的小锁头来查看这些信息(图 1),也包括证书本身。
+
+
+![页面信息][4]
+
+图 1:点击网页浏览器地址栏上的锁头标记查看信息
+
+[已获授权使用][5]
+
+包括 Opera、Firefox、Chromium 和 Chrome 在内的主流浏览器,都依赖证书颁发机构来验证网站数字证书的真实性。小锁头标记可以让你一眼看出证书状态:绿色意味着使用强 SSL 加密且运营实体经过验证。网页浏览器还会对恶意网站、SSL 证书配置有误的网站给出警告,并将自签名证书视为不受信任。
+
+那么网页浏览器如何判断网站是否可信呢?浏览器自带根证书库,其中包含了一系列根证书,存储在 `/usr/share/ca-certificates/mozilla/` 中。网站证书是否可信可以通过根证书库进行检查。就像你 Linux 系统上的其它软件那样,根证书库也由包管理器维护。对于 Ubuntu,对应的包是 `ca-certificates`。Linux 的根证书库本身[由 Mozilla 维护][6]。
+
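+( LCTT 译注:在 Ubuntu/Debian 系统上,可以这样查看根证书库,并在调整证书配置后重建它;其他发行版使用的命令可能不同:)
+
+```
+# 列出 Mozilla 根证书(只看前几个)
+ls /usr/share/ca-certificates/mozilla/ | head
+
+# 修改证书配置后重建系统根证书库
+sudo update-ca-certificates
+```
+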
+可见,整个工作流程需要复杂的基础设施才能完成。在你进行购物或金融等敏感在线操作时,你信任了无数陌生人对你的保护。
+
+### 无处不加密
+
+Let's Encrypt 是一家全球证书颁发机构,类似于其它商业证书颁发机构。Let's Encrypt 由非营利组织因特网安全研究小组 (Internet Security Research Group, ISRG) 创立,目标是简化网站的安全加密。在我看来,出于后面我会提到的原因,它还不足以胜任购物及银行网站的安全加密,但很适合加密博客、新闻和信息门户这类不涉及金融操作的网站。
+
+使用 Let's Encrypt 至少有三种方式。最好的方式是使用由电子前沿基金会 (Electronic Frontier Foundation, EFF) 维护的 [Certbot 客户端][7]。使用该客户端需要在网站服务器上执行 shell 操作。
+
+如果你使用的是共享托管主机,你很可能无法执行 shell 操作。这种情况下,最简单的方法是使用[支持 Let's Encrypt 的托管主机][8]。
+
+如果你的托管主机不支持 Let's Encrypt,但支持自定义证书,那么你可以使用 Certbot [手动创建并上传你的证书][8]。这是一个复杂的过程,你需要彻底地研究文档。
+
+安装证书后,可以使用 [SSL 服务器测试][9] 来测试你的网站。
+
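+( LCTT 译注:除了在线的 SSL 服务器测试,你也可以用 openssl 在本地检查网站实际返回的证书;其中的域名 example.com 仅为占位符:)
+
+```
+# 查看服务器证书的颁发者和有效期
+openssl s_client -connect example.com:443 -servername example.com </dev/null | openssl x509 -noout -issuer -dates
+```
+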
+Let's Encrypt 的数字证书有效期为 90 天。安装 Certbot 时,它还会添加一个自动续期的计划任务,并提供了一条用于测试自动续期是否正常工作的命令。你可以使用已有的私钥或证书签名请求 (certificate signing request, CSR),它也支持通配符证书。
+
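+( LCTT 译注:下面演示如何测试自动续期,这里假设你已通过系统包管理器安装了 Certbot:)
+
+```
+# 演练一次续期过程,但不会真正替换证书
+sudo certbot renew --dry-run
+
+# 查看已安装的证书及其到期时间
+sudo certbot certificates
+```
+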
+### 限制
+
+Let's Encrypt 有如下限制:它只执行域名验证,即只要有域名控制权就可以获得证书。这是比较基础的 SSL。它不支持组织验证(Organization Validation, OV) 或扩展验证(Extended Validation, EV),因为运营实体验证无法自动完成。我不会信任使用 Let's Encrypt 证书的购物或银行网站,它们应该购买支持运营实体验证的完整版本。
+
+作为非营利组织提供的免费服务,它不提供商业支持,只提供文档和社区支持,不过两者的质量都相当不错。
+
+因特网上的恶意行为无处不在,一切数据都应该加密。从使用 [Let's Encrypt][10] 保护你的网站访问者开始吧。
+
+想要学习更多 Linux 知识,请参考 Linux 基金会和 edX 提供的免费课程 ["Linux 入门"][11]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2018/3/protect-your-websites-lets-encrypt
+
+作者:[CARLA SCHRODER][a]
+译者:[pinewall](https://github.com/pinewall)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://letsencrypt.org
+[2]:https://www.linux.com/learn/how-encrypt-email-linux
+[3]:https://www.linux.com/files/images/fig-11png-0
+[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_1_1.png?itok=_PPiSNx6 (页面信息)
+[5]:https://www.linux.com/licenses/category/used-permission
+[6]:https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
+[7]:https://certbot.eff.org/
+[8]:https://community.letsencrypt.org/t/web-hosting-who-support-lets-encrypt/6920
+[9]:https://www.ssllabs.com/ssltest/
+[10]:https://letsencrypt.org/
+[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 4fd3826fda39f14c3d0ce6d211efd72e44ce2f82 Mon Sep 17 00:00:00 2001
From: songshunqiang
Date: Sat, 21 Apr 2018 14:15:59 +0800
Subject: [PATCH 030/220] add translation tag
---
sources/tech/20180416 Running Jenkins builds in containers.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180416 Running Jenkins builds in containers.md b/sources/tech/20180416 Running Jenkins builds in containers.md
index 98e4e756b8..1b03413c3a 100644
--- a/sources/tech/20180416 Running Jenkins builds in containers.md
+++ b/sources/tech/20180416 Running Jenkins builds in containers.md
@@ -1,3 +1,5 @@
+pinewall translating
+
Running Jenkins builds in containers
======
From 06e4524dd278bfe9b0548a8e20dbb912ee97cf7c Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sat, 21 Apr 2018 21:36:42 +0800
Subject: [PATCH 031/220] Update 20180413 Finding what you-re looking for on
Linux.md
---
.../tech/20180413 Finding what you-re looking for on Linux.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20180413 Finding what you-re looking for on Linux.md b/sources/tech/20180413 Finding what you-re looking for on Linux.md
index 649401b97d..45ecce8aba 100644
--- a/sources/tech/20180413 Finding what you-re looking for on Linux.md
+++ b/sources/tech/20180413 Finding what you-re looking for on Linux.md
@@ -1,3 +1,6 @@
+Translating by MjSeven
+
+
Finding what you’re looking for on Linux
======
From 0f860dd549f532edd721ad52da3ecf110911799f Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 01:00:18 +0800
Subject: [PATCH 032/220] PRF:20180226 How to Use WSL Like a Linux Pro.md
@geekpi
---
...0180226 How to Use WSL Like a Linux Pro.md | 41 ++++++++++---------
1 file changed, 21 insertions(+), 20 deletions(-)
diff --git a/translated/tech/20180226 How to Use WSL Like a Linux Pro.md b/translated/tech/20180226 How to Use WSL Like a Linux Pro.md
index 9a9b1a584f..18e0edd1a8 100644
--- a/translated/tech/20180226 How to Use WSL Like a Linux Pro.md
+++ b/translated/tech/20180226 How to Use WSL Like a Linux Pro.md
@@ -1,22 +1,22 @@
如何像 Linux 专家那样使用 WSL
============================================================
+> 在本 WSL 教程中了解如何执行像挂载 USB 驱动器和操作文件等任务。
+

-在本 WSL 教程中了解如何执行像挂载 USB 驱动器和操作文件等任务。(图片提供:Microsoft)[经许可使用][1]
-在[之前的教程][4]中,我们学习了在 Windows 10 上设置 WSL。你可以在 Windows 10 中使用 WSL 执行许多 Linux 命令。许多系统管理任务都是在终端内部完成的,无论是基于 Linux 的系统还是 macOS。然而,Windows 10 缺乏这样的功能。你想运行一个 cron 任务么?不行。你想 SSH 进入你的服务器,然后 rsync 文件么?没门。如何用强大的命令行工具管理本地文件,而不是使用缓慢和不可靠的 GUI 工具?
+在[之前的教程][4]中,我们学习了如何在 Windows 10 上设置 WSL。你可以在 Windows 10 中使用 WSL 执行许多 Linux 命令。无论是基于 Linux 的系统还是 macOS,它们的许多系统管理任务都是在终端内部完成的。然而,Windows 10 缺乏这样的功能。你想运行一个 cron 任务么?不行。你想 SSH 进入你的服务器,然后 `rsync` 文件么?没门。如何用强大的命令行工具管理本地文件,而不是使用缓慢和不可靠的 GUI 工具呢?
-在本教程中,你将看到如何使用 WSL执行除了管理的其他任务 - 例如挂载 USB 驱动器和操作文件。你需要运行一个完全更新的 Windows 10 并选择一个 Linux 发行版。我在[上一篇文章][5]中介绍了这些步骤,所以如果你需要赶上,那就从那里开始。让我们开始吧。
+在本教程中,你将看到如何使用 WSL 执行除了管理之外的任务 —— 例如挂载 USB 驱动器和操作文件。你需要运行一个完全更新的 Windows 10 并选择一个 Linux 发行版。我在[上一篇文章][5]中介绍了这些步骤,所以如果你跟上进度,那就从那里开始。让我们开始吧。
### 保持你的 Linux 系统更新
-事实上,当你通过 WSL 运行 Ubuntu 或 openSUSE 时,没有 Linux 内核在运行。然而,你必须保持你的发行版完整更新,以保护你的系统免受任何新的已知漏洞的影响。由于在 Windows 应用商店中只有两个免费的社区发行版,所以教程将只覆盖以下两个:openSUSE 和 Ubuntu。
+事实上,当你通过 WSL 运行 Ubuntu 或 openSUSE 时,其底层并没有运行 Linux 内核。然而,你必须保持你的发行版完整更新,以保护你的系统免受任何新的已知漏洞的影响。由于在 Windows 应用商店中只有两个免费的社区发行版,所以教程将只覆盖以下两个:openSUSE 和 Ubuntu。
更新你的 Ubuntu 系统:
```
# sudo apt-get update
-
# sudo apt-get dist-upgrade
```
@@ -26,7 +26,7 @@
# zypper up
```
-您还可以使用 _dup_ 命令将 openSUSE 升级到最新版本。但在运行系统升级之前,请使用上一个命令运行更新。
+您还可以使用 `dup` 命令将 openSUSE 升级到最新版本。但在运行系统升级之前,请使用上一个命令运行更新。
```
# zypper dup
@@ -36,15 +36,17 @@
### 管理本地文件
-如果你想使用优秀的 Linux 命令行工具来管理本地文件,你可以使用 WSL 轻松完成此操作。不幸的是,WSL 还不支持像 _lsblk_ 或 _mnt_ 这样的东西来挂载本地驱动器。但是,你可以 _cd _ 到 C 盘并管理文件:
+如果你想使用优秀的 Linux 命令行工具来管理本地文件,你可以使用 WSL 轻松完成此操作。不幸的是,WSL 还不支持像 `lsblk` 或 `mount` 这样的东西来挂载本地驱动器。但是,你可以 `cd` 到 C 盘并管理文件:
+```
/mnt/c/Users/swapnil/Music
+```
我现在在 C 盘的 Music 目录下。
要安装其他驱动器、分区和外部 USB 驱动器,你需要创建一个挂载点,然后挂载该驱动器。
-打开文件资源管理器并检查该驱动器的挂载点。假设它在 Windows 中被挂载为 S:\
+打开文件资源管理器并检查该驱动器的挂载点。假设它在 Windows 中被挂载为 S:\。
在 Ubuntu/openSUSE 终端中,为驱动器创建一个挂载点。
@@ -58,17 +60,16 @@ sudo mkdir /mnt/s
mount -f drvfs S: /mnt/s
```
-挂载完毕后,你现在可以从发行版访问该驱动器。请记住,使用 WSL 运行的发行版将会看到 Windows 能看到的内容。因此,你无法挂载在 Windows 上无法原生挂载的 ext4 驱动器。
+挂载完毕后,你现在可以从发行版访问该驱动器。请记住,使用 WSL 方式运行的发行版将会看到 Windows 能看到的内容。因此,你无法挂载在 Windows 上无法原生挂载的 ext4 驱动器。
-现在你可以在这里使用所有这些神奇的 Linux 命令。想要将文件从一个文件夹复制或移动到另一个文件夹?只需运行 _cp_ 或 _mv_ 命令。
+现在你可以在这里使用所有这些神奇的 Linux 命令。想要将文件从一个文件夹复制或移动到另一个文件夹?只需运行 `cp` 或 `mv` 命令。
```
cp /source-folder/source-file.txt /destination-folder/
-
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
-如果你想移动文件夹或大文件,我会推荐 _rsync_ 而不是 _cp_ 命令:
+如果你想移动文件夹或大文件,我会推荐 `rsync` 而不是 `cp` 命令:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
@@ -76,13 +77,13 @@ rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
耶!
-想要在 Windows 驱动器中创建新目录,只需使用 _mkdir_ 命令。
+想要在 Windows 驱动器中创建新目录,只需使用 `mkdir` 命令。
-想要在某个时间设置一个 cron 作业来自动执行任务吗?继续使用 _crontab -e_ 创建一个 cron 作业。十分简单。
+想要在某个时间设置一个 cron 作业来自动执行任务吗?继续使用 `crontab -e` 创建一个 cron 作业。十分简单。
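+
+( LCTT 译注:下面是一个假设的 crontab 条目示例,每天凌晨 2 点把 C 盘的音乐文件夹同步到前面挂载好的 S 盘,其中的路径仅为演示:)
+
+```
+0 2 * * * rsync -av /mnt/c/Users/swapnil/Music/ /mnt/s/backup/music/
+```
+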
-你还可以在 Linux 中挂载网络/远程文件夹,以便你可以使用更好的工具管理它们。我的所有驱动器都插在树莓派或者服务器上,因此我只需 ssh 进入该机器并管理硬盘。在本地计算机和远程系统之间传输文件可以再次使用 _rsync_ 命令完成。
+你还可以在 Linux 中挂载网络/远程文件夹,以便你可以使用更好的工具管理它们。我的所有驱动器都插在树莓派或者服务器上,因此我只需 `ssh` 进入该机器并管理硬盘。在本地计算机和远程系统之间传输文件可以再次使用 `rsync` 命令完成。
-WSL 现在已经不再是测试版了,它将继续获得更多新功能。我很兴奋的两个特性是 lsblk 命令和 dd 命令,它们允许我在 Windows 中本机管理我的驱动器并创建可引导的 Linux 驱动器。如果你是 Linux 命令行的新手,[前一篇教程][7]将帮助你开始使用一些最基本的命令。
+WSL 现在已经不再是测试版了,它将继续获得更多新功能。让我很兴奋的两个特性是 `lsblk` 命令和 `dd` 命令,它们能让我在 Windows 中原生地管理我的驱动器并创建可引导的 Linux 驱动器。如果你是 Linux 命令行的新手,[前一篇教程][7]将帮助你开始使用一些最基本的命令。
--------------------------------------------------------------------------------
@@ -90,7 +91,7 @@ via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -98,7 +99,7 @@ via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
-[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
-[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
-[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
+[4]:https://linux.cn/article-9545-1.html
+[5]:https://linux.cn/article-9545-1.html
+[6]:https://linux.cn/article-9545-1.html
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli
From 13bcfad44c9f0098352097da0ce9acc7abef101b Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 01:00:39 +0800
Subject: [PATCH 033/220] PUB:20180226 How to Use WSL Like a Linux Pro.md
@geekpi
---
.../20180226 How to Use WSL Like a Linux Pro.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180226 How to Use WSL Like a Linux Pro.md (100%)
diff --git a/translated/tech/20180226 How to Use WSL Like a Linux Pro.md b/published/20180226 How to Use WSL Like a Linux Pro.md
similarity index 100%
rename from translated/tech/20180226 How to Use WSL Like a Linux Pro.md
rename to published/20180226 How to Use WSL Like a Linux Pro.md
From 564ebfc35e210a0d9b385a34102332b201ce27dd Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 01:06:50 +0800
Subject: [PATCH 034/220] PRF:20180301 How to add fonts to Fedora.md
@geekpi
---
.../20180301 How to add fonts to Fedora.md | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/translated/tech/20180301 How to add fonts to Fedora.md b/translated/tech/20180301 How to add fonts to Fedora.md
index f92e897960..55e089758a 100644
--- a/translated/tech/20180301 How to add fonts to Fedora.md
+++ b/translated/tech/20180301 How to add fonts to Fedora.md
@@ -3,27 +3,27 @@

-字体可帮助你通过设计以创意的方式表达你的想法。无论给图片加标题、编写演示文稿,还是设计问候语或广告,字体都可以将你的想法提升到更高水平。很容易仅仅为了它们的审美品质而爱上它们。幸运的是,Fedora 使安装变得简单。以下是如何做的。
+字体可帮助你通过设计以创意的方式表达你的想法。无论给图片加标题、编写演示文稿,还是设计问候语或广告,字体都可以将你的想法提升到更高水平。很容易仅仅为了它们的审美品质而爱上它们。幸运的是,Fedora 使安装字体变得简单。以下是如何做的。
### 全系统安装
如果你在系统范围内安装字体,那么它可以让所有用户使用。此方式的最佳方法是使用官方软件库中的 RPM 软件包。
-开始前打开 Fedora Workstation 中的 _Software_ 工具,或者其他使用官方仓库的工具。选择横栏中选择 _Add-ons_ 类别。接着在 add-on 类别中选择 _Fonts_。你会看到类似于下面截图中的可用字体:
+开始前,打开 Fedora Workstation 中的 “Software” 工具,或者其他使用官方仓库的工具。在横栏中选择 “Add-ons” 类别,接着在该类别中选择 “Fonts”。你会看到类似于下面截图中的可用字体:
[][1]
-当你选择一种字体时,会出现一些细节。根据几种情况,你可能能够预览字体的一些示例文本。点击 _Install_ 按钮将其添加到你的系统。根据系统速度和网络带宽,完成此过程可能需要一些时间。
+当你选择一种字体时,会出现一些细节。根据几种情况,你可能能够预览字体的一些示例文本。点击 “Install” 按钮将其添加到你的系统。根据系统速度和网络带宽,完成此过程可能需要一些时间。
-你还可以在字体细节中通过 _Remove_ 按钮删除前面带有勾的已经安装的字体。
+你还可以在字体细节中通过 “Remove” 按钮删除前面带有勾的已经安装的字体。
### 个人安装
-如果你以兼容格式:_.ttf_、 _otf_ 、_.ttc_、_.pfa_ 、_.pfb_ 或者 . _pcf_ 下载了字体,则此方法效果更好。这些字体扩展名不应通过将它们放入系统文件夹来安装在系统范围内。这种类型的非打包字体不能自动更新。他们也可能会在稍后干扰一些软件操作。安装这些字体的最佳方法是在你自己的个人数据目录中。
+如果你下载的字体是兼容格式:.ttf、.otf、.ttc、.pfa、.pfb 或者 .pcf,则此方法效果更好。这些字体不应该通过放入系统文件夹的方式在系统范围内安装。这种非打包的字体不能自动更新,也可能会在稍后干扰一些软件操作。安装这些字体的最佳方法是安装在你自己的个人数据目录中。
-打开 Fedora Workstation 中的 _Files_ 应用或你选择的类似文件管理器应用。如果你使用 _Files_,那么可能需要使用 _Ctrl+H_ 组合键来显示隐藏的文件和文件夹。查找 _.fonts_ 文件夹并将其打开。如果你没有 _.fonts_ 文件夹,请创建它。 (记住最前面的点并全部使用小写。)
+打开 Fedora Workstation 中的 “Files” 应用或你选择的类似文件管理器应用。如果你使用 “Files”,那么可能需要使用 `Ctrl+H` 组合键来显示隐藏的文件和文件夹。查找 `.fonts` 文件夹并将其打开。如果你没有 `.fonts` 文件夹,请创建它。 (记住最前面的点并全部使用小写。)
-将已下载的字体文件复制到 _.fonts_ 文件夹中。此时你可以关闭文件管理器。打开一个终端并输入以下命令:
+将已下载的字体文件复制到 `.fonts` 文件夹中。此时你可以关闭文件管理器。打开一个终端并输入以下命令:
```
fc-cache
@@ -39,16 +39,15 @@ fc-cache
作者简介:
-Paul W. Frields
Paul W. Frields 自 1997 年以来一直是 Linux 用户和爱好者,并于 2003 年 Fedora 发布不久后加入项目。他是 Fedora 项目委员会的创始成员之一,并从事文档、网站发布、倡导、工具链开发和维护软件工作。他于 2008 年 2 月至 2010 年 7 月在红帽担任 Fedora 项目负责人,现任红帽公司工程部经理。他目前和他的妻子和两个孩子一起住在弗吉尼亚州。
-----------------------------
via: https://fedoramagazine.org/add-fonts-fedora/
-作者:[ Paul W. Frields ][a]
+作者:[Paul W. Frields][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 70e7d77bb813588d8cd71ecfec9a7872b1fb1469 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 01:07:15 +0800
Subject: [PATCH 035/220] PUB:20180301 How to add fonts to Fedora.md
@geekpi
---
.../tech => published}/20180301 How to add fonts to Fedora.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180301 How to add fonts to Fedora.md (100%)
diff --git a/translated/tech/20180301 How to add fonts to Fedora.md b/published/20180301 How to add fonts to Fedora.md
similarity index 100%
rename from translated/tech/20180301 How to add fonts to Fedora.md
rename to published/20180301 How to add fonts to Fedora.md
From 8205b15a71b69ff2fccc4535d2d88502075626cc Mon Sep 17 00:00:00 2001
From: Dot
Date: Sun, 22 Apr 2018 09:18:07 +0800
Subject: [PATCH 036/220] translating by Dotcra
---
...ps Can Make Your Bash Scripts Way More Robust And Reliable.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md b/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
index 61013f03f4..73bb7e841b 100644
--- a/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
+++ b/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
@@ -1,3 +1,4 @@
+[ translating by Dotcra ]
How "Exit Traps" Can Make Your Bash Scripts Way More Robust And Reliable
============================================================
From c02621b1653c4de19f16ff35fafc037bdc54bb4e Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 22 Apr 2018 11:12:32 +0800
Subject: [PATCH 037/220] Delete 20180413 Finding what you-re looking for on
Linux.md
---
...inding what you-re looking for on Linux.md | 140 ------------------
1 file changed, 140 deletions(-)
delete mode 100644 sources/tech/20180413 Finding what you-re looking for on Linux.md
diff --git a/sources/tech/20180413 Finding what you-re looking for on Linux.md b/sources/tech/20180413 Finding what you-re looking for on Linux.md
deleted file mode 100644
index 45ecce8aba..0000000000
--- a/sources/tech/20180413 Finding what you-re looking for on Linux.md
+++ /dev/null
@@ -1,140 +0,0 @@
-Translating by MjSeven
-
-
-Finding what you’re looking for on Linux
-======
-
-
-It isn’t hard to find what you’re looking for on a Linux system — a file or a command — but there are a _lot_ of ways to go looking.
-
-### 7 commands to find Linux files
-
-#### find
-
-The most obvious is undoubtedly the **find** command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if you’re willing to confine your search to the local directory.
-```
-$ find e*
-empty
-examples.desktop
-
-```
-
-In this way, it works much like the **ls** command and isn't doing much of a search.
-
-For more relevant searches, find requires a starting point and some criteria for your search (unless you simply want it to provide a recursive listing of that starting point’s directory. The command **find . -type f** will recursively list all regular files starting with the current directory while **find ~nemo -type f -empty** will find empty files in Nemo’s home directory.
-```
-$ find ~nemo -type f -empty
-/home/nemo/empty
-
-```
-
-**Also on Network world:[11 pointless but awesome Linux terminal tricks][1]**
-
-#### locate
-
-The name of the **locate** command suggests that it does basically the same thing as find, but it works entirely differently. Where the **find** command can select files based on a variety of criteria — name, size, owner, permissions, state (such as empty), etc. with a selectable depth for the search, the **locate** command looks through a file called /var/lib/mlocate/mlocate.db to find what you’re looking for. That db file is periodically updated, so a locate of a file you just created will probably fail to find it. If that bothers you, you can run the updatedb file and get the update to happen right away.
-```
-$ sudo updatedb
-
-```
-
-#### mlocate
-
-The **mlocate** command works like the **locate** command and uses the same mlocate.db file as locate.
-
-#### which
-
-The **which** command works very differently than the **find** and **locate** commands. It uses your search path and checks each directory on it for an executable with the file name you’re looking for. Once it finds one, it stops searching and displays the full path to that executable.
-
-The primary benefit of the **which** command is that it answers the question, “If I enter this command, what executable file will be run?” It ignores files that aren’t executable and doesn’t list all executables on the system with that name — just the one that it finds first. If you wanted to find _all_ executables that have some name, you could run a find command like this, but it might take considerably longer to run the very efficient **which** command.
-```
-$ find / -name locate -perm -a=x 2>/dev/null
-/usr/bin/locate
-/etc/alternatives/locate
-
-```
-
-In this find command, we’re looking for all executables (files that cen be run by anyone) named “locate”. We’re also electing not to view all of the “Permission denied” messages that would otherwise clutter our screens.
-
-#### whereis
-
-The **whereis** command works a lot like the **which** command, but it provides more information. Instead of just looking for executables, it also looks for man pages and source files. Like the **which** command, it uses your search path ($PATH) to drive its search.
-```
-$ whereis locate
-locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
-
-```
-
-#### whatis
-
-The **whatis** command has its own unique mission. Instead of actually finding files, it looks for information in the man pages for the command you are asking about and provides the brief description of the command from the top of the man page.
-```
-$ whatis locate
-locate (1) - find files by name
-
-```
-
-If you ask about a script that you’ve just set up, it won’t have any idea what you’re referring to and will tell you so.
-```
-$ whatis cleanup
-cleanup: nothing appropriate.
-
-```
-
-#### apropos
-
-The **apropos** command is useful when you know what you want to do, but you have no idea what command you should be using to do it. If you were wondering how to locate files, for example, the commands “apropos find” and “apropos locate” would have a lot of suggestions to offer.
-```
-$ apropos find
-File::IconTheme (3pm) - find icon directories
-File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
-File::UserDirs (3pm) - find extra media and documents directories
-find (1) - search for files in a directory hierarchy
-findfs (8) - find a filesystem by label or UUID
-findmnt (8) - find a filesystem
-gst-typefind-1.0 (1) - print Media type of file
-ippfind (1) - find internet printing protocol printers
-locate (1) - find files by name
-mlocate (1) - find files by name
-pidof (8) - find the process ID of a running program.
-sane-find-scanner (1) - find SCSI and USB scanners and their device files
-systemd-delta (1) - Find overridden configuration files
-xdg-user-dir (1) - Find an XDG user dir
-$
-$ apropos locate
-blkid (8) - locate/print block device attributes
-deallocvt (1) - deallocate unused virtual consoles
-fallocate (1) - preallocate or deallocate space to a file
-IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
-locate (1) - find files by name
-mlocate (1) - find files by name
-mlocate.db (5) - a mlocate database
-mshowfat (1) - shows FAT clusters allocated to file
-ntfsfallocate (8) - preallocate space to a file on an NTFS volume
-systemd-sysusers (8) - Allocate system users and groups
-systemd-sysusers.service (8) - Allocate system users and groups
-updatedb (8) - update a database for mlocate
-updatedb.mlocate (8) - update a database for mlocate
-whereis (1) - locate the binary, source, and manual page files for a...
-which (1) - locate a command
-
-```
-
-### Wrap-up
-
-The commands available on Linux for locating and identifying files are quite varied, but they're all very useful.
-
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
From 25b97e7d8cfb1eec651479f0c8b4c61b784dc39c Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 22 Apr 2018 11:13:20 +0800
Subject: [PATCH 038/220] Create 20180413 Finding what you-re looking for on
Linux.md
---
...inding what you-re looking for on Linux.md | 137 ++++++++++++++++++
1 file changed, 137 insertions(+)
create mode 100644 translated/tech/20180413 Finding what you-re looking for on Linux.md
diff --git a/translated/tech/20180413 Finding what you-re looking for on Linux.md b/translated/tech/20180413 Finding what you-re looking for on Linux.md
new file mode 100644
index 0000000000..b554ff50c8
--- /dev/null
+++ b/translated/tech/20180413 Finding what you-re looking for on Linux.md
@@ -0,0 +1,137 @@
+在 Linux 上找到你要找的东西
+=====
+
+
+在 Linux 系统上找到你要找的东西(不管是一个文件还是一个命令)并不难,但可供使用的查找方法却有很多种。
+
+### 7 个命令来寻找 Linux 文件
+
+#### find
+
+最明显的无疑是 **find** 命令,而且它比以前更容易使用了。它过去必须指定搜索的起始位置,但现在,如果你愿意把搜索限制在当前目录,也可以只带一个文件名或正则表达式来运行 find。
+```
+$ find e*
+empty
+examples.desktop
+
+```
+
+这样,它就像 **ls** 命令一样工作,并没有做太多的搜索。
+
+对于更有针对性的搜索,find 命令需要一个起点和一些搜索条件(除非你只是希望它递归列出起点目录下的内容)。命令 **find . -type f** 将从当前目录开始递归列出所有常规文件,而 **find ~nemo -type f -empty** 将在 nemo 的主目录中找到空文件。
+```
+$ find ~nemo -type f -empty
+/home/nemo/empty
+
+```
+
+**另请参阅 Network World:[11 个毫无意义但是很棒的 Linux 终端技巧][1]。**
+
+#### locate
+
+**locate** 命令的名称表明它和 find 命令做的事情基本相同,但它的工作原理完全不同。**find** 命令可以根据各种条件(名称、大小、所有者、权限、状态(如空文件)等)来选择文件,并且可以指定搜索深度;而 **locate** 命令则是通过查询一个名为 /var/lib/mlocate/mlocate.db 的文件来找到你要找的内容。该数据库文件会定期更新,因此 locate 可能找不到你刚刚创建的文件。如果这让你感到困扰,你可以运行 updatedb 命令立即更新该数据库。
+```
+$ sudo updatedb
+
+```
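+
+( LCTT 译注:数据库更新后,就可以用 locate 按名称快速查找文件了,比如查找它自己的数据库文件:)
+
+```
+$ locate mlocate.db
+/var/lib/mlocate/mlocate.db
+```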
+
+#### mlocate
+
+**mlocate** 命令的工作类似于 **locate** 命令,它使用与 locate 相同的 mlocate.db 文件。
+
+#### which
+
+**which** 命令的工作方式与 **find** 命令和 **locate** 命令有很大的区别。它使用你的搜索路径并检查其上的每个目录,以查找具有你要查找的文件名的可执行文件。一旦找到一个,它会停止搜索并显示该可执行文件的完整路径。
+
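+( LCTT 译注:举个简单的例子,which 只会报告搜索路径中第一个匹配的可执行文件:)
+
+```
+$ which locate
+/usr/bin/locate
+```
+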
+**which** 命令的主要优点是它回答了“如果我输入此命令,将会运行哪个可执行文件?”的问题。它会忽略不可执行的文件,也不会列出系统上所有同名的可执行文件,只列出它找到的第一个。如果你想查找具有某个名称的**所有**可执行文件,则可以像下面这样运行 find 命令,但相比非常高效的 **which** 命令,它可能要花费长得多的时间。
+```
+$ find / -name locate -perm -a=x 2>/dev/null
+/usr/bin/locate
+/etc/alternatives/locate
+
+```
+
+在这个 find 命令中,我们在寻找名为 “locate” 的所有可执行文件(即任何人都可以运行的文件)。我们还选择了不显示那些“拒绝访问”的消息,否则它们会弄乱我们的屏幕。
+
+#### whereis
+
+**whereis** 命令与 **which** 命令非常类似,但它提供了更多信息。它不仅仅是寻找可执行文件,它还寻找手册页(man page)和源文件。像 **which** 命令一样,它使用搜索路径($PATH) 来驱动搜索。
+```
+$ whereis locate
+locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
+
+```
+
+#### whatis
+
+**whatis** 命令有其独特的使命。它不是实际查找文件,而是在手册页中查找有关所询问命令的信息,并从手册页的顶部提供该命令的简要说明。
+```
+$ whatis locate
+locate (1) - find files by name
+
+```
+
+如果你询问你刚刚设置的脚本,它不会知道你指的是什么,并会告诉你。
+```
+$ whatis cleanup
+cleanup: nothing appropriate.
+
+```
+
+#### apropos
+
+当你知道你想要做什么,但不知道应该使用什么命令来执行此操作时,**apropos** 命令很有用。例如,如果你想知道如何查找文件,那么 “apropos find” 和 “apropos locate” 会提供很多建议。
+```
+$ apropos find
+File::IconTheme (3pm) - find icon directories
+File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
+File::UserDirs (3pm) - find extra media and documents directories
+find (1) - search for files in a directory hierarchy
+findfs (8) - find a filesystem by label or UUID
+findmnt (8) - find a filesystem
+gst-typefind-1.0 (1) - print Media type of file
+ippfind (1) - find internet printing protocol printers
+locate (1) - find files by name
+mlocate (1) - find files by name
+pidof (8) - find the process ID of a running program.
+sane-find-scanner (1) - find SCSI and USB scanners and their device files
+systemd-delta (1) - Find overridden configuration files
+xdg-user-dir (1) - Find an XDG user dir
+$
+$ apropos locate
+blkid (8) - locate/print block device attributes
+deallocvt (1) - deallocate unused virtual consoles
+fallocate (1) - preallocate or deallocate space to a file
+IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
+locate (1) - find files by name
+mlocate (1) - find files by name
+mlocate.db (5) - a mlocate database
+mshowfat (1) - shows FAT clusters allocated to file
+ntfsfallocate (8) - preallocate space to a file on an NTFS volume
+systemd-sysusers (8) - Allocate system users and groups
+systemd-sysusers.service (8) - Allocate system users and groups
+updatedb (8) - update a database for mlocate
+updatedb.mlocate (8) - update a database for mlocate
+whereis (1) - locate the binary, source, and manual page files for a...
+which (1) - locate a command
+
+```
+
+### 总结
+
+Linux 上可用于查找和识别文件的命令有很多种,但它们都非常有用。
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
From 88b812f3ccc6c7d18895b3c9c9f923930b2798b3 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 22 Apr 2018 19:06:21 +0800
Subject: [PATCH 039/220] Update 20180312 How To Quickly Monitor Multiple Hosts
In Linux.md
---
.../20180312 How To Quickly Monitor Multiple Hosts In Linux.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md b/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
index f5aa60e7aa..9892c0c9e3 100644
--- a/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
+++ b/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
@@ -1,3 +1,6 @@
+Translating by MjSeven
+
+
How To Quickly Monitor Multiple Hosts In Linux
======
From e5c334c84c15893b0e8a3fb7ad089b3f178b8d59 Mon Sep 17 00:00:00 2001
From: Dot
Date: Sun, 22 Apr 2018 20:14:36 +0800
Subject: [PATCH 040/220] [translated] 20180405 The fc Command Tutorial With
Examples For Beginners.md
---
...nd Tutorial With Examples For Beginners.md | 311 ------------------
...nd Tutorial With Examples For Beginners.md | 311 ++++++++++++++++++
2 files changed, 311 insertions(+), 311 deletions(-)
delete mode 100644 sources/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
create mode 100644 translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
diff --git a/sources/tech/20180405 The fc Command Tutorial With Examples For Beginners.md b/sources/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
deleted file mode 100644
index 62a36cb18b..0000000000
--- a/sources/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
+++ /dev/null
@@ -1,311 +0,0 @@
-[ dotcra translating ]
-The fc Command Tutorial With Examples For Beginners
-======
-
-
-The **fc** command, short for **f** ix **c** ommands, is a shell built-in command used to list, edit and re-execute the most recently entered commands in to an interactive shell. You can edit the recently entered commands in your favorite editor and run them without having to retype the entire commands. This command can be helpful to correct the spelling mistakes in the previously entered commands and avoids the repetition of long and complicated commands. Since it is shell-builtin, it is available in most shells, including Bash, Zsh, Ksh etc. In this brief tutorial, we are going to learn to use fc command in Linux.
-
-### The fc Command Tutorial With Examples
-
-**List the recently executed commands**
-
-If you run “fc -l” command with no arguments, it will display the last **16** commands.
-```
-$ fc -l
-507 fish
-508 fc -l
-509 sudo netctl restart wlp9s0sktab
-510 ls -l
-511 pwd
-512 uname -r
-513 uname -a
-514 touch ostechnix.txt
-515 vi ostechnix.txt
-516 echo "Welcome to OSTechNix"
-517 sudo apcman -Syu
-518 sudo pacman -Syu
-519 more ostechnix.txt
-520 wc -l ostechnix.txt
-521 cat ostechnix.txt
-522 clear
-
-```
-
-To reverse the order of the commands, use **-r** flag.
-```
-$ fc -l
-
-```
-
-You can suppress the line numbers using “-n” parameter.
-```
-$ fc -ln
- nano ~/.profile
- source ~/.profile
- source ~/.profile
- fc -ln
- fc -l
- sudo netctl restart wlp9s0sktab
- ls -l
- pwd
- uname -r
- uname -a
- echo "Welcome to OSTechNix"
- sudo apcman -Syu
- cat ostechnix.txt
- wc -l ostechnix.txt
- more ostechnix.txt
- clear
-
-```
-
-Now you won’t see the line numbers.
-
-To list the result staring from a specific command, simply use the line number along with **-l** option. For instance, to display the commands starting from line number 520 up to the present, we do:
-```
-$ fc -l 520
-520 ls -l
-521 pwd
-522 uname -r
-523 uname -a
-524 echo "Welcome to OSTechNix"
-525 sudo apcman -Syu
-526 cat ostechnix.txt
-527 wc -l ostechnix.txt
-528 more ostechnix.txt
-529 clear
-530 fc -ln
-531 fc -l
-
-```
-
-To list a commands within a specific range, for example 520 to 525, do:
-```
-$ fc -l 520 525
-520 ls -l
-521 pwd
-522 uname -r
-523 uname -a
-524 echo "Welcome to OSTechNix"
-525 sudo apcman -Syu
-
-```
-
-Instead of using the line numbers, we can also use strings. For example, list the commands starting from “pwd” command up to the resent, just use the staring letter of that command (i.e **p** ) like below.
-```
-$ fc -l p
-521 pwd
-522 uname -r
-523 uname -a
-524 echo "Welcome to OSTechNix"
-525 sudo apcman -Syu
-526 cat ostechnix.txt
-527 wc -l ostechnix.txt
-528 more ostechnix.txt
-529 clear
-530 fc -ln
-531 fc -l
-532 fc -l 520
-533 fc -l 520 525
-534 fc -l 520
-535 fc -l 522
-536 fc -l l
-
-```
-
-To see everything between “pwd” to “more” command, you could use either:
-```
-$ fc -l p m
-
-```
-
-Or, use combination of first letter of the starting command command and line number of the ending command:
-```
-$ fc -l p 528
-
-```
-
-Or, just line numbers of starting and ending commands:
-```
-$ fc -l 521 528
-
-```
-
-All of these three commands will display the same result.
-
-**Edit and re-run the last command automatically**
-
-At times, you might misspelled a previous command. In such situations, you can easily edit the spelling mistakes of the command using your default editor and execute it without having to retype again.
-
-To edit the last command and re-run it again, do:
-```
-$ fc
-
-```
-
-This will open your last command in the default editor.
-
-![][2]
-
-As you see in the above screenshot, my last command was “fc -l”. You can make any changes in the command and re-run it automatically again once you save and quit the editor. This can be useful when you use long and complicated commands or arguments. Please be mindful that this also can be a **destructive**. For example, if the previous command was a deadly command like “rm -fr ”, it will automatically execute and you may lost your important data. So, be very careful before using command.
-
-**Change the default editor to edit commands**
-
-Another notable option of fc is **“e”** to choose a different editor to edit the commands. For example, we can use “nano” editor to edit the last command like below.
-```
-$ fc -e nano
-
-```
-
-This command will open the nano editor(instead of the default editor) to edit last command.
-
-![][3]
-
-You may find it time consuming to use **-e** option for each command. To make the new editor as your default, just set the environment variable **FCEDIT** to the name of the editor you want **fc** to use.
-
-For example, to set “nano” as the new default editor, edit your **~/.profile** or environment file:
-```
-$ vi ~/.profile
-
-```
-
-Add the following line:
-```
-FCEDIT=nano
-
-```
-
-You can also use the full path of the editor like below.
-```
-FCEDIT=/usr/local/bin/emacs
-
-```
-
-Type **:wq** to save and close the file. To update the changes, run:
-```
-$ source ~/.profile
-
-```
-
-Now, you can just type to “fc” to edit the last command using “nano” editor.
-
-**Re-run the last command without editing it**
-
-We already knew if we run “fc” without any arguments, it loads the editor with the most recent command. At times, you may not want to edit, but simply execute the last command. To do so, use hyphen (-) symbol at the end as shown below.
-```
-$ echo "Welcome to OSTechNix"
-Welcome to OSTechNix
-
-$ fc -e -
-echo "Welcome to OSTechNix"
-Welcome to OSTechNix
-
-```
-
-As you see, fc didn’t edit the last command (i.e echo “Welcome to OSTechNix”) even if I used **-e** option.
-
-Please note that some of the options are shell-specific. They may not work in other shells. For example the following options can be used in **zsh** shell. It won’t work in Bash or Ksh shells.
-
-**Display when the commands were executed**
-
-To view when the commands were run, use **-d** like below.
-```
-fc -ld
-1 18:41 exit
-2 18:41 clear
-3 18:42 fc -l
-4 18:42 sudo netctl restart wlp9s0sktab
-5 18:42 ls -l
-6 18:42 pwd
-7 18:42 uname -r
-8 18:43 uname -a
-9 18:43 cat ostechnix.txt
-10 18:43 echo "Welcome to OSTechNix"
-11 18:43 more ostechnix.txt
-12 18:43 wc -l ostechnix.txt
-13 18:43 cat ostechnix.txt
-14 18:43 clear
-15 18:43 fc -l
-
-```
-
-Now you see the execution time of most recently executed commands.
-
-We can also display the full timestamp of each command using **-f** option.
-```
- fc -lf
- 1 4/5/2018 18:41 exit
- 2 4/5/2018 18:41 clear
- 3 4/5/2018 18:42 fc -l
- 4 4/5/2018 18:42 sudo netctl restart wlp9s0sktab
- 5 4/5/2018 18:42 ls -l
- 6 4/5/2018 18:42 pwd
- 7 4/5/2018 18:42 uname -r
- 8 4/5/2018 18:43 uname -a
- 9 4/5/2018 18:43 cat ostechnix.txt
- 10 4/5/2018 18:43 echo "Welcome to OSTechNix"
- 11 4/5/2018 18:43 more ostechnix.txt
- 12 4/5/2018 18:43 wc -l ostechnix.txt
- 13 4/5/2018 18:43 cat ostechnix.txt
- 14 4/5/2018 18:43 clear
- 15 4/5/2018 18:43 fc -l
- 16 4/5/2018 18:43 fc -ld
-
-```
-
-Of course, the European folks can use european date format using **-E** option.
-```
- fc -lE
- 2 5.4.2018 18:41 clear
- 3 5.4.2018 18:42 fc -l
- 4 5.4.2018 18:42 sudo netctl restart wlp9s0sktab
- 5 5.4.2018 18:42 ls -l
- 6 5.4.2018 18:42 pwd
- 7 5.4.2018 18:42 uname -r
- 8 5.4.2018 18:43 uname -a
- 9 5.4.2018 18:43 cat ostechnix.txt
- 10 5.4.2018 18:43 echo "Welcome to OSTechNix"
- 11 5.4.2018 18:43 more ostechnix.txt
- 12 5.4.2018 18:43 wc -l ostechnix.txt
- 13 5.4.2018 18:43 cat ostechnix.txt
- 14 5.4.2018 18:43 clear
- 15 5.4.2018 18:43 fc -l
- 16 5.4.2018 18:43 fc -ld
- 17 5.4.2018 18:49 fc -lf
-
-```
-
-### TL;DR
-
- * When running without any arguments, fc will load the most recent command in the default text editor.
- * When running with a numeric argument, fc loads the editor with the command with that specified number.
- * When running with a string argument, fc loads the most recent command starting with that specified string.
- * When running with two arguments to fc , the arguments specify the beginning and end of a range of commands.
-
-
-
-For more details, refer man pages.
-```
-$ man fc
-
-```
-
-And, that’s all for today. Hope you find this article useful. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/the-fc-command-tutorial-with-examples-for-beginners/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/fc-command-1.png
-[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/fc-command-2.png
diff --git a/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md b/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
new file mode 100644
index 0000000000..412ed5203c
--- /dev/null
+++ b/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
@@ -0,0 +1,311 @@
+使用 fc 修改历史命令
+======
+
+
+
+fc (**F**ix **C**ommands 的缩写) 是个 shell 内置命令,用于在交互式 shell 里列出、编辑和重新执行最近输入的命令。你可以用你喜欢的编辑器编辑最近的命令并再次执行,而不用把它们整个重新输入一遍。除了可以避免重复输入又长又复杂的命令,它对修正之前输入的命令中的拼写错误也很有用。因为是 shell 内置命令,大多数 shell 都包含它,比如 Bash、Zsh、Ksh 等。在这篇短文中,我们来学一学在 Linux 中使用 fc 命令。
+
+### fc 命令教程及示例
+
+**列出最近执行的命令**
+
+执行不带参数的"fc -l"命令,它会列出最近 **16** 个命令。
+```
+$ fc -l
+507 fish
+508 fc -l
+509 sudo netctl restart wlp9s0sktab
+510 ls -l
+511 pwd
+512 uname -r
+513 uname -a
+514 touch ostechnix.txt
+515 vi ostechnix.txt
+516 echo "Welcome to OSTechNix"
+517 sudo apcman -Syu
+518 sudo pacman -Syu
+519 more ostechnix.txt
+520 wc -l ostechnix.txt
+521 cat ostechnix.txt
+522 clear
+
+```
+
+**-r** 选项用于将输出反向排序。
+```
+$ fc -lr
+
+```
+
+**-n** 选项用于隐藏行号。
+```
+$ fc -ln
+ nano ~/.profile
+ source ~/.profile
+ source ~/.profile
+ fc -ln
+ fc -l
+ sudo netctl restart wlp9s0sktab
+ ls -l
+ pwd
+ uname -r
+ uname -a
+ echo "Welcome to OSTechNix"
+ sudo apcman -Syu
+ cat ostechnix.txt
+ wc -l ostechnix.txt
+ more ostechnix.txt
+ clear
+
+```
+
+这样行号就不再显示了。
+
+如果想以某个命令开始,只需在 **-l** 选项后面加上行号即可。比如,要显示行号 520 至最近的命令,可以这样:
+```
+$ fc -l 520
+520 ls -l
+521 pwd
+522 uname -r
+523 uname -a
+524 echo "Welcome to OSTechNix"
+525 sudo apcman -Syu
+526 cat ostechnix.txt
+527 wc -l ostechnix.txt
+528 more ostechnix.txt
+529 clear
+530 fc -ln
+531 fc -l
+
+```
+
+要列出一段范围内的命令,将始末行号作为 "fc -l" 的参数即可,比如 520 至 525:
+```
+$ fc -l 520 525
+520 ls -l
+521 pwd
+522 uname -r
+523 uname -a
+524 echo "Welcome to OSTechNix"
+525 sudo apcman -Syu
+
+```
+
+除了使用行号,我们还可以使用字符串。比如,要列出从最近一条 "pwd" 命令到当前为止的所有命令,只需要像下面这样使用该命令的首字母(即 **p**)即可:
+```
+$ fc -l p
+521 pwd
+522 uname -r
+523 uname -a
+524 echo "Welcome to OSTechNix"
+525 sudo apcman -Syu
+526 cat ostechnix.txt
+527 wc -l ostechnix.txt
+528 more ostechnix.txt
+529 clear
+530 fc -ln
+531 fc -l
+532 fc -l 520
+533 fc -l 520 525
+534 fc -l 520
+535 fc -l 522
+536 fc -l l
+
+```
+
+要列出 "pwd" 和 "more" 之间的所有命令,可以两个都使用其首字母,像这样:
+```
+$ fc -l p m
+
+```
+
+或者,使用开始命令的首字母以及结束命令的行号:
+```
+$ fc -l p 528
+
+```
+
+或者都使用行号:
+```
+$ fc -l 521 528
+
+```
+
+这三个命令都显示一样的结果。
+
+**编辑并执行上一个命令**
+
+我们经常敲错命令,这时你可以用默认编辑器修正拼写错误并执行而不用将命令重新再敲一遍。
+
+编辑并执行上一个命令:
+```
+$ fc
+
+```
+
+这会在默认编辑器里载入上一个命令。
+
+
+![][2]
+
+你可以看到,我上一个命令是 "fc -l"。你可以随意修改,它会在你保存退出编辑器时自动执行。这在命令或参数又长又复杂时很有用。需要注意的是,它同时也可能是**毁灭性**的。比如,如果你的上一个命令是危险的 `rm -fr `,当它自动执行时你可能丢掉你的重要数据。所以,小心谨慎对待每一个命令。
+
+**更改默认编辑器**
+
+另一个有用的选项是 **-e** ,它可以用来为 fc 命令选择不同的编辑器。比如,如果我们想用 "nano" 来编辑上一个命令:
+```
+$ fc -e nano
+
+```
+
+这个命令会打开 nano 编辑器(而不是默认编辑器)编辑上一个命令。
+
+![][3]
+
+如果你觉得用 **-e** 选项太麻烦,你可以修改你的默认编辑器,只需要将环境变量 **FCEDIT** 设为你想要让 **fc** 使用的编辑器名称即可。
+
+比如,要把 "nano" 设为默认编辑器,编辑你的 **~/.profile** 或其他初始化文件: ( LCTT 译注:如果 ~/.profile 不存在可自己创建;如果使用的是 bash ,可以编辑 ~/.bash_profile )
+```
+$ vi ~/.profile
+
+```
+
+添加下面一行:
+```
+FCEDIT=nano
+# ( LCTT译注:如果在子 shell 中会用到 fc ,最好在这里 `export FCEDIT` )
+
+```
+
+你也可以使用编辑器的完整路径:
+```
+FCEDIT=/usr/local/bin/emacs
+
+```
+
+输入 **:wq** 保存退出。要使改动立即生效,运行以下命令:
+```
+$ source ~/.profile
+
+```
+
+现在再输入 "fc" 就可以使用 "nano" 编辑器来编辑上一个命令了。
+
+**不编辑而直接执行上一个命令**
+
+我们现在知道 "fc" 命令不带任何参数的话会将上一个命令载入编辑器。但有时你可能不想编辑,仅仅是想再次执行上一个命令。这很简单,在末尾加上连字符(-)就可以了:
+```
+$ echo "Welcome to OSTechNix"
+Welcome to OSTechNix
+
+$ fc -e -
+echo "Welcome to OSTechNix"
+Welcome to OSTechNix
+
+```
+
+如你所见,"fc" 虽然带了 **-e** 选项,但并没有编辑上一个命令(例中的 echo "Welcome to OSTechNix"),而是直接执行了它。
+
+需要注意的是,有些选项仅对指定 shell 有效。比如下面这些选项可以用在 **zsh** 中,但在 Bash 或 Ksh 中则不能用。
+
+**显示命令的执行时间**
+
+想要知道命令是在什么时候执行的,可以用 **-d** 选项:
+```
+fc -ld
+1 18:41 exit
+2 18:41 clear
+3 18:42 fc -l
+4 18:42 sudo netctl restart wlp9s0sktab
+5 18:42 ls -l
+6 18:42 pwd
+7 18:42 uname -r
+8 18:43 uname -a
+9 18:43 cat ostechnix.txt
+10 18:43 echo "Welcome to OSTechNix"
+11 18:43 more ostechnix.txt
+12 18:43 wc -l ostechnix.txt
+13 18:43 cat ostechnix.txt
+14 18:43 clear
+15 18:43 fc -l
+
+```
+
+这样你就可以查看最近命令的具体执行时间了。
+
+使用选项 **-f** ,可以为每个命令显示完整的时间戳。
+```
+ fc -lf
+ 1 4/5/2018 18:41 exit
+ 2 4/5/2018 18:41 clear
+ 3 4/5/2018 18:42 fc -l
+ 4 4/5/2018 18:42 sudo netctl restart wlp9s0sktab
+ 5 4/5/2018 18:42 ls -l
+ 6 4/5/2018 18:42 pwd
+ 7 4/5/2018 18:42 uname -r
+ 8 4/5/2018 18:43 uname -a
+ 9 4/5/2018 18:43 cat ostechnix.txt
+ 10 4/5/2018 18:43 echo "Welcome to OSTechNix"
+ 11 4/5/2018 18:43 more ostechnix.txt
+ 12 4/5/2018 18:43 wc -l ostechnix.txt
+ 13 4/5/2018 18:43 cat ostechnix.txt
+ 14 4/5/2018 18:43 clear
+ 15 4/5/2018 18:43 fc -l
+ 16 4/5/2018 18:43 fc -ld
+
+```
+
+当然,欧洲的老乡们还可以使用 **-E** 选项来显示欧洲时间格式。
+```
+ fc -lE
+ 2 5.4.2018 18:41 clear
+ 3 5.4.2018 18:42 fc -l
+ 4 5.4.2018 18:42 sudo netctl restart wlp9s0sktab
+ 5 5.4.2018 18:42 ls -l
+ 6 5.4.2018 18:42 pwd
+ 7 5.4.2018 18:42 uname -r
+ 8 5.4.2018 18:43 uname -a
+ 9 5.4.2018 18:43 cat ostechnix.txt
+ 10 5.4.2018 18:43 echo "Welcome to OSTechNix"
+ 11 5.4.2018 18:43 more ostechnix.txt
+ 12 5.4.2018 18:43 wc -l ostechnix.txt
+ 13 5.4.2018 18:43 cat ostechnix.txt
+ 14 5.4.2018 18:43 clear
+ 15 5.4.2018 18:43 fc -l
+ 16 5.4.2018 18:43 fc -ld
+ 17 5.4.2018 18:49 fc -lf
+
+```
+
+### fc 用法总结
+
+ * 当不带任何参数时,fc 将上一个命令载入默认编辑器。
+ * 当带一个数字作为参数时,fc 将数字指定的命令载入默认编辑器。
+ * 当带一个字符串作为参数时,fc 将最近一条以该字符串开头的命令载入默认编辑器。
+ * 当有两个参数时,它们分别指定需要列出的命令范围的开始和结束。
+
+
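+( LCTT 译注:上述几种参数形式可以用下面的假设示例来概括,其中的行号 520 仅为演示:)
+
+```
+$ fc               # 编辑并重新执行上一条命令
+$ fc 520           # 编辑并重新执行第 520 条命令
+$ fc p             # 编辑并重新执行最近一条以 p 开头的命令
+$ fc -l 520 525    # 列出第 520 至 525 条命令
+```
+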
+
+更多细节,请参考 man 手册。
+```
+$ man fc
+
+```
+
+好了,今天就这些。希望这篇文章能帮助到你。更多精彩内容,敬请期待!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/the-fc-command-tutorial-with-examples-for-beginners/
+
+作者:[SK][a]
+译者:[Dotcra](https://github.com/Dotcra)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/fc-command-1.png
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/fc-command-2.png
From ce74c92f8fd56cdcbe3f69fead13d363c98443ea Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 22:43:25 +0800
Subject: [PATCH 041/220] PRF:20180411 Awesome GNOME extensions for
developers.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@hopefully2333 翻译的不够仔细,望继续加油。
---
...Awesome GNOME extensions for developers.md | 89 +++++++++++--------
1 file changed, 53 insertions(+), 36 deletions(-)
diff --git a/translated/tech/20180411 Awesome GNOME extensions for developers.md b/translated/tech/20180411 Awesome GNOME extensions for developers.md
index 2e4793a269..9ec1d05a67 100644
--- a/translated/tech/20180411 Awesome GNOME extensions for developers.md
+++ b/translated/tech/20180411 Awesome GNOME extensions for developers.md
@@ -1,89 +1,106 @@
-对开发者来说非常好的GNOME扩展
+开发者的最佳 GNOME 扩展
======

-这个扩展给与了 GNOME3 桌面环境以非常大的灵活性,这种灵活性赋予了用户在定制化桌面上的优势,从而使他们的工作流程变得更加舒适和有效率。Fedora 系统已经已经包含了一部分例如 EasyScreenCast, gTile, 和 OpenWeather 这样很好的桌面扩展,本文接下来会重点报道这些为开发者而改变的扩展。
-如果你需要帮助来安装 GNOME 扩展,那么可以参考《如何安装一个 GNOME 命令行扩展》这篇文章。
+扩展给予 GNOME3 桌面环境以非常大的灵活性,这种灵活性赋予了用户在定制化桌面上的优势,从而使他们的工作流程变得更加舒适和有效率。Fedora Magazine 已经介绍了一些很棒的桌面扩展,例如 [EasyScreenCast][1]、 [gTile][2] 和 [OpenWeather][3] ,本文接下来会重点报道这些为开发者而改变的扩展。
-### ![Docker Integration extension icon][5] Docker Integration
+如果你需要帮助来安装 GNOME 扩展,那么可以参考《[如何安装一个 GNOME Shell 扩展][4]》这篇文章。
+
+### Docker 集成(Docker Integration)
+
+![Docker Integration extension icon][5]
![Docker Integration extension status menu][6]
-对于为自己的应用使用 docker 的开发者而言,这个 docker 集成扩展是必不可少的。这个状态菜单提供了一个带着启动、停止、暂停、甚至删除的这些选项的 docker 容器的列表,这个列表会在新容器加入到这个系统时自动更新。
+对于为自己的应用使用 Docker 的开发者而言,这个 [Docker 集成][7] 扩展是必不可少的。这个状态菜单提供了一个带着启动、停止、暂停、甚至删除它们的选项的 Docker 容器列表,这个列表会在新容器加入到这个系统时自动更新。
-在安装完这些扩展后,Fedora 用户可能会收到这么一条消息:“加载容器时发生错误”。这是因为 docker 命令需要在命令前加 sudo,或者得到默认的 root 权限。去设置你的用户权限再去运行 docker,可以参考 Fedora 门户网站上的 docker 安装这一页。
+在安装完这个扩展后,Fedora 用户可能会收到这么一条消息:“Error occurred when fetching containers.(获取容器时发生错误)”。这是因为 Docker 命令默认需要 `sudo` 或 root 权限。要设置你的用户权限来运行 Docker,可以参考 [Fedora 开发者门户网站上的 Docker 安装这一页][8]。
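+
+( LCTT 译注:常见的做法是把当前用户加入 docker 组,这样运行 docker 命令就不再需要 sudo;修改后需要重新登录才会生效:)
+
+```
+sudo groupadd docker
+sudo usermod -aG docker $USER
+```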
-你可以在这个扩展的站点上找到更多的信息。
+你可以在该[扩展的站点][9]上找到更多的信息。
-### ![Jenkins CI Server Indicator icon][10] Jenkins CI Server Indicator
+### Jenkins CI 服务器指示器(Jenkins CI Server Indicator)
+
+![Jenkins CI Server Indicator icon][10]
![Jenkins CI Server Indicator extension status menu][11]
-Jenkins CI 服务器指向仪这个扩展使开发者把他们的应用建立在 Jenkins CI 服务器上的这个过程更加简单,它展示了一个菜单,菜单中有一个带有进程和进程状态的列表。他同样包含了很多特点,比如很容易就能创建 Jenkins 的前端,为完整的进程做通知,而且能够触发或者过滤进程。
+[Jenkins CI 服务器指示器][12]这个扩展可以让开发者更轻松地在 Jenkins CI 服务器上构建应用,它展示了一个菜单,包含任务列表及那些任务的状态。它也包括了一些特性,比如轻松访问 Jenkins 网页前端、任务完成提示、以及触发和过滤任务等。
-如果想要更多的信息,请去浏览开发者站点。
+如果想要更多的信息,请去浏览[开发者站点][13]。
-### ![android-tool extension icon][14] android-tool
+### 安卓工具(android-tool)
-Android-tool 对于 Android 开发者来说会是一个非常有价值的扩展,它的特点包括捕捉错误报告,设备截屏和屏幕录像。它可以通过 usb 和 tcp 连接两种方式来连接 Android 设备。
+![android-tool extension icon][14]
+
+![android-tool extension status menu][15]
+
+[安卓工具][16]对于 Android 开发者来说会是一个非常有价值的扩展,它的特性包括捕获错误报告、设备截屏和屏幕录像。它可以通过 USB 和 TCP 连接两种方式来连接 Android 设备。
+
+这个扩展需要 `adb` 的包,从 Fedora 官方仓库安装 `adb` 只需要[运行这条命令][17]:
-这个扩展需要 adb 的包,从 Fedora 官方仓库安装 adb 只需要运行这条命令:
```
sudo dnf install android-tools
-
```
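+
+( LCTT 译注:安装后可以用下面的命令确认 adb 能否识别到已连接的设备:)
+
+```
+$ adb devices
+```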
-你可以在这个扩展的 GitHub 网页里找到更多信息。
+你可以在这个[扩展的 GitHub 网页][18]里找到更多信息。
-### ![GnomeHub extension icon][19] GnomeHub
+### GnomeHub
-对于为自己的项目使用 GitHub 的 GNOME 用户来说,GnomeHub 是一个非常好的扩展,它可以显示 Github 上的仓库,还可以通知用户有新提交的 pull requests。除此之外,用户可以把他们最喜欢的仓库加在这个扩展的设置里。
+![GnomeHub extension icon][19]
-如果想要更多信息,可以参考一下这个项目的 GitHub 页面。
+![GnomeHub extension status menu][20]
-### ![gistnotes extension icon][23] gistnotes
+对于自己的项目使用 GitHub 的 GNOME 用户来说,[GnomeHub][21] 是一个非常好的扩展,它可以显示 GitHub 上的仓库,还可以通知用户有新提交的拉取请求。除此之外,用户可以把他们最喜欢的仓库加在这个扩展的设置里。
-简单地说,gistnotes 为 gist 用户提供了一种简单的方式来创建、存储和管理注释和代码片段。如果想要更多的信息,可以参考这个项目的网站。
+如果想要更多信息,可以参考一下这个[项目的 GitHub 页面][22]。
+
+### gistnotes
+
+![gistnotes extension icon][23]
+
+简单地说,[gistnotes][24] 为 gist 用户提供了一种创建、存储和管理注释和代码片段的简单方式。如果想要更多的信息,可以参考这个[项目的网站][25]。
![gistnotes window][26]
-### ![Arduino Control extension icon][27] Arduino Control
+### Arduino 控制器(Arduino Control)
-这个 Arduino 控制扩展允许用户去连接或者控制他们自己的单片机电路板,它同样允许用户在状态菜单里增加滑块或者开关。除此之外,开发者模式允许扩展目录里的脚本通过以太网或者 usb 来连接电路板。
+![Arduino Control extension icon][27]
-最重要的是,这个扩展可以被定制化来适合你的项目,在 README 文件里的例子是,它能够“通过网络上任意的电脑来控制你房间里的灯”。
+这个 [Arduino 控制器][28]扩展允许用户去连接或者控制他们自己的 Arduino 电路板,它同样允许用户在状态菜单里增加滑块或者开关。除此之外,开发者放在扩展目录里的脚本可以通过以太网或者 usb 来连接 Arduino 电路板。
-你可以从这个项目的 GitHub 页面上得到更多的产品信息并安装这个扩展。
+最重要的是,这个扩展可以定制化来适合你的项目,其 README 文件里提供的例子是,它能够“通过网络上任意的电脑来控制你房间里的灯”。
-### ![Hotel Manager extension icon][30] Hotel Manager
+你可以从这个[项目的 GitHub 页面][29]上得到更多的产品信息并安装这个扩展。
-![Hotel Manager extension status menu.][31]
+### Hotel Manager
-使用 Hotel process manager 开发网站的开发人员,应该尝试一下 Hotel Manager 这个扩展。它展示了一个增加到 hotel 里的网页应用的列表,并给与了用户去开始、停止和重启这些应用的能力。
-此外,还可以通过电脑图标快速打开、浏览这些网页应用。这个扩展同样可以启动、停止或重启 hotel 的后台程序。
+![Hotel Manager extension icon][30]
-作为本文的出版物,GNOME 3.26 版本的 Hotel Manager 版本 4 没有在扩展的下拉式菜单里列出网页应用。版本 4 还会在 Fedora 28 (GNOME 3.28) 上安装时报错。然而,版本 3 工作在 Fedora 27 和 Fedora 28。
+![Hotel Manager extension status menu][31]
-如果想要更多细节,可以去看这个项目在 GitHub 上的网页。
+使用 Hotel 进程管理器开发网站的开发人员,应该尝试一下 [Hotel Manager][32] 这个扩展。它展示了一个增加到 Hotel 里的网页应用的列表,并给予了用户开始、停止和重启这些应用的能力。此外,还可以通过右边的电脑图标快速打开、浏览这些网页应用。这个扩展同样可以启动、停止或重启 Hotel 的后台程序。
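+
+作为补充,下面是一个在命令行中使用 Hotel 的简单示意(假设你已通过 npm 安装了 `hotel`;命令请以其官方文档为准):
+
+```
+# 全局安装 hotel,然后在项目目录中注册一个应用
+npm install -g hotel
+hotel add 'npm start'
+# 列出已注册的应用;启动/停止 hotel 后台守护进程
+hotel ls
+hotel start
+hotel stop
+```
+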
-### VSCode Search Provider
+本文发布时,用于 GNOME 3.26 的 Hotel Manager 版本 4 没有在该扩展的下拉菜单里列出网页应用,而且版本 4 在 Fedora 28(GNOME 3.28)上安装时还会报错。不过,版本 3 在 Fedora 27 和 Fedora 28 上都能正常工作。
-VSCode Search Provider 是一个简单的扩展,它能够在 GNOME 综合搜索结果里展示可视化工作代码项目。对于大部分的 VSCode 用户来说,这个扩展可以让用户快速连接到他们的项目,从而节省时间。你可以从这个项目在 GitHub 上的页面来得到更多的信息。
+如果想要更多细节,可以去看这个[项目在 GitHub 上的网页][33]。
+
+### VSCode 搜索插件(VSCode Search Provider)
+
+[VSCode 搜索插件][34]是一个简单的扩展,它能够在 GNOME 综合搜索结果里展示 Visual Studio Code 项目。对于重度 VSCode 用户来说,这个扩展可以让他们快速访问自己的项目,从而节省时间。你可以从这个[项目在 GitHub 上的页面][35]来得到更多的信息。
![GNOME Overview search results showing VSCode projects.][36]
在开发环境方面,你有没有一个最喜欢的扩展呢?发在评论区里,一起来讨论下吧。
-
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/awesome-gnome-extensions-developers/
作者:[Shaun Assam][a]
-译者:[hopefully2333](https://github.com/hopefully2333)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[hopefully2333](https://github.com/hopefully2333)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -91,7 +108,7 @@ via: https://fedoramagazine.org/awesome-gnome-extensions-developers/
[1]:https://fedoramagazine.org/screencast-gnome-extension/
[2]:https://fedoramagazine.org/must-have-gnome-extension-gtile/
[3]:https://fedoramagazine.org/weather-updates-openweather-gnome-shell-extension/
-[4]:https://fedoramagazine.org/install-gnome-shell-extension/
+[4]:https://linux.cn/article-9447-1.html
[5]:https://fedoramagazine.org/wp-content/uploads/2017/08/dockericon.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/08/docker-extension-menu.png
[7]:https://extensions.gnome.org/extension/1065/docker-status/
From 34f75fa14271cc7f21da10ea5a68e876bb961c35 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 22 Apr 2018 22:44:28 +0800
Subject: [PATCH 042/220] PUB:20180411 Awesome GNOME extensions for
developers.md
@hopefully2333 https://linux.cn/article-9568-1.html
---
.../20180411 Awesome GNOME extensions for developers.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180411 Awesome GNOME extensions for developers.md (100%)
diff --git a/translated/tech/20180411 Awesome GNOME extensions for developers.md b/published/20180411 Awesome GNOME extensions for developers.md
similarity index 100%
rename from translated/tech/20180411 Awesome GNOME extensions for developers.md
rename to published/20180411 Awesome GNOME extensions for developers.md
From 8a150ee7d741f61f44fb871287da9e3dee3ec8d8 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 22 Apr 2018 23:32:44 +0800
Subject: [PATCH 043/220] Delete 20180312 How To Quickly Monitor Multiple Hosts
In Linux.md
---
...Quickly Monitor Multiple Hosts In Linux.md | 111 ------------------
1 file changed, 111 deletions(-)
delete mode 100644 sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
diff --git a/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md b/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
deleted file mode 100644
index 9892c0c9e3..0000000000
--- a/sources/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
+++ /dev/null
@@ -1,111 +0,0 @@
-Translating by MjSeven
-
-
-How To Quickly Monitor Multiple Hosts In Linux
-======
-
-
-There are plenty of monitoring tools out there to monitor local and remote Linux systems. One fine example is [**Cockpit**][1]. Those tools, however, are either bit complicated to install and use, at least for the newbie admins. The newbie admin might need to spend some time to figure out how to configure those tools to monitor the systems. If you want a quick and dirty way to monitor multiple hosts at a time in your local area network, you might need to check **“rwho”** tool. It will instantly and quickly will monitor the local and remote systems as soon as you install rwho utility. You configure nothing! All you have to do is to install “rwho” tool on the systems that you want to monitor.
-
-Please don’t think of rwho as a feature-rich, and complete monitoring tool. This is just a simple tool that monitors only the **uptime** , **load** and **logged in users** of a remote system. Using “rwho” utility, we can find who is logged in on which computer, a list of monitored computers with uptime (time since last reboot), how many users are logged in and the load averages for the past 1, 5, and 15 minutes. Nothing more! Nothing less! Also, it will only monitor the systems that are in the same subnet. hence, it is ideal for small and home office network.
-
-### Monitor Multiple Hosts In Linux
-
-Let me explain how rwho works. Every system that uses rwho on the network will broadcast information about itself. The other computers can access these information using rwhod-daemon. So, every computer on the network must have rwho installed. Also, the rwhod-port (e.g. Port 513/UDP) must be allowed through your firewall/router in-order to distribute or access the information of other hosts.
-
-Alright, let us install it.
-
-I tested in on Ubuntu 16.04 LTS server. rwho is available in the default repositories, so we can install it using the APT package manager like below.
-```
-$ sudo apt-get install rwho
-
-```
-
-On RPM based systems such as CentOS, Fedora, RHEL, use this command to install it:
-```
-$ sudo yum install rwho
-
-```
-
-Make sure you have allowed the rwhod-port 513 if you are behind a firewall/router. Also, verify if the rwhod-daemon is running or not using command:
-
-$ sudo systemctl status rwhod
-
-If it is not started already, run the following commands to enable and start rwhod service:
-```
-$ sudo systemctl enable rwhod
-$ sudo systemctl start rwhod
-
-```
-
-Now, it is time to monitor the systems. Run the following command to find out who is logged on which computer:
-```
-$ rwho
-ostechni ostechnix:pts/5 Mar 12 17:41
-root server:pts/0 Mar 12 17:42
-
-```
-
-As you can see, currently there are two systems on my local area network. The local system user is **ostechnix** (Ubuntu 16.04 LTS) and remote system’s user is **root** (CentOS 7). As you might guessed already, rwho is similar to “who” command, but it will monitor the remote systems too.
-
-And, we can find the uptime of all running systems on the network, using command:
-```
-$ ruptime
-ostechnix up 2:17, 1 user, load 0.09, 0.03, 0.01
-server up 1:54, 1 user, load 0.00, 0.01, 0.05
-
-```
-
-Here, ruptime (similar to “uptime” command) displays the total uptime of my Ubuntu (local) and CentOS (remote) systems. Got it? Great! Here is the sample screenshot from my Ubuntu 16.04 LTS system:
-
-![][3]
-
-You can find the information about all other machines in the local area network in the following location:
-```
-$ ls /var/spool/rwho/
-whod.ostechnix whod.server
-
-```
-
-This is a small, yet very useful to find out who is logged in on which computer and the uptime along with system load details.
-
-**Suggested read:**
-
-Please be mindful that this method has one serious loophole. Since information about every computer is broadcasted over the net, everyone in the subnet could potentially get this information. It is okay normally but on the other side this can be a unwanted side-effect when information about the network is distributed to non-authorized users. So, It is strongly recommended to use it in a trusted and protected local area network.
-
-For more details, refer man pages.
-```
-$ man rwho
-
-```
-
-And, that’s all for now. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/cockpit-monitor-administer-linux-servers-via-web-browser/
-[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/rwho.png
-[4]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=reddit (Click to share on Reddit)
-[5]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=twitter (Click to share on Twitter)
-[6]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=facebook (Click to share on Facebook)
-[7]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=google-plus-1 (Click to share on Google+)
-[8]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=linkedin (Click to share on LinkedIn)
-[9]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=pocket (Click to share on Pocket)
-[10]:https://api.whatsapp.com/send?text=How%20To%20Quickly%20Monitor%20Multiple%20Hosts%20In%20Linux%20https%3A%2F%2Fwww.ostechnix.com%2Fhow-to-quickly-monitor-multiple-hosts-in-linux%2F (Click to share on WhatsApp)
-[11]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=telegram (Click to share on Telegram)
-[12]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=email (Click to email this to a friend)
-[13]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/#print (Click to print)
From ef69969d638bbac972ae2170970efda0b3d881da Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 22 Apr 2018 23:33:23 +0800
Subject: [PATCH 044/220] Create 20180312 How To Quickly Monitor Multiple Hosts
In Linux.md
---
...Quickly Monitor Multiple Hosts In Linux.md | 107 ++++++++++++++++++
1 file changed, 107 insertions(+)
create mode 100644 translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
diff --git a/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md b/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
new file mode 100644
index 0000000000..1f6f085feb
--- /dev/null
+++ b/translated/tech/20180312 How To Quickly Monitor Multiple Hosts In Linux.md
@@ -0,0 +1,107 @@
+如何在 Linux 中快速监控多个主机
+=====
+
+
+有很多监控工具可用来监控本地和远程 Linux 系统,一个很好的例子是 [**Cockpit**][1]。但是,这些工具的安装和使用比较复杂,至少对于新手管理员来说是这样。新手管理员可能需要花一些时间来弄清楚如何配置这些工具来监视系统。如果你想要一种快速且粗略的方式来一次性监控局域网中的多台主机,可以了解一下 **“rwho”** 工具。只要安装了 rwho 实用程序,它就能立即快速地监控本地和远程系统。你什么都不用配置!你所要做的就是在要监视的系统上安装 “rwho” 工具。
+
+请不要将 rwho 视为功能丰富且完整的监控工具。这只是一个简单的工具,它只监视远程系统的**正常运行时间**、**负载**和**登录用户**。使用 “rwho” 实用程序,我们可以发现谁在哪台计算机上登录、被监视计算机的列表及其正常运行时间(自上次重新启动以来的时间)、有多少用户登录了,以及过去 1、5、15 分钟的平均负载。不多不少!而且,它只监视同一子网中的系统。因此,它非常适合小型和家庭办公网络。
+
+### 在 Linux 中监控多台主机
+
+让我来解释一下 rwho 是如何工作的。网络中每个使用 rwho 的系统都会广播关于它自己的信息,其他计算机则可以通过 rwhod 守护进程来访问这些信息。因此,网络上的每台计算机都必须安装 rwho。此外,为了分发或访问其他主机的信息,必须允许 rwhod 端口(例如端口 513/UDP)通过防火墙/路由器。
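+
+例如,假设你的系统使用 firewalld(仅为示意,具体命令取决于你所用的防火墙),可以这样放行 513/UDP 端口:
+
+```
+# 永久放行 rwhod 使用的 513/UDP 端口,并重新加载防火墙规则
+sudo firewall-cmd --permanent --add-port=513/udp
+sudo firewall-cmd --reload
+```
+
+使用 ufw 的系统则可以运行 `sudo ufw allow 513/udp`。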
+
+好的,让我们来安装它。
+
+我在 Ubuntu 16.04 LTS 服务器上进行了测试,rwho 在默认仓库中可用,所以,我们可以像下面这样使用 APT 软件包管理器来安装它。
+```
+$ sudo apt-get install rwho
+
+```
+
+在基于 RPM 的系统(如 CentOS、Fedora、RHEL)上,使用以下命令来安装它:
+```
+$ sudo yum install rwho
+
+```
+
+如果你在防火墙/路由器之后,确保已经放行了 rwhod 的 513 端口。另外,使用以下命令验证 rwhod 守护进程是否正在运行:
+
+```
+$ sudo systemctl status rwhod
+```
+
+如果它尚未启动,运行以下命令启用并启动 rwhod 服务:
+```
+$ sudo systemctl enable rwhod
+$ sudo systemctl start rwhod
+
+```
+
+现在是时候来监视系统了。运行以下命令以发现谁在哪台计算机上登录:
+```
+$ rwho
+ostechni ostechnix:pts/5 Mar 12 17:41
+root server:pts/0 Mar 12 17:42
+
+```
+
+正如你所看到的,目前我的局域网中有两个系统。本地系统用户是 **ostechnix** (Ubuntu 16.04 LTS),远程系统的用户是 **root** (CentOS 7)。可能你已经猜到了,rwho 与 “who” 命令相似,但它会监视远程系统。
+
+而且,我们可以使用以下命令找到网络上所有正在运行的系统的正常运行时间:
+```
+$ ruptime
+ostechnix up 2:17, 1 user, load 0.09, 0.03, 0.01
+server up 1:54, 1 user, load 0.00, 0.01, 0.05
+
+```
+
+这里,ruptime(类似于 “uptime” 命令)显示了我的 Ubuntu(本地)和 CentOS(远程)系统的总运行时间。明白了吗?棒极了!以下是我的 Ubuntu 16.04 LTS 系统的示例屏幕截图:
+
+![][3]
+
+你可以在以下位置找到有关局域网中所有其他机器的信息:
+```
+$ ls /var/spool/rwho/
+whod.ostechnix whod.server
+
+```
+
+它很小,但却非常有用,可以发现谁在哪台计算机上登录,以及正常运行时间和系统负载详情。
+
+请注意,这种方法有一个严重的漏洞。由于有关每台计算机的信息都通过网络进行广播,因此该子网中的每个人都可能获得这些信息。这通常没什么问题,但另一方面,当网络的信息被分发给未授权的用户时,这可能会成为一种不受欢迎的副作用。因此,强烈建议只在受信任和受保护的局域网中使用它。
+
+更多的信息,请参考 man 手册页。
+```
+$ man rwho
+
+```
+
+好了,这就是全部了。更多好东西要来了,敬请期待!
+
+干杯!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/
+
+作者:[SK][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/cockpit-monitor-administer-linux-servers-via-web-browser/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/rwho.png
+[4]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=reddit (Click to share on Reddit)
+[5]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=twitter (Click to share on Twitter)
+[6]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=facebook (Click to share on Facebook)
+[7]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=google-plus-1 (Click to share on Google+)
+[8]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=linkedin (Click to share on LinkedIn)
+[9]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=pocket (Click to share on Pocket)
+[10]:https://api.whatsapp.com/send?text=How%20To%20Quickly%20Monitor%20Multiple%20Hosts%20In%20Linux%20https%3A%2F%2Fwww.ostechnix.com%2Fhow-to-quickly-monitor-multiple-hosts-in-linux%2F (Click to share on WhatsApp)
+[11]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=telegram (Click to share on Telegram)
+[12]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/?share=email (Click to email this to a friend)
+[13]:https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/#print (Click to print)
From a9296a5637995574064f30b097cdd1f4c80a17d2 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 23 Apr 2018 08:37:59 +0800
Subject: [PATCH 045/220] translated
---
...How To Check User Created Date On Linux.md | 123 ------------------
...How To Check User Created Date On Linux.md | 121 +++++++++++++++++
2 files changed, 121 insertions(+), 123 deletions(-)
delete mode 100644 sources/tech/20180412 How To Check User Created Date On Linux.md
create mode 100644 translated/tech/20180412 How To Check User Created Date On Linux.md
diff --git a/sources/tech/20180412 How To Check User Created Date On Linux.md b/sources/tech/20180412 How To Check User Created Date On Linux.md
deleted file mode 100644
index 80c3ba51a4..0000000000
--- a/sources/tech/20180412 How To Check User Created Date On Linux.md
+++ /dev/null
@@ -1,123 +0,0 @@
-translating---geekpi
-
-How To Check User Created Date On Linux
-======
-Did you know, how to check user account created date on Linux system? If Yes, what are the ways to do.
-
-Are you getting succeed on this? If yes, how to do?
-
-Basically Linux operating system doesn’t track this information so, what are the alternate ways to get this information.
-
-You might ask why i want to check this?
-
-Yes, in some cases you may want to check this information, at that time this will very helpful for you.
-
-This can be verified using below 7 methods.
-
- * Using /var/log/secure file
- * Using aureport utility
- * Using .bash_logout file
- * Using chage Command
- * Using useradd Command
- * Using passwd Command
- * Using last Command
-
-
-
-### Method-1: Using /var/log/secure file
-
-It stores all security related messages including authentication failures and authorization privileges. It also tracks sudo logins, SSH logins and other errors logged by system security services daemon.
-```
-# grep prakash /var/log/secure
-Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
-Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=501, GID=501, home=/home/prakash, shell=/bin/bash
-Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
-Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
-Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
-
-```
-
-### Method-2: Using aureport utility
-
-The aureport utility allows you to generate summary and columnar reports on the events recorded in Audit log files. By default, all audit.log files in the /var/log/audit/ directory are queried to create the report.
-```
-# aureport --auth | grep prakash
-46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
-47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
-
-```
-
-### Method-3: Using .bash_logout file
-
-The .bash_logout file in your home directory have a special meaning to bash, it provides a way to execute commands when the user logs out of the system.
-
-We can check for the Change date of the .bash_logout file in the user’s home directory. This file is created upon user’s first logout.
-```
-# stat /home/prakash/.bash_logout
- File: `/home/prakash/.bash_logout'
- Size: 18 Blocks: 8 IO Block: 4096 regular file
-Device: 801h/2049d Inode: 256153 Links: 1
-Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
-Access: 2017-03-22 20:15:00.000000000 -0400
-Modify: 2017-03-22 20:15:00.000000000 -0400
-Change: 2018-04-12 04:07:18.283000323 -0400
-
-```
-
-### Method-4: Using chage Command
-
-chage stand for change age. This command allows user to mange password expiry information. The chage command changes the number of days between password changes and the date of the last password change.
-
-This information is used by the system to determine when a user must change his/her password. This will work if the user does not change the password since the account creation date.
-```
-# chage --list prakash
-Last password change : Apr 12, 2018
-Password expires : never
-Password inactive : never
-Account expires : never
-Minimum number of days between password change : 0
-Maximum number of days between password change : 99999
-Number of days of warning before password expires : 7
-
-```
-
-### Method-5: Using useradd Command
-
-useradd command is used to create new accounts in Linux. By default, it wont add user creation date and we have to add date using “Comment” option.
-```
-# useradd -m prakash -c `date +%Y/%m/%d`
-
-# grep prakash /etc/passwd
-prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
-
-```
-
-### Method-6: Using useradd Command
-
-passwd command used assign password to local accounts or users. If the user has not changed his password since the account’s creation date, then you can use the passwd command to find out the date of the last password reset.
-```
-# passwd -S prakash
-prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
-
-```
-
-### Method-7: Using last Command
-
-last command reads the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created.
-```
-# last | grep "prakash"
-prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
-
-```
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
-
-作者:[Prakash Subramanian][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.2daygeek.com/author/prakash/
diff --git a/translated/tech/20180412 How To Check User Created Date On Linux.md b/translated/tech/20180412 How To Check User Created Date On Linux.md
new file mode 100644
index 0000000000..55d97384ec
--- /dev/null
+++ b/translated/tech/20180412 How To Check User Created Date On Linux.md
@@ -0,0 +1,121 @@
+如何在 Linux 上查看用户的创建日期
+======
+你知道如何在 Linux 系统上查看帐户的创建日期吗?如果知道,又有哪些办法呢?
+
+你试过吗?如果成功了,是怎么做到的?
+
+基本上 Linux 系统不会跟踪这些信息,因此,获取这些信息的替代方法是什么?
+
+你可能会问为什么我要查看这个?
+
+是的,在某些情况下,你可能需要查看这些信息,那时这些信息就会对你有帮助。
+
+可以使用以下 7 种方法进行验证。
+
+ * 使用 /var/log/secure
+ * 使用 aureport 工具
+ * 使用 .bash_logout
+ * 使用 chage 命令
+ * 使用 useradd 命令
+ * 使用 passwd 命令
+ * 使用 last 命令
+
+
+
+### 方式 1:使用 /var/log/secure
+
+它存储所有安全相关的消息,包括身份验证失败和授权特权。它还会记录由系统安全服务守护进程所记下的 sudo 登录、SSH 登录和其他错误。
+```
+# grep prakash /var/log/secure
+Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
+Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=501, GID=501, home=/home/prakash, shell=/bin/bash
+Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
+Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
+Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
+
+```
+
+### 方式 2:使用 aureport 工具
+
+aureport 工具可以根据审计日志中记录的事件生成汇总报告和分栏报告。默认情况下,它会查询 /var/log/audit/ 目录中的所有 audit.log 文件来创建报告。
+```
+# aureport --auth | grep prakash
+46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
+47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
+
+```
+
+### 方式 3:使用 .bash_logout
+
+家目录中的 .bash_logout 对 bash 有特殊的含义,它提供了一种在用户退出系统时执行命令的方式。
+
+我们可以查看用户家目录中 .bash_logout 的更改日期。该文件是在用户第一次注销时创建的。
+```
+# stat /home/prakash/.bash_logout
+ File: `/home/prakash/.bash_logout'
+ Size: 18 Blocks: 8 IO Block: 4096 regular file
+Device: 801h/2049d Inode: 256153 Links: 1
+Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
+Access: 2017-03-22 20:15:00.000000000 -0400
+Modify: 2017-03-22 20:15:00.000000000 -0400
+Change: 2018-04-12 04:07:18.283000323 -0400
+
+```
+
+### 方式 4:使用 chage 命令
+
+chage 代表 change age。该命令让用户管理密码过期信息。chage 命令可以更改两次密码修改之间的天数,以及上次密码修改的日期。
+
+系统使用此信息来确定用户何时必须更改其密码。如果用户自帐户创建以来从未更改过密码,这种方法就能奏效。
+```
+# chage --list prakash
+Last password change : Apr 12, 2018
+Password expires : never
+Password inactive : never
+Account expires : never
+Minimum number of days between password change : 0
+Maximum number of days between password change : 99999
+Number of days of warning before password expires : 7
+
+```
+
+### 方式 5:使用 useradd 命令
+
+useradd 命令用于在 Linux 中创建新帐户。默认情况下,它不会添加用户创建日期,我们必须使用 “Comment” 选项添加日期。
+```
+# useradd -m prakash -c `date +%Y/%m/%d`
+
+# grep prakash /etc/passwd
+prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
+
+```
+
+### 方式 6:使用 passwd 命令
+
+passwd 命令用于将密码分配给本地帐户或用户。如果用户在帐户创建后没有修改密码,那么可以使用 passwd 命令查看最后一次密码修改的日期。
+```
+# passwd -S prakash
+prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
+
+```
+
+### 方式 7:使用 last 命令
+
+last 命令读取 /var/log/wtmp,并显示自该文件创建以来所有登录(和退出)用户的列表。
+```
+# last | grep "prakash"
+prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
+
+```
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
+
+作者:[Prakash Subramanian][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/prakash/
From 3bc2b64df46dc40b6c8fbcabac62b53b723b1d7f Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 23 Apr 2018 08:41:45 +0800
Subject: [PATCH 046/220] translating
---
...o Resume Partially Transferred Files Over SSH Using Rsync.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md b/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
index 5e0583ab4f..f4f532720a 100644
--- a/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
+++ b/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
@@ -1,3 +1,5 @@
+translating----geekpi
+
How To Resume Partially Transferred Files Over SSH Using Rsync
======
From ecc1720be0dfd8dc83277a3eb031a80cd5f6306d Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 12:42:10 +0800
Subject: [PATCH 047/220] PRF:20170928 Process Monitoring.md
@qhwdw
---
.../tech/20170928 Process Monitoring.md | 27 ++++++++++---------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/translated/tech/20170928 Process Monitoring.md b/translated/tech/20170928 Process Monitoring.md
index bb528edf5c..b8654a1a42 100644
--- a/translated/tech/20170928 Process Monitoring.md
+++ b/translated/tech/20170928 Process Monitoring.md
@@ -1,42 +1,43 @@
-监视进程
+对进程的监视
======
-由于 fork 了 Mon 项目到 [etbemon [1]][1] 中,我花了一些时间做监视脚本。事实上监视一些事情通常很容易,但是决定监视什么才是困难的部分。进程监视脚本 ps.monitor 是我重新设计过的一个。
+由于复刻了 mon 项目到 [etbemon][1] 中,我花了一些时间做监视脚本。事实上监视一些事情通常很容易,但是决定监视什么才是困难的部分。进程监视脚本 `ps.monitor` 是我重新设计过的一个。
对于进程监视我有一些思路。如果你对进程监视如何做的更好有任何建议,请通过评论区告诉我。
-对于不使用 Mon 的人来说,如果一切 OK 监视脚本就返回 0,而如果有问题它会返回 1,并使用标准输出显示错误信息。虽然我并不知道有谁将 Mon 脚本挂进一个不同的监视系统中,但是,那样做其实很容易实现。我计划去做的一件事情就是,将来实现 mon 和其它的监视系统如 Nagios 之间的互操作性。
+给不使用 mon 的人介绍一下,如果一切 OK 该监视脚本就返回 0,而如果有问题它会返回 1,并使用标准输出显示错误信息。虽然我并不知道有谁将 mon 脚本挂进一个不同的监视系统中,但是,那样做其实很容易实现。我计划去做的一件事情就是,将来实现 mon 和其它的监视系统如 Nagios 之间的互操作性。
### 基本监视
+
```
ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2
```
-我现在计划重写进程监视脚本的一些分类。现在的功能是在命令行上有一个进程名字的列表,它包含了有疑问的实例进程的最小和最大数量。上面的示例是一个监视器的配置。在这里有一些限制,在这个实例中的 "master" 进程引用到 Postfix 的主进程,但是其它的守护进程使用了相同的进程名(这是其中一个错误的名字,因为它太显眼了)。一个显而易见的解决方案是,给一个指定完整路径的选项,这样,那个 /usr/lib/postfix/sbin/master 就可以与其它命名为 “master” 的程序区分开了。
+我现在计划重写该进程监视脚本的某些部分。现在的功能是在命令行上列出进程名字,它包含了要监视的进程的最小和最大实例数量。上面的示例是一个监视的配置。在这里有一些限制,在这个实例中的 `master` 进程指的是 Postfix 的主进程,但是其它的守护进程使用了相同的进程名(这是那些错误的名字之一,因为它太直白了)。一个显而易见的解决方案是,给一个指定完整路径的选项,这样,那个 `/usr/lib/postfix/sbin/master` 就可以与其它命名为 `master` 的程序区分开了。
-下一个问题是那些可能代表多个用户运行的进程。比如 sshd,它有一个以 root 身份运行的单独的进程去接受新的连接请求,以及在每个登入用户的 UID 下运行的进程。因此,作为 root 用户运行的 sshd 进程的数量将多于 root 会话的数量。这意味着如果一个系统管理员直接以 root 身份通过 ssh 登入系统(这是有争议的,但它不是本文的主题—— 只是有些人需要这样做,所以我们支持),然后 master 进程崩溃了(或者系统管理员意外或者故意杀死了它),这时对于进程丢失并不会产生警报。当然正确的做法是监视 22 号端口,查找字符串 "SSH-2.0-OpenSSH_"。有时候,守护进程的多个实例运行在需要单独监视的不同 UIDs 下面。因此,我们需要通过 UID 监视进程的能力。
+下一个问题是那些可能以多个用户身份运行的进程。比如 `sshd`,它有一个以 root 身份运行的单独的进程去接受新的连接请求,以及在每个登入用户的 UID 下运行的进程。因此,作为 root 用户运行的 sshd 进程的数量将比 root 登录会话的数量大 1。这意味着如果一个系统管理员直接以 root 身份通过 `ssh` 登入系统(这是有争议的,但它不是本文的主题—— 只是有些人需要这样做,所以我们必须支持这种情形),然后 master 进程崩溃了(或者系统管理员意外或者故意杀死了它),这时对于该进程丢失并不会产生警报。当然正确的做法是监视 22 号端口,查找字符串 `SSH-2.0-OpenSSH_`。有时候,守护进程的多个实例运行在需要单独监视的不同 UID 下面。因此,我们需要通过 UID 监视进程的能力。
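+
+例如,下面是一个检查 22 号端口上 SSH 横幅的草例(假设系统上装有 `nc`(netcat);这只是一种可行的实现示意,并非 etbemon 的实际脚本):
+
+```
+# 连接 22 端口,读取服务器发来的第一行版本横幅
+banner=$(echo | nc -w 3 localhost 22 | head -n 1)
+case "$banner" in
+    SSH-2.0-OpenSSH_*) exit 0 ;;                          # 正常:按 mon 的约定返回 0
+    *) echo "sshd banner not found: $banner"; exit 1 ;;   # 异常:输出错误信息并返回 1
+esac
+```
+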
-在许多案例中,进程监视可以被替换为对服务端口的监视。因此,如果在 25 号端口上监视,那么有可能意味着,一个运行着 Postfix 的 “master",而不用去理会其它的 "master” 进程。但是对于我而言,我可以在多个监视中很方便地找到它,如果我得到一个关于无法向一个服务器发送邮件的 Jabber 消息,我可以通过这个来自服务器的 Jabber 消息断定 “master" 没有运行,而不需要挨个查找才能发现问题所在。
+在许多情形中,进程监视可以被替换为对服务端口的监视。因此,如果在 25 号端口上监视,那么有可能意味着,Postfix 的 `master` 在运行着,不用去理会其它的 `master` 进程。但是对于我而言,我可以很方便地进行多个监视,如果我得到一个关于无法向一个服务器发送邮件的 Jabber 消息,我可以通过这个来自服务器的 Jabber 消息断定 `master` 没有运行,而不需要挨个查找才能发现问题所在。
### SE Linux
-我想要的一个功能就是,监视 SE Linux 进程上下文,就像监视 UIDs 一样。虽然我对为其它安全系统编写一个测试不感兴趣,但是,我很乐意将别人写好的代码包含进去。因此,不管我做什么,都希望它能与多个安全系统一起灵活地工作。
+我想要的一个功能就是,监视进程的 SE Linux 上下文,就像监视 UID 一样。虽然我对为其它安全系统编写一个测试不感兴趣,但是,我很乐意将别人写好的代码包含进去。因此,不管我做什么,都希望它能与多个安全系统一起灵活地工作。
### 短暂进程
-大多数守护进程在进程启动期间都有一个相同名字的次级进程(second process)。这意味着如果你为了精确地监视一个进程的实例,你或许会收到一个警报说,当 ”logrotate" 或者类似的守护进程重启时有两个进程运行。如果在重启期间,恰好在一个错误的时间进行检查,你也或许会收到一个警报说,有 0 个实例。我现在处理这种情况的方法是,在与 "alertafter 2" 指令一起的次级进程失败事件之前我的服务器不发出警报。当监视处于一个失败的状态时,"failure_interval" 指令允许指定检查的时间间隔,将其设置为一个低值时,意味着在等待一个次级进程失败结果时并不会使提示延迟太多。
+大多数守护进程在进程启动期间都有一个相同名字的次级进程。这意味着如果你为了精确地监视一个进程的一个实例,当 `logrotate` 或者类似的守护进程重启时,你或许会收到一个警报说有两个进程运行。如果在重启期间,恰好在一个错误的时间进行检查,你也或许会收到一个警报说,有 0 个实例。我现在处理这种情况的方法是,在与 `alertafter 2` 指令一起的次级进程失败事件之前我的服务器不发出警报。当监视处于一个失败的状态时,`failure_interval` 指令允许指定检查的时间间隔,将其设置为一个较低值时,意味着在等待一个次级进程失败结果时并不会使提示延迟太多。
-为处理这种情况,我考虑让 ps.monitor 脚本在一个指定的延迟后再次进行自动检查。我认为使用一个单个参数的监视脚本来解决这个问题比起使用两个配置指令的 mon 要好一些。
+为处理这种情况,我考虑让 `ps.monitor` 脚本在一个指定的延迟后再次进行自动检查。我认为使用一个单个参数的监视脚本来解决这个问题比起使用两个配置指令的 mon 要好一些。
### CPU 使用
-Mon 现在有一个 loadavg.monitor 脚本,它用于检查平均负载。但是它并不能捕获一个单个进程使用了太多的 CPU 时间而没有使系统平均负载上升的情况。同样,也没有捕获一个渴望获得 CPU 的进程进入沉默(例如,在家用服务器上 SETI 运行变少)(译者注:SETI,由加州大学伯克利分校创建的一项利用全球的联网计算机的空闲计算资源来搜寻地外文明的科学实验计划)而其它的进程进入一个无限循环状态的情况。解决这种问题的一个方法是,让 ps.monitor 脚本也配置另外的一个选项去监视 CPU 的使用,但是这也可能会让人产生迷惑。另外的选择是,使用一个独立的脚本,它用来报警任何在它的生命周期或者最后几秒中,使用 CPU 时间超过指定百分比的进程,除非它在一个进程白名单中以及是一个豁免这种检查的用户。或者每个普通用户都应该豁免这种检查,因为当它们运行一个文件压缩程序时,你压根就不知道。这里还有一个包含排除的守护进程(像 BOINC)和系统进程(像 gzip,它是由几个定时任务运行的)的简短列表。
+mon 现在有一个 `loadavg.monitor` 脚本,它用于检查平均负载。但是它并不能捕获一个单个进程使用了太多的 CPU 时间而没有使系统平均负载上升的情况。同样,也没有捕获一个渴望获得 CPU 的进程进入沉默(例如,SETI at Home 停止运行)(LCTT 译注:SETI,由加州大学伯克利分校创建的一项利用全球的联网计算机的空闲计算资源来搜寻地外文明的科学实验计划),而其它的进程进入一个无限循环状态的情况。解决这种问题的一个方法是,让 `ps.monitor` 脚本也配置另外的一个选项去监视 CPU 的使用,但是这也可能会让人产生迷惑。另外的选择是,使用一个独立的脚本,它用来报警任何在它的生命周期或者最后几秒中,使用 CPU 时间超过指定百分比的进程,除非它在一个豁免这种检查的进程或用户的白名单中。或者每个普通用户都应该豁免这种检查,因为你压根就不知道他们什么时候运行一个文件压缩程序。也应该有一个包含排除的守护进程(像 BOINC)和系统进程(像 gzip,有几个定时任务会运行它)的简短列表。
### 对例外的监视
-一个常见的编程错误是在 setgid() 之前调用 setuid(),这意味着那个程序没有权限去调用 setgid()。如果没有检查返回代码(而犯这种低级错误的人往往不会去检查返回代码),那么进程会保持较高的权限。检查以 GID 0 而不是 UID 0 运行的进程是很方便的。顺利说一下,对一个 Debian/测试工作站运行的一个快速检查显示,一个使用 GID 0 的进程并没有获得较高的权限,但是可以使用一个 chmod 770 命令去改变它。
+一个常见的编程错误是在 `setgid()` 之前调用 `setuid()`,这意味着那个程序没有权限去调用 `setgid()`。如果没有检查返回代码(而犯这种低级错误的人往往不会去检查返回代码),那么进程会保持较高的权限。检查以 GID 0 而不是 UID 0 运行的进程是很方便的。顺便说一下,对一个 Debian/Testing 工作站运行的一个快速检查显示,一个使用 GID 0 的进程并没有获得较高的权限,但是可以使用一个 `chmod 770` 命令去改变它。
-在一个 SE Linux 系统上,应该只有一个进程与 init_t 域一起运行。目前在运行守护进程(比如,mysqld 和 tor)的扩展系统中,并不会发生策略与守护进程服务文件所请求的 systemd 的最新功能不匹配的情况。这样的问题将会不断发生,我们需要对它进行自动化测试。
+在一个 SE Linux 系统上,应该只有一个进程与 `init_t` 域一起运行。目前在运行守护进程(比如,mysqld 和 tor)的 Debian Stretch 系统中,并不会发生策略与守护进程服务文件所请求的 systemd 的最新功能不匹配的情况。这样的问题将会不断发生,我们需要对它进行自动化测试。
对配置错误的自动测试可能会影响系统安全,这是一个很大的问题,我将来或许写一篇关于这方面的单独的博客文章。
@@ -46,7 +47,7 @@ via: https://etbe.coker.com.au/2017/09/28/process-monitoring/
作者:[Andrew][a]
译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 9e0f34a54869ca0a2a73a2320c82045259192b44 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 12:43:10 +0800
Subject: [PATCH 048/220] PUB:20170928 Process Monitoring.md
@qhwdw https://linux.cn/article-9569-1.html
---
{translated/tech => published}/20170928 Process Monitoring.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170928 Process Monitoring.md (100%)
diff --git a/translated/tech/20170928 Process Monitoring.md b/published/20170928 Process Monitoring.md
similarity index 100%
rename from translated/tech/20170928 Process Monitoring.md
rename to published/20170928 Process Monitoring.md
From 4d1c10d46ac5d2200aa935354e72c97ac2966b3c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 12:58:18 +0800
Subject: [PATCH 049/220] PRF:20180130 Install AWFFull web server log analysis
application on ubuntu 17.10.md
@geekpi
---
...og analysis application on ubuntu 17.10.md | 97 +++++++++----------
1 file changed, 44 insertions(+), 53 deletions(-)
diff --git a/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
index 50f2f00451..6245810c69 100644
--- a/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
+++ b/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
@@ -1,88 +1,79 @@
在 Ubuntu 17.10 上安装 AWFFull Web 服务器日志分析应用程序
======
-
-AWFFull 是基于 “Webalizer” 的 Web 服务器日志分析程序。AWFFull 以 HTML 格式生成使用统计信息以便用浏览器查看。结果以柱状和图形两种格式显示,这有利于解释。它提供每年、每月、每日和每小时使用统计数据,并显示网站、URL、referrer、user agent(浏览器)、用户名、搜索字符串、进入/退出页面和国家(如果一些信息不存在于处理后日志中那么就没有)。AWFFull 支持 CLF(通用日志格式)日志文件,以及由 NCSA 和其他人定义的组合日志格式,它还能只能地处理这些格式的变体。另外,AWFFull 还支持 wu-ftpd xferlog 格式的日志文件,它能够分析 ftp 服务器和 squid 代理日志。日志也可以通过 gzip 压缩。
+AWFFull 是基于 “Webalizer” 的 Web 服务器日志分析程序。AWFFull 以 HTML 格式生成使用统计信息以便用浏览器查看。结果以柱状和图形两种格式显示,这有利于解释数据。它提供每年、每月、每日和每小时的使用统计数据,并显示网站、URL、referrer、user agent(浏览器)、用户名、搜索字符串、进入/退出页面和国家(如果一些信息不存在于处理后日志中那么就没有)。AWFFull 支持 CLF(通用日志格式)日志文件,以及由 NCSA 等定义的组合日志格式,它还能智能地处理这些格式的变体。另外,AWFFull 还支持 wu-ftpd xferlog 格式的日志文件,它能够分析 ftp 服务器和 squid 代理日志。日志也可以通过 gzip 压缩。
如果检测到压缩日志文件,它将在读取时自动解压缩。压缩日志必须是 .gz 扩展名的标准 gzip 压缩。
### 对于 Webalizer 的修改
-AWFFull 基于 Webalizer 的代码,并有许多大的和小的变化。包括:
+AWFFull 基于 Webalizer 的代码,并有许多或大或小的变化。包括:
-o 不止原始统计数据:利用已发布的公式,提供额外的网站使用情况。
+- 不止原始统计数据:利用已发布的公式,提供额外的网站使用情况。
+- GeoIP IP 地址能更准确地检测国家。
+- 可缩放的图形
+- 与 GNU gettext 集成,能够轻松翻译。目前支持 32 种语言。
+- 在首页显示超过 12 个月的网站历史记录。
+- 额外的页面计数跟踪和排序。
+- 一些小的可视化调整,包括 Geolizer 用量中使用 Kb、Mb。
+- 额外的用于 URL 计数、进入和退出页面、站点的饼图
+- 图形上的水平线更有意义,更易于阅读。
+- User Agent 和 Referral 跟踪现在通过 PAGES 而非 HITS 进行计算。
+- 现在支持 GNU 风格的长命令行选项(例如 --help)。
+- 可以通过排除“什么不是”以及原始的“什么是”来选择页面。
+- 对被分析站点的请求以匹配的引用 URL 显示。
+- 404 错误表,并且可以生成引用 URL。
+- 生成的 html 可以使用外部 CSS 文件。
+- POST 分析总结使得手动优化配置文件性能更简单。
+- 可以将指定的 IP 和地址分配给指定的国家。
+- 便于使用其他工具详细分析的转储选项。
+- 支持检测并处理 Lotus Domino v6 日志。
-o GeoIP IP 地址能更准确地检测国家。
+### 在 Ubuntu 17.10 上安装 AWFFull
-o 可缩放的图形
+```
+sudo apt-get install awffull
+```
-o 与 GNU gettext 集成,能够轻松翻译。目前支持 32 种语言。
+### 配置 AWFFull
-o 在首页显示超过 12 个月的网站历史记录。
+你必须在 `/etc/awffull/awffull.conf` 中编辑 AWFFull 配置文件。如果你在同一台计算机上运行多个虚拟站点,则可以制作多个默认配置文件的副本。
-o 额外的页面计数跟踪和排序。
+```
+sudo vi /etc/awffull/awffull.conf
+```
-o 一些小的可视化调整,包括 Geolizer 使用在卷中使用 Kb、Mb。
+确保有下面这几行:
-o 额外的用于 URL 计数、进入和退出页面、站点的饼图
+```
+LogFile /var/log/apache2/access.log.1
+OutputDir /var/www/html/awffull
+```
-o 图形上的水平线更有意义,更易于阅读。
+保存并退出文件。
-o User Agent 和 Referral 跟踪现在通过 PAGES 而非 HITS 进行计算。
+你可以使用以下命令运行 awffull。
-o 现在支持 GNU 风格的长命令行选项(例如 --help)。
+```
+awffull -c [your config file name]
+```
-o 可以通过排除“什么不是”以及原始的“什么是”来选择页面。
+这将在 `/var/www/html/awffull` 目录下创建所有必需的文件,以便你可以通过 http://serverip/awffull/ 来访问它们。
-o 对被分析站点的请求以匹配的引用 URL 显示。
+你应该看到类似于下面的页面:
-o 404 错误表,并且可以生成引用 URL。
+
-o 外部 CSS 文件可以与生成的 html 一起使用。
-
-o POST 分析总结使得手动优化配置文件性能更简单。
-
-o 指定的 IP 和地址可以分配给指定的国家。
-
-o 便于使用其他工具详细分析的转储选项。
-
-o 支持检测并处理 Lotus Domino v6 日志。
-
-**在 Ubuntu 17.10 上安装 awffull**
-
-> sudo apt-get install awffull
-
-### 配置 AWFFULL
-
-你必须在 /etc/awffull/awffull.conf 中编辑 awffull 配置文件。如果你在同一台计算机上运行多个虚拟站点,则可以制作多个默认配置文件的副本。
-
-> sudo vi /etc/awffull/awffull.conf
-
-确保有下面这几行
-
-> LogFile /var/log/apache2/access.log.1
-> OutputDir /var/www/html/awffull
-
-保存并退出文件
-
-你可以使用以下命令运行 awffull
-
-> awffull -c [your config file name]
-
-这将在 /var/www/html/awffull 目录下创建所有必需的文件,以便你可以使用 http://serverip/awffull/
-
-你应该看到类似于下面的页面
如果你有更多站点,你可以使用 shell 和计划任务自动化这个过程。
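+
+下面是一个自动化处理多个站点的简单示意(假设每个站点各有一份配置文件,路径仅为示例):
+
+```
+#!/bin/bash
+# 依次为每个站点的配置文件运行 awffull
+for conf in /etc/awffull/sites/*.conf; do
+    awffull -c "$conf"
+done
+```
+
+再配合一条计划任务(例如 crontab 里的 `0 1 * * * /usr/local/bin/awffull-all.sh`,脚本路径为示例)即可每天自动生成报告。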
-
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html
作者:[ruchi][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 91ea6126c78d773f342e5e9cc30e4cf99aaccbb4 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 12:58:33 +0800
Subject: [PATCH 050/220] PUB:20180130 Install AWFFull web server log analysis
application on ubuntu 17.10.md
@geekpi
---
...AWFFull web server log analysis application on ubuntu 17.10.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md (100%)
diff --git a/translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md b/published/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
similarity index 100%
rename from translated/tech/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
rename to published/20180130 Install AWFFull web server log analysis application on ubuntu 17.10.md
From dd8875e093df8d21f91a4a533d6c9231d873b5f8 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:12:03 +0800
Subject: [PATCH 051/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Getting=20started?=
=?UTF-8?q?=20with=20Anaconda=20Python=20for=20data=20science?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...d with Anaconda Python for data science.md | 117 ++++++++++++++++++
1 file changed, 117 insertions(+)
create mode 100644 sources/tech/20180418 Getting started with Anaconda Python for data science.md
diff --git a/sources/tech/20180418 Getting started with Anaconda Python for data science.md b/sources/tech/20180418 Getting started with Anaconda Python for data science.md
new file mode 100644
index 0000000000..43cd89f17c
--- /dev/null
+++ b/sources/tech/20180418 Getting started with Anaconda Python for data science.md
@@ -0,0 +1,117 @@
+Getting started with Anaconda Python for data science
+======
+
+
+Like many others, I've been trying to get involved in the rapidly expanding field of data science. When I took Udemy courses on the [R][1] and [Python][2] programming languages, I downloaded and installed the applications independently. As I was trying to work through the challenges of installing data science packages like [NumPy][3] and [Matplotlib][4] and solving the various dependencies, I learned about the [Anaconda Python distribution][5].
+
+Anaconda is a complete, [open source][6] data science package with a community of over 6 million users. It is easy to [download][7] and install, and it is supported on Linux, MacOS, and Windows.
+
+I appreciate that Anaconda eases the frustration of getting started for new users. The distribution comes with more than 1,000 data packages as well as the [Conda][8] package and virtual environment manager, so it eliminates the need to learn to install each library independently. As Anaconda's website says, "The Python and R conda packages in the Anaconda Repository are curated and compiled in our secure environment so you get optimized binaries that 'just work' on your system."
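+
+For example, if you later need a library that isn't already bundled, Conda can resolve and install it (and its dependencies) with one command. A minimal sketch; the package names are just examples:
+
+```
+$ conda install numpy matplotlib
+
+```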
+
+I recommend using [Anaconda Navigator][9], a desktop graphical user interface (GUI) system that includes links to all the applications included with the distribution including [RStudio][10], [iPython][11], [Jupyter Notebook][12], [JupyterLab][13], [Spyder][14], [Glue][15], and [Orange][16]. The default environment is Python 3.6, but you can also easily install Python 3.5, Python 2.7, or R. The [documentation][9] is incredibly detailed and there is an excellent community of users for additional support.
+
+### Installing Anaconda
+
+To install Anaconda on my Linux laptop (an i3 with 4GB of RAM), I downloaded the Anaconda 5.1 Linux installer and ran `md5sum` to verify the file:
+```
+$ md5sum Anaconda3-5.1.0-Linux-x86_64.sh
+
+```
+
+Then I followed the directions in the [documentation][17], which instructed me to issue the following Bash command whether I was in the Bash shell or not:
+```
+$ bash Anaconda3-5.1.0-Linux-x86_64.sh
+
+```
+
+I followed the installation directions exactly, and the well-scripted install took about five minutes to complete. When the installation prompted: "Do you wish the installer to prepend the Anaconda install location to PATH in your `/home//.bashrc`?" I allowed it and restarted the shell, which I found was necessary for the `.bashrc` environment to work correctly.
+
+After completing the install, I launched Anaconda Navigator by entering the following at the command prompt in the shell:
+```
+$ anaconda-navigator
+
+```
+
+Every time Anaconda Navigator launches, it checks to see if new software is available and prompts you to update if necessary.
+
+
+
+Anaconda updated successfully without needing to return to the command line. Anaconda's initial launch was a little slow; that plus the update meant it took a few additional minutes to get started.
+
+You can also update manually by entering the following:
+```
+$ conda update anaconda-navigator
+
+```
+
+### Exploring and installing applications
+
+Once Navigator launched, I was free to explore the range of applications included with Anaconda Distribution. According to the documentation, the 64-bit Python 3.6 version of Anaconda [supports 499 packages][18]. The first application I explored was [Jupyter QtConsole][19]. The easy-to-use GUI supports inline figures and syntax highlighting.
+
+
+
+Jupyter Notebook is included with the distribution, so (unlike other Python environments I have used) there is no need for a separate install.
+
+
+
+I was already familiar with RStudio. It's not installed by default, but it's easy to add with the click of a mouse. Other applications, including JupyterLab, Orange, Glue, and Spyder, can be launched or installed with just a mouse click.
+
+
+
+One of the Anaconda distribution's strengths is the ability to create multiple environments. For example, if I wanted to create a Python 2.7 environment instead of the default Python 3.6, I would enter the following in the shell:
+```
+$ conda create -n py27 python=2.7 anaconda
+
+```
+
+Conda takes care of the entire install; to launch it, just open the shell and enter:
+```
+$ anaconda-navigator
+
+```
+
+Select the **py27** environment from the "Applications on" drop-down in the Anaconda GUI.
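+
+If you prefer the shell, you can also switch into the new environment directly; this is a sketch assuming the `source activate` syntax used by Conda at the time:
+
+```
+$ source activate py27
+(py27) $ python --version
+
+```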
+
+
+
+### Learn more
+
+There's a wealth of information available about Anaconda if you'd like to know more. You can start by searching the [Anaconda Community][20] and its [mailing list][21].
+
+Are you using Anaconda Distribution and Navigator? Let us know your impressions in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/getting-started-anaconda-python
+
+作者:[Don Watkins][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/don-watkins
+[1]:https://www.r-project.org/
+[2]:https://www.python.org/
+[3]:http://www.numpy.org/
+[4]:https://matplotlib.org/
+[5]:https://www.anaconda.com/distribution/
+[6]:https://docs.anaconda.com/anaconda/eula
+[7]:https://www.anaconda.com/download/#linux
+[8]:https://conda.io/
+[9]:https://docs.anaconda.com/anaconda/navigator/
+[10]:https://www.rstudio.com/
+[11]:https://ipython.org/
+[12]:http://jupyter.org/
+[13]:https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906
+[14]:https://spyder-ide.github.io/
+[15]:http://glueviz.org/
+[16]:https://orange.biolab.si/
+[17]:https://docs.anaconda.com/anaconda/install/linux
+[18]:https://docs.anaconda.com/anaconda/packages/py3.6_linux-64
+[19]:http://qtconsole.readthedocs.io/en/stable/
+[20]:https://www.anaconda.com/community/
+[21]:https://groups.google.com/a/continuum.io/forum/#!forum/anaconda
From ec47277a17d6dc369834a258f105bb3f455cec22 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:14:55 +0800
Subject: [PATCH 052/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20Linux=20Fil?=
=?UTF-8?q?esystem=20Explained?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...20180418 The Linux Filesystem Explained.md | 245 ++++++++++++++++++
1 file changed, 245 insertions(+)
create mode 100644 sources/tech/20180418 The Linux Filesystem Explained.md
diff --git a/sources/tech/20180418 The Linux Filesystem Explained.md b/sources/tech/20180418 The Linux Filesystem Explained.md
new file mode 100644
index 0000000000..c515abc266
--- /dev/null
+++ b/sources/tech/20180418 The Linux Filesystem Explained.md
@@ -0,0 +1,245 @@
+The Linux Filesystem Explained
+======
+
+
+Back in 1996 I learned how to install software on my spanking new Linux before really understanding the topography of the filesystem. This turned out to be a problem, not so much for programs, because they would just magically work even though I hadn't a clue of where the actual executable files landed. The problem was the documentation.
+
+You see, back then, Linux was not the intuitive, user-friendly system it is today. You had to read a lot. You had to know things about the frequency rate of your CRT monitor and the ins and outs of your noisy dial-up modem, among hundreds of other things. I soon realized I would need to spend some time getting a handle on how the directories were organized and what all their exotic names like _/etc_ (not for miscellaneous files), _/usr_ (not for user files), and _/bin_ (not a trash can) meant.
+
+This tutorial will help you get up to speed faster than I did.
+
+### Structure
+
+It makes sense to explore the Linux filesystem from a terminal window, not because the author is a grumpy old man and resents new kids and their pretty graphical tools -- although there is some truth to that -- but because a terminal, despite being text-only, has better tools to show the map of Linux's directory tree.
+
+In fact, that is the name of the first tool you'll install to help you on the way: _tree_. If you are using Ubuntu or Debian, you can do:
+```
+sudo apt install tree
+
+```
+
+On Red Hat or Fedora, do:
+```
+sudo dnf install tree
+
+```
+
+For SUSE/openSUSE use `zypper`:
+```
+sudo zypper install tree
+
+```
+
+For Arch-like distros (Manjaro, Antergos, etc.) use:
+```
+sudo pacman -S tree
+
+```
+
+... and so on.
+
+Once installed, stay in your terminal window and run _tree_ like this:
+```
+tree /
+
+```
+
+The `/` in the instruction above refers to the _root_ directory. The root directory is the one from which all other directories branch off. When you run `tree` and tell it to start with _/_ , you will see the whole directory tree, all directories and all the subdirectories in the whole system, with all their files, fly by.
+
+If you have been using your system for some time, this may take a while, because, even if you haven't generated many files yourself, a Linux system and its apps are always logging, caching, and storing temporary files. The number of entries in the file system can grow quite quickly.
+
+Don't feel overwhelmed, though. Instead, try this:
+```
+tree -L 1 /
+
+```
+
+And you should see what is shown in Figure 1.
+
+
+
+The instruction above can be translated as " _show me only the 1st Level of the directory tree starting at / (root)_ ". The `-L` option tells `tree` how many levels down you want to see.
+
+Most Linux distributions will show you the same or a very similar layout to what you can see in the image above. This means that even if you feel confused now, master this, and you will have a handle on most, if not all, Linux installations in the whole wide world.
+
+To get you started on the road to mastery, let's look at what each directory is used for. While we go through each, you can peek at their contents using ls.
+
+### Directories
+
+From top to bottom, the directories you are seeing are as follows.
+
+#### _/bin_
+
+_/bin_ is the directory that contains _bin_ aries, that is, some of the applications and programs you can run. You will find the _ls_ program mentioned above in this directory, as well as other basic tools for making and removing files and directories, moving them around, and so on. There are more _bin_ directories in other parts of the file system tree, but we'll be talking about those in a minute.
+
+#### _/boot_
+
+The _/boot_ directory contains files required for starting your system. Do I have to say this? Okay, I'll say it: **DO NOT TOUCH!**. If you mess up one of the files in here, you may not be able to run your Linux and it is a pain to repair. On the other hand, don't worry too much about destroying your system by accident: you have to have superuser privileges to do that.
+
+#### _/dev_
+
+_/dev_ contains _dev_ ice files. Many of these are generated at boot time or even on the fly. For example, if you plug in a new webcam or a USB pendrive into your machine, a new device entry will automagically pop up here.
+
+#### _/etc_
+
+_/etc_ is the directory where names start to get confusing. _/etc_ gets its name from the earliest Unixes and it was literally "et cetera" because it was the dumping ground for system files administrators were not sure where else to put.
+
+Nowadays, it would be more appropriate to say that _etc_ stands for "Everything to configure," as it contains most, if not all system-wide configuration files. For example, the files that contain the name of your system, the users and their passwords, the names of machines on your network and when and where the partitions on your hard disks should be mounted are all in here. Again, if you are new to Linux, it may be best if you don't touch too much in here until you have a better understanding of how things work.
+
+#### _/home_
+
+_/home_ is where you will find your users' personal directories. In my case, under _/home_ there are two directories: _/home/paul_ , which contains all my stuff; and _/home/guest_ , in case anybody needs to borrow my computer.
+
+#### _/lib_
+
+_/lib_ is where _lib_ raries live. Libraries are files containing code that your applications can use. They contain snippets of code that applications use to draw windows on your desktop, control peripherals, or send files to your hard disk.
+
+There are more _lib_ directories scattered around the file system, but this one, the one hanging directly off of _/_ is special in that, among other things, it contains the all-important kernel modules. The kernel modules are drivers that make things like your video card, sound card, WiFi, printer, and so on, work.
+
+#### _/media_
+
+The _/media_ directory is where external storage will be automatically mounted when you plug it in and try to access it. As opposed to most of the other items on this list, _/media_ does not hail back to 1970s, mainly because inserting and detecting storage (pendrives, USB hard disks, SD cards, external SSDs, etc) on the fly, while a computer is running, is a relatively new thing.
+
+#### _/mnt_
+
+The _/mnt_ directory, however, is a bit of remnant from days gone by. This is where you would manually mount storage devices or partitions. It is not used very often nowadays.
+
+#### _/opt_
+
+The _/opt_ directory is often where software you compile (that is, you build yourself from source code and do not install from your distribution repositories) sometimes lands. Applications will end up in the _/opt/bin_ directory and libraries in the _/opt/lib_ directory.
+
+A slight digression: another place where applications and libraries end up in is _/usr/local_ , When software gets installed here, there will also be _/usr/local/bin_ and _/usr/local/lib_ directories. What determines which software goes where is how the developers have configured the files that control the compilation and installation process.
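+
+As a rough sketch of that process with a classic autotools-style build (assuming the project ships a `configure` script), the prefix you pass is what decides where everything lands:
+
+```
+./configure --prefix=/usr/local   # or, e.g., --prefix=/opt/myapp
+make
+sudo make install                 # binaries land in <prefix>/bin, libraries in <prefix>/lib
+
+```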
+
+#### _/proc_
+
+_/proc_ , like _/dev_ is a virtual directory. It contains information about your computer, such as information about your CPU and the kernel your Linux system is running. As with _/dev_ , the files and directories are generated when your computer starts, or on the fly, as your system is running and things change.
+
+#### _/root_
+
+_/root_ is the home directory of the superuser (also known as the "Administrator") of the system. It is separate from the rest of the users' home directories BECAUSE YOU ARE NOT MEANT TO TOUCH IT. Keep your own stuff in your own directories, people.
+
+#### _/run_
+
+_/run_ is another new directory. System processes use it to store temporary data for their own nefarious reasons. This is another one of those DO NOT TOUCH folders.
+
+#### _/sbin_
+
+_/sbin_ is similar to _/bin_ , but it contains applications that only the superuser (hence the initial _s_ ) will need. You can use these applications with the `sudo` command that temporarily concedes you superuser powers on many distributions. _/sbin_ typically contains tools that can install stuff, delete stuff and format stuff. As you can imagine, some of these instructions are lethal if you use them improperly, so handle with care.
+
+#### _/usr_
+
+The _/usr_ directory was where users' home directories were originally kept back in the early days of UNIX. However, now _/home_ is where users keep their stuff, as we saw above. These days, _/usr_ contains a mish-mash of directories which in turn contain applications, libraries, documentation, wallpapers, icons and a long list of other stuff that needs to be shared by applications and services.
+
+You will also find _bin_ , _sbin_ and _lib_ directories in _/usr_. What is the difference with their root-hanging cousins? Not much nowadays. Originally, the _/bin_ directory (hanging off of root) would contain very basic commands, like `ls`, `mv` and `rm`; the kind of commands that would come pre-installed in all UNIX/Linux installations, the bare minimum to run and maintain a system. _/usr/bin_ on the other hand would contain stuff the users would install and run to use the system as a work station, things like word processors, web browsers, and other apps.
+
+But many modern Linux distributions just put everything into _/usr/bin_ and have _/bin_ point to _/usr/bin_ just in case erasing it completely would break something. So, while Debian, Ubuntu and Mint still keep _/bin_ and _/usr/bin_ (and _/sbin_ and _/usr/sbin_ ) separate; others, like Arch and its derivatives, just have one "real" directory for binaries, _/usr/bin_ , and the rest of the _*bin_ s are "fake" directories that point to _/usr/bin_.
+
+#### _/srv_
+
+The _/srv_ directory contains data for servers. If you are running a web server from your Linux box, your HTML files for your sites would go into _/srv/http_ (or _/srv/www_ ). If you were running an FTP server, your files would go into _/srv/ftp_.
+
+#### _/sys_
+
+_/sys_ is another virtual directory like _/proc_ and _/dev_ and also contains information from devices connected to your computer.
+
+In some cases you can also manipulate those devices. I can, for example, change the brightness of the screen of my laptop by modifying the value stored in the _/sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-eDP-1/intel_backlight/brightness_ file (on your machine you will probably have a different file). But to do that you have to become superuser. The reason for that is, as with so many other virtual directories, messing with the contents and files in _/sys_ can be dangerous and you can trash your system. DO NOT TOUCH until you are sure you know what you are doing.
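+
+For instance, you can peek at the current and maximum brightness values without any risk (backlight paths vary between machines, so treat these as examples):
+
+```
+cat /sys/class/backlight/*/brightness
+cat /sys/class/backlight/*/max_brightness
+
+```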
+
+#### _/tmp_
+
+_/tmp_ contains temporary files, usually placed there by applications that you are running. The files and directories often (not always) contain data that an application doesn't need right now, but may need later on. So, to free up RAM, it gets stored here.
+
+You can also use _/tmp_ to store your own temporary files -- _/tmp_ is one of the few directories hanging off of _/_ which you can actually interact with without becoming superuser. The problem is that applications sometimes don't come back to retrieve and delete files and directories and _/tmp_ can often end up eating up space on your hard disk, filling it up with junk. Later on in this series we'll see how to clean it up.
+
+#### _/var_
+
+_/var_ was originally given its name because its contents was deemed _variable_ , in that it changed frequently. Today it is a bit of a misnomer because there are many other directories that also contain data that changes frequently, especially the virtual directories we saw above.
+
+Be that as it may, _/var_ contains things like logs in the _/var/log_ subdirectories. Logs are files that register events that happen on the system. If something fails in the kernel, it will be logged in a file in _/var/log_ ; if someone tries to break into your computer from outside, your firewall will also log the attempt here. It also contains _spools_ for tasks. These "tasks" can be the jobs you send to a shared printer when you have to wait because another user is printing a long document, or mail that is waiting to be delivered to users on the system.
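+
+A quick way to peek at those logs (file names differ between distributions; these are common examples):
+
+```
+ls /var/log
+sudo tail -f /var/log/syslog   # Debian-family; Red Hat-family systems use /var/log/messages
+
+```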
+
+Your system may have some more directories we haven't mentioned above. In the screenshot, for example, there is a _/snap_ directory. That's because the shot was captured on an Ubuntu system. Ubuntu has recently incorporated [snap][1] packages as a way of distributing software. The _/snap_ directory contains all the files and the software installed from snaps.
+
+### Digging Deeper
+
+That is the root directory covered, but many of the subdirectories lead to their own set of files and subdirectories. Figure 2 gives you an overall idea of what the basic filesystem tree looks like (the image is kindly supplied under a CC By-SA license by Paul Gardner), and [Wikipedia has a breakdown with a summary of what each directory is used for][2].
+
+
+![filesystem][4]
+
+Figure 2: Standard Unix filesystem hierarchy.
+
+[Used with permission][5]
+
+Paul Gardner
+
+To explore the filesystem yourself, use the `cd` command:
+```
+cd <directory>
+
+```
+
+will take you to the directory of your choice ( _cd_ stands for _change directory_ ).
+
+If you get confused,
+```
+pwd
+
+```
+
+will always tell you where you are ( _pwd_ stands for _print working directory_ ). Also,
+```
+cd
+
+```
+
+with no options or parameters, will take you back to your own home directory, where things are safe and cosy.
+
+Finally,
+```
+cd ..
+
+```
+
+will take you up one level, getting you one level closer to the _/_ root directory. If you are in _/usr/share/wallpapers_ and run `cd ..`, you will move up to _/usr/share_.
+
+To see what a directory contains, use
+```
+ls <directory>
+
+```
+
+or simply
+```
+ls
+
+```
+
+to list the contents of the directory you are in right now.
+
+And, of course, you always have `tree` to get an overview of what lies within a directory. Try it on _/usr/share_ -- there is a lot of interesting stuff in there.
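+
+For example, to keep the output of `tree` manageable, limit how deep it digs and page the result:
+```
+# Show only the first two levels below /usr/share
+tree -L 2 /usr/share | less
+
+```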
+
+### Conclusion
+
+Although there are minor differences between Linux distributions, the layout of their filesystems is mercifully similar. So much so that you could say: once you know one, you know them all. And the best way to know the filesystem is to explore it. So go forth with `tree`, `ls`, and `cd` into uncharted territory.
+
+You cannot damage your filesystem just by looking at it, so move from one directory to another and take a look around. Soon you'll discover that the Linux filesystem and how it is laid out really makes a lot of sense, and you will intuitively know where to find apps, documentation, and other resources.
+
+Learn more about Linux through the free ["Introduction to Linux" ][6]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained
+
+作者:[PAUL BROWN][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/bro66
+[1]:https://www.ubuntu.com/desktop/snappy
+[2]:https://en.wikipedia.org/wiki/Unix_filesystem#Conventional_directory_layout
+[3]:https://www.linux.com/files/images/standard-unix-filesystem-hierarchypng
+[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/standard-unix-filesystem-hierarchy.png?itok=CVqmyk6P (filesystem)
+[5]:https://www.linux.com/licenses/category/used-permission
+[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 6bf0b86b69d43f7f6875a7df927edcf2e151cfb5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:19:07 +0800
Subject: [PATCH 053/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20tips=20for=20?=
=?UTF-8?q?organizing=20your=20open=20source=20project's=20workflow=20on?=
=?UTF-8?q?=20GitHub?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...pen source project-s workflow on GitHub.md | 109 ++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
diff --git a/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
new file mode 100644
index 0000000000..29e4ea2f48
--- /dev/null
+++ b/sources/talk/20180419 3 tips for organizing your open source project-s workflow on GitHub.md
@@ -0,0 +1,109 @@
+3 tips for organizing your open source project's workflow on GitHub
+======
+
+
+
+Managing an open source project is challenging work, and the challenges grow as a project grows. Eventually, a project may need to meet different requirements and span multiple repositories. These problems aren't technical, but they are important to solve to scale a technical project. [Business process management][1] methodologies such as agile and [kanban][2] bring a method to the madness. Developers and managers can make realistic decisions for estimating deadlines and team bandwidth with an organized development focus.
+
+At the [UNICEF Office of Innovation][3], we use GitHub project boards to organize development on the MagicBox project. [MagicBox][4] is a full-stack application and open source platform to serve and visualize data for decision-making in humanitarian crises and emergencies. The project spans multiple GitHub repositories and works with multiple developers. With GitHub project boards, we organize our work across multiple repositories to better understand development focus and team bandwidth.
+
+Here are three tips from the UNICEF Office of Innovation on how to organize your open source projects with the built-in project boards on GitHub.
+
+### 1\. Bring development discussion to issues and pull requests
+
+Transparency is a critical part of an open source community. When mapping out new features or milestones for a project, the community needs to be able to see why a decision was made or why a specific direction was chosen. Filing new GitHub issues for features and milestones is an easy way for someone to follow the project direction. GitHub issues and pull requests are the cards (or building blocks) of project boards. To be successful with GitHub project boards, you need to use issues and pull requests.
+
+
+![GitHub issues for magicbox-maps, MagicBox's front-end application][6]
+
+GitHub issues for magicbox-maps, MagicBox's front-end application.
+
+The UNICEF MagicBox team uses GitHub issues to track ongoing development milestones and other tasks to revisit. The team files new GitHub issues for development goals, feature requests, or bugs. These goals or features may come from external stakeholders or the community. We also use the issues as a place for discussion on those tasks. This makes it easy to cross-reference in the future and visualize upcoming work on one of our projects.
+
+Once you begin using GitHub issues and pull requests as a way of discussing and using your project, organizing with project boards becomes easier.
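+
+Because issues are ordinary GitHub objects, you can also file them from a script. Here is a minimal sketch using `curl` against the GitHub REST API; `OWNER/REPO` and the token are placeholders you would substitute:
+```
+# File a new issue from the command line
+curl -X POST \
+  -H "Authorization: token $GITHUB_TOKEN" \
+  https://api.github.com/repos/OWNER/REPO/issues \
+  -d '{"title": "Discuss the next milestone", "body": "Details and discussion go here."}'
+
+```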
+
+### 2\. Set up kanban-style project boards
+
+GitHub issues and pull requests are the first step. After you begin using them, it may become harder to visualize what work is in progress and what work is yet to begin. [GitHub's project boards][7] give you a platform to visualize and organize cards into different columns.
+
+There are two types of project boards available:
+
+ * **Repository** : Boards for use in a single repository
+ * **Organization** : Boards for use in a GitHub organization across multiple repositories (but private to organization members)
+
+
+
+The choice you make depends on the structure and size of your projects. The UNICEF MagicBox team uses boards for development and documentation at the organization level, and then repository-specific boards for focused work (like our [community management board][8]).
+
+#### Creating your first board
+
+Project boards are found on your GitHub organization page or on a specific repository. You will see the Projects tab in the same row as Issues and Pull requests. From the page, you'll see a green button to create a new project.
+
+There, you can set a name and description for the project. You can also choose templates to set up basic columns and sorting for your board. Currently, the only options are for kanban-style boards.
+
+
+![Creating a new GitHub project board.][10]
+
+Creating a new GitHub project board.
+
+After creating the project board, you can make adjustments to it as needed. You can create new columns, [set up automation][11], and add pre-existing GitHub issues and pull requests to the project board.
+
+You may notice new options for the metadata in each GitHub issue and pull request. Inside of an issue or pull request, you can add it to a project board. If you use automation, it will automatically enter a column you configured.
+
+### 3\. Build project boards into your workflow
+
+After you set up a project board and populate it with issues and pull requests, you need to integrate it into your workflow. Project boards are effective only when actively used. The UNICEF MagicBox team uses the project boards as a way to track our progress as a team, update external stakeholders on development, and estimate team bandwidth for reaching our milestones.
+
+
+![Tracking progress][13]
+
+Tracking progress with GitHub project boards.
+
+If you run an open source project and community, consider using the project boards for development-focused meetings. It also helps to remind you and other core contributors to spend five minutes each day updating progress as needed. If you're at a company using GitHub to do open source work, consider using project boards to update other team members and encourage participation inside of GitHub issues and pull requests.
+
+Once you begin using the project board, yours may look like this:
+
+
+![Development progress board][15]
+
+Development progress board for all UNICEF MagicBox repositories in organization-wide GitHub project boards.
+
+### Open alternatives
+
+GitHub project boards require your project to be on GitHub to take advantage of this functionality. While GitHub is a popular repository for open source projects, it's not an open source platform itself. Fortunately, there are open source alternatives to GitHub with tools to replicate the workflow explained above. [GitLab Issue Boards][16] and [Taiga][17] are good alternatives that offer similar functionality.
+
+### Go forth and organize!
+
+With these tools, you can bring a method to the madness of organizing your open source project. These three tips for using GitHub project boards encourage transparency in your open source project and make it easier to track progress and milestones in the open.
+
+Do you use GitHub project boards for your open source project? Have any tips for success that aren't mentioned in the article? Leave a comment below to share how you make sense of your open source projects.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/keep-your-project-organized-git-repo
+
+作者:[Justin W.Flory][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jflory
+[1]:https://en.wikipedia.org/wiki/Business_process_management
+[2]:https://en.wikipedia.org/wiki/Kanban_(development)
+[3]:http://unicefstories.org/about/
+[4]:http://unicefstories.org/magicbox/
+[5]:/file/393356
+[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-open-issues.png?itok=OcWPX575 (GitHub issues for magicbox-maps, MagicBox's front-end application)
+[7]:https://help.github.com/articles/about-project-boards/
+[8]:https://github.com/unicef/magicbox/projects/3?fullscreen=true
+[9]:/file/393361
+[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-create-board.png?itok=pp7SXH9g (Creating a new GitHub project board.)
+[11]:https://help.github.com/articles/about-automation-for-project-boards/
+[12]:/file/393351
+[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-issues-metadata.png?itok=xp5auxCQ (Tracking progress)
+[14]:/file/393366
+[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/github-project-boards-overview.png?itok=QSbOOOkF (Development progress board)
+[16]:https://about.gitlab.com/features/issueboard/
+[17]:https://taiga.io/
From a76a4b8992052d378956ee79c1ea539205d998dd Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:26:44 +0800
Subject: [PATCH 054/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20You=20Don?=
=?UTF-8?q?=E2=80=99t=20Know=20About=20Linux=20Open=20Source=20Could=20Be?=
=?UTF-8?q?=20Costing=20to=20More=20Than=20You=20Think?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Could Be Costing to More Than You Think.md | 39 +++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
diff --git a/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md b/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
new file mode 100644
index 0000000000..10511c3a7d
--- /dev/null
+++ b/sources/talk/20180420 What You Don-t Know About Linux Open Source Could Be Costing to More Than You Think.md
@@ -0,0 +1,39 @@
+What You Don’t Know About Linux Open Source Could Be Costing to More Than You Think
+======
+
+If you would like to test out Linux before switching to it completely as your everyday driver, there are a number of ways to do it. Linux was not intended to run on Windows, and Windows was not meant to host Linux. To begin with, and perhaps most importantly, Linux is open source software. Many users also find that it outperforms Windows on the same hardware.
+
+If you’ve always wished to try out Linux but were never certain where to begin, have a look at our how-to guide for Linux. Linux is not all that different from Windows or Mac OS; it’s basically an operating system, and the leading difference is the fact that it is free for everyone. Switching to Linux today isn’t any more challenging than switching from one sort of smartphone platform to another.
+
+You’re most likely already using Linux, whether you are aware of it or not. Linux has a lot of distinct versions to suit nearly any sort of user, and it plays an essential part in keeping our world going. Today, choosing it is close to a no-brainer.
+
+Even then, it depends on the build of Linux that you’re using. Linux runs a lot of the underbelly of cloud operations. It is also different in that, even though the core pieces of the Linux operating system are usually common, there are lots of distributions of Linux, with different software alternatives. While Linux might seem intimidatingly intricate and technical to the ordinary user, contemporary Linux distros are in reality very user-friendly, and it’s no longer the case that you have to have advanced skills to get started using them. Linux was the very first major Internet-centred open source undertaking, and it is beginning to increase the range of patches it pushes automatically, but several of the security patches continue to be opt-in only.
+
+You are able to remove Linux later in case you need to. It also supplies a huge library of functionality that can be leveraged to accelerate development.
+
+Open source projects like Linux are incredibly capable because of the contributions that so many individuals have added over time.
+
+### Life After Linux Open Source
+
+The development edition of the manual typically has more documentation, but it might also document new features that aren’t in the released version. Fortunately, Linux is so lightweight you can just jump to some other version in case you don’t like the one you try. It’s extremely hard to modify the compiled version of the majority of applications, and nearly impossible to see exactly how the developer created different sections of the program.
+
+On the challenges of a bottom-up go-to-market: it’s really hard to grasp the difference between your organic product (the product your developers use and love) and your company product, which ought to be, effectively, a different product. As stated by the report, it’s going to be hard for developers to switch. Developers are now incredibly important and influential in the purchasing process. Some OpenWrt developers will attend the event and will be ready to reply to your questions!
+
+When a program is installed, it has to be configured. Suppose you discover that the software you bought actually does not do what you want it to do. Open source software is much more common than you believe, and an amazing philosophy to live by. Employing open source software gives you an inexpensive way to bootstrap a business, and it’s generally more difficult to deal with closed source software. So regarding applications and software, you’re all set if you are prepared to learn an alternative program or to find a means to make your existing one run on Linux. Possibly the most famous copyleft software is Linux.
+
+Article sponsored by [Vegas Palms online slots][1]
+
+
+--------------------------------------------------------------------------------
+
+via: https://linuxaria.com/article/what-you-dont-know-about-linux-open-source-could-be-costing-to-more-than-you-think
+
+作者:[Marc Fisher][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://linuxaria.com
+[1]:https://www.vegaspalmscasino.com/casino-games/slots/
From 0765251f56f24874d7785d9e122beeef46ded884 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:27:45 +0800
Subject: [PATCH 055/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Perl=20module?=
=?UTF-8?q?=20for=20better=20debugging?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0420 A Perl module for better debugging.md | 80 +++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 sources/tech/20180420 A Perl module for better debugging.md
diff --git a/sources/tech/20180420 A Perl module for better debugging.md b/sources/tech/20180420 A Perl module for better debugging.md
new file mode 100644
index 0000000000..a83ff53c7e
--- /dev/null
+++ b/sources/tech/20180420 A Perl module for better debugging.md
@@ -0,0 +1,80 @@
+A Perl module for better debugging
+======
+
+
+It's occasionally useful to have a block of Perl code that you use only for debugging or development tweaking. That's fine, but blocks like this can be expensive in terms of performance, particularly if the decision about whether to execute them is made at runtime.
+
+[Curtis "Ovid" Poe][1] recently wrote a module that can help with this problem: [Keyword::DEVELOPMENT][2]. The module utilizes Keyword::Simple and the pluggable keyword architecture introduced in Perl 5.012 to create a new keyword: DEVELOPMENT. It uses the value of the PERL_KEYWORD_DEVELOPMENT environment variable to determine whether or not a block of code is to be executed.
+
+Using it couldn't be easier:
+```
+use Keyword::DEVELOPMENT;
+
+sub doing_my_big_loop {
+    my $self = shift;
+
+    DEVELOPMENT {
+        # insert expensive debugging code here!
+    }
+}
+
+```
+
+At compile time, the code inside the DEVELOPMENT block is optimized away and simply doesn't exist.
+
+Do you see the advantage here? Set up the PERL_KEYWORD_DEVELOPMENT environment variable to be true on your sandbox and false on your production environment, and valuable debugging tools can be committed to your code repo, always there when you need them.
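+
+For instance, enabling the blocks in a sandbox shell is just a matter of exporting the variable before running your script ( _my_script.pl_ here is a stand-in for your own code):
+```
+# Turn DEVELOPMENT blocks on for this shell session
+export PERL_KEYWORD_DEVELOPMENT=1
+perl my_script.pl
+
+# Leave the variable unset (or false) in production
+
+```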
+
+You could also use this module, in the absence of a more evolved configuration management system, to handle variations in settings between production and development or test environments:
+```
+use DBI;
+use Keyword::DEVELOPMENT;
+
+sub connect_to_my_database {
+    my $dsn  = "dbi:mysql:productiondb";
+    my $user = "db_user";
+    my $pass = "db_pass";
+
+    DEVELOPMENT {
+        # Override some of that config information for development
+        $dsn = "dbi:mysql:developmentdb";
+    }
+
+    my $db_handle = DBI->connect($dsn, $user, $pass);
+}
+
+```
+
+A later enhancement to this snippet would have you reading in configuration information from somewhere else, perhaps from a YAML or INI file, but I hope you see the utility here.
+
+I looked at the source code for Keyword::DEVELOPMENT and spent about a half hour wondering, "Gosh, why didn't I think of that?" Once Keyword::Simple is installed, the module that Curtis has given us is surprisingly simple. It's an elegant solution to something I've needed in my own coding practice for a long time.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/perl-module-debugging-code
+
+作者:[Ruth Holloway][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/druthb
+[1]:https://metacpan.org/author/OVID
+[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm
From a696afb567db8bdb9e2b6808ac6bb79b163f3db0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:28:51 +0800
Subject: [PATCH 056/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Easily=20Install?=
=?UTF-8?q?=20Android=20Studio=20in=20Ubuntu=20And=20Linux=20Mint?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Android Studio in Ubuntu And Linux Mint.md | 98 +++++++++++++++++++
1 file changed, 98 insertions(+)
create mode 100644 sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
diff --git a/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
new file mode 100644
index 0000000000..d5e33425b2
--- /dev/null
+++ b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
@@ -0,0 +1,98 @@
+Easily Install Android Studio in Ubuntu And Linux Mint
+======
+[Android Studio][1], Google’s own IDE for Android development, is a nice alternative to Eclipse with the ADT plugin. Android Studio can be installed from its source code, but in this quick post, we shall see **how to install Android Studio in Ubuntu 18.04, 16.04 and corresponding Linux Mint variants**.
+
+Before you proceed to install Android Studio, make sure that you have [Java installed in Ubuntu][2].
+
+![How to install Android Studio in Ubuntu][3]
+
+### Install Android Studio in Ubuntu and other distributions using Snap
+
+Ever since Ubuntu started focusing on Snap packages, more software has started providing easy-to-install Snap packages. Android Studio is one of them. Ubuntu users can simply find the Android Studio application in the Software Center and install it from there.
+
+![Install Android Studio in Ubuntu from Software Center][4]
+
+If you see an error while installing Android Studio from Software Center, you can use the [Snap commands][5] to install Android studio.
+```
+sudo snap install android-studio --classic
+
+```
+
+Easy peasy!
+
+### Alternative Method 1: Install Android Studio using umake in Ubuntu
+
+You can also easily install Android Studio using the Ubuntu Developer Tools Center, now known as [Ubuntu Make][6]. Ubuntu Make provides a command line tool to install various development tools, IDEs, etc. Ubuntu Make is available in the Ubuntu repository.
+
+To install Ubuntu Make, use the commands below in a terminal:
+
+`sudo apt-get install ubuntu-make`
+
+Once you have installed Ubuntu Make, use the command below to install Android Studio in Ubuntu:
+```
+umake android
+
+```
+
+It will give you a couple of options in the course of the installation. I presume that you can handle it. If you decide to uninstall Android Studio, you can use the same umake tool in the following manner:
+```
+umake android --remove
+
+```
+
+### Alternative Method 2: Install Android Studio in Ubuntu and Linux Mint via unofficial PPA
+
+Thanks to [Paolo Rotolo][7], we have a PPA which can be used to easily install Android Studio in Ubuntu 16.04, 14.04, Linux Mint and other Ubuntu-based distributions. Just note that it will download around 650 MB of data. So mind your internet connection as well as data charges (if any).
+
+Open a terminal and use the following commands:
+```
+sudo apt-add-repository ppa:paolorotolo/android-studio
+sudo apt-get update
+sudo apt-get install android-studio
+
+```
+
+Was it not easy? While installing a program from source code is fun in a way, it is always nice to have such PPAs. Now that we have seen how to install Android Studio, let's see how to uninstall it.
+
+### Uninstall Android Studio:
+
+If you don’t have it already, install PPA Purge:
+```
+sudo apt-get install ppa-purge
+
+```
+
+Now use the PPA Purge to purge the installed PPA:
+```
+sudo apt-get remove android-studio
+
+sudo ppa-purge ppa:paolorotolo/android-studio
+
+```
+
+That’s it. I hope this quick post helps you **install Android Studio in Ubuntu and Linux Mint**. Before you run Android Studio, make sure to [install Java in Ubuntu][8] first. In similar posts, I advise you to read [how to install and configure Ubuntu SDK][9] and [how to easily install Microsoft Visual Studio in Ubuntu][10].
+
+Any questions or suggestions are always welcomed. Ciao :)
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-android-studio-ubuntu-linux/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/abhishek/
+[1]:http://developer.android.com/sdk/installing/studio.html
+[2]:https://itsfoss.com/install-java-ubuntu-1404/
+[3]:https://itsfoss.com/wp-content/uploads/2014/04/Android_Studio_Ubuntu.jpeg
+[4]:https://itsfoss.com/wp-content/uploads/2014/04/install-android-studio-snap-800x469.jpg
+[5]:https://itsfoss.com/install-snap-linux/
+[6]:https://wiki.ubuntu.com/ubuntu-make
+[7]:https://plus.google.com/+PaoloRotolo
+[8]:https://itsfoss.com/install-java-ubuntu-1404/ (How To Install Java On Ubuntu 14.04)
+[9]:https://itsfoss.com/install-configure-ubuntu-sdk/
+[10]:https://itsfoss.com/install-visual-studio-code-ubuntu/
From 98f84f058151cb63c2e12413a233b2c72cd7a623 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:31:05 +0800
Subject: [PATCH 057/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20start?=
=?UTF-8?q?=20developing=20on=20Java=20in=20Fedora?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...w to start developing on Java in Fedora.md | 122 ++++++++++++++++++
1 file changed, 122 insertions(+)
create mode 100644 sources/tech/20180420 How to start developing on Java in Fedora.md
diff --git a/sources/tech/20180420 How to start developing on Java in Fedora.md b/sources/tech/20180420 How to start developing on Java in Fedora.md
new file mode 100644
index 0000000000..9858e1be97
--- /dev/null
+++ b/sources/tech/20180420 How to start developing on Java in Fedora.md
@@ -0,0 +1,122 @@
+How to start developing on Java in Fedora
+======
+
+
+Java is one of the most popular programming languages in the world. It is widely used to develop IoT appliances, Android apps, and web and enterprise applications. This article will provide a quick guide to installing and configuring your workstation using [OpenJDK][1].
+
+### Installing the compiler and tools
+
+Installing the compiler, or Java Development Kit (JDK), is easy to do in Fedora. At the time of this article, versions 8 and 9 are available. Simply open a terminal and enter:
+```
+sudo dnf install java-1.8.0-openjdk-devel
+
+```
+
+This will install the JDK for version 8. For version 9, enter:
+```
+sudo dnf install java-9-openjdk-devel
+
+```
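+
+If you end up with both versions installed, you can choose which one the `java` and `javac` commands point to using Fedora's `alternatives` tool:
+```
+sudo alternatives --config java
+sudo alternatives --config javac
+
+```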
+
+For the developer who requires additional tools and libraries such as Ant and Maven, the **Java Development** group is available. To install the suite, enter:
+```
+sudo dnf group install "Java Development"
+
+```
+
+To verify the compiler is installed, run:
+```
+javac -version
+
+```
+
+The output shows the compiler version and looks like this:
+```
+javac 1.8.0_162
+
+```
+
+### Compiling applications
+
+You can use any basic text editor such as nano, vim, or gedit to write applications. This example provides a simple “Hello Fedora” program.
+
+Open your favorite text editor and enter the following:
+```
+public class HelloFedora {
+
+    public static void main(String[] args) {
+        System.out.println("Hello Fedora!");
+    }
+}
+
+```
+
+Save the file as HelloFedora.java. Then, in the terminal, change to the directory containing the file and run:
+```
+javac HelloFedora.java
+
+```
+
+The compiler will complain if it runs into any syntax errors. Otherwise it will simply display the shell prompt beneath.
+
+You should now have a file called HelloFedora.class, which holds the compiled program. Run it with the following command:
+```
+java HelloFedora
+
+```
+
+And the output will display:
+```
+Hello Fedora!
+
+```
+
+### Installing an Integrated Development Environment (IDE)
+
+Some programs may be more complex, and an IDE can make things flow smoothly. There are quite a few IDEs available for Java programmers, including:
+
++ Geany, a basic IDE that loads quickly, and provides built-in templates
++ Anjuta
++ GNOME Builder, which has been covered in the article Builder – a new IDE specifically for GNOME app developers
+
+However, one of the most popular open source IDEs, mainly written in Java, is [Eclipse][2]. Eclipse is available in the official repositories. To install it, run this command:
+```
+sudo dnf install eclipse-jdt
+
+```
+
+When the installation is complete, a shortcut for Eclipse appears in the desktop menu.
+
+For more information on how to use Eclipse, consult the [User Guide][3] available on their website.
+
+### Browser plugin
+
+If you’re developing web applets and need a plugin for your browser, [IcedTea-Web][4] is available. Like OpenJDK, it is open source and easy to install in Fedora. Run this command:
+```
+sudo dnf install icedtea-web
+
+```
+
+As of Firefox 52, the web plugin no longer works. For details visit the Mozilla support site at [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5].
+
+Congratulations, your Java development environment is ready to use.
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/start-developing-java-fedora/
+
+作者:[Shaun Assam][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/sassam/
+[1]:http://openjdk.java.net/
+[2]:https://www.eclipse.org/
+[3]:http://help.eclipse.org/oxygen/nav/0
+[4]:https://icedtea.classpath.org/wiki/IcedTea-Web
+[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct
From 303723919fa6f40c9c0898e2f99fed6fc2bebd99 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:33:18 +0800
Subject: [PATCH 058/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20handy=20way?=
=?UTF-8?q?=20to=20add=20free=20books=20to=20your=20eReader?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...y way to add free books to your eReader.md | 179 ++++++++++++++++++
1 file changed, 179 insertions(+)
create mode 100644 sources/tech/20180420 A handy way to add free books to your eReader.md
diff --git a/sources/tech/20180420 A handy way to add free books to your eReader.md b/sources/tech/20180420 A handy way to add free books to your eReader.md
new file mode 100644
index 0000000000..449ca5f69e
--- /dev/null
+++ b/sources/tech/20180420 A handy way to add free books to your eReader.md
@@ -0,0 +1,179 @@
+A handy way to add free books to your eReader
+======
+
+
+I do a lot of reading on my tablet every day. While I have bought a few eBooks, I enjoy finding things for free on [Project Gutenberg][1]; it rekindles fond memories of browsing through the stacks of a library for something to catch my interest. There are various ways to search the PG website by title or author, but this presumes you have some idea of what you’re looking for.
+
+I have used the [Magic Catalog][2], but I seem to have seen or read every book listed there that interests me, and as far as I can tell the catalog is about ten years old. In 2017 alone, PG added 2,423 books to its catalog, so perhaps 20,000 have been added over the last ten years.
+
+From the Project Gutenberg website, you can link to the [Offline Catalogs][3] and download a plain-text list of all the books freely available, but the file is 6.6 MB—a little unwieldy. Even the list for 2017 only is a bit tedious to scan. So I decided to make my own web page from this list, including links to each book (similar to the Magic Catalog), and turn that into an eBook. This turned out to be easier than you might expect. The trick is to use `regex`; specifically, `regex` as featured in [Kwrite][4].
+
+First, strip out the preamble text, which explains various details about Project Gutenberg. The listing begins after that:
+```
+~ ~ ~ ~ Posting Dates for the below eBooks: 1 Dec 2017 to 31 Dec 2017 ~ ~ ~ ~
+
+
+
+TITLE and AUTHOR ETEXT NO.
+
+
+
+The Origin and Development of Christian Dogma, by Charles A. H. Tuthill 56279
+
+ [Subtitle: An essay in the science of history]
+
+
+
+Frank Merriwell's Endurance, by Burt L. Standish 56278
+
+ [Subtitle: or A Square Shooter]
+
+
+
+Derelicts, by James Sprunt 56277
+
+ [Subtitle: An Account of Ships Lost at Sea in General Commercial
+
+ Traffic and a Brief History of Blockade Runners Stranded Along
+
+ the North Carolina Coast 1861-1865]
+
+
+
+Comical Pilgrim; or, Travels of a Cynick Philosopher..., by Anonymous 56276
+
+ [Subtitle: Thro' the most Wicked Parts of the World, Namely,
+
+ England, Wales, Scotland, Ireland, and Holland]
+
+
+
+I'r Aifft Ac Yn Ol, by D. Rhagfyr Jones 56275
+
+ [Language: Welsh]
+
+```
+
+This shows the structure of the text file. The 5-digit number is the search term for each book—for example, the first book would be found at `http://www.gutenberg.org/ebooks/56279`. Each book is separated from the next by an empty line.
+
+To start, download the file `GUTINDEX.2017`, load it into Kwrite, strip off the preamble, and Save As `GUTINDEX.2017.xhtml`, so the original is unedited just in case. You might as well put in the `xhtml` preamble:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<title>GutIndex 2017</title>
+</head>
+<body>
+
+```
+
+Then at the bottom of the file:
+```
+</body>
+</html>
+
+```
+
+I’m not a fan of the `~ ~ ~ ~` (four tildes separated by three spaces), so select Edit > Replace in Kwrite to bring up the Replace dialog at the bottom. You don’t need to select Regular expression as the Mode, but you’ll need it later, so go ahead and do that.
+
+In Find, enter `~ ~ ~ ~` and nothing in Replace. Click Replace All, and they all disappear, with the message: 24 replacements.
+
+Now let’s make the links. In Find, enter: `(\d\d\d\d\d)`. (You must include the parentheses.)
+
+In Replace, enter: `<a href="http://www.gutenberg.org/ebooks/\1">\1</a>`
+
+This searches for a sequence of 5 digits and replaces it with the link information, which includes the particular 5-digit number twice, denoted by `\1`. Now summon the courage to click Replace All (remember that you can undo this if you’ve made a mistake), and the magic happens: 2423 replacements. Here’s a fragment:
+```
+The Origin and Development of Christian Dogma, by Charles A. H. Tuthill <a href="http://www.gutenberg.org/ebooks/56279">56279</a>
+
+ [Subtitle: An essay in the science of history]
+
+
+
+Frank Merriwell's Endurance, by Burt L. Standish <a href="http://www.gutenberg.org/ebooks/56278">56278</a>
+
+ [Subtitle: or A Square Shooter]
+
+
+
+Derelicts, by James Sprunt <a href="http://www.gutenberg.org/ebooks/56277">56277</a>
+
+ [Subtitle: An Account of Ships Lost at Sea in General Commercial
+
+ Traffic and a Brief History of Blockade Runners Stranded Along
+
+ the North Carolina Coast 1861-1865]
+
+```
+
+Witness the power of `regex`! Now let's create paragraphs to separate these individual books, since whitespace and newlines mean nothing to HTML. Here is where we use that empty line between books. Before we do that, though, let’s eliminate the lines that contain headings:
+```
+TITLE and AUTHOR ETEXT NO.
+
+```
+
+We're doing this because they’re unnecessary, and the second heading is not going to line up with the ebook number anyway. I wanted to get rid of this line and the extra newline characters, and since there were only 12, I went through the file manually—but you can facilitate this by using Edit > Find, searching for ETEXT.
+
+Now more `regex`. In Find, enter: `\n\n`
+
+In Replace, enter: `</p>\n\n<p>`
+
+Then Replace All. I leave in the two newline characters so the text file is easier to read. You will need to manually add `</p>` at the end of the list. At the beginning, you'll see this:
+```
+ Posting Dates for the below eBooks: 1 Dec 2017 to 31 Dec 2017
+
+
+
+The Origin and Development of Christian Dogma, by Charles A. H. Tuthill 56279
+
+```
+
+I’d like to make the posting dates a header, but I also want to eliminate `Posting Dates for the below eBooks:` since simply showing the dates is enough. In Find, enter: `Posting Dates for the below eBooks:`, and in Replace, enter: `<h2>` (or `<h1>`).
+
+Now let's fix that trailing `</h2>` for each header. You could do this manually, but if you're feeling lazy, enter `2017\n` in Find, and `2017</h2>\n` in Replace. With each of these, there's a slight risk of doing too much, but the feedback will tell you how many replacements there are (there should be 12). And you always have Undo.
+
+Now for some manual cleanup. Because you added the `<p>` and `</p>` tags, and because of the new `<h2>` tags, there will be extra paragraph tags and a mismatch in the region of these headers. You could simply scan the file at these points, or get some help by entering `<h2>` in the Find space, clicking Find All to highlight them, and scrolling down the file to get rid of any unneeded tags.
+
+The other problem I found with XHTML was ampersands scattered throughout. Since XHTML is stricter than HTML, replace each `&` with `&amp;`. You may want to replace these individually using Replace instead of Replace All.
+
+Some of the lines in the text file have some sort of control character that acts like ` ` (a non-breaking space). To fix this, highlight one in Kwrite—they show up as a faint baseline with a vertical bump—paste it into Find, and enter a space in Replace. This maintains visual spacing as text but is ignored as HTML (by the way, there were 12,586 of these in the document).
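+
+If you would rather script the link substitution than do it interactively, a rough equivalent of the Kwrite step with GNU `sed` might look like this (the output file name is just an example):
+```
+# Wrap every 5-digit ebook number in a Project Gutenberg link
+sed -E 's|([0-9]{5})|<a href="http://www.gutenberg.org/ebooks/\1">\1</a>|g' \
+    GUTINDEX.2017 > GUTINDEX.2017.linked.xhtml
+
+```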
+
+Here's how it looks in a narrowed browser window:
+
+
+
+Clicking a link takes you to the book's Project Gutenberg page, where you can view or download it.
+
+I used [Sigil][5] to convert this to an eBook, which was probably the easiest part of the process. Start Sigil, then select "Add Existing Files" from the toolbar and select your XHTML or HTML file. To create a chapter for each month, scroll down to the monthly header line, place the cursor at the beginning of the line, then Split at Cursor (Ctrl + Return) to create 12 chapters. You can also use the headers to create a table of contents; it’s also a good idea to edit the metadata to give it a title that will show up in your eBook reader (you can make yourself the author). Finally, save the file, and you’re done.
+
+Happy reading!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/browse-project-gutenberg-library
+
+作者:[Greg Pittman][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/greg-p
+[1]:http://www.gutenberg.org/wiki/Main_Page
+[2]:http://freekindlebooks.org/MagicCatalog/magiccatalog.html
+[3]:http://www.gutenberg.org/wiki/Gutenberg:Offline_Catalogs
+[4]:https://www.kde.org/applications/utilities/kwrite/
+[5]:https://sigil-ebook.com/
From e31ec9151550bbc7619d50dc0fd0777343cfc839 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:36:29 +0800
Subject: [PATCH 059/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Migrating=20to=20?=
=?UTF-8?q?Linux:=20Network=20and=20System=20Settings?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...g to Linux- Network and System Settings.md | 116 ++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 sources/tech/20180419 Migrating to Linux- Network and System Settings.md
diff --git a/sources/tech/20180419 Migrating to Linux- Network and System Settings.md b/sources/tech/20180419 Migrating to Linux- Network and System Settings.md
new file mode 100644
index 0000000000..f3930f1777
--- /dev/null
+++ b/sources/tech/20180419 Migrating to Linux- Network and System Settings.md
@@ -0,0 +1,116 @@
+Migrating to Linux: Network and System Settings
+======
+
+
+In this series, we provide an overview of fundamentals to help you successfully make the transition to Linux from another operating system. If you missed the earlier articles in the series, you can find them here:
+
+[Part 1 - An Introduction][1]
+
+[Part 2 - Disks, Files, and Filesystems][2]
+
+[Part 3 - Graphical Environments][3]
+
+[Part 4 - The Command Line][4]
+
+[Part 5 - Using sudo][5]
+
+[Part 6 - Installing Software][6]
+
+Linux gives you a lot of control over network and system settings. On your desktop, Linux lets you tweak just about anything on the system. Most of these settings are exposed in plain text files under the /etc directory. Here I describe some of the most common settings you’ll use on your desktop Linux system.
+
+A lot of settings can be found in the Settings program, and the available options will vary by Linux distribution. Usually, you can change the background, tweak sound volume, connect to printers, set up displays, and more. While I won't talk about all of the settings here, you can certainly explore what's in there.
+
+### Connect to the Internet
+
+Connecting to the Internet in Linux is often fairly straightforward. If you are wired through an Ethernet cable, Linux will usually get an IP address and connect automatically when the cable is plugged in or at startup if the cable is already connected.
+
+If you are using wireless, in most distributions there is a menu, either in the indicator panel or in settings (depending on your distribution), where you can select the SSID for your wireless network. If the network is password protected, it will usually prompt you for the password. Afterward, it connects, and the process is fairly smooth.
+
+You can adjust network settings in the graphical environment by going into settings. Sometimes this is called System Settings or just Settings. Often you can easily spot the settings program because its icon is a gear or a picture of tools (Figure 1).
+
+
+![Network Settings][8]
+
+Figure 1: Gnome Desktop Network Settings Indicator Icon.
+
+[Used with permission][9]
+
+### Network Interface Names
+
+Under Linux, network devices have names. Historically, these are given names like eth0 and wlan0 -- or Ethernet and wireless, respectively. Newer Linux systems have been using different names that appear more esoteric, like enp4s0 and wlp5s0. If the name starts with en, it's a wired Ethernet interface. If it starts with wl, it's a wireless interface. The rest of the letters and numbers reflect how the device is connected to hardware.
+
+### Network Management from the Command Line
+
+If you want more control over your network settings, or if you are managing network connections without a graphical desktop, you can also manage the network from the command line.
+
+Note that the most common service used to manage networks in a graphical desktop is the Network Manager, and Network Manager will often override setting changes made on the command line. If you are using the Network Manager, it's best to change your settings in its interface so it doesn't undo the changes you make from the command line or someplace else.
+
+Changing settings in the graphical environment is very likely to be interacting with the Network Manager, and you can also change Network Manager settings from the command line using the tool called nmtui. The nmtui tool provides all the settings that you find in the graphical environment, but presents them in a text-based, semi-graphical interface that works on the command line (Figure 2).
+
+
+
+On the command line, there is an older tool called ifconfig to manage networks and a newer one called ip. On some distributions, ifconfig is considered to be deprecated and is not even installed by default. On other distributions, ifconfig is still in use.
+
+Here are some commands that will allow you to display and change network settings:
+
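+For example, here are a few common operations with each tool (the interface name, _enp4s0_ here, will differ on your system):
+```
+# Show addresses on all interfaces
+ip addr show      # newer tool
+ifconfig          # older tool
+
+# Bring an interface down and back up
+sudo ip link set enp4s0 down
+sudo ip link set enp4s0 up
+
+# Show the routing table
+ip route show
+
+```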
+
+
+### Process and System Information
+
+In Windows, you can go into the Task Manager to see a list of all the programs and services that are running. You can also stop programs from running, and you can view system performance in some of the tabs displayed there.
+
+You can do similar things in Linux both from the command line and from graphical tools. In Linux, there are a few graphical tools available depending on your distribution. The most common ones are System Monitor or KSysGuard. In these tools, you can see system performance, see a list of processes, and even kill processes (Figure 3).
+
+
+
+In these tools, you can also view global network traffic on your system (Figure 4).
+
+
+![System Monitor][11]
+
+Figure 4: Screenshot of Gnome System Monitor.
+
+[Used with permission][9]
+
+### Managing Process and System Usage
+
+There are also quite a few tools you can use from the command line. The command `ps` can be used to list processes on your system. By default, it will list processes running in your current terminal session, but you can list other processes by giving it various command line options. You can get more help on `ps` with the commands `info ps` or `man ps`.
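+
+For example:
+```
+ps aux                        # every process on the system, BSD-style options
+ps -ef                        # every process, System V-style options
+ps aux --sort=-%mem | head    # the biggest memory consumers first
+
+```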
+
+Most folks, though, want to get a list of processes because they would like to stop the one that is using up too much memory or CPU time. In this case, there are two commands that make this task much easier: top and htop (Figure 5).
+
+
+
+The top and htop tools work very similarly to each other. These commands update their list every second or two and re-sort it so that the task using the most CPU is at the top. You can also change the sorting to other resources, such as memory usage.
+
+In either of these programs (top and htop), you can type '?' to get help, and 'q' to quit. With top, you can press 'k' to kill a process and then type in the unique PID number for the process to kill it.
+
+With htop, you can highlight a task by pressing down arrow or up arrow to move the highlight bar, and then press F9 to kill the task followed by Enter to confirm.
+
+The information and tools provided in this series will help you get started with Linux. With a little time and patience, you'll feel right at home.
+
+Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2018/4/migrating-linux-network-and-system-settings
+
+作者:[John Bonesio][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/johnbonesio
+[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
+[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
+[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
+[4]:https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
+[5]:https://www.linux.com/blog/learn/2018/3/migrating-linux-using-sudo
+[6]:https://www.linux.com/blog/learn/2018/3/migrating-linux-installing-software
+[7]:https://www.linux.com/files/images/figure-1png-2
+[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-1_2.png?itok=J-C6q-t5 (Network Settings)
+[9]:https://www.linux.com/licenses/category/used-permission
+[10]:https://www.linux.com/files/images/figure-4png-1
+[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-4_1.png?itok=boI-L1mF (System Monitor)
+[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
From 7bc25c0f3c207b43fad1cc9f2817f5655d1acb1e Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 23 Apr 2018 14:39:15 +0800
Subject: [PATCH 060/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20guiding=20pri?=
=?UTF-8?q?nciples=20you=20should=20know=20before=20you=20design=20a=20mic?=
=?UTF-8?q?roservice?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...d know before you design a microservice.md | 157 ++++++++++++++++++
1 file changed, 157 insertions(+)
create mode 100644 sources/talk/20180419 5 guiding principles you should know before you design a microservice.md
diff --git a/sources/talk/20180419 5 guiding principles you should know before you design a microservice.md b/sources/talk/20180419 5 guiding principles you should know before you design a microservice.md
new file mode 100644
index 0000000000..a74dbd26af
--- /dev/null
+++ b/sources/talk/20180419 5 guiding principles you should know before you design a microservice.md
@@ -0,0 +1,157 @@
+5 guiding principles you should know before you design a microservice
+======
+
+
+One of the biggest challenges for teams starting off with microservices is adhering to the Goldilocks Principle: Not too big, not too small, and not too tightly coupled. Part of this challenge arises from confusion about what, exactly, constitutes a well-designed microservice.
+
+Dozens of CTOs shared their experiences through interviews, and those conversations illuminated five characteristics of well-designed microservices. This article distills them to help guide teams as they design microservices. (For more information, check out the upcoming book [Microservices for Startups][1].) It will briefly touch on microservice boundaries and arbitrary "rules" to avoid before diving into the five characteristics.
+
+### Microservice boundaries
+
+One of the [core benefits of developing new systems with microservices][2] is that the architecture allows developers to build and modify individual components independently—but problems can arise when it comes to minimizing the number of callbacks between each API. The solution, according to Chris McFadden, VP of engineering at [SparkPost][3] , is to apply the appropriate service boundaries.
+
+With respect to boundaries, in contrast to the sometimes difficult-to-grasp and abstract concept of domain-driven design (DDD)—a framework for microservices—this article focuses on practical principles for creating well-defined microservice boundaries with some of our industry's top CTOs.
+
+### Avoid arbitrary "rules"
+
+If you read enough advice about designing and creating a microservice, you're bound to come across some of the "rules" below. Although it's tempting to use them as guideposts for creating microservices, adherence to these arbitrary rules is not a principled way to determine thoughtful boundaries for microservices.
+
+#### "A microservice should have X lines of code"
+
+Let's get one thing straight: There are no limitations on how many lines of code there are in a microservice. A microservice doesn't suddenly become a monolith just because you write a few lines of extra code. The key is ensuring there is high cohesion for the code within a service (more on this later).
+
+#### "Turn each function into a microservice"
+
+If a function computes something based on three input values and returns a result, is it a good candidate for a microservice? Should it be a separately deployable application of its own? This really depends on what the function is and how it serves the entire system. Turning each function into a microservice simply might not make sense in your context.
+
+Other arbitrary rules include those that don't take into account your entire context, such as the team's experience, DevOps capacity, what the service is doing, and availability needs of the data.
+
+### 5 characteristics of a well-designed service
+
+If you've read about microservices, you've no doubt come across advice on what makes a well-designed service. Simply put, high cohesion and loose coupling. There are [many][4] [articles][5] on these concepts to review if you're not familiar with them. And while they offer sound advice, these concepts are quite abstract. Below, based on conversations with experienced CTOs, are key characteristics to keep in mind when creating well-designed microservices.
+
+#### #1: It doesn't share database tables with another service
+
+In the early days of SparkPost, Chris McFadden and his team had to solve a problem that every SaaS business faces: They needed to provide basic services like authentication, account management, and billing.
+
+To tackle this, they created two microservices: a Users API and an Accounts API. The Users API would handle user accounts, API keys, and authentication, while the Accounts API would handle all of the billing-related logic. A very logical separation—but before long, they spotted a problem.
+
+"We had one service that was called the User API, and we had another one called the Account API. The problem was that they were actually having several calls back and forth between them. So you would do something in accounts and have a call and endpoint in users or vice versa," McFadden explained.
+
+The two services were too tightly coupled.
+
+When it comes to designing a microservice, it's a red flag if you have multiple services referencing the same table, as it likely means your DB is a source of coupling.
+
+It is really about how the service relates to the data, which is exactly what Oleksiy Kovyrin, head of [Swiftype SRE, Elastic][6], told me. "One of the main foundational principles we use when developing new services is that they should not cross database boundaries. Each service should rely on its own set of underlying data stores. This allows us to centralize access controls, audit logging, caching logic, etc.," he said.
+
+Kovyrin went on to explain that if a subset of your database tables "have no or very little connections to the rest of the dataset, it is a strong signal that component could be isolated into a separate API or a separate service."
+
+Darby Frey, co-founder of [Lead Honestly][7], echoed this sentiment: "Each service should have its own tables [and] should never share database tables."
+
+#### #2: It has a minimal amount of database tables
+
+The ideal size of a microservice is small enough, but no smaller. And the same goes for the number of database tables per service.
+
+Steven Czerwinski, head of engineering at [Scalyr][8], explained during an interview that the sweet spot for Scalyr is "one or two database tables for a service."
+
+SparkPost's Chris McFadden agreed: "We have a suppression microservices, and it handles, keeps track of, millions and billions of entries around suppressions, but it's all very focused just around suppression, so there's really only one or two tables there. The same goes for other services like webhooks."
+
+#### #3: It's thoughtfully stateful or stateless
+
+When designing your microservice, you need to ask yourself whether it requires access to a database or whether it's going to be a stateless service processing terabytes of data like emails or logs.
+
+Julien Lemoine, CTO of [Algolia][9], explained, "We define the boundaries of a service by defining its input and output. Sometimes a service is a network API, but it can also be a process consuming files and producing records in a database (this is the case of our log-processing service)."
+
+Be clear about statefulness up front and it will lead to a better-designed service.
+
+#### #4: Its data availability needs are accounted for
+
+When designing a microservice, keep in mind what services will rely on this new service and the system-wide impact if that data becomes unavailable. Taking that into account allows you to properly design data backup and recovery systems for this service.
+
+Steven Czerwinski mentioned that at Scalyr, critical customer row space mapping data is replicated and separated in different ways due to its importance.
+
+In contrast, he added, "The per shard information, that's in its own little partition. It sucks if it goes down because that portion of the customer population is not going to have their logs available, but it's only impacting 5 percent of the customers rather than 100 percent of the customers."
+
+#### #5: It's a single source of truth
+
+Design a service to be the single source of truth for something in your system.
+
+For example, when you order something from an e-commerce site, an order ID is generated. This order ID can be used by other services to query an order service for complete information about the order. Using the [publish/subscribe pattern][10], the data that is passed around between services should be the order ID, not the attributes/information of the order itself. Only the order service has complete information and is the single source of truth for a given order.
+
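+As a concrete sketch of the pattern, here is what that message flow might look like with Redis as the broker; the channel name and order ID are hypothetical, and any pub/sub system would work the same way:
+```
+# the order service publishes only the order ID, never the full order record
+redis-cli PUBLISH orders.created '{"orderId": "8675309"}'
+
+# subscribers receive the ID, then query the order service -- the single
+# source of truth -- for whatever order details they need
+redis-cli SUBSCRIBE orders.created
+```
+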
+### Considerations for larger teams
+
+Keeping in mind the five considerations listed above, larger teams should be aware of the impacts of their organizational structure on microservice boundaries.
+
+For larger organizations, where entire teams can be dedicated to owning a service, organizational considerations come into play when determining service boundaries. There are two factors to consider: **independent release schedule** and **different uptime importance**.
+
+"The most successful implementation of microservices we've seen is either based on a software design principle like domain-driven design, for example, and service-oriented architecture, or the ones that reflect an organizational approach," said Khash Sajadi, CEO of [Cloud66.][11]
+
+"So [for the] payments team," Sajadi continued, "they have the payment service or credit card validation service, and that's the service they provide to the outside world. So it's not necessarily anything about software. It's mostly about the business unit [that] provides one more service to the outside world."
+
+### The two-pizza principle
+
+Amazon is a perfect example of a large organization with multiple teams. As mentioned in an article published in [API Evangelist][12], Jeff Bezos issued a mandate to all employees informing them that every team within the company had to communicate via API. Anyone who didn't would be fired.
+
+This way, all the data and functionality was exposed through the interface. Bezos also managed to get every team to decouple, define what their resources are, and make them available through the API. Amazon was building a system from the ground up, which allowed every team within the company to become a partner of every other.
+
+I spoke to Travis Reeder, CTO of [Iron.io][13], about Bezos' internal initiative.
+
+"Jeff Bezos mandated that all teams had to build API's to communicate with other teams," Reeder said. "He's also the guy who came up with the 'two-pizza' rule: A team shouldn't be larger than what two pizzas can feed.
+
+"I think the same could apply here: Whatever a small team can develop, manage, and be productive with. If it starts to get unwieldy or starts to slow down, it's probably getting too big," Reeder told me.
+
+### Final considerations: Is your service the right size and properly defined?
+
+During the testing and implementation phase of your microservice system, there are indicators to keep in mind.
+
+#### Indicator #1: Is there over-reliance between services?
+
+If two services are constantly calling back to one another, then that's a strong indication of coupling and a signal that they might be better off combined into one service.
+
+Going back to Chris McFadden's example of the two API services, accounts and users, that were constantly communicating with one another: McFadden came up with the idea of merging the services and decided to call the result the Accusers API. This turned out to be a fruitful strategy.
+
+"What we started doing was eliminating these links [which were the] internal API calls between them," McFadden told me. "It's helped simplify the code."
+
+#### Indicator #2: Does the overhead of setting up the service outweigh the benefit of having the service be independent?
+
+Darby Frey explained, "Every app needs to have its logs aggregated somewhere and needs to be monitored. You need to set up alerting for it. You need to have standard operating procedures and run books for when things break. You have to manage SSH access to that thing. There's a huge foundation of things that have to exist in order for an app to just run."
+
+### Key takeaways
+
+Designing microservices can often feel more like an art than a science. For engineers, that may not sit well. There's lots of general advice out there, but at times it can be a bit too abstract. Let's recap the five specific characteristics to look for when designing your next set of microservices:
+
+ 1. It doesn't share database tables with another service
+ 2. It has a minimal amount of database tables
+ 3. It's thoughtfully stateful or stateless
+ 4. Its data availability needs are accounted for
+ 5. It's a single source of truth
+
+
+
+Next time you're designing a set of microservices and determining service boundaries, referring back to these principles should make the task easier.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/guide-design-microservices
+
+作者:[Jake Lumetta][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jakelumetta
+[1]:https://buttercms.com/books/microservices-for-startups/
+[2]:https://buttercms.com/books/microservices-for-startups/should-you-always-start-with-a-monolith
+[3]:https://www.sparkpost.com/
+[4]:https://thebojan.ninja/2015/04/08/high-cohesion-loose-coupling/
+[5]:https://en.wikipedia.org/wiki/Single_responsibility_principle
+[6]:https://www.elastic.co/solutions/site-search
+[7]:https://leadhonestly.com/
+[8]:https://www.scalyr.com/
+[9]:https://www.algolia.com/
+[10]:https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
+[11]:https://www.cloud66.com/
+[12]:https://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/
+[13]:https://www.iron.io/
From 15d641027a4a0f27ded4503549b13a4621e0b978 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 18:54:59 +0800
Subject: [PATCH 061/220] PRF:20171219 How to set GNOME to display a custom
slideshow.md
@Auk7F7
---
...set GNOME to display a custom slideshow.md | 42 +++++++++++--------
1 file changed, 24 insertions(+), 18 deletions(-)
diff --git a/translated/tech/20171219 How to set GNOME to display a custom slideshow.md b/translated/tech/20171219 How to set GNOME to display a custom slideshow.md
index e345a48d39..5c9160033c 100644
--- a/translated/tech/20171219 How to set GNOME to display a custom slideshow.md
+++ b/translated/tech/20171219 How to set GNOME to display a custom slideshow.md
@@ -1,15 +1,19 @@
-
如何设置 GNOME 显示自定义幻灯片
======
-在 GNOME 中, 一个非常酷, 但却鲜为人知的特性是它能够将幻灯片显示为墙纸。你可以从 [GNOME 控制中心][1]的 "背景设置" 面板中选择墙纸幻灯片。在预览的右下角显示一个小时钟标志, 可以将幻灯片的墙纸与静态墙纸区别开来。
-一些发行版带有预装的幻灯片壁纸。 例如, Ubuntu 包含了库存的 GNOME 定时壁纸幻灯片, 以及 Ubuntu 壁纸大赛优胜者之一。
+> 使用一个简单的 XML,你就可以设置 GNOME 能够在桌面上显示一个幻灯片。
-如果你想创建自己的自定义幻灯片用作壁纸怎么办? 虽然 GNOME 没有为此提供一个用户界面, 但是在你的主目录中使用一些简单的 XML 文件来创建一个是非常容易的。 幸运的是, GNOME 控制中心的背景选择支持一些常见的目录路径,这样就可以轻松创建幻灯片, 而不必编辑分发所提供的任何内容。
+
+
+在 GNOME 中,一个非常酷、但却鲜为人知的特性是它能够将幻灯片显示为墙纸。你可以从 [GNOME 控制中心][1]的 “背景设置” 面板中选择墙纸幻灯片。在预览的右下角显示一个小时钟标志,可以将幻灯片的墙纸与静态墙纸区别开来。
+
+一些发行版带有预装的幻灯片壁纸。 例如,Ubuntu 包含了 GNOME 自带的定时壁纸幻灯片,以及 Ubuntu 壁纸大赛胜出的墙纸。
+
+如果你想创建自己的自定义幻灯片用作壁纸怎么办?虽然 GNOME 没有为此提供一个用户界面,但是在你的主目录中使用一些简单的 XML 文件来创建一个是非常容易的。 幸运的是,GNOME 控制中心的背景选择支持一些常见的目录路径,这样就可以轻松创建幻灯片,而不必编辑你的发行版所提供的任何内容。
### 开始
-使用你最喜欢的文本编辑器在 `$HOME/.local/share/gnome-background-properties/` 创建一个 XML 文件。 虽然文件名不重要, 但目录名称很重要(你可能需要创建目录)。 举个例子, 我创建了带有以下内容的 `/home/ken/.local/share/gnome-background-properties/osdc-wallpapers.xml `:
+使用你最喜欢的文本编辑器在 `$HOME/.local/share/gnome-background-properties/` 创建一个 XML 文件。 虽然文件名不重要,但目录名称很重要(你可能需要创建该目录)。 举个例子,我创建了带有以下内容的 `/home/ken/.local/share/gnome-background-properties/osdc-wallpapers.xml`:
```
@@ -22,9 +26,10 @@
```
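+
+(LCTT 译注:原文此处列出了该文件的完整内容。下面是译者补充的一个最小示例,其中名称和图片路径均为假设,仅供参考:)
+
+```
+cat > ~/.local/share/gnome-background-properties/osdc-wallpapers.xml <<'EOF'
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
+<wallpapers>
+  <wallpaper deleted="false">
+    <name>OSDC Wallpapers</name>
+    <filename>/home/ken/Pictures/osdc/osdc.xml</filename>
+    <options>zoom</options>
+  </wallpaper>
+</wallpapers>
+EOF
+```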
-上面的 XML 文件需要为每个幻灯片或静态壁纸设计一 `` 节点, 你需要将它们包含在 GNOME 控制中心的 `背景面板`中。
-在这个例子中, 我的 `osdc.xml` 文件看起来是这样的:
+每一个你需要包含在 GNOME 控制中心 “背景面板” 中的幻灯片或静态壁纸,都要在上面的 XML 文件中为其增加一个 `<wallpaper>` 节点。
+
+在这个例子中,我的 `osdc.xml` 文件看起来是这样的:
```
@@ -52,17 +57,18 @@
```
-上面的XML中有几个重要的部分。 XML中的 `` 节点是你的外部节点。 每个背景都支持多个 `` 和 `` 节点。
-`` 节点定义要显示的图像以及分别用 `` 和 `` 节点显示它的持续时间。
+上面的 XML 中有几个重要的部分。XML 中的 `<background>` 节点是最外层的节点。每个背景都支持多个 `<static>` 和 `<transition>` 节点。
-`` 节点为每个转换定义`` ,`` 图像和 `` 图像。
+`<static>` 节点用 `<file>` 节点定义要显示的图像,并用 `<duration>` 节点定义显示它的持续时间。
+
+`<transition>` 节点定义 `<duration>`(变换时长),`<from>` 和 `<to>` 节点则定义了起止的图像。
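+
+(LCTT 译注:综合上述三种节点,一个最小的 osdc.xml 幻灯片示例如下,图片路径均为假设,仅供参考:)
+
+```
+cat > ~/Pictures/osdc/osdc.xml <<'EOF'
+<background>
+  <static>
+    <duration>30.0</duration>
+    <file>/home/ken/Pictures/osdc/osdc_1.png</file>
+  </static>
+  <transition>
+    <duration>0.5</duration>
+    <from>/home/ken/Pictures/osdc/osdc_1.png</from>
+    <to>/home/ken/Pictures/osdc/osdc_2.png</to>
+  </transition>
+</background>
+EOF
+```
+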
### 全天更换壁纸
-另一个很酷的GNOME功能是基于时间的幻灯片。 你可以定义幻灯片的开始时间,GNOME将根据它计算时间。 这对于根据一天中的时间设置不同的壁纸很有用。 例如,你可以将开始时间设置为06:00,并在12:00之前显示一张墙纸,然后在下午和18:00再次更改。
+另一个很酷的 GNOME 功能是基于时间的幻灯片。 你可以定义幻灯片的开始时间,GNOME 将根据它计算时间。 这对于根据一天中的时间设置不同的壁纸很有用。 例如,你可以将开始时间设置为 06:00,并在 12:00 之前显示一张墙纸,然后在下午和 18:00 再次更改。
-这是通过在XML中定义 `` 来完成的,如下所示:
+这是通过在 XML 中定义 `<starttime>` 节点来完成的,如下所示:
```
@@ -73,21 +79,21 @@
 <hour>6</hour>
 <minute>00</minute>
 <second>00</second>
-
+
```
-上述XML文件于2017年11月21日06:00开始动画,时长为21,600.00,相当于六个小时。 这段时间将显示你的早晨壁纸直到12:00,12:00时它会更改为你的下一张壁纸。 你可以继续以这种方式每隔一段时间更换一次壁纸,但确保所有持续时间的总计为86,400秒(等于24小时)。
+上述 XML 文件定义动画于 2017 年 11 月 21 日 06:00 开始,时长为 21,600.00 秒,相当于六个小时。 这段时间将显示你的早晨壁纸直到 12:00,12:00 时它会更改为你的下一张壁纸。 你可以继续以这种方式每隔一段时间更换一次壁纸,但确保所有持续时间的总计为 86,400 秒(等于 24 小时)。
-GNOME将计算开始时间和当前时间之间的增量,并显示当前时间的正确墙纸。 例如,如果你在16:00选择新壁纸,则GNOME将在06:00开始时间之后显示36,000秒的适当壁纸。
+GNOME 将计算开始时间和当前时间之间的增量,并显示当前时间的正确墙纸。 例如,如果你在 16:00 选择新壁纸,则 GNOME 将显示开始时间 06:00 之后 36,000 秒时所对应的壁纸。
-有关完整示例,请参阅大多数发行版中由gnome-backgrounds包提供的adwaita-timed幻灯片。 它通常位于 `/usr/share/backgrounds/gnome/adwaita-timed.xml` 中。
+有关完整示例,请参阅大多数发行版中由 gnome-backgrounds 包提供的 adwaita-timed 幻灯片。 它通常位于 `/usr/share/backgrounds/gnome/adwaita-timed.xml` 中。
### 了解更多信息
希望这可以鼓励你深入了解创建自己的幻灯片壁纸。 如果你想下载本文中引用的文件的完整版本,那么你可以在 [GitHub][2] 上找到它们。
-如果你对用于生成XML文件的实用程序脚本感兴趣,你可以在互联网上搜索gnome- backearth -generator。
+如果你对用于生成 XML 文件的实用程序脚本感兴趣,你可以在互联网上搜索 `gnome-background-generator`。
--------------------------------------------------------------------------------
@@ -95,7 +101,7 @@ via: https://opensource.com/article/17/12/create-your-own-wallpaper-slideshow-gn
作者:[Ken Vandine][a]
译者:[Auk7F7](https://github.com/Auk7F7)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 618bf8abd81a47e08419438e6e005919bd2aeb02 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 18:55:30 +0800
Subject: [PATCH 062/220] PUB:20171219 How to set GNOME to display a custom
slideshow.md
@Auk7F7 https://linux.cn/article-9571-1.html
---
.../20171219 How to set GNOME to display a custom slideshow.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171219 How to set GNOME to display a custom slideshow.md (100%)
diff --git a/translated/tech/20171219 How to set GNOME to display a custom slideshow.md b/published/20171219 How to set GNOME to display a custom slideshow.md
similarity index 100%
rename from translated/tech/20171219 How to set GNOME to display a custom slideshow.md
rename to published/20171219 How to set GNOME to display a custom slideshow.md
From ffae3531fa51a70c6767c4aeecf5facb87273691 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 19:10:37 +0800
Subject: [PATCH 063/220] PRF:20180405 The fc Command Tutorial With Examples
For Beginners.md
@Dotcra
---
...nd Tutorial With Examples For Beginners.md | 109 +++++++++---------
1 file changed, 53 insertions(+), 56 deletions(-)
diff --git a/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md b/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
index 412ed5203c..cecf5f055c 100644
--- a/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
+++ b/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
@@ -1,15 +1,16 @@
-使用 fc 修改历史命令
+给初学者的 fc 示例教程
======

-fc (**F**ix **C**ommands 的缩写) 是个 shell 内置命令,用于在交互式 shell 里列出、编辑和执行最近输入的命令。你可以用你喜欢的编辑器编辑最近的命令并再次执行,而不用把它们整个重新输入一遍。除了可以避免重复输入又长又复杂的命令,它对修正拼写错误来说也很有用。因为是 shell 内置命令,大多 shell 都包含它,比如 Bash 、 Zsh 、 Ksh 等。在这篇短文中,我们来学一学在 Linux 中使用 fc 命令。
+`fc` (**F**ix **C**ommands 的缩写)是个 shell 内置命令,用于在交互式 shell 里列出、编辑和执行最近输入的命令。你可以用你喜欢的编辑器编辑最近的命令并再次执行,而不用把它们整个重新输入一遍。除了可以避免重复输入又长又复杂的命令,它对修正拼写错误来说也很有用。因为是 shell 内置命令,大多 shell 都包含它,比如 Bash 、 Zsh 、 Ksh 等。在这篇短文中,我们来学一学在 Linux 中使用 `fc` 命令。
### fc 命令教程及示例
-**列出最近执行的命令**
+#### 列出最近执行的命令
+
+执行不带其它参数的 `fc -l` 命令,它会列出最近 16 个命令。
-执行不带参数的"fc -l"命令,它会列出最近 **16** 个命令。
```
$ fc -l
507 fish
@@ -28,16 +29,16 @@ $ fc -l
520 wc -l ostechnix.txt
521 cat ostechnix.txt
522 clear
-
```
-**-r** 选项用于将输出反向排序。
+`-r` 选项用于将输出反向排序。
+
```
$ fc -lr
-
```
-**-n** 选项用于隐藏行号。
+`-n` 选项用于隐藏行号。
+
```
$ fc -ln
nano ~/.profile
@@ -56,12 +57,12 @@ $ fc -ln
wc -l ostechnix.txt
more ostechnix.txt
clear
-
```
这样行号就不再显示了。
-如果想以某个命令开始,只需在 **-l** 选项后面加上行号即可。比如,要显示行号 520 至最近的命令,可以这样:
+如果想以某个命令开始,只需在 `-l` 选项后面加上行号即可。比如,要显示行号 520 至最近的命令,可以这样:
+
```
$ fc -l 520
520 ls -l
@@ -76,10 +77,10 @@ $ fc -l 520
529 clear
530 fc -ln
531 fc -l
-
```
-要列出一段范围内的命令,将始末行号作为 "fc -l" 的参数即可,比如 520 至 525:
+要列出一段范围内的命令,将始、末行号作为 `fc -l` 的参数即可,比如 520 至 525:
+
```
$ fc -l 520 525
520 ls -l
@@ -88,10 +89,10 @@ $ fc -l 520 525
523 uname -a
524 echo "Welcome to OSTechNix"
525 sudo apcman -Syu
-
```
-除了使用行号,我们还可以使用字符。比如,要列出最近一个 "pwd" 至上一个命令之间的所以命令,只需要像下面这样使用起始字母即可:
+除了使用行号,我们还可以使用字符。比如,要列出最近一个 `pwd` 至最近一个命令之间的所有命令,只需要像下面这样使用起始字母即可:
+
```
$ fc -l p
521 pwd
@@ -110,90 +111,89 @@ $ fc -l p
534 fc -l 520
535 fc -l 522
536 fc -l l
-
```
-要列出所有 "pwd" 和 "more" 之间的命令,你可以都使用起始字母,像这样:
+要列出所有 `pwd` 和 `more` 之间的命令,你可以都使用起始字母,像这样:
+
```
$ fc -l p m
-
```
或者,使用开始命令的首字母以及结束命令的行号:
+
```
$ fc -l p 528
-
```
或者都使用行号:
+
```
$ fc -l 521 528
-
```
这三个命令都显示一样的结果。
-**编辑并执行上一个命令**
+#### 编辑并执行上一个命令
我们经常敲错命令,这时你可以用默认编辑器修正拼写错误并执行而不用将命令重新再敲一遍。
编辑并执行上一个命令:
+
```
$ fc
-
```
这会在默认编辑器里载入上一个命令。
-
![][2]
-你可以看到,我上一个命令是 "fc -l"。你可以随意修改,它会在你保存退出编辑器时自动执行。这在命令或参数又长又复杂时很有用。需要注意的是,它同时也可能是**毁灭性**的。比如,如果你的上一个命令是危险的 `rm -fr `,当它自动执行时你可能丢掉你的重要数据。所以,小心谨慎对待每一个命令。
+你可以看到,我上一个命令是 `fc -l`。你可以随意修改,它会在你保存退出编辑器时自动执行。这在命令或参数又长又复杂时很有用。需要注意的是,它同时也可能是**毁灭性**的。比如,如果你的上一个命令是危险的 `rm -fr `,当它自动执行时你可能丢掉你的重要数据。所以,小心谨慎对待每一个命令。
-**更改默认编辑器**
+#### 更改默认编辑器
+
+另一个有用的选项是 `-e` ,它可以用来为 `fc` 命令选择不同的编辑器。比如,如果我们想用 `nano` 来编辑上一个命令:
-另一个有用的选项是 **-e** ,它可以用来为 fc 命令选择不同的编辑器。比如,如果我们想用 "nano" 来编辑上一个命令:
```
$ fc -e nano
-
```
-这个命令会打开 nano 编辑器(而不是默认编辑器)编辑上一个命令。
+这个命令会打开 `nano` 编辑器(而不是默认编辑器)编辑上一个命令。
![][3]
-如果你觉得用 **-e** 选项太麻烦,你可以修改你的默认编辑器,只需要将环境变量 **FCEDIT** 设为你想要让 **fc** 使用的编辑器名称即可。
+如果你觉得用 `-e` 选项太麻烦,你可以修改你的默认编辑器,只需要将环境变量 `FCEDIT` 设为你想要让 `fc` 使用的编辑器名称即可。
+
+比如,要把 `nano` 设为默认编辑器,编辑你的 `~/.profile` 或其他初始化文件: (LCTT 译注:如果 `~/.profile` 不存在可自己创建;如果使用的是 bash ,可以编辑 `~/.bash_profile` )
-比如,要把 "nano" 设为默认编辑器,编辑你的 **~/.profile** 或其他初始化文件: ( LCTT 译注:如果 ~/.profile 不存在可自己创建;如果使用的是 bash ,可以编辑 ~/.bash_profile )
```
$ vi ~/.profile
-
```
添加下面一行:
+
```
FCEDIT=nano
-# ( LCTT译注:如果在子 shell 中会用到 fc ,最好在这里 `export FCEDIT` )
-
+# LCTT译注:如果在子 shell 中会用到 fc ,最好在这里 export FCEDIT
```
你也可以使用编辑器的完整路径:
+
```
FCEDIT=/usr/local/bin/emacs
-
```
-输入 **:wq** 保存退出。要使改动立即生效,运行以下命令:
+输入 `:wq` 保存退出。要使改动立即生效,运行以下命令:
+
```
$ source ~/.profile
-
```
-现在再输入 "fc" 就可以使用 "nano" 编辑器来编辑上一个命令了。
+现在再输入 `fc` 就可以使用 `nano` 编辑器来编辑上一个命令了。
-**不编辑而直接执行上一个命令**
+#### 不编辑而直接执行上一个命令
+
+我们现在知道 `fc` 命令不带任何参数的话会将上一个命令载入编辑器。但有时你可能不想编辑,仅仅是想再次执行上一个命令。这很简单,在末尾加上连字符(`-`)就可以了:
-我们现在知道 "fc" 命令不带任何参数的话会将上一个命令载入编辑器。但有时你可能不想编辑,仅仅是想再次执行上一个命令。这很简单,在末尾加上连字符(-)就可以了:
```
$ echo "Welcome to OSTechNix"
Welcome to OSTechNix
@@ -201,16 +201,16 @@ Welcome to OSTechNix
$ fc -e -
echo "Welcome to OSTechNix"
Welcome to OSTechNix
-
```
-如你所见,"fc" 带了 **-e** 选项,但并没有编辑上一个命令(例中的 echo " Welcome to OSTechNix")。
+如你所见,`fc` 带了 `-e` 选项,但并没有编辑上一个命令(例中的 `echo "Welcome to OSTechNix"`)。
-需要注意的是,有些选项仅对指定 shell 有效。比如下面这些选项可以用在 **zsh** 中,但在 Bash 或 Ksh 中则不能用。
+需要注意的是,有些选项仅对指定 shell 有效。比如下面这些选项可以用在 zsh 中,但在 Bash 或 Ksh 中则不能用。
-**显示命令的执行时间**
+#### 显示命令的执行时间
+
+想要知道命令是在什么时候执行的,可以用 `-d` 选项:
-想要知道命令是在什么时候执行的,可以用 **-d** 选项:
```
fc -ld
1 18:41 exit
@@ -228,12 +228,12 @@ fc -ld
13 18:43 cat ostechnix.txt
14 18:43 clear
15 18:43 fc -l
-
```
这样你就可以查看最近命令的具体执行时间了。
-使用选项 **-f** ,可以为每个命令显示完整的时间戳。
+使用选项 `-f` ,可以为每个命令显示完整的时间戳。
+
```
fc -lf
1 4/5/2018 18:41 exit
@@ -252,10 +252,10 @@ fc -ld
14 4/5/2018 18:43 clear
15 4/5/2018 18:43 fc -l
16 4/5/2018 18:43 fc -ld
-
```
-当然,欧洲的老乡们还可以使用 **-E** 选项来显示欧洲时间格式。
+当然,欧洲的老乡们还可以使用 `-E` 选项来显示欧洲时间格式。
+
```
fc -lE
2 5.4.2018 18:41 clear
@@ -274,22 +274,19 @@ fc -ld
15 5.4.2018 18:43 fc -l
16 5.4.2018 18:43 fc -ld
17 5.4.2018 18:49 fc -lf
-
```
### fc 用法总结
- * 当不带任何参数时,fc 将上一个命令载入默认编辑器。
- * 当带一个数字作为参数时,fc 将数字指定的命令载入默认编辑器。
- * 当带一个字符作为参数时,fc 将最近一个以指定字符开头的命令载入默认编辑器。
+ * 当不带任何参数时,`fc` 将上一个命令载入默认编辑器。
+ * 当带一个数字作为参数时,`fc` 将数字指定的命令载入默认编辑器。
+ * 当带一个字符作为参数时,`fc` 将最近一个以指定字符开头的命令载入默认编辑器。
* 当有两个参数时,它们分别指定需要列出的命令范围的开始和结束。
-
-
更多细节,请参考 man 手册。
+
```
$ man fc
-
```
好了,今天就这些。希望这篇文章能帮助到你。更多精彩内容,敬请期待!
@@ -300,9 +297,9 @@ $ man fc
via: https://www.ostechnix.com/the-fc-command-tutorial-with-examples-for-beginners/
作者:[SK][a]
-译者:[Dotcra](https://github.com/Dotcra)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[Dotcra](https://github.com/Dotcra)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From fc5922a808d935d3c0cfe243dcd7e607963a9a53 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 19:11:10 +0800
Subject: [PATCH 064/220] PUB:20180405 The fc Command Tutorial With Examples
For Beginners.md
@Dotcra https://linux.cn/article-9572-1.html
---
...0180405 The fc Command Tutorial With Examples For Beginners.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180405 The fc Command Tutorial With Examples For Beginners.md (100%)
diff --git a/translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md b/published/20180405 The fc Command Tutorial With Examples For Beginners.md
similarity index 100%
rename from translated/tech/20180405 The fc Command Tutorial With Examples For Beginners.md
rename to published/20180405 The fc Command Tutorial With Examples For Beginners.md
From f592449c95838b55af57f163b3323a890a48cde4 Mon Sep 17 00:00:00 2001
From: Dot
Date: Mon, 23 Apr 2018 20:08:24 +0800
Subject: [PATCH 065/220] [translating] 20180417 How To Browse Stack Overflow
From Terminal.md
---
.../tech/20180417 How To Browse Stack Overflow From Terminal.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md b/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md
index 1ebf17ef68..2ad6fb1a08 100644
--- a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md
+++ b/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md
@@ -1,3 +1,5 @@
+[translating by Dotcra]
+
How To Browse Stack Overflow From Terminal
======
From a2498bcef78d4138ebb4ed03de825a49f804bdb6 Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Mon, 23 Apr 2018 21:01:27 +0800
Subject: [PATCH 066/220] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20180403 Why I love ARM and PowerPC.md | 78 -------------------
.../20180403 Why I love ARM and PowerPC.md | 77 ++++++++++++++++++
2 files changed, 77 insertions(+), 78 deletions(-)
delete mode 100644 sources/tech/20180403 Why I love ARM and PowerPC.md
create mode 100644 translated/tech/20180403 Why I love ARM and PowerPC.md
diff --git a/sources/tech/20180403 Why I love ARM and PowerPC.md b/sources/tech/20180403 Why I love ARM and PowerPC.md
deleted file mode 100644
index 1ea07ad6e4..0000000000
--- a/sources/tech/20180403 Why I love ARM and PowerPC.md
+++ /dev/null
@@ -1,78 +0,0 @@
-translating by kennethXia
-Why I love ARM and PowerPC
-======
-
-
-Recently I've been asked why I mention [ARM][1] and [PowerPC][2] so often on my blogs and in my tweets. I have two answers: one is personal, the other technical.
-
-### The personal
-
-Once upon a time, I studied environmental protection. While working on my PhD, I was looking for a new computer. As an environmentally aware person, I wanted a high-performing computer that was also efficient. That is how I first became interested in the PowerPC and discovered [Pegasos][3], a PowerPC workstation created by [Genesi][4].
-
-I had already used [RS/6000][5] (PowerPC), [SGI][6] (MIPS), [HP-UX][7] (PA-RISC), and [VMS][8] (Alpha) both as a server and a workstation, and on my PC I used Linux, not Windows, so using a different CPU architecture was not a barrier. [Pegasos][9], which was small and efficient enough for home use, was my first workstation.
-
-Soon I was working for Genesi, enabling [openSUSE][10], Ubuntu, and various other Linux distributions on Pegasos and providing quality assurance and community support. Pegasos was followed by [EFIKA][11], another PowerPC board. It felt strange at first to use an embedded system after using workstations. But as one of the first affordable developer boards, it was the start of a revolution.
-
-I was working on some large-scale server projects when I received another interesting piece of hardware from Genesi: a [Smarttop][12] and a [Smartbook][13] based on ARM. My then-favorite Linux distribution, openSUSE, also received a dozen of these machines. This gave a big boost to ARM-related openSUSE developments at a time when very few ARM machines were available.
-
-Although I have less time available these days, I try to stay up-to-date on ARM and PowerPC news. This helps me support syslog-ng users on non-x86 platforms. And when I have half an hour free, I hack one of my ARM machines. I did some benchmarks on the [Raspberry Pi 2][14] with [syslog-ng][15], and the [results were quite surprising][16]. Recently, I built a music player using a Raspberry Pi, a USB sound card, and the [Music Player Daemon][17], and I use it regularly.
-
-### The technical
-
-Diversity is good: It creates competition, and competition creates better products. While x86 is a solid generic workhorse, chips like ARM and PowerPC (and many others) are better suited in various situations.
-
-If you have an [Android][18] mobile device or an [Apple][19] iPhone or iPad, there's a good chance it is running on an ARM SoC (system on chip). Same with a network-attached storage server. The reason is quite simple: power efficiency. You don't want to constantly recharge batteries or pay more for electricity than you did for your router.
-
-ARM is also conquering the enterprise server world with its 64-bit ARMv8 chips. Many tasks require minimal computing capacity; on the other hand, power efficiency and fast I/O are key— think storage, static web content, email, and other storage- and network-intensive functions. A prime example is [Ceph][20], a distributed object storage and file system. [SoftIron][21], which uses CentOS as reference software on its ARMv8 developer hardware, is working on Ceph-based turnkey storage appliances.
-
-Most people know PowerPC as the former CPU of [Apple Mac][22] machines. While it is no longer used as a generic desktop CPU, it still functions in routers, telecommunications equipment. And [IBM][23] continued to produce chips for high-performance servers. A few years ago, with the introduction of POWER8, IBM opened up the architecture under the aegis of the [OpenPOWER Foundation][24]. POWER8 is an ideal platform for HPC, big data, and analytics, where memory bandwidth is key. POWER9 is right around the corner.
-
-These are all server applications, but there are plans for end-user devices. Raptor Engineering is working on a [POWER9 workstation][25], and there is also an initiative to [create a notebook][26] based on a Freescale/NXP QorIQ e6500 chip. Of course, these machines are not for everybody—you can't install your favorite Windows game or commercial application on them. But they are great for PowerPC developers and enthusiasts, or anyone wanting a fully open system, from hardware to firmware to applications.
-
-### The dream
-
-My dream is a completely x86-free environment—not because I don't like x86, but because I like diversity and always use the most suitable tool for the job. If you look at the [graph][27] on Raptor Engineering's page, you will see that, depending on your use case, ARM and POWER can replace most of x86. Right now I compile, package, and test syslog-ng in x86 virtual machines running on my laptop. Using a strong enough ARMv8 or PowerPC machine, either as a workstation or a server, I could avoid x86 for this kind of tasks.
-
-Right now I am waiting for the next generation of [Pinebook][28] to arrive, as I was told at [FOSDEM][29] in February that the next version is expected to offer much higher performance. Unlike Chromebooks, this ARM-powered laptop runs Linux by design, not as a hack. For a desktop, I am looking for ARMv8 workstation-class hardware. Some are already available—like the [ThunderX Desktop][30] from Avantek—but they do not yet feature the latest, fastest, and more importantly, most energy-efficient ARMv8 CPU generations. Until these arrive, I'll use my Pixel C laptop running Android. It's not as easy and flexible as Linux, but it has a powerful ARM SoC and a Linux kernel at its heart.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/4/why-i-love-arm-and-powerpc
-
-作者:[Peter Czanik][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/czanik
-[1]:https://en.wikipedia.org/wiki/ARM_architecture
-[2]:https://en.wikipedia.org/wiki/PowerPC
-[3]:https://genesi.company/products/opendesktop
-[4]:https://genesi.company/
-[5]:https://en.wikipedia.org/wiki/RS/6000
-[6]:https://en.wikipedia.org/wiki/Silicon_Graphics#Workstations
-[7]:https://en.wikipedia.org/wiki/HP-UX
-[8]:https://en.wikipedia.org/wiki/OpenVMS#Port_to_DEC_Alpha
-[9]:https://en.wikipedia.org/wiki/Pegasos
-[10]:https://www.opensuse.org/
-[11]:https://genesi.company/products/efika/5200b
-[12]:https://genesi.company/products/efika
-[13]:https://genesi.company/products/smartbook
-[14]:https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
-[15]:https://syslog-ng.com/open-source-log-management
-[16]:https://syslog-ng.com/blog/syslog-ng-raspberry-pi-2/
-[17]:https://www.musicpd.org/
-[18]:https://www.android.com/
-[19]:http://www.apple.com/
-[20]:http://ceph.com/
-[21]:http://softiron.co.uk/
-[22]:https://en.wikipedia.org/wiki/Power_Macintosh
-[23]:https://www.ibm.com/us-en/
-[24]:http://openpowerfoundation.org/
-[25]:https://www.raptorcs.com/TALOSII/
-[26]:http://www.powerpc-notebook.org/en/
-[27]:https://secure.raptorengineering.com/TALOS/power_advantages.php
-[28]:https://www.pine64.org/?page_id=3707
-[29]:https://fosdem.org/2018/
-[30]:https://www.avantek.co.uk/store/avantek-32-core-cavium-thunderx-arm-desktop.html
diff --git a/translated/tech/20180403 Why I love ARM and PowerPC.md b/translated/tech/20180403 Why I love ARM and PowerPC.md
new file mode 100644
index 0000000000..9cab22b8b0
--- /dev/null
+++ b/translated/tech/20180403 Why I love ARM and PowerPC.md
@@ -0,0 +1,77 @@
+为啥我喜欢 ARM 和 PowerPC
+======
+
+
+最近我被问起为啥在博客和推特里经常提到 [ARM][1] 和 [PowerPC][2]。我有两个答案:一个是个人原因,另一个是技术上的。
+
+### 个人原因
+
+从前,我是学环境保护的。在我读博的时候,我准备买个新电脑。作为一个环保人士,我需要一台强劲且环保的电脑。这就是我开始对 PowerPC 感兴趣的原因,我找到了 [Pegasos][3], 一台 [Genesi][4] 公司制造的 PowerPC 工作站。
+
+我还用过 [RS/6000][5](PowerPC)、[SGI][6](MIPS)、[HP-UX][7](PA-RISC)和 [VMS][8](Alpha)的服务器和工作站,由于我的 PC 使用 Linux 而非 Windows,所以使用不同的 CPU 架构对我来说并不是什么障碍。[Pegasos][9] 是我的第一台工作站,它小巧且对家用来说性能足够。
+
+很快我就开始为 Genesi 工作,为 Pegasos 移植 [openSUSE][10], Ubuntu 和其他 Linux 发行版,并提供质量保证和社区支持。继 Pegasos 之后是 [EFIKA][11],另一款基于 PowerPC 的开发板。在用过工作站之后,刚开始使用嵌入式系统会感觉有点奇怪。但是作为第一代普及价位的开发板,这是一场革命的开端。
+
+当我正在做一些大规模服务器项目的时候,我收到了 Genesi 的另一些有趣的硬件:基于 ARM 的 [Smarttop][12] 和 [Smartbook][13]。我当时最喜欢的 Linux 发行版 openSUSE 也收到了一打这种机器。这在当时 ARM 电脑非常稀缺的情况下,极大地促进了 ARM 版 openSUSE 项目的开发。
+
+尽管最近我的空闲时间变少了,我仍尽量保持对 ARM 和 PowerPC 新闻的关注。这有助于我支持非 x86 平台上的 syslog-ng 用户。只要有半个小时的空闲,我就会去捣鼓一下 ARM 机器。我在[树莓派 2][14] 上用 [syslog-ng][15] 做了一些基准测试,[结果相当令人惊讶][16]。我最近用树莓派、一块 USB 声卡和[音乐播放守护进程][17]做了个音乐播放器,我经常使用它。
+
+### 技术方面
+
+多样性是件好事:它创造了竞争,而竞争创造了更好的产品。虽然 x86 是一款强劲的通用处理器,但 ARM 和 PowerPC(以及许多其他)这样的芯片在多种特定场景下显得更适合。
+
+如果你有一部运行[安卓][18]的移动设备,或者[苹果][19]的 iPhone 或 iPad,极有可能它使用的就是基于 ARM 的 SoC(片上系统)。网络存储服务器也一样。原因很简单:省电。你不会希望一直在给电池充电,也不想在电费上花的钱比买路由器的钱还多。
+
+ARM 亦在使用 64 位 ARMv8 芯片征战服务器市场。很多任务只需要极少的计算能力,另一方面省电和快速 I/O 才是关键,比如存储、静态网页服务、电子邮件和其他存储及网络密集型的功能。一个最好的例子就是 [Ceph][20],一个分布式的对象存储和文件系统。[SoftIron][21] 在其 ARMv8 开发板硬件上以 CentOS 作为参考软件,正在开发基于 Ceph 的交钥匙存储设备。
+
+众所周知 PowerPC 是旧版苹果 [Mac][22] 电脑上的 CPU。虽然它不再用作通用桌面电脑的 CPU,它依然在路由器和电信设备里发挥作用。而且 [IBM][23] 仍在为高端服务器制造芯片。几年前,随着 POWER8 的引入,IBM 在 [OpenPOWER 基金会][24] 的支持下开放了这个架构。对于内存带宽是关键的 HPC、大数据和分析来说,POWER8 是非常理想的平台。POWER9 也正呼之欲出。
+
+这些都是服务器应用,但也有计划用于终端用户。猛禽工程(Raptor Engineering)正在开发一款基于 [POWER9 的工作站][25],也有一个基于飞思卡尔/恩智浦 QorIQ e6500 芯片[制造笔记本][26]的倡议。当然,这些电脑并不适合所有人,你不能在它们上面安装你喜欢的 Windows 游戏或者商业应用。但它们对于 PowerPC 开发人员和爱好者,或者任何想要一个从硬件到固件再到应用程序都完全开放的系统的人来说,是理想的选择。
+
+### 梦想
+
+我的梦想是完全没有 x86 的环境,不是因为我讨厌 x86 ,而是因为我喜欢多样化而且总是希望使用最适合工作的工具。如果你看看猛禽工程网页上的[图][27],根据不同的使用情景, ARM 和 POWER 完全可以代替 x86 。现在,我在笔记本的 x86 虚拟机上编译、打包和测试 syslog-ng。如果能用上足够强劲的 ARMv8 或者 PowerPC 电脑,无论工作站还是服务器,我就能避免在 x86 上做这些事。
+
+现在我正在等待下一代[菠萝本][28]的到来,就像我在二月份 [FOSDEM][29] 上说的,下一代有望提供更高的性能。和 Chrome 本不同的是,这个 ARM 笔记本设计用于运行 Linux 而非仅是个客户端(译者注:Chrome 笔记本只提供基于网页的应用)。作为桌面系统,我在寻找 ARMv8 工作站级别的硬件。有些已经接近完成——就像 Avantek 公司的 [雷神X 台式机][30]——不过他们还没有装备最新最快最重要也最节能的 ARMv8 CPU。当这些都实现了,我将用我的 Pixel C 笔记本运行安卓。它不像Linux那样简单灵活,但它以强大的ARM SoC和Linux内核为基础。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/why-i-love-arm-and-powerpc
+
+作者:[Peter Czanik][a]
+译者:[kennethXia](https://github.com/kennethXia)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/czanik
+[1]:https://en.wikipedia.org/wiki/ARM_architecture
+[2]:https://en.wikipedia.org/wiki/PowerPC
+[3]:https://genesi.company/products/opendesktop
+[4]:https://genesi.company/
+[5]:https://en.wikipedia.org/wiki/RS/6000
+[6]:https://en.wikipedia.org/wiki/Silicon_Graphics#Workstations
+[7]:https://en.wikipedia.org/wiki/HP-UX
+[8]:https://en.wikipedia.org/wiki/OpenVMS#Port_to_DEC_Alpha
+[9]:https://en.wikipedia.org/wiki/Pegasos
+[10]:https://www.opensuse.org/
+[11]:https://genesi.company/products/efika/5200b
+[12]:https://genesi.company/products/efika
+[13]:https://genesi.company/products/smartbook
+[14]:https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
+[15]:https://syslog-ng.com/open-source-log-management
+[16]:https://syslog-ng.com/blog/syslog-ng-raspberry-pi-2/
+[17]:https://www.musicpd.org/
+[18]:https://www.android.com/
+[19]:http://www.apple.com/
+[20]:http://ceph.com/
+[21]:http://softiron.co.uk/
+[22]:https://en.wikipedia.org/wiki/Power_Macintosh
+[23]:https://www.ibm.com/us-en/
+[24]:http://openpowerfoundation.org/
+[25]:https://www.raptorcs.com/TALOSII/
+[26]:http://www.powerpc-notebook.org/en/
+[27]:https://secure.raptorengineering.com/TALOS/power_advantages.php
+[28]:https://www.pine64.org/?page_id=3707
+[29]:https://fosdem.org/2018/
+[30]:https://www.avantek.co.uk/store/avantek-32-core-cavium-thunderx-arm-desktop.html
From f914bbbf8c667882aaa82f2cfa3c85f147e636d4 Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Mon, 23 Apr 2018 21:03:35 +0800
Subject: [PATCH 067/220] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
一点小修改
---
translated/tech/20180403 Why I love ARM and PowerPC.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/translated/tech/20180403 Why I love ARM and PowerPC.md b/translated/tech/20180403 Why I love ARM and PowerPC.md
index 9cab22b8b0..c92ba36b19 100644
--- a/translated/tech/20180403 Why I love ARM and PowerPC.md
+++ b/translated/tech/20180403 Why I love ARM and PowerPC.md
@@ -32,7 +32,7 @@ ARM 亦在使用 64-bit ARMv8 芯片征战服务器市场。很多任务只需
我的梦想是完全没有 x86 的环境,不是因为我讨厌 x86 ,而是因为我喜欢多样化而且总是希望使用最适合工作的工具。如果你看看猛禽工程网页上的[图][27],根据不同的使用情景, ARM 和 POWER 完全可以代替 x86 。现在,我在笔记本的 x86 虚拟机上编译、打包和测试 syslog-ng。如果能用上足够强劲的 ARMv8 或者 PowerPC 电脑,无论工作站还是服务器,我就能避免在 x86 上做这些事。
-现在我正在等待下一代[菠萝本][28]的到来,就像我在二月份 [FOSDEM][29] 上说的,下一代有望提供更高的性能。和 Chrome 本不同的是,这个 ARM 笔记本设计用于运行 Linux 而非仅是个客户端(译者注:Chrome 笔记本只提供基于网页的应用)。作为桌面系统,我在寻找 ARMv8 工作站级别的硬件。有些已经接近完成——就像 Avantek 公司的 [雷神X 台式机][30]——不过他们还没有装备最新最快最重要也最节能的 ARMv8 CPU。当这些都实现了,我将用我的 Pixel C 笔记本运行安卓。它不像Linux那样简单灵活,但它以强大的ARM SoC和Linux内核为基础。
+现在我正在等待下一代[菠萝本][28]的到来,正如我在二月份 [FOSDEM][29] 上了解到的,下一代有望提供更高的性能。和 Chromebook 不同的是,这款 ARM 笔记本从设计上就是用来运行 Linux 的,而不是靠取巧的手段才能运行。作为桌面系统,我在寻找 ARMv8 工作站级别的硬件。有些已经接近完成,就像 Avantek 公司的 [雷神X 台式机][30],不过它们还没有装备最新、最快,更重要的是最节能的 ARMv8 CPU。在这些都实现之前,我会继续使用运行安卓的 Pixel C 笔记本。它不像 Linux 那样简单灵活,但它以强大的 ARM SoC 和 Linux 内核为基础。
--------------------------------------------------------------------------------
From 834ab8c51f7f761e5ed0b1d986e91c7fd1a98245 Mon Sep 17 00:00:00 2001
From: kennethXia <37970750+kennethXia@users.noreply.github.com>
Date: Mon, 23 Apr 2018 21:12:59 +0800
Subject: [PATCH 068/220] Translating ...
---
...0180410 Bootiso Lets You Safely Create Bootable USB Drive.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md b/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md
index e109f56406..46f90e5f11 100644
--- a/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md
+++ b/sources/tech/20180410 Bootiso Lets You Safely Create Bootable USB Drive.md
@@ -1,3 +1,5 @@
+Translating by kennethXia
+
Bootiso Lets You Safely Create Bootable USB Drive
======
From e3f42935ee4a13a6d82691bc1a0f7d02460a7962 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 21:15:51 +0800
Subject: [PATCH 069/220] PRF:20180329 Protect Your Websites with Let-s
Encrypt.md
@pinewall
---
...rotect Your Websites with Let-s Encrypt.md | 54 ++++++++++---------
1 file changed, 29 insertions(+), 25 deletions(-)
diff --git a/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md b/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
index ed7a326b54..cc5e851acb 100644
--- a/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
+++ b/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
@@ -1,9 +1,11 @@
使用 Let's Encrypt 保护你的网站
======
-
-曾几何时,通过证书授权机构搭建基本的HTTPS网站需要每年花费数百美元,而且搭建的过程复杂且容易出错。现在我们免费使用 [Let's Encrypt][1],而且搭建过程也只需要几分钟。
+> 未加密的 HTTP 会话暴露于滥用之中,用 Let's Encrypt 把它们保护起来。
+
+
+曾几何时,通过证书授权机构搭建基本的 HTTPS 网站需要每年花费数百美元,而且搭建的过程复杂且容易出错。现在我们免费使用 [Let's Encrypt][1],而且搭建过程也只需要几分钟。
### 为何进行加密?
@@ -11,66 +13,63 @@
+ 窃听用户数据包
+ 捕捉用户登录
- + 注入广告和“重要”消息
- + 注入木马
- + 注入 SEO 垃圾邮件和链接
- + 注入挖矿脚本
+ + 注入[广告][12]和[“重要”消息][13]
+ + 注入[木马][14]
+ + 注入 [SEO 垃圾邮件和链接][15]
+ + 注入[挖矿脚本][16]
-网络服务提供商就是最大的代码注入者。那么如何挫败它们的非法行径呢?你最好的防御手段就是HTTPS。让我们回顾一下HTTPS的工作原理。
+网络服务提供商就是最大的代码注入者。那么如何挫败它们的非法行径呢?你最好的防御手段就是 HTTPS。让我们回顾一下 HTTPS 的工作原理。
### 信任链
-你可以在你的网站和每个授权访问用户之间建立非对称加密。这是一种非常强的保护:GPG(GNU Privacy Guard, 参考[如何在 Linux 中加密邮件][2])和 OpenSSH 是非对称加密的通用工具。它们依赖于公钥-私钥对,其中公钥可以任意共享,但私钥必须受到保护且不能共享。公钥用于加密,私钥用于解密。
+你可以在你的网站和每个授权访问用户之间建立非对称加密。这是一种非常强的保护:GPG(GNU Privacy Guard, 参考[如何在 Linux 中加密邮件][2])和 OpenSSH 就是非对称加密的通用工具。它们依赖于公钥-私钥对,其中公钥可以任意分享,但私钥必须受到保护且不能分享。公钥用于加密,私钥用于解密。
-但上述方法无法适用于随机的网页浏览,因为建立会话之前需要交换公钥,你需要生成并管理密钥对。HTTPS 会话可以自动完成公钥分发,而且购物或银行之类的敏感网站还会使用第三方证书颁发机构 (CA) 验证证书,例如 Comodo, Verisign 和 Thawte。
+但上述方法无法适用于随机的网页浏览,因为建立会话之前需要交换公钥,你需要生成并管理密钥对。HTTPS 会话可以自动完成公钥分发,而且购物或银行之类的敏感网站还会使用第三方证书颁发机构(CA)验证证书,例如 Comodo、 Verisign 和 Thawte。
-当你访问一个HTTPS网站时,网站给你的网页浏览器返回了一个数字证书。这个证书说明你的会话被强加密,而且提供了网站信息,包括组织名称,颁发证书的组织和证书颁发机构名称等。你可以点击网页浏览器地址栏的小锁头来查看这些信息(图1),也包括了证书本身。
+当你访问一个 HTTPS 网站时,网站给你的网页浏览器返回了一个数字证书。这个证书说明你的会话被强加密,而且提供了该网站信息,包括组织名称、颁发证书的组织和证书颁发机构名称等。你可以点击网页浏览器地址栏的小锁头来查看这些信息(图 1),也包括了证书本身。
+![][4]
-![页面信息][4]
+*图1: 点击网页浏览器地址栏上的锁头标记查看信息*
-图1: 点击网页浏览器地址栏上的锁头标记查看信息
+包括 Opera、 Chromium 和 Chrome 在内的主流浏览器,验证网站数字证书的合法性都依赖于证书颁发机构。小锁头标记可以让你一眼看出证书状态;绿色意味着使用强 SSL 加密且运营实体经过验证。网页浏览器还会对恶意网站、SSL 证书配置有误的网站和不被信任的自签名证书网站给出警告。
-[已获授权使用][5]
-
-包括 Opera, Chromium 和 Chrome 在内的主流浏览器,验证网站数字证书的合法性都依赖于证书颁发机构。小锁头标记可以让你一眼看出证书状态;绿色意味着使用强 SSL 加密且运营实体经过验证。网页浏览器还会对恶意网站、SSL 证书配置有误的网站和不被信任的自签名证书网站给出警告。
-
-那么网页浏览器如何判断网站是否可信呢?浏览器自带根证书库,包含了一系列根证书,存储在`/usr/share/ca-certificates/mozilla/`。网站证书是否可信可以通过根证书库进行检查。就像你 Linux 系统上其它软件那样,根证书库也由包管理器维护。对于 Ubuntu,对应的包是 `ca-certificates`。Linux 根证书库本身[由 Mozilla 维护][6]。
+那么网页浏览器如何判断网站是否可信呢?浏览器自带根证书库,包含了一系列根证书,存储在 `/usr/share/ca-certificates/mozilla/` 之类的地方。网站证书是否可信可以通过根证书库进行检查。就像你 Linux 系统上其它软件那样,根证书库也由包管理器维护。对于 Ubuntu,对应的包是 `ca-certificates`,这个 Linux 根证书库本身是[由 Mozilla 维护][6]的。
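+
+(LCTT 译注:如果你想看看根证书库里有什么,可以用下面的命令。这是译者补充的示例,假设系统为 Ubuntu 且安装了 openssl:)
+
+```
+# 打印根证书库中每个证书的主题信息(这里只显示前 5 条)
+for f in /usr/share/ca-certificates/mozilla/*.crt; do
+    openssl x509 -in "$f" -noout -subject
+done | head -n 5
+```
+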
可见,整个工作流程需要复杂的基础设施才能完成。在你进行购物或金融等敏感在线操作时,你信任了无数陌生人对你的保护。
### 无处不加密
-Let's Encrypt 是一家全球证书颁发机构,类似于其它商业根证书颁发机构。Let's Encrpt 由非营利性组织因特网安全研究小组 (Internet Security Research Group, ISRG) 创立,目标是简化网站的安全加密。在我看来,出于后面我会提到的原因,该证书不足以胜任购物及银行网站的安全加密,但很适合加密博客、新闻和信息门户这类不涉及金融操作的网站。
+Let's Encrypt 是一家全球证书颁发机构,类似于其它商业根证书颁发机构。Let's Encrypt 由非营利性组织因特网安全研究小组(ISRG)创立,目标是简化网站的安全加密。在我看来,出于后面我会提到的原因,该证书不足以胜任购物及银行网站的安全加密,但很适合加密博客、新闻和信息门户这类不涉及金融操作的网站。
-使用 Let's Encrypt 有三种方式。推荐使用电子前沿基金会 (Electronic Frontier Foundation, EFF) 开发的 [Cerbot 客户端][7]。使用该客户端需要在网站服务器上执行 shell 操作。
+使用 Let's Encrypt 有三种方式。推荐使用电子前沿基金会(EFF)开发的 [Certbot 客户端][7]。使用该客户端需要在网站服务器上执行 shell 操作。
如果你使用的是共享托管主机,你很可能无法执行 shell 操作。这种情况下,最简单的方法是使用[支持 Let's Encrpt 的托管主机][8]。
如果你的托管主机不支持 Let's Encrypt,但支持自定义证书,那么你可以使用 Certbot [手动创建并上传你的证书][8]。这是一个复杂的过程,你需要彻底地研究文档。
-安装证书后,测试使用[ SSL 服务器测试][9]。
+安装证书后,使用 [SSL 服务器测试][9]来测试你的服务器。
-Let's Encrypt 的电子证书有效期为90天。Certbot 安装过程中添加了一个证书自动续期的计划任务,也提供了测试证书自动续期是否成功的命令。允许使用已有的私钥或证书签名请求 (certificate signing request, CSR),允许创建通配符证书。
+Let's Encrypt 的电子证书有效期为 90 天。Certbot 安装过程中添加了一个证书自动续期的计划任务,也提供了测试证书自动续期是否成功的命令。允许使用已有的私钥或证书签名请求(CSR),允许创建通配符证书。
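+
+(LCTT 译注:下面是译者补充的一个最简示例,假设服务器使用 Nginx、域名为 example.com,仅供参考:)
+
+```
+# 获取并安装证书,certbot 会自动修改 Nginx 配置
+sudo certbot --nginx -d example.com
+
+# 演习一次续期,验证自动续期任务能否正常工作
+sudo certbot renew --dry-run
+```
+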
### 限制
-Let's Encrypt 有如下限制:它只执行域名验证,即只要有域名控制权就可以获得证书。这是比较基础的 SSL。它不支持组织验证(Organization Validation, OV) 或扩展验证(Extended Validation, EV),因为运营实体验证无法自动完成。我不会信任使用 Let's Encrypt 证书的购物或银行网站,它们应该购买支持运营实体验证的完整版本。
+Let's Encrypt 有如下限制:它只执行域名验证,即只要有域名控制权就可以获得证书。这是比较基础的 SSL。它不支持组织验证(OV)或扩展验证(EV),因为运营实体验证无法自动完成。我不会信任使用 Let's Encrypt 证书的购物或银行网站,它们应该购买支持运营实体验证的完整版本。
作为非营利性组织提供的免费服务,不提供商业支持,只提供不错的文档和社区支持。
因特网中恶意无处不在,一切数据都应该加密。从使用 [Let's Encrypt][10] 保护你的网站用户开始吧。
-想要学习更多 Linux 知识,请参考 Linux 基金会和 edX 提供的免费课程 ["Linux 入门"][11]。
+想要学习更多 Linux 知识,请参考 Linux 基金会和 edX 提供的免费课程 [“Linux 入门”][11]。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/3/protect-your-websites-lets-encrypt
作者:[CARLA SCHRODER][a]
-译者:[pinewall](https://github.com/pinewall)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[pinewall](https://github.com/pinewall)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -86,3 +85,8 @@ via: https://www.linux.com/learn/intro-to-linux/2018/3/protect-your-websites-let
[9]:https://www.ssllabs.com/ssltest/
[10]:https://letsencrypt.org/
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[12]:https://www.thesslstore.com/blog/third-party-content-injection/
+[13]:https://blog.ryankearney.com/2013/01/comcast-caught-intercepting-and-altering-your-web-traffic/
+[14]:https://www.eff.org/deeplinks/2018/03/we-still-need-more-https-government-middleboxes-caught-injecting-spyware-ads-and
+[15]:https://techglimpse.com/wordpress-injected-with-spam-security/
+[16]:https://thehackernews.com/2018/03/cryptocurrency-spyware-malware.html
From a7efff56b43ca241256b9e6ae08583ed0ac51f7c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 23 Apr 2018 21:16:26 +0800
Subject: [PATCH 070/220] PUB:20180329 Protect Your Websites with Let-s
Encrypt.md
@pinewall https://linux.cn/article-9573-1.html
---
.../20180329 Protect Your Websites with Let-s Encrypt.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180329 Protect Your Websites with Let-s Encrypt.md (100%)
diff --git a/translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md b/published/20180329 Protect Your Websites with Let-s Encrypt.md
similarity index 100%
rename from translated/tech/20180329 Protect Your Websites with Let-s Encrypt.md
rename to published/20180329 Protect Your Websites with Let-s Encrypt.md
From 32dcaa1fd77710a98c64014a16a8012bc9c2a7b1 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Mon, 23 Apr 2018 23:09:21 +0800
Subject: [PATCH 071/220] Update 20180325 Could we run Python 2 and Python 3
code in the same VM with no code changes.md
---
... 2 and Python 3 code in the same VM with no code changes.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md b/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
index 1c43c1e430..7ec1044acd 100644
--- a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
+++ b/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
@@ -1,3 +1,6 @@
+Translating by MjSeven
+
+
Could we run Python 2 and Python 3 code in the same VM with no code changes?
======
From f8cd767b0eac00117bfc8952a3e5bc7df386ee2f Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 24 Apr 2018 08:38:59 +0800
Subject: [PATCH 072/220] translated
---
...l new projects to try in COPR for April.md | 38 +++++++++----------
1 file changed, 18 insertions(+), 20 deletions(-)
diff --git a/sources/tech/20180416 4 cool new projects to try in COPR for April.md b/sources/tech/20180416 4 cool new projects to try in COPR for April.md
index 871dddda24..518020ccc7 100644
--- a/sources/tech/20180416 4 cool new projects to try in COPR for April.md
+++ b/sources/tech/20180416 4 cool new projects to try in COPR for April.md
@@ -1,24 +1,22 @@
-translating---geekpi
-
-4 cool new projects to try in COPR for April
+4 月 COPR 中 4 个新的酷项目
======

-COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
+COPR 是个人软件仓库的[集合][1],其中的软件不由 Fedora 提供。有些软件不符合易于打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由且开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经该项目签名。但是,它是尝试新软件或实验性软件的一种很好的方式。
-Here’s a set of new and interesting projects in COPR.
+这是 COPR 中一系列新的和有趣的项目。
### Anki
-[Anki][2] is a program that helps you learn and remember things using spaced repetition. You can create cards and organize them into decks, or download [existing decks][3]. A card has a question on one side and an answer on the other. It may also include images, video or audio. How well you answer each card determines how often you see that particular card in the future.
+[Anki][2] 是一个程序,它使用间隔重复帮助你学习和记忆事物。你可以创建卡片并将其组织成卡组,或下载[现有卡组][3]。卡片的一面有问题,另一面有答案。它可能还包括图像、视频或音频。你对每张卡的回答好坏决定了你将来看到特定卡的频率。
-While Anki is already in Fedora, this repo provides a newer version.
+虽然 Anki 已经在 Fedora 中,但这个仓库提供了一个更新的版本。
![][4]
-#### Installation instructions
+#### 安装说明
-The repo currently provides Anki for Fedora 27, 28, and Rawhide. To install Anki, use these commands:
+仓库目前为 Fedora 27、28 和 Rawhide 提供 Anki。要安装 Anki,请使用以下命令:
```
sudo dnf copr enable thomasfedb/anki
sudo dnf install anki
@@ -27,11 +25,11 @@ sudo dnf install anki
### Fd
-[Fd][5] is a command-line utility that’s a simple and slightly faster alternative to [find][6]. It can execute commands on found items in parallel. Fd also uses colorized terminal output and ignores hidden files and patterns specified in .gitignore by default.
+[Fd][5] 是一个命令行工具,它是 [find][6] 的一个简单而稍快的替代品。它可以对找到的项目并行地执行命令。fd 还使用彩色的终端输出,并默认忽略隐藏文件以及 .gitignore 中指定模式的文件。
-#### Installation instructions
+#### 安装说明
-The repo currently provides fd for Fedora 26, 27, 28, and Rawhide. To install fd, use these commands:
+仓库目前为 Fedora 26、27、28 和 Rawhide 提供 fd。要安装 fd,请使用以下命令:
```
sudo dnf copr enable keefle/fd
sudo dnf install fd
@@ -40,15 +38,15 @@ sudo dnf install fd
### KeePass
-[KeePass][7] is a password manager. It holds all passwords in one end-to-end encrypted database locked with a master key or key file. The passwords can be organized into groups and generated by the program’s built-in generator. Among its other features is Auto-Type, which can provide a username and password to selected forms.
+[KeePass][7]是一个密码管理器。它将所有密码保存在一个由主密钥或密钥文件锁定的端对端加密数据库中。密码可以组织成组并由程序的内置生成器生成。其他功能包括自动输入,它可以为选定的表单输入用户名和密码。
-While KeePass is already in Fedora, this repo provides the newest version.
+虽然 KeePass 已经在 Fedora 中,但这个仓库提供了最新版本。
![][8]
-#### Installation instructions
+#### 安装说明
-The repo currently provides KeePass for Fedora 26 and 27. To install KeePass, use these commands:
+仓库目前为 Fedora 26 和 27 提供 KeePass。要安装 KeePass,请使用以下命令:
```
sudo dnf copr enable mavit/keepass
sudo dnf install keepass
@@ -57,11 +55,11 @@ sudo dnf install keepass
### jo
-[Jo][9] is a command-line utility that transforms input to JSON strings or arrays. It features a simple [syntax][10] and recognizes booleans, strings and numbers. In addition, jo supports nesting and can nest its own output as well.
+[Jo][9] 是一个将输入转换为 JSON 字符串或数组的命令行工具。它有一个简单的[语法][10]并识别布尔值、字符串和数字。另外,jo 支持嵌套并且可以嵌套自己的输出。
-#### Installation instructions
+#### 安装说明
-The repo currently provides jo for Fedora 26, 27, and Rawhide, and for EPEL 6 and 7. To install jo, use these commands:
+目前,仓库为 Fedora 26、27 和 Rawhide 以及 EPEL 6 和 7 提供 jo。要安装 jo,请使用以下命令:
```
sudo dnf copr enable ganto/jo
sudo dnf install jo
@@ -74,7 +72,7 @@ sudo dnf install jo
via: https://fedoramagazine.org/4-try-copr-april-2018/
作者:[Dominik Turecek][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
From 0e41d0342b38ef2bf45abcad1fe96bdfa86092c0 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 24 Apr 2018 08:41:51 +0800
Subject: [PATCH 073/220] translated
---
...l new projects to try in COPR for April.md | 91 -------------------
1 file changed, 91 deletions(-)
delete mode 100644 sources/tech/20180416 4 cool new projects to try in COPR for April.md
diff --git a/sources/tech/20180416 4 cool new projects to try in COPR for April.md b/sources/tech/20180416 4 cool new projects to try in COPR for April.md
deleted file mode 100644
index 518020ccc7..0000000000
--- a/sources/tech/20180416 4 cool new projects to try in COPR for April.md
+++ /dev/null
@@ -1,91 +0,0 @@
-4 月 COPR 中 4 个新的酷项目
-======
-
-
-COPR 是一个人仓库[收集][1],它不在 Fedora 中运行。某些软件不符合易于打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是免费且开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持或项目签名。但是,它可能是尝试新软件或实验软件的一种很好的方式。
-
-这是 COPR 中一系列新的和有趣的项目。
-
-### Anki
-
-[Anki][2] 是一个程序,它使用间隔重复帮助你学习和记忆事物。你可以创建卡片并将其组织成卡组,或下载[现有卡组][3]。卡片的一面有问题,另一面有答案。它可能还包括图像、视频或音频。你对每张卡的回答好坏决定了你将来看到特定卡的频率。
-
-虽然 Anki 已经在 Fedora 中,但这个仓库提供了一个更新的版本。
-
-![][4]
-
-#### 安装说明
-
-仓库目前为 Fedora 27、28 和 Rawhide 提供 Anki。要安装 Anki,请使用以下命令:
-```
-sudo dnf copr enable thomasfedb/anki
-sudo dnf install anki
-
-```
-
-### Fd
-
-[Fd][5] 是一个命令行工具,它是简单而稍快的替代 [find][6] 的方法。它可以并行地查找项目。fd 也使用彩色输出,并默认忽略隐藏文件和 .gitignore 中指定模式的文件。
-
-#### 安装说明
-
-仓库目前为 Fedora 26、27、28 和 Rawhide 提供 fd。要安装 fd,请使用以下命令:
-```
-sudo dnf copr enable keefle/fd
-sudo dnf install fd
-
-```
-
-### KeePass
-
-[KeePass][7]是一个密码管理器。它将所有密码保存在一个由主密钥或密钥文件锁定的端对端加密数据库中。密码可以组织成组并由程序的内置生成器生成。其他功能包括自动输入,它可以为选定的表单输入用户名和密码。
-
-虽然 KeePass 已经在 Fedora 中,但这个仓库提供了最新版本。
-
-![][8]
-
-#### 安装说明
-
-仓库目前为 Fedora 26 和 27 提供 KeePass。要安装 KeePass,请使用以下命令:
-```
-sudo dnf copr enable mavit/keepass
-sudo dnf install keepass
-
-```
-
-### jo
-
-[Jo][9] 是一个将输入转换为 JSON 字符串或数组的命令行工具。它有一个简单的[语法][10]并识别布尔值、字符串和数字。另外,jo 支持嵌套并且可以嵌套自己的输出。
-
-#### 安装说明
-
-目前,仓库为 Fedora 26、27 和 Rawhide 以及 EPEL 6 和 7 提供 jo。要安装 jo,请使用以下命令:
-```
-sudo dnf copr enable ganto/jo
-sudo dnf install jo
-
-```
-
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/4-try-copr-april-2018/
-
-作者:[Dominik Turecek][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fedoramagazine.org
-[1]:https://copr.fedorainfracloud.org/
-[2]:https://apps.ankiweb.net/
-[3]:https://ankiweb.net/shared/decks/
-[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/anki.png
-[5]:https://github.com/sharkdp/fd
-[6]:https://www.gnu.org/software/findutils/
-[7]:https://keepass.info/
-[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/keepass.png
-[9]:https://github.com/jpmens/jo
-[10]:https://github.com/jpmens/jo/blob/master/jo.md
From 53ba69a1d4ba780f174dcd24ef0d2be40c67666d Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 24 Apr 2018 08:43:28 +0800
Subject: [PATCH 074/220] translated
---
...l new projects to try in COPR for April.md | 91 +++++++++++++++++++
1 file changed, 91 insertions(+)
create mode 100644 translated/tech/20180416 4 cool new projects to try in COPR for April.md
diff --git a/translated/tech/20180416 4 cool new projects to try in COPR for April.md b/translated/tech/20180416 4 cool new projects to try in COPR for April.md
new file mode 100644
index 0000000000..518020ccc7
--- /dev/null
+++ b/translated/tech/20180416 4 cool new projects to try in COPR for April.md
@@ -0,0 +1,91 @@
+4 月 COPR 中 4 个新的酷项目
+======
+
+
+COPR 是个人软件仓库的[集合][1],其中的软件不由 Fedora 提供。有些软件不符合易于打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由且开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经该项目签名。但是,它是尝试新软件或实验性软件的一种很好的方式。
+
+这是 COPR 中一系列新的和有趣的项目。
+
+### Anki
+
+[Anki][2] 是一个程序,它使用间隔重复帮助你学习和记忆事物。你可以创建卡片并将其组织成卡组,或下载[现有卡组][3]。卡片的一面有问题,另一面有答案。它可能还包括图像、视频或音频。你对每张卡的回答好坏决定了你将来看到特定卡的频率。
+
+虽然 Anki 已经在 Fedora 中,但这个仓库提供了一个更新的版本。
+
+![][4]
+
+#### 安装说明
+
+仓库目前为 Fedora 27、28 和 Rawhide 提供 Anki。要安装 Anki,请使用以下命令:
+```
+sudo dnf copr enable thomasfedb/anki
+sudo dnf install anki
+
+```
+
+### Fd
+
+[Fd][5] 是一个命令行工具,它是 [find][6] 的一个简单而稍快的替代品。它可以对找到的项目并行地执行命令。fd 还使用彩色的终端输出,并默认忽略隐藏文件以及 .gitignore 中指定模式的文件。
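+
+(LCTT 译注:fd 的几个典型用法如下,搜索词仅为示例:)
+
+```
+# 在当前目录下递归查找文件名匹配 passwd 的文件
+fd passwd
+
+# 在 /etc 下查找扩展名为 conf 的文件
+fd -e conf . /etc
+```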
+
+#### 安装说明
+
+仓库目前为 Fedora 26、27、28 和 Rawhide 提供 fd。要安装 fd,请使用以下命令:
+```
+sudo dnf copr enable keefle/fd
+sudo dnf install fd
+
+```
+
+### KeePass
+
+[KeePass][7]是一个密码管理器。它将所有密码保存在一个由主密钥或密钥文件锁定的端对端加密数据库中。密码可以组织成组并由程序的内置生成器生成。其他功能包括自动输入,它可以为选定的表单输入用户名和密码。
+
+虽然 KeePass 已经在 Fedora 中,但这个仓库提供了最新版本。
+
+![][8]
+
+#### 安装说明
+
+仓库目前为 Fedora 26 和 27 提供 KeePass。要安装 KeePass,请使用以下命令:
+```
+sudo dnf copr enable mavit/keepass
+sudo dnf install keepass
+
+```
+
+### jo
+
+[Jo][9] 是一个将输入转换为 JSON 字符串或数组的命令行工具。它有一个简单的[语法][10]并识别布尔值、字符串和数字。另外,jo 支持嵌套并且可以嵌套自己的输出。
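+
+(LCTT 译注:jo 的几个简单用法如下,键名均为示例:)
+
+```
+$ jo name=jo n=17 parser=false
+{"name":"jo","n":17,"parser":false}
+
+$ jo -a 1 2 3
+[1,2,3]
+
+$ jo point=$(jo x=10 y=20)
+{"point":{"x":10,"y":20}}
+```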
+
+#### 安装说明
+
+目前,仓库为 Fedora 26、27 和 Rawhide 以及 EPEL 6 和 7 提供 jo。要安装 jo,请使用以下命令:
+```
+sudo dnf copr enable ganto/jo
+sudo dnf install jo
+
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-try-copr-april-2018/
+
+作者:[Dominik Turecek][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org
+[1]:https://copr.fedorainfracloud.org/
+[2]:https://apps.ankiweb.net/
+[3]:https://ankiweb.net/shared/decks/
+[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/anki.png
+[5]:https://github.com/sharkdp/fd
+[6]:https://www.gnu.org/software/findutils/
+[7]:https://keepass.info/
+[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/keepass.png
+[9]:https://github.com/jpmens/jo
+[10]:https://github.com/jpmens/jo/blob/master/jo.md
From c9200edcfc2b90950f512f4937810c6011c48666 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 24 Apr 2018 08:45:27 +0800
Subject: [PATCH 075/220] translating
---
.../20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
index eb940ed8fe..da8a95e5b2 100644
--- a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
+++ b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
@@ -1,3 +1,5 @@
+translating----geekpi
+
BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI
===============================================
From 71613156e6bd62249448a8f9309a6721c7a63bf4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 24 Apr 2018 09:35:50 +0800
Subject: [PATCH 076/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20reset?=
=?UTF-8?q?=20a=20root=20password=20on=20Fedora?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... How to reset a root password on Fedora.md | 91 +++++++++++++++++++
1 file changed, 91 insertions(+)
create mode 100644 sources/tech/20180423 How to reset a root password on Fedora.md
diff --git a/sources/tech/20180423 How to reset a root password on Fedora.md b/sources/tech/20180423 How to reset a root password on Fedora.md
new file mode 100644
index 0000000000..9cfe51adf7
--- /dev/null
+++ b/sources/tech/20180423 How to reset a root password on Fedora.md
@@ -0,0 +1,91 @@
+How to reset a root password on Fedora
+======
+
+
+A system administrator can easily reset a password for a user that has forgotten their password. But what happens if the system administrator forgets the root password? This guide will show you how to reset a lost or forgotten root password. Note that to reset the root password, you need to have physical access to the machine in order to reboot and to access GRUB settings. Additionally, if the system is encrypted, you will also need to know the LUKS passphrase.
+
+### Edit the GRUB settings
+
+First you need to interrupt the boot process, so you'll need to turn on the system, or restart it if it's already powered on. The first step is tricky because the GRUB menu tends to flash by very quickly on the screen.
+
+Press **E** on your keyboard when you see the GRUB menu:
+
+![][1]
+
+After pressing **E**, the following screen is shown:
+
+![][2]
+
+Use your arrow keys to move to the **linux16** line.
+
+![][3]
+
+Using your **del** or **backspace** key, remove **rhgb quiet** and replace it with the following.
+```
+rd.break enforcing=0
+
+```
+
+![][4]
+
+After editing the lines, press **Ctrl-x** to start the system. If the system is encrypted, you will be prompted for the LUKS passphrase here.
+
+**Note:** Setting enforcing=0 avoids performing a complete SELinux relabeling of the system. Once the system is rebooted, you must restore the correct SELinux context for the /etc/shadow file (this is explained a little further on in this guide).
+
+### Mounting the filesystem
+
+The system will now be in emergency mode. Remount the hard drive with read-write access:
+```
+# mount -o remount,rw /sysroot
+
+```
+
+### Change the password
+
+Run chroot to access the system.
+```
+# chroot /sysroot
+
+```
+
+You can now change the root password.
+```
+# passwd
+
+```
+
+Type the new root password twice when prompted. If you are successful, you should see a message that **all authentication tokens updated successfully.**
+
+Type **exit** twice to reboot the system.
+
+Log in as root and restore the SELinux label to the /etc/shadow file.
+```
+# restorecon -v /etc/shadow
+
+```
+
+Turn SELinux back to enforcing mode.
+```
+# setenforce 1
+
+```
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/reset-root-password-fedora/
+
+作者:[Curt Warfield][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/rcurtiswarfield/
+[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png
+[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png
+[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png
+[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png
From a59d65c91a80c61150adc6aaccf2f71c882a2347 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 24 Apr 2018 09:37:38 +0800
Subject: [PATCH 077/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Remove?=
=?UTF-8?q?=20Password=20From=20A=20PDF=20File=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...emove Password From A PDF File in Linux.md | 220 ++++++++++++++++++
1 file changed, 220 insertions(+)
create mode 100644 sources/tech/20180420 How To Remove Password From A PDF File in Linux.md
diff --git a/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md b/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md
new file mode 100644
index 0000000000..0e4318c858
--- /dev/null
+++ b/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md
@@ -0,0 +1,220 @@
+How To Remove Password From A PDF File in Linux
+======
+
+
+Today I happened to share a password-protected PDF file with one of my friends. I knew the password of that PDF file, but I didn't want to disclose it; instead, I just wanted to remove the password and send the file to him. So I started looking for easy ways to remove the password protection from PDF files on the Internet. After a quick Google search, I came up with four methods to remove the password from a PDF file in Linux. The funny thing is I had already done this a few years ago and had almost forgotten it. If you're wondering how to remove the password from a PDF file in Linux, read on! It is not that difficult.
+
+### Remove Password From A PDF File in Linux
+
+**Method 1 – Using Qpdf**
+
+**Qpdf** is a PDF transformation tool that can encrypt and decrypt PDF files and convert PDF files into other, equivalent PDF files. Qpdf is available in the default repositories of most Linux distributions, so you can install it using the default package manager.
+
+For example, Qpdf can be installed on Arch Linux and its variants using [**pacman**][1] as shown below.
+```
+$ sudo pacman -S qpdf
+
+```
+
+On Debian, Ubuntu, Linux Mint:
+```
+$ sudo apt-get install qpdf
+
+```
+
+Now let us remove the password from a PDF file using Qpdf.
+
+I have a password-protected PDF file named **“secure.pdf”**. Whenever I open this file, it prompts me to enter the password to display its contents.
+
+![][3]
+
+I know the password of the above PDF file, but I don't want to share it with anyone. So I am simply going to remove the password using the Qpdf utility with the following command.
+```
+$ qpdf --password='123456' --decrypt secure.pdf output.pdf
+
+```
+
+Quite easy, isn’t it? Yes, it is! Here, **123456** is the password of the **secure.pdf** file. Replace the password with your own.
+
+**Method 2 – Using Pdftk**
+
+**Pdftk** is yet another great tool for manipulating PDF documents. Pdftk can do almost all sorts of PDF operations, such as the following (a couple of quick examples appear after the list):
+
+ * Encrypt and decrypt pdf files.
+ * Merge PDF documents.
+ * Collate PDF page Scans.
+ * Split PDF pages.
+ * Rotate PDF files or pages.
+ * Fill PDF forms with X/FDF data and/or flatten forms.
+ * Generate FDF data stencils from PDF forms.
+ * Apply a background watermark or a foreground stamp.
+ * Report PDF metrics, bookmarks and metadata.
+ * Add/update PDF bookmarks or metadata.
+ * Attach files to PDF pages or the PDF document.
+ * Unpack PDF attachments.
+ * Burst a PDF file into single pages.
+ * Compress and decompress page streams.
+ * Repair corrupted PDF file.
+
+
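+To give you a flavor of these operations, here are a couple of illustrative commands (the file names are hypothetical):
+```
+$ pdftk file1.pdf file2.pdf cat output merged.pdf   # merge two PDFs into one
+$ pdftk document.pdf burst                          # split a PDF into single pages
+```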
+
+Pdftk is available in the AUR, so you can install it using any AUR helper program on Arch Linux and its derivatives.
+
+Using [**Pacaur**][4]:
+```
+$ pacaur -S pdftk
+
+```
+
+Using [**Packer**][5]:
+```
+$ packer -S pdftk
+
+```
+
+Using [**Trizen**][6]:
+```
+$ trizen -S pdftk
+
+```
+
+Using [**Yay**][7]:
+```
+$ yay -S pdftk
+
+```
+
+Using [**Yaourt**][8]:
+```
+$ yaourt -S pdftk
+
+```
+
+On Debian, Ubuntu, Linux Mint, run:
+```
+$ sudo apt-get install pdftk
+
+```
+
+On CentOS, Fedora, Red Hat:
+
+First, install the EPEL repository:
+```
+$ sudo yum install epel-release
+
+```
+
+Or
+```
+$ sudo dnf install epel-release
+
+```
+
+Then install the Pdftk application using the command:
+```
+$ sudo yum install pdftk
+
+```
+
+Or
+```
+$ sudo dnf install pdftk
+
+```
+
+Once Pdftk is installed, you can remove the password from a PDF document using the command:
+```
+$ pdftk secure.pdf input_pw 123456 output output.pdf
+
+```
+
+Replace ‘123456’ with your actual password. This command decrypts the “secure.pdf” file and creates an equivalent, non-password-protected file named “output.pdf”.
+
+**Method 3 – Using Poppler**
+
+**Poppler** is a PDF rendering library based on the xpdf-3.0 code base. It contains the following set of command line utilities for manipulating PDF documents.
+
+ * **pdfdetach** – lists or extracts embedded files.
+ * **pdffonts** – font analyzer.
+ * **pdfimages** – image extractor.
+ * **pdfinfo** – document information.
+ * **pdfseparate** – page extraction tool.
+ * **pdfsig** – verifies digital signatures.
+ * **pdftocairo** – PDF to PNG/JPEG/PDF/PS/EPS/SVG converter using Cairo.
+ * **pdftohtml** – PDF to HTML converter.
+ * **pdftoppm** – PDF to PPM/PNG/JPEG image converter.
+ * **pdftops** – PDF to PostScript (PS) converter.
+ * **pdftotext** – text extraction.
+ * **pdfunite** – document merging tool.
+
+
+
+For the purpose of this guide, we only use the “pdftops” utility.
+
+To install Poppler on Arch Linux based distributions, run:
+```
+$ sudo pacman -S poppler
+
+```
+
+On Debian, Ubuntu, Linux Mint:
+```
+$ sudo apt-get install poppler-utils
+
+```
+
+On RHEL, CentOS, Fedora:
+```
+$ sudo yum install poppler-utils
+
+```
+
+Once Poppler is installed, run the following command to decrypt the password-protected PDF file and create a new, equivalent file named output.pdf.
+```
+$ pdftops -upw 123456 secure.pdf output.pdf
+
+```
+
+Again, replace ‘123456’ with your PDF password.
+
+As you might have noticed, in all of the above methods we just converted the password-protected PDF file named “secure.pdf” to another, equivalent PDF file named “output.pdf”. Technically speaking, we didn't really remove the password from the source file; instead, we decrypted it and saved it as another equivalent PDF file without password protection.
+
+**Method 4 – Print to a file**
+
+This is the easiest of all the methods above. You can use your existing PDF viewer, such as the Atril document viewer or Evince, and print the password-protected PDF file to another file.
+
+Open the password-protected file in your PDF viewer application, go to **File -> Print**, and save the PDF file to any location of your choice.
+
+![][9]
+
+And that's all. I hope this was useful. Do you know or use any other methods to remove password protection from PDF files? Let us know in the comments section below.
+
+More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-remove-password-from-a-pdf-file-in-linux/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/getting-started-pacman/
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/Remove-Password-From-A-PDF-File-1.png
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
+[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[8]:https://www.ostechnix.com/install-yaourt-arch-linux/
From 1161b0f80ee1fcfdfe6445240f3c125d3909698b Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 24 Apr 2018 09:40:43 +0800
Subject: [PATCH 078/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Managing=20virtua?=
=?UTF-8?q?l=20environments=20with=20Vagrant?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...aging virtual environments with Vagrant.md | 488 ++++++++++++++++++
1 file changed, 488 insertions(+)
create mode 100644 sources/tech/20180423 Managing virtual environments with Vagrant.md
diff --git a/sources/tech/20180423 Managing virtual environments with Vagrant.md b/sources/tech/20180423 Managing virtual environments with Vagrant.md
new file mode 100644
index 0000000000..d6be55a9de
--- /dev/null
+++ b/sources/tech/20180423 Managing virtual environments with Vagrant.md
@@ -0,0 +1,488 @@
+Managing virtual environments with Vagrant
+======
+
+
+Vagrant is a tool that offers a simple, easy-to-use command-line client for managing virtual environments. I started using it because it made it easier for me to develop websites, test solutions, and learn new things.
+
+According to [Vagrant's website][1], "Vagrant lowers development environment setup time, increases production parity, and makes the 'works on my machine' excuse a relic of the past."
+
+There is a lot Vagrant can do, and you can learn a bit more background in Opensource.com's [Vagrant open source resources article][2].
+
+In this getting-started guide, I'll demonstrate how to use Vagrant to:
+
+ 1. Create and configure a VirtualBox virtual machine (VM)
+ 2. Run post-deployment configuration shell scripts and applications
+
+
+
+Sounds simple, and it is. Vagrant's power comes from having a consistent workflow for deploying and configuring machines regardless of platform or operating system.
+
+We'll start by using VirtualBox as a **provider**, setting up an Ubuntu 16.04 **box**, and applying a few shell commands as the **provisioner**. I'll refer to the physical machine (e.g., a laptop or desktop) as the host machine and the Vagrant VM as the guest.
+
+In this tutorial, we'll put together a [Vagrantfile][3] and offer periodic checkpoints to make sure our files look the same. We'll cover the following introductory and advanced topics:
+
+Introductory topics:
+
+ * Installing Vagrant
+ * Choosing a Vagrant box
+ * Understanding the Vagrantfile
+ * Getting the VM running
+ * Using provisioners
+
+
+
+Advanced topics:
+
+ * Networking
+ * Syncing folders
+ * Deploying multiple machines
+ * Making sure everything works
+
+
+
+It looks like a lot, but it will all fit together nicely once we are finished.
+
+### Installing Vagrant
+
+First, we'll navigate to [Vagrant's][4] and [VirtualBox's][5] download pages to install the latest versions of each.
+
+We can enter the following commands to ensure the latest versions of the applications are installed and ready to use.
+
+**Vagrant:**
+```
+# vagrant --version
+Vagrant 2.0.3
+```
+
+**VirtualBox:**
+```
+# VBoxManage --version
+5.2.8r121009
+```
+
+### Choosing a Vagrant box
+
+Picking a Vagrant box is similar to picking an image for a server. At the base level, we choose which operating system (OS) we want to use. Some boxes go further and will have additional software (such as the Puppet or Chef client) already installed.
+
+The go-to online repository for boxes is [Vagrant Cloud][6]; it offers a cornucopia of Vagrant boxes for multiple providers. In this tutorial, we'll be using the Ubuntu Xenial Xerus 16.04 LTS daily build.
+
+### Understanding the Vagrantfile
+
+Think of the Vagrantfile as the configuration file for an environment. It describes the Vagrant environment with regard to how to build and configure the VirtualBox VMs.
+
+We need to create an empty project directory to work from, then initialize a Vagrant environment from that directory with this command:
+```
+# vagrant init ubuntu/xenial64
+
+```
+
+This only creates the Vagrantfile; it doesn't bring up the Vagrant box.
+
+The Vagrantfile is well-documented with a lot of guidance on how to use it. We can generate a minimized Vagrantfile with the `--minimal` flag.
+```
+# vagrant init --minimal ubuntu/xenial64
+
+```
+
+The resulting file will look like this:
+```
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
+end
+```
+
+We will talk more about the Vagrantfile later, but for now, let's get this box up and running.
+
+### Getting the VM running
+
+Let's issue the following command from our project directory:
+```
+# vagrant up
+
+```
+
+It takes a bit of time to execute `vagrant up` the first time because it downloads the box to your machine. It is much faster on subsequent runs because it reuses the same downloaded box.
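+
+If you want to see which boxes have already been downloaded and cached locally, `vagrant box list` will show them; the exact version string will vary with when you downloaded the box:
+```
+# vagrant box list
+ubuntu/xenial64 (virtualbox, 20180420.0.0)
+```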
+
+Once the VM is up and running, we can `ssh` into our single machine by issuing the following command in our project directory:
+```
+# vagrant ssh
+
+```
+
+That's it! From here we should be able to log onto our VM and start working with it.
+
+### Using provisioners
+
+Before we move on, let's review a bit. So far, we've picked an image and gotten the server running. For the most part, the server is unconfigured and doesn't have any of the software we might want.
+
+Provisioners provide a way to use tools such as Ansible, Puppet, Chef, and even shell scripts to configure a server after deployment.
+
+An example of using the shell provisioner can be found in a default Vagrantfile. In this example, we'll run commands to update apt and install Apache2 on the server.
+```
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update
+    apt-get install -y apache2
+  SHELL
+```
+
+If we want to use an Ansible playbook, the configuration section would look like this:
+```
+config.vm.provision "ansible" do |ansible|
+  ansible.playbook = "playbook.yml"
+end
+```
+
+A neat thing is we can run only the provisioning part of the Vagrantfile by issuing the `provision` subcommand. This is great for testing out scripts or configuration management plays without having to re-build the VM each time.
+
+#### Vagrantfile checkpoint
+
+Our minimal Vagrantfile should look like this:
+```
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
+
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update
+    apt-get install -y apache2
+  SHELL
+end
+```
+
+After adding the provisioning section, we need to run this provisioning subcommand:
+```
+# vagrant provision
+
+```
+
+Next, we'll continue to build on our Vagrantfile, touching on some more advanced topics to build a foundation for anyone who wants to dig in further.
+
+### Networking
+
+In this section, we'll add an additional IP address on VirtualBox's `vboxnet0` network. This will allow us to access the machine via the `192.168.33.0/24` network.
+
+Adding the following line to the Vagrantfile will configure the machine to have an additional IP on the `192.168.33.0/24` network. This line is also used as an example in the default Vagrantfile.
+```
+config.vm.network "private_network", ip: "192.168.33.10"
+```
+
+#### Vagrantfile checkpoint
+
+For those following along, here is where our working Vagrantfile stands:
+```
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
+  config.vm.network "private_network", ip: "192.168.33.10"
+
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update
+    apt-get install -y apache2
+  SHELL
+end
+```
+
+Next, we need to reload our configuration to reconfigure our machine with the new interface and IP. This command will shut down the VM, reconfigure the VirtualBox VM with the new IP address, and bring the VM back up.
+```
+# vagrant reload
+
+```
+
+When it comes back up, our machine should have two IP addresses.
+
+### Syncing folders
+
+Synced folders are what got me into using Vagrant. They allowed me to work on my host machine, using my tools, and at the same time have the files available to the web server or application. It made my workflow much easier.
+
+By default, the project directory on the host machine is mounted in the guest machine as `/vagrant`. This worked for me in the beginning, but eventually, I wanted to customize where this directory was mounted.
+
+In our example, we are specifying that the `html` directory within our project directory should be mounted as `/var/www/html` in the guest, with user and group ownership set to `root`.
+```
+config.vm.synced_folder "./html", "/var/www/html",
+  owner: "root", group: "root"
+```
+
+One thing to note: If you are using a synced folder as a web server document root, you will need to disable `sendfile`, or you might run into an issue where it looks like the files are not updating.
+
+Updating your web server's configuration is out of scope for this article, but here are the directives you will want to update.
+
+In Apache:
+```
+EnableSendfile Off
+
+```
+
+In Nginx:
+```
+sendfile off;
+
+```
+
+#### Vagrantfile checkpoint
+
+After adding our synced folder configuration, our Vagrantfile will look like this:
+```
+Vagrant.configure("2") do |config|
+  config.vm.box = "ubuntu/xenial64"
+  config.vm.network "private_network", ip: "192.168.33.10"
+
+  config.vm.synced_folder "./html", "/var/www/html",
+    owner: "root", group: "root"
+
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update
+    apt-get install -y apache2
+  SHELL
+end
+```
+
+We need to reload our machine to make the new configuration active.
+```
+# vagrant reload
+
+```
+
+### Deploying multiple machines
+
+We sometimes refer to the project directory as an "environment," and one machine is not much of an environment. This last section extends our Vagrantfile to deploy two machines.
+
+To create two machines, we need to enclose the definition of a single machine inside a `vm.define` block. The rest of the configuration is exactly the same.
+
+Here is an example of a server definition within a `define` block.
+```
+Vagrant.configure("2") do |config|
+
+  config.vm.define "web" do |web|
+    web.vm.hostname = "web"
+    web.vm.box = "ubuntu/xenial64"
+    web.vm.network "private_network", ip: "192.168.33.10"
+
+    web.vm.synced_folder "./html", "/var/www/html",
+      owner: "root", group: "root"
+
+    web.vm.provision "shell", inline: <<-SHELL
+      apt-get update
+      apt-get install -y apache2
+    SHELL
+  end
+
+end
+```
+
+Notice in the `define` block, our variable is called `"web"` and it is carried through the block to reference each configuration method. We'll use the same name to access it later.
+
+In this next example, we'll add a second machine called `"db"` to our configuration. Where we used `"web"` in the first block, we'll use `"db"` to reference the second machine. We'll also update the IP address on the `private_network` so the machines can communicate with each other.
+```
+Vagrant.configure("2") do |config|
+
+  config.vm.define "web" do |web|
+    web.vm.hostname = "web"
+    web.vm.box = "ubuntu/xenial64"
+    web.vm.network "private_network", ip: "192.168.33.10"
+
+    web.vm.synced_folder "./html", "/var/www/html",
+      owner: "root", group: "root"
+
+    web.vm.provision "shell", inline: <<-SHELL
+      apt-get update
+      apt-get install -y apache2
+    SHELL
+  end
+
+  config.vm.define "db" do |db|
+    db.vm.hostname = "db"
+    db.vm.box = "ubuntu/xenial64"
+    db.vm.network "private_network", ip: "192.168.33.20"
+
+    db.vm.synced_folder "./html", "/var/www/html",
+      owner: "root", group: "root"
+
+    db.vm.provision "shell", inline: <<-SHELL
+      apt-get update
+      apt-get install -y apache2
+    SHELL
+  end
+
+end
+```
+
+#### Completed Vagrantfile checkpoint
+
+In our final Vagrantfile, we'll install the MySQL server on the second machine, update the IP address, and remove the synced folder configuration from the second machine.
+```
+Vagrant.configure("2") do |config|
+
+  config.vm.define "web" do |web|
+    web.vm.hostname = "web"
+    web.vm.box = "ubuntu/xenial64"
+    web.vm.network "private_network", ip: "192.168.33.10"
+
+    web.vm.synced_folder "./html", "/var/www/html",
+      owner: "root", group: "root"
+
+    web.vm.provision "shell", inline: <<-SHELL
+      apt-get update
+      apt-get install -y apache2
+    SHELL
+  end
+
+  config.vm.define "db" do |db|
+    db.vm.hostname = "db"
+    db.vm.box = "ubuntu/xenial64"
+    db.vm.network "private_network", ip: "192.168.33.20"
+
+    db.vm.provision "shell", inline: <<-SHELL
+      export DEBIAN_FRONTEND="noninteractive"
+      apt-get update
+      apt-get install -y mysql-server
+    SHELL
+  end
+
+end
+```
+
+### Making sure everything works
+
+Now we have a completed Vagrantfile. Let's introduce one more Vagrant command to make sure everything works.
+
+Let's destroy our machine and build it brand new.
+
+The following command will destroy our previous Vagrant machines but keep the box we downloaded earlier.
+```
+# vagrant destroy --force
+
+```
+
+Now we need to bring the environment back up.
+```
+# vagrant up
+
+```
+
+We can ssh into the machines using the `vagrant ssh` command:
+```
+# vagrant ssh web
+
+```
+
+or
+```
+# vagrant ssh db
+
+```
+
+You should now have a working Vagrantfile that you can expand upon and use as a base for learning more. Vagrant is a powerful tool for testing, developing, and learning new things. I encourage you to keep adding to it and exploring the options it offers.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/vagrant-guide-get-started
+
+作者:[Alex Juarez][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/mralexjuarez
+[1]:https://www.vagrantup.com/intro/index.html
+[2]:https://opensource.com/resources/vagrant
+[3]:https://www.vagrantup.com/docs/vagrantfile/
+[4]:https://www.vagrantup.com/downloads.html
+[5]:https://www.virtualbox.org/wiki/Downloads
+[6]:https://vagrantcloud.com/
From 91f79cfe5b8893e0f93089de8a4a2e1b87db8f9f Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 24 Apr 2018 09:43:53 +0800
Subject: [PATCH 079/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Breach=20detectio?=
=?UTF-8?q?n=20with=20Linux=20filesystem=20forensics=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...x filesystem forensics - Opensource.com.md | 342 ++++++++++++++++++
1 file changed, 342 insertions(+)
create mode 100644 sources/tech/20180423 Breach detection with Linux filesystem forensics - Opensource.com.md
diff --git a/sources/tech/20180423 Breach detection with Linux filesystem forensics - Opensource.com.md b/sources/tech/20180423 Breach detection with Linux filesystem forensics - Opensource.com.md
new file mode 100644
index 0000000000..b622bbbd00
--- /dev/null
+++ b/sources/tech/20180423 Breach detection with Linux filesystem forensics - Opensource.com.md
@@ -0,0 +1,342 @@
+Breach detection with Linux filesystem forensics
+======
+
+
+Forensic analysis of a Linux disk image is often part of incident response to determine if a breach has occurred. Linux forensics is a different and fascinating world compared to Microsoft Windows forensics. In this article, I will analyze a disk image from a potentially compromised Linux system in order to determine the who, what, when, where, why, and how of the incident and create event and filesystem timelines. Finally, I will extract artifacts of interest from the disk image.
+
+In this tutorial, we will use some new tools and some old tools in creative, new ways to perform a forensic analysis of a disk image.
+
+### The scenario
+
+Premiere Fabrication Engineering (PFE) suspects there has been an incident or compromise involving the company's main server, named pfe1. They believe the server may have been compromised sometime between the first and the last of March. They have engaged my services as a forensic examiner to investigate whether the server was compromised and involved in an incident. The investigation will determine the who, what, when, where, why, and how behind the possible compromise. Additionally, PFE has requested my recommendations for further security measures for their servers.
+
+### The disk image
+
+To conduct the forensic analysis of the server, I ask PFE to send me a forensic disk image of pfe1 on a USB drive. They agree and say, "the USB is in the mail." The USB drive arrives, and I start to examine its contents. To conduct the forensic analysis, I use a virtual machine (VM) running the SANS SIFT distribution. The [SIFT Workstation][1] is a group of free and open source incident response and forensic tools designed to perform detailed digital forensic examinations in a variety of settings. SIFT has a wide array of forensic tools, and if it doesn't have a tool I want, I can install one without much difficulty since it is an Ubuntu-based distribution.
+
+Upon examination, I find the USB drive doesn't contain a disk image, but rather copies of the VMware ESX host files, i.e., VMDK files from PFE's hybrid cloud. This was not what I was expecting. I have several options:
+
+ 1. I can contact PFE and be more explicit about what I am expecting from them. Early in an engagement like this, it might not be the best thing to do.
+ 2. I can load the VMDK files into a virtualization tool such as VMPlayer and run it as a live VM using its native Linux programs to perform forensic analysis. There are at least three reasons not to do this. First, timestamps on files and file contents will be altered when running the VMDK files as a live system. Second, since the server is thought to be compromised, every file and program of the VMDK filesystems must be considered compromised. Third, using the native programs on a compromised system to do a forensic analysis may have unforeseen consequences.
+ 3. To analyze the VMDK files, I could use the libvmdk-utils package that contains tools to access data stored in VMDK files.
+ 4. However, a better approach is to convert the VMDK file format into RAW format. This will make it easier to run the different tools in the SIFT distribution on the files in the disk image.
+
+
+
+To convert from VMDK to RAW format, I use the [qemu-img][2] utility, which allows creating, converting, and modifying images offline. The following figure shows the command to convert the VMDK format into a RAW format.
+
+![Converting a VMDK file to RAW format][4]
+
+Fig. 1: Converting a VMDK file to RAW format
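+
+For reference, the conversion command in Fig. 1 takes roughly this form (the file names here are assumed from the scenario):
+```
+$ qemu-img convert -f vmdk -O raw pfe1.vmdk pfe1.raw
+```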
+
+Next, I need to list the partition table from the disk image and obtain information about where each partition starts (sectors) using the [mmls][5] utility. This utility displays the layout of the partitions in a volume system, including partition tables and disk labels. Then I use the starting sector and query the details associated with the filesystem using the [fsstat][6] utility, which displays the details associated with a filesystem. The figures below show the `mmls` and `fsstat` commands in operation.
+
+![mmls command output][8]
+
+Fig. 2: `mmls` command output
+
+I learn several interesting things from the `mmls` output: A Linux primary partition starts at sector 2048 and is approximately 8 gigabytes in size. A DOS partition, probably the boot partition, is approximately 8 megabytes in size. Finally, there is a swap partition of approximately 8 gigabytes.
+
+![fsstat command output][10]
+
+Fig. 3: `fsstat` command output
+
+Running `fsstat` tells me many useful things about the partition: the type of filesystem, the last time data was written to the filesystem, whether the filesystem was cleanly unmounted, and where the filesystem was mounted.
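+
+For reference, the commands behind Figs. 2 and 3 look roughly like this; the offset 2048 is the starting sector of the Linux partition reported by `mmls`:
+```
+$ mmls pfe1.raw
+$ fsstat -o 2048 pfe1.raw
+```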
+
+I'm ready to mount the partition and start the analysis. To do this, I need to read the partition tables on the raw image specified and create device maps over partition segments detected. I could do this by hand with the information from `mmls` and `fsstat`—or I could use [kpartx][11] to do it for me.
+
+![Using kpartx to create loopback devices][13]
+
+Fig. 4: Using kpartx to create loopback devices
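+
+The command shown in Fig. 4 is along these lines:
+```
+$ kpartx -r -a -v pfe1.raw
+```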
+
+I use options to create a read-only mapping (`-r`), add partition mappings (`-a`), and give verbose output (`-v`). `loop0p1` is the name of a device file under `/dev/mapper` I can use to access the partition. To mount it, I run:
+```
+$ mount -o ro /dev/mapper/loop0p1 /mnt
+```
+
+Note that I'm mounting the partition as read-only (`-o ro`) to prevent accidental contamination.
+
+After mounting the disk, I start my forensic analysis and investigation by creating a timeline. Some forensic examiners don't believe in creating a timeline. Instead, once they have a mounted partition, they creep through the filesystem looking for artifacts that might be relevant to the investigation. I label these forensic examiners "creepers." While this is one way to forensically investigate, it is far from repeatable, is prone to error, and may miss valuable evidence.
+
+I believe creating a timeline is a crucial step because it captures, in a human-readable format, information about files that were modified, accessed, changed, and created, known as MAC (modified, accessed, changed) time evidence. This activity helps identify the specific time and order in which events took place.
+
+### Notes about Linux filesystems
+
+Linux filesystems like ext2 and ext3 don't have timestamps for a file's creation/birthtime. The creation timestamp was introduced in ext4. The book [Forensic Discovery][14] (1st edition) by Dan Farmer and Wietse Venema outlines the different timestamps.
+
+ * **Last modification time:** For directories, this is the last time an entry was added, renamed, or removed. For other file types, it's the last time the file was written to.
+ * **Last access (read) time:** For directories, this is the last time it was searched. For other file types, it's the last time the file was read.
+ * **Last status change:** Examples of status changes are change of owner, change of access permission, change of hard link count, or an explicit change of any of the MAC times.
+ * **Deletion time:** ext2 and ext3 record the time a file was deleted in the `dtime` timestamp, but not all tools support it.
+ * **Creation time:** ext4fs records the time the file was created in the `crtime` timestamp, but not all tools support it.
+
+
+
+The different timestamps are stored in the metadata contained in the inodes. Inodes are similar to the MFT entry number in the Windows world. One way to read the file metadata on a Linux system is to first get the inode number using the command `ls -i file` then use `istat` against the partition device and specify the inode number. This will show you the different metadata attributes, including the timestamps, the file size, owner's group and user id, permissions, and the blocks that contain the actual data.
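+
+As a sketch of that workflow on the mounted image (the file path and inode number here are hypothetical), it looks like this:
+```
+$ ls -i /mnt/home/john/.bash_history
+131093 /mnt/home/john/.bash_history
+$ istat /dev/mapper/loop0p1 131093
+```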
+
+### Creating the super timeline
+
+My next step is to create a super timeline using log2timeline/plaso. [Plaso][15] is a Python-based rewrite of the Perl-based log2timeline tool initially created by Kristinn Gudjonsson and enhanced by others. It's easy to make a super timeline with log2timeline, but interpretation is difficult. The latest version of the plaso engine can parse ext4 filesystems as well as different types of artifacts, such as syslog messages, audit logs, utmp, and others.
+
+To create the super timeline, I launch log2timeline against the mounted disk folder and use the Linux parsers. This process takes some time; when it finishes, I have a timeline with the different artifacts in plaso database format. I can then use `psort.py` to convert the plaso database into any number of different output formats. To see the output formats `psort.py` supports, enter `psort.py -o list`. I used `psort.py` to create an Excel-formatted super timeline. The figure below outlines the steps to perform this operation.
+
+(Note: extraneous lines removed from images)
+
+![Creating a super timeline in .xlsx format][17]
+
+Fig. 5: Creating a super timeline in .xlsx format
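+
+In command form, the steps in Fig. 5 amount to something like the following (the file names are assumptions; `--parsers linux` selects plaso's Linux parser preset):
+```
+$ log2timeline.py --parsers linux timeline.plaso /mnt
+$ psort.py -o xlsx -w supertimeline.xlsx timeline.plaso
+```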
+
+I import the super timeline into a spreadsheet program to make viewing, sorting, and searching easier. While you can view a super timeline in a spreadsheet program, it's easier to work with it in a real database such as MySQL or Elasticsearch. I create a second super timeline and dispatch it directly to an Elasticsearch instance from `psort.py`. Once the super timeline has been indexed by Elasticsearch, I can visualize and analyze the data with [Kibana][18].
+
+![Creating a super timeline and ingesting it into Elasticsearch][20]
+
+Fig. 6: Creating a super timeline and ingesting it into Elasticsearch
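+
+The Elasticsearch variant in Fig. 6 uses plaso's `elastic` output module instead of `xlsx`; the connection options (server, port, index name) vary across plaso versions, so this is only the general shape:
+```
+$ psort.py -o elastic timeline.plaso
+```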
+
+### Investigating with Elasticsearch/Kibana
+
+As [Master Sergeant Farrell][21] said, "Through readiness and discipline, we are masters of our fate." During the analysis, it pays to be patient and meticulous and avoid being a creeper. One thing that helps a super timeline analysis is to have an idea of when the incident may have happened. In this case (pun intended), the client says the incident may have happened in March. I still consider the possibility the client is incorrect about the timeframe. Armed with this information, I start reducing the super timeline's timeframe and narrowing it down. I'm looking for artifacts of interest that have a "temporal proximity" with the supposed date of the incident. The goal is to recreate what happened based on different artifacts.
+
+To narrow the scope of the super timeline, I use the Elasticsearch/Kibana instance I set up. With Kibana, I can set up any number of intricate dashboards to display and correlate forensic events of interest, but I want to avoid this level of complexity. Instead, I select indexes of interest for display and create a bar graph of activity by date:
+
+![Activity on pfe1 over time][23]
+
+Fig. 7: Activity on pfe1 over time
+
+The next step is to expand the large bar at the end of the chart:
+
+![Activity on pfe1 during March][25]
+
+Fig. 8: Activity on pfe1 during March
+
+There is a large bar on 05-Mar. I expand that bar out to see the activity on that particular date:
+
+![Activity on pfe1 on 05-Mar][27]
+
+Fig. 9: Activity on pfe1 on 05-Mar
+
+Looking at the logfile activity from the super timeline, I see this activity was from a software install/upgrade. There is very little to be found in this area of activity.
+
+![Log listing from pfe1 on 05-Mar][29]
+
+Fig. 10: Log listing from pfe1 on 05-Mar
+
+I go back to Kibana to see the last set of activities on the system and find this in the logs:
+
+![Last activity on pfe1 before shutdown][31]
+
+Fig. 11: Last activity on pfe1 before shutdown
+
+One of the last activities on the system was user john installing a program from a directory named xingyiquan. Xing Yi Quan is a style of Chinese martial arts similar to Kung Fu and Tai Chi Quan. It seems odd that user john would install a martial arts program on a company server from his own user account. I use Kibana's search capability to find other instances of xingyiquan in the logfiles and find three periods of activity surrounding the string xingyiquan: on 05-Mar, 09-Mar, and 12-Mar.
+
+![xingyiquan activity on pfe1][33]
+
+Fig. 12: xingyiquan activity on pfe1
+
+Next, I look at the log entries for these days. I start with 05-Mar and find evidence of an internet search, using the Firefox browser and the Google search engine, for a rootkit named xingyiquan. The Google search turned up such a rootkit on packetstormsecurity.com. Then, the browser went to packetstormsecurity.com and downloaded a file named `xingyiquan.tar.gz` from that site into user john's download directory.
+
+![Search and download of xingyiquan.tar.gz][35]
+
+Fig. 13: Search and download of xingyiquan.tar.gz
+
+Although it appears user john went to google.com to search for the rootkit and then to packetstormsecurity.com to download the rootkit, these log entries do not indicate the user behind the search and download. I need to look further into this.
+
+The Firefox browser keeps its history information in an SQLite database file named `places.sqlite`, stored under the `.mozilla` directory in a user's home directory (in this case, user john's). To view the information in the database, I use a program called [sqlitebrowser][36]. It's a GUI application that allows a user to drill down into an SQLite database and view the records stored there. I launch sqlitebrowser and import `places.sqlite` from the `.mozilla` directory under user john's home directory. The results are shown below.
+
+![Search and download history of user john][38]
+
+Fig. 14: Search and download history of user john
+
+The number in the far-right column is the timestamp for the activity on the left. As a test of congruence, I converted the timestamp `1425614413880000` to human time and got March 5, 2015, 8:00:13.880 PM. This matches closely with the time March 5th, 2015, 20:00:00.000 from Kibana. We can say with reasonable certainty that user john searched for a rootkit named xingyiquan and downloaded a file from packetstormsecurity.com named `xingyiquan.tar.gz` to user john's download directory.
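+
+Firefox stores these timestamps in microseconds since the Unix epoch, so dropping the last six digits gives a value the `date` command can convert. For example:
+```
+$ date -u -d @1425614413
+Fri Mar  6 04:00:13 UTC 2015
+```
+That is 8:00:13 PM on 05-Mar in a UTC-8 timezone, matching the Kibana entry.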
+
+### Investigating with MySQL
+
+At this point, I decide to import the super timeline into a MySQL database to gain greater flexibility in searching and manipulating data than Elasticsearch/Kibana alone allows.
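+
+There are several ways to do the import; one sketch, assuming `psort.py`'s `l2tcsv` output format and a pre-created `timeline` table whose columns match the CSV fields, is:
+```
+$ psort.py -o l2tcsv -w supertimeline.csv timeline.plaso
+mysql> LOAD DATA LOCAL INFILE 'supertimeline.csv' INTO TABLE timeline
+    ->   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
+    ->   IGNORE 1 LINES;
+```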
+
+### Building the xingyiquan rootkit
+
+I load the super timeline I created from the plaso database into a MySQL database. From working with Elasticsearch/Kibana, I know that user john downloaded the rootkit `xingyiquan.tar.gz` from packetstormsecurity.com to the download directory. Here is evidence of the download activity from the MySQL timeline database:
+
+![Downloading the xingyiquan.tar.gz rootkit][40]
+
+Fig. 15: Downloading the xingyiquan.tar.gz rootkit
+
+Shortly after the rootkit was downloaded, the source was extracted from the `tar.gz` archive.
+
+![Extracting the rootkit source from the tar.gz archive][42]
+
+Fig. 16: Extracting the rootkit source from the tar.gz archive
+
+Nothing was done with the rootkit until 09-Mar, when the bad actor read the rootkit's README file with the `more` program, then compiled and installed the rootkit.
+
+![Building the xingyiquan rootkit][44]
+
+Fig. 17: Building the xingyiquan rootkit
+
+### Command histories
+
+I load histories of all the users on pfe1 that have `bash` command histories into a table in the MySQL database. Once the histories are loaded, I can easily display them using a query like:
+```
+select * from histories order by recno;
+
+```
+
+To get the history for a specific user, I use a query like the following (here, for user john):
+```
+select historyCommand from histories where historyFilename like '%john%' order by recno;
+
+```
+
+I find several interesting commands in user john's `bash` history. Namely, user john created the johnn account, deleted it, and created it again; copied `/bin/true` to `/bin/false`; gave passwords to the whoopsie and lightdm accounts; copied `/bin/bash` to `/bin/false`; edited the password and group files; moved user johnn's home directory from `johnn` to `.johnn`, making it a hidden directory; changed the password file using `sed` after looking up how to use `sed`; and finally installed the xingyiquan rootkit.
+
+![User john's activity][46]
+
+Fig. 18: User john's activity
+
+Next, I look at the `bash` command history for user johnn. It shows no unusual activity.
+
+![User johnn's activity][48]
+
+Fig. 19: User johnn's activity
+
+Noting that user john copied `/bin/bash` to `/bin/false`, I test whether this is true by checking the sizes of the two files and computing an MD5 hash of each. As shown below, the file sizes and the MD5 hashes are the same; thus, the files are identical.
+
+![Checking /bin/bash and /bin/false][50]
+
+Fig. 20: Checking `/bin/bash` and `/bin/false`
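+
+The check in Fig. 20 amounts to comparing sizes and hashes on the mounted image, roughly:
+```
+$ ls -l /mnt/bin/bash /mnt/bin/false
+$ md5sum /mnt/bin/bash /mnt/bin/false
+```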
+
+### Investigating successful and failed logins
+
+To answer part of the "when" question, I load the logfiles containing data on logins, logouts, system startups, and shutdowns into a table in the MySQL database. Using a simple query like:
+```
+select * from logins order by start;
+
+```
+
+I find the following activity:
+
+![Successful logins to pfe1][52]
+
+Fig. 21: Successful logins to pfe1
+
+From this figure, I see that user john logged into pfe1 from IP address `192.168.56.1`. Five minutes later, user johnn logged into pfe1 from the same IP address. Two logins by user lightdm followed, one four minutes later and another a minute after that; user johnn then logged in again less than one minute later, and pfe1 was rebooted.
+
+Looking at unsuccessful logins, I find this activity:
+
+![Unsuccessful logins to pfe1][54]
+
+Fig. 22: Unsuccessful logins to pfe1
+
+Again, user lightdm attempted to log into pfe1 from IP address `192.168.56.1`. In light of bogus accounts logging into pfe1, one of my recommendations to PFE will be to check the system at IP address `192.168.56.1` for evidence of compromise.
+
+### Investigating logfiles
+
+This analysis of successful and failed logins provides valuable information about when events occurred. I turn my attention to investigating the logfiles on pfe1, particularly the authentication and authorization activity in `/var/log/auth*`. I load all the logfiles on pfe1 into a MySQL database table and use a query like:
+```
+select logentry from logs where logfilename like '%auth%' order by recno;
+
+```
+
+and save the results to a file. I open that file with my favorite editor and search for `192.168.56.1`. The following is a section of the activity:
+
+![Account activity on pfe1][56]
+
+Fig. 23: Account activity on pfe1
+
+This section shows that user john logged in from IP address `192.168.56.1` and created the johnn account, removed the johnn account, and created it again. Then, user johnn logged into pfe1 from IP address `192.168.56.1`. Next, user johnn attempted to become user whoopsie with an `su` command, which failed. Then, the password for user whoopsie was changed. User johnn next attempted to become user lightdm with an `su` command, which also failed. This correlates with the activity shown in Figures 21 and 22.
+
+### Conclusions from my investigation
+
+ * User john searched for, downloaded, compiled, and installed a rootkit named xingyiquan onto the server pfe1. The xingyiquan rootkit hides files, directories, processes, and network connections; adds backdoors; and more.
+ * User john created, deleted, and recreated another account on pfe1 named johnn. User john made the home directory of user johnn a hidden file to obscure the existence of this user account.
+ * User john copied the file `/bin/true` over `/bin/false` and then `/bin/bash` over `/bin/false` to facilitate the logins of system accounts not normally used for interactive logins.
+ * User john created passwords for the system accounts whoopsie and lightdm. These accounts normally do not have passwords.
+ * The user account johnn was successfully logged into and user johnn unsuccessfully attempted to become users whoopsie and lightdm.
+ * Server pfe1 has been seriously compromised.
+
+
+
+### My recommendations to PFE
+
+ * Rebuild server pfe1 from the original distribution and apply all relevant patches to the system before returning it to service.
+ * Set up a centralized syslog server and have all systems in the PFE hybrid cloud log to the centralized syslog server and to local logs to consolidate log data and prevent tampering with system logs. Use a security information and event monitoring (SIEM) product to facilitate security event review and correlation.
+ * Implement `bash` command timestamps on all company servers.
+ * Enable audit logging of the root account on all PFE servers and direct the audit logs to the centralized syslog server where they can be correlated with other log information.
+ * Investigate the system with IP address `192.168.56.1` for breaches and compromises, as it was used as a pivot point in the compromise of pfe1.
+
+
+
+If you have used forensics to analyze your Linux filesystem for compromises, please share your tips and recommendations in the comments.
+
+Gary Smith will be speaking at LinuxFest Northwest this year. See [program highlights][57] or [register to attend][58].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/linux-filesystem-forensics
+
+作者:[Gary Smith][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/greptile
+[1]:https://digital-forensics.sans.org/community/downloads
+[2]:http://manpages.ubuntu.com/manpages/trusty/man1/qemu-img.1.html
+[3]:/file/394021
+[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_1.png?itok=97ycgLzk (Converting a VMDK file to RAW format)
+[5]:http://manpages.ubuntu.com/manpages/trusty/man1/mmls.1.html
+[6]:http://manpages.ubuntu.com/manpages/artful/en/man1/fsstat.1.html
+[7]:/file/394026
+[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_2.png?itok=xcpFjon4 (mmls command output)
+[9]:/file/394031
+[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_3.png?itok=DKsXkKK- (fsstat command output)
+[11]:http://manpages.ubuntu.com/manpages/trusty/man8/kpartx.8.html
+[12]:/file/394036
+[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_4.png?itok=AGJiIXmK (Using kpartx to create loopback devices)
+[14]:https://www.amazon.com/Forensic-Discovery-paperback-Dan-Farmer/dp/0321703251
+[15]:https://github.com/log2timeline/plaso
+[16]:/file/394151
+[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_5a_0.png?itok=OgVfAWwD (Creating a super timeline in. xslx format)
+[18]:https://www.elastic.co/products/kibana
+[19]:/file/394051
+[20]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_6.png?itok=1eohddUY (Creating a super timeline and ingesting it into Elasticsearch)
+[21]:http://allyouneediskill.wikia.com/wiki/Master_Sergeant_Farell
+[22]:/file/394056
+[23]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_7.png?itok=avIR86ws (Activity on pfe1 over time)
+[24]:/file/394066
+[25]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_8.png?itok=vfNaPsMB (Activity on pfe1 during March)
+[26]:/file/394071
+[27]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_9.png?itok=2e4oUxJs (Activity on pfe1 on 05-Mar)
+[28]:/file/394076
+[29]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_10.png?itok=0RAjs3WK (Log listing from pfe1 on 05-Mar)
+[30]:/file/394081
+[31]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_11.png?itok=xRLpPw8F (Last activity on pfe1 before shutdown)
+[32]:/file/394086
+[33]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_12.png?itok=JS9YRN6n (xingyiquan activity on pfe1)
+[34]:/file/394091
+[35]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_13.png?itok=jX0wwgla (Search and download of xingyiquan.tar.gz)
+[36]:http://sqlitebrowser.org/
+[37]:/file/394096
+[38]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_14.png?itok=E9u4PoJI (Search and download history of user john)
+[39]:/file/394101
+[40]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_15.png?itok=ZrA8j8ET (Downloading the xingyiquan.tar.gz rootkit)
+[41]:/file/394106
+[42]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_16.png?itok=wMQVSjTF (Extracting the rootkit source from the tar.gz archive)
+[43]:/file/394111
+[44]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_17.png?itok=4H5aKyy9 (Building the xingyiquan rootkit)
+[45]:/file/394116
+[46]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_18.png?itok=vc1EtrRA (User john's activity)
+[47]:/file/394121
+[48]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_19.png?itok=fF6BY3LM (User johnn's activity)
+[49]:/file/394126
+[50]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_20.png?itok=RfLFwep_ (Checking /bin/bash and /bin/false)
+[51]:/file/394131
+[52]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_21.png?itok=oX7YYrSz (Successful logins to pfe1)
+[53]:/file/394136
+[54]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_22.png?itok=wfmLvoi6 (Unsuccessful logins to pfe1)
+[55]:/file/394141
+[56]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/linuxfilesystemforensics_23.png?itok=dyxmwiSw (Account activity on pfe1)
+[57]:https://www.linuxfestnorthwest.org/conferences/lfnw18
+[58]:https://www.linuxfestnorthwest.org/conferences/lfnw18/register/new
From aa2b7812562f1941247c99f6d86f2cc78f42ba7c Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 24 Apr 2018 09:45:33 +0800
Subject: [PATCH 080/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20An=20introduction?=
=?UTF-8?q?=20to=20Python=20bytecode?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0423 An introduction to Python bytecode.md | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 sources/tech/20180423 An introduction to Python bytecode.md
diff --git a/sources/tech/20180423 An introduction to Python bytecode.md b/sources/tech/20180423 An introduction to Python bytecode.md
new file mode 100644
index 0000000000..f170cfac7f
--- /dev/null
+++ b/sources/tech/20180423 An introduction to Python bytecode.md
@@ -0,0 +1,155 @@
+An introduction to Python bytecode
+======
+
+If you've ever written, or even just used, Python, you're probably used to seeing Python source code files; they have names ending in `.py`. And you may also have seen another type of file, with a name ending in `.pyc`, and you may have heard that they're Python "bytecode" files. (These are a bit harder to see on Python 3—instead of ending up in the same directory as your `.py` files, they go into a subdirectory called `__pycache__`.) And maybe you've heard that this is some kind of time-saver that prevents Python from having to re-parse your source code every time it runs.
+
+But beyond "oh, that's Python bytecode," do you really know what's in those files and how Python uses them?
+
+If not, today's your lucky day! I'll take you through what Python bytecode is, how Python uses it to execute your code, and how knowing about it can help you.
+
+### How Python works
+
+Python is often described as an interpreted language—one in which your source code is translated into native CPU instructions as the program runs—but this is only partially correct. Python, like many interpreted languages, actually compiles source code to a set of instructions for a virtual machine, and the Python interpreter is an implementation of that virtual machine. This intermediate format is called "bytecode."
+
+So those `.pyc` files Python leaves lying around aren't just some "faster" or "optimized" version of your source code; they're the bytecode instructions that will be executed by Python's virtual machine as your program runs.
+
+Let's look at an example. Here's a classic "Hello, World!" written in Python:
+```
+def hello():
+    print("Hello, World!")
+```
+
+And here's the bytecode it turns into (translated into a human-readable form):
+```
+  2           0 LOAD_GLOBAL              0 (print)
+              2 LOAD_CONST               1 ('Hello, World!')
+              4 CALL_FUNCTION            1
+```
+
+If you type up that `hello()` function and use the [CPython][1] interpreter to run it, the above listing is what Python will execute. It might look a little weird, though, so let's take a deeper look at what's going on.
+
+### Inside the Python virtual machine
+
+CPython uses a stack-based virtual machine. That is, it's oriented entirely around stack data structures (where you can "push" an item onto the "top" of the structure, or "pop" an item off the "top").
+
+CPython uses three types of stacks:
+
+ 1. The **call stack**. This is the main structure of a running Python program. It has one item—a "frame"—for each currently active function call, with the bottom of the stack being the entry point of the program. Every function call pushes a new frame onto the call stack, and every time a function call returns, its frame is popped off.
+ 2. In each frame, there's an **evaluation stack** (also called the **data stack** ). This stack is where execution of a Python function occurs, and executing Python code consists mostly of pushing things onto this stack, manipulating them, and popping them back off.
+ 3. Also in each frame, there's a **block stack**. This is used by Python to keep track of certain types of control structures: loops, `try`/`except` blocks, and `with` blocks all cause entries to be pushed onto the block stack, and the block stack gets popped whenever you exit one of those structures. This helps Python know which blocks are active at any given moment so that, for example, a `continue` or `break` statement can affect the correct block.
+
+
+
+Most of Python's bytecode instructions manipulate the evaluation stack of the current call-stack frame, although there are some instructions that do other things (like jump to specific instructions or manipulate the block stack).
+
+To get a feel for this, suppose we have some code that calls a function, like this: `my_function(my_variable, 2)`. Python will translate this into a sequence of four bytecode instructions:
+
+ 1. A `LOAD_NAME` instruction that looks up the function object `my_function` and pushes it onto the top of the evaluation stack
+ 2. Another `LOAD_NAME` instruction to look up the variable `my_variable` and push it on top of the evaluation stack
+ 3. A `LOAD_CONST` instruction to push the literal integer value `2` on top of the evaluation stack
+ 4. A `CALL_FUNCTION` instruction
+
+
+
+The `CALL_FUNCTION` instruction will have an argument of 2, which indicates that Python needs to pop two positional arguments off the top of the stack; then the function to call will be on top, and it can be popped as well (for functions involving keyword arguments, a different instruction—`CALL_FUNCTION_KW`—is used, but with a similar principle of operation, and a third instruction, `CALL_FUNCTION_EX`, is used for function calls that involve argument unpacking with the `*` or `**` operators). Once Python has all that, it will allocate a new frame on the call stack, populate the local variables for the function call, and execute the bytecode of `my_function` inside that frame. Once that's done, the frame will be popped off the call stack, and in the original frame the return value of `my_function` will be pushed on top of the evaluation stack.
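+
+As a sketch of how you could verify this yourself (using the `dis` module, which is introduced in the next section), compiling just that call expression on CPython 3.6 produces the listing below; the trailing `RETURN_VALUE` appears because an expression compiled in `eval` mode returns its result:
+```
+>>> import dis
+>>> dis.dis(compile("my_function(my_variable, 2)", "<stdin>", "eval"))
+  1           0 LOAD_NAME                0 (my_function)
+              2 LOAD_NAME                1 (my_variable)
+              4 LOAD_CONST               0 (2)
+              6 CALL_FUNCTION            2
+              8 RETURN_VALUE
+```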
+
+### Accessing and understanding Python bytecode
+
+If you want to play around with this, the `dis` module in the Python standard library is a huge help; the `dis` module provides a "disassembler" for Python bytecode, making it easy to get a human-readable version and look up the various bytecode instructions. [The documentation for the `dis` module][2] goes over its contents and provides a full list of bytecode instructions along with what they do and what arguments they take.
+
+For example, to get the bytecode listing for the `hello()` function above, I typed it into a Python interpreter, then ran:
+```
+import dis
+dis.dis(hello)
+```
+
+The function `dis.dis()` will disassemble a function, method, class, module, compiled Python code object, or string literal containing source code and print a human-readable version. Another handy function in the `dis` module is `distb()`. You can pass it a Python traceback object or call it after an exception has been raised, and it will disassemble the topmost function on the call stack at the time of the exception, print its bytecode, and insert a pointer to the instruction that raised the exception.
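+
+Here's a quick interactive sketch of `distb()` (the traceback is abbreviated, and the exact instruction it flags, `BINARY_TRUE_DIVIDE` for this example on CPython 3.6, depends on your Python version):
+```
+>>> import dis
+>>> def divide(a, b):
+...     return a / b
+...
+>>> divide(1, 0)
+Traceback (most recent call last):
+  ...
+ZeroDivisionError: division by zero
+>>> dis.distb()  # disassembles divide() and marks the instruction that raised
+```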
+
+It's also useful to look at the compiled code objects Python builds for every function since executing a function makes use of attributes of those code objects. Here's an example looking at the `hello()` function:
+```
+>>> hello.__code__
+<code object hello at 0x..., file "<stdin>", line 1>
+>>> hello.__code__.co_consts
+(None, 'Hello, World!')
+>>> hello.__code__.co_varnames
+()
+>>> hello.__code__.co_names
+('print',)
+```
+
+The code object is accessible as the attribute `__code__` on the function and carries a few important attributes:
+
+ * `co_consts` is a tuple of any literals that occur in the function body
+ * `co_varnames` is a tuple containing the names of any local variables used in the function body
+ * `co_names` is a tuple of any non-local names referenced in the function body
+
+
+
+Many bytecode instructions—particularly those that load values to be pushed onto the stack or store values in variables and attributes—use indices in these tuples as their arguments.
+
+So now we can understand the bytecode listing of the `hello()` function:
+
+ 1. `LOAD_GLOBAL 0`: tells Python to look up the global object referenced by the name at index 0 of `co_names` (which is the `print` function) and push it onto the evaluation stack
+ 2. `LOAD_CONST 1`: takes the literal value at index 1 of `co_consts` and pushes it (the value at index 0 is the literal `None`, which is present in `co_consts` because Python function calls have an implicit return value of `None` if no explicit `return` statement is reached)
+ 3. `CALL_FUNCTION 1`: tells Python to call a function; it will need to pop one positional argument off the stack, then the new top-of-stack will be the function to call.
+
+
+
+The "raw" bytecode—as non-human-readable bytes—is also available on the code object as the attribute `co_code`. You can use the list `dis.opname` to look up the names of bytecode instructions from their decimal byte values if you'd like to try to manually disassemble a function.
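+
+For example, here's the raw bytecode of `hello()` on CPython 3.6 (the byte values differ between Python versions, so treat this as a sketch):
+```
+>>> hello.__code__.co_code
+b't\x00d\x01\x83\x01\x01\x00d\x00S\x00'
+>>> import dis
+>>> dis.opname[hello.__code__.co_code[0]]
+'LOAD_GLOBAL'
+```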
+
+### Putting bytecode to use
+
+Now that you've read this far, you might be thinking "OK, I guess that's cool, but what's the practical value of knowing this?" Setting aside curiosity for curiosity's sake, understanding Python bytecode is useful in a few ways.
+
+First, understanding Python's execution model helps you reason about your code. People like to joke about C being a kind of "portable assembler," where you can make good guesses about what machine instructions a particular chunk of C source code will turn into. Understanding bytecode will give you the same ability with Python—if you can anticipate what bytecode your Python source code turns into, you can make better decisions about how to write and optimize it.
+
+Second, understanding bytecode is a useful way to answer questions about Python. For example, I often see newer Python programmers wondering why certain constructs are faster than others (like why `{}` is faster than `dict()`). Knowing how to access and read Python bytecode lets you work out the answers (try it: `dis.dis("{}")` versus `dis.dis("dict()")`).
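+
+Here's roughly what that comparison shows on CPython 3.6 (offsets may vary on other versions):
+```
+>>> import dis
+>>> dis.dis("{}")
+  1           0 BUILD_MAP                0
+              2 RETURN_VALUE
+>>> dis.dis("dict()")
+  1           0 LOAD_NAME                0 (dict)
+              2 CALL_FUNCTION            0
+              4 RETURN_VALUE
+```
+
+The literal needs only a single `BUILD_MAP` instruction, while `dict()` has to look up a name and make a function call; that's where the extra time goes.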
+
+Finally, understanding bytecode and how Python executes it gives a useful perspective on a particular kind of programming that Python programmers don't often engage in: stack-oriented programming. If you've ever used a stack-oriented language like FORTH or Factor, this may be old news, but if you're not familiar with this approach, learning about Python bytecode and understanding how its stack-oriented programming model works is a neat way to broaden your programming knowledge.
+
+### Further reading
+
+If you'd like to learn more about Python bytecode, the Python virtual machine, and how they work, I recommend these resources:
+
+ * [Inside the Python Virtual Machine][3] by Obi Ike-Nwosu is a free online book that does a deep dive into the Python interpreter, explaining in detail how Python actually works.
+ * [A Python Interpreter Written in Python][4] by Allison Kaptur is a tutorial for building a Python bytecode interpreter in—what else—Python itself, and it implements all the machinery to run Python bytecode.
+ * Finally, the CPython interpreter is open source and you can [read through it on GitHub][1]. The implementation of the bytecode interpreter is in the file `Python/ceval.c`. [Here's that file for the Python 3.6.4 release][5]; the bytecode instructions are handled by the `switch` statement beginning on line 1266.
+
+
+
+To learn more, attend James Bennett's talk, [A Bit about Bytes: Understanding Python Bytecode][6], at [PyCon Cleveland 2018][7].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/introduction-python-bytecode
+
+作者:[James Bennett][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/ubernostrum
+[1]:https://github.com/python/cpython
+[2]:https://docs.python.org/3/library/dis.html
+[3]:https://leanpub.com/insidethepythonvirtualmachine
+[4]:http://www.aosabook.org/en/500L/a-python-interpreter-written-in-python.html
+[5]:https://github.com/python/cpython/blob/d48ecebad5ac78a1783e09b0d32c211d9754edf4/Python/ceval.c
+[6]:https://us.pycon.org/2018/schedule/presentation/127/
+[7]:https://us.pycon.org/2018/
From 88c1775b026c23cdefc6a6bf7b095e47d05cef42 Mon Sep 17 00:00:00 2001
From: Dot
Date: Tue, 24 Apr 2018 10:33:41 +0800
Subject: [PATCH 081/220] [translated] 20180101 How Exit Traps Can Make Your
Bash Scripts Way More Robust And Reliable.md
---
...sh Scripts Way More Robust And Reliable.md | 176 ------------------
...sh Scripts Way More Robust And Reliable.md | 173 +++++++++++++++++
2 files changed, 173 insertions(+), 176 deletions(-)
delete mode 100644 sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
create mode 100644 translated/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
diff --git a/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md b/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
deleted file mode 100644
index 73bb7e841b..0000000000
--- a/sources/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
+++ /dev/null
@@ -1,176 +0,0 @@
-[ translating by Dotcra ]
-How "Exit Traps" Can Make Your Bash Scripts Way More Robust And Reliable
-============================================================
-
-There is a simple, useful idiom to make your bash scripts more robust - ensuring they always perform necessary cleanup operations, even when something unexpected goes wrong. The secret sauce is a pseudo-signal provided by bash, called EXIT, that you can [trap][1]; commands or functions trapped on it will execute when the script exits for any reason. Let's see how this works.
-
-The basic code structure is like this:
-
-```
-#!/bin/bash
-function finish {
- # Your cleanup code here
-}
-trap finish EXIT
-```
-
-You place any code that you want to be certain to run in this "finish" function. A good common example: creating a temporary scratch directory, then deleting it after.
-
-```
-#!/bin/bash
-scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
-function finish {
- rm -rf "$scratch"
-}
-trap finish EXIT
-```
-
-You can then download, generate, slice and dice intermediate or temporary files to the `$scratch` directory to your heart's content. [[1]][2]
-
-```
-# Download every linux kernel ever.... FOR SCIENCE!
-for major in {1..4}; do
- for minor in {0..99}; do
- for patchlevel in {0..99}; do
- tarball="linux-${major}-${minor}-${patchlevel}.tar.bz2"
- curl -q "http://kernel.org/path/to/$tarball" -o "$scratch/$tarball" || true
- if [ -f "$scratch/$tarball" ]; then
- tar jxf "$scratch/$tarball"
- fi
- done
- done
-done
-# magically merge them into some frankenstein kernel ...
-# That done, copy it to a destination
-cp "$scratch/frankenstein-linux.tar.bz2" "$1"
-# Here at script end, the scratch directory is erased automatically
-```
-
-Compare this to how you'd remove the scratch directory without the trap:
-
-```
-#!/bin/bash
-# DON'T DO THIS!
-scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
-
-# Insert dozens or hundreds of lines of code here...
-
-# All done, now remove the directory before we exit
-rm -rf "$scratch"
-```
-
-What's wrong with this? Plenty:
-
-* If some error causes the script to exit prematurely, the scratch directory and its contents don't get deleted. This is a resource leak, and may have security implications too.
-
-* If the script is designed to exit before the end, you must manually copy 'n paste the rm command at each exit point.
-
-* There are maintainability problems as well. If you later add a new in-script exit, it's easy to forget to include the removal - potentially creating mysterious heisenleaks.
-
-### Keeping Services Up, No Matter What
-
-Another scenario: Imagine you are automating some system administration task, requiring you to temporarily stop a server... and you want to be dead certain it starts again at the end, even if there is some runtime error. Then the pattern is:
-
-```
-function finish {
- # re-start service
- sudo /etc/init.d/something start
-}
-trap finish EXIT
-sudo /etc/init.d/something stop
-# Do the work...
-
-# Allow the script to end and the trapped finish function to start the
-# daemon back up.
-```
-
-A concrete example: suppose you have MongoDB running on an Ubuntu server, and want a cronned script to temporarily stop the process for some regular maintenance task. The way to handle it is:
-
-```
-function finish {
- # re-start service
- sudo service mongdb start
-}
-trap finish EXIT
-# Stop the mongod instance
-sudo service mongdb stop
-# (If mongod is configured to fork, e.g. as part of a replica set, you
-# may instead need to do "sudo killall --wait /usr/bin/mongod".)
-```
-
-### Capping Expensive Resources
-
-There is another situation where the exit trap is very useful: if your script initiates an expensive resource, needed only while the script is executing, and you want to make certain it releases that resource once it's done. For example, suppose you are working with Amazon Web Services (AWS), and want a script that creates a new image.
-
-(If you're not familar with this: Servers running on the Amazon cloud are called "[instances][3]". Instances are launched from Amazon Machine Images, a.k.a. "AMIs" or "images". AMIs are kind of like a snapshot of a server at a specific moment in time.)
-
-A common pattern for creating custom AMIs looks like:
-
-1. Run an instance (i.e. start a server) from some base AMI.
-
-2. Make some modifications to it, perhaps by copying a script over and then executing it.
-
-3. Create a new image from this now-modified instance.
-
-4. Terminate the running instance, which you no longer need.
-
-That last step is **really important**. If your script fails to terminate the instance, it will keep running and accruing charges to your account. (In the worst case, you won't notice until the end of the month, when your bill is way higher than you expect. Believe me, that's no fun!)
-
-If our AMI-creation is encapsulated in a script, we can set an exit trap to destroy the instance. Let's rely on the EC2 command line tools:
-
-```
-#!/bin/bash
-# define the base AMI ID somehow
-ami=$1
-# Store the temporary instance ID here
-instance=''
-# While we are at it, let me show you another use for a scratch directory.
-scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
-function finish {
- if [ -n "$instance" ]; then
- ec2-terminate-instances "$instance"
- fi
- rm -rf "$scratch"
-}
-trap finish EXIT
-# This line runs the instance, and stores the program output (which
-# shows the instance ID) in a file in the scratch directory.
-ec2-run-instances "$ami" > "$scratch/run-instance"
-# Now extract the instance ID.
-instance=$(grep '^INSTANCE' "$scratch/run-instance" | cut -f 2)
-```
-
-At this point in the script, the instance (EC2 server) is running [[2]][4]. You can do whatever you like: install software on the instance, modify its configuration programatically, et cetera, finally creating an image from the final version. The instance will be terminated for you when the script exits - even if some uncaught error causes it to exit early. (Just make sure to block until the image creation process finishes.)
-
-### Plenty Of Uses
-
-I believe what I've covered in this article only scratches the surface; having used this bash pattern for years, I still find new interesting and fun ways to apply it. You will probably discover your own situations where it will help make your bash scripts more reliable.
-
-### Footnotes
-
-1. The -t option to mktemp is optional on Linux, but needed on OS X. Make your scripts using this idiom more portable by including this option.
-
-2. When getting the instance ID, instead of using the scratch file, we could just say: `instance=$(ec2-run-instances "$ami" | grep '^INSTANCE' | cut -f 2)`. But using the scratch file makes the code a bit more readable, leaves us with better logging for debugging, and makes it easy to capture other info from ec2-run-instances's output if we wish.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Writer, software engineer, and entrepreneur in San Francisco, CA, USA.
-
-Author of [Powerful Python][5] and its [blog][6].
-via: http://redsymbol.net/articles/bash-exit-traps/
-
-作者:[aaron maxwell ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://redsymbol.net/
-[1]:http://www.gnu.org/software/bash/manual/bashref.html#index-trap
-[2]:http://redsymbol.net/articles/bash-exit-traps/#footnote-1
-[3]:http://aws.amazon.com/ec2/
-[4]:http://redsymbol.net/articles/bash-exit-traps/#footnote-2
-[5]:https://www.amazon.com/d/0692878971
-[6]:https://powerfulpython.com/blog/
diff --git a/translated/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md b/translated/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
new file mode 100644
index 0000000000..b70bf03f54
--- /dev/null
+++ b/translated/tech/20180101 How Exit Traps Can Make Your Bash Scripts Way More Robust And Reliable.md
@@ -0,0 +1,173 @@
+"Exit Traps" 让你的 Bash 脚本更稳固可靠
+============================================================
+
+有一个简单实用的技巧可以让你的 bash 脚本更稳健 -- 确保脚本总是能执行必要的收尾工作,哪怕是在发生异常的时候。秘诀就是 bash 提供的一个叫做 EXIT 的伪信号,你可以[捕获(trap)][1]它;当脚本因为任何原因退出时,绑定在它上面的命令或函数就会执行。我们来看看它是如何工作的。
+
+基本的代码结构看起来像这样:
+
+```
+#!/bin/bash
+function finish {
+ # 你的收尾代码
+}
+trap finish EXIT
+```
+
+你可以把任何你觉得务必要运行的代码放在这个 "finish" 函数里。一个很好的例子是:创建一个临时目录,事后再删除它。
+
+```
+#!/bin/bash
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+function finish {
+ rm -rf "$scratch"
+}
+trap finish EXIT
+```
+
+这样,在你的核心代码中,你就可以在这个 `$scratch` 目录里下载、生成、操作中间或临时数据了。[[1]][2]
+
+```
+# 下载所有版本的 linux 内核…… 为了科学!
+for major in {1..4}; do
+ for minor in {0..99}; do
+ for patchlevel in {0..99}; do
+ tarball="linux-${major}-${minor}-${patchlevel}.tar.bz2"
+ curl -q "http://kernel.org/path/to/$tarball" -o "$scratch/$tarball" || true
+ if [ -f "$scratch/$tarball" ]; then
+ tar jxf "$scratch/$tarball"
+ fi
+ done
+ done
+done
+# 整合成单个文件
+# 复制到目标位置
+cp "$scratch/frankenstein-linux.tar.bz2" "$1"
+# 脚本结束, scratch 目录自动被删除
+```
+
+比较一下如果不用 trap ,你是怎么删除 scratch 目录的:
+
+```
+#!/bin/bash
+# 别这样做!
+
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+
+# 在这里插入你的几十上百行代码
+
+# 都搞定了,退出之前把目录删除
+rm -rf "$scratch"
+```
+
+这有什么问题么?很多:
+
+* 如果运行出错导致脚本提前退出, scratch 目录及里面的内容不会被删除。这会造成资源泄漏,还可能带来安全隐患。
+
+* 如果脚本本来就设计为可以在中途退出,那么你必须在每一个退出点手动复制粘贴 rm 命令。
+
+* 这也给维护带来了麻烦。如果今后在脚本某处添加了一个 exit ,你很可能就忘了加上删除操作,从而制造出难以察觉的资源泄漏。
+
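+下面这个小示例(演示用的示意脚本)可以直观地看到:即使脚本因错误而提前退出, trap 的收尾函数依然会执行:
+
+```
+#!/bin/bash
+set -e                  # 任何命令失败都会让脚本立即退出
+function finish {
+  echo "收尾工作已执行"   # 无论脚本如何退出,这一行都会被打印
+}
+trap finish EXIT
+echo "开始工作"
+false                   # 模拟运行出错,脚本在此提前退出
+echo "这一行永远不会被执行"
+```
+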
+### 无论如何,服务要在线
+
+另外一个场景: 想象一下你正在运行一些自动化系统运维任务,要临时关闭一项服务,最后这项服务需要重启,而且要万无一失,即使脚本运行出错。那么你可以这样做:
+
+```
+function finish {
+ # 重启服务
+ sudo /etc/init.d/something start
+}
+trap finish EXIT
+sudo /etc/init.d/something stop
+# 主要任务代码
+
+# 脚本结束,执行 finish 函数重启服务
+```
+
+一个具体的实例:比如 Ubuntu 服务器上运行着 MongoDB ,你要写一个由 cron 定时运行的脚本来临时关闭服务并做一些日常维护工作。你应该这样写:
+
+```
+function finish {
+ # 重启服务
+ sudo service mongodb start
+}
+trap finish EXIT
+# 关闭 mongod 服务
+sudo service mongodb stop
+# (如果 mongod 配置了 fork ,比如 replica set ,你可能需要执行 "sudo killall --wait /usr/bin/mongod")
+```
+
+### 控制开销
+
+有一种情况特别能体现 EXIT trap 的价值:你要在脚本运行过程中创建一些临时的付费资源,结束时要确保把它们释放掉。比如你在 AWS (Amazon Web Services) 上工作,要在脚本中创建一个镜像。
+
+(名词解释:在亚马逊云上运行的服务器叫做“实例”(instance)。实例从“亚马逊机器镜像”(AMI)创建而来,AMI 也常被称为“镜像”(image)。AMI 相当于服务器在某个特定时间点的快照。)
+
+我们可以这样创建一个自定义的 AMI :
+
+1. 基于一个基准 AMI 运行(创建)一个实例。
+
+2. 在实例中手动或运行脚本来做一些修改。
+
+3. 用修改后的实例创建一个镜像。
+
+4. 终止这个已经不再需要的运行中的实例。
+
+最后一步**相当重要**。如果你的脚本没有把实例删除掉,它会一直运行并计费。(到月底你的账单让你大跌眼镜时,恐怕哭都来不及了!)
+
+如果把 AMI 的创建封装在脚本里,我们就可以利用 trap EXIT 来删除实例了。我们还可以用上 EC2 的命令行工具:
+
+```
+#!/bin/bash
+# 定义基准 AMI 的 ID
+ami=$1
+# 保存临时实例的 ID
+instance=''
+# 顺便,让我们再看看 scratch 目录的另一种用法
+scratch=$(mktemp -d -t tmp.XXXXXXXXXX)
+function finish {
+ if [ -n "$instance" ]; then
+ ec2-terminate-instances "$instance"
+ fi
+ rm -rf "$scratch"
+}
+trap finish EXIT
+# 创建实例,将输出(包含实例 ID )保存到 scratch 目录下的文件里
+ec2-run-instances "$ami" > "$scratch/run-instance"
+# 提取实例 ID
+instance=$(grep '^INSTANCE' "$scratch/run-instance" | cut -f 2)
+```
+
+脚本执行到这里,实例(EC2 服务器)已经开始运行 [[2]][4]。接下来你可以做任何事情:在实例中安装软件,修改配置文件等,然后为最终版本创建一个镜像。实例会在脚本结束时被删除 -- 即使脚本因错误而提前退出。(只需确保脚本会一直阻塞,直到镜像创建完成。)
+
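+比如,可以像下面这样阻塞等待镜像创建完成(示意代码,基于旧版 EC2 命令行工具,输出的具体格式可能与你的版本有所不同):
+
+```
+# 基于修改后的实例创建镜像,并记下新镜像的 ID
+ec2-create-image "$instance" -n "my-image" > "$scratch/create-image"
+ami_new=$(grep '^IMAGE' "$scratch/create-image" | cut -f 2)
+# 轮询镜像状态,直到它不再是 pending
+while ec2-describe-images "$ami_new" | grep -q pending; do
+  sleep 10
+done
+# 脚本到此结束,EXIT trap 会自动删除实例并清理 scratch 目录
+```
+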
+### 更多应用
+
+这篇文章只讲了些皮毛。我已经使用这个 bash 技巧很多年了,现在还能不时发现一些有趣的用法。你也可以把这个方法应用到你自己的场景中,从而提升你的 bash 脚本的可靠性。
+
+### 尾注
+
+1. mktemp 的选项 "-t" 在 Linux 上可选,在 OS X 上必需。带上此选项可以让你的脚本有更好的可移植性。
+
+2. 如果只是为了获取实例 ID ,我们不用创建文件,直接写成 `instance=$(ec2-run-instances "$ami" | grep '^INSTANCE' | cut -f 2)` 就可以。但把输出写入文件可以记录更多有用信息,便于 debug ,代码可读性也更强。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+美国加利福尼亚旧金山的作家,软件工程师,企业家
+
+著有 [Powerful Python][5] 一书及其[博客][6]。
+via: http://redsymbol.net/articles/bash-exit-traps/
+
+作者:[aaron maxwell ][a]
+译者:[Dotcra](https://github.com/Dotcra)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://redsymbol.net/
+[1]:http://www.gnu.org/software/bash/manual/bashref.html#index-trap
+[2]:http://redsymbol.net/articles/bash-exit-traps/#footnote-1
+[3]:http://aws.amazon.com/ec2/
+[4]:http://redsymbol.net/articles/bash-exit-traps/#footnote-2
+[5]:https://www.amazon.com/d/0692878971
+[6]:https://powerfulpython.com/blog/
From 5298d5452506cea0db8248c42e76db350d04c1c2 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 24 Apr 2018 17:48:15 +0800
Subject: [PATCH 082/220] PRF:20180130 Quick Look at the Arch Based Indie Linux
Distribution- MagpieOS.md
@geekpi
---
...ased Indie Linux Distribution- MagpieOS.md | 29 ++++++++++---------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md b/translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
index 178b242760..e891707cb0 100644
--- a/translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
+++ b/translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
@@ -1,48 +1,49 @@
-快速了解基于 Arch 的独立 Linux 发行版:MagpieOS
+一个基于 Arch 的独立 Linux 发行版 MagpieOS
======
-目前使用的大多数 Linux 发行版都是在美国或欧洲创建和开发的。来自孟加拉国的年轻开发人员想要改变这一切。
+
+目前使用的大多数 Linux 发行版都是由欧美创建和开发的。一位来自孟加拉国的年轻开发人员想要改变这一切。
### 谁是 Rizwan?
[Rizwan][1] 是来自孟加拉国的计算机科学专业的学生。他目前正在学习成为一名专业的 Python 程序员。他在 2015 年开始使用 Linux。使用 Linux 启发他创建了自己的 Linux 发行版。他还希望让世界其他地方知道孟加拉国正在升级到 Linux。
-他还致力于从头创建[ Linux live 版本][2]。
+他还致力于从头创建 [LFS 的 live 版本][2]。
-## ![MagpieOS Linux][3]
+![MagpieOS Linux][3]
### 什么是 MagpieOS?
-Rizwan 的新发行版被命名为 MagpieOS。 [MagpieOS][4]非常简单。它基本上是 GNOME3 桌面环境的 Arch。 MagpieOS 还包括一个自定义的仓库,其中包含图标和主题(据称是),这在其他基于 Arch 的发行版或 AUR上 不可用。
+Rizwan 的新发行版被命名为 MagpieOS。 [MagpieOS][4] 非常简单。它基本上是使用 GNOME3 桌面环境的 Arch。 MagpieOS 还包括一个自定义的仓库,其中包含(据称)在其他基于 Arch 的发行版或 AUR 中都没有的图标和主题。
-下面是 MagpieOS 包含的软件列表:Firefox、LibreOffice、Uget、Bleachbit、Notepadqq、SUSE Studio Image Writer、Pamac 软件包管理器、Gparted、Gimp、Rhythmbox、简单屏幕录像机、包括 Totem 视频播放器在内的所有默认 GNOME 软件,以及一套新的定制壁纸。
+下面是 MagpieOS 包含的软件列表:Firefox、LibreOffice、Uget、Bleachbit、Notepadqq、SUSE Studio Image Writer、Pamac 软件包管理器、Gparted、Gimp、Rhythmbox、简单屏幕录像机,以及包括 Totem 视频播放器在内的所有默认 GNOME 软件,还有一套新的定制壁纸。
目前,MagpieOS 仅支持 GNOME 桌面环境。Rizwan 选择它是因为这是他的最爱。但是,他计划在未来添加更多的桌面环境。
不幸的是,MagpieOS 不支持孟加拉语或任何其他当地语言。它支持 GNOME 的默认语言,如英语、印地语等。
-Rizwan 命名他的发行为 MagpieOS,因为[喜鹊][5] (magpie) 是孟加拉国的官方鸟。
+Rizwan 将他的发行版命名为 MagpieOS,因为[喜鹊][5](magpie)是孟加拉国的国鸟。
-## ![MagpieOS Linux][6]
+![MagpieOS Linux][6]
### 为什么选择 Arch?
和大多数人一样,Rizwan 通过使用 [Ubuntu][7] 开始了他的 Linux 旅程。一开始,他对此感到满意。但是,有时他想安装的软件在仓库中没有,他不得不通过 Google 寻找正确的 PPA。他决定切换到 [Arch][8],因为 Arch 有许多在 Ubuntu 上没有的软件包。Rizwan 也喜欢 Arch 是一个滚动版本,并且始终是最新的。
-Arch 的问题在于它的安装非常复杂和耗时。所以,Rizwan 尝试了几个基于 Arch 的发行版,并且对任何一个都不满意。他不喜欢 [Manjaro][9],因为他们没有权限使用 Arch 的仓库。此外,Arch 仓库镜像比 Manjaro 更快并且拥有更多软件。他喜欢 [Antergos][10],但要安装需要一个持续的互联网连接。如果在安装过程中连接失败,则必须重新开始。
+Arch 的问题在于它的安装非常复杂和耗时。所以,Rizwan 尝试了几个基于 Arch 的发行版,并且对任何一个都不满意。他不喜欢 [Manjaro][9],因为它们没有权限使用 Arch 的仓库。此外,Arch 仓库镜像比 Manjaro 更快并且拥有更多软件。他喜欢 [Antergos][10],但要安装需要一个持续的互联网连接。如果在安装过程中连接失败,则必须重新开始。
由于这些问题,Rizwan 决定创建一个简单的发行版,让他和其他人无需麻烦地安装 Arch。他还希望通过使用他的发行版让他的祖国的开发人员从 Ubuntu 切换到 Arch。
### 如何通过 MagpieOS 帮助 Rizwan
-如果你有兴趣帮助 Rizwan 开发 MagpieOS,你可以通过[ MagpieOS 网站][4]与他联系。你也可以查看该项目的[ GitHub 页面][11]。Rizwan 表示,他目前不寻求财政支持。
+如果你有兴趣帮助 Rizwan 开发 MagpieOS,你可以通过 [MagpieOS 网站][4]与他联系。你也可以查看该项目的 [GitHub 页面][11]。Rizwan 表示,他目前不寻求财政支持。
-## ![MagpieOS Linux][12]
+![MagpieOS Linux][12]
### 最后的想法
-我快速地一次安装完成 MagpieOS。它使用[ Calamares 安装程序][13],这意味着安装它相对快速和无痛。重新启动后,我听到一封欢迎我来到 MagpieOS 的音频消息。
+我快速地安装过一次 MagpieOS。它使用 [Calamares 安装程序][13],这意味着安装它相对快速轻松。重新启动后,我听到了一段欢迎我使用 MagpieOS 的音频消息。
-说实话,这是我第一次听到安装后的问候。(Windows 10 可能也有,但我不确定。)屏幕底部还有一个 Mac OS 风格的应用程序停靠栏。除此之外,它感觉像我用过的其他任何 GNOME 3 桌面。
+说实话,这是我第一次听到安装后的问候。(Windows 10 可能也有,但我不确定)屏幕底部还有一个 Mac OS 风格的应用程序停靠栏。除此之外,它感觉像我用过的其他任何 GNOME 3 桌面。
考虑到这是一个刚刚起步的独立项目,我不会推荐它作为你的主要操作系统。但是,如果你是一个发行版尝试者,你一定会试试看。
@@ -58,7 +59,7 @@ via: https://itsfoss.com/magpieos/
作者:[John Paul][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 8486751229df490d9d6b9068d9922197a0e94cb7 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 24 Apr 2018 17:48:37 +0800
Subject: [PATCH 083/220] PUB:20180130 Quick Look at the Arch Based Indie Linux
Distribution- MagpieOS.md
@geekpi
---
...k Look at the Arch Based Indie Linux Distribution- MagpieOS.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md (100%)
diff --git a/translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md b/published/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
similarity index 100%
rename from translated/tech/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
rename to published/20180130 Quick Look at the Arch Based Indie Linux Distribution- MagpieOS.md
From 859a83c80ca8473873d8701e373bfdf67a096c04 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 24 Apr 2018 18:03:57 +0800
Subject: [PATCH 084/220] PRF:20180412 3 password managers for the Linux
command line.md
@MjSeven
---
...ord managers for the Linux command line.md | 29 +++++++++++--------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/translated/tech/20180412 3 password managers for the Linux command line.md b/translated/tech/20180412 3 password managers for the Linux command line.md
index 346b778217..cf0b6e75b7 100644
--- a/translated/tech/20180412 3 password managers for the Linux command line.md
+++ b/translated/tech/20180412 3 password managers for the Linux command line.md
@@ -1,10 +1,13 @@
3 个 Linux 命令行密码管理器
=====
+> 给在终端窗口花费大量时间的人们的密码管理器。
+

+
我们都希望我们的密码安全可靠。为此,许多人转向密码管理应用程序,如 [KeePassX][1] 和 [Bitwarden][2]。
-如果你在终端中花费了大量时间而且正在寻找更简单的解决方案,那么你需要查看 Linux 命令行的许多密码管理器。它们快速,易于使用且安全。
+如果你在终端中花费了大量时间而且正在寻找更简单的解决方案,那么你需要了解下诸多的 Linux 命令行密码管理器。它们快速,易于使用且安全。
让我们来看看其中的三个。
@@ -12,32 +15,34 @@
[Titan][3] 是一个密码管理器,也可作为文件加密工具。我不确定 Titan 在加密文件方面效果有多好;我只是把它看作密码管理器,在这方面,它确实做的很好。
-Titan 将你的密码存储在加密的 [SQLite 数据库][4]中,你可以在第一次启动应用程序时创建并添加主密码。告诉 Titan 增加一个密码,它要求一个名字来识别它,一个用户名,密码本身,一个 URL 和一个关于密码的注释。
+
-你可以让 Titan 为你生成一个密码,你可以通过条目名称或数字 ID,名称,注释或使用正则表达式来搜索数据库,但是,查看特定的密码可能会有点笨拙,你要么必须列出所有密码滚动查找你想要使用的密码,要么你可以通过使用其数字 ID(如果你知道)列出条目的详细信息来查看密码。
+Titan 将你的密码存储在加密的 [SQLite 数据库][4]中,你可以在第一次启动该应用程序时创建并添加主密码。告诉 Titan 增加一个密码,它需要一个用来识别它的名字、用户名、密码本身、URL 和关于密码的注释。
+
+你可以让 Titan 为你生成一个密码;你可以通过条目名称或数字 ID、名称、注释或使用正则表达式来搜索数据库。但是,查看特定的密码可能会有点笨拙:你要么列出所有密码,再滚动查找你想要使用的那个;要么(如果你知道它的数字 ID)通过列出该条目的详细信息来查看密码。
### Gopass
-[Gopass][5] 被称为“团队密码管理员”。不要让这让你感到失望,它对个人的使用也很好。
+[Gopass][5] 被称为“团队密码管理器”。不要因此感到失望,它对个人的使用也很好。

-Gopass 是用 Go 语言编写的经典 Unix 和 Linux [Pass][6] 密码管理器的更新。在真正的 Linux 潮流中,你可以[编译源代码][7]或[使用安装程序][8]在你的计算机上使用 gopass。
+Gopass 是用 Go 语言编写的经典 Unix 和 Linux [Pass][6] 密码管理器的更新版本。按照纯正的 Linux 做法,你可以[编译源代码][7]或[使用安装程序][8],以便在你的计算机上使用 gopass。
-在开始使用 gopass 之前,确保你的系统上有 [GNU Privacy Guard (GPG)][9] 和 [Git][10]。前者对你的密码存储进行加密和解密,后者将提交给一个 [Git 仓库][11]。如果 gopass 是给个人使用,你仍然需要 Git。你只需要担心签名提交。如果你感兴趣,你可以[在文档中][12]了解这些依赖关系。
+在开始使用 gopass 之前,确保你的系统上有 [GNU Privacy Guard (GPG)][9] 和 [Git][10]。前者对你的密码存储进行加密和解密,后者将提交到一个 [Git 仓库][11]。如果 gopass 是给个人使用,你仍然需要 Git。你不需要担心提交到仓库。如果你感兴趣,你可以[在文档中][12]了解这些依赖关系。
-当你第一次启动 gopass 时,你需要创建一个密码存储并生成一个[秘钥][13]以确保存储的安全。当你想添加一个密码( gopass 指的是一个秘密)时,gopass 会要求你提供一些信息,比如 URL,用户名和密码。你可以让 gopass 为你添加的密码生成密码,或者你可以自己输入密码。
+当你第一次启动 gopass 时,你需要创建一个密码存储库并生成一个[密钥][13]以确保存储的安全。当你想添加一个密码(gopass 中称之为“secret”)时,gopass 会要求你提供一些信息,比如 URL、用户名和密码。你可以让 gopass 为你添加的“secret”生成密码,或者你可以自己输入密码。
-根据需要,你可以编辑,查看或删除密码。你还可以查看特定的密码或将其复制到剪贴板以将其粘贴到登录表单或窗口中。
+根据需要,你可以编辑、查看或删除密码。你还可以查看特定的密码或将其复制到剪贴板,以将其粘贴到登录表单或窗口中。
### Kpcli
-许多人选择的是开源密码管理器 [KeePass][14] 和 [KeePassX][15]。 [Kpcli][16] 将 KeePass 和 KeePassX 的功能带到离你最近的终端窗口。
+许多人选择的是开源密码管理器 [KeePass][14] 和 [KeePassX][15]。 [Kpcli][16] 将 KeePass 和 KeePassX 的功能带到你的终端窗口。

-Kpcli 是一个键盘驱动的 shell,可以完成其图形化表亲的大部分功能。这包括打开密码数据库,添加和编辑密码和组(组帮助你组织密码),甚至重命名或删除密码和组。
+Kpcli 是一个键盘驱动的 shell,可以完成其图形化的表亲的大部分功能。这包括打开密码数据库、添加和编辑密码和组(组帮助你组织密码),甚至重命名或删除密码和组。
当你需要时,你可以将用户名和密码复制到剪贴板以粘贴到登录表单中。为了保证这些信息的安全,kpcli 也有清除剪贴板的命令。对于一个小终端应用程序来说还不错。
@@ -48,9 +53,9 @@ Kpcli 是一个键盘驱动的 shell,可以完成其图形化表亲的大部
via: https://opensource.com/article/18/4/3-password-managers-linux-command-line
作者:[Scott Nesbitt][a]
-译者:[MjSeven](https://github.com/mjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 2a5ba0900d4ac1cf9b2aa5955b4cddfe62a0a4d1 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 24 Apr 2018 18:04:47 +0800
Subject: [PATCH 085/220] PUB:20180412 3 password managers for the Linux
command line.md
@MjSeven https://linux.cn/article-9575-1.html
---
.../20180412 3 password managers for the Linux command line.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180412 3 password managers for the Linux command line.md (100%)
diff --git a/translated/tech/20180412 3 password managers for the Linux command line.md b/published/20180412 3 password managers for the Linux command line.md
similarity index 100%
rename from translated/tech/20180412 3 password managers for the Linux command line.md
rename to published/20180412 3 password managers for the Linux command line.md
From ea248e1c46fe7fbad3abb92b78f93d58642c7bb3 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 25 Apr 2018 09:49:29 +0800
Subject: [PATCH 086/220] translated
---
...You To Find Non-free Software In Debian.md | 85 -------------------
1 file changed, 85 deletions(-)
delete mode 100644 sources/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
diff --git a/sources/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md b/sources/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
deleted file mode 100644
index ecc1668747..0000000000
--- a/sources/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
+++ /dev/null
@@ -1,85 +0,0 @@
-translating---geekpi
-
-The Vrms Program Helps You To Find Non-free Software In Debian
-======
-
-
-The other day I was reading an interesting guide that explained the [**difference between free and open source software on Digital ocean**][1]. Until then, I thought both are more or less same. Oh man, I was wrong. There are few significant differences between them. While reading that article, I was wondering how to find non-free software in Linux, hence this post.
-
-### Say hello to “Virtual Richard M. Stallman”, a Perl script to find Non-free Software in Debian
-
-The **Virtual Richard M. Stallman** , shortly **vrms** , is a program, written in Perl, that analyzes the list of installed software on your Debian-based systems and reports all of the packages from non-free and contrib trees which are currently installed. For those wondering, a free software should meet the following [**four essential freedoms**][2].
-
- * **Freedom 0** – The freedom to run the program as you wish, for any purpose.
- * **Freedom 1** – The freedom to study how the program works, and adapt it to your needs. Access to the source code is a precondition for this.
- * **Freedom 2** – The freedom to redistribute copies so you can help your neighbor.
- * **Freedom 3** – The freedom to improve the program, and release your improvements to the public, so that the whole community benefits. Access to the source code is a precondition for this.
-
-
-
-Any software that doesn’t meet the above four conditions are not considered as a free software. In a nutshell, a **Free software means the users have the freedom to run, copy, distribute, study, change and improve the software.**
-
-Now let us find if the installed software is free or non-free, shall we?
-
-The Vrms package is available in the default repositories of Debian and its derivatives like Ubuntu. So, you can install it using apt package manager using the following command.
-```
-$ sudo apt-get install vrms
-
-```
-
-Once installed, run the following command to find non-free software in your debian-based system.
-```
-$ vrms
-
-```
-
-Sample output from my Ubuntu 16.04 LTS desktop.
-```
- Non-free packages installed on ostechnix
-
-unrar Unarchiver for .rar files (non-free version)
-
-1 non-free packages, 0.0% of 2103 installed packages.
-
-```
-
-![][4]
-
-As you can see in the above screenshot, I have one non-free package installed in my Ubuntu box.
-
-If you don’t have any non-free packages on your system, you should see the following output instead.
-```
-No non-free or contrib packages installed on ostechnix! rms would be proud.
-
-```
-
-Vrms can able to find non-free packages not just on Debian but also from Ubuntu, Linux Mint and other deb-based systems as well.
-
-**Limitations**
-
-The Vrms program has some limitations though. Like I already mentioned, it lists the packages from the non-free and contrib sections installed. However, some distributions doesn’t follow the policy which ensures proprietary software only ends up in repository sections recognized by vrms as “non-free” and they make no effort to preserve this separation. In such cases, Vrms won’t recognize the non-free software and will always report that you have non-free software installed on your system. If you’re using distros like Debian and Ubuntu that follows the policy of keeping proprietary software in a non-free repositories, Vrms will definitely help you to find the non-free packages.
-
-And, that’s all. Hope this was useful. More good stuffs to come. Stay tuned!
-
-Happy Tamil new year wishes to all Tamil folks around the world!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.digitalocean.com/community/tutorials/Free-vs-Open-Source-Software
-[2]:https://www.gnu.org/philosophy/free-sw.html
-[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/vrms.png
From 33f2cd7449986dd0f4ef34cbfcf334c1a062bd41 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 25 Apr 2018 09:50:37 +0800
Subject: [PATCH 087/220] translated
---
...You To Find Non-free Software In Debian.md | 83 +++++++++++++++++++
1 file changed, 83 insertions(+)
create mode 100644 translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
diff --git a/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md b/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
new file mode 100644
index 0000000000..7c42daae2c
--- /dev/null
+++ b/translated/tech/20180414 The Vrms Program Helps You To Find Non-free Software In Debian.md
@@ -0,0 +1,83 @@
+Vrms 助你在 Debian 中查找非自由软件
+======
+
+
+有一天,我在 DigitalOcean 上阅读了一篇有趣的指南,它解释了[**自由软件和开源软件之间的区别**][1]。在此之前,我认为两者都差不多。但是,我错了。它们之间有一些显著差异。在阅读那篇文章时,我想知道如何在 Linux 中找到非自由软件,因此有了这篇文章。
+
+### 向 “Virtual Richard M. Stallman” 问好,这是一个在 Debian 中查找非自由软件的 Perl 脚本
+
+**Virtual Richard M. Stallman** ,简称 **vrms**,是一个用 Perl 编写的程序,它会分析你的基于 Debian 的系统上已安装软件的列表,并报告所有来自非自由(non-free)和 contrib 树的已安装软件包。对于不了解的人来说,自由软件应该符合以下[**四项基本自由**][2]。
+
+ * **自由 0** – 不管任何目的,随意运行程序的自由。
+ * **自由 1** – 自由研究程序如何工作,并根据你的需求进行调整。访问源代码是一个先决条件。
+ * **自由 2** – 自由重新分发拷贝,这样你可以帮助别人。
+ * **自由 3** – 自由改进程序,并向公众发布改进,以便整个社区获益。访问源代码是一个先决条件。
+
+
+
+任何不满足上述四个条件的软件都不被视为自由软件。简而言之,**自由软件意味着用户可以自由运行、拷贝、分发、研究、修改和改进软件。**
+
+现在让我们来看看安装的软件是自由的还是非自由的,好么?
+
+Vrms 包存在于 Debian 及其衍生版(如 Ubuntu)的默认仓库中。因此,你可以使用下面的命令,通过 apt 包管理器安装它。
+```
+$ sudo apt-get install vrms
+
+```
+
+安装完成后,运行以下命令,在基于 debian 的系统中查找非自由软件。
+```
+$ vrms
+
+```
+
+下面是在我的 Ubuntu 16.04 LTS 桌面版上的示例输出。
+```
+ Non-free packages installed on ostechnix
+
+unrar Unarchiver for .rar files (non-free version)
+
+1 non-free packages, 0.0% of 2103 installed packages.
+
+```
+
+![][4]
+
+如你在上面的截图中看到的那样,我的 Ubuntu 中安装了一个非自由软件包。
+
+如果你的系统中没有任何非自由软件包,则应该看到以下输出。
+```
+No non-free or contrib packages installed on ostechnix! rms would be proud.
+
+```
+
+Vrms 不仅可以在 Debian 上找到非自由软件包,还可以在 Ubuntu、Linux Mint 和其他基于 deb 的系统中找到非自由软件包。
+
+**限制**
+
+不过,Vrms 也有一些限制。就像我前面提到的,它列出的是来自非自由和 contrib 部分的已安装软件包。但是,某些发行版并未遵循“确保专有软件只进入被 vrms 识别为‘非自由’的仓库区域”的策略,也没有努力维护这种隔离。在这种情况下,Vrms 将无法识别非自由软件,并且会始终报告你的系统上安装了非自由软件。如果你使用的是像 Debian 和 Ubuntu 这样遵循“把专有软件保留在非自由仓库中”策略的发行版,Vrms 一定会帮助你找到非自由软件包。
+
+就是这些。希望这篇文章对你有用。后面还有更多好东西,敬请关注!
+
+祝世界各地所有的泰米尔人泰米尔新年快乐!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.digitalocean.com/community/tutorials/Free-vs-Open-Source-Software
+[2]:https://www.gnu.org/philosophy/free-sw.html
+[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/vrms.png
From 12abf4dba2fd0cab6c4c298e7e7cc8866213acd0 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 25 Apr 2018 09:52:24 +0800
Subject: [PATCH 088/220] translating
---
sources/tech/20180423 How to reset a root password on Fedora.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180423 How to reset a root password on Fedora.md b/sources/tech/20180423 How to reset a root password on Fedora.md
index 9cfe51adf7..e3b0234b77 100644
--- a/sources/tech/20180423 How to reset a root password on Fedora.md
+++ b/sources/tech/20180423 How to reset a root password on Fedora.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
How to reset a root password on Fedora
======
From c9f91f89cec016b7f4fd97612bb6d926887eb42e Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 10:15:22 +0800
Subject: [PATCH 089/220] PRF:20180326 Working with calendars on Linux.md
@MjSeven
---
...0180326 Working with calendars on Linux.md | 269 +++++++++---------
1 file changed, 136 insertions(+), 133 deletions(-)
diff --git a/translated/tech/20180326 Working with calendars on Linux.md b/translated/tech/20180326 Working with calendars on Linux.md
index d63a669df6..c89090addd 100644
--- a/translated/tech/20180326 Working with calendars on Linux.md
+++ b/translated/tech/20180326 Working with calendars on Linux.md
@@ -1,191 +1,195 @@
-在 Linux 上使用日历
+在 Linux 命令行上使用日历
=====
+> 通过 Linux 上的日历,不仅仅可以提醒你今天是星期几。诸如 date、cal、 ncal 和 calendar 等命令可以提供很多有用信息。
+

-Linux 系统可以为你的日程安排提供更多帮助,而不仅仅是提醒你今天是星期几。日历显示有很多选项 -- 有些可能会证明有帮助,有些可能会让你大开眼界。
+
+Linux 系统可以为你的日程安排提供更多帮助,而不仅仅是提醒你今天是星期几。日历显示有很多选项 —— 有些可能很有帮助,有些可能会让你大开眼界。
### 日期
-首先,你可能知道可以使用 **date** 命令显示当前日期。
+首先,你可能知道可以使用 `date` 命令显示当前日期。
+
```
$ date
Mon Mar 26 08:01:41 EDT 2018
-
```
### cal 和 ncal
-你可以使用 **cal** 命令显示整个月份。没有参数时,cal 显示当前月份,默认情况下,通过反转前景色和背景颜色来突出显示当天。
+你可以使用 `cal` 命令显示整个月份。没有参数时,`cal` 显示当前月份,默认情况下,通过反转前景色和背景颜色来突出显示当天。
+
```
$ cal
- March 2018
+ March 2018
Su Mo Tu We Th Fr Sa
- 1 2 3
- 4 5 6 7 8 9 10
+ 1 2 3
+ 4 5 6 7 8 9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29 30 31
-
```
-如果你想以“横向”格式显示当前月份,则可以使用 **ncal** 命令。
+如果你想以“横向”格式显示当前月份,则可以使用 `ncal` 命令。
+
```
$ ncal
- March 2018
-Su 4 11 18 25
-Mo 5 12 19 26
-Tu 6 13 20 27
-We 7 14 21 28
-Th 1 8 15 22 29
-Fr 2 9 16 23 30
-Sa 3 10 17 24 31
-
+ March 2018
+Su 4 11 18 25
+Mo 5 12 19 26
+Tu 6 13 20 27
+We 7 14 21 28
+Th 1 8 15 22 29
+Fr 2 9 16 23 30
+Sa 3 10 17 24 31
```
-例如,如果你只想查看一周特定某天的日期,这个命令可能特别有用。
+例如,如果你只想查看特定周几的日期,这个命令可能特别有用。
+
```
$ ncal | grep Th
-Th 1 8 15 22 29
-
+Th 1 8 15 22 29
```
-ncal 命令还可以以“横向”格式显示一整年,只需在命令后提供年份。
+`ncal` 命令还可以以“横向”格式显示一整年,只需在命令后提供年份。
+
```
$ ncal 2018
- 2018
- January February March April
-Su 7 14 21 28 4 11 18 25 4 11 18 25 1 8 15 22 29
-Mo 1 8 15 22 29 5 12 19 26 5 12 19 26 2 9 16 23 30
-Tu 2 9 16 23 30 6 13 20 27 6 13 20 27 3 10 17 24
-We 3 10 17 24 31 7 14 21 28 7 14 21 28 4 11 18 25
-Th 4 11 18 25 1 8 15 22 1 8 15 22 29 5 12 19 26
-Fr 5 12 19 26 2 9 16 23 2 9 16 23 30 6 13 20 27
-Sa 6 13 20 27 3 10 17 24 3 10 17 24 31 7 14 21 28
+ 2018
+ January February March April
+Su 7 14 21 28 4 11 18 25 4 11 18 25 1 8 15 22 29
+Mo 1 8 15 22 29 5 12 19 26 5 12 19 26 2 9 16 23 30
+Tu 2 9 16 23 30 6 13 20 27 6 13 20 27 3 10 17 24
+We 3 10 17 24 31 7 14 21 28 7 14 21 28 4 11 18 25
+Th 4 11 18 25 1 8 15 22 1 8 15 22 29 5 12 19 26
+Fr 5 12 19 26 2 9 16 23 2 9 16 23 30 6 13 20 27
+Sa 6 13 20 27 3 10 17 24 3 10 17 24 31 7 14 21 28
...
-
```
-你也可以使用 **cal** 命令显示一整年。请记住,你需要输入年份的四位数字。如果你输入 "cal 18",你将获得公元 18 年的历年,而不是 2018 年。
+你也可以使用 `cal` 命令显示一整年。请记住,你需要输入年份的四位数字。如果你输入 `cal 18`,你将获得公元 18 年的历年,而不是 2018 年。
+
```
$ cal 2018
- 2018
- January February March
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 1 2 3 1 2 3
- 7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
-14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
-21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
-28 29 30 31 25 26 27 28 25 26 27 28 29 30 31
+ 2018
+ January February March
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 1 2 3 1 2 3
+ 7 8 9 10 11 12 13 4 5 6 7 8 9 10 4 5 6 7 8 9 10
+14 15 16 17 18 19 20 11 12 13 14 15 16 17 11 12 13 14 15 16 17
+21 22 23 24 25 26 27 18 19 20 21 22 23 24 18 19 20 21 22 23 24
+28 29 30 31 25 26 27 28 25 26 27 28 29 30 31
- April May June
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 7 1 2 3 4 5 1 2
- 8 9 10 11 12 13 14 6 7 8 9 10 11 12 3 4 5 6 7 8 9
-15 16 17 18 19 20 21 13 14 15 16 17 18 19 10 11 12 13 14 15 16
-22 23 24 25 26 27 28 20 21 22 23 24 25 26 17 18 19 20 21 22 23
-29 30 27 28 29 30 31 24 25 26 27 28 29 30
+ April May June
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 7 1 2 3 4 5 1 2
+ 8 9 10 11 12 13 14 6 7 8 9 10 11 12 3 4 5 6 7 8 9
+15 16 17 18 19 20 21 13 14 15 16 17 18 19 10 11 12 13 14 15 16
+22 23 24 25 26 27 28 20 21 22 23 24 25 26 17 18 19 20 21 22 23
+29 30 27 28 29 30 31 24 25 26 27 28 29 30
- July August September
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 7 1 2 3 4 1
- 8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
-15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
-22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
-29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
- 30
-
- October November December
-Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 1 2 3 4 5 6 1 2 3 1
- 7 8 9 10 11 12 13 4 5 6 7 8 9 10 2 3 4 5 6 7 8
-14 15 16 17 18 19 20 11 12 13 14 15 16 17 9 10 11 12 13 14 15
-21 22 23 24 25 26 27 18 19 20 21 22 23 24 16 17 18 19 20 21 22
-28 29 30 31 25 26 27 28 29 30 23 24 25 26 27 28 29
- 30 31
+ July August September
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 7 1 2 3 4 1
+ 8 9 10 11 12 13 14 5 6 7 8 9 10 11 2 3 4 5 6 7 8
+15 16 17 18 19 20 21 12 13 14 15 16 17 18 9 10 11 12 13 14 15
+22 23 24 25 26 27 28 19 20 21 22 23 24 25 16 17 18 19 20 21 22
+29 30 31 26 27 28 29 30 31 23 24 25 26 27 28 29
+ 30
+ October November December
+Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 1 2 3 4 5 6 1 2 3 1
+ 7 8 9 10 11 12 13 4 5 6 7 8 9 10 2 3 4 5 6 7 8
+14 15 16 17 18 19 20 11 12 13 14 15 16 17 9 10 11 12 13 14 15
+21 22 23 24 25 26 27 18 19 20 21 22 23 24 16 17 18 19 20 21 22
+28 29 30 31 25 26 27 28 29 30 23 24 25 26 27 28 29
+ 30 31
```
-对于特定的年份和月份,使用 -d 选项,如下所示:
+要指定年份和月份,使用 `-d` 选项,如下所示:
+
```
$ cal -d 1949-03
- March 1949
+ March 1949
Su Mo Tu We Th Fr Sa
- 1 2 3 4 5
- 6 7 8 9 10 11 12
+ 1 2 3 4 5
+ 6 7 8 9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28 29 30 31
-
```
-另一个可能有用的日历选项是 **cal** 命令的 -j 选项。让我们来看看它显示的是什么。
+另一个可能有用的日历选项是 `cal` 命令的 `-j` 选项。让我们来看看它显示的是什么。
+
```
$ cal -j
- March 2018
- Su Mo Tu We Th Fr Sa
- 60 61 62
- 63 64 65 66 67 68 69
- 70 71 72 73 74 75 76
- 77 78 79 80 81 82 83
- 84 85 86 87 88 89 90
-
+ March 2018
+ Su Mo Tu We Th Fr Sa
+ 60 61 62
+ 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76
+ 77 78 79 80 81 82 83
+ 84 85 86 87 88 89 90
```
-你可能会问:“什么???” OK,那么 -j 选项显示 Julian 日期 -- 一年中从 1 到 365 年的数字日期。所以,1 是 1 月 1 日,32 是 2 月 1 日。命令 **cal -j 2018** 将显示一整年的数字,像这样:
+你可能会问:“什么鬼???”好吧,`-j` 选项显示的是儒略日(Julian date),即用 1 到 365 的数字表示一年中的第几天。所以,1 是 1 月 1 日,32 是 2 月 1 日。命令 `cal -j 2018` 将显示一整年的数字,像这样:
+
```
$ cal -j 2018 | tail -9
- November December
- Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
- 305 306 307 335
-308 309 310 311 312 313 314 336 337 338 339 340 341 342
-315 316 317 318 319 320 321 343 344 345 346 347 348 349
-322 323 324 325 326 327 328 350 351 352 353 354 355 356
-329 330 331 332 333 334 357 358 359 360 361 362 363
- 364 365
-
+ November December
+ Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+ 305 306 307 335
+308 309 310 311 312 313 314 336 337 338 339 340 341 342
+315 316 317 318 319 320 321 343 344 345 346 347 348 349
+322 323 324 325 326 327 328 350 351 352 353 354 355 356
+329 330 331 332 333 334 357 358 359 360 361 362 363
+ 364 365
```
这种显示可能有助于提醒你,自从你做了新年计划之后,你已经有多少天没有采取行动了。
-运行类似的命令,使用 2020 年,你会注意到这是一个闰年:
+运行类似的命令,对于 2020 年,你会注意到这是一个闰年:
+
```
$ cal -j 2020 | tail -9
- November December
- Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
-306 307 308 309 310 311 312 336 337 338 339 340
-313 314 315 316 317 318 319 341 342 343 344 345 346 347
-320 321 322 323 324 325 326 348 349 350 351 352 353 354
-327 328 329 330 331 332 333 355 356 357 358 359 360 361
-334 335 362 363 364 365 366
-
+ November December
+ Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
+306 307 308 309 310 311 312 336 337 338 339 340
+313 314 315 316 317 318 319 341 342 343 344 345 346 347
+320 321 322 323 324 325 326 348 349 350 351 352 353 354
+327 328 329 330 331 332 333 355 356 357 358 359 360 361
+334 335 362 363 364 365 366
```
-### 日历
+### calendar
+
+另一个有趣但可能让人抓狂的命令是 `calendar`,它可以告诉你各种节日和纪念日。这个命令有很多选项,但这里我们假设你想看到的是即将到来的假期和值得注意的日子的列表。`calendar` 的 `-l` 选项允许你选择要查看从今天起多少天内的事件,因此 `0` 表示“仅限今天”。
-另一个有趣但潜在的令人沮丧的命令可以告诉你关于假期的事情,这个命令有很多选项,但我们只是说,你想看到即将到来的假期和值得注意的日历列表。日历的 **-l** 选项允许你选择今天想要查看的天数,因此 0 表示“仅限今天”。
```
$ calendar -l 0
-Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
-Mar 26 David Packard died, 1996; age of 83
-Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
-Mar 26 Independence Day in Bangladesh
-Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
+Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
+Mar 26 David Packard died, 1996; age of 83
+Mar 26 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
+Mar 26 Independence Day in Bangladesh
+Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
Mar 26* Seward's Day in Alaska (last Monday)
-Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
-Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
-Mar 26 Bonne fête aux Lara !
-Mar 26 Aujourd'hui, c'est la St(e) Ludger.
-Mar 26 N'oubliez pas les Larissa !
-Mar 26 Ludwig van Beethoven in Wien gestorben, 1827
-Mar 26 Emánuel
-
+Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
+Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
+Mar 26 Bonne fête aux Lara !
+Mar 26 Aujourd'hui, c'est la St(e) Ludger.
+Mar 26 N'oubliez pas les Larissa !
+Mar 26 Ludwig van Beethoven in Wien gestorben, 1827
+Mar 26 Emánuel
```
-对于我们大多数人来说,这比我们在一天之内可以管理的庆祝活动要多一点。如果你看到类似这样的内容,可以将其归咎于你的 **calendar.all** 文件,该文件告诉系统你希望包含哪些国际日历。当然,你可以通过删除此文件中包含其他文件的一些行来削减此问题。文件看起来像这样:
+对于我们大多数人来说,这些庆祝活动有点多了。如果你看到类似这样的内容,可以将其归咎于你的 `calendar.all` 文件,该文件告诉系统你希望包含哪些国际日历。当然,你可以删除此文件中包含其他文件的一些行,来削减这个清单。文件看起来像这样:
+
```
#include
#include
@@ -194,10 +198,10 @@ Mar 26 Emánuel
#include
#include
#include
-
```
-假设我们只通过移除除上面显示的第一个 #include 行之外的所有行,将我们的显示切换到世界日历。 我们会看到这个:
+假设我们移除了上面显示的第一个 `#include` 行之外的所有行,只保留世界日历。我们会看到这个:
+
```
$ calendar -l 0
Mar 26 Benjamin Thompson born, 1753, Count Rumford; physicist
@@ -208,20 +212,20 @@ Mar 26 Prince Jonah Kuhio Kalanianaole Day in Hawaii
Mar 26* Seward's Day in Alaska (last Monday)
Mar 26 Emerson, Lake, and Palmer record "Pictures at an Exhibition" live, 1971
Mar 26 Ludwig van Beethoven dies in Vienna, Austria, 1827
-
```
-显然,世界日历的特殊日子非常多。但是,像这样的展示可以让你忘记所有重要的“大力神雕像”揭幕日以及它在观察“世界菠菜之都”中的作用。
+显然,世界日历的特殊日子非常多。但是,像这样的展示可以让你不要忘记所有重要的“大力水手雕像”揭幕日以及在庆祝“世界菠菜之都”中它所扮演的角色。
+
+更有用的日历选择可能是将与工作相关的日历放入特殊文件中,并在 `calendar.all` 文件中使用该日历来确定在运行命令时将看到哪些事件。
-更有用的日历选择可能是将与工作相关的日历放入特殊文件中,并在 calendar.all 文件中使用该日历来确定在运行命令时将看到哪些事件。
```
$ cat /usr/share/calendar/calendar.all
/*
* International and national calendar files
*
- * This is the calendar master file. In the standard setup, it is
+ * This is the calendar master file. In the standard setup, it is
* included by /etc/calendar/default, so you can make any system-wide
- * changes there and they will be kept when you upgrade. If you want
+ * changes there and they will be kept when you upgrade. If you want
* to edit this file, copy it into /etc/calendar/calendar.all and
* edit it there.
*
@@ -231,25 +235,24 @@ $ cat /usr/share/calendar/calendar.all
#define _calendar_all_
#include
-#include <==
-
-#endif /bin /boot /dev /etc /home /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var !_calendar_all_ */
+#include <==
+#endif /* !_calendar_all_ */
```
-日历文件的格式非常简单 - mm/dd 格式日期,空格和事件描述。
+日历文件的格式非常简单:`mm/dd` 格式的日期、一个空格,以及事件描述。
+
```
$ cat calendar.work
-03/26 Describe how the cal and calendar commands work
-03/27 Throw a party!
-
+03/26 Describe how the cal and calendar commands work
+03/27 Throw a party!
```
-### 注意事项和 nostalgia
+### 注意事项和怀旧
注意,有关日历的命令可能不适用于所有 Linux 发行版,你可能必须记住自己的“大力水手”雕像。
-如果你想知道,你可以显示一个日历,远远早于 9999 -- 即使是预言性的 [2525][1]。
+如果你想知道的话:你可以显示远至 9999 年的日历,甚至是带有预言色彩的 [2525][1] 年。
在 [Facebook][2] 和 [LinkedIn][3] 上加入网络社区,对那些重要的话题发表评论。
@@ -258,9 +261,9 @@ $ cat calendar.work
via: https://www.networkworld.com/article/3265752/linux/working-with-calendars-on-linux.html
作者:[Sandra Henry-Stocker][a]
-译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From ac444de1200f8985cfa72a452fc0b0abe7ad9f72 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 10:15:56 +0800
Subject: [PATCH 090/220] PUB:20180326 Working with calendars on Linux.md
@MjSeven https://linux.cn/article-9576-1.html
---
.../20180326 Working with calendars on Linux.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180326 Working with calendars on Linux.md (100%)
diff --git a/translated/tech/20180326 Working with calendars on Linux.md b/published/20180326 Working with calendars on Linux.md
similarity index 100%
rename from translated/tech/20180326 Working with calendars on Linux.md
rename to published/20180326 Working with calendars on Linux.md
From 08b323ca9899ed833552c17ab251f0d5af6eff9a Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 25 Apr 2018 03:22:49 +0000
Subject: [PATCH 091/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Understanding=20m?=
=?UTF-8?q?etrics=20and=20monitoring=20with=20Python=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...monitoring with Python | Opensource.com.md | 488 ++++++++++++++++++
1 file changed, 488 insertions(+)
create mode 100644 sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
diff --git a/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md b/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
new file mode 100644
index 0000000000..9d734f09f3
--- /dev/null
+++ b/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
@@ -0,0 +1,488 @@
+# Understanding metrics and monitoring with Python
+
+## Demystify Python application monitoring by learning the meaning of key words and concepts.
+
+
+
+
+My reaction when I first came across the terms counter and gauge and the graphs with colors and numbers labeled "mean" and "upper 90" was one of avoidance. It's like I saw them, but I didn't care because I didn't understand them or how they might be useful. Since my job didn't require me to pay attention to them, they remained ignored.
+
+That was about two years ago. As I progressed in my career, I wanted to understand more about our network applications, and that is when I started learning about metrics.
+
+The three stages of my journey to understanding monitoring (so far) are:
+
+* Stage 1: What? (Looks elsewhere)
+* Stage 2: Without metrics, we are really flying blind.
+* Stage 3: How do we keep from doing metrics wrong?
+
+I am currently in Stage 2 and will share what I have learned so far. I'm moving gradually toward Stage 3, and I will offer some of my resources on that part of the journey at the end of this article.
+
+Let's get started!
+
+## Software prerequisites
+
+All the demos discussed in this article are available on [my GitHub repo][1]. You will need to have docker and docker-compose installed to play with them.
+
+## Why should I monitor?
+
+The top reasons for monitoring are:
+
+* Understanding _normal_ and _abnormal_ system and service behavior
+* Doing capacity planning, scaling up or down
+* Assisting in performance troubleshooting
+* Understanding the effect of software/hardware changes
+* Changing system behavior in response to a measurement
+* Alerting when a system exhibits unexpected behavior
+
+## Metrics and metric types
+
+For our purposes, a **metric** is an _observed_ value of a certain quantity at a given point in _time_. The total number of hits on a blog post, the total number of people attending a talk, the number of times the data was not found in the caching system, the number of logged-in users on your website—all are examples of metrics.
+
+They broadly fall into three categories:
+
+### Counters
+
+Consider your personal blog. You just published a post and want to keep an eye on how many hits it gets over time, a number that can only increase. This is an example of a **counter** metric. Its value starts at 0 and increases during the lifetime of your blog post. Graphically, a counter looks like this:
+
+
+
+A counter metric always increases.
+
+### Gauges
+
+Instead of the total number of hits on your blog post over time, let's say you want to track the number of hits per day or per week. This metric is called a **gauge** and its value can go up or down. Graphically, a gauge looks like this:
+
+
+
+A gauge metric can increase or decrease.
+
+A gauge's value usually has a _ceiling_ and a _floor_ in a certain time window.
+
+### Histograms and timers
+
+A **histogram** (as Prometheus calls it) or a **timer** (as StatsD calls it) is a metric to track _sampled observations_. Unlike a counter or a gauge, the value of a histogram metric doesn't necessarily show an up or down pattern. I know that doesn't make a lot of sense and may not seem different from a gauge. What's different is what you expect to _do_ with histogram data compared to a gauge. Therefore, the monitoring system needs to know that a metric is a histogram type to allow you to do those things.
+
+
+
+A histogram metric can increase or decrease.
+
+## Demo 1: Calculating and reporting metrics
+
+[Demo 1][2] is a basic web application written using the [Flask][3] framework. It demonstrates how we can _calculate_ and _report_ metrics.
+
+The src directory has the application in app.py, with src/helpers/middleware.py containing the following:
+
+```
+from flask import request
+import csv
+import time
+
+
+def start_timer():
+ request.start_time = time.time()
+
+
+def stop_timer(response):
+ # convert this into milliseconds for statsd
+ resp_time = (time.time() - request.start_time)*1000
+ with open('metrics.csv', 'a', newline='') as f:
+ csvwriter = csv.writer(f)
+ csvwriter.writerow([str(int(time.time())), str(resp_time)])
+
+ return response
+
+
+def setup_metrics(app):
+ app.before_request(start_timer)
+ app.after_request(stop_timer)
+```
+
+When setup_metrics() is called from the application, it configures the start_timer() function to be called before a request is processed and the stop_timer() function to be called after a request is processed but before the response has been sent. In the above function, we write the timestamp and the time it took (in milliseconds) for the request to be processed.
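+
+For context, here is roughly how this gets wired into the Flask application (a sketch; the actual app.py in the demo repo may differ in its details):
+
+```
+from flask import Flask
+from helpers.middleware import setup_metrics
+
+app = Flask(__name__)
+setup_metrics(app)  # registers start_timer/stop_timer around every request
+
+@app.route('/test/')
+def test():
+    return 'hello'
+
+if __name__ == '__main__':
+    app.run()
+```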
+
+When we run docker-compose up in the demo1 directory, it starts the web application, then a client container that makes a number of requests to the web application. You will see a src/metrics.csv file that has been created with two columns: timestamp and request_latency.
+
+Looking at this file, we can infer two things:
+
+* There is a lot of data that has been generated
+* No observation of the metric has any characteristic associated with it
+
+Without a characteristic associated with a metric observation, we cannot say which HTTP endpoint this metric was associated with or which node of the application this metric was generated from. Hence, we need to qualify each metric observation with the appropriate metadata.
+
+## Statistics 101
+
+If we think back to high school mathematics, there are a few statistics terms we should all recall, even if vaguely, including mean, median, percentile, and histogram. Let's briefly recap them without judging their usefulness, just like in high school.
+
+### Mean
+
+The **mean**, or the average of a list of numbers, is the sum of the numbers divided by the cardinality of the list. The mean of 3, 2, and 10 is (3+2+10)/3 = 5.
+
+### Median
+
+The **median** is another type of average, but it is calculated differently; it is the middle number in a list of numbers ordered from smallest to largest (or vice versa). In our list above (2, 3, 10), the median is 3. The calculation depends on the number of items in the list: for an odd count, the median is the middle value; for an even count, it is the mean of the two middle values.
+
+### Percentile
+
+The **percentile** is the value below which a given percentage (k) of the numbers lie. In some sense, it gives us an _idea_ of how a value is doing relative to the rest of our data. For example, the 95th percentile of the above list is 9.29999. The percentile measure varies from 0 to 100 (non-inclusive). The _zeroth_ percentile is the minimum score in a set of numbers. Some of you may recall that the median is the 50th percentile, which turns out to be 3.
+
+Some monitoring systems refer to the percentile measure as upper_X where _X_ is the percentile; _upper 90_ refers to the value at the 90th percentile.
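+
+As a quick sanity check, we can reproduce these numbers with NumPy (a small aside; the demos use NumPy and pandas later as well):
+
+```
+import numpy as np
+
+data = [3, 2, 10]
+print(np.mean(data))            # 5.0
+print(np.median(data))          # 3.0
+print(np.percentile(data, 95))  # ~9.3, the 95th percentile quoted above
+```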
+
+### Quantile
+
+The **q-quantile** is the measure that ranks _qN_-th in a sorted set of _N_ numbers. The value of **q** ranges between 0 and 1 (both inclusive). When **q** is 0.5, the value is the median. The relationship between quantiles and percentiles is that the measure at the **q** quantile is equivalent to the measure at the **100_q_** percentile.
+
+### Histogram
+
+The metric **histogram**, which we learned about earlier, is an _implementation detail_ of monitoring systems. In statistics, a histogram is a graph that groups data into _buckets_. Let's consider a different, contrived example: the ages of people reading your blog. If you got a handful of this data and wanted a rough idea of your readers' ages by group, plotting a histogram would show you a graph like this:
+
+
+
+### Cumulative histogram
+
+A **cumulative histogram** is a histogram where each bucket's count includes the counts of all previous buckets, hence the name _cumulative_. A cumulative histogram for the above dataset would look like this:
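+
+Here is a minimal NumPy sketch of bucketing and accumulating counts, with made-up ages:
+
+```
+import numpy as np
+
+ages = [21, 25, 28, 31, 33, 35, 40, 48, 52, 60]
+counts, edges = np.histogram(ages, bins=[20, 30, 40, 50, 60, 70])
+print(counts)             # per-bucket counts
+print(np.cumsum(counts))  # cumulative counts
+```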
+
+
+
+### Why do we need statistics?
+
+In Demo 1 above, we observed that a lot of data is generated when we report metrics. We need statistics when working with metrics because there are far too many of them to inspect individually. We don't care about individual values, but rather about overall behavior. We expect the behavior the values exhibit to be a proxy for the behavior of the system under observation.
+
+## Demo 2: Adding characteristics to metrics
+
+In our Demo 1 application above, when we calculate and report a request latency, it refers to a specific request uniquely identified by a few _characteristics_. Some of these are:
+
+* The HTTP endpoint
+* The HTTP method
+* The identifier of the host/node where it's running
+
+If we attach these characteristics to a metric observation, we have more context around each metric. Let's explore adding characteristics to our metrics in [Demo 2][4].
+
+The src/helpers/middleware.py file now writes multiple columns to the CSV file when writing metrics:
+
+```
+import csv
+import random
+import time
+
+from flask import request
+
+node_ids = ['10.0.1.1', '10.1.3.4']
+
+
+def start_timer():
+    request.start_time = time.time()
+
+
+def stop_timer(response):
+    # convert the elapsed time into milliseconds
+    resp_time = (time.time() - request.start_time)*1000
+    # pick a node ID at random, since this demo runs on a single host
+    node_id = random.choice(node_ids)
+    with open('metrics.csv', 'a', newline='') as f:
+        csvwriter = csv.writer(f)
+        csvwriter.writerow([
+            str(int(time.time())), 'webapp1', node_id,
+            request.endpoint, request.method, str(response.status_code),
+            str(resp_time)
+        ])
+
+    return response
+```
+
+Since this is a demo, I have taken the liberty of using random IPs as the node IDs when reporting the metric. When we run docker-compose up in the demo2 directory, it will result in a CSV file with multiple columns.
+
+### Analyzing metrics with pandas
+
+We'll now analyze this CSV file with [pandas][5]. Running docker-compose up will print a URL that we will use to open a [Jupyter][6] session. Once we upload the Analysis.ipynb notebook into the session, we can read the CSV file into a pandas DataFrame:
+
+```
+import pandas as pd
+metrics = pd.read_csv('/data/metrics.csv', index_col=0)
+```
+
+The index_col=0 argument specifies that we want to use the first column, the timestamp, as the index.
+
+Since each characteristic we add is a column in the DataFrame, we can perform grouping and aggregation based on these columns:
+
+```
+import numpy as np
+metrics.groupby(['node_id', 'http_status']).latency.aggregate(np.percentile, 99.999)
+```
+
+Please refer to the Jupyter notebook for more example analysis on the data.
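+
+For instance, two more aggregations along the same lines (using the same column names as the example above):
+
+```
+# median and tail latencies across all observations
+metrics.latency.quantile([0.5, 0.9, 0.99])
+
+# mean latency per node
+metrics.groupby('node_id').latency.mean()
+```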
+
+## What should I monitor?
+
+A software system has a number of variables whose values change during its lifetime. The software is running in some sort of an operating system, and operating system variables change as well. In my opinion, the more data you have, the better it is when something goes wrong.
+
+Key operating system metrics I recommend monitoring are:
+
+* CPU usage
+* System memory usage
+* File descriptor usage
+* Disk usage
+
+Other key metrics to monitor will vary depending on your software application.
+
+### Network applications
+
+If your software is a network application that listens to and serves client requests, the key metrics to measure are:
+
+* Number of requests coming in (counter)
+* Unhandled errors (counter)
+* Request latency (histogram/timer)
+* Queued time, if there is a queue in your application (histogram/timer)
+* Queue size, if there is a queue in your application (gauge)
+* Worker processes/threads usage (gauge)
+
+If your network application makes requests to other services in the context of fulfilling a client request, it should have metrics to record the behavior of communications with those services. Key metrics to monitor include number of requests, request latency, and response status.
+
+### HTTP web application backends
+
+HTTP applications should monitor all the above. In addition, they should keep granular data about the count of non-200 HTTP statuses, grouped by the individual status codes. If your web application has user signup and login functionality, it should have metrics for those as well.
+
+### Long-running processes
+
+Long-running processes such as RabbitMQ consumers or task-queue workers, although not network servers, work on the model of picking up a task and processing it. Hence, we should monitor the number of tasks processed and the processing latency for those processes.
+
+No matter the application type, each metric should have appropriate **metadata** associated with it.
+
+## Integrating monitoring in a Python application
+
+There are two components involved in integrating monitoring into Python applications:
+
+* Updating your application to calculate and report metrics
+* Setting up a monitoring infrastructure to house the application's metrics and allow queries to be made against them
+
+The basic idea of recording and reporting a metric is:
+
+```
+import time
+
+request_count = 0  # a counter metric
+
+
+def work():
+    global request_count
+    request_count += 1
+    # report the updated counter to the monitoring system
+    start_time = time.time()
+
+    # < do the work >
+
+    # calculate and report latency
+    work_latency = time.time() - start_time
+    # report work_latency to the monitoring system
+```
+
+Considering the above pattern, we often take advantage of _decorators_, _context managers_, and _middleware_ (for network applications) to calculate and report metrics. In Demo 1 and Demo 2, we used middleware in a Flask application.
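+
+As a minimal sketch of the decorator approach (report_latency is a hypothetical helper, and the print stands in for a real metric-reporting call):
+
+```
+import time
+from functools import wraps
+
+
+def report_latency(func):
+    """Measure how long func takes and report it as a metric."""
+    @wraps(func)
+    def wrapper(*args, **kwargs):
+        start_time = time.time()
+        try:
+            return func(*args, **kwargs)
+        finally:
+            latency = time.time() - start_time
+            print(f'{func.__name__} took {latency:.4f}s')  # report to your monitoring system here
+    return wrapper
+
+
+@report_latency
+def work():
+    time.sleep(0.1)
+```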
+
+### Pull and push models for metric reporting
+
+Essentially, there are two patterns for reporting metrics from a Python application. In the _pull_ model, the monitoring system "scrapes" the application at a predefined HTTP endpoint. In the _push_ model, the application sends the data to the monitoring system.
+
+
+
+An example of a monitoring system working in the _pull_ model is [Prometheus][7]. [StatsD][8] is an example of a monitoring system where the application _pushes_ the metrics to the system.
+
+### Integrating StatsD
+
+To integrate StatsD into a Python application, we would use the [StatsD Python client][9], then update our metric-reporting code to push data into StatsD using the appropriate library calls.
+
+First, we need to create a client instance:
+
+```
+import statsd
+
+statsd_client = statsd.StatsClient(host='statsd', port=8125, prefix='webapp1')
+```
+
+The prefix keyword argument will add the specified prefix to all the metrics reported via this client.
+
+Once we have the client, we can report a value for a timer using:
+
+```
+statsd_client.timing(key, resp_time)
+```
+
+To increment a counter:
+
+```
+statsd_client.incr(key)
+```
+
+To associate metadata with a metric, a key is defined as metadata1.metadata2.metric, where each metadataX is a field that allows aggregation and grouping.
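+
+For example, continuing with the statsd_client created above, a hypothetical key for the request latency of the index endpoint on a given node might be built like this (dots inside a field, such as an IP address, are replaced, because StatsD treats dots as separators):
+
+```
+node_id = '10.0.1.1'.replace('.', '_')  # '10_0_1_1'; raw dots would split the key
+key = '.'.join([node_id, 'index', 'request_latency'])
+statsd_client.timing(key, resp_time)  # resp_time computed as in the earlier demos
+```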
+
+The demo application [StatsD][10] is a complete example of integrating a Python Flask application with StatsD.
+
+### Integrating Prometheus
+
+To use the Prometheus monitoring system, we will use the [Prometheus Python client][11]. We will first create objects of the appropriate metric class:
+
+```
+from prometheus_client import Histogram
+
+REQUEST_LATENCY = Histogram('request_latency_seconds', 'Request latency',
+ ['app_name', 'endpoint']
+)
+```
+
+The third argument in the above statement is the list of labels associated with the metric. These labels are what define the metadata associated with a single metric value.
+
+To record a specific metric observation:
+
+```
+REQUEST_LATENCY.labels('webapp', request.path).observe(resp_time)
+```
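+
+A counter works the same way. As a sketch (this mirrors the histogram above; the metric and label names are assumptions, not taken from the demo):
+
+```
+from prometheus_client import Counter
+
+REQUEST_COUNT = Counter('request_count', 'Total request count',
+    ['app_name', 'endpoint', 'http_status']
+)
+REQUEST_COUNT.labels('webapp', request.path, str(response.status_code)).inc()
+```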
+
+The next step is to define an HTTP endpoint in our application that Prometheus can scrape. This is usually an endpoint called /metrics:
+
+```
+from flask import Response
+from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
+
+@app.route('/metrics')
+def metrics():
+    return Response(generate_latest(), mimetype=CONTENT_TYPE_LATEST)
+```
+
+The demo application [Prometheus][12] is a complete example of integrating a Python Flask application with Prometheus.
+
+### Which is better: StatsD or Prometheus?
+
+The natural next question is: Should I use StatsD or Prometheus? I have written a few articles on this topic, and you may find them useful:
+
+* [Your options for monitoring multi-process Python applications with Prometheus][13]
+* [Monitoring your synchronous Python web applications using Prometheus][14]
+* [Monitoring your asynchronous Python web applications using Prometheus][15]
+
+## Ways to use metrics
+
+We've learned a bit about why we want to set up monitoring in our applications; now let's look deeper into two specific uses of metrics: alerting and autoscaling.
+
+### Using metrics for alerting
+
+A key use of metrics is creating alerts. For example, you may want to send an email or pager notification to relevant people if the number of HTTP 500s over the past five minutes increases. What we use for setting up alerts depends on our monitoring setup. For Prometheus we can use [Alertmanager][16] and for StatsD, we use [Nagios][17].
+
+### Using metrics for autoscaling
+
+Not only can metrics allow us to understand if our current infrastructure is over- or under-provisioned, they can also help implement autoscaling policies in a cloud infrastructure. For example, if worker process usage on our servers routinely hits 90% over the past five minutes, we may need to horizontally scale. How we would implement scaling depends on the cloud infrastructure. AWS Auto Scaling, by default, allows scaling policies based on system CPU usage, network traffic, and other factors. However, to use application metrics for scaling up or down, we must publish [custom CloudWatch metrics][18].
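+
+As a rough sketch of publishing such a custom metric with boto3 (assuming AWS credentials are configured; the namespace and metric name here are made up):
+
+```
+import boto3
+
+cloudwatch = boto3.client('cloudwatch')
+cloudwatch.put_metric_data(
+    Namespace='webapp1',
+    MetricData=[{
+        'MetricName': 'worker_usage_percent',
+        'Value': 91.5,
+        'Unit': 'Percent',
+    }],
+)
+```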
+
+## Application monitoring in a multi-service architecture
+
+When we go beyond a single application architecture, such that a client request can trigger calls to multiple services before a response is sent back, we need more from our metrics. We need a unified view of latency metrics so we can see how much time each service took to respond to the request. This is enabled with [distributed tracing][19].
+
+You can see an example of distributed tracing in Python in my blog post [Introducing distributed tracing in your Python application via Zipkin][20].
+
+## Points to remember
+
+In summary, make sure to keep the following things in mind:
+
+* Understand what a metric type means in your monitoring system
+* Know in what unit of measurement the monitoring system wants your data
+* Monitor the most critical components of your application
+* Monitor the behavior of your application in its most critical stages
+
+The above assumes you don't have to manage your monitoring systems. If that's part of your job, you have a lot more to think about!
+
+## Other resources
+
+Following are some of the resources I found very useful along my monitoring education journey:
+
+### General
+
+* [Monitoring distributed systems][21]
+* [Observability and monitoring best practices][22]
+* [Who wants seconds?][23]
+
+### StatsD/Graphite
+
+* [StatsD metric types][24]
+
+### Prometheus
+
+* [Prometheus metric types][25]
+* [How does a Prometheus gauge work?][26]
+* [Why are Prometheus histograms cumulative?][27]
+* [Monitoring batch jobs in Python][28]
+* [Prometheus: Monitoring at SoundCloud][29]
+
+## Avoiding mistakes (i.e., Stage 3 learnings)
+
+As we learn the basics of monitoring, it's important to keep an eye on the mistakes we don't want to make. Here are some insightful resources I have come across:
+
+* [How not to measure latency][30]
+* [Histograms with Prometheus: A tale of woe][31]
+* [Why averages suck and percentiles are great][32]
+* [Everything you know about latency is wrong][33]
+* [Who moved my 99th percentile latency?][34]
+* [Logs and metrics and graphs][35]
+* [HdrHistogram: A better latency capture method][36]
+
+---
+
+To learn more, attend Amit Saha's talk, [Counter, gauge, upper 90—Oh my!][37], at [PyCon Cleveland 2018][38].
+
+## Topics
+
+[Python][39]
+
+[PyCon][40]
+
+[Programming][41]
+
+## About the author
+
+[][42]
+
+Amit Saha \- I am a software engineer interested in infrastructure, monitoring, and tooling. I am the author of "Doing Math with Python" and creator and maintainer of Fedora Scientific Spin.
+
+[More about me][43]
+
+* [Learn how you can contribute][44]
+
+---
+
+via: [https://opensource.com/article/18/4/metrics-monitoring-and-python][45]
+
+作者: [undefined][46] 选题者: [@lujun9972][47] 译者: [译者ID][48] 校对: [校对者ID][49]
+
+本文由 [LCTT][50] 原创编译,[Linux中国][51] 荣誉推出
+
+[1]: https://github.com/amitsaha/python-monitoring-talk
+[2]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo1
+[3]: http://flask.pocoo.org/
+[4]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo2
+[5]: https://pandas.pydata.org/
+[6]: http://jupyter.org/
+[7]: https://prometheus.io/
+[8]: https://github.com/etsy/statsd
+[9]: https://pypi.python.org/pypi/statsd
+[10]: https://github.com/amitsaha/python-monitoring-talk/tree/master/statsd
+[11]: https://pypi.python.org/pypi/prometheus_client
+[12]: https://github.com/amitsaha/python-monitoring-talk/tree/master/prometheus
+[13]: http://echorand.me/your-options-for-monitoring-multi-process-python-applications-with-prometheus.html
+[14]: https://blog.codeship.com/monitoring-your-synchronous-python-web-applications-using-prometheus/
+[15]: https://blog.codeship.com/monitoring-your-asynchronous-python-web-applications-using-prometheus/
+[16]: https://github.com/prometheus/alertmanager
+[17]: https://www.nagios.org/about/overview/
+[18]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
+[19]: http://opentracing.io/documentation/
+[20]: http://echorand.me/introducing-distributed-tracing-in-your-python-application-via-zipkin.html
+[21]: https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html
+[22]: http://www.integralist.co.uk/posts/monitoring-best-practices/?imm_mid=0fbebf&cmp=em-webops-na-na-newsltr_20180309
+[23]: https://www.robustperception.io/who-wants-seconds/
+[24]: https://github.com/etsy/statsd/blob/master/docs/metric_types.md
+[25]: https://prometheus.io/docs/concepts/metric_types/
+[26]: https://www.robustperception.io/how-does-a-prometheus-gauge-work/
+[27]: https://www.robustperception.io/why-are-prometheus-histograms-cumulative/
+[28]: https://www.robustperception.io/monitoring-batch-jobs-in-python/
+[29]: https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud
+[30]: https://www.youtube.com/watch?v=lJ8ydIuPFeU&feature=youtu.be
+[31]: http://linuxczar.net/blog/2017/06/15/prometheus-histogram-2/
+[32]: https://www.dynatrace.com/news/blog/why-averages-suck-and-percentiles-are-great/
+[33]: https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
+[34]: https://engineering.linkedin.com/performance/who-moved-my-99th-percentile-latency
+[35]: https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/
+[36]: http://psy-lob-saw.blogspot.com.au/2015/02/hdrhistogram-better-latency-capture.html
+[37]: https://us.pycon.org/2018/schedule/presentation/133/
+[38]: https://us.pycon.org/2018/
+[39]: https://opensource.com/tags/python
+[40]: https://opensource.com/tags/pycon
+[41]: https://opensource.com/tags/programming
+[42]: https://opensource.com/users/amitsaha
+[43]: https://opensource.com/users/amitsaha
+[44]: https://opensource.com/participate
+[45]: https://opensource.com/article/18/4/metrics-monitoring-and-python
+[46]: undefined
+[47]: https://github.com/lujun9972
+[48]: https://github.com/译者ID
+[49]: https://github.com/校对者ID
+[50]: https://github.com/LCTT/TranslateProject
+[51]: https://linux.cn/
\ No newline at end of file
From 06feb8485fe65b7ee27af7416bc3a2247a6cbc9b Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 25 Apr 2018 03:31:57 +0000
Subject: [PATCH 092/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20will=20the?=
=?UTF-8?q?=20GDPR=20impact=20open=20source=20communities=3F=20|=20Opensou?=
=?UTF-8?q?rce....=20How=20will=20the=20GDPR=20impact=20open=20source=20co?=
=?UTF-8?q?mmunities=3F=20|=20Opensource.com.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ll the GDPR impact open source communities | 111 ++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 sources/talk/20180425 How will the GDPR impact open source communities
diff --git a/sources/talk/20180425 How will the GDPR impact open source communities b/sources/talk/20180425 How will the GDPR impact open source communities
new file mode 100644
index 0000000000..720df5bdb4
--- /dev/null
+++ b/sources/talk/20180425 How will the GDPR impact open source communities
@@ -0,0 +1,111 @@
+# How will the GDPR impact open source communities?
+
+
+
+Image by: opensource.com
+
+
+
+On May 25, 2018 the [General Data Protection Regulation][1] will go into effect. This new regulation by the European Union will impact how organizations need to protect personal data on a global scale. This could include open source projects, including communities.
+
+### GDPR details
+
+The General Data Protection Regulation (GDPR) was approved by the EU Parliament on April 14, 2016, and will be enforced beginning May 25, 2018. The GDPR replaces the Data Protection Directive 95/46/EC that was designed "to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy."
+
+The aim of the GDPR is to protect the personal data of individuals in the EU in an increasingly data-driven world.
+
+### To whom does it apply
+
+One of the biggest changes that comes with the GDPR is an increased territorial scope. The GDPR applies to all organizations processing the personal data of data subjects residing in the European Union, regardless of the organization's location.
+
+While most of the online articles covering the GDPR mention companies selling goods or services, we can also look at this territorial scope with open source projects in mind. There are a few variations, such as a for-profit software company running a community, or a non-profit organization such as an open source software project and its community. Once these communities are run on a global scale, it is very likely that EU-based persons are taking part in them.
+
+When such a global community has an online presence, using platforms such as a website, forum, or issue tracker, it is very likely that it is processing personal data of these EU persons, such as their names, e-mail addresses, and possibly more. These activities will trigger a need to comply with the GDPR.
+
+### GDPR changes and its impact
+
+The GDPR brings [many changes][2], strengthening data protection and privacy of EU persons, compared to the previous Directive. Some of these changes have a direct impact on a community as described earlier. Let's look at some of these changes.
+
+#### Consent
+
+Let's assume that the community in question uses a forum for its members and also has one or more forms on its website for registration purposes. Under the GDPR, you will no longer be able to rely on one lengthy and illegible privacy policy and terms and conditions. For each specific purpose, such as registering on the forum or submitting one of those forms, you will need to obtain explicit consent. This consent must be “freely given, specific, informed, and unambiguous.”
+
+In the case of such a form, you could have a checkbox, which should not be pre-checked, with clear text indicating the purposes for which the personal data is used, preferably linking to an ‘addendum’ of your existing privacy policy and terms of use.
+
+#### Right to access
+
+EU persons get expanded rights under the GDPR. One of them is the right to ask an organization whether, where, and which of their personal data is processed. Upon request, they should also be provided with a copy of this data, free of charge, and in an electronic format if the data subject (e.g., an EU citizen) asks for it.
+
+#### Right to be forgotten
+
+Another right EU citizens get through the GDPR is the "right to be forgotten," also known as data erasure. This means that, subject to certain limitations, the organization will have to erase that person's data and possibly even stop any further processing, including processing by the organization's third parties.
+
+The above three changes imply that your platform(s) software will need to comply with certain aspects of the GDPR as well. It will need to have specific features such as obtaining and storing consent, extracting data and providing a copy in electronic format to a data subject, and finally the means to erase specific data about a data subject.
+
+#### Breach notification
+
+Under the GDPR, a data breach occurs whenever personal data is taken or stolen without the authorization of the data subject. Once discovered, you should notify your affected community members within 72 hours unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. This breach notification is mandatory under the GDPR.
+
+#### Register
+
+As an organization, you will become responsible for keeping a register with detailed descriptions of all the procedures and purposes for which you process personal data. This register will act as proof of the organization's compliance with the GDPR's requirement to maintain a record of personal data processing activities, and it will be used for audit purposes.
+
+#### Fines
+
+Organizations that do not comply with the GDPR risk fines of up to 4% of annual global turnover or €20 million (whichever is greater). According to the GDPR, "this is the maximum fine that can be imposed for the most serious infringements, e.g., not having sufficient customer consent to process data or violating the core of Privacy by Design concepts."
+
+### Final words
+
+My article should not be used as legal advice or a definitive guide to GDPR compliance. I have covered some of the parts of the regulation that could impact an open source community, to raise awareness about the GDPR and its effects. Obviously, the regulation contains much more that you will need to know about and possibly comply with.
+
+As you can probably conclude yourself, you will have to take steps when you are running a global community, to comply with the GDPR. If you already apply robust security standards in your community, such as ISO 27001, NIST or PCI DSS, you should have a head start.
+
+You can find more information about the GDPR at the following sites/resources:
+
+* [GDPR Portal][3] (by the EU)
+
+* [Official Regulation (EU) 2016/679][4] (GDPR, including translations)
+
+* [What is GDPR? 8 things leaders should know][5] (The Enterprisers Project)
+
+* [How to avoid a GDPR compliance audit: Best practices][6] (The Enterprisers Project)
+
+
+### About the author
+
+[][7]
+
+Robin Muilwijk \- Robin Muilwijk is Advisor Internet and e-Government. He also serves as a community moderator for Opensource.com, an online publication by Red Hat, and as ambassador for The Open Organization. Robin is also Chair of the eZ Community Board, and Community Manager at [eZ Systems][8]. Robin writes and is active on social media to promote and advocate for open source in our businesses and lives.Follow him on Twitter... [more about Robin Muilwijk][9]
+
+[More about me][10]
+
+* [Learn how you can contribute][11]
+
+---
+
+via: [https://opensource.com/article/18/4/gdpr-impact][12]
+
+作者: [Robin Muilwijk][13] 选题者: [@lujun9972][14] 译者: [译者ID][15] 校对: [校对者ID][16]
+
+本文由 [LCTT][17] 原创编译,[Linux中国][18] 荣誉推出
+
+[1]: https://www.eugdpr.org/eugdpr.org.html
+[2]: https://www.eugdpr.org/key-changes.html
+[3]: https://www.eugdpr.org/eugdpr.org.html
+[4]: http://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1520531479111&uri=CELEX:32016R0679
+[5]: https://enterprisersproject.com/article/2018/4/what-gdpr-8-things-leaders-should-know
+[6]: https://enterprisersproject.com/article/2017/9/avoiding-gdpr-compliance-audit-best-practices
+[7]: https://opensource.com/users/robinmuilwijk
+[8]: http://ez.no
+[9]: https://opensource.com/users/robinmuilwijk
+[10]: https://opensource.com/users/robinmuilwijk
+[11]: https://opensource.com/participate
+[12]: https://opensource.com/article/18/4/gdpr-impact
+[13]: https://opensource.com/users/robinmuilwijk
+[14]: https://github.com/lujun9972
+[15]: https://github.com/译者ID
+[16]: https://github.com/校对者ID
+[17]: https://github.com/LCTT/TranslateProject
+[18]: https://linux.cn/
\ No newline at end of file
From 76c8ba9d44cf20da39fd7f879ec177380116fe8f Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 25 Apr 2018 03:35:41 +0000
Subject: [PATCH 093/220] Update: 20180425 Understanding metrics and monitoring
with Python |... Understanding metrics and monitoring with Python |
Opensource.com.md
---
...monitoring with Python | Opensource.com.md | 200 +++++++++---------
1 file changed, 100 insertions(+), 100 deletions(-)
diff --git a/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md b/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
index 9d734f09f3..7b401b518b 100644
--- a/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
+++ b/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
@@ -1,7 +1,5 @@
# Understanding metrics and monitoring with Python
-## Demystify Python application monitoring by learning the meaning of key words and concepts.
-

Image by :
@@ -28,7 +26,15 @@ Let's get started!
## Software prerequisites
-All the demos discussed in this article are available on [my GitHub repo][1]. You will need to have docker and docker-compose installed to play with them.
+More Python Resources
+
+* [What is Python?][1]
+* [Top Python IDEs][2]
+* [Top Python GUI frameworks][3]
+* [Latest Python content][4]
+* [More developer resources][5]
+
+All the demos discussed in this article are available on [my GitHub repo][6]. You will need to have docker and docker-compose installed to play with them.
## Why should I monitor?
@@ -75,7 +81,7 @@ A histogram metric can increase or decrease.
## Demo 1: Calculating and reporting metrics
-[Demo 1][2] is a basic web application written using the [Flask][3] framework. It demonstrates how we can _calculate_ and _report_ metrics.
+[Demo 1][7] is a basic web application written using the [Flask][8] framework. It demonstrates how we can _calculate_ and _report_ metrics.
The src directory has the application in app.py with the src/helpers/middleware.py containing the following:
@@ -161,7 +167,7 @@ In our Demo 1 application above, when we calculate and report a request latency,
* The HTTP method
* The identifier of the host/node where it's running
-If we attach these characteristics to a metric observation, we have more context around each metric. Let's explore adding characteristics to our metrics in [Demo 2][4].
+If we attach these characteristics to a metric observation, we have more context around each metric. Let's explore adding characteristics to our metrics in [Demo 2][9].
The src/helpers/middleware.py file now writes multiple columns to the CSV file when writing metrics:
@@ -192,7 +198,7 @@ Since this is a demo, I have taken the liberty of reporting random IPs as the no
### Analyzing metrics with pandas
-We'll now analyze this CSV file with [pandas][5]. Running docker-compose up will print a URL that we will use to open a [Jupyter][6] session. Once we upload the Analysis.ipynb notebook into the session, we can read the CSV file into a pandas DataFrame:
+We'll now analyze this CSV file with [pandas][10]. Running docker-compose up will print a URL that we will use to open a [Jupyter][11] session. Once we upload the Analysis.ipynb notebook into the session, we can read the CSV file into a pandas DataFrame:
```
import pandas as pd
@@ -276,11 +282,11 @@ Essentially, there are two patterns for reporting metrics from a Python applicat

-An example of a monitoring system working in the _pull_ model is [Prometheus][7]. [StatsD][8] is an example of a monitoring system where the application _pushes_ the metrics to the system.
+An example of a monitoring system working in the _pull_ model is [Prometheus][12]. [StatsD][13] is an example of a monitoring system where the application _pushes_ the metrics to the system.
### Integrating StatsD
-To integrate StatsD into a Python application, we would use the [StatsD Python client][9], then update our metric-reporting code to push data into StatsD using the appropriate library calls.
+To integrate StatsD into a Python application, we would use the [StatsD Python client][14], then update our metric-reporting code to push data into StatsD using the appropriate library calls.
First, we need to create a client instance:
@@ -304,11 +310,11 @@ statsd.incr(key)
To associate metadata with a metric, a key is defined as metadata1.metadata2.metric, where each metadataX is a field that allows aggregation and grouping.
-The demo application [StatsD][10] is a complete example of integrating a Python Flask application with statsd.
+The demo application [StatsD][15] is a complete example of integrating a Python Flask application with statsd.
### Integrating Prometheus
-To use the Prometheus monitoring system, we will use the [Prometheus Python client][11]. We will first create objects of the appropriate metric class:
+To use the Prometheus monitoring system, we will use the [Prometheus Python client][16]. We will first create objects of the appropriate metric class:
```
REQUEST_LATENCY = Histogram('request_latency_seconds', 'Request latency',
@@ -332,15 +338,15 @@ def metrics():
return Response(prometheus_client.generate_latest(), mimetype=CONTENT_TYPE_LATEST)
```
-The demo application [Prometheus][12] is a complete example of integrating a Python Flask application with prometheus.
+The demo application [Prometheus][17] is a complete example of integrating a Python Flask application with prometheus.
### Which is better: StatsD or Prometheus?
The natural next question is: Should I use StatsD or Prometheus? I have written a few articles on this topic, and you may find them useful:
-* [Your options for monitoring multi-process Python applications with Prometheus][13]
-* [Monitoring your synchronous Python web applications using Prometheus][14]
-* [Monitoring your asynchronous Python web applications using Prometheus][15]
+* [Your options for monitoring multi-process Python applications with Prometheus][18]
+* [Monitoring your synchronous Python web applications using Prometheus][19]
+* [Monitoring your asynchronous Python web applications using Prometheus][20]
## Ways to use metrics
@@ -348,17 +354,17 @@ We've learned a bit about why we want to set up monitoring in our applications,
### Using metrics for alerting
-A key use of metrics is creating alerts. For example, you may want to send an email or pager notification to relevant people if the number of HTTP 500s over the past five minutes increases. What we use for setting up alerts depends on our monitoring setup. For Prometheus we can use [Alertmanager][16] and for StatsD, we use [Nagios][17].
+A key use of metrics is creating alerts. For example, you may want to send an email or pager notification to relevant people if the number of HTTP 500s over the past five minutes increases. What we use for setting up alerts depends on our monitoring setup. For Prometheus we can use [Alertmanager][21] and for StatsD, we use [Nagios][22].
### Using metrics for autoscaling
-Not only can metrics allow us to understand if our current infrastructure is over- or under-provisioned, they can also help implement autoscaling policies in a cloud infrastructure. For example, if worker process usage on our servers routinely hits 90% over the past five minutes, we may need to horizontally scale. How we would implement scaling depends on the cloud infrastructure. AWS Auto Scaling, by default, allows scaling policies based on system CPU usage, network traffic, and other factors. However, to use application metrics for scaling up or down, we must publish [custom CloudWatch metrics][18].
+Not only can metrics allow us to understand if our current infrastructure is over- or under-provisioned, they can also help implement autoscaling policies in a cloud infrastructure. For example, if worker process usage on our servers routinely hits 90% over the past five minutes, we may need to horizontally scale. How we would implement scaling depends on the cloud infrastructure. AWS Auto Scaling, by default, allows scaling policies based on system CPU usage, network traffic, and other factors. However, to use application metrics for scaling up or down, we must publish [custom CloudWatch metrics][23].
## Application monitoring in a multi-service architecture
-When we go beyond a single application architecture, such that a client request can trigger calls to multiple services before a response is sent back, we need more from our metrics. We need a unified view of latency metrics so we can see how much time each service took to respond to the request. This is enabled with [distributed tracing][19].
+When we go beyond a single application architecture, such that a client request can trigger calls to multiple services before a response is sent back, we need more from our metrics. We need a unified view of latency metrics so we can see how much time each service took to respond to the request. This is enabled with [distributed tracing][24].
-You can see an example of distributed tracing in Python in my blog post [Introducing distributed tracing in your Python application via Zipkin][20].
+You can see an example of distributed tracing in Python in my blog post [Introducing distributed tracing in your Python application via Zipkin][25].
## Points to remember
@@ -377,112 +383,106 @@ Following are some of the resources I found very useful along my monitoring educ
### General
-* [Monitoring distributed systems][21]
-* [Observability and monitoring best practices][22]
-* [Who wants seconds?][23]
+* [Monitoring distributed systems][26]
+* [Observability and monitoring best practices][27]
+* [Who wants seconds?][28]
### StatsD/Graphite
-* [StatsD metric types][24]
+* [StatsD metric types][29]
### Prometheus
-* [Prometheus metric types][25]
-* [How does a Prometheus gauge work?][26]
-* [Why are Prometheus histograms cumulative?][27]
-* [Monitoring batch jobs in Python][28]
-* [Prometheus: Monitoring at SoundCloud][29]
+* [Prometheus metric types][30]
+* [How does a Prometheus gauge work?][31]
+* [Why are Prometheus histograms cumulative?][32]
+* [Monitoring batch jobs in Python][33]
+* [Prometheus: Monitoring at SoundCloud][34]
## Avoiding mistakes (i.e., Stage 3 learnings)
As we learn the basics of monitoring, it's important to keep an eye on the mistakes we don't want to make. Here are some insightful resources I have come across:
-* [How not to measure latency][30]
-* [Histograms with Prometheus: A tale of woe][31]
-* [Why averages suck and percentiles are great][32]
-* [Everything you know about latency is wrong][33]
-* [Who moved my 99th percentile latency?][34]
-* [Logs and metrics and graphs][35]
-* [HdrHistogram: A better latency capture method][36]
+* [How not to measure latency][35]
+* [Histograms with Prometheus: A tale of woe][36]
+* [Why averages suck and percentiles are great][37]
+* [Everything you know about latency is wrong][38]
+* [Who moved my 99th percentile latency?][39]
+* [Logs and metrics and graphs][40]
+* [HdrHistogram: A better latency capture method][41]
---
-To learn more, attend Amit Saha's talk, [Counter, gauge, upper 90—Oh my!][37], at [PyCon Cleveland 2018][38].
-
-## Topics
-
-[Python][39]
-
-[PyCon][40]
-
-[Programming][41]
+To learn more, attend Amit Saha's talk, [Counter, gauge, upper 90—Oh my!][42], at [PyCon Cleveland 2018][43].
## About the author
-[][42]
+[][44]
Amit Saha \- I am a software engineer interested in infrastructure, monitoring and tooling. I am the author of "Doing Math with Python" and creator and the maintainer of Fedora Scientific Spin.
-[More about me][43]
+[More about me][45]
-* [Learn how you can contribute][44]
+* [Learn how you can contribute][46]
---
-via: [https://opensource.com/article/18/4/metrics-monitoring-and-python][45]
+via: [https://opensource.com/article/18/4/metrics-monitoring-and-python][47]
-作者: [undefined][46] 选题者: [@lujun9972][47] 译者: [译者ID][48] 校对: [校对者ID][49]
+作者: [Amit Saha][48] 选题者: [@lujun9972][49] 译者: [译者ID][50] 校对: [校对者ID][51]
-本文由 [LCTT][50] 原创编译,[Linux中国][51] 荣誉推出
+本文由 [LCTT][52] 原创编译,[Linux中国][53] 荣誉推出
-[1]: https://github.com/amitsaha/python-monitoring-talk
-[2]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo1
-[3]: http://flask.pocoo.org/
-[4]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo2
-[5]: https://pandas.pydata.org/
-[6]: http://jupyter.org/
-[7]: https://prometheus.io/
-[8]: https://github.com/etsy/statsd
-[9]: https://pypi.python.org/pypi/statsd
-[10]: https://github.com/amitsaha/python-monitoring-talk/tree/master/statsd
-[11]: https://pypi.python.org/pypi/prometheus_client
-[12]: https://github.com/amitsaha/python-monitoring-talk/tree/master/prometheus
-[13]: http://echorand.me/your-options-for-monitoring-multi-process-python-applications-with-prometheus.html
-[14]: https://blog.codeship.com/monitoring-your-synchronous-python-web-applications-using-prometheus/
-[15]: https://blog.codeship.com/monitoring-your-asynchronous-python-web-applications-using-prometheus/
-[16]: https://github.com/prometheus/alertmanager
-[17]: https://www.nagios.org/about/overview/
-[18]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
-[19]: http://opentracing.io/documentation/
-[20]: http://echorand.me/introducing-distributed-tracing-in-your-python-application-via-zipkin.html
-[21]: https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html
-[22]: http://www.integralist.co.uk/posts/monitoring-best-practices/?imm_mid=0fbebf&cmp=em-webops-na-na-newsltr_20180309
-[23]: https://www.robustperception.io/who-wants-seconds/
-[24]: https://github.com/etsy/statsd/blob/master/docs/metric_types.md
-[25]: https://prometheus.io/docs/concepts/metric_types/
-[26]: https://www.robustperception.io/how-does-a-prometheus-gauge-work/
-[27]: https://www.robustperception.io/why-are-prometheus-histograms-cumulative/
-[28]: https://www.robustperception.io/monitoring-batch-jobs-in-python/
-[29]: https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud
-[30]: https://www.youtube.com/watch?v=lJ8ydIuPFeU&feature=youtu.be
-[31]: http://linuxczar.net/blog/2017/06/15/prometheus-histogram-2/
-[32]: https://www.dynatrace.com/news/blog/why-averages-suck-and-percentiles-are-great/
-[33]: https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
-[34]: https://engineering.linkedin.com/performance/who-moved-my-99th-percentile-latency
-[35]: https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/
-[36]: http://psy-lob-saw.blogspot.com.au/2015/02/hdrhistogram-better-latency-capture.html
-[37]: https://us.pycon.org/2018/schedule/presentation/133/
-[38]: https://us.pycon.org/2018/
-[39]: https://opensource.com/tags/python
-[40]: https://opensource.com/tags/pycon
-[41]: https://opensource.com/tags/programming
-[42]: https://opensource.com/users/amitsaha
-[43]: https://opensource.com/users/amitsaha
-[44]: https://opensource.com/participate
-[45]: https://opensource.com/article/18/4/metrics-monitoring-and-python
-[46]: undefined
-[47]: https://github.com/lujun9972
-[48]: https://github.com/译者ID
-[49]: https://github.com/校对者ID
-[50]: https://github.com/LCTT/TranslateProject
-[51]: https://linux.cn/
\ No newline at end of file
+[1]: https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
+[2]: https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
+[3]: https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
+[4]: https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
+[5]: https://developers.redhat.com/?intcmp=7016000000127cYAAQ
+[6]: https://github.com/amitsaha/python-monitoring-talk
+[7]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo1
+[8]: http://flask.pocoo.org/
+[9]: https://github.com/amitsaha/python-monitoring-talk/tree/master/demo2
+[10]: https://pandas.pydata.org/
+[11]: http://jupyter.org/
+[12]: https://prometheus.io/
+[13]: https://github.com/etsy/statsd
+[14]: https://pypi.python.org/pypi/statsd
+[15]: https://github.com/amitsaha/python-monitoring-talk/tree/master/statsd
+[16]: https://pypi.python.org/pypi/prometheus_client
+[17]: https://github.com/amitsaha/python-monitoring-talk/tree/master/prometheus
+[18]: http://echorand.me/your-options-for-monitoring-multi-process-python-applications-with-prometheus.html
+[19]: https://blog.codeship.com/monitoring-your-synchronous-python-web-applications-using-prometheus/
+[20]: https://blog.codeship.com/monitoring-your-asynchronous-python-web-applications-using-prometheus/
+[21]: https://github.com/prometheus/alertmanager
+[22]: https://www.nagios.org/about/overview/
+[23]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
+[24]: http://opentracing.io/documentation/
+[25]: http://echorand.me/introducing-distributed-tracing-in-your-python-application-via-zipkin.html
+[26]: https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html
+[27]: http://www.integralist.co.uk/posts/monitoring-best-practices/?imm_mid=0fbebf&cmp=em-webops-na-na-newsltr_20180309
+[28]: https://www.robustperception.io/who-wants-seconds/
+[29]: https://github.com/etsy/statsd/blob/master/docs/metric_types.md
+[30]: https://prometheus.io/docs/concepts/metric_types/
+[31]: https://www.robustperception.io/how-does-a-prometheus-gauge-work/
+[32]: https://www.robustperception.io/why-are-prometheus-histograms-cumulative/
+[33]: https://www.robustperception.io/monitoring-batch-jobs-in-python/
+[34]: https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud
+[35]: https://www.youtube.com/watch?v=lJ8ydIuPFeU&feature=youtu.be
+[36]: http://linuxczar.net/blog/2017/06/15/prometheus-histogram-2/
+[37]: https://www.dynatrace.com/news/blog/why-averages-suck-and-percentiles-are-great/
+[38]: https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
+[39]: https://engineering.linkedin.com/performance/who-moved-my-99th-percentile-latency
+[40]: https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/
+[41]: http://psy-lob-saw.blogspot.com.au/2015/02/hdrhistogram-better-latency-capture.html
+[42]: https://us.pycon.org/2018/schedule/presentation/133/
+[43]: https://us.pycon.org/2018/
+[44]: https://opensource.com/users/amitsaha
+[45]: https://opensource.com/users/amitsaha
+[46]: https://opensource.com/participate
+[47]: https://opensource.com/article/18/4/metrics-monitoring-and-python
+[48]: https://opensource.com/users/amitsaha
+[49]: https://github.com/lujun9972
+[50]: https://github.com/译者ID
+[51]: https://github.com/校对者ID
+[52]: https://github.com/LCTT/TranslateProject
+[53]: https://linux.cn/
\ No newline at end of file
From 44273e61bd5e53693074fd79317561ffb5c37549 Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 25 Apr 2018 03:38:45 +0000
Subject: [PATCH 094/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20gentle=20intr?=
=?UTF-8?q?oduction=20to=20FreeDOS=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ntroduction to FreeDOS | Opensource.com.md | 141 ++++++++++++++++++
1 file changed, 141 insertions(+)
create mode 100644 sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md
diff --git a/sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md b/sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md
new file mode 100644
index 0000000000..d37c88ffd7
--- /dev/null
+++ b/sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md
@@ -0,0 +1,141 @@
+# A gentle introduction to FreeDOS
+
+
+
+Image credits: Jim Hall, CC BY
+
+FreeDOS is an old operating system, but it is new to many people. In 1994, several developers and I came together to [create FreeDOS][1]—a complete, free, DOS-compatible operating system you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.
+
+In 1994, FreeDOS was immediately familiar to anyone who had used Microsoft's proprietary MS-DOS. And that was by design; FreeDOS intended to mimic MS-DOS as much as possible. As a result, DOS users in the 1990s were able to jump right into FreeDOS. But times have changed. Today, open source developers are more familiar with the Linux command line or they may prefer a graphical desktop like [GNOME][2], making the FreeDOS command line seem alien at first.
+
+New users often ask, "I [installed FreeDOS][3], but how do I use it?" If you haven't used DOS before, the blinking C:\> DOS prompt can seem a little unfriendly. And maybe scary. This gentle introduction to FreeDOS should get you started. It offers just the basics: how to get around and how to look at files. If you want to learn more than what's offered here, visit the [FreeDOS wiki][4].
+
+## The DOS prompt
+
+First, let's look at the empty prompt and what it means.
+
+
+
+DOS is a "disk operating system" created when personal computers ran from floppy disks. Even when computers supported hard drives, it was common in the 1980s and 1990s to switch frequently between the different drives. For example, you might make a backup copy of your most important files to a floppy disk.
+
+DOS referenced each drive by a letter. Early PCs could have only two floppy drives, which were assigned as the A: and B: drives. The first partition on the first hard drive was the C: drive, and so on for other drives. The C: in the prompt means you are using the first partition on the first hard drive.
+
+Starting with PC-DOS 2.0 in 1983, DOS also supported directories and subdirectories, much like the directories and subdirectories on Linux filesystems. But unlike Linux, DOS directory names are delimited by \ instead of /. Putting that together with the drive letter, the C:\ in the prompt means you are in the top, or "root," directory of the C: drive.
+
+The > is the literal prompt where you type your DOS commands, like the $ prompt on many Linux shells. The part before the > tells you the current working directory, and you type commands at the > prompt.
+
+## Finding your way around in DOS
+
+The basics of navigating through directories in DOS are very similar to the steps you'd use on the Linux command line. You need to remember only a few commands.
+
+### Displaying a directory
+
+When you want to see the contents of the current directory, use the DIR command. Since DOS commands are not case-sensitive, you could also type dir. By default, DOS displays the details of every file and subdirectory, including the name, extension, size, and last modified date and time.
+
+
+
+If you don't want the extra details about individual file sizes, you can display a "wide" directory by using the /w option with the DIR command. Note that Linux uses the hyphen (-) or double-hyphen (--) to start command-line options, but DOS uses the slash character (/).
+
+
+
+You can look inside a specific subdirectory by passing the pathname as a parameter to DIR. Again, another difference from Linux is that Linux files and directories are case-sensitive, but DOS names are case-insensitive. DOS will usually display files and directories in all uppercase, but you can equally reference them in lowercase.
+
+
+
+### Changing the working directory
+
+Once you can see the contents of a directory, you can "move into" any other directory. On DOS, you change your working directory with the CHDIR command, also abbreviated as CD. You can change into a subdirectory with a command like CD CHOICE or into a new path with CD \FDOS\DOC\CHOICE.
+
+
+
+Just like on the Linux command line, DOS uses . to represent the current directory, and .. for the parent directory (one level "up" from the current directory). You can combine these. For example, CD .. changes to the parent directory, and CD ..\.. moves you two levels "up" from the current directory.
+
+FreeDOS also borrows a feature from Linux: You can use CD - to jump back to your previous working directory. That is handy after you change into a new path to do one thing and want to go back to your previous work.
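+
+Putting these commands together, a navigation session might look something like this (the paths are made up for illustration):
+
+```
+C:\> CD \FDOS\DOC\CHOICE
+C:\FDOS\DOC\CHOICE> CD ..
+C:\FDOS\DOC> CD -
+C:\FDOS\DOC\CHOICE> CD \
+C:\>
+```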
+
+
+
+### Changing the working drive
+
+Under Linux, the concept of a "drive" is hidden. In Linux and other Unix systems, you "mount" a drive to a directory path, such as /backup, or the system does it for you automatically, such as /var/run/media/user/flashdrive. But DOS is a much simpler system. With DOS, you must change the working drive by yourself.
+
+Remember that DOS assigns the first partition on the first hard drive as the C: drive, and so on for other drive letters. On modern systems, people rarely divide a hard drive with multiple DOS partitions; they simply use the whole disk—or as much of it as they can assign to DOS. Today, C: is usually the first hard drive, and D: is usually another hard drive or the CD-ROM drive. Other network drives can be mapped to other letters, such as E: or Z: or however you want to organize them.
+
+Changing drives is easy under DOS. Just type the drive letter followed by a colon (:) on the command line, and DOS will change to that working drive. For example, on my [QEMU][5] system, I set my D: drive to a shared directory in my Linux home directory, where I keep installers for various DOS applications and games I want to test.
+
+
+
+Be careful that you don't try to change to a drive that doesn't exist. DOS may set the working drive, but if you try to do anything there you'll get the somewhat infamous "Abort, Retry, Fail" DOS error message.
+
+
+
+## Other things to try
+
+With the CD and DIR commands, you have the basics of DOS navigation. These commands allow you to find your way around DOS directories and see what other subdirectories and files exist. Once you are comfortable with basic navigation, you might also try these other basic DOS commands:
+
+* MKDIR or MD to create new directories
+* RMDIR or RD to remove directories
+* TREE to view a list of directories and subdirectories in a tree-like format
+* TYPE and MORE to display file contents
+* RENAME or REN to rename files
+* DEL or ERASE to delete files
+* EDIT to edit files
+* CLS to clear the screen
+
+If those aren't enough, you can find a list of [all DOS commands][6] on the FreeDOS wiki.
+
+In FreeDOS, you can use the /? parameter to get brief instructions on how to use each command. For example, EDIT /? will show you the usage and options for the editor. Or you can type HELP to use an interactive help system.
+
+Like any DOS, FreeDOS is meant to be a simple operating system. The DOS filesystem is pretty simple to navigate with only a few basic commands. So fire up a QEMU session, install FreeDOS, and experiment with the DOS command line. Maybe now it won't seem so scary.
+
+## Related stories:
+
+* [How to install FreeDOS in QEMU][7]
+* [How to install FreeDOS on Raspberry Pi][8]
+* [The origin and evolution of FreeDOS][9]
+* [Four cool facts about FreeDOS][10]
+
+## About the author
+
+[][11]
+
+Jim Hall \- Jim Hall is an open source software developer and advocate, probably best known as the founder and project coordinator for FreeDOS. Jim is also very active in the usability of open source software, as a mentor for usability testing in GNOME Outreachy, and as an occasional adjunct professor teaching a course on the Usability of Open Source Software. From 2016 to 2017, Jim served as a director on the GNOME Foundation Board of Directors. At work, Jim is Chief Information Officer in local... [more about Jim Hall][12]
+
+[More about me][13]
+
+* [Learn how you can contribute][14]
+
+---
+
+via: [https://opensource.com/article/18/4/gentle-introduction-freedos][15]
+
+作者: [undefined][16] 选题者: [@lujun9972][17] 译者: [译者ID][18] 校对: [校对者ID][19]
+
+本文由 [LCTT][20] 原创编译,[Linux中国][21] 荣誉推出
+
+[1]: https://opensource.com/article/17/10/freedos
+[2]: https://opensource.com/article/17/8/gnome-20-anniversary
+[3]: http://www.freedos.org/
+[4]: http://wiki.freedos.org/
+[5]: https://www.qemu.org/
+[6]: http://wiki.freedos.org/wiki/index.php/Dos_commands
+[7]: https://opensource.com/article/17/10/run-dos-applications-linux
+[8]: https://opensource.com/article/18/3/can-you-run-dos-raspberry-pi
+[9]: https://opensource.com/article/17/10/freedos
+[10]: https://opensource.com/article/17/6/freedos-still-cool-today
+[11]: https://opensource.com/users/jim-hall
+[12]: https://opensource.com/users/jim-hall
+[13]: https://opensource.com/users/jim-hall
+[14]: https://opensource.com/participate
+[15]: https://opensource.com/article/18/4/gentle-introduction-freedos
+[16]: undefined
+[17]: https://github.com/lujun9972
+[18]: https://github.com/译者ID
+[19]: https://github.com/校对者ID
+[20]: https://github.com/LCTT/TranslateProject
+[21]: https://linux.cn/
\ No newline at end of file
From 691df008c866f724e515c72aa3c814298e04f686 Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Wed, 25 Apr 2018 11:44:11 +0800
Subject: [PATCH 095/220] Translated by qhwdw
---
...331 How to build a plotter with Arduino.md | 263 ------------------
...331 How to build a plotter with Arduino.md | 262 +++++++++++++++++
2 files changed, 262 insertions(+), 263 deletions(-)
delete mode 100644 sources/tech/20180331 How to build a plotter with Arduino.md
create mode 100644 translated/tech/20180331 How to build a plotter with Arduino.md
diff --git a/sources/tech/20180331 How to build a plotter with Arduino.md b/sources/tech/20180331 How to build a plotter with Arduino.md
deleted file mode 100644
index daebb3ec54..0000000000
--- a/sources/tech/20180331 How to build a plotter with Arduino.md
+++ /dev/null
@@ -1,263 +0,0 @@
-Translating by qhwdw
-How to build a plotter with Arduino
-======
-
-
-Back in school, there was an HP plotter well hidden in a closet in the science department. I got to play with it for a while and always wanted to have one of my own. Fast forward many, many years. Stepper motors are easily available, I am back into doing stuff with electronics and micro-controllers, and I recently saw someone creating displays with engraved acrylic. This triggered me to finally build my own plotter.
-
-
-![The plotter in action ][2]
-
-The DIY plotter; see it in action in this [video][3].
-
-As an old-school 5V guy, I really like the original [Arduino Uno][4]. Here's a list of the other components I used (fyi, I am not affiliated with any of these companies):
-
- * [FabScan shield][5]: Physically hosts the stepper motor drivers.
- * [SilentStepSticks][6]: Motor drivers, as the Arduino on its own can't handle the voltage and current that a stepper motor needs. I am using ones with a Trinamic TMC2130 chip, but in standalone mode for now. Those are replacements for the Pololu 4988, but allow for much quieter operation.
- * [SilentStepStick protectors][7]: Diodes that prevent the turning motor from frying your motor drivers (you want them, believe me).
- * Stepper motors: I selected NEMA 17 motors with 12V (e.g., models from [Watterott][8] and [SparkFun][9]).
- * [Linear guide rails][10]
- * Wooden base plate
- * Wood screws
- * GT2 belt
- * [GT2 timing pulley][11]
-
-
-
-This is a work in progress that I created as a personal project. If you are looking for a ready-made kit, then check out the [MaXYposi][12] from German Make magazine.
-
-### Hardware setup
-
-As you can see here, I started out much too large. This plotter can't comfortably sit on my desk, but it's okay, as I did it for learning purposes (and, as I have to re-do some things, next time I'll use smaller beams).
-
-
-![Plotter base plate with X-axis and Y-axis rails][14]
-
-Plotter base plate with X-axis and Y-axis rails
-
-The belt is mounted on both sides of the rail and then slung around the motor with some helper wheels:
-
-
-![The belt routing on the motor][16]
-
-The belt routing on the motor
-
-I've stacked several components on top of the Arduino. The Arduino is on the bottom, above that is the FabScan shield, next is a StepStick protector on motor slots 1+2, and the SilentStepStick is on top. Note that the SCK and SDI pins are not connected.
-
-
-![Arduino and Shield][18]
-
-Arduino stack setup ([larger image][19])
-
-Be careful to correctly attach the wires to the motor. When in doubt, look at the data sheet or an ohmmeter to figure out which wires belong together.
-
-### Software setup
-
-#### The basics
-
-While software like [grbl][20] can interpret so-called G-codes for tool movement and other things, and I could have just flashed it to the Arduino, I am curious and wanted to better understand things. (My X-Y plotter software is available at [GitHub][21] and comes without any warranty.)
-
-To drive a stepper motor with the StepStick (or compatible) driver, you basically need to send a high and then a low signal to the respective pin. Or in Arduino terms:
-```
-digitalWrite(stepPin, HIGH);
-
-delayMicroseconds(30);
-
-digitalWrite(stepPin, LOW);
-
-```
-
-Where `stepPin` is the pin number for the stepper: 3 for motor 1 and 6 for motor 2.
-
-Before the stepper does any work, it must be enabled.
-```
-digitalWrite(enPin, LOW);
-
-```
-
-Actually, the StepStick knows three states for the pin:
-
- * Low: Motor is enabled
- * High: Motor is disabled
- * Pin not connected: Motor is enabled but goes into an energy-saving mode after a while
-
-
-
-When a motor is enabled, its coils are powered and it keeps its position. It is almost impossible to manually turn its axis. This is good for precision purposes, but it also means that both motors and driver chips are "flooded" with power and will warm up.
-
-And last, but not least, we need a way to determine the plotter's direction:
-```
-digitalWrite(dirPin, direction);
-
-```
-
-The following table lists the functions and the pins
-
-Function Motor1 Motor2 Enable 2 5 Direction 4 7 Step 3 6
-
-Before we can use the pins, we need to set them to `OUTPUT` mode in the `setup()` section of the code
-```
-pinMode(enPin1, OUTPUT);
-
-pinMode(stepPin1, OUTPUT);
-
-pinMode(dirPin1, OUTPUT);
-
-digitalWrite(enPin1, LOW);
-
-```
-
-With this knowledge, we can easily get the stepper to move around:
-```
- totalRounds = ...
-
- for (int rounds =0 ; rounds < 2*totalRounds; rounds++) {
-
- if (dir==0){ // set direction
-
- digitalWrite(dirPin2, LOW);
-
- } else {
-
- digitalWrite(dirPin2, HIGH);
-
- }
-
- delay(1); // give motors some breathing time
-
- dir = 1-dir; // reverse direction
-
- for (int i=0; i < 6400; i++) {
-
- int t = abs(3200-i) / 200;
-
- digitalWrite(stepPin2, HIGH);
-
- delayMicroseconds(70 + t);
-
- digitalWrite(stepPin2, LOW);
-
- delayMicroseconds(70 + t);
-
- }
-
- }
-
-```
-
-This will make the slider move left and right. This code deals with one stepper, but for an X-Y plotter, we have two axes to consider.
-
-#### Command interpreter
-
-I started to implement a simple command interpreter to use path specifications, such as:
-```
-"X30|Y30|X-30 Y-30|X-20|Y-20|X20|Y20|X-40|Y-25|X40 Y25
-
-```
-
-to describe relative movements in millimeters (1mm equals 80 steps).
-
-The plotter software implements a _continuous mode_ , which allows a PC to feed large paths (in chunks) to the plotter. (This how I plotted the Hilbert curve in this [video][22].)
-
-### Building a better pen holder
-
-In the first image above, the pen was tied to the Y-axis with some metal string. This was not precise and also did not enable the software to raise and lower the hand (this explains the big black dots).
-
-I have since created a better, more precise pen holder that uses a servo to raise and lower the pen. This new, improved holder can be seen in this picture and in the Hilbert curve video linked above.
-
-![Servo to raise/lower the pen ][24]
-
-Close-up view of the servo arm in the upper position raising the pen
-
-The pen is attached with a little clamp (the one shown is a size 8 clamp typically used to attach cables to walls). The servo arm can raise the pen; when the arm goes down, gravity will lower the pen.
-
-#### Driving the servo
-
-Driving the servo is relatively straightforward: Just provide the position and the servo does all the work.
-```
-#include
-
-
-
-// Servo pin
-
-#define servoData PIN_A1
-
-
-
-// Positions
-
-#define PEN_UP 10
-
-#define PEN_DOWN 50
-
-
-
-Servo penServo;
-
-
-
-void setup() {
-
- // Attach to servo and raise pen
-
- penServo.attach(servoData);
-
- penServo.write(PEN_UP);
-
-}
-
-```
-
-I am using the servo headers on the Motor 4 place of the FabScan shield, so I've used analog pin 1.
-
-Lowering the pen is as easy as:
-```
- penServo.write(PEN_DOWN);
-
-```
-
-### Next steps
-
-One of my next steps will be to add some end detectors, but I may skip them and use the StallGuard mode of the TMC2130 instead. Those detectors can also be used to implement a `home` command.
-
-And perhaps in the future I'll add a real Z-axis that can hold an engraver to do wood milling, or PCB drilling, or acrylic engraving, or ... (a laser comes to mind as well).
-
-This was originally published on the [Some Things to Remember][25] blog and is reprinted with permission.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/3/diy-plotter-arduino
-
-作者:[Heiko W.Rupp][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/pilhuhn
-[1]:https://opensource.com/file/384786
-[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/plotter-in-action.png?itok=q9oHrJGr (The plotter in action )
-[3]:https://twitter.com/pilhuhn/status/948205323726344193
-[4]:https://en.wikipedia.org/wiki/Arduino#Official_boards
-[5]:http://www.watterott.com/de/Arduino-FabScan-Shield
-[6]:http://www.watterott.com/de/SilentStepStick-TMC2130
-[7]:http://www.watterott.com/de/SilentStepStick-Protector
-[8]:http://www.watterott.com/de/Schrittmotor-incl-Anschlusskabel
-[9]:https://www.sparkfun.com/products/9238
-[10]:https://www.ebay.de/itm/CNC-Set-12x-600mm-Linearfuhrung-Linear-Guide-Rail-Stage-3D-/322917927292?hash=item4b2f68897c
-[11]:http://www.watterott.com/de/OpenBuilds-GT2-2mm-Aluminium-Timing-Pulley
-[12]:https://www.heise.de/make/artikel/MaXYposi-Projektseite-zum-universellen-XY-Portalroboter-von-Make-3676050.html
-[13]:https://opensource.com/file/384776
-[14]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/entire_plotter.jpg?itok=40iSEs5K (Plotter base plate with X-axis and Y-axis rails)
-[15]:https://opensource.com/file/384791
-[16]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/y_motor_detail.jpg?itok=SICJBdRv (The belt routing on the motor)
-[17]:https://opensource.com/file/384771
-[18]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/arduino_and_shield.jpg?itok=OFumhpJm
-[19]:https://www.dropbox.com/s/7bp3bo5g2ujser8/IMG_20180103_110111.jpg?dl=0
-[20]:https://github.com/gnea/grbl
-[21]:https://github.com/pilhuhn/xy-plotter
-[22]:https://twitter.com/pilhuhn/status/949737734654124032
-[23]:/https://opensource.comfile/384781
-[24]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pen_servo.jpg?itok=b2cnwB3P (Servo to raise/lower the pen )
-[25]:http://pilhuhn.blogspot.com/2018/01/homegrown-xy-plotter-with-arduino.html
diff --git a/translated/tech/20180331 How to build a plotter with Arduino.md b/translated/tech/20180331 How to build a plotter with Arduino.md
new file mode 100644
index 0000000000..48412c613b
--- /dev/null
+++ b/translated/tech/20180331 How to build a plotter with Arduino.md
@@ -0,0 +1,262 @@
+如何使用 Arduino 制作一个绘图仪
+======
+
+
+在上学时,科学系的壁橱里藏着一台惠普绘图仪。虽然我在上学期间可以经常使用它,但我还是想拥有一台属于自己的绘图仪。许多年之后,步进电机已经很容易获得了,我又在从事电子产品和微控制器方面的工作,最近,我看到有人用丙烯酸塑料(acrylic)制作了一个显示器。这件事启发了我,并最终制作了我自己的绘图仪。
+
+
+![The plotter in action ][2]
+
+我 DIY 的绘图仪;在这里看它工作的[视频][3]。
+
+由于我是一个很怀旧的人,我真的很喜欢最初的 [Arduino Uno][4]。下面是我用到的其它东西的一个清单(仅供参考,其中一些我也不是很满意):
+
+ * [FabScan shield][5]:承载步进电机驱动器。
+ * [SilentStepSticks][6]:步进电机驱动器,因为 Arduino 自身不能处理步进电机所需的电压和电流。因此我使用了一个 Trinamic TMC2130 芯片,但它是工作在单独模式。这些都是为替换 Pololu 4988,但是它们运转更安静。
+ * [SilentStepStick 保护装置][7]:一个防止你的电机驱动器转动过快的二极管(相信我,你肯定会需要它的)。
+ * 步进电机:我选择的是使用 12 V 电压的 NEMA 17 电机(如,来自 [Watterott][8] 和 [SparkFun][9] 的型号)。
+ * [直线导杆][10]
+ * 木制的基板
+ * 木螺丝
+ * GT2 皮带
+ * [GT2 同步滑轮][11]
+
+
+
+这是我作为个人项目而设计的。如果你想找到一个现成的工具套件,你可以从 German Make 杂志上找到 [MaXYposi][12]。
+
+### 硬件安装
+
+正如你所看到的,我刚开始做的太大了。这个绘图仪并不合适放在我的桌子上。但是,没有关系,我只是为了学习它(并且,我也将一些东西进行重新制作,下次我将使用一个更小的横梁)。
+
+
+![Plotter base plate with X-axis and Y-axis rails][14]
+
+带 X 轴和 Y 轴轨道的绘图仪基板
+
+皮带安装在轨道的侧面,并且用它将一些辅助轮和电机挂在一起:
+
+
+![The belt routing on the motor][16]
+
+电机上的皮带路由
+
+我在 Arduino 上堆叠了几个组件。Arduino 在最下面,它之上是 FabScan shield,接着是一个安装在 1 和 2 号电机槽上的 StepStick 保护装置,SilentStepStick 在最上面。注意,SCK 和 SDI 针脚没有连接。
+
+
+![Arduino and Shield][18]
+
+Arduino 堆叠配置([高清大图][19])
+
+注意将电机的连接线接到正确的针脚上。如果有疑问,就去查看它的数据表,或者使用欧姆表去找出哪一对线是正确的。
+
+### 软件配置
+
+#### 基础部分
+
+虽然像 [grbl][20] 这样的软件可以解释诸如像装置移动和其它一些动作的 G-codes,并且,我也可以将它刷进 Arduino 中,但是我很好奇,想更好地理解它是如何工作的。(我的 X-Y 绘图仪软件可以在 [GitHub][21] 上找到,不过我不提供任何保修。)
+
+使用 StepStick(或者其它兼容的)驱动器去驱动步进电机,基本上只需要发送一个高电平信号或者低电平信号到各自的针脚即可。或者使用 Arduino 的术语:
+```
+digitalWrite(stepPin, HIGH);
+
+delayMicroseconds(30);
+
+digitalWrite(stepPin, LOW);
+
+```
+
+在 `stepPin` 的位置上是步进电机的针脚编号:3 是 1 号电机,而 6 是 2 号电机。
+
+在步进电机能够工作之前,它必须先被启用。
+```
+digitalWrite(enPin, LOW);
+
+```
+
+实际上,StepStick 能够理解针脚的三个状态:
+
+ * Low:电机已启用
+ * High:电机已禁用
+ * Pin 未连接:电机已启用,但在一段时间后进入节能模式
+
+
+
+电机启用后,它的线圈已经有了力量并用来保持位置。这时候几乎不可能用手来转动它的轴。这样可以保证很好的精度,但是也意味着电机和驱动器芯片都“充满着”力量,并且也因此会发热。
+
+最后,也是很重要的,我们需要一个决定绘图仪方向的方法:
+```
+digitalWrite(dirPin, direction);
+
+```
+
+下面的表列出了功能和针脚~~(致核对:下面的表格式错误)~~
+
+Function Motor1 Motor2 Enable 2 5 Direction 4 7 Step 3 6
+
+在我们使用这些针脚之前,我们需要在代码的 `setup()` 节中设置它的 `OUTPUT` 模式。
+```
+pinMode(enPin1, OUTPUT);
+
+pinMode(stepPin1, OUTPUT);
+
+pinMode(dirPin1, OUTPUT);
+
+digitalWrite(enPin1, LOW);
+
+```
+
+了解这些知识后,我们可以很容易地让步进电机四处移动:
+```
+ totalRounds = ...
+
+ for (int rounds =0 ; rounds < 2*totalRounds; rounds++) {
+
+ if (dir==0){ // set direction
+
+ digitalWrite(dirPin2, LOW);
+
+ } else {
+
+ digitalWrite(dirPin2, HIGH);
+
+ }
+
+ delay(1); // give motors some breathing time
+
+ dir = 1-dir; // reverse direction
+
+ for (int i=0; i < 6400; i++) {
+
+ int t = abs(3200-i) / 200;
+
+ digitalWrite(stepPin2, HIGH);
+
+ delayMicroseconds(70 + t);
+
+ digitalWrite(stepPin2, LOW);
+
+ delayMicroseconds(70 + t);
+
+ }
+
+ }
+
+```
+
+这将使滑块向左和向右移动。这些代码只操纵一个步进电机,但是,对于一个 X-Y 绘图仪,我们要考虑两个轴。
+
+#### 命令解释器
+
+我开始做一个简单的命令解释器去使用规范的路径,比如:
+```
+"X30|Y30|X-30 Y-30|X-20|Y-20|X20|Y20|X-40|Y-25|X40 Y25
+
+```
+
+用毫米来描述相对移动(1 毫米等于 80 步)。
+
+绘图仪软件实现了一个 _持续模式_ ,这可以允许一台 PC 给它提供一个很大的路径(很多的路径)去绘制。(在这个[视频][22]中展示了如何绘制 Hilbert 曲线)
+
+### 设计一个好用的握笔器
+
+在上面的第一张图中,绘图笔是细绳子绑到 Y 轴上的。这样绘图也不精确,并且也无法在软件中实现提笔和下笔(如示例中的大黑点)。
+
+因此,我设计了一个更好用的、更精确的握笔器,它使用一个伺服器去提笔和下笔。可以在下面的这张图中看到这个新的、改进后的握笔器,上面视频链接中的 Hilbert 曲线就是使用它绘制的。
+
+![Servo to raise/lower the pen ][24]
+
+图中的特写镜头就是伺服器臂提起笔的图像
+
+笔是用一个小夹具固定住的(图上展示的是一个大小为 8 的夹具,它一般用于将线缆固定在墙上)。伺服器臂能够提起笔;当伺服器臂放下来的时候,笔就会被放下来。
+
+#### 驱动伺服器
+
+驱动伺服器是非常简单的:只需要提供位置,伺服器就可以完成所有的工作。
+```
+#include
+
+
+
+// Servo pin
+
+#define servoData PIN_A1
+
+
+
+// Positions
+
+#define PEN_UP 10
+
+#define PEN_DOWN 50
+
+
+
+Servo penServo;
+
+
+
+void setup() {
+
+ // Attach to servo and raise pen
+
+ penServo.attach(servoData);
+
+ penServo.write(PEN_UP);
+
+}
+
+```
+
+我把伺服器接头连接在 FabScan shield 的 4 号电机上,因此,我将用 1 号模拟针脚。
+
+放下笔也很容易:
+```
+ penServo.write(PEN_DOWN);
+
+```
+
+### 进一步扩展
+
+我的进一步扩展的其中一项就是添加一些终止检测器,但是,我也可以不用它们,进而使用 TMC2130 的 StallGuard 模式来代替。这些检测器也可以用于去实现一个 `home` 命令。
+
+以后,我或许还将添加一个真实的 Z 轴,这样它就可以对一个木头进行铣削雕刻,或者钻一个 PCB 板,或者雕刻一块丙烯酸塑料,或者 … (我还想到了用激光)。
+
+这篇文章最初发布在 [Some Things to Remember][25] 博客中并授权重分发。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/3/diy-plotter-arduino
+
+作者:[Heiko W.Rupp][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/pilhuhn
+[1]:https://opensource.com/file/384786
+[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/plotter-in-action.png?itok=q9oHrJGr "The plotter in action "
+[3]:https://twitter.com/pilhuhn/status/948205323726344193
+[4]:https://en.wikipedia.org/wiki/Arduino#Official_boards
+[5]:http://www.watterott.com/de/Arduino-FabScan-Shield
+[6]:http://www.watterott.com/de/SilentStepStick-TMC2130
+[7]:http://www.watterott.com/de/SilentStepStick-Protector
+[8]:http://www.watterott.com/de/Schrittmotor-incl-Anschlusskabel
+[9]:https://www.sparkfun.com/products/9238
+[10]:https://www.ebay.de/itm/CNC-Set-12x-600mm-Linearfuhrung-Linear-Guide-Rail-Stage-3D-/322917927292?hash=item4b2f68897c
+[11]:http://www.watterott.com/de/OpenBuilds-GT2-2mm-Aluminium-Timing-Pulley
+[12]:https://www.heise.de/make/artikel/MaXYposi-Projektseite-zum-universellen-XY-Portalroboter-von-Make-3676050.html
+[13]:https://opensource.com/file/384776
+[14]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/entire_plotter.jpg?itok=40iSEs5K "Plotter base plate with X-axis and Y-axis rails"
+[15]:https://opensource.com/file/384791
+[16]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/y_motor_detail.jpg?itok=SICJBdRv "The belt routing on the motor"
+[17]:https://opensource.com/file/384771
+[18]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/arduino_and_shield.jpg?itok=OFumhpJm
+[19]:https://www.dropbox.com/s/7bp3bo5g2ujser8/IMG_20180103_110111.jpg?dl=0
+[20]:https://github.com/gnea/grbl
+[21]:https://github.com/pilhuhn/xy-plotter
+[22]:https://twitter.com/pilhuhn/status/949737734654124032
+[23]:/https://opensource.comfile/384781
+[24]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pen_servo.jpg?itok=b2cnwB3P "Servo to raise/lower the pen "
+[25]:http://pilhuhn.blogspot.com/2018/01/homegrown-xy-plotter-with-arduino.html
From 7778c9156dd9a2e19465db7f0ec5decbd4e36848 Mon Sep 17 00:00:00 2001
From: DarkSun
Date: Wed, 25 Apr 2018 04:25:01 +0000
Subject: [PATCH 096/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20An=20introduction?=
=?UTF-8?q?=20to=20the=20GNU=20Core=20Utilities=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...the GNU Core Utilities | Opensource.com.md | 146 ++++++++++++++++++
1 file changed, 146 insertions(+)
create mode 100644 sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md
diff --git a/sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md b/sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md
new file mode 100644
index 0000000000..f6f8708fb2
--- /dev/null
+++ b/sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md
@@ -0,0 +1,146 @@
+An introduction to the GNU Core Utilities
+======
+
+
+
+Image credits :
+
+[Bella67][1] via Pixabay. [CC0][2].
+
+Two sets of utilities—the [GNU Core Utilities][3] and util-linux—comprise many of the Linux system administrator's most basic and regularly used tools. Their basic functions allow sysadmins to perform many of the tasks required to administer a Linux computer, including management and manipulation of text files, directories, data streams, storage media, process controls, filesystems, and much more.
+
+These tools are indispensable because, without them, it is impossible to accomplish any useful work on a Unix or Linux computer. Given their importance, let's examine them.
+
+### GNU coreutils
+
+The Linux Terminal
+
+* [Top 7 terminal emulators for Linux][4]
+* [10 command-line tools for data analysis in Linux][5]
+* [Download Now: SSH cheat sheet][6]
+* [Advanced Linux commands cheat sheet][7]
+
+To understand the origins of the GNU Core Utilities, we need to take a short trip in the Wayback machine to the early days of Unix at Bell Labs. [Unix was written][8] so Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna could continue with something they had started while working on a large multi-tasking and multi-user computer project called [Multics][9]. That little something was a game called Space Travel. As remains true today, it always seems to be the gamers who drive forward the technology of computing. This new operating system was much more limited than Multics, as only two users could log in at a time, so it was called Unics. This name was later changed to Unix.
+
+Over time, Unix turned out to be such a success that Bell Labs began essentially giving it away it to universities and later to companies for the cost of the media and shipping. Back in those days, system-level software was shared between organizations and programmers as they worked to achieve common goals within the context of system administration.
+
+Eventually, the [PHBs][10] at AT&T decided they should make money on Unix and started using more restrictive—and expensive—licensing. This was taking place at a time when software was becoming more proprietary, restricted, and closed. It was becoming impossible to share software with other users and organizations.
+
+Some people did not like this and fought it with free software. Richard M. Stallman, aka RMS, led a group of rebels who were trying to write an open and freely available operating system they called the GNU Operating System. This group created the GNU Utilities but didn't produce a viable kernel.
+
+When Linus Torvalds first wrote and compiled the Linux kernel, he needed a set of very basic system utilities to even begin to perform marginally useful work. The kernel does not provide commands or any type of command shell such as Bash. It is useless by itself. So, Linus used the freely available GNU Core Utilities and recompiled them for Linux. This gave him a complete, if quite basic, operating system.
+
+You can learn about all the individual programs that comprise the GNU Utilities by entering the command info coreutils at a terminal command line. The following list of the core utilities is part of that info page. The utilities are grouped by function to make specific ones easier to find; in the terminal, highlight the group you want more information on and press the Enter key.
+
+```
+* Output of entire files:: cat tac nl od base32 base64
+* Formatting file contents:: fmt pr fold
+* Output of parts of files:: head tail split csplit
+* Summarizing files:: wc sum cksum b2sum md5sum sha1sum sha2
+* Operating on sorted files:: sort shuf uniq comm ptx tsort
+* Operating on fields:: cut paste join
+* Operating on characters:: tr expand unexpand
+* Directory listing:: ls dir vdir dircolors
+* Basic operations:: cp dd install mv rm shred
+* Special file types:: mkdir rmdir unlink mkfifo mknod ln link readlink
+* Changing file attributes:: chgrp chmod chown touch
+* Disk usage:: df du stat sync truncate
+* Printing text:: echo printf yes
+* Conditions:: false true test expr
+* Redirection:: tee
+* File name manipulation:: dirname basename pathchk mktemp realpath
+* Working context:: pwd stty printenv tty
+* User information:: id logname whoami groups users who
+* System context:: date arch nproc uname hostname hostid uptime
+* SELinux context:: chcon runcon
+* Modified command invocation:: chroot env nice nohup stdbuf timeout
+* Process control:: kill
+* Delaying:: sleep
+* Numeric operations:: factor numfmt seq
+```
+
+There are 102 utilities on this list. It covers many of the functions necessary to perform basic tasks on a Unix or Linux host. However, many basic utilities are missing. For example, the mount and umount commands are not in this list. Those and many of the other commands that are not in the GNU coreutils can be found in the util-linux collection.
+
+### util-linux
+
+The util-linix package of utilities contains many of the other common commands that sysadmins use. These utilities are distributed by the Linux Kernel Organization, and virtually every one of these 107 commands were originally three separate collections—fileutils, shellutils, and textutils—which were [combined into the single package][11] util-linux in 2003.
+
+```
+agetty fsck.minix mkfs.bfs setpriv
+blkdiscard fsfreeze mkfs.cramfs setsid
+blkid fstab mkfs.minix setterm
+blockdev fstrim mkswap sfdisk
+cal getopt more su
+cfdisk hexdump mount sulogin
+chcpu hwclock mountpoint swaplabel
+chfn ionice namei swapoff
+chrt ipcmk newgrp swapon
+chsh ipcrm nologin switch_root
+colcrt ipcs nsenter tailf
+col isosize partx taskset
+colrm kill pg tunelp
+column last pivot_root ul
+ctrlaltdel ldattach prlimit umount
+ddpart line raw unshare
+delpart logger readprofile utmpdump
+dmesg login rename uuidd
+eject look renice uuidgen
+fallocate losetup reset vipw
+fdformat lsblk resizepart wall
+fdisk lscpu rev wdctl
+findfs lslocks RTC Alarm whereis
+findmnt lslogins runuser wipefs
+flock mcookie script write
+fsck mesg scriptreplay zramctl
+fsck.cramfs mkfs setarch
+```
+
+Some of these utilities have been deprecated and will likely fall out of the collection at some point in the future. You should check [Wikipedia's util-linux page][12] for information on many of the utilities, and the man pages also provide details on the commands.
+
+### Summary
+
+These two collections of Linux utilities, the GNU Core Utilities and util-linux, together provide the basic utilities required to administer a Linux system. As I researched this article, I found several interesting utilities I never knew about. Many of these commands are seldom needed, but when you need them, they are indispensable.
+
+Between these two collections, there are over 200 Linux utilities. While Linux has many more commands, these are the ones needed to manage the basic functions of a typical Linux host.
+
+### About the author
+
+[][13]
+
+David Both \- David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for... [more about David Both][14]
+
+[More about me][15]
+
+* [Learn how you can contribute][16]
+
+---
+
+via: [https://opensource.com/article/18/4/gnu-core-utilities][17]
+
+作者: [David Both][18] 选题者: [@lujun9972][19] 译者: [译者ID][20] 校对: [校对者ID][21]
+
+本文由 [LCTT][22] 原创编译,[Linux中国][23] 荣誉推出
+
+[1]: https://pixabay.com/en/tiny-people-core-apple-apple-half-700921/
+[2]: https://creativecommons.org/publicdomain/zero/1.0/
+[3]: https://www.gnu.org/software/coreutils/coreutils.html
+[4]: https://opensource.com/life/17/10/top-terminal-emulators?intcmp=7016000000127cYAAQ
+[5]: https://opensource.com/article/17/2/command-line-tools-data-analysis-linux?intcmp=7016000000127cYAAQ
+[6]: https://opensource.com/downloads/advanced-ssh-cheat-sheet?intcmp=7016000000127cYAAQ
+[7]: https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=7016000000127cYAAQ
+[8]: https://en.wikipedia.org/wiki/History_of_Unix
+[9]: https://en.wikipedia.org/wiki/Multics
+[10]: https://en.wikipedia.org/wiki/Pointy-haired_Boss
+[11]: https://en.wikipedia.org/wiki/GNU_Core_Utilities
+[12]: https://en.wikipedia.org/wiki/Util-linux
+[13]: https://opensource.com/users/dboth
+[14]: https://opensource.com/users/dboth
+[15]: https://opensource.com/users/dboth
+[16]: https://opensource.com/participate
+[17]: https://opensource.com/article/18/4/gnu-core-utilities
+[18]: https://opensource.com/users/dboth
+[19]: https://github.com/lujun9972
+[20]: https://github.com/译者ID
+[21]: https://github.com/校对者ID
+[22]: https://github.com/LCTT/TranslateProject
+[23]: https://linux.cn/
\ No newline at end of file
From efe8ff232ee2b94d8fe4e6f2055d44585263d0a1 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 23:43:51 +0800
Subject: [PATCH 097/220] PRF:20180206 How to start an open source program in
your company.md
@Valoniakim
---
... an open source program in your company.md | 26 ++++++++++---------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/translated/talk/20180206 How to start an open source program in your company.md b/translated/talk/20180206 How to start an open source program in your company.md
index 74d9c4c873..43a6e09736 100644
--- a/translated/talk/20180206 How to start an open source program in your company.md
+++ b/translated/talk/20180206 How to start an open source program in your company.md
@@ -1,25 +1,27 @@
-如何在企业中开展开源项目
+如何在企业中开展开源计划
======
+> 有 65% 的企业使用开源软件,并非只有互联网企业才能受惠于开源计划。
+

-很多互联网企业如 Google, Facebook, Twitter 等,都已经正式建立了数个开源项目(有的公司中建立了单独的开源项目部门)。在这个特定的项目内,开源项目的成本和成果都由公司内部消化。在这样一个实际的部门中,企业可以清晰透明地执行开源策略,这是企业成功开源化的一个必要过程。开源项目部门的职责包括:制定使用、分配、选择和审查代码的相关政策;培训开发技术人员和服从法律法规。
+很多互联网企业如 Google、 Facebook、 Twitter 等,都已经正式建立了开源计划(有的公司中建立了单独的开源计划部门(OSPO)),这是在公司内部消化和支持开源产品的地方。在这样一个实际的部门中,企业可以清晰透明地执行开源策略,这是企业成功开源化的一个必要过程。开源计划部门的职责包括:制定使用、分配、选择和审查代码的相关政策;培育开源社区;培训开发技术人员和确保法律合规。
-互联网企业并不是唯一一种运行开源项目的企业,有调查发现产业中 [65% 的企业][1]的运营都与开源相关。在过去几年中 [VMware][2], [Amazon][3], [Microsoft][4] 等企业,甚至连[英国政府][5]都开始聘用开源相关人员,开展开源项目。可见近年来商业领域乃至政府都十分重视开源策略,在这样的环境下,各界也需要跟上他们的步伐,建立开源项目。
+互联网企业并不是唯一建立开源计划的企业,有调查发现各种行业中有 [65% 的企业][1]的在使用开源和向开源贡献。在过去几年中 [VMware][2]、 [Amazon][3]、 [Microsoft][4] 等企业,甚至连[英国政府][5]都开始聘用开源管理人员,开展开源计划。可见近年来商业领域乃至政府都十分重视开源策略,在这样的环境下,各界也需要跟上他们的步伐,建立开源计划。
-### 怎样建立开源项目
+### 怎样建立开源计划
-虽然根据企业的需求不同,各开源部门会有特殊的调整,但下面几个基本步骤是建立每个开源部门都会经历的,它们是:
+虽然根据企业的需求不同,各开源计划部门会有特殊的调整,但下面几个基本步骤是建立每个公司都会经历的,它们是:
- * **选定一位领导者:** 选出一位合适的领导之是建立开源项目的第一步。 [TODO Group][6] 发布了一份[开源人员基础工作任务清单][7],你可以根据这个清单筛选人员。
- * **确定项目构架:** 开源项目可以根据其服务的企业类型改变侧重点,来适应不同种类的企业需求,以在各类企业中成功运行。知识型企业可以把开源项目放在法律事务部运行,技术驱动型企业可以把开源项目放在着眼于提高企业效能的部门中,如工程部。其他类型的企业可以把开源项目放在市场部内运行,以此促进开源产品的销售。TODO Group 发布的[开源项目案例][8]或许可以给你些启发。
- * **制定规章制度:** 开源策略的实施需要有一套规章制度,其中应当具体列出企业成员进行开源工作的标准流程,来减少失误的发生。这个流程应当简洁明了且简单易行,最好可以用设备进行自动监察。如果工作人员有质疑标准流程的热情和能力,并提出改进意见,那再好不过了。许多活跃在开源领域的企业中,Google 和 TODO 发布的规章制度十分值得借鉴。你可以参照 [Google 发布的制度][9]起草适用于自己企业的规章制度,用 [TODO 的规章制度][10]进行参考。
+ * **选定一位领导者:** 选出一位合适的领导之是建立开源计划的第一步。 [TODO Group][6] 发布了一份[开源人员基础工作任务清单][7],你可以根据这个清单筛选人员。
+ * **确定计划构架:** 开源计划部门可以根据其服务的企业类型的侧重点,来适应不同种类的企业需求,以在各类企业中成功运行。知识型企业可以把开源计划放在法律事务部运行,技术驱动型企业可以把开源计划放在着眼于提高企业效能的部门中,如工程部。其他类型的企业可以把开源计划放在市场部内运行,以此促进开源产品的销售。TODO Group 发布的[开源计划案例][8]或许可以给你些启发。
+ * **制定规章制度:** 开源策略的实施需要有一套规章制度,其中应当具体列出企业成员进行开源工作的标准流程,来减少失误的发生。这个流程应当简洁明了且简单易行,最好可以用设备进行自动化。如果工作人员有质疑标准流程的热情和能力,并提出改进意见,那再好不过了。许多活跃在开源领域的企业中,Google 发布的规章制度十分值得借鉴。你可以参照 [Google 发布的制度][9]起草适用于自己企业的规章制度,用 [TODO 提供其它开源策略][10]也可以参考。
-### 建立开源项目是企业发展中的关键一步
+### 建立开源计划是企业发展中的关键一步
-建立开源项目部门对很多企业来说是关键一步,尤其是对于那些软件公司或是想要转型进入软件领域的公司。不论雇员的满意度或是开发效率上,在开源项目中企业可以获得巨大的利益,这些利益远远大于对开源项目需要的长期投资。在开源之路上有很多资源可以帮助你成功,例如 TODO Group 的[《怎样创建开源项目》][11],[《开源项目的价值评估》][12]和[《管理开源项目的几种工具》][13]都很适合初学者阅读。
+建立开源计划部门对很多企业来说是关键一步,尤其是对于那些软件公司或是想要转型进入软件领域的公司。不论雇员的满意度或是开发效率上,在开源计划中企业可以获得巨大的利益,这些利益远远大于对开源计划所需要的长期投资。在开源之路上有很多资源可以帮助你成功,例如 TODO Group 的[《怎样创建开源计划》][11]、[《开源计划的价值评估》][12]和[《管理开源计划的几种工具》][13]都很适合初学者阅读。
-随着越来越多的企业形成开源项目,开源社区自身的可持续性逐渐加强,这会对这些企业的开源项目产生积极影响,促进企业的发展,这是企业和开源间的良性循环。我希望以上这些信息能够帮到你,祝你在建立开源项目的路上一路顺风。
+随着越来越多的企业形成开源计划,开源社区自身的可持续性逐渐加强,这会对这些企业的开源计划产生积极影响,促进企业的发展,这是企业和开源间的良性循环。我希望以上这些信息能够帮到你,祝你在建立开源计划的路上一路顺风。
--------------------------------------------------------------------------------
@@ -27,7 +29,7 @@ via: https://opensource.com/article/18/1/how-start-open-source-program-your-comp
作者:[Chris Aniszczyk][a]
译者:[Valoniakim](https://github.com/Valoniakim)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b76da39c26b2068ceca9cb9c233109afa6f0354c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 23:44:23 +0800
Subject: [PATCH 098/220] PUB: 20180206 How to start an open source program in
your company.md
@Valoniakim https://linux.cn/article-9577-1.html
---
...0180206 How to start an open source program in your company.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/talk => published}/20180206 How to start an open source program in your company.md (100%)
diff --git a/translated/talk/20180206 How to start an open source program in your company.md b/published/20180206 How to start an open source program in your company.md
similarity index 100%
rename from translated/talk/20180206 How to start an open source program in your company.md
rename to published/20180206 How to start an open source program in your company.md
From 69610ee1cf51372441a061d09a526fcdbdb8da83 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 23:52:12 +0800
Subject: [PATCH 099/220] PRF:20180307 An Open Source Desktop YouTube Player
For Privacy-minded People.md
@geekpi
---
...ouTube Player For Privacy-minded People.md | 24 ++++++++-----------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
index 20a352349d..4dc842ff4c 100644
--- a/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
+++ b/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
@@ -1,4 +1,4 @@
-注重隐私的开源桌面 YouTube 播放器
+FreeTube:注重隐私的开源桌面 YouTube 播放器
======

@@ -13,36 +13,34 @@
* 本地存储订阅、历史记录和已保存的视频。
* 导入/备份订阅。
* 迷你播放器。
-* 轻/黑暗的主题。
+* 亮/暗的主题。
* 免费、开源。
* 跨平台。
-
-
### 安装 FreeTube
-进入[**发布页面**][1]并根据你使用的操作系统获取版本。在本指南中,我将使用 **.tar.gz** 文件。
+进入[发布页面][1]并根据你使用的操作系统获取版本。在本指南中,我将使用 **.tar.gz** 文件。
+
```
$ wget https://github.com/FreeTubeApp/FreeTube/releases/download/v0.1.3-beta/FreeTube-linux-x64.tar.xz
-
```
解压下载的归档:
+
```
$ tar xf FreeTube-linux-x64.tar.xz
-
```
进入 Freetube 文件夹:
+
```
$ cd FreeTube-linux-x64/
-
```
使用命令启动 Freeube:
+
```
$ ./FreeTub
-
```
这就是 FreeTube 默认界面的样子。
@@ -51,7 +49,7 @@ $ ./FreeTub
### 用法
-FreeTube 目前使用 **YouTube API ** 搜索视频。然后,它使用 **Youtube-dl HTTP API** 获取原始视频文件并在基础的 HTML5 视频播放器中播放它们。由于订阅、历史记录和已保存的视频都存储在本地系统中,因此你的详细信息将不会发送给 Google 或其他任何人。
+FreeTube 目前使用 **YouTube API** 搜索视频。然后,它使用 **Youtube-dl HTTP API** 获取原始视频文件并在基础的 HTML5 视频播放器中播放它们。由于订阅、历史记录和已保存的视频都存储在本地系统中,因此你的详细信息将不会发送给 Google 或其他任何人。
在搜索框中输入视频名称,然后按下回车键。FreeTube 会根据你的搜索查询列出结果。
@@ -67,9 +65,7 @@ FreeTube 目前使用 **YouTube API ** 搜索视频。然后,它使用 **Youtu
请注意,FreeTube 仍处于 **beta** 阶段,所以仍然有 bug。如果有任何 bug,请在本指南最后给出的 GitHub 页面上报告。
-干杯!
-
-
+干杯!
--------------------------------------------------------------------------------
@@ -77,7 +73,7 @@ via: https://www.ostechnix.com/freetube-an-open-source-desktop-youtube-player-fo
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From c29f3a961db5d4114e24b6002e343034074c5bab Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 25 Apr 2018 23:52:31 +0800
Subject: [PATCH 100/220] PUB:20180307 An Open Source Desktop YouTube Player
For Privacy-minded People.md
@geekpi
---
...pen Source Desktop YouTube Player For Privacy-minded People.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md (100%)
diff --git a/translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md b/published/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
similarity index 100%
rename from translated/tech/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
rename to published/20180307 An Open Source Desktop YouTube Player For Privacy-minded People.md
From 35133181cd419b53df146701c75e8c85501fbabe Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 00:00:54 +0800
Subject: [PATCH 101/220] PRF:20180328 How To Create-Extend Swap Partition In
Linux Using LVM.md
@geekpi
---
...xtend Swap Partition In Linux Using LVM.md | 46 +++++++++----------
1 file changed, 21 insertions(+), 25 deletions(-)
diff --git a/translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md b/translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
index c897f526a4..dd27d0ba42 100644
--- a/translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
+++ b/translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
@@ -1,12 +1,9 @@
-如何在 Linux 中使用 LVM 创建/扩展交换分区
+如何在 Linux 中使用 LVM 创建和扩展交换分区
======
+
我们使用 LVM 进行灵活的卷管理,为什么我们不能将 LVM 用于交换分区呢?
-这让用户在需要时增加交换分区。
-
-如果你升级系统中的内存,则需要添加更多交换空间。
-
-这有助于你管理运行需要大量内存的应用的系统。
+这可以让用户在需要时增加交换分区。如果你升级系统中的内存,则需要添加更多交换空间。这有助于你管理运行需要大量内存的应用的系统。
可以通过三种方式创建交换分区
@@ -14,13 +11,12 @@
* 创建一个新的交换文件
* 在现有逻辑卷(LVM)上扩展交换分区
-
-
建议创建专用交换分区而不是交换文件。
**建议阅读:**
-**(#)** [3 种简单的方法在 Linux 中创建或扩展交换空间][1]
-**(#)** [使用 Shell 脚本在 Linux 中自动创建/删除和挂载交换文件][2]
+
+* [3 种简单的方法在 Linux 中创建或扩展交换空间][1]
+* [使用 Shell 脚本在 Linux 中自动创建/删除和挂载交换文件][2]
Linux 中推荐的交换大小是多少?
@@ -28,47 +24,48 @@ Linux 中推荐的交换大小是多少?
当物理内存 (RAM) 已满时,将使用 Linux 中的交换空间。当物理内存已满时,内存中的非活动页将移到交换空间。
-这有助于系统连续运行应用程序,但它不被认为是更多内存的替代品。
+这有助于系统连续运行应用程序,但它不能当做是更多内存的替代品。
-交换空间位于硬盘上,因此它不会像物理内存那样处理请求。
+交换空间位于硬盘上,因此它不能像物理内存那样处理请求。
-### 如何使用LVM创建交换分区
+### 如何使用 LVM 创建交换分区
由于我们已经知道如何创建逻辑卷,所以交换分区也是如此。只需按照以下过程。
创建你需要的逻辑卷。在我这里,我要创建 `5GB` 的交换分区。
+
```
$ sudo lvcreate -L 5G -n LogVol_swap1 vg00
Logical volume "LogVol_swap1" created.
-
```
格式化新的交换空间。
+
```
$ sudo mkswap /dev/vg00/LogVol_swap1
Setting up swapspace version 1, size = 5 GiB (5368705024 bytes)
no label, UUID=d278e9d6-4c37-4cb0-83e5-2745ca708582
-
```
将以下条目添加到 `/etc/fstab` 中。
+
```
# vi /etc/fstab
/dev/mapper/vg00-LogVol_swap1 swap swap defaults 0 0
-
```
启用扩展逻辑卷。
+
```
$ sudo swapon -va
swapon: /swapfile: already active -- ignored
swapon: /dev/mapper/vg00-LogVol_swap1: found signature [pagesize=4096, signature=swap]
swapon: /dev/mapper/vg00-LogVol_swap1: pagesize=4096, swapsize=5368709120, devsize=5368709120
swapon /dev/mapper/vg00-LogVol_swap1
-
```
测试交换空间是否已正确添加。
+
```
$ cat /proc/swaps
Filename Type Size Used Priority
@@ -79,7 +76,6 @@ $ free -g
total used free shared buff/cache available
Mem: 1 1 0 0 0 0
Swap: 6 0 6
-
```
### 如何使用 LVM 扩展交换分区
@@ -87,40 +83,41 @@ Swap: 6 0 6
只需按照以下过程来扩展 LVM 交换逻辑卷。
禁用相关逻辑卷的交换。
+
```
$ sudo swapoff -v /dev/vg00/LogVol_swap1
swapoff /dev/vg00/LogVol_swap1
-
```
-调整逻辑卷的大小。我将把交换空间从 `5GB 增加到 11GB`。
+调整逻辑卷的大小。我将把交换空间从 `5GB` 增加到 `11GB`。
+
```
$ sudo lvresize /dev/vg00/LogVol_swap1 -L +6G
Size of logical volume vg00/LogVol_swap1 changed from 5.00 GiB (1280 extents) to 11.00 GiB (2816 extents).
Logical volume vg00/LogVol_swap1 successfully resized.
-
```
格式化新的交换空间。
+
```
$ sudo mkswap /dev/vg00/LogVol_swap1
mkswap: /dev/vg00/LogVol_swap1: warning: wiping old swap signature.
Setting up swapspace version 1, size = 11 GiB (11811155968 bytes)
no label, UUID=2e3b2ee0-ad0b-402c-bd12-5a9431b73623
-
```
启用扩展逻辑卷。
+
```
$ sudo swapon -va
swapon: /swapfile: already active -- ignored
swapon: /dev/mapper/vg00-LogVol_swap1: found signature [pagesize=4096, signature=swap]
swapon: /dev/mapper/vg00-LogVol_swap1: pagesize=4096, swapsize=11811160064, devsize=11811160064
swapon /dev/mapper/vg00-LogVol_swap1
-
```
测试逻辑卷是否已正确扩展。
+
```
$ free -g
total used free shared buff/cache available
@@ -131,7 +128,6 @@ $ cat /proc/swaps
Filename Type Size Used Priority
/swapfile file 1459804 237024 -1
/dev/dm-0 partition 11534332 0 -2
-
```
--------------------------------------------------------------------------------
@@ -140,7 +136,7 @@ via: https://www.2daygeek.com/how-to-create-extend-swap-partition-in-linux-using
作者:[Ramya Nuvvula][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 2aa55cdd41ecc271d3d958cf97d11f140409376b Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 00:01:22 +0800
Subject: [PATCH 102/220] PUB:20180328 How To Create-Extend Swap Partition In
Linux Using LVM.md
@geekpi
---
...0328 How To Create-Extend Swap Partition In Linux Using LVM.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md (100%)
diff --git a/translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md b/published/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
similarity index 100%
rename from translated/tech/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
rename to published/20180328 How To Create-Extend Swap Partition In Linux Using LVM.md
From 5823ed226af35cb97289dea293bcbc9a5de74fd0 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 00:08:10 +0800
Subject: [PATCH 103/220] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=A4=9A=E4=BD=99?=
=?UTF-8?q?=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@lujun9972 @FSSlc
---
...ow to do math on the Linux command line.md | 347 ------------------
1 file changed, 347 deletions(-)
delete mode 100644 sources/tech/201804417 How to do math on the Linux command line.md
diff --git a/sources/tech/201804417 How to do math on the Linux command line.md b/sources/tech/201804417 How to do math on the Linux command line.md
deleted file mode 100644
index a7a9c64a70..0000000000
--- a/sources/tech/201804417 How to do math on the Linux command line.md
+++ /dev/null
@@ -1,347 +0,0 @@
-How to do math on the Linux command line
-======
-
-
-Can you do math on the Linux command line? You sure can! In fact, there are quite a few commands that can make the process easy and some you might even find interesting. Let's look at some very useful commands and syntax for command line math.
-
-### expr
-
-First and probably the most obvious and commonly used command for performing mathematical calculations on the command line is the **expr** (expression) command. It can manage addition, subtraction, division, and multiplication. It can also be used to compare numbers. Here are some examples:
-
-#### Incrementing a variable
-```
-$ count=0
-$ count=`expr $count + 1`
-$ echo $count
-1
-
-```
-
-#### Performing a simple calculations
-```
-$ expr 11 + 123
-134
-$ expr 134 / 11
-12
-$ expr 134 - 11
-123
-$ expr 11 * 123
-expr: syntax error <== oops!
-$ expr 11 \* 123
-1353
-$ expr 20 % 3
-2
-
-```
-
-Notice that you have to use a \ character in front of * to avoid the syntax error. The % operator is for modulo calculations.
-
-Here's a slightly more complex example:
-```
-participants=11
-total=156
-share=`expr $total / $participants`
-remaining=`expr $total - $participants \* $share`
-echo $share
-14
-echo $remaining
-2
-
-```
-
-If we have 11 participants in some event and 156 prizes to distribute, each participant's fair share of the take is 14, leaving 2 in the pot.
-
-#### Making comparisons
-
-Now let's look at the logic for comparisons. These statements may look a little odd at first. They are not setting values, but only comparing the numbers. What **expr** is doing in the examples below is determining whether the statements are true. If the result is 1, the statement is true; otherwise, it's false.
-```
-$ expr 11 = 11
-1
-$ expr 11 = 12
-0
-
-```
-
-Read them as "Does 11 equal 11?" and "Does 11 equal 12?" and you'll get used to how this works. Of course, no one would be asking if 11 equals 11 on the command line, but they might ask if $age equals 11.
-```
-$ age=11
-$ expr $age = 11
-1
-
-```
-
-If you put the numbers in quotes, you'd actually be doing a string comparison rather than a numeric one.
-```
-$ expr "11" = "11"
-1
-$ expr "eleven" = "11"
-0
-
-```
-
-In the following examples, we're asking whether 10 is greater than 5 and, then, whether it's greater than 99.
-```
-$ expr 10 \> 5
-1
-$ expr 10 \> 99
-0
-
-```
-
-Of course, having true comparisons resulting in 1 and false resulting in 0 goes against what we generally expect on Linux systems. The example below shows that using **expr** in this kind of context doesn't work because **if** works with the opposite orientation (0=true).
-```
-#!/bin/bash
-
-echo -n "Cost to us> "
-read cost
-echo -n "Price we're asking> "
-read price
-
-if [ `expr $price \> $cost` ]; then
- echo "We make money"
-else
- echo "Don't sell it"
-fi
-
-```
-
-Now, let's run this script:
-```
-$ ./checkPrice
-Cost to us> 11.50
-Price we're asking> 6
-We make money
-
-```
-
-That sure isn't going to help with sales! With a small change, this would work as we'd expect:
-```
-#!/bin/bash
-
-echo -n "Cost to us> "
-read cost
-echo -n "Price we're asking> "
-read price
-
-if [ `expr $price \> $cost` == 1 ]; then
- echo "We make money"
-else
- echo "Don't sell it"
-fi
-
-```
-
-### factor
-
-The **factor** command works just like you'd probably expect. You feed it a number, and it tells you what its factors are.
-```
-$ factor 111
-111: 3 37
-$ factor 134
-134: 2 67
-$ factor 17894
-17894: 2 23 389
-$ factor 1987
-1987: 1987
-
-```
-
-NOTE: The factor command didn't get very far on factoring that last value because 1987 is a **prime number**.
-
-### jot
-
-The **jot** command allows you to create a list of numbers. Provide it with the number of values you want to see and the number that you want to start with.
-```
-$ jot 8 10
-10
-11
-12
-13
-14
-15
-16
-17
-
-```
-
-You can also use **jot** like this. Here we're asking it to decrease the numbers by telling it we want to stop when we get to 2:
-```
-$ jot 8 10 2
-10
-9
-8
-7
-5
-4
-3
-2
-
-```
-
-The **jot** command can be useful if you want to iterate through a series of numbers to create a list for some other purpose.
-```
-$ for i in `jot 7 17`; do echo April $i; done
-April 17
-April 18
-April 19
-April 20
-April 21
-April 22
-April 23
-
-```
-
-### bc
-
-The **bc** command is probably one of the best tools for doing calculations on the command line. Enter the calculation that you want performed, and pipe it to the command like this:
-```
-$ echo "123.4+5/6-(7.89*1.234)" | bc
-113.664
-
-```
-
-Notice that **bc** doesn't shy away from precision and that the string you need to enter is fairly straightforward. It can also make comparisons, handle Booleans, and calculate square roots, sines, cosines, tangents, etc.
-```
-$ echo "sqrt(256)" | bc
-16
-$ echo "s(90)" | bc -l
-.89399666360055789051
-
-```
-
-In fact, **bc** can even calculate pi. You decide how many decimal points you want to see:
-```
-$ echo "scale=5; 4*a(1)" | bc -l
-3.14156
-$ echo "scale=10; 4*a(1)" | bc -l
-3.1415926532
-$ echo "scale=20; 4*a(1)" | bc -l
-3.14159265358979323844
-$ echo "scale=40; 4*a(1)" | bc -l
-3.1415926535897932384626433832795028841968
-
-```
-
-And **bc** isn't just for receiving data through pipes and sending answers back. You can also start it interactively and enter the calculations you want it to perform. Setting the scale (as shown below) determines how many decimal places you'll see.
-```
-$ bc
-bc 1.06.95
-Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
-This is free software with ABSOLUTELY NO WARRANTY.
-For details type `warranty'.
-scale=2
-3/4
-.75
-2/3
-.66
-quit
-
-```
-
-Using **bc** , you can also convert numbers between different bases. The **obase** setting determines the output base.
-```
-$ bc
-bc 1.06.95
-Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
-This is free software with ABSOLUTELY NO WARRANTY.
-For details type `warranty'.
-obase=16
-16 <=== entered
-10 <=== response
-256 <=== entered
-100 <=== response
-quit
-
-```
-
-One of the easiest ways to convert between hex and decimal is to use **bc** like this:
-```
-$ echo "ibase=16; F2" | bc
-242
-$ echo "obase=16; 242" | bc
-F2
-
-```
-
-In the first example above, we're converting from hex to decimal by setting the input base (ibase) to hex (base 16). In the second, we're doing the reverse by setting the outbut base (obase) to hex.
-
-### Easy bash math
-
-With sets of double-parentheses, we can do some easy math in bash. In the examples below, we create a variable and give it a value and then perform addition, decrement the result, and then square the remaining value.
-```
-$ ((e=11))
-$ (( e = e + 7 ))
-$ echo $e
-18
-
-$ ((e--))
-$ echo $e
-17
-
-$ ((e=e**2))
-$ echo $e
-289
-
-```
-
-The arithmetic operators allow you to:
-```
-+ - Add and subtract
-++ -- Increment and decrement
-* / % Multiply, divide, find remainder
-^ Get exponent
-
-```
-
-You can also use both logical and boolean operators:
-```
-$ ((x=11)); ((y=7))
-$ if (( x > y )); then
-> echo "x > y"
-> fi
-x > y
-
-$ ((x=11)); ((y=7)); ((z=3))
-$ if (( x > y )) >> (( y > z )); then
-> echo "letters roll downhill"
-> fi
-letters roll downhill
-
-```
-
-or if you prefer ...
-```
-$ if [ x > y ] << [ y > z ]; then echo "letters roll downhill"; fi
-letters roll downhill
-
-```
-
-Now let's raise 2 to the 3rd power:
-```
-$ echo "2 ^ 3"
-2 ^ 3
-$ echo "2 ^ 3" | bc
-8
-
-```
-
-### Wrap-up
-
-There are sure a lot of different ways to work with numbers and perform calculations on the command line on Linux systems. I hope you picked up a new trick or two by reading this post.
-
-Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html
-
-作者:[Sandra Henry-Stocker][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[1]:https://www.facebook.com/NetworkWorld/
-[2]:https://www.linkedin.com/company/network-world
From 47af3a8fdd010b7c8487c1608e0d4e257ab35730 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 26 Apr 2018 08:47:36 +0800
Subject: [PATCH 104/220] translated
---
...aphical Activity Monitor, Written In Go.md | 138 ------------------
...aphical Activity Monitor, Written In Go.md | 136 +++++++++++++++++
2 files changed, 136 insertions(+), 138 deletions(-)
delete mode 100644 sources/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
create mode 100644 translated/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
diff --git a/sources/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md b/sources/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
deleted file mode 100644
index 14cb0df03b..0000000000
--- a/sources/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
+++ /dev/null
@@ -1,138 +0,0 @@
-translating---geekpi
-
-Yet Another TUI Graphical Activity Monitor, Written In Go
-======
-
-
-You already know about “top” command, don’t you? Yes, It provides dynamic real-time information about running processes in any Unix-like operating systems. A few developers have built graphical front-ends for top command, so the users can easily find out their system’s activity in a graphical window. One of them is **Gotop**. As the name implies, Gotop is a TUI graphical activity monitor, written in **Go** language. It is completely free, open source and inspired by [gtop][1] and [vtop][2] programs.
-
-In this brief guide, we are going to discuss how to install and use Gotop program to monitor a Linux system’s activity.
-
-### Installing Gotop
-
-Gotop is written using Go, so we need to install it first. To install Go programming language in Linux, refer the following guide.
-
-After installing Go, download the latest Gotop binary using the following command.
-```
-$ sh -c "$(curl https://raw.githubusercontent.com/cjbassi/gotop/master/download.sh)"
-
-```
-
-And, then move the downloaded binary to your $PATH, for example **/usr/local/bin/**.
-```
-$ cp gotop /usr/local/bin
-
-```
-
-Finally, make it executable using command:
-```
-$ chmod +x /usr/local/bin/gotop
-
-```
-
-If you’re using Arch-based systems, Gotop is available in **AUR** , so you can install it using any AUR helper programs.
-
-Using [**Cower**][3]:
-```
-$ cower -S gotop
-
-```
-
-Using [**Pacaur**][4]:
-```
-$ pacaur -S gotop
-
-```
-
-Using [**Packer**][5]:
-```
-$ packer -S gotop
-
-```
-
-Using [**Trizen**][6]:
-```
-$ trizen -S gotop
-
-```
-
-Using [**Yay**][7]:
-```
-$ yay -S gotop
-
-```
-
-Using [yaourt][8]:
-```
-$ yaourt -S gotop
-
-```
-
-### Usage
-
-Gotop usage is easy! All you have to do is to run the following command from the Terminal.
-```
-$ gotop
-
-```
-
-There you go! You will see the usage of your system’s CPU, disk, memory, network, cpu temperature and process list in a simple TUI window.
-
-![][10]
-
-To show only CPU, Mem and Process widgets, use **-m** flag like below.
-```
-$ gotop -m
-
-```
-
-![][11]
-
-You can sort the process table by using the following keyboard shortcuts.
-
- * **c** – CPU
- * **m** – Mem
- * **p** – PID
-
-
-
-For process navigation, use the following keys.
-
- * **UP/DOWN** arrows or **j/k** keys to go up and down.
- * **Ctrl-d** and **Ctrl-u** – up and down half a page.
- * **Ctrl-f** and **Ctrl-b** – up and down a full page.
- * **gg** and **G** – iump to top and bottom.
-
-
-
-Press **< TAB>** to toggle process grouping. To kill the selected process or process group, type **dd**. To select a process, just click on it. To scroll down/up, use the mouse scroll button. To zoom in and zoom out CPU and memory graphs, use **h** and **l**. To display the help menu at anytime, just press **?**.
-
-**Recommended read:**
-
-And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/gotop-yet-another-tui-graphical-activity-monitor-written-in-go/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://github.com/aksakalli/gtop
-[2]:https://github.com/MrRio/vtop
-[3]:https://www.ostechnix.com/cower-simple-aur-helper-arch-linux/
-[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[6]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
-[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[8]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[9]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/Gotop-1.png
-[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/Gotop-2.png
diff --git a/translated/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md b/translated/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
new file mode 100644
index 0000000000..6df5c2dc11
--- /dev/null
+++ b/translated/tech/20180409 Yet Another TUI Graphical Activity Monitor, Written In Go.md
@@ -0,0 +1,136 @@
+另一个 TUI 图形活动监视器,使用 Go 编写
+======
+
+
+你已经知道 “top” 命令,对么?是的,它提供类 Unix 操作系统中运行中的进程的动态实时信息。一些开发人员为 top 命令构建了图形前端,因此用户可以在图形窗口中轻松找到他们系统的活动。其中之一是 **Gotop**。顾名思义,Gotop 是一个 TUI 图形活动监视器,使用 **Go** 语言编写。它是完全免费、开源的,受到 [gtop][1] 和 [vtop][2] 的启发。
+
+在此简要的指南中,我们将讨论如何安装和使用 Gotop 来监视 Linux 系统的活动。
+
+### 安装 Gotop
+
+Gotop 是用 Go 编写的,所以我们需要先安装它。要在 Linux 中安装 Go 语言,请参阅以下指南。
+
+安装 Go 之后,使用以下命令下载最新的 Gotop 二进制文件。
+```
+$ sh -c "$(curl https://raw.githubusercontent.com/cjbassi/gotop/master/download.sh)"
+
+```
+
+然后,将下载的二进制文件移动到您的 $PATH 中,例如 **/usr/local/bin/**。
+```
+$ cp gotop /usr/local/bin
+
+```
+
+最后,用下面的命令使其可执行:
+```
+$ chmod +x /usr/local/bin/gotop
+
+```
+
+如果你使用的是基于 Arch 的系统,Gotop 存在于 **AUR** 中,所以你可以使用任何 AUR 助手程序进行安装。
+
+使用 [**Cower**][3]:
+```
+$ cower -S gotop
+
+```
+
+使用 [**Pacaur**][4]:
+```
+$ pacaur -S gotop
+
+```
+
+使用 [**Packer**][5]:
+```
+$ packer -S gotop
+
+```
+
+使用 [**Trizen**][6]:
+```
+$ trizen -S gotop
+
+```
+
+使用 [**Yay**][7]:
+```
+$ yay -S gotop
+
+```
+
+使用 [yaourt][8]:
+```
+$ yaourt -S gotop
+
+```
+
+### 用法
+
+Gotop 的使用非常简单!你所要做的就是从终端运行以下命令。
+```
+$ gotop
+
+```
+
+这样就行了!你将在简单的 TUI 窗口中看到系统 CPU、磁盘、内存、网络、CPU温度和进程列表的使用情况。
+
+![][10]
+
+要仅显示CPU、内存和进程组件,请使用下面的 **-m** 标志
+```
+$ gotop -m
+
+```
+
+![][11]
+
+你可以使用以下键盘快捷键对进程表进行排序。
+
+ * **c** – CPU
+ * **m** – 内存
+ * **p** – PID
+
+
+
+对于进程浏览,请使用以下键。
+
+ * **上/下** 箭头或者 **j/k** 键用于上移下移。
+ * **Ctrl-d** 和 **Ctrl-u** – 上移和下移半页。
+ * **Ctrl-f** 和 **Ctrl-b** – 上移和下移整页。
+ * **gg** 和 **G** – 跳转顶部和底部。
+
+
+
+按下 **< TAB>** 切换进程分组。要杀死选定的进程或进程组,请输入 **dd**。要选择一个进程,只需点击它。要向下/向上滚动,请使用鼠标滚动按钮。要放大和缩小 CPU 和内存图,请使用 **h** 和 **l**。要显示帮助菜单,只需按 **?**。
+
+**推荐阅读:**
+
+就是这些了。希望这有帮助。还有更多好东西。敬请关注!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/gotop-yet-another-tui-graphical-activity-monitor-written-in-go/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://github.com/aksakalli/gtop
+[2]:https://github.com/MrRio/vtop
+[3]:https://www.ostechnix.com/cower-simple-aur-helper-arch-linux/
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
+[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[8]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[9]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/Gotop-1.png
+[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/Gotop-2.png
From 04c70649edc1109a0c5fbc0b7a46977a323c316f Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 26 Apr 2018 08:51:49 +0800
Subject: [PATCH 105/220] rename it to be compatible with Windows
---
...180425 A gentle introduction to FreeDOS - Opensource.com.md} | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
rename sources/tech/{20180425 A gentle introduction to FreeDOS | Opensource.com.md => 20180425 A gentle introduction to FreeDOS - Opensource.com.md} (99%)
diff --git a/sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md b/sources/tech/20180425 A gentle introduction to FreeDOS - Opensource.com.md
similarity index 99%
rename from sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md
rename to sources/tech/20180425 A gentle introduction to FreeDOS - Opensource.com.md
index d37c88ffd7..d68334db33 100644
--- a/sources/tech/20180425 A gentle introduction to FreeDOS | Opensource.com.md
+++ b/sources/tech/20180425 A gentle introduction to FreeDOS - Opensource.com.md
@@ -138,4 +138,4 @@ via: [https://opensource.com/article/18/4/gentle-introduction-freedos][15]
[18]: https://github.com/译者ID
[19]: https://github.com/校对者ID
[20]: https://github.com/LCTT/TranslateProject
-[21]: https://linux.cn/
\ No newline at end of file
+[21]: https://linux.cn/
From 688c9014107bea8c74892094e5f0dab9d3e0a558 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 26 Apr 2018 08:52:14 +0800
Subject: [PATCH 106/220] rename it to be compatible with Windows
---
... introduction to the GNU Core Utilities - Opensource.com.md} | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
rename sources/tech/{20180425 An introduction to the GNU Core Utilities | Opensource.com.md => 20180425 An introduction to the GNU Core Utilities - Opensource.com.md} (99%)
diff --git a/sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md b/sources/tech/20180425 An introduction to the GNU Core Utilities - Opensource.com.md
similarity index 99%
rename from sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md
rename to sources/tech/20180425 An introduction to the GNU Core Utilities - Opensource.com.md
index f6f8708fb2..aaa5a6ca00 100644
--- a/sources/tech/20180425 An introduction to the GNU Core Utilities | Opensource.com.md
+++ b/sources/tech/20180425 An introduction to the GNU Core Utilities - Opensource.com.md
@@ -143,4 +143,4 @@ via: [https://opensource.com/article/18/4/gnu-core-utilities][17]
[20]: https://github.com/译者ID
[21]: https://github.com/校对者ID
[22]: https://github.com/LCTT/TranslateProject
-[23]: https://linux.cn/
\ No newline at end of file
+[23]: https://linux.cn/
From 31bb09cc72e07e6f9bf9e4bd8d775195b31ad212 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 26 Apr 2018 08:52:30 +0800
Subject: [PATCH 107/220] rename it to be compatible with Windows
---
...ding metrics and monitoring with Python - Opensource.com.md} | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
rename sources/tech/{20180425 Understanding metrics and monitoring with Python | Opensource.com.md => 20180425 Understanding metrics and monitoring with Python - Opensource.com.md} (99%)
diff --git a/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md b/sources/tech/20180425 Understanding metrics and monitoring with Python - Opensource.com.md
similarity index 99%
rename from sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
rename to sources/tech/20180425 Understanding metrics and monitoring with Python - Opensource.com.md
index 7b401b518b..f181016aba 100644
--- a/sources/tech/20180425 Understanding metrics and monitoring with Python | Opensource.com.md
+++ b/sources/tech/20180425 Understanding metrics and monitoring with Python - Opensource.com.md
@@ -485,4 +485,4 @@ via: [https://opensource.com/article/18/4/metrics-monitoring-and-python][47]
[50]: https://github.com/译者ID
[51]: https://github.com/校对者ID
[52]: https://github.com/LCTT/TranslateProject
-[53]: https://linux.cn/
\ No newline at end of file
+[53]: https://linux.cn/
From 7c23b8598baf2efdf10ac6de4748d9f6f94b92ab Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 26 Apr 2018 08:53:45 +0800
Subject: [PATCH 108/220] translating
---
sources/tech/20180420 A Perl module for better debugging.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180420 A Perl module for better debugging.md b/sources/tech/20180420 A Perl module for better debugging.md
index a83ff53c7e..ca9f8bd8fd 100644
--- a/sources/tech/20180420 A Perl module for better debugging.md
+++ b/sources/tech/20180420 A Perl module for better debugging.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
A Perl module for better debugging
======
From 5c596da6d81541fef6ec5de07ed715d81aba5449 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 26 Apr 2018 11:27:46 +0800
Subject: [PATCH 109/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20There=E2=80=99s?=
=?UTF-8?q?=20a=20Server=20in=20Every=20Serverless=20Platform?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...s a Server in Every Serverless Platform.md | 87 +++++++++++++++++++
1 file changed, 87 insertions(+)
create mode 100644 sources/talk/20180424 There-s a Server in Every Serverless Platform.md
diff --git a/sources/talk/20180424 There-s a Server in Every Serverless Platform.md b/sources/talk/20180424 There-s a Server in Every Serverless Platform.md
new file mode 100644
index 0000000000..9bc935c06d
--- /dev/null
+++ b/sources/talk/20180424 There-s a Server in Every Serverless Platform.md
@@ -0,0 +1,87 @@
+There’s a Server in Every Serverless Platform
+======
+
+
+Serverless computing or Function as a Service (FaaS) is a new buzzword created by an industry that loves to coin new terms as market dynamics change and technologies evolve. But what exactly does it mean? What is serverless computing?
+
+Before getting into the definition, let’s take a brief history lesson from Sirish Raghuram, CEO and co-founder of Platform9, to understand the evolution of serverless computing.
+
+“In the 90s, we used to build applications and run them on hardware. Then came virtual machines that allowed users to run multiple applications on the same hardware. But you were still running the full-fledged OS for each application. The arrival of containers got rid of OS duplication and process level isolation which made it lightweight and agile,” said Raghuram.
+
+Serverless, specifically Function as a Service, takes it to the next level as users are now able to code functions and run them at the granularity of build, ship and run. There is no complexity of underlying machinery needed to run those functions. No need to worry about spinning up containers using Kubernetes. Everything is hidden behind the scenes.
+
+“That’s what is driving a lot of interest in function as a service,” said Raghuram.
+
+### What exactly is serverless?
+
+There is no single definition of the term, but to build some consensus around the idea, the [Cloud Native Computing Foundation (CNCF)][1] Serverless Working Group wrote a [white paper][2] to define serverless computing.
+
+According to the white paper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”
+
+Ken Owens, a member of the Technical Oversight Committee at CNCF, said that the primary goal of serverless computing is to help users build and run their applications without having to worry about the cost and complexity of servers in terms of provisioning, management and scaling.
+
+“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” [said][3] Chris Aniszczyk, COO, CNCF.
+
+### It’s not without servers
+
+First things first, don’t get fooled by the term “serverless.” There are still servers in serverless computing. Remember what Raghuram said: all the machinery is hidden; it’s not gone.
+
+The clear benefit here is that developers need not concern themselves with tasks that don’t add any value to their deliverables. Instead of worrying about managing the function, they can dedicate their time to adding features and building apps that add business value. Time is money and every minute saved in management goes toward innovation. Developers don’t have to worry about scaling based on peaks and valleys; it’s automated. Because cloud providers charge only for the duration that functions are run, developers cut costs by not having to pay for blinking lights.
+
+But… someone still has to do the work behind the scenes. There are still servers offering FaaS platforms.
+
+In the case of public cloud offerings like Google Cloud Platform, AWS, and Microsoft Azure, these companies manage the servers and charge customers for running those functions. In the case of private cloud or datacenters, where developers don’t have to worry about provisioning or interacting with such servers, there are other teams who do.
+
+The CNCF white paper identifies two groups of professionals that are involved in the serverless movement: developers and providers. We have already talked about developers. But, there are also providers that offer serverless platforms; they deal with all the work involved in keeping that server running.
+
+That’s why many companies, like SUSE, refrain from using the term “serverless” and prefer the term function as a service, because they offer products that run those “serverless” servers. But what kind of functions are these? Is it the ultimate future of app delivery?
+
+### Event-driven computing
+
+Many see serverless computing as an umbrella that offers FaaS among many other potential services. According to CNCF, FaaS provides event-driven computing where functions are triggered by events or HTTP requests. “Developers run and manage application code with functions that are triggered by events or HTTP requests. Developers deploy small units of code to the FaaS, which are executed as needed as discrete actions, scaling without the need to manage servers or any other underlying infrastructure,” said the white paper.
+
+Does that mean FaaS is the silver bullet that solves all problems for developing and deploying applications? Not really. At least not at the moment. FaaS does solve problems in several use cases and its scope is expanding. A good use case of FaaS could be the functions that an application needs to run when an event takes place.
+
+Let’s take an example: a user takes a picture from a phone and uploads it to the cloud. Many things happen when the picture is uploaded - it’s scanned (exif data is read), a thumbnail is created, based on deep learning/machine learning the content of the image is analyzed, the information of the image is stored in the database. That one event of uploading that picture triggers all those functions. Those functions die once the event is over. That’s what FaaS does. It runs code quickly to perform all those tasks and then disappears.
+
+That’s just one example. Another example could be an IoT device where a motion sensor triggers an event that instructs the camera to start recording and sends the clip to the designated contact. Your thermostat may trigger the fan when the sensor detects a change in temperature. These are some of the many use cases where function as a service makes more sense than the traditional approach. This also means that not all applications (at least at the moment, though that will change as more organizations embrace serverless platforms) can be run as functions as a service.
+
+According to CNCF, serverless computing should be considered if you have these kinds of workloads:
+
+ * Asynchronous, concurrent, easy to parallelize into independent units of work
+
+ * Infrequent or has sporadic demand, with large, unpredictable variance in scaling requirements
+
+ * Stateless, ephemeral, without a major need for instantaneous cold start time
+
+ * Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity
+
+
+
+
+### Why should you care?
+
+Serverless is a very new technology and paradigm. Just as VMs and containers transformed the app development and delivery models, FaaS can also bring dramatic changes. We are still in the early days of serverless computing. As the market evolves and consensus forms around new technologies, FaaS may grow beyond the workloads and use cases mentioned here.
+
+What is becoming quite clear is that companies embarking on their cloud native journey must have serverless computing as part of their strategy. The only way to stay ahead of competitors is by keeping up with the latest technologies and trends.
+
+It’s about time to put serverless into servers.
+
+For more information, check out the CNCF Working Group's serverless whitepaper [here][2]. And, you can learn more at [KubeCon + CloudNativeCon Europe][4], coming up May 2-4 in Copenhagen, Denmark.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2018/4/theres-server-every-serverless-platform
+
+作者:[SWAPNIL BHARTIYA][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/arnieswap
+[1]:https://www.cncf.io/
+[2]:https://github.com/cncf/wg-serverless/blob/master/whitepaper/cncf_serverless_whitepaper_v1.0.pdf
+[3]:https://www.cncf.io/blog/2018/02/14/cncf-takes-first-step-towards-serverless-computing/
+[4]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/attend/register/
From 039c69dfc7046181f21489ce1715aa6a14f34a7b Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 26 Apr 2018 11:29:27 +0800
Subject: [PATCH 110/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20share?=
=?UTF-8?q?=20files=20between=20Linux=20and=20Windows?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...o share files between Linux and Windows.md | 108 ++++++++++++++++++
1 file changed, 108 insertions(+)
create mode 100644 sources/tech/20180424 How to share files between Linux and Windows.md
diff --git a/sources/tech/20180424 How to share files between Linux and Windows.md b/sources/tech/20180424 How to share files between Linux and Windows.md
new file mode 100644
index 0000000000..bea550372d
--- /dev/null
+++ b/sources/tech/20180424 How to share files between Linux and Windows.md
@@ -0,0 +1,108 @@
+How to share files between Linux and Windows
+======
+
+
+Many people today work on mixed networks, with both Linux and Windows systems playing important roles. Sharing files between the two can be critical at times and is surprisingly easy with the right tools. With fairly little effort, you can copy files from Windows to Linux or Linux to Windows. In this post, we'll look at what is needed to configure your Linux and Windows system to allow you to easily move files from one OS to the other.
+
+### Copying files between Linux and Windows
+
+The first step toward moving files between Windows and Linux is to download and install a tool such as PuTTY's pscp. You can get PuTTY from [putty.org][1] and set it up on your Windows system easily. PuTTY comes with a terminal emulator (putty) as well as tools like **pscp** for securely copying files between Linux and Windows systems. When you go to the PuTTY site, you can elect to install all of the tools or pick just the ones you want to use by choosing either the installer or the individual .exe files.
+
+You will also need to have an SSH server set up and running on your Linux system. This allows it to support the client (Windows side) connection requests. If you don't already have an SSH server set up, the following steps should work on Debian systems (Ubuntu, etc.).
+```
+sudo apt update
+sudo apt install openssh-server
+sudo service ssh start
+
+```
+
+For Red Hat and related Linux systems, use similar commands:
+```
+sudo yum install openssh-server
+sudo systemctl start sshd
+
+```
+
+Note that if you are running a firewall such as ufw, you may have to open port 22 to allow the connections.
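+
+With ufw, for example, opening the SSH port typically looks like this (adjust as needed for your own firewall setup):
+```
+sudo ufw allow 22/tcp
+
+```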
+
+Using the **pscp** command, you can then move files from Windows to Linux or vice versa. The syntax is quite straightforward with its "copy from to" commands.
+
+#### Windows to Linux
+
+In the command shown below, we are copying a file from a user's account on a Windows system to the /tmp directory on the Linux system.
+```
+C:\Program Files\PuTTY>pscp \Users\shs\copy_me.txt shs@192.168.0.18:/tmp
+shs@192.168.0.18's password:
+copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
+
+```
+
+#### Linux to Windows
+
+Moving the files from Linux to Windows is just as easy. Just reverse the arguments.
+```
+C:\Program Files\PuTTY>pscp shs@192.168.0.18:/tmp/copy_me.txt \Users\shs
+shs@192.168.0.18's password:
+copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
+
+```
+
+The process can be made a little smoother and easier if 1) pscp is in your Windows search path and 2) your Linux system is in your Windows hosts file.
+
+#### Windows search path
+
+If you install the PuTTY tools with the PuTTY installer, you will probably find that **C:\Program Files\PuTTY** is on your Windows search path. You can check to see if this is the case by typing **echo %path%** in a Windows command prompt (type "cmd" in the search bar to open the command prompt). If it is, you don't need to be concerned with where you are in the file system relative to the pscp executable. Moving into the folder containing the files you want to move will likely prove easier.
+```
+C:\Users\shs>pscp copy_me.txt shs@192.168.0.18:/tmp
+shs@192.168.0.18's password:
+copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
+
+```
+
+#### Updating your Windows hosts file
+
+Here's the other little fix. With administrator rights, you can add your Linux system to the Windows host file (C:\Windows\System32\drivers\etc\hosts) and then use the host name in place of its IP address. Keep in mind that this will not work indefinitely if the IP address on your Linux system is dynamically assigned.
+```
+C:\Users\shs>pscp copy_me.txt shs@stinkbug:/tmp
+shs@192.168.0.18's password:
+copy_me.txt | 0 kB | 0.8 kB/s | ETA: 00:00:00 | 100%
+
+```
+
+Note that Windows host files are formatted like the /etc/hosts file on Linux systems — IP address, white space and host name. Comments are prefaced with pound signs.
+```
+# Linux systems
+192.168.0.18 stinkbug
+
+```
+
+#### Those pesky line endings
+
+Keep in mind that lines in text files on Windows end with both a carriage return and a linefeed. The pscp tool will not remove the carriage returns to make the files look like Linux text files. Instead, it simply copies the files intact. You might consider installing the **tofrodos** package to enable you to use the **fromdos** and **todos** commands on your Linux system to adjust the files you are moving between platforms.
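+
+For example, once tofrodos is installed, stripping the carriage returns from a copied file is a one-liner (todos converts in the other direction):
+```
+fromdos copy_me.txt
+
+```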
+
+### Sharing folders between Windows and Linux
+
+Sharing folders is an entirely different operation. You end up mounting a Windows directory on your Linux system or a Linux directory on your Windows box so that both systems can use the same set of files rather than copying the files from one system to the other. One of the best tools for this is Samba, which emulates Windows protocols and runs on the Linux system.
+
+Once Samba is installed, you will be able to mount a Linux folder on Windows or a Windows folder on Linux. This is, of course, very different than copying files as described earlier in this post. Instead, each of the two systems involved will have access to the same files at the same time.
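+
+As a rough sketch (the server, share, and mount point names here are hypothetical), mounting a Windows share on a Linux system with the cifs-utils package might look something like this:
+```
+# server name, share name, and mount point below are example values
+sudo mount -t cifs //winbox/shared /mnt/winshare -o username=shs
+
+```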
+
+More tips on choosing the right tool for sharing files between Linux and Windows systems are available [here][2].
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3269189/linux/sharing-files-between-linux-and-windows.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[1]:https://www.putty.org
+[2]:https://www.infoworld.com/article/2617683/linux/linux-moving-files-between-unix-and-windows-systems.html
+[3]:https://www.facebook.com/NetworkWorld/
+[4]:https://www.linkedin.com/company/network-world
From 2109520c30d5ac46ed0b739210d329071c108e3a Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 21:16:38 +0800
Subject: [PATCH 111/220] PRF:20170927 Linux directory structure- -lib
explained.md
@CYLeft
---
...nux directory structure- -lib explained.md | 84 ++++++++++++-------
1 file changed, 56 insertions(+), 28 deletions(-)
diff --git a/translated/tech/20170927 Linux directory structure- -lib explained.md b/translated/tech/20170927 Linux directory structure- -lib explained.md
index 3472981eb9..66ec8e41f4 100644
--- a/translated/tech/20170927 Linux directory structure- -lib explained.md
+++ b/translated/tech/20170927 Linux directory structure- -lib explained.md
@@ -1,60 +1,88 @@
Linux 目录结构:/lib 分析
======
+
[![linux 目录 lib][1]][1]
-我们在之前的文章中已经分析了其他重要系统目录,比如 bin、/boot、/dev、 /etc 等。可以根据自己的兴趣进入下列链接了解更多信息。本文中,让我们来看看 /lib 目录都有些什么。
+我们在之前的文章中已经分析了其他重要系统目录,比如 `/bin`、`/boot`、`/dev`、 `/etc` 等。可以根据自己的兴趣进入下列链接了解更多信息。本文中,让我们来看看 `/lib` 目录都有些什么。
-[**目录结构分析:/bin 文件夹**][2]
-
-[**目录结构分析:/boot 文件夹**][3]
-
-[**目录结构分析:/dev 文件夹**][4]
-
-[**目录结构分析:/etc 文件夹**][5]
-
-[**目录结构分析:/lost+found 文件夹**][6]
-
-[**目录结构分析:/home 文件夹**][7]
+- [目录结构分析:/bin 文件夹][2]
+- [目录结构分析:/boot 文件夹][3]
+- [目录结构分析:/dev 文件夹][4]
+- [目录结构分析:/etc 文件夹][5]
+- [目录结构分析:/lost+found 文件夹][6]
+- [目录结构分析:/home 文件夹][7]
### Linux 中,/lib 文件夹是什么?
-lib 文件夹是 **库文件目录** ,包含了所有对系统有用的库文件。简单来说,它是应用程序、命令或进程正确执行所需要的文件。指令在 /bin 或 /sbin 目录,而动态库文件正是在此目录中。内核模块同样也在这里。
+`/lib` 文件夹是 **库文件目录**,包含了所有对系统有用的库文件。简单来说,它存放的是应用程序、命令或进程正确执行所需要的文件。在 `/bin` 或 `/sbin` 目录中的命令所需要的动态库文件,正是存放在此目录中。内核模块同样也在这里。
-以 pwd 命令执行为例。正确执行,需要调用一些库文件。让我们来探索一下 pwd 命令执行时都发生了什么。我们需要使用 [strace 命令][8] 找出调用的库文件。
+以 `pwd` 命令执行为例。执行它需要调用一些库文件。让我们来探索一下 `pwd` 命令执行时都发生了什么。我们需要使用 [strace 命令][8] 找出调用的库文件。
示例:
-如果你在观察的话,会发现我们使用的 pwd 命令仅进行了内核调用,命令正确执行需要调用两个库文件。
+```
+root@linuxnix:~# strace -e open pwd
+open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
+open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
+open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
+/root
++++ exited with 0 +++
+root@linuxnix:~#
+```
-Linux 中 /lib 文件夹内部信息
+如果你注意到的话,会发现我们使用的 `pwd` 命令的执行需要调用两个库文件。
+
+### Linux 中 /lib 文件夹内部信息
正如之前所说,这个文件夹包含了目标文件和一些库文件,如果能了解这个文件夹的一些重要子文件,想必是极好的。下面列举的内容是基于我自己的系统,对于你的系统来说,可能会有所不同。
-**/lib/firmware** - 这个文件夹包含了一些硬件、固件(Firmware)代码。
+```
+root@linuxnix:/lib# find . -maxdepth 1 -type d
+./firmware
+./modprobe.d
+./xtables
+./apparmor
+./terminfo
+./plymouth
+./init
+./lsb
+./recovery-mode
+./resolvconf
+./crda
+./modules
+./hdparm
+./udev
+./ufw
+./ifupdown
+./systemd
+./modules-load.d
+```
-### 硬件和固件(Firmware)之间有什么不同?
+`/lib/firmware` - 这个文件夹包含了一些硬件的固件代码。
-为了使硬件合法运行,很多设备软件有两部分软件组成。加载了一个代码片段的切实硬件就是固件,固件与内核交流的软件,被称为驱动。这样一来,确保被指派工作的硬件完成内核直接与硬件交流的工作。
+> **硬件和固件之间有什么不同?**
-**/lib/modprobe.d** - 自动处理可载入模块命令配置目录
+> 为了使硬件正常运行,很多设备软件由两部分软件组成。加载到实际硬件的代码部分就是固件,用于在固件和内核之间通讯的软件被称为驱动程序。这样一来,内核就可以直接与硬件通讯,并确保硬件完成内核指派的工作。
-**/lib/modules** - 所有可加载的内核模块都存储在这个目录下。如果你有多个内核,那这个目录下有且不仅有一个文件夹,其中每一个都代表一个内核。
+`/lib/modprobe.d` - modprobe 命令的配置目录。
-**/lib/hdparm** - 包含 SATA/IDE 硬盘正确运行的参数。
+`/lib/modules` - 所有的可加载内核模块都存储在这个目录下。如果你有多个内核,你会在这个目录下看到代表每个内核的目录(可参考下面的示例)。
-**/lib/udev** - Userspace /dev,是 Linux 内核设备管理器。这个文件夹包含了所有的 udev,类似 rules.d 这样描述特殊规则的相关文件/文件夹。
+`/lib/hdparm` - 包含 SATA/IDE 硬盘正确运行的参数。
+
+`/lib/udev` - udev(用户空间的 /dev)是 Linux 内核的设备管理器。这个文件夹包含了所有 udev 相关的文件和文件夹,例如 `rules.d` 目录就包含了 udev 规范文件。
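+
+下面是一个简单的示例(输出因系统而异),用于查看上面提到的 `/lib/modules` 中当前内核对应的模块目录:
+
+```
+$ ls /lib/modules/$(uname -r)
+```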
### /lib 的姊妹文件夹:/lib32 和 /lib64
-这两个文件夹包含了特殊结构的库文件。它们几乎和 /lib 文件夹一样,除了架构级别的差异。
+这两个文件夹包含了特殊结构的库文件。它们几乎和 `/lib` 文件夹一样,除了架构级别的差异。
### Linux 其他的库文件
-**/usr/lib** - 所有软件的库都安装在这里。但是不包含系统默认库文件和内核库文件。
+`/usr/lib` - 所有软件的库都安装在这里。但是不包含系统默认库文件和内核库文件。
-**/usr/local/lib** - 放置额外的系统文件。不同应用都可以调用。
+`/usr/local/lib` - 放置额外的系统文件。这些库能够用于各种应用。
-**/var/lib** - rpm/dpkg 数据和游戏缓存类似的动态库/文件都存储在这里。
+`/var/lib` - 存储动态数据的库和文件,例如 rpm/dpkg 数据和游戏记录。
--------------------------------------------------------------------------------
@@ -62,7 +90,7 @@ via: https://www.linuxnix.com/linux-directory-structure-lib-explained/
作者:[Surendra Anne][a]
译者:[CYLeft](https://github.com/CYLeft)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From c5e61c912be44454c6e3579166dc61ac96671af3 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 21:17:12 +0800
Subject: [PATCH 112/220] PUB:20170927 Linux directory structure- -lib
explained.md
@CYLeft https://linux.cn/article-9580-1.html
---
.../20170927 Linux directory structure- -lib explained.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170927 Linux directory structure- -lib explained.md (100%)
diff --git a/translated/tech/20170927 Linux directory structure- -lib explained.md b/published/20170927 Linux directory structure- -lib explained.md
similarity index 100%
rename from translated/tech/20170927 Linux directory structure- -lib explained.md
rename to published/20170927 Linux directory structure- -lib explained.md
From 688bd6c115b8f581f3be00b5ba15dbc7f26c7369 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 21:43:30 +0800
Subject: [PATCH 113/220] PRF:20170201 Prevent Files And Folders From
Accidental Deletion Or Modification In Linux.md
@yizhuoyan
---
...ental Deletion Or Modification In Linux.md | 218 +++++++++---------
1 file changed, 107 insertions(+), 111 deletions(-)
diff --git a/translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md b/translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
index f3d8facea3..0af616dccd 100644
--- a/translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
+++ b/translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
@@ -1,182 +1,187 @@
-Linux系统中防止文件和目录被意外的删除或修改
+如何在 Linux 系统中防止文件和目录被意外的删除或修改
======

-有时,我会不小心的按下`SHIFT+DELETE`来删除我的文件数据。是的,我是个笨蛋,不会再次确认下我实际准备要删除的东西。而且我太笨或者说太懒,没有备份我的文件数据。结果呢?数据丢失了!在一瞬间就丢失了。
+有时,我会不小心的按下 `SHIFT+DELETE`来删除我的文件数据。是的,我是个笨蛋,没有再次确认下我实际准备要删除的东西。而且我太笨或者说太懒,没有备份我的文件数据。结果呢?数据丢失了!在一瞬间就丢失了。
-这种事时不时就会发生在我身上。如果你和我一样,有个好消息告诉你。有个简单又有用的命令行工具叫**“chattr”**(**Ch** ange **Attr** ibute的缩写 ),在类Unix等发行版中,能够用来防止文件和目录被意外的删除或修改。
+这种事时不时就会发生在我身上。如果你和我一样,有个好消息告诉你。有个简单又有用的命令行工具叫 `chattr`(**Ch**ange **Attr**ibute 的缩写),在类 Unix 发行版中,能够用来防止文件和目录被意外的删除或修改。
-通过给文件或目录添加或删除某些属性,来保证用户不能删除或修改这些文件和目录,不管是有意的还是无意的,甚至root用户也不行。听起来很有用,是不是?
-
-
-在这篇简短的教程中,我们一起来看看怎么在实际应用中使用chattr命令,来防止文件和目录被意外删除。
+通过给文件或目录添加或删除某些属性,来保证用户不能删除或修改这些文件和目录,不管是有意的还是无意的,甚至 root 用户也不行。听起来很有用,是不是?
+在这篇简短的教程中,我们一起来看看怎么在实际应用中使用 `chattr` 命令,来防止文件和目录被意外删除。
### Linux 中防止文件和目录被意外删除和修改
-默认,Chattr命令在大多数现代Linux操作系统中是可用的。
+默认情况下,`chattr` 命令在大多数现代 Linux 操作系统中是可用的。
默认语法是:
```
chattr [operator] [switch] [file]
-
```
-
-chattr 具有如下操作符:
+`chattr` 具有如下操作符:
+ * 操作符 `+`,追加指定属性到文件已存在属性中
+ * 操作符 `-`,删除指定属性
+ * 操作符 `=`,直接设置文件属性为指定属性
- * 操作符**‘+’**追加指定属性到文件已存在属性中
- * 操作符**‘-‘**删除指定属性
- * 操作符**‘=’**直接设置文件属性为指定属性
+`chattr` 提供不同的属性,也就是 `aAcCdDeijsStTu`。每个字符代表一个特定文件属性。
-Chattr 提供不同的属性,也就是-**aAcCdDeijsStTu**。每个字符代表一个特定文件属性。
- * **a** – 只能向文件中添加数据,而不能删除(appened only),
- * **A** – 不更新文件或目录的最后存取时间(no atime updates),
- * **c** – 将文件或目录压缩后存放(compressed),
- * **C** – 不适用写入时复制机制(no copy on write),
- * **d** – 设定文件不能成为dump程序的备份目标(no dump),
- * **D** – 同步目录更新(synchronous directory updates),
- * **e** – extend格式存储(extent format),
- * **i** – 文件或目录不可改变(immutable),
- * **j** – 设定此参数使得当通过mount参数:data=ordered 或者 data=writeback挂载的文件系统,文件在写入时会先被记录在journal中。(data journalling),
- * **P** – project层次结构(project hierarchy),
- * **s** – 保密性删除文件或目录(secure deletion),
- * **S** – 即时更新文件或目录(synchronous updates),
- * **t** – 不进行尾部合并(no tail-merging),
- * **T** – 顶层目录层次结构(top of directory hierarchy),
- * **u** – 不可删除(undeletable).
+ * `a` – 只能向文件中添加数据
+ * `A` – 不更新文件或目录的最后访问时间
+ * `c` – 将文件或目录压缩后存放
+ * `C` – 不使用写入时复制机制(CoW)
+ * `d` – 设定文件不能成为 `dump` 程序的备份目标
+ * `D` – 同步目录更新
+ * `e` – extend 格式存储
+ * `i` – 文件或目录不可改变
+ * `j` – 设定此参数使得当通过 `mount` 参数:`data=ordered` 或者 `data=writeback` 挂载的文件系统,文件在写入时会先被记录在日志中
+ * `P` – project 层次结构
+ * `s` – 安全删除文件或目录
+ * `S` – 即时更新文件或目录
+ * `t` – 不进行尾部合并
+ * `T` – 顶层目录层次结构
+ * `u` – 不可删除
-在本教程中,我们将讨论两个属性的使用,即**a** , **i** ,这个两个属性可以用于防止文件和目录的被删除。这是我们今天的主题,对吧?来开始吧!
+在本教程中,我们将讨论两个属性的使用,即 `a`、`i` ,这两个属性可以用于防止文件和目录被删除。这是我们今天的主题,对吧?来开始吧!
### 防止文件被意外删除和修改
-我先在我的当前目录创建一个**file.txt**文件。
-```
-$ touch file.txt
+我先在我的当前目录创建一个 `file.txt` 文件。
```
-现在,我将给文件应用**“i”**属性,让文件不可改变。就是说你不能删除或修改这个文件,就算你是文件的拥有者和root用户也不行。
+$ touch file.txt
+```
+
+现在,我将给文件应用 `i` 属性,让文件不可改变。就是说你不能删除或修改这个文件,就算你是文件的拥有者和 root 用户也不行。
+
```
$ sudo chattr +i file.txt
```
-使用`lsattr`命令检查文件已有属性
+使用 `lsattr` 命令检查文件已有属性:
+
```
$ lsattr file.txt
```
-**输出:**
+输出:
+
```
----i---------e---- file.txt
```
-现在,试着用普通用户去删除文件
+现在,试着用普通用户去删除文件:
+
```
$ rm file.txt
-
```
-**输出:**
+输出:
+
```
-#不能删除文件,非法操作
+# 不能删除文件,非法操作
rm: cannot remove 'file.txt': Operation not permitted
```
-我来试试sudo特权:
+我来试试 `sudo` 特权:
+
```
$ sudo rm file.txt
```
-**输出:**
+输出:
+
```
-#不能删除文件,非法操作
+# 不能删除文件,非法操作
rm: cannot remove 'file.txt': Operation not permitted
```
-我们试试追加写内容到这个文本文件
+我们试试追加写内容到这个文本文件:
+
```
$ echo 'Hello World!' >> file.txt
-
```
-**输出:**
+输出:
+
```
-#非法操作
+# 非法操作
bash: file.txt: Operation not permitted
```
-试试 **sudo** 特权:
+试试 `sudo` 特权:
+
```
$ sudo echo 'Hello World!' >> file.txt
-
```
-**输出:**
+输出:
+
```
-#非法操作
+# 非法操作
bash: file.txt: Operation not permitted
-
```
-你应该注意到了,我们不能删除或修改这个文件,甚至root用户或者文件所有者也不行。
-要撤销属性,使用**“-i”**即可。
+你应该注意到了,我们不能删除或修改这个文件,甚至 root 用户或者文件所有者也不行。
+
+要撤销属性,使用 `-i` 即可。
+
```
$ sudo chattr -i file.txt
-
```
现在,这个不可改变属性已经被移除了。你可以删除或修改这个文件了。
+
```
$ rm file.txt
-
```
类似的,你能够限制目录被意外删除或修改,如下一节所述。
### 防止目录被意外删除和修改
-创建一个dir1目录,放入文件file.txt。
+创建一个 `dir1` 目录,放入文件 `file.txt`。
+
```
$ mkdir dir1 && touch dir1/file.txt
-
```
-现在,让目录及其内容(file.txt文件)不可改变:
+现在,让目录及其内容(`file.txt` 文件)不可改变:
+
```
$ sudo chattr -R +i dir1
-
```
-命令中,
+命令中,
- * **-R** – 递归使dir目录及其内容不可修改
- * **+i** – 使目录不可修改
+ * `-R` – 递归使 `dir1` 目录及其内容不可修改
+ * `+i` – 使目录不可修改
+现在,来试试删除这个目录,要么用普通用户,要么用 `sudo` 特权。
-现在,来试试删除这个目录,要么用普通用户,要么用sudo特权。
```
$ rm -fr dir1
-
$ sudo rm -fr dir1
-
```
-你会看到如下输出:
+你会看到如下输出:
+
```
-#不可删除'dir1/file.txt':非法操作
+# 不可删除'dir1/file.txt':非法操作
rm: cannot remove 'dir1/file.txt': Operation not permitted
-
```
-尝试用“echo”命令追加内容到文件,你成功了吗?当然,你做不到。
+
+尝试用 `echo` 命令追加内容到文件,你成功了吗?当然,你做不到。
+
撤销此属性,输入:
+
```
$ sudo chattr -R -i dir1
-
```
现在你就能像平常一样删除或修改这个目录内容了。
@@ -185,115 +190,106 @@ $ sudo chattr -R -i dir1
我们现已知道如何防止文件和目录被意外删除和修改了。接下来,我们将防止文件被删除但仅仅允许文件被追加内容。意思是你不可以编辑修改文件已存在的数据,或者重命名这个文件或者删除这个文件,你仅可以使用追加模式打开这个文件。
-
为了设置追加属性到文件或目录,我们像下面这么操作:
-**针对文件:**
+
+针对文件:
+
```
$ sudo chattr +a file.txt
-
```
-**针对目录: **
+针对目录:
+
```
$ sudo chattr -R +a dir1
-
```
-一个文件或目录被设置了‘a’这个属性就仅仅能够被追加模式打开进行写入。
+一个文件或目录被设置了 `a` 这个属性就仅仅能够以追加模式打开进行写入。
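+
+在继续之前,可以先用 `lsattr` 确认 `a` 属性已经生效(以下为示例输出,具体标志位可能因系统而异):
+
+```
+$ lsattr file.txt
+-----a--------e---- file.txt
+```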
+
添加些内容到这个文件以测试是否有效果。
+
```
$ echo 'Hello World!' >> file.txt
-
$ echo 'Hello World!' >> dir1/file.txt
-
```
-
使用 `cat` 命令查看文件内容:
+
```
$ cat file.txt
-
$ cat dir1/file.txt
-
```
-**输出:**
+输出:
+
```
Hello World!
-
```
-
你将看到你现在可以追加内容。这表示我们可以修改这个文件或目录。
现在让我们试试删除这个文件或目录。
+
```
$ rm file.txt
-
```
-**输出:**
+输出:
+
```
-#不能删除文件'file.txt':非法操作
+# 不能删除文件'file.txt':非法操作
rm: cannot remove 'file.txt': Operation not permitted
-
```
-
让我们试试删除这个目录:
+
```
$ rm -fr dir1/
-
```
-**输出:**
+输出:
+
```
-#不能删除文件'dir1/file.txt':非法操作
+# 不能删除文件'dir1/file.txt':非法操作
rm: cannot remove 'dir1/file.txt': Operation not permitted
-
```
-
删除这个属性,执行下面这个命令:
-**针对文件:**
+
+针对文件:
+
```
$ sudo chattr -R -a file.txt
-
```
-**针对目录:**
+针对目录:
+
```
$ sudo chattr -R -a dir1/
-
```
-
现在,你可以像平常一样删除或修改这个文件和目录了。
-更多详情,查看man页面。
+更多详情,查看 man 页面。
+
```
man chattr
-
```
### 总结
+保护数据是系统管理人员的主要工作之一。市场上有众多可用的免费和收费的数据保护软件。幸好,我们已经拥有这个内置命令,可以帮助我们防止数据被意外删除和修改。在你的 Linux 系统中,`chattr` 可作为保护重要系统文件和数据的附加工具。
-保护数据是系统管理人员的主要工作之一。市场上有众多可用的免费和收费的数据保护软件。幸好,我们已经拥有这个内置命令可以帮助我们去保护数据被意外的删除和修改。在你的Linux系统中,Chattr可作为保护重要系统文件和数据的附加工具。
-
-然后,这就是今天所有内容了。希望对大家有所帮助。接下来我将会在这提供其他有用的文章。在那之前,敬请期待OSTechNix。
-再见!
-
-
+好了,这就是今天的所有内容了。希望对大家有所帮助。接下来我还会在这里发布其他有用的文章。在那之前,敬请期待。再见!
--------------------------------------------------------------------------------
-来源: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
+via: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
作者:[SK][a]
译者:[yizhuoyan](https://github.com/yizhuoyan)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 5d72cde02c558a6c3768802211d45090568bc45f Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 26 Apr 2018 21:43:52 +0800
Subject: [PATCH 114/220] PUB: 20170201 Prevent Files And Folders From
Accidental Deletion Or Modification In Linux.md
@yizhuoyan
---
...d Folders From Accidental Deletion Or Modification In Linux.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md (100%)
diff --git a/translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md b/published/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
similarity index 100%
rename from translated/tech/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
rename to published/20170201 Prevent Files And Folders From Accidental Deletion Or Modification In Linux.md
From 2518b4c930d963ea2a0884201279dd75b893dc77 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 27 Apr 2018 08:38:41 +0800
Subject: [PATCH 115/220] PRF:20171123 Why microservices are a security
issue.md
@erlinux
---
... Why microservices are a security issue.md | 76 +++++++------------
1 file changed, 29 insertions(+), 47 deletions(-)
diff --git a/translated/tech/20171123 Why microservices are a security issue.md b/translated/tech/20171123 Why microservices are a security issue.md
index e0a5b3a078..816baaba85 100644
--- a/translated/tech/20171123 Why microservices are a security issue.md
+++ b/translated/tech/20171123 Why microservices are a security issue.md
@@ -1,85 +1,67 @@
为什么微服务是一个安全问题
============================================================
-### 你可能并不想把所有的遗留应用全部分解为微服务,或许你可以考虑开始一段安全之旅。
+> 你可能并不想把所有的遗留应用全部分解为微服务,或许你可以考虑从安全功能开始。
-
+
Image by : Opensource.com
-我为这篇文章起个标题,使出 “洪荒之力”,也很担心这会遇到 “好奇心点击”。如果你点击它,是因为激起了你的好奇,那么(我)表示抱歉。[1][5] 我是希望你留下来的 [2][6]:这里有有趣的观点以及很多 [3][7] 注解。我不是故意提出微服务会导致安全问题——尽管如同很多组件一样(都有安全问题)。当然,这些微服务是那些涉及安全(人员)的趣向所在,最佳对象。
+我为了给这篇文章起个标题,使出 “洪荒之力”,也很担心这会变成标题党。如果你点击它,是因为它激起了你的好奇,那么我表示抱歉 ^[注1][5] 。我当然是希望你留下来阅读的 ^[注2][6] :我有很多有趣的观点以及很多 ^[注3][7] 脚注。我不是故意提出微服务会导致安全问题——尽管如同很多组件一样都有安全问题。当然,这些微服务正是那些关注安全的人员的兴趣所在。进一步地说,我认为对于那些担心安全的人来说,它们是优秀的架构。
-为什么这样说?好(问题),对于我们这些有[系统安全][16] (的人来说),此时这个世界才是一个有趣的地方。我们看到分布式系统的增长,带宽便宜了并且延迟低了。加上
-"轻松上云"(部署到云的便利性在增加),越来越多的架构师们开始意识到应用是可以分解的。他们可以分解应用程序而不只是多个层,并且层内还能分为多个组件。当然均衡负载,对一个层次内的各个组件协同一个任务有帮助。但是增长揭露不同的服务作为小附件已经导致架构的增长,以及实施微服务的部署。
+为什么这样说?这是个好问题,对于我们这些有[系统安全][16] 经验的人来说,此时这个世界才是一个有趣的地方。我们看到随着带宽便宜了并且延迟降低了,分布式系统在增长。加上部署到云愈加便利,越来越多的架构师们开始意识到应用是可以分解的,不只是分成多个层,并且层内还能分为多个组件。当然,均衡负载可以用于让一个层内的各个组件协同工作,但是将不同的服务输出为各种小组件的能力导致了微服务在设计、实施和部署方面的增长。
-更多关于微服务
+所以,[到底什么是微服务呢][23]?我同意[维基百科的定义][24],尽管没有提及安全性方面的内容^[注4][17] 。 我喜欢微服务的一点是,经过精心设计,其符合 Peter H. Salus 描述的 [UNIX 哲学][25] 的前两点:
-* [如何向你的 CEO 首席执行官 解释微服务][1]
-
-* [免费电子书:微服务与面向服务的体系架构][2]
-
-* [为微服务的 DevOps 保驾护航][3]
-
-所以,[什么是微服务][23]?我同意[维基百科的定义][24],尽管有趣的关于安全性没有提起。[4][17]我喜欢微服务的一点是,精心设计符合 Peter H. Salus 描述的 [UNIX 哲学][25] 的前俩点:
-
-1. 程序应该只关注一个目标,并尽可能把它做好。
+1. 程序应该只做一件事,并尽可能把它做好。
2. 让程序能够互相协同工作。
3. 应该让程序处理文本数据流,因为这是一个通用的接口。
-三者中最后一个小小的不相关,因为 UNIX 哲学 通常被用来指代独立应用,它常有一个命令实例化。但是,它确实包含了微服务的基本要求之一:必须具有定义 "明确" 的接口。
+三者中最后一个有点不太相关,因为 UNIX 哲学通常被用来指代独立应用,它常有一个实例化的命令。但是,它确实包含了微服务的基本要求之一:必须具有“定义明确”的接口。
-明确下,我指的不仅仅是很多外部 API 访问的方法,还有正常的微服务输入输出操作——以及,如果有任何副作用。就像我之前的文章描述的,“[五个特征良好的系统架构][18]”,如果你能设计一个系统,数据和描述主体是至关重要的。这里,在我们的微服务描述上,我们得到查看为什么这些是很重要的。因为对我来说,微服务架构的关键未来定义是可分解性。如果你要分解 [5][8] 你的架构,你必须非常非常非常的清楚 "bits"
-组件要做什么。
+这里的“定义明确”,我指的不仅仅是可外部访问的 API 的方法描述,也指正常的微服务输入输出操作——以及,如果有的话,还有其副作用。就像我之前的文章描述的,“[良好的系统架构的五个特征][18]”,如果你能够去设计一个系统,数据和主体描述是至关重要的。在此,在我们的微服务描述上,我们要去看看为什么这些是如此重要。因为对我来说,微服务架构的关键定义是可分解性。如果你要分解 ^[注5][8] 你的架构,你必须非常、非常地清楚每个细节(“组件”)要做什么。
-在这里,安全的要来了。准确描述特定组件应该做什么以允许你:
+在这里,就要开始考虑安全了。特定组件的准确描述可以让你:
-* 查看您的样图
+* 审查您的设计
* 确保您的实现符合描述
* 提出可重用测试单元来审查功能
* 跟踪实施中的错误并纠正错误
-* 测试意料外的产出
+* 测试意料之外的产出
* 监视不当行为
-* 审核未来可能的真实行为
+* 审核未来可能的实际行为
-现在,这些东西(微服务)可能都在一个大架构里了吗?是的。但如果实体是在更复杂的配置中链接或组合在一起,他们会随着越来越难。为确保正确的实施和贯彻,当你有小块一起工作。以及如果你不能确定单个组件正在做他们应正在工作的,那么衍生出复杂系统运行状况和不正确行为就困难的多了。
+现在,这些微服务能用在一个大型架构里了吗?是的。但如果实体是在更复杂的配置中彼此链接或组合在一起,它们会随之越来越难。当你让一小部分组件彼此配合工作时,确保正确的实施和行为是非常、非常容易的。并且如果你不能确定单个组件正在做它们应该做的,那么确保其衍生出来的复杂系统的正确行为及不正确行为就困难得多了。
-不管怎样,它不止于此。由于我已经在许多[以往场合][19]提过,写足够安全的代码是困难的,[7][9] 证实它应该做的更加困难。因此,有理由限制特定安全要求的代码——密码检测、加密、加密密钥管理、授权、等等。——变的小,明确的快。然后你可以执行上面提到所有事情,以确定正确完成。
+而且还不止于此。如我已经在许多[以往场合][19]提过的,写足够安全的代码是困难的^[注7][9] ,证实它应该做的更加困难。因此,有理由限制有特定安全要求的代码——密码检测、加密、加密密钥管理、授权等等——将它们变成小而定义明确的代码块。然后你就可以执行我上面提及的所有工作,以确保正确完成。
-以及还有更多。我们都知道并不是每个人都擅长于编写与安全相关的代码。通过分解你的体系架构,你得到机会去把最棒的安全人员去限制 J. 随机编码器 [8][10] 会把一些关键的安全控制措施绕过或降级的危险。
+还有,我们都知道并不是每个人都擅长于编写与安全相关的代码。通过分解你的体系架构,将安全敏感的代码限制到定义明确的组件中,你就可以把你最棒的安全人员放到这方面,并限制了 J.佛系.码奴 ^[注8][10] 绕过或降级一些关键的安全控制措施的危险。
-它可以作为学校的机会:它总能够指向 设计/实现/测试/监视元组 并且说:“听,读,标记,学习,内在消化。这是应该做的。[9][11] ”
+它可以作为学习的机会:它对于设计/实现/测试/监视的兄弟们都是好的,而且给他们说:“听、读、标记、学习,并且引为己用 ^[注9][11] 。这是应该做的。”
-是否应该将所有遗留应用程序分解为微服务? 你可能可能不会。 但是考虑到所有的好处,你可以考虑从安全功能开始。
+是否应该将所有遗留应用程序分解为微服务? 不一定。 但是考虑到其带来的好处,你可以考虑从安全入手。
* * *
-1、有一点——有读者总是好的。
+- 注1、有一点——有读者总是好的。
+- 注2、这是我写下文章的意义。
+- 注3、可能没那么有趣。
+- 注4、在我写这篇文章时。我或你们中的一个可能会去编辑改变它。
+- 注5、这很有趣,听起来像一个园艺术语。并不是说我很喜欢园艺,但仍然... ^[注6][12]
+- 注6、有意思的是,我最先写的 “如果你要分解你的架构....” 听起来像是一个 IT 主题的谋杀电影标题。
+- 注7、长期读者可能会记得提到的优秀电影 “The Thick of It”
+- 注8、其他的什么人:请随便选择。
+- 注9、不是加密摘要:我不认同原作者的想法。
-2、我知道他们的意义:我写下了他们。
-
-3、可能不那么使人着迷。
-
-4、在写这篇文章时。我或你们中的一个可能会去编辑改变它。
-
-5、这很有趣,听起来想一个园艺术语。并不是说我很喜欢园艺,但仍然... [6][12]
-
-6、有趣地,我首先写了 “如果你要分解你的架构....” 这听起来想是一个 IT 主题的谋杀电影标题。
-
-7、定期的读者可能会记得提到的优秀电影 “The Thick of It”
-
-8、其他存在的常规人物:请随便选择。
-
-9、不是加密摘要:我不认同原作者的想法。
-
-这篇文章最初出在[爱丽丝与鲍伯](https://zh.wikipedia.org/zh-hans/%E6%84%9B%E9%BA%97%E7%B5%B2%E8%88%87%E9%AE%91%E4%BC%AF)——一个安全博客上,并被许可转载。
+这篇文章最初出在[爱丽丝、伊娃与鲍伯](https://zh.wikipedia.org/zh-hans/%E6%84%9B%E9%BA%97%E7%B5%B2%E8%88%87%E9%AE%91%E4%BC%AF)——一个安全博客上,并被许可转载。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/microservices-are-security-issue
-作者:[Mike Bursell ][a]
+作者:[Mike Bursell][a]
译者:[erlinux](https://itxdm.me)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From bc2cc03dcfbdc2c35247cc25921bd3f180ad73eb Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 27 Apr 2018 08:39:08 +0800
Subject: [PATCH 116/220] PUB:20171123 Why microservices are a security
issue.md
@erlinux https://linux.cn/article-9582-1.html
---
.../20171123 Why microservices are a security issue.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20171123 Why microservices are a security issue.md (100%)
diff --git a/translated/tech/20171123 Why microservices are a security issue.md b/published/20171123 Why microservices are a security issue.md
similarity index 100%
rename from translated/tech/20171123 Why microservices are a security issue.md
rename to published/20171123 Why microservices are a security issue.md
From 2a6fff6bbca3c6942c1d36eef25f71e35499b58e Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 27 Apr 2018 08:50:19 +0800
Subject: [PATCH 117/220] translated
---
... Transferred Files Over SSH Using Rsync.md | 103 ------------------
... Transferred Files Over SSH Using Rsync.md | 101 +++++++++++++++++
2 files changed, 101 insertions(+), 103 deletions(-)
delete mode 100644 sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
create mode 100644 translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
diff --git a/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md b/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
deleted file mode 100644
index f4f532720a..0000000000
--- a/sources/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
+++ /dev/null
@@ -1,103 +0,0 @@
-translating----geekpi
-
-How To Resume Partially Transferred Files Over SSH Using Rsync
-======
-
-
-
-There are chances that the large files which are being copied over SSH using SCP command might be interrupted or cancelled or broken due to various reasons such as power failure or network failure or user intervention. The other day I was copying the Ubuntu 16.04 ISO file to my remote system. Unfortunately, the power is gone, and the network connection is dropped immediately. The result? The copy process is terminated! This is just a simple example. The Ubuntu ISO is not so big, and I could restart the copy process as soon as the power is restored. But in production environment, you might not want to do it while you're transferring large files.
-
-Also, you can't always resume the aborted process using **scp** command. Because, If you do, It will simply overwrite the existing files. What would you do in such situations? No worries! This is where **Rsync** utility comes in handy! Rsync can help you to resume the interrupted copy or download process where you left it off. For those wondering, Rsync is a fast, versatile file copying utility that can be used to copy and transfer files or folders to and from remote and local systems.
-
-It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
-
-Just like SCP, rsync will also copy files over SSH. In case you wanted to download or transfer a big files and folders over SSH, I recommend you to use rsync utility. Be mindful that the **rsync utility should be installed on both sides** (remote and local systems) in order to resume partially transferred files.
-
-### Resume Partially Transferred Files Using Rsync
-
-Well, let me show you an example. I am going to copy Ubuntu 16.04 ISO from my local system to remote system with command:
-
-```
-$ scp Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
-```
-
-Here,
-
- * **sk** is my remote system 's username
- * **192.168.43.2** is the IP address of the remote machine.
-
-
-
-Now, I terminated it by pressing **CTRL+c**.
-
-**Sample output:**
-
-```
-sk@192.168.43.2's password:
-ubuntu-16.04-desktop-amd64.iso 26% 372MB 26.2MB/s 00:39 ETA^c
-```
-
-[![][1]][2]
-
-As you see in the above output, I terminated the copy process when it reached 26%.
-
-If I re-run the above command, it will simply overwrite the existing file. In other words, the copy process will not resume where I left it off.
-
-In order to resume the copy process, we can use **rsync** command as shown below.
-
-```
-$ rsync -P -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
-```
-
-**Sample output:**
-```
-sk@192.168.1.103's password:
-sending incremental file list
-ubuntu-16.04-desktop-amd64.iso
- 380.56M 26% 41.05MB/s 0:00:25
-```
-
-[![][1]][4]
-
-See? Now, the copying process is resumed where we left it off earlier. You also can use "-partial" instead of parameter "-P" like below.
-```
-$ rsync --partial -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
-```
-
-Here, the parameter "-partial" or "-P" tells the rsync command to keep the partial downloaded file and resumes the process.
-
-Alternatively, we can use the following commands as well to resume partially transferred files over SSH.
-
-```
-$ rsync -avP Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
-```
-
-Or,
-
-```
-rsync -av --partial Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
-```
-
-That's it. You know now how to resume the cancelled, interrupted, and partially downloaded files using rsync command. As you can see, it is not so difficult either. If rsync is installed on both systems, we can easily resume the copy process as described above.
-
-If you find this tutorial helpful, please share it on your social, professional networks and support OSTechNix. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-resume-partially-downloaded-or-transferred-files-using-rsync/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png ()
-[3]:/cdn-cgi/l/email-protection
-[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png ()
diff --git a/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md b/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
new file mode 100644
index 0000000000..3ae2d7761c
--- /dev/null
+++ b/translated/tech/20180129 How To Resume Partially Transferred Files Over SSH Using Rsync.md
@@ -0,0 +1,101 @@
+如何使用 Rsync 通过 SSH 恢复部分传输的文件
+======
+
+
+
+由于诸如电源故障、网络故障或用户干预等各种原因,使用 SCP 命令通过 SSH 复制的大型文件可能会中断、取消或损坏。有一天,我将 Ubuntu 16.04 ISO 文件复制到我的远程系统。不幸的是断电了,网络连接立即丢失。结果么?复制过程终止!这只是一个简单的例子。Ubuntu ISO 并不是那么大,一旦电源恢复,我就可以重新启动复制过程。但在生产环境中,当你在传输大型文件时,你可能并不希望这样做。
+
+而且,你不能总是使用 **scp** 命令恢复被中止的进度。因为如果你这样做,它只会覆盖现有的文件。这时你会怎么做?别担心!这就是 **Rsync** 派上用场的地方!Rsync 可以帮助你恢复中断的复制或下载过程。对于那些好奇的人,Rsync 是一个快速、多功能的文件复制程序,可用于在远程和本地系统之间复制和传输文件或文件夹。
+
+它提供了大量的选项来控制其行为的方方面面,并允许非常灵活地指定要复制的文件集合。它以增量传输算法而闻名,该算法通过仅发送源文件和目标中现有文件之间的差异来减少通过网络发送的数据量。Rsync 广泛用于备份和镜像,也可作为日常使用中的改进版复制命令。
+
+就像 SCP 一样,rsync 也会通过 SSH 复制文件。如果你想通过 SSH 下载或传输大文件和文件夹,我建议你使用 rsync。请注意,为了能恢复部分传输的文件,**应该在两边都安装 rsync**(远程和本地系统)。安装方法请见下面的示例。
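+
+如果系统上还没有 rsync,可以用发行版的包管理器安装,例如(软件包名通常就叫 rsync):
+
+```
+$ sudo apt install rsync    # Debian/Ubuntu 系
+$ sudo yum install rsync    # RHEL/CentOS 系
+```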
+
+### 使用 Rsync 恢复部分传输的文件
+
+好吧,让我给你看一个例子。我将使用命令将 Ubuntu 16.04 ISO 从本地系统复制到远程系统:
+
+```
+$ scp Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
+```
+
+这里,
+
+ * **sk** 是我的远程系统的用户名
+ * **192.168.43.2** 是远程机器的 IP 地址。
+
+
+
+现在,我按下 **CTRL+c** 结束它。
+
+**示例输出:**
+
+```
+sk@192.168.43.2's password:
+ubuntu-16.04-desktop-amd64.iso 26% 372MB 26.2MB/s 00:39 ETA^c
+```
+
+[![][1]][2]
+
+正如你在上面的输出中看到的,当它达到 26% 时,我终止了复制过程。
+
+如果我重新运行上面的命令,它只会覆盖现有的文件。换句话说,复制过程不会在我断开的地方恢复。
+
+为了恢复复制过程,我们可以使用 **rsync** 命令,如下所示。
+
+```
+$ rsync -P -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
+```
+
+**示例输出:**
+```
+sk@192.168.1.103's password:
+sending incremental file list
+ubuntu-16.04-desktop-amd64.iso
+ 380.56M 26% 41.05MB/s 0:00:25
+```
+
+[![][1]][4]
+
+看见了吗?现在,复制过程在我们之前断开的地方恢复了。你也可以像下面那样使用 “--partial” 而不是 “-P” 参数。
+```
+$ rsync --partial -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
+```
+
+这里,参数 “--partial” 或 “-P” 告诉 rsync 命令保留部分下载的文件并恢复进度。
+
+或者,我们也可以使用以下命令通过 SSH 恢复部分传输的文件。
+
+```
+$ rsync -avP Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
+```
+
+或者,
+
+```
+rsync -av --partial Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
+```
+
+就是这样了。你现在知道如何使用 rsync 命令恢复取消、中断和部分下载的文件了。正如你所看到的,它也不是那么难。只要两个系统都安装了 rsync,我们就可以按照上面描述的那样轻松地恢复复制进度。
+
+如果你觉得本教程有帮助,请在你的社交、专业网络上分享,并支持 OSTechNix。还有更多的好东西。敬请关注!
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-resume-partially-downloaded-or-transferred-files-using-rsync/
+
+作者:[SK][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png ()
+[3]:/cdn-cgi/l/email-protection
+[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png ()
From 4e171c76eee0ef0ca23753bc659041635067385d Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 27 Apr 2018 08:53:24 +0800
Subject: [PATCH 118/220] translating
---
.../tech/20180420 How to start developing on Java in Fedora.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180420 How to start developing on Java in Fedora.md b/sources/tech/20180420 How to start developing on Java in Fedora.md
index 9858e1be97..cc192c915c 100644
--- a/sources/tech/20180420 How to start developing on Java in Fedora.md
+++ b/sources/tech/20180420 How to start developing on Java in Fedora.md
@@ -1,3 +1,5 @@
+translating----geekpi
+
How to start developing on Java in Fedora
======
From 1be0a70721d78362eba47f34eeab0dc35b5ec129 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Fri, 27 Apr 2018 23:51:36 +0800
Subject: [PATCH 119/220] Delete 20180325 Could we run Python 2 and Python 3
code in the same VM with no code changes.md
---
...ode in the same VM with no code changes.md | 123 ------------------
1 file changed, 123 deletions(-)
delete mode 100644 sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
diff --git a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md b/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
deleted file mode 100644
index 7ec1044acd..0000000000
--- a/sources/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
+++ /dev/null
@@ -1,123 +0,0 @@
-Translating by MjSeven
-
-
-Could we run Python 2 and Python 3 code in the same VM with no code changes?
-======
-
-Theoretically, yes. Zed Shaw famously jested that if this is impossible then Python 3 must not be Turing-complete. But in practice, this is unrealistic and I'll share why by giving you a few examples.
-
-### What does it mean to be a dict?
-
-Let’s imagine a Python 6 VM. It can read `module3.py` which was written for Python 3.6 but in this module it can import `module2.py` which was written for Python 2.7 and successfully use it with no issues. This is obviously toy code but let’s say that `module2.py` includes functions like:
-```
-
-
-def update_config_from_dict(config_dict):
- items = config_dict.items()
- while items:
- k, v = items.pop()
- memcache.set(k, v)
-
-def config_to_dict():
- result = {}
- for k, v in memcache.getall():
- result[k] = v
- return result
-
-def update_in_place(config_dict):
- for k, v in config_dict.items():
- new_value = memcache.get(k)
- if new_value is None:
- del config_dict[k]
- elif new_value != v:
- config_dict[k] = v
-
-```
-
-Now, when we want to use those functions from `module3`, we are faced with a problem: the dict type in Python 3.6 is different from the dict type in Python 2.7. In Python 2, dicts were unordered and their `.keys()`, `.values()`, `.items()` methods returned proper lists. That meant calling `.items()` created a copy of the state in the dictionary. In Python 3 those methods return dynamic views on the current state of the dictionary.
-
-This means if `module3` called `module2.update_config_from_dict(some_dictionary)`, it would fail to run because the value returned by `dict.items()` in Python 3 isn’t a list and doesn’t have a `.pop()` method. The reverse is also true. If `module3` called `module2.config_to_dict()`, it would presumably return a Python 2 dictionary. Now calling `.items()` is suddenly returning a list so this code would not work correctly (which works fine with Python 3 dictionaries):
-```
-
-
-def main(cmdline_options):
- d = module2.config_to_dict()
- items = d.items()
- for k, v in items:
- print(f'Config from memcache: {k}={v}')
- for k, v in cmdline_options:
- d[k] = v
- for k, v in items:
- print(f'Config with cmdline overrides: {k}={v}')
-
-```
-
-Finally, using `module2.update_in_place()` would fail because the value of `.items()` in Python 3 now doesn’t allow the dictionary to change during iteration.
-
-There’s more issues with this dictionary situation. Should a Python 2 dictionary respond `True` to `isinstance(d, dict)` on Python 3? If it did, it’d be a lie. If it didn’t, code would break anyway.
-
-### Python should magically know types and translate!
-
-Why can’t our Python 6 VM recognize that in Python 3 code we mean something else when calling `some_dict.keys()` than in Python 2 code? Well, Python doesn’t know what the author of the code thought `some_dict` should be when she was writing that code. There is nothing in the code that signifies whether it’s a dictionary at all. Type annotations weren’t there in Python 2 and, since they’re optional, even in Python 3 most code doesn’t use them yet.
-
-At runtime, when you call `some_dict.keys()`, Python simply looks up an attribute on the object that happens to hide under the `some_dict` name and tries to run `__call__()` on that attribute. There’s some technicalities with method binding, descriptors, slots, etc. but this is the gist of it. We call this behavior “duck typing”.
-
-Because of duck typing, the Python 6 VM would not be able to make compile-time decisions to translate calls and attribute lookups correctly.
-
-### OK, so let’s make this decision at runtime instead
-
-The Python 6 VM could implement this by tagging every attribute lookup with information “call comes from py2” or “call comes from py3” and make the object on the other side dispatch the right attribute. That would slow things down a lot and use more memory, too. It would require us to keep both versions of the given type in memory with a proxy used by user code. We would need to sync the state of those objects behind the user’s back, doubling the work. After all, the memory representation of the new dictionary is different than in Python 2.
-
-If your head spun thinking about the problems with dictionaries, think about all the issues with Unicode strings in Python 3 and the do-it-all byte strings in Python 2.
-
-### Is everything lost? Can’t Python 3 run old code ever?
-
-Everything is not lost. Projects get ported to Python 3 every day. The recommended way to port Python 2 code to work on both versions of Python is to run [Python-Modernize][1] on your code. It will catch code that wouldn’t work on Python 3 and translate it to use the [six][2] library instead so it runs on both Python 2 and Python 3 after. It’s an adaptation of `2to3` which was producing Python 3-only code. `Modernize` is preferred since it provides a more incremental migration route. All this is outlined very well in the [Porting Python 2 Code to Python 3][3] document in the Python documentation.
-
-But wait, didn’t you say a Python 6 VM couldn’t do this automatically? Right. `Modernize` looks at your code and tries to guess what’s going to be safe. It will make some changes that are unnecessary and miss others that are necessary. Famously, it won’t help you with strings. That transformation is not trivial if your code didn’t keep the boundaries between “binary data from outside” and “text data within the process”.
-
-So, migrating big projects cannot be done automatically and involves humans running tests, finding problems and fixing them. Does it work? Yes, I helped [moving a million lines of code to Python 3][4] and the switch caused no incidents. This particular move regained 1/3 of memory on our servers and made the code run 12% faster. That was on Python 3.5. But Python 3.6 got quite a bit faster and depending on your workload you could maybe even achieve [a 4X speedup][5] .
-
-### Dear Zed
-
-Hi, man. I follow your story for over 10 years now. I’ve been watching when you were upset you were getting no credit for Mongrel even though the Rails ecosystem pretty much ran all on it. I’ve been there when you reimagined it and started the Mongrel 2 project. I was following your surprising move to use Fossil for it. I’ve seen you abruptly depart from the Ruby community with your “Rails is a Ghetto” post. I was thrilled when you started working on “Learn Python The Hard Way” and have been recommending it ever since. I met you in 2013 at [DjangoCon Europe][6] and we talked quite a bit about painting, singing and burnout. [This photo of you][7] is one of my first posts on Instagram.
-
-You almost pulled another “Is a Ghetto” move with your [“The Case Against Python 3”][8] post. I think your heart is in the right place but that post caused a lot of confusion, including many people seriously thinking you believe Python 3 is not Turing-complete. I spent quite a few hours convincing people that you said so in jest. But given your very valuable contribution of “Learn Python The Hard Way”, I think it was worth doing that. Especially that you did update your book for Python 3. Thank you for doing that work. If there really are people in our community that called for blacklisting you and your book on the grounds of your post alone, call them out. It’s a lose-lose situation and it’s wrong.
-
-For the record, no core Python dev thinks that the Python 2 -> Python 3 transition was smooth and well planned, [including Guido van Rossum][9]. Seriously, watch that video. Hindsight is 20/20 of course. In this sense we are in fact aggressively agreeing with each other. If we went to do it all over again, it would look differently. But at this point, [on January 1st 2020 Python 2 will reach End Of Life][10]. Most third-party libraries already support Python 3 and even started releasing Python 3-only versions (see [Django][11] or the [Python 3 Statement of the scientific projects][12]).
-
-We are also aggressively agreeing on another thing. Just like you with Mongrel, Python core devs are volunteers who aren’t compensated for their work. Most of us invested a lot of time and effort in this project, and so [we are naturally sensitive][13] to dismissive and aggressive comments against their contribution. Especially if the message is both attacking the current state of affairs and calling for more free labor.
-
-I hoped that by 2018 you’d let your 2016 post go. There were a bunch of good rebuttals. [I especially like eevee’s][14]. It specifically addresses the “run Python 2 alongside Python 3” scenario as not realistic, just like running Ruby 1.8 and Ruby 2.x in the same VM, or like running Lua 5.1 alongside 5.3. You can’t even run C binaries compiled against libc.so.5 with libc.so.6. What I find most surprising though is that you claimed that Python core is “purposefully” creating broken tools like 2to3, created by Guido in whose best interest it is for everybody to migrate as smoothly and quickly as possible. I’m glad that you backed out of that claim in your post later but you have to realize you antagonized people who read the original version. Accusations of deliberate harm better be backed by strong evidence.
-
-But it seems like you still do that. [Just today][15] you said that Python core “ignores” attempts to solve the API problem, specifically `six`. As I wrote above, `six` is covered by the official porting guide in the Python documentation. More importantly, `six` was written by Benjamin Peterson, the release manager of Python 2.7. A lot of people learned to program thanks to you and you have a large following online. People will read a tweet like this and they will believe it at face value. This is harmful.
-
-I have a suggestion. Let’s put this “Python 3 was poorly managed” dispute behind us. Python 2 is dying, we are slow to kill it and the process was ugly and bloody but it’s a one way street. Arguing about it is not actionable anymore. Instead, let’s focus on what we can do now to make Python 3.8 better than any other Python release. Maybe you prefer the role of an outsider looking in, but you would be much more impactful as a member of this community. Saying “we” instead of “they”.
-
---------------------------------------------------------------------------------
-
-via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/
-
-作者:[Łukasz Langa][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-选题:[lujun9972](https://github.com/lujun9972)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://lukasz.langa.pl
-[1]:https://python-modernize.readthedocs.io/
-[2]:http://pypi.python.org/pypi/six
-[3]:https://docs.python.org/3/howto/pyporting.html
-[4]:https://www.youtube.com/watch?v=66XoCk79kjM
-[5]:https://twitter.com/llanga/status/963834977745022976
-[6]:https://www.instagram.com/p/ZVC9CwH7G1/
-[7]:https://www.instagram.com/p/ZXtdtUn7Gk/
-[8]:https://learnpythonthehardway.org/book/nopython3.html
-[9]:https://www.youtube.com/watch?v=Oiw23yfqQy8
-[10]:https://mail.python.org/pipermail/python-dev/2018-March/152348.html
-[11]:https://pypi.python.org/pypi/Django/2.0.3
-[12]:http://python3statement.org/
-[13]:https://www.youtube.com/watch?v=-Nk-8fSJM6I
-[14]:https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/
-[15]:https://twitter.com/zedshaw/status/977909970795745281
From 6d1ea63e3d81fd5df8fd9c48b324c1e2b4f510e4 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Fri, 27 Apr 2018 23:52:04 +0800
Subject: [PATCH 120/220] =?UTF-8?q?Create=20Could=20we=20run=20Python=202?=
=?UTF-8?q?=20and=20Python=203=20code=20in=20the=20same=20V=E2=80=A6=20=20?=
=?UTF-8?q?=E2=80=A6M=20with=20no=20code=20changes.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...in the same V… …M with no code changes.md | 120 ++++++++++++++++++
1 file changed, 120 insertions(+)
create mode 100644 translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md
diff --git a/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md b/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md
new file mode 100644
index 0000000000..5ffe1a6fc6
--- /dev/null
+++ b/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md
@@ -0,0 +1,120 @@
+我们可以在同一个虚拟机中运行 Python 2 和 Python 3 代码而不需要更改代码吗?
+=====
+
+从理论上来说,可以。Zed Shaw 说过一句著名的话,如果不行,那么 Python 3 一定不是图灵完备的。但在实践中,这是不现实的,我将通过给你们举几个例子来说明原因。
+
+### 对于字典(dict)来说,这意味着什么?
+
+让我们来想象一台拥有 Python 6 的虚拟机,它可以读取 Python 3.6 编写的 `module3.py`,而在这个模块中又可以导入 Python 2.7 编写的 `module2.py`,并毫无问题地使用它。这显然是示意代码,但假设 `module2.py` 包含以下函数:
+```
+
+
+def update_config_from_dict(config_dict):
+ items = config_dict.items()
+ while items:
+ k, v = items.pop()
+ memcache.set(k, v)
+
+def config_to_dict():
+ result = {}
+ for k, v in memcache.getall():
+ result[k] = v
+ return result
+
+def update_in_place(config_dict):
+ for k, v in config_dict.items():
+ new_value = memcache.get(k)
+ if new_value is None:
+ del config_dict[k]
+ elif new_value != v:
+            config_dict[k] = new_value
+
+```
+
+现在,当我们想从 `module3` 中调用这些函数时,就遇到了一个问题:Python 3.6 中的字典类型与 Python 2.7 中的不同。在 Python 2 中,字典是无序的,它的 `.keys()`、`.values()`、`.items()` 方法返回的是真正的序列(列表),这意味着调用 `.items()` 会为字典当前的状态创建一份副本;而在 Python 3 中,这些方法返回的是字典当前状态的动态视图。
+
+这意味着如果 `module3` 调用 `module2.update_config_from_dict(some_dictionary)`,它将无法运行,因为 Python 3 中 `dict.items()` 返回的不是列表,也没有 `.pop()` 方法。反过来也一样:如果 `module3` 调用 `module2.config_to_dict()`,它得到的大概会是一个 Python 2 的字典,此时调用 `.items()` 返回的突然变成了列表,于是下面这段代码就无法正常工作了(它对 Python 3 的字典来说没有问题):
+```
+
+def main(cmdline_options):
+ d = module2.config_to_dict()
+ items = d.items()
+ for k, v in items:
+ print(f'Config from memcache: {k}={v}')
+ for k, v in cmdline_options:
+ d[k] = v
+ for k, v in items:
+ print(f'Config with cmdline overrides: {k}={v}')
+
+```
+
+最后,使用 `module2.update_in_place()` 也会失败,因为在 Python 3 中,`.items()` 返回的视图不允许在迭代过程中修改字典。
+
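+可以用下面这个小例子直观地看到这一点(译注:这段代码为补充示例,非原文内容;需在 Python 3 下运行):
+```
+d = {'a': 1, 'b': 2}
+for k, v in d.items():   # items() 返回的是动态视图
+    del d[k]             # 抛出 RuntimeError: dictionary changed size during iteration
+
+```
+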
+关于字典的问题还不止这些。Python 2 的字典在 Python 3 上执行 `isinstance(d, dict)` 时应该返回 `True` 吗?如果返回 `True`,那是在撒谎;如果不返回,代码同样会出错。
+
+### Python 应该神奇地知道类型并会自动转换!
+
+为什么我们的 Python 6 虚拟机不能识别出,Python 3 代码中调用 `some_dict.keys()` 的含义与 Python 2 代码中的不同呢?问题在于,Python 并不知道代码作者在写代码时心里认为 `some_dict` 应该是什么,代码中没有任何东西表明它是不是一个字典。Python 2 中没有类型注解;而由于类型注解是可选的,即使在 Python 3 中,大多数代码也还没有用上它。
+
+在运行时,当你调用 `some_dict.keys()` 的时候,Python 只是简单地在恰好藏在 `some_dict` 这个名字之下的对象上查找一个属性,并试图对该属性运行 `__call__()`。这里还涉及方法绑定、描述符、slot 等技术细节,但这就是其核心过程。我们把这种行为称为“鸭子类型”。
+
+由于鸭子类型,Python 6 的虚拟机将无法做出编译时决定,以正确转换调用和属性查找。
+
+### 好的,让我们在运行时做出这个决定
+
+Python 6 的虚拟机可以这样实现:给每次属性查找都打上“调用来自 py2”或“调用来自 py3”的标记,并让另一侧的对象分发正确的属性。但这会让速度慢很多,还会占用更多内存,因为它要求我们在内存中同时保留同一类型的两个版本,并通过代理供用户代码使用。我们还得在用户察觉不到的情况下同步这些对象的状态,使工作量翻倍。毕竟,新字典的内存表示和 Python 2 中的并不相同。
+
+如果字典的这些问题已经让你头晕,那么再想想 Python 3 中的 Unicode 字符串和 Python 2 中无所不能的字节串(byte string)会带来的所有问题吧。
+
+### 难道就没救了吗?Python 3 再也不能运行旧代码了吗?
+
+并不是没救了。每天都有项目被移植到 Python 3。把 Python 2 代码移植为能同时运行在两个 Python 版本上的推荐方法,是对你的代码运行 [Python-Modernize][1]。它会找出那些在 Python 3 上无法工作的代码,并改用 [six][2] 库来替换,这样代码就能同时运行在 Python 2 和 Python 3 上。它是 `2to3` 的一种改编,而 `2to3` 生成的是只支持 Python 3 的代码;`Modernize` 更受推荐,因为它提供了更渐进的迁移路线。所有这些在 Python 文档的 [Porting Python 2 Code to Python 3][3] 一文中都有很好的概述。
+
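+下面用一小段代码示意(译注:补充示例,非原文内容)如何借助 `six` 改写上文 `module2.py` 中的第一个函数,使其同时兼容两个版本:
+```
+import six
+
+def update_config_from_dict(config_dict):
+    # six.iteritems() 在 Python 2 下返回迭代器,在 Python 3 下等价于 dict.items()
+    for k, v in six.iteritems(config_dict):
+        memcache.set(k, v)   # memcache 沿用上文示例中的假想对象
+
+```
+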
+但是,等一等,你不是说 Python 6 的虚拟机不能自动执行此操作吗?没错。`Modernize` 会查看你的代码,并试着猜测哪些改动是安全的。它会做一些不必要的修改,也会漏掉一些必要的修改。众所周知,它帮不了你处理字符串:如果你的代码没有守住“来自外部的二进制数据”和“进程内的文本数据”之间的边界,这种转换就绝非小事。
+
+因此,大项目的迁移无法自动完成,需要人来运行测试、发现问题并修复。这可行吗?可行。我曾帮助[把一百万行代码迁移到 Python 3][4],切换没有引发任何事故。这次迁移让我们的服务器释放出了 1/3 的内存,代码运行速度也提升了 12%。那还是在 Python 3.5 上;Python 3.6 又快了不少,取决于你的工作负载,你甚至可能达到 [4 倍加速][5]。
+
+### 亲爱的 Zed
+
+嗨,伙计。我关注你已经超过 10 年了。我看到过你因为 Mongrel 没有得到应有的荣誉而沮丧,尽管当时整个 Rails 生态系统几乎都运行在它之上。我见证了你重新构思它并启动 Mongrel 2 项目,也关注了你改用 Fossil 来管理它的这个令人意外的举动。我看着你发表“Rails 是一个贫民窟”的帖子后突然离开了 Ruby 社区。当你开始编写《Learn Python The Hard Way》时我非常兴奋,从那以后也一直在推荐它。2013 年我在 [DjangoCon Europe][6] 见过你,我们聊了很多关于绘画、唱歌和倦怠的话题。[这张你的照片][7]是我最早发在 Instagram 上的帖子之一。
+
+你那篇[《反对 Python 3 的理由》][8]差点成了又一次“是个贫民窟”式的举动。我认为你的本意是好的,但那篇文章造成了很多困惑,包括许多人真的以为你相信 Python 3 不是图灵完备的。我花了好几个小时去说服人们,你只是在开玩笑。但鉴于《Learn Python The Hard Way》是你非常宝贵的贡献,我认为这么做是值得的,尤其是你后来还为 Python 3 更新了这本书,感谢你所做的这些工作。如果我们社区中真有人仅凭那篇帖子就要求封杀你和你的书,请把他们点名指出来。那是一个双输的局面,而且是错误的。
+
+郑重声明:没有哪位 Python 核心开发者认为 Python 2 到 Python 3 的过渡是顺利且计划周全的,[包括 Guido van Rossum][9]。真的,去看看那个视频吧。当然,事后诸葛亮谁都会当。从这个意义上说,我们其实是在强烈地彼此认同。如果一切重来,过程肯定会不一样。但到了现在这个时点,[Python 2 将于 2020 年 1 月 1 日到达生命周期终点][10]。大多数第三方库已经支持 Python 3,甚至开始发布只支持 Python 3 的版本(参见 [Django][11] 或[科学计算项目的 Python 3 声明][12])。
+
+在另一件事上,我们也是强烈一致的。和你做 Mongrel 时一样,Python 核心开发者是志愿者,他们的工作没有报酬。我们大多数人都在这个项目上投入了大量时间和精力,因此对那些轻视、攻击我们贡献的言论[自然会很敏感][13],尤其当这种言论既抨击现状,又要求更多无偿劳动的时候。
+
+我本希望到了 2018 年,你能放下 2016 年的那篇帖子。当时已有许多出色的反驳文章,[我特别喜欢 eevee 写的那篇][14](译注:此处的 eevee 指博客 eev.ee 的作者)。它专门指出“让 Python 2 和 Python 3 并行运行”的设想并不现实,就像在同一个虚拟机里同时运行 Ruby 1.8 和 Ruby 2.x,或者同时运行 Lua 5.1 和 5.3 一样。你甚至无法用 libc.so.6 去运行针对 libc.so.5 编译的 C 二进制文件。而最让我吃惊的是,你声称 Python 核心开发者是“故意”制造像 2to3 这样有缺陷的工具,可 2to3 正是 Guido 创建的,让所有人尽可能平滑、快速地迁移才最符合他的利益。我很高兴你后来在帖子中撤回了这个说法,但你必须意识到,你已经激怒了那些读过原始版本的人。指控别人蓄意作恶,最好拿出强有力的证据。
+
+但看起来你仍然在这样做。[就在今天][15],你还说 Python 核心开发者“无视”解决 API 问题的尝试,特别是 `six`。正如我在上面写的,Python 文档中的官方移植指南就涵盖了 `six`;更重要的是,`six` 的作者 Benjamin Peterson 正是 Python 2.7 的发布管理者。很多人是因为你才学会编程的,你在网上也有大量的追随者。人们读到这样的推文会信以为真,而这是有害的。
+
+我有一个建议:让我们把“Python 3 管理不善”的争论放下吧。Python 2 正在消亡,我们终结它的动作很慢,过程也丑陋而血腥,但这是一条单行道,再争论下去已经没有意义了。不如把精力放在现在能做的事情上,让 Python 3.8 成为有史以来最好的 Python 版本。也许你更喜欢扮演置身事外的旁观者,但作为这个社区的一员,你会更有影响力。请说“我们”,而不是“他们”。
+
+
+--------------------------------------------------------------------------------
+
+via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/
+
+作者:[Łukasz Langa][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://lukasz.langa.pl
+[1]:https://python-modernize.readthedocs.io/
+[2]:http://pypi.python.org/pypi/six
+[3]:https://docs.python.org/3/howto/pyporting.html
+[4]:https://www.youtube.com/watch?v=66XoCk79kjM
+[5]:https://twitter.com/llanga/status/963834977745022976
+[6]:https://www.instagram.com/p/ZVC9CwH7G1/
+[7]:https://www.instagram.com/p/ZXtdtUn7Gk/
+[8]:https://learnpythonthehardway.org/book/nopython3.html
+[9]:https://www.youtube.com/watch?v=Oiw23yfqQy8
+[10]:https://mail.python.org/pipermail/python-dev/2018-March/152348.html
+[11]:https://pypi.python.org/pypi/Django/2.0.3
+[12]:http://python3statement.org/
+[13]:https://www.youtube.com/watch?v=-Nk-8fSJM6I
+[14]:https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/
+[15]:https://twitter.com/zedshaw/status/977909970795745281
From 417c321c47373882aad0aaa7e067677a24d3138b Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Fri, 27 Apr 2018 23:52:46 +0800
Subject: [PATCH 121/220] =?UTF-8?q?Rename=20Could=20we=20run=20Python=202?=
=?UTF-8?q?=20and=20Python=203=20code=20in=20the=20same=20V=E2=80=A6=20=20?=
=?UTF-8?q?=E2=80=A6M=20with=20no=20code=20changes.md=20to=2020180325=20Co?=
=?UTF-8?q?uld=20we=20run=20Python=202=20and=20Python=203=20code=20in=20th?=
=?UTF-8?q?e=20same=20VM=20with=20no=20code=20changes.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Python 2 and Python 3 code in the same VM with no code changes.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename translated/tech/{Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md => 20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md} (100%)
diff --git a/translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md b/translated/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
similarity index 100%
rename from translated/tech/Could we run Python 2 and Python 3 code in the same V… …M with no code changes.md
rename to translated/tech/20180325 Could we run Python 2 and Python 3 code in the same VM with no code changes.md
From 4b36848a4423ad6548b039a8c31d08a5206f7de3 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 06:55:04 +0800
Subject: [PATCH 122/220] PRF:20180127 Your instant Kubernetes cluster.md
@qhwdw
---
...0180127 Your instant Kubernetes cluster.md | 37 ++++++-------------
1 file changed, 12 insertions(+), 25 deletions(-)
diff --git a/translated/tech/20180127 Your instant Kubernetes cluster.md b/translated/tech/20180127 Your instant Kubernetes cluster.md
index ac06ba730f..a2ccd874fe 100644
--- a/translated/tech/20180127 Your instant Kubernetes cluster.md
+++ b/translated/tech/20180127 Your instant Kubernetes cluster.md
@@ -1,19 +1,15 @@
“开箱即用” 的 Kubernetes 集群
============================================================
-
-这是我以前的 [10 分钟内配置 Kubernetes][10] 教程的精简版和更新版。我删除了一些我认为可以去掉的内容,所以,这个指南仍然是可理解的。当你想在云上创建一个集群或者尽可能快地构建基础设施时,你可能会用到它。
+这是我以前的 [10 分钟内配置 Kubernetes][10] 教程的精简版和更新版。我删除了一些我认为可以去掉的内容,所以,这个指南仍然是通顺的。当你想在云上创建一个集群或者尽可能快地构建基础设施时,你可能会用到它。
### 1.0 挑选一个主机
我们在本指南中将使用 Ubuntu 16.04,这样你就可以直接拷贝/粘贴所有的指令。下面是我用本指南测试过的几种环境。根据你运行的主机,你可以从中挑选一个。
* [DigitalOcean][1] - 开发者云
-
* [Civo][2] - UK 开发者云
-
* [Packet][3] - 裸机云
-
* 2x Dell Intel i7 服务器 —— 它在我家中
> Civo 是一个相对较新的开发者云,我比较喜欢的一点是,它开机时间只有 25 秒,我就在英国,因此,它的延迟很低。
@@ -25,14 +21,12 @@
下面是一些其他的指导原则:
* 最好选至少有 2 GB 内存的双核主机
-
* 在准备主机的时候,如果你可以自定义用户名,那么就不要使用 root。例如,Civo 通常让你在 `ubuntu`、`civo` 或者 `root` 中选一个。
-现在,在每台机器上都运行以下的步骤。它将需要 5-10 钟时间。如果你觉得太慢了,你可以使用我的脚本 [kept in a Gist][11]:
+现在,在每台机器上都运行以下步骤,大约需要 5~10 分钟。如果你觉得太慢了,可以使用我[放在 Gist][11] 的脚本:
```
$ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh
-
```
### 1.2 登入和安装 Docker
@@ -42,7 +36,6 @@ $ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746
```
$ sudo apt-get update \
&& sudo apt-get install -qy docker.io
-
```
### 1.3 禁用 swap 文件
@@ -69,7 +62,6 @@ $ sudo apt-get update \
kubelet \
kubeadm \
kubernetes-cni
-
```
### 1.5 创建集群
@@ -80,7 +72,6 @@ $ sudo apt-get update \
```
$ sudo kubeadm init
-
```
如果你错过一个步骤或者有问题,`kubeadm` 将会及时告诉你。
@@ -90,41 +81,37 @@ $ sudo kubeadm init
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
+sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
-确保你一定要记下如下的加入 token 命令。
+请务必记下下面这条用于将工作节点加入集群的 `kubeadm join` 命令。
```
$ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256:
-
```
### 2.0 安装网络
-Kubernetes 可用于任何网络供应商的产品或服务,但是,默认情况下什么也没有,因此,我们使用来自 [Weaveworks][12] 的 Weave Net,它是 Kebernetes 社区中非常流行的选择之一。它倾向于不需要额外配置的 “开箱即用”。
+许多网络提供商提供了对 Kubernetes 的支持,但默认情况下 Kubernetes 并未内置任何一种。这里我们使用来自 [Weaveworks][12] 的 Weave Net,它是 Kubernetes 社区中非常流行的选择之一,几乎不需要额外配置,“开箱即用”。
```
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
-
```
-如果在你的主机上启用了私有网络,那么,你可能需要去修改 Weavenet 使用的私有子网络,以便于为 Pods(容器)分配 IP 地址。下面是命令示例:
+如果在你的主机上启用了私有网络,那么,你可能需要去修改 Weavenet 使用的私有子网络,以便于为 Pod(容器)分配 IP 地址。下面是命令示例:
```
$ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \
| kubectl apply -f -
-
```
-> Weave 也有很酷的称为 Weave Cloud 的可视化工具。它是免费的,你可以在它上面看到你的 Pods 之间的路径流量。[这里有一个使用 OpenFaaS 项目的示例][6]。
+> Weave 也有很酷的称为 Weave Cloud 的可视化工具。它是免费的,你可以在它上面看到你的 Pod 之间的路径流量。[这里有一个使用 OpenFaaS 项目的示例][6]。
### 2.2 在集群中加入工作节点
现在,你可以切换到你的每一台工作节点,然后使用 1.5 节中的 `kubeadm join` 命令。运行完成后,登出那个工作节点。
-### 3.0 收益
+### 3.0 收获
到此为止 —— 我们全部配置完成了。你现在有一个正在运行着的集群,你可以在它上面部署应用程序。如果你需要设置仪表板 UI,你可以去参考 [Kubernetes 文档][13]。
@@ -134,21 +121,21 @@ NAME STATUS ROLES AGE VERSION
openfaas1 Ready master 20m v1.9.2
openfaas2 Ready 19m v1.9.2
openfaas3 Ready 19m v1.9.2
-
```
如果你想看到我一步一步创建集群并且展示 `kubectl` 如何工作的视频,你可以看下面我的视频,你可以订阅它。
+
-想在你的 Mac 电脑上,使用 Minikube 或者 Docker 的 Mac Edge 版本,安装一个 “开箱即用” 的 Kubernetes 集群,[阅读在这里的我的评估和第一印象][14]。
+你也可以在你的 Mac 电脑上,使用 Minikube 或者 Docker 的 Mac Edge 版本,安装一个 “开箱即用” 的 Kubernetes 集群。[阅读在这里的我的评估和第一印象][14]。
--------------------------------------------------------------------------------
via: https://blog.alexellis.io/your-instant-kubernetes-cluster/
-作者:[Alex Ellis ][a]
+作者:[Alex Ellis][a]
译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From e29b97959c3f7e451d1e0defac33d5020144a85d Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 06:56:34 +0800
Subject: [PATCH 123/220] PUB:20180127 Your instant Kubernetes cluster.md
@qhwdw https://linux.cn/article-9584-1.html
---
.../20180127 Your instant Kubernetes cluster.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180127 Your instant Kubernetes cluster.md (100%)
diff --git a/translated/tech/20180127 Your instant Kubernetes cluster.md b/published/20180127 Your instant Kubernetes cluster.md
similarity index 100%
rename from translated/tech/20180127 Your instant Kubernetes cluster.md
rename to published/20180127 Your instant Kubernetes cluster.md
From 103ba8045660e85d4e282d815dc102e873a43e9e Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 07:09:58 +0800
Subject: [PATCH 124/220] PRF:20180405 How to find files in Linux.md
@geekpi
---
.../20180405 How to find files in Linux.md | 31 ++++++++++---------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/translated/tech/20180405 How to find files in Linux.md b/translated/tech/20180405 How to find files in Linux.md
index 2566575d72..5af7e476fe 100644
--- a/translated/tech/20180405 How to find files in Linux.md
+++ b/translated/tech/20180405 How to find files in Linux.md
@@ -1,14 +1,17 @@
如何在 Linux 中查找文件
======
-
-如果你是 Windows 或 OSX 的非高级用户,那么可能使用 GUI 来查找文件。你也可能发现界面限制,令人沮丧,或者两者兼而有之,并学会组织文件并记住它们的确切顺序。你也可以在 Linux 中做到这一点 - 但你不必这样做。
+> 使用简单的命令在 Linux 下基于类型、内容等快速查找文件。
-Linux 的好处之一是它提供了多种方式来处理。你可以打开任何文件管理器或按下 `ctrl` +`f`,你也可以使用程序手动打开文件,或者你可以开始输入字母,它会过滤当前目录列表。
+
+
+如果你是 Windows 或 OSX 的非资深用户,那么可能使用 GUI 来查找文件。你也可能发现界面受限,令人沮丧,或者两者兼而有之,并学会了组织文件并记住它们的确切顺序。你也可以在 Linux 中做到这一点 —— 但你不必这样做。
+
+Linux 的好处之一是它提供了多种方式来处理。你可以打开任何文件管理器或按下 `Ctrl+F`,你也可以使用程序手动打开文件,或者你可以开始输入字母,它会过滤当前目录列表。
![Screenshot of how to find files in Linux with Ctrl+F][2]
-使用 Ctrl+F 在 Linux 中查找文件的截图
+*使用 Ctrl+F 在 Linux 中查找文件的截图*
但是如果你不知道你的文件在哪里,又不想搜索整个磁盘呢?对于这个以及其他各种情况,Linux 都很合适。
@@ -16,7 +19,7 @@ Linux 的好处之一是它提供了多种方式来处理。你可以打开任
如果你习惯随心所欲地放文件,Linux 文件系统看起来会让人望而生畏。对我而言,最难习惯的一件事是找到程序在哪里。
-例如,`which bash` 通常会返回 `/bin/bash`,但是如果你下载了一个程序并且它没有出现在你的菜单中,`which` 命令可以是一个很好的工具。
+例如,`which bash` 通常会返回 `/bin/bash`,但是如果你下载了一个程序并且它没有出现在你的菜单中,那么 `which` 命令就是一个很好的工具。
一个类似的工具是 `locate` 命令,我发现它对于查找配置文件很有用。我不喜欢输入程序名称,因为像 `locate php` 这样的简单程序通常会提供很多需要进一步过滤的结果。
@@ -25,31 +28,29 @@ Linux 的好处之一是它提供了多种方式来处理。你可以打开任
* `man which`
* `man locate`
+### find
+`find` 工具提供了更高级的功能。下面的例子来自我部署在许多服务器上的一个脚本,用来确保符合特定模式的文件(即所谓的 glob)只保留五天,所有更早的文件都会被删除。(这里用带小数的天数表示距上次修改的时间,以容纳最多 240 分钟的偏差。)
-### Find
-
-`find` 工具提供了更先进的功能。以下是我安装在许多服务器上的脚本示例,我用于确保特定模式的文件(也称为 glob)仅存在五天,并且所有早于该文件的文件都将被删除。 (自上次修改以来,十进制用于计算最多 240 分钟的差异)
```
find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \;
-
```
`find` 工具有许多高级用法,但最常见的是对结果执行命令,而不用链式地按照类型、创建日期、修改日期过滤文件。
-find 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。
+`find` 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。
+
```
find / -perm /+x
-
```
有关 `find` 的更多信息,请使用 `man find` 参考 `man` 页面。
-### Grep
+### grep
想根据文件内容查找文件?Linux 也能做到。你可以使用许多 Linux 工具来高效地搜索符合某种模式的文件,而 `grep` 是我经常使用的工具。
-假设你有一个程序发布代码引用和堆栈跟踪的错误消息。你要在日志中找到这些。 grep 不总是最好的方法,但如果文件是一个给定的值,我经常使用 `grep -R`。
+假设你有一个程序发布代码引用和堆栈跟踪的错误消息。你要在日志中找到这些。 `grep` 不总是最好的方法,但如果文件是一个给定的值,我经常使用 `grep -R`。
越来越多的 IDE 正在实现查找功能,但是如果你正在访问远程系统或出于任何原因没有 GUI,或者如果你想在当前目录递归查找,请使用:`grep -R {searchterm} ` 或在支持 `egrep` 别名的系统上,只需将 `-e` 标志添加到命令 `egrep -r {regex-pattern}`。
@@ -62,9 +63,9 @@ find / -perm /+x
via: https://opensource.com/article/18/4/how-find-files-linux
作者:[Lewis Cowles][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 9717d40425f85780385979ac07efb9f82d667272 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 07:10:39 +0800
Subject: [PATCH 125/220] PUB:20180405 How to find files in Linux.md
@geekpi
---
.../tech => published}/20180405 How to find files in Linux.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180405 How to find files in Linux.md (100%)
diff --git a/translated/tech/20180405 How to find files in Linux.md b/published/20180405 How to find files in Linux.md
similarity index 100%
rename from translated/tech/20180405 How to find files in Linux.md
rename to published/20180405 How to find files in Linux.md
From 069fb07755cbfaed8f95a00f685510a3d3240ea2 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Sat, 28 Apr 2018 08:37:54 +0800
Subject: [PATCH 126/220] translated
---
...NG GO PROJECTS WITH DOCKER ON GITLAB CI.md | 157 ------------------
...NG GO PROJECTS WITH DOCKER ON GITLAB CI.md | 155 +++++++++++++++++
2 files changed, 155 insertions(+), 157 deletions(-)
delete mode 100644 sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
create mode 100644 translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
diff --git a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
deleted file mode 100644
index da8a95e5b2..0000000000
--- a/sources/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
+++ /dev/null
@@ -1,157 +0,0 @@
-translating----geekpi
-
-BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI
-===============================================
-
-### Intro
-
-This post is a summary of my research on building Go projects in a Docker container on CI (Gitlab, specifically). I found solving private dependencies quite hard (coming from a Node/.NET background) so that is the main reason I wrote this up. Please feel free to reach out if there are any issues or a submit pull request on the Docker image.
-
-### Dep
-
-As dep is the best option for managing Go dependencies right now, the build will need to run `dep ensure` before building.
-
-Note: I personally do not commit my `vendor/` folder into source control, if you do, I’m not sure if this step can be skipped or not.
-
-The best way to do this with Docker builds is to use `dep ensure -vendor-only`. [See here][1].
-
-### Docker Build Image
-
-I first tried to use `golang:1.10` but this image doesn’t have:
-
-* curl
-
-* git
-
-* make
-
-* dep
-
-* golint
-
-I have created my own Docker image for builds ([github][2] / [dockerhub][3]) which I will keep up to date - but I offer no guarantees so you should probably create and manage your own.
-
-### Internal Dependencies
-
-We’re quite capable of building any project that has publicly accessible dependencies so far. But what about if your project depends on another private gitlab repository?
-
-Running `dep ensure` locally should work with your git setup, but once on CI this doesn’t apply and builds will fail.
-
-### Gitlab Permissions Model
-
-This was [added in Gitlab 8.12][4] and the most useful feature we care about is the `CI_JOB_TOKEN` environment variable made available during builds.
-
-This basically means we can clone [dependent repositories][5] like so
-
-```
-git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo
-
-```
-
-However we do want to make this a bit more user friendly as dep will not magically add credentials when trying to pull code.
-
-We will add this line to the `before_script` section of the `.gitlab-ci.yml`.
-
-```
-before_script:
- - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
-
-```
-
-Using the `.netrc` file allows you to specify which credentials to use for which server. This method allows you to avoid entering a username and password every time you pull (or push) from Git. The password is stored in plaintext so you shouldn’t do this on your own machine. This is actually for `cURL` which Git uses behind the scenes. [Read more here][6].
-
-Project Files
-============================================================
-
-### Makefile
-
-While this is optional, I have found it makes things easier.
-
-Configuring these steps below means in the CI script (and locally) we can run `make lint`, `make build` etc without repeating steps each time.
-
-```
-GOFILES = $(shell find . -name '*.go' -not -path './vendor/*')
-GOPACKAGES = $(shell go list ./... | grep -v /vendor/)
-
-default: build
-
-workdir:
- mkdir -p workdir
-
-build: workdir/scraper
-
-workdir/scraper: $(GOFILES)
- GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o workdir/scraper .
-
-test: test-all
-
-test-all:
- @go test -v $(GOPACKAGES)
-
-lint: lint-all
-
-lint-all:
- @golint -set_exit_status $(GOPACKAGES)
-
-```
-
-### .gitlab-ci.yml
-
-This is where the Gitlab CI magic happens. You may want to swap out the image for your own.
-
-```
-image: sjdweb/go-docker-build:1.10
-
-stages:
- - test
- - build
-
-before_script:
- - cd $GOPATH/src
- - mkdir -p gitlab.com/$CI_PROJECT_NAMESPACE
- - cd gitlab.com/$CI_PROJECT_NAMESPACE
- - ln -s $CI_PROJECT_DIR
- - cd $CI_PROJECT_NAME
- - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
- - dep ensure -vendor-only
-
-lint_code:
- stage: test
- script:
- - make lint
-
-unit_tests:
- stage: test
- script:
- - make test
-
-build:
- stage: build
- script:
- - make
-
-```
-
-### What This Is Missing
-
-I would usually be building a Docker image with my binary and pushing that to the Gitlab Container Registry.
-
-You can see I’m building the binary and exiting, you would at least want to store that binary somewhere (such as a build artifact).
-
---------------------------------------------------------------------------------
-
-via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/
-
-作者:[ SEAN DRUMM][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://seandrumm.co.uk/
-[1]:https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-use-dep-with-docker
-[2]:https://github.com/sjdweb/go-docker-build/blob/master/Dockerfile
-[3]:https://hub.docker.com/r/sjdweb/go-docker-build/
-[4]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html
-[5]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html#dependent-repositories
-[6]:https://github.com/bagder/everything-curl/blob/master/usingcurl-netrc.md
diff --git a/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md b/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
new file mode 100644
index 0000000000..765dd14f33
--- /dev/null
+++ b/translated/tech/20180412 BUILDING GO PROJECTS WITH DOCKER ON GITLAB CI.md
@@ -0,0 +1,155 @@
+在 GITLAB CI 中使用 DOCKER 构建 GO 项目
+===============================================
+
+### 介绍
+
+这篇文章是我对在 CI(具体来说是 Gitlab CI)的 Docker 容器中构建 Go 项目的研究总结。我有 Node/.NET 的背景,发现私有依赖的问题相当难解决,这也是我写这篇文章的主要原因。如果发现任何问题,或者想对这个 Docker 镜像提交合并请求(pull request),请随时联系我。
+
+### Dep
+
+由于 dep 是目前管理 Go 依赖的最佳选择,因此构建之前需要先运行 `dep ensure`。
+
+注意:我个人不会将我的 `vendor/` 文件夹提交到源码控制,如果你这样做,我不确定这个步骤是否可以跳过。
+
+使用 Docker 构建的最好方法是使用 `dep ensure -vendor-only`。 [见这里][1]。
+
+### Docker 构建镜像
+
+我第一次尝试使用 `golang:1.10`,但这个镜像没有:
+
+* curl
+
+* git
+
+* make
+
+* dep
+
+* golint
+
+我为构建创建了自己的 Docker 镜像([github][2] / [dockerhub][3]),并且会持续更新它。但我不提供任何保证,所以你最好还是创建并管理属于自己的镜像。
+
+### 内部依赖关系
+
+到目前为止,我们已经完全能够构建任何只含公开依赖的项目了。但如果你的项目还依赖另一个私有的 gitlab 仓库呢?
+
+在本地,凭借你本机的 git 配置,运行 `dep ensure` 应该没有问题;但这套配置在 CI 上并不存在,构建就会失败。
+
+### Gitlab 权限模型
+
+这一权限模型是在 [Gitlab 8.12 中添加的][4],其中我们最关心的功能,是构建期间提供的 `CI_JOB_TOKEN` 环境变量。
+
+这基本上意味着我们可以像这样克隆[依赖仓库][5]
+
+```
+git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/myuser/mydependentrepo
+
+```
+
+然而,我们希望使这更友好一点,因为 dep 在试图拉取代码时不会奇迹般地添加凭据。
+
+我们将把这一行添加到 `.gitlab-ci.yml` 的 `before_script` 部分。
+
+```
+before_script:
+ - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
+
+```
+
+使用 `.netrc` 文件可以指定哪个凭证用于哪个服务器,这样就能避免每次从 Git 拉取(或推送)时都输入用户名和密码。密码是以明文存储的,所以你不应该在自己的机器上这样做。这个文件实际上是给 Git 在幕后使用的 `cURL` 读取的。[在这里阅读更多][6]。
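+
+上面 `before_script` 中的 `echo` 命令生成的 `~/.netrc` 文件内容大致如下(密码是构建时注入的临时 token,这里用占位符表示):
+
+```
+machine gitlab.com
+login gitlab-ci-token
+password <CI_JOB_TOKEN>
+
+```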
+
+项目文件
+============================================================
+
+### Makefile
+
+虽然这是可选的,但我发现它使事情变得更容易。
+
+配置这些步骤意味着在 CI 脚本(和本地)中,我们可以运行 `make lint`、`make build` 等,而无需每次重复步骤。
+
+```
+GOFILES = $(shell find . -name '*.go' -not -path './vendor/*')
+GOPACKAGES = $(shell go list ./... | grep -v /vendor/)
+
+default: build
+
+workdir:
+ mkdir -p workdir
+
+build: workdir/scraper
+
+workdir/scraper: $(GOFILES)
+ GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o workdir/scraper .
+
+test: test-all
+
+test-all:
+ @go test -v $(GOPACKAGES)
+
+lint: lint-all
+
+lint-all:
+ @golint -set_exit_status $(GOPACKAGES)
+
+```
+
+### .gitlab-ci.yml
+
+这是 Gitlab CI 魔术发生的地方。你可能想使用自己的镜像。
+
+```
+image: sjdweb/go-docker-build:1.10
+
+stages:
+ - test
+ - build
+
+before_script:
+ - cd $GOPATH/src
+ - mkdir -p gitlab.com/$CI_PROJECT_NAMESPACE
+ - cd gitlab.com/$CI_PROJECT_NAMESPACE
+ - ln -s $CI_PROJECT_DIR
+ - cd $CI_PROJECT_NAME
+ - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
+ - dep ensure -vendor-only
+
+lint_code:
+ stage: test
+ script:
+ - make lint
+
+unit_tests:
+ stage: test
+ script:
+ - make test
+
+build:
+ stage: build
+ script:
+ - make
+
+```
+
+### 缺少了什么
+
+我通常还会把这个二进制文件构建成一个 Docker 镜像,并推送到 Gitlab 的容器镜像库(Container Registry)中。
+
+你可以看到,这里只是构建出二进制文件就结束了;你至少应该把这个二进制文件保存到某个地方(比如作为构建产物)。
+
+--------------------------------------------------------------------------------
+
+via: https://seandrumm.co.uk/blog/building-go-projects-with-docker-on-gitlab-ci/
+
+作者:[ SEAN DRUMM][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://seandrumm.co.uk/
+[1]:https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-use-dep-with-docker
+[2]:https://github.com/sjdweb/go-docker-build/blob/master/Dockerfile
+[3]:https://hub.docker.com/r/sjdweb/go-docker-build/
+[4]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html
+[5]:https://docs.gitlab.com/ce/user/project/new_ci_build_permissions_model.html#dependent-repositories
+[6]:https://github.com/bagder/everything-curl/blob/master/usingcurl-netrc.md
From b22adaf3f9d7769f640a329e35024ea4c541413b Mon Sep 17 00:00:00 2001
From: geekpi
Date: Sat, 28 Apr 2018 08:40:44 +0800
Subject: [PATCH 127/220] translating
---
...3 Easily Install Android Studio in Ubuntu And Linux Mint.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
index d5e33425b2..7fb5b4facc 100644
--- a/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
+++ b/sources/tech/20140403 Easily Install Android Studio in Ubuntu And Linux Mint.md
@@ -1,3 +1,6 @@
+translating---geekpi
+
+
Easily Install Android Studio in Ubuntu And Linux Mint
======
[Android Studio][1], Google’s own IDE for Android development, is a nice alternative to Eclipse with ADT plugin. Android Studio can be installed from its source code but in this quick post, we shall see **how to install Android Studio in Ubuntu 18.04, 16.04 and corresponding Linux Mint variants**.
From 35e8c157014bed4ee52ae3328a2519482e4bcc33 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 12:53:25 +0800
Subject: [PATCH 128/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Enhance=20your=20?=
=?UTF-8?q?Python=20with=20an=20interactive=20shell?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...e your Python with an interactive shell.md | 93 +++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 sources/tech/20180425 Enhance your Python with an interactive shell.md
diff --git a/sources/tech/20180425 Enhance your Python with an interactive shell.md b/sources/tech/20180425 Enhance your Python with an interactive shell.md
new file mode 100644
index 0000000000..19826d142b
--- /dev/null
+++ b/sources/tech/20180425 Enhance your Python with an interactive shell.md
@@ -0,0 +1,93 @@
+Enhance your Python with an interactive shell
+======
+
+The Python programming language has become one of the most popular languages used in IT. One reason for this success is it can be used to solve a variety of problems. From web development to data science, machine learning to task automation, the Python ecosystem is rich in popular frameworks and libraries. This article presents some useful Python shells available in the Fedora packages collection to make development easier.
+
+### Python Shell
+
+The Python Shell lets you use the interpreter in an interactive mode. It’s very useful for testing code or trying a new library. In Fedora you can invoke the default shell by typing `python3` in a terminal session. Some more advanced and enhanced shells are available in Fedora, though.
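+
+For example, you can evaluate expressions and import modules on the fly. Here is a minimal illustrative session:
+
+```
+$ python3
+>>> import math
+>>> math.sqrt(2)
+1.4142135623730951
+
+```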
+
+### IPython
+
+IPython provides many useful enhancements to the Python shell. Examples include tab completion, object introspection, system shell access and command history retrieval. Many of these features are also used by the [Jupyter Notebook][1], since it uses IPython underneath.
+
+#### Install and run IPython
+```
+dnf install ipython3
+ipython3
+
+```
+
+Using tab completion prompts you with possible choices. This feature comes in handy when you use an unfamiliar library.
+
+![][2]
+
+If you need more information, consult the documentation by typing the `?` command. For even more details, you can use the `??` command.
+
+![][3]
+
+Another cool feature is the ability to execute a system shell command using the ! character. The result of the command can then be referenced in the IPython shell.
+
+![][4]
+
+A comprehensive list of IPython features is available in the [official documentation][5].
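+
+As a quick illustration, here is a short, hypothetical session combining the features above (output omitted):
+
+```
+In [1]: import os
+
+In [2]: files = !ls *.py     # capture shell output in a Python variable
+
+In [3]: os.path.exists?      # append ? to display the docstring
+
+```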
+
+### bpython
+
+bpython doesn’t do as much as IPython, but nonetheless it provides a useful set of features in a simple and lightweight package. Among other features, bpython provides:
+
+ * In-line syntax highlighting
+ * Autocomplete with suggestions as you type
+ * Expected parameter list
+ * Ability to send or save code to a pastebin service or file
+
+
+
+#### Install and run bpython
+```
+dnf install bpython3
+bpython3
+
+```
+
+As you type, bpython offers you choices to autocomplete your code.
+
+![][6]
+
+When you call a function or method, the expected parameters and the docstring are automatically displayed.
+
+![][7]
+
+Another neat feature is the ability to open the current bpython session in an external editor (Vim by default) using the function key F7. This is very useful when testing more complex programs.
+
+For more details about configuration and features, consult the bpython [documentation][8].
+
+### Conclusion
+
+Using an enhanced Python shell is a good way to increase productivity. It gives you enhanced features to write a quick prototype or try out a new library. Are you using an enhanced Python shell? Feel free to mention it in the comment section below.
+
+Photo by [David Clode][9] on [Unsplash][10]
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/enhance-python-interactive-shell/
+
+作者:[Clément Verna][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/cverna/
+[1]:https://ipython.org/notebook.html
+[2]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython-tabcompletion.png
+[3]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipyhton_doc1.png
+[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/ipython_shell.png
+[5]:https://ipython.readthedocs.io/en/stable/overview.html#main-features-of-the-interactive-shell
+[6]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython1.png
+[7]:https://fedoramagazine.org/wp-content/uploads/2018/03/bpython2.png
+[8]:https://docs.bpython-interpreter.org/
+[9]:https://unsplash.com/photos/d0CasEMHDQs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[10]:https://unsplash.com/search/photos/python?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
From 6e420c8152e4d38953af914e03f19f47e9e5b8ff Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 12:55:30 +0800
Subject: [PATCH 129/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Get=20started=20w?=
=?UTF-8?q?ith=20Pidgin:=20An=20open=20source=20replacement=20for=20Skype?=
=?UTF-8?q?=20for=20Business?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...urce replacement for Skype for Business.md | 63 +++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100644 sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md
diff --git a/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md b/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md
new file mode 100644
index 0000000000..9521ac92e2
--- /dev/null
+++ b/sources/talk/20180426 Get started with Pidgin- An open source replacement for Skype for Business.md
@@ -0,0 +1,63 @@
+Get started with Pidgin: An open source replacement for Skype for Business
+======
+
+
+Technology is at an interesting crossroads, where Linux rules the server landscape but Microsoft rules the enterprise desktop. Office 365, Skype for Business, Microsoft Teams, OneDrive, Outlook... the list goes on of Microsoft software and services that dominate the enterprise workspace.
+
+What if you could replace that proprietary software with free and open source applications and make them work with an Office 365 backend you have no choice but to use? Buckle up, because that is exactly what we are going to do with Pidgin, an open source replacement for Skype.
+
+### Installing Pidgin and SIPE
+
+Microsoft's Office Communicator became Microsoft Lync which became what we know today as Skype for Business. There are [pay software options][1] for Linux that provide feature parity with Skype for Business, but [Pidgin][2] is a fully free and open source option licensed under the GNU GPL.
+
+Pidgin can be found in just about every Linux distro's repository, so getting your hands on it should not be a problem. The only Skype feature that won't work with Pidgin is screen sharing, and file sharing can be a bit hit or miss—but there are ways to work around it.
+
+You also need a [SIPE][3] plugin, as it's part of the secret sauce to make Pidgin work as a Skype for Business replacement. Please note that the `sipe` library has different names in different distros. For example, the library's name on System76's Pop_OS! is `pidgin-sipe` while in the Solus 3 repo it is simply `sipe`.
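+
+For example, on an Ubuntu-based distro such as System76's Pop_OS!, installing both packages might look like this (a sketch; adjust the package names for your distro, as noted above):
+
+```
+sudo apt install pidgin pidgin-sipe
+
+```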
+
+With the prerequisites out of the way, you can begin configuring Pidgin.
+
+### Configuring Pidgin
+
+When firing up Pidgin for the first time, click on **Add** to add a new account. In the Basic tab (shown in the screenshot below), select **Office Communicator** in the **Protocol** drop-down, then type your **business email address** in the **Username** field.
+
+
+
+Next, click on the Advanced tab. In the **Server[:Port]** field enter **sipdir.online.lync.com:443** and in **User Agent** enter **UCCAPI/16.0.6965.5308 OC/16.0.6965.2117**.
+
+Your Advanced tab should now look like this:
+
+
+
+You shouldn't need to make any changes to the Proxy tab or the Voice and Video tab. Just to be certain, make sure **Proxy type** is set to **Use Global Proxy Settings** and in the Voice and Video tab, the **Use silence suppression** checkbox is **unchecked**.
+
+
+
+
+
+After you've completed those configurations, click **Add**, and you'll be prompted for your email account's password.
+
+### Adding contacts
+
+To add contacts to your buddy list, click on **Manage Accounts** in the **Buddy Window**. Hover over your account and select **Contact Search** to look up your colleagues. If you run into any problems when searching by first and last name, try searching with your colleague's full email address, and you should always get the right person.
+
+You are now up and running with a Skype for Business replacement that gives you about 98% of the functionality you need to banish the proprietary option from your desktop.
+
+Ray Shimko will be speaking about [Linux in a Microsoft World][4] at [LinuxFest NW][5] April 28-29. See program highlights or register to attend.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/pidgin-open-source-replacement-skype-business
+
+作者:[Ray Shimko][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/shickmo
+[1]:https://tel.red/linux.php
+[2]:https://pidgin.im/
+[3]:http://sipe.sourceforge.net/
+[4]:https://www.linuxfestnorthwest.org/conferences/lfnw18/program/proposals/32
+[5]:https://www.linuxfestnorthwest.org/conferences/lfnw18
From c74f1e3f09b8d59d1346e55143b75486534576de Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 12:59:41 +0800
Subject: [PATCH 130/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Things=20to=20do?=
=?UTF-8?q?=20After=20Installing=20Ubuntu=2018.04?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ngs to do After Installing Ubuntu 18.04.md | 294 ++++++++++++++++++
1 file changed, 294 insertions(+)
create mode 100644 sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md
diff --git a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md
new file mode 100644
index 0000000000..4e69d04837
--- /dev/null
+++ b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md
@@ -0,0 +1,294 @@
+Things to do After Installing Ubuntu 18.04
+======
+**Brief: This list of things to do after installing Ubuntu 18.04 helps you get started with Bionic Beaver for a smoother desktop experience.**
+
+[Ubuntu][1] 18.04 Bionic Beaver releases today. You are perhaps already aware of the [new features in Ubuntu 18.04 LTS][2] release. If not, here’s the video review of Ubuntu 18.04 LTS:
+
+[Subscribe to YouTube Channel for more Ubuntu Videos][3]
+
+If you opted to install Ubuntu 18.04, I have listed out a few recommended steps that you can follow to get started with it.
+
+### Things to do after installing Ubuntu 18.04 Bionic Beaver
+
+![Things to do after installing Ubuntu 18.04][4]
+
+I should mention that the list of things to do after installing Ubuntu 18.04 depends a lot on you and your interests and needs. If you are a programmer, you’ll focus on installing programming tools. If you are a graphic designer, you’ll focus on installing graphics tools.
+
+Still, there are a few things that should be applicable to most Ubuntu users. This list is composed of those things plus a few of my favorites.
+
+Also, this list is for the default [GNOME desktop][5]. If you are using some other flavor like [Kubuntu][6], Lubuntu etc then the GNOME-specific stuff won’t be applicable to your system.
+
+You don’t have to follow each and every point on the list blindly. You should see if the recommended action suits your requirements or not.
+
+With that said, let’s get started with this list of things to do after installing Ubuntu 18.04.
+
+#### 1\. Update the system
+
+This is the first thing you should do after installing Ubuntu. Update the system without fail. It may sound strange because you just installed a fresh OS but still, you must check for the updates.
+
+In my experience, if you don’t update the system right after installing Ubuntu, you might face issues while trying to install a new program.
+
+To update Ubuntu 18.04, press Super Key (Windows Key) to launch the Activity Overview and look for Software Updater. Run it to check for updates.
+
+![Software Updater in Ubuntu 17.10][7]
+
+**Alternatively**, you can use these famous commands in the terminal (use Ctrl+Alt+T):
+```
+sudo apt update && sudo apt upgrade
+
+```
+
+#### 2\. Enable additional repositories for more software
+
+[Ubuntu has several repositories][8] from where it provides software for your system. These repositories are:
+
+ * Main – Free and open-source software supported by Ubuntu team
+ * Universe – Free and open-source software maintained by the community
+ * Restricted – Proprietary drivers for devices.
+ * Multiverse – Software restricted by copyright or legal issues.
+ * Canonical Partners – Software packaged by Ubuntu for their partners
+
+
+
+Enabling all these repositories will give you access to more software and proprietary drivers.
+
+Go to Activity Overview by pressing Super Key (Windows key), and search for Software & Updates:
+
+![Software and Updates in Ubuntu 17.10][9]
+
+Under the Ubuntu Software tab, make sure the Main, Universe, Restricted and Multiverse repositories are all checked.
+
+![Setting repositories in Ubuntu 18.04][10]
+
+Now move to the **Other Software** tab, check the option of **Canonical Partners**.
+
+![Enable Canonical Partners repository in Ubuntu 17.10][11]
+
+You’ll have to enter your password in order to update the software sources. Once it completes, you’ll find more applications to install in the Software Center.
+
+#### 3\. Install media codecs
+
+In order to play media files like MP3, MPEG4, AVI etc., you’ll need to install media codecs. Ubuntu has them in its repositories but doesn’t install them by default because of copyright issues in various countries.
+
+As an individual, you can install these media codecs easily using the Ubuntu Restricted Extra package. Click on the link below to install it from the Software Center.
+
+[Install Ubuntu Restricted Extras][12]
+
+Or alternatively, use the command below to install it:
+```
+sudo apt install ubuntu-restricted-extras
+
+```
+
+#### 4\. Install software from the Software Center
+
+Now that you have setup the repositories and installed the codecs, it is time to get software. If you are absolutely new to Ubuntu, please follow this [guide to installing software in Ubuntu][13].
+
+There are several ways to install software. The most convenient way is to use the Software Center that has thousands of software available in various categories. You can install them in a few clicks from the software center.
+
+![Software Center in Ubuntu 17.10 ][14]
+
+It depends on you what kind of software you would like to install. I’ll suggest some of my favorites here.
+
+ * **VLC** – media player for videos
+ * **GIMP** – Photoshop alternative for Linux
+ * **Pinta** – Paint alternative in Linux
+ * **Calibre** – eBook management tool
+ * **Chromium** – Open Source web browser
+ * **Kazam** – Screen recorder tool
+ * [**Gdebi**][15] – Lightweight package installer for .deb packages
+ * **Spotify** – For streaming music
+ * **Skype** – For video messaging
+ * **Kdenlive** – [Video editor for Linux][16]
+ * **Atom** – [Code editor][17] for programming
+
+
+
+You may also refer to this list of [must-have Linux applications][18] for more software recommendations.
+
+#### 5\. Install software from the Web
+
+Though Ubuntu has thousands of applications in the software center, you may not find some of your favorite applications despite the fact that they support Linux.
+
+Many software vendors provide ready-to-install .deb packages. You can download these .deb files from their websites and install them by double-clicking.
+
+[Google Chrome][19] is one such software that you can download from the web and install.
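+
+If you prefer the terminal, you can install the same .deb with apt; the file name below is just an example of what the downloaded package may be called:
+
+```
+sudo apt install ./google-chrome-stable_current_amd64.deb
+
+```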
+
+#### 6\. Opt out of data collection in Ubuntu 18.04 (optional)
+
+Ubuntu 18.04 collects some harmless statistics about your system hardware and your system installation preference. It also collects crash reports.
+
+You’ll be given the option to not send this data to Ubuntu servers when you log in to Ubuntu 18.04 for the first time.
+
+![Opt out of data collection in Ubuntu 18.04][20]
+
+If you miss it that time, you can disable it by going to System Settings -> Privacy and then set the Problem Reporting to Manual.
+
+![Privacy settings in Ubuntu 18.04][21]
+
+#### 7\. Customize the GNOME desktop (Dock, themes, extensions and more)
+
+The GNOME desktop looks good in Ubuntu 18.04, but that doesn’t mean you cannot change it.
+
+You can do a few visual changes from the System Settings. You can change the wallpaper of the desktop and the lock screen, you can change the position of the dock (launcher on the left side), change power settings, Bluetooth etc. In short, you can find many settings that you can change as per your need.
+
+![Ubuntu 17.10 System Settings][22]
+
+Changing themes and icons are the major way to change the looks of your system. I advise going through the list of [best GNOME themes][23] and [icons for Ubuntu][24]. Once you have found the theme and icon of your choice, you can use them with GNOME Tweaks tool.
+
+You can install GNOME Tweaks via the Software Center or you can use the command below to install it:
+```
+sudo apt install gnome-tweak-tool
+
+```
+
+Once it is installed, you can easily [install new themes and icons][25].
+
+![Change theme is one of the must to do things after installing Ubuntu 17.10][26]
+
+You should also have a look at [use GNOME extensions][27] to further enhance the looks and capabilities of your system. I made this video about using GNOME extensions in 17.10 and you can follow the same for Ubuntu 18.04.
+
+If you are wondering which extension to use, do take a look at this list of [best GNOME extensions][28].
+
+I also recommend reading this article on [GNOME customization in Ubuntu][29] so that you can know the GNOME desktop in detail.
+
+#### 8\. Prolong your battery and prevent overheating
+
+Let’s move on to [preventing overheating in Linux laptops][30]. TLP is a wonderful tool that controls CPU temperature and extends your laptop’s battery life in the long run.
+
+Make sure that you haven’t installed any other power saving application such as [Laptop Mode Tools][31]. You can install it using the command below in a terminal:
+```
+sudo apt install tlp tlp-rdw
+
+```
+
+Once installed, run the command below to start it:
+```
+sudo tlp start
+
+```
+
+#### 9\. Save your eyes with Nightlight
+
+Nightlight is my favorite feature in the GNOME desktop. Keeping [your eyes safe at night][32] from the computer screen is very important. Reducing blue light helps reduce eye strain at night.
+
+![flux effect][33]
+
+GNOME provides a built-in Night Light option, which you can activate in the System Settings.
+
+Just go to System Settings-> Devices-> Displays and turn on the Night Light option.
+
+![Enabling night light is a must to do in Ubuntu 17.10][34]
+
+#### 10\. Disable automatic suspend for laptops
+
+Ubuntu 18.04 comes with a new automatic suspend feature for laptops. If the system is running on battery and is inactive for 20 minutes, it will go into suspend mode.
+
+I understand that the intention is to save battery life but it is an inconvenience as well. You can’t keep the power plugged in all the time because it’s not good for the battery life. And you may need the system to be running even when you are not using it.
+
+Thankfully, you can change this behavior. Go to System Settings -> Power. Under Suspend & Power Button section, either turn off the Automatic Suspend option or extend its time period.
+
+![Disable automatic suspend in Ubuntu 18.04][35]
+
+You can also change the screen dimming behavior in here.
+
+#### 11\. System cleaning
+
+I have written in detail about [how to clean up your Ubuntu system][36]. I recommend reading that article to know various ways to keep your system free of junk.
+
+Normally, you can use this little command to free up space from your system:
+```
+sudo apt autoremove
+
+```
+
+It’s a good idea to run this command once in a while. If you don’t like the command line, you can use a GUI tool like [Stacer][37] or [Bleach Bit][38].
+
+#### 12\. Going back to Unity or Vanilla GNOME (not recommended)
+
+If you have been using Unity or GNOME in the past, you may not like the new customized GNOME desktop in Ubuntu 18.04. Ubuntu has customized GNOME so that it resembles Unity but at the end of the day, it is neither completely Unity nor completely GNOME.
+
+So if you are a hardcore Unity or GNOME fan, you may want to use your favorite desktop in its ‘real’ form. I wouldn’t recommend it, but if you insist, here are some tutorials for you:
+
+#### 13\. Can’t log in to Ubuntu 18.04 after incorrect password? Here’s a workaround
+
+I noticed a [little bug in Ubuntu 18.04][39] while trying to change the desktop session to Ubuntu Community theme. It seems if you try to change the sessions at the login screen, it rejects your password first and at the second attempt, the login gets stuck. You can wait for 5-10 minutes to get it back or force power it off.
+
+The workaround here is that after it displays the incorrect password message, click Cancel, then click your name, then enter your password again.
+
+#### 14\. Experience the Community theme (optional)
+
+Ubuntu 18.04 was supposed to ship with a dashing new theme developed by the community. The theme could not be completed in time, so it did not become the default look of the Bionic Beaver release. I am guessing that it will be the default theme in Ubuntu 18.10.
+
+![Ubuntu 18.04 Communitheme][40]
+
+You can try out the aesthetic theme even today. [Installing Ubuntu Community Theme][41] is very easy. Just look for it in the software center, install it, restart your system and then at the login choose the Communitheme session.
+
+#### 15\. Get Windows 10 in Virtual Box (if you need it)
+
+In a situation where you must use Windows for some reasons, you can [install Windows in virtual box inside Linux][42]. It will run as a regular Ubuntu application.
+
+It’s not the best way but it still gives you an option. You can also [use WINE to run Windows software on Linux][43]. In both cases, I suggest trying the alternative native Linux application first before jumping to virtual machine or WINE.
+
+#### What do you do after installing Ubuntu?
+
+Those were my suggestions for getting started with Ubuntu. There are many more tutorials that you can find under [Ubuntu 18.04][44] tag. You may go through them as well to see if there is something useful for you.
+
+Enough from my side. Your turn now. What are the items on your list of **things to do after installing Ubuntu 18.04**? The comment section is all yours.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:https://www.ubuntu.com/
+[2]:https://itsfoss.com/ubuntu-18-04-release-features/
+[3]:https://www.youtube.com/c/itsfoss?sub_confirmation=1
+[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/things-to-after-installing-ubuntu-18-04-featured-800x450.jpeg
+[5]:https://www.gnome.org/
+[6]:https://kubuntu.org/
+[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-update-ubuntu-17-10.jpg
+[8]:https://help.ubuntu.com/community/Repositories/Ubuntu
+[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-updates-ubuntu-17-10.jpg
+[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/repositories-ubuntu-18.png
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-repository-ubuntu-17-10.jpeg
+[12]:apt://ubuntu-restricted-extras
+[13]:https://itsfoss.com/remove-install-software-ubuntu/
+[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Ubuntu-software-center-17-10-800x551.jpeg
+[15]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
+[16]:https://itsfoss.com/best-video-editing-software-linux/
+[17]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
+[18]:https://itsfoss.com/essential-linux-applications/
+[19]:https://www.google.com/chrome/
+[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/opt-out-of-data-collection-ubuntu-18-800x492.jpeg
+[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/privacy-ubuntu-18-04-800x417.png
+[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/System-Settings-Ubuntu-17-10-800x573.jpeg
+[23]:https://itsfoss.com/best-gtk-themes/
+[24]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
+[25]:https://itsfoss.com/install-themes-ubuntu/
+[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/GNOME-Tweak-Tool-Ubuntu-17-10.jpeg
+[27]:https://itsfoss.com/gnome-shell-extensions/
+[28]:https://itsfoss.com/best-gnome-extensions/
+[29]:https://itsfoss.com/gnome-tricks-ubuntu/
+[30]:https://itsfoss.com/reduce-overheating-laptops-linux/
+[31]:https://wiki.archlinux.org/index.php/Laptop_Mode_Tools
+[32]:https://itsfoss.com/night-shift-flux-ubuntu-linux/
+[33]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/03/flux-eyes-strain.jpg
+[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Enable-Night-Light-Feature-Ubuntu-17-10-800x396.jpeg
+[35]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/disable-automatic-suspend-ubuntu-18-800x586.jpeg
+[36]:https://itsfoss.com/free-up-space-ubuntu-linux/
+[37]:https://itsfoss.com/optimize-ubuntu-stacer/
+[38]:https://itsfoss.com/bleachbit-2-release/
+[39]:https://gitlab.gnome.org/GNOME/gnome-shell/issues/227
+[40]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubunt-18-theme.jpeg
+[41]:https://itsfoss.com/ubuntu-community-theme/
+[42]:https://itsfoss.com/install-windows-10-virtualbox-linux/
+[43]:https://itsfoss.com/use-windows-applications-linux/
+[44]:https://itsfoss.com/tag/ubuntu-18-04/
From ebd2fdfc580884f4b802e39406b83772d2ba3318 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:02:09 +0800
Subject: [PATCH 131/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Using=20machine?=
=?UTF-8?q?=20learning=20to=20color=20cartoons?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...sing machine learning to color cartoons.md | 125 ++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 sources/tech/20180425 Using machine learning to color cartoons.md
diff --git a/sources/tech/20180425 Using machine learning to color cartoons.md b/sources/tech/20180425 Using machine learning to color cartoons.md
new file mode 100644
index 0000000000..32efd931e6
--- /dev/null
+++ b/sources/tech/20180425 Using machine learning to color cartoons.md
@@ -0,0 +1,125 @@
+Using machine learning to color cartoons
+======
+
+
+A big problem with supervised machine learning is the need for huge amounts of labeled data. It's a big problem especially if you don't have the labeled data—and even in a world awash with big data, most of us don't.
+
+Although a few companies have access to enormous quantities of certain kinds of labeled data, for most organizations and many applications, creating sufficient quantities of the right kind of labeled data is cost prohibitive or impossible. Sometimes the domain is one in which there just isn't much data (for example, when diagnosing a rare disease or determining whether a signature matches a few known exemplars). Other times the volume of data needed multiplied by the cost of human labeling by [Amazon Turkers][1] or summer interns is just too high. Paying to label every frame of a movie-length video adds up fast, even at a penny a frame.
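+
+(To put a number on that: at 24 frames per second, a 90-minute film is roughly 130,000 frames, so labeling it would cost about $1,300 even at a penny per frame.)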
+
+### The big problem of big data requirements
+
+The specific problem our group set out to solve was: Can we train a model to automate applying a simple color scheme to a black and white character without hand-drawing hundreds or thousands of examples as training data?
+
+In this experiment (which we called DragonPaint), we confronted the problem of deep learning's enormous labeled-data requirements using:
+
+ * A rule-based strategy for extreme augmentation of small datasets
+ * A borrowed TensorFlow image-to-image translation model, Pix2Pix, to automate cartoon coloring with very limited training data
+
+I had seen [Pix2Pix][2], a machine learning image-to-image translation model described in a paper ("Image-to-Image Translation with Conditional Adversarial Networks," by Isola, et al.), that colorizes landscapes after training on AB pairs where A is the grayscale version of landscape B. My problem seemed similar. The only problem was training data.
+
+I needed the training data to be very limited because I didn't want to draw and color a lifetime supply of cartoon characters just to train the model. The tens of thousands (or hundreds of thousands) of examples often required by deep-learning models were out of the question.
+
+Based on Pix2Pix's examples, we would need at least 400 to 1,000 sketch/colored pairs. How many was I willing to draw? Maybe 30. I drew a few dozen cartoon flowers and dragons and asked whether I could somehow turn this into a training set.
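+
+(The arithmetic works out: a few dozen drawings, each augmented 10 to 30 times, lands in the neighborhood of the 400 to 1,000 pairs Pix2Pix needs.)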
+
+### The 80% solution: color by component
+
+
+![Characters colored by component rules][4]
+
+Characters colored by component rules
+
+When faced with a shortage of training data, the first question to ask is whether there is a good non-machine-learning based approach to our problem. If there's not a complete solution, is there a partial solution, and would a partial solution do us any good? Do we even need machine learning to color flowers and dragons? Or can we specify geometric rules for coloring?
+
+
+![How to color by components][6]
+
+How to color by components
+
+There is a non-machine-learning approach to solving my problem. I could tell a kid how I want my drawings colored: Make the flower's center orange and the petals yellow. Make the dragon's body orange and the spikes yellow.
+
+At first, that doesn't seem helpful because our computer doesn't know what a center or a petal or a body or a spike is. But it turns out we can define the flower or dragon parts in terms of connected components and get a geometric solution for coloring about 80% of our drawings. Although 80% isn't enough, we can bootstrap from that partial-rule-based solution to 100% using strategic rule-breaking transformations, augmentations, and machine learning.
+
+Connected components are what is colored when you use Windows Paint (or a similar application). For example, when coloring a binary black and white image, if you click on a white pixel, the white pixels that can be reached without crossing over black are colored the new color. In a "rule-conforming" cartoon dragon or flower sketch, the biggest white component is the background. The next biggest is the body (plus the arms and legs) or the flower's center. The rest are spikes or petals, except for the dragon's eye, which can be distinguished by its distance from the background.
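+
+To make the rule concrete, here is a minimal sketch of component-based coloring (the dragon's eye special case is omitted). It assumes a binary image where line work is 0 and regions are 1; the function name, the colors, and the use of SciPy are illustrative choices, not DragonPaint's actual code:
+```
+import numpy as np
+from scipy import ndimage
+
+def color_by_component(binary):
+    # Label the connected white regions (4-connectivity by default).
+    labels, n = ndimage.label(binary)
+    out = np.zeros(binary.shape + (3,), dtype=np.uint8)
+    if n == 0:
+        return out
+    # Rank components by pixel count, biggest first.
+    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
+    order = np.argsort(sizes)[::-1] + 1
+    out[labels == order[0]] = (255, 255, 255)    # biggest: background
+    if n > 1:
+        out[labels == order[1]] = (255, 128, 0)  # next: body or flower center
+    for lbl in order[2:]:
+        out[labels == lbl] = (255, 255, 0)       # the rest: spikes or petals
+    return out
+
+```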
+
+### Using strategic rule breaking and Pix2Pix to get to 100%
+
+Some of my sketches aren't rule-conforming. A sloppily drawn line might leave a gap. A back limb will get colored like a spike. A small, centered daisy will swap the coloring rules for a petal and the center.
+
+For the 20% we couldn't color with the geometric rules, we needed something else. We turned to Pix2Pix, which requires a minimum training set of 400 to 1,000 sketch/colored pairs (i.e., the smallest training sets in the [Pix2Pix paper][7]) including rule-breaking pairs.
+
+So, for each rule-breaking example, we finished the coloring by hand (e.g., back limbs) or took a few rule-abiding sketch/colored pairs and broke the rule. We erased a bit of a line in A or we transformed a fat, centered flower pair A and B with the same function (f) to create a new pair f(A) and f(B)—a small, centered flower. That got us to a training set.
+
+### Extreme augmentations with Gaussian filters and homeomorphisms
+
+It's common in computer vision to augment an image training set with geometric transformations, such as rotation, translation, and zoom.
+
+But what if we need to turn sunflowers into daisies or make a dragon's nose bulbous or pointy?
+
+Or what if we just need an enormous increase in data volume without overfitting? Here we need a dataset 10 to 30 times larger than what we started with.
+
+![Sunflower turned into a daisy with r -> r cubed][9]
+
+Sunflower turned into a daisy with r -> r cubed
+
+![Gaussian filter augmentations][11]
+
+Gaussian filter augmentations
+
+Certain homeomorphisms of the unit disk make good daisies (e.g., r -> r cubed) and Gaussian filters change a dragon's nose. Both were extremely useful for creating augmentations for our dataset and produced the augmentation volume we needed, but they also started to change the style of the drawings in ways that an [affine transformation][12] could not.
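+
+As a rough illustration (not the DragonPaint source), the sketch below remaps the unit disk with r -> r**power and optionally applies a Gaussian filter; the same warp is applied to both images of a sketch/colored pair so the result is still a valid, aligned training pair:
+```
+import numpy as np
+from scipy import ndimage
+
+def radial_warp(img, power=3.0):
+    # Remap radii with r -> r**power while keeping angles fixed.
+    h, w = img.shape
+    y, x = np.mgrid[0:h, 0:w].astype(float)
+    xn, yn = (x - w / 2) / (w / 2), (y - h / 2) / (h / 2)
+    r, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
+    # Invert the forward map to find the source pixel for each output pixel.
+    r_src = np.clip(r, 0, 1) ** (1.0 / power)
+    xs = r_src * np.cos(theta) * (w / 2) + w / 2
+    ys = r_src * np.sin(theta) * (h / 2) + h / 2
+    return ndimage.map_coordinates(img, [ys, xs], order=1, mode="nearest")
+
+def augment_pair(A, B, power=3.0, sigma=0.0):
+    # Transform sketch A and coloring B identically so they stay a pair.
+    A2, B2 = radial_warp(A, power), radial_warp(B, power)
+    if sigma:
+        A2, B2 = (ndimage.gaussian_filter(im, sigma) for im in (A2, B2))
+    return A2, B2
+
+```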
+
+This inspired questions beyond how to automate a simple coloring scheme: What defines an artist's style, either to an outside viewer or the artist? When does an artist adopt as their own a drawing they could not have made without the algorithm? When does the subject matter become unrecognizable? What's the difference between a tool, an assistant, and a collaborator?
+
+### How far can we go?
+
+How little can we draw for input and how much variation and complexity can we create while staying within a subject and style recognizable as the artist's? What would we need to do to make an infinite parade of giraffes or dragons or flowers? And if we had one, what could we do with it?
+
+Those are questions we'll continue to explore in future work.
+
+But for now, the rules, augmentations, and Pix2Pix model worked. We can color flowers really well, and the dragons aren't bad.
+
+
+![Results: flowers colored by model trained on flowers][14]
+
+Results: Flowers colored by model trained on flowers
+
+
+![Results: dragons colored by model trained on dragons][16]
+
+Results: Dragons colored by model trained on dragons
+
+To learn more, attend Gretchen Greene's talk, [DragonPaint – bootstrapping small data to color cartoons][17], at [PyCon Cleveland 2018][18].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/dragonpaint-bootstrapping
+
+作者:[K. Gretchen Greene][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/kggreene
+[1]:https://www.mturk.com/
+[2]:https://phillipi.github.io/pix2pix/
+[3]:/file/393246
+[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint2.png?itok=qw_q72A5 (Characters colored by component rules)
+[5]:/file/393251
+[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint3.png?itok=JK3TPcvp (How to color by components)
+[7]:https://arxiv.org/abs/1611.07004
+[8]:/file/393261
+[9]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint5.png?itok=GvipU8l8 (Sunflower turned into a daisy with r -> r cubed)
+[10]:/file/393266
+[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint6.png?itok=r14I2Fyz (Gaussian filter augmentations)
+[12]:https://en.wikipedia.org/wiki/Affine_transformation
+[13]:/file/393271
+[14]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint7.png?itok=xKWvyi_T (Results: flowers colored by model trained on flowers)
+[15]:/file/393276
+[16]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint8.png?itok=fSM5ovBT (Results: dragons colored by model trained on dragons)
+[17]:https://us.pycon.org/2018/schedule/presentation/113/
+[18]:https://us.pycon.org/2018/
From 8c581ea40fd7cf6197a45999c526f6b9583786d4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:03:27 +0800
Subject: [PATCH 132/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Configuring=20loc?=
=?UTF-8?q?al=20storage=20in=20Linux=20with=20Stratis?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ing local storage in Linux with Stratis.md | 79 +++++++++++++++++++
1 file changed, 79 insertions(+)
create mode 100644 sources/tech/20180425 Configuring local storage in Linux with Stratis.md
diff --git a/sources/tech/20180425 Configuring local storage in Linux with Stratis.md b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md
new file mode 100644
index 0000000000..fc9471c88f
--- /dev/null
+++ b/sources/tech/20180425 Configuring local storage in Linux with Stratis.md
@@ -0,0 +1,79 @@
+Configuring local storage in Linux with Stratis
+======
+
+
+Configuring local storage is something desktop Linux users do very infrequently—maybe only once, during installation. Linux storage tech moves slowly, and many storage tools used 20 years ago are still used regularly today. But some things have improved since then. Why aren't people taking advantage of these new capabilities?
+
+This article is about Stratis, a new project that aims to bring storage advances to all Linux users, from the laptop with a single SSD to the hundred-disk array. Linux has the capabilities, but its lack of an easy-to-use solution has hindered widespread adoption. Stratis's goal is to make Linux's advanced storage features accessible.
+
+### Simple, reliable access to advanced storage features
+
+Stratis aims to make three things easier: initial configuration of storage; making later changes; and using advanced storage features like snapshots, thin provisioning, and even tiering.
+
+### Stratis: a volume-managing filesystem
+
+Stratis is a volume-managing filesystem (VMF) like [ZFS][1] and [Btrfs][2]. It starts with the central idea of a storage "pool," an idea common to VMFs and also standalone volume managers such as [LVM][3]. This pool is created from one or more local disks (or partitions), and volumes are created from the pool. Their exact layout is not specified by the user, unlike traditional disk partitioning using [fdisk][4] or [GParted][5].
+
+VMFs take it a step further and integrate the filesystem layer. The user no longer picks a filesystem to put on the volume. The filesystem and volume are merged into a single thing—a conceptual tree of files (which ZFS calls a dataset, Btrfs a subvolume, and Stratis a filesystem) whose data resides in the pool but that has no size limit except for the pool's total size.
+
+Another way of looking at this: Just as a filesystem abstracts the actual location of storage blocks that make up a single file within the filesystem, a VMF abstracts the actual storage blocks of a filesystem within the pool.
+
+The pool enables other useful features. Some of these, like filesystem snapshots, occur naturally from the typical implementation of a VMF, where multiple filesystems can share physical data blocks within the pool. Others, like redundancy, tiering, and integrity, make sense because the pool is a central place to manage these features for all the filesystems on the system.
+
+The result is that a VMF is simpler to set up and manage and easier to enable for advanced storage features than independent volume manager and filesystem layers.
+
+### What makes Stratis different from ZFS or Btrfs?
+
+Stratis is a new project, which gives it the benefit of learning from previous projects. What Stratis learned from ZFS, Btrfs, and LVM will be covered in depth in [Part 2][6], but to summarize, the differences in Stratis come from seeing what worked and what didn't work for others, from changes in how people use and automate computers, and from changes in the underlying hardware.
+
+First, Stratis focuses on being easy and safe to use. This is important for the individual user, who may go for long stretches of time between interactions with Stratis. If these interactions are unfriendly, especially if there's a possibility of losing data, most people will stick with the basics instead of using new features.
+
+Second, APIs and DevOps-style automation are much more important today than they were even a few years ago. Stratis supports automation by providing a first-class API, so people and software tools alike can use Stratis directly.
+
+Third, SSDs have greatly expanded in capacity as well as market share. Earlier filesystems went to great lengths to optimize for rotational media's slow access times, but flash-based media makes these efforts less important. Even if a pool's data is too big to use SSDs economically for the entire pool, an SSD caching tier is still an option and can give excellent results. Assuming good performance because of SSDs lets Stratis focus its pool design on flexibility and reliability.
+
+Finally, Stratis has a very different implementation model from ZFS and Btrfs (I'll discuss this further in [Part 2][6]). This means some things are easier for Stratis, while other things are harder. It also increases Stratis's pace of development.
+
+### Learn more
+
+To learn more about Stratis, check out [Part 2][6] of this series. You'll also find a detailed [design document][7] on the [Stratis website][8].
+
+### Get involved
+
+To develop, test, or offer feedback on Stratis, subscribe to our [mailing list][9].
+
+Development is on [GitHub][10] for both the [daemon][11] (written in [Rust][12]) and the [command-line tool][13] (written in [Python][14]).
+
+Join us on the [Freenode][15] IRC network on channel #stratis-storage.
+
+Andy Grover will be speaking at LinuxFest Northwest this year. See [program highlights][16] or [register to attend][17].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux
+
+作者:[Andy Grover][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/agrover
+[1]:https://en.wikipedia.org/wiki/ZFS
+[2]:https://en.wikipedia.org/wiki/Btrfs
+[3]:https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
+[4]:https://en.wikipedia.org/wiki/Fdisk
+[5]:https://gparted.org/
+[6]:https://opensource.com/article/18/4/stratis-lessons-learned
+[7]:https://stratis-storage.github.io/StratisSoftwareDesign.pdf
+[8]:https://stratis-storage.github.io/
+[9]:https://lists.fedoraproject.org/admin/lists/stratis-devel.lists.fedorahosted.org/
+[10]:https://github.com/stratis-storage/
+[11]:https://github.com/stratis-storage/stratisd
+[12]:https://www.rust-lang.org/
+[13]:https://github.com/stratis-storage/stratis-cli
+[14]:https://www.python.org/
+[15]:https://freenode.net/
+[16]:https://www.linuxfestnorthwest.org/conferences/lfnw18
+[17]:https://www.linuxfestnorthwest.org/conferences/lfnw18/register/new
From 81abc03da5a09ddcaaf4ae85b17692956e32bc79 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:04:47 +0800
Subject: [PATCH 133/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20Stratis=20?=
=?UTF-8?q?learned=20from=20ZFS,=20Btrfs,=20and=20Linux=20Volume=20Manager?=
=?UTF-8?q?=20|=20Opensource.com?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...d Linux Volume Manager - Opensource.com.md | 66 +++++++++++++++++++
1 file changed, 66 insertions(+)
create mode 100644 sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md
diff --git a/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md
new file mode 100644
index 0000000000..a32792e532
--- /dev/null
+++ b/sources/tech/20180426 What Stratis learned from ZFS, Btrfs, and Linux Volume Manager - Opensource.com.md
@@ -0,0 +1,66 @@
+What Stratis learned from ZFS, Btrfs, and Linux Volume Manager
+======
+
+
+
+As discussed in [Part 1][1] of this series, Stratis is a volume-managing filesystem (VMF) with functionality similar to [ZFS][2] and [Btrfs][3]. In designing Stratis, we studied the choices that developers of existing solutions made.
+
+### Why not adopt an existing solution?
+
+The reasons vary. First, let's consider [ZFS][2]. Originally developed by Sun Microsystems for Solaris (now owned by Oracle), ZFS has been ported to Linux. However, its [CDDL][4]-licensed code cannot be merged into the [GPL][5]-licensed Linux source tree. Whether CDDL and GPLv2 are truly incompatible is a subject for debate, but the uncertainty is enough to make enterprise Linux vendors unwilling to adopt and support it.
+
+[Btrfs][3] is also well-established and has no licensing issues. For years it was the "Chosen One" for many users, but it just hasn't yet gotten to where it needs to be in terms of stability and features.
+
+So, fueled by a desire to improve the status quo and frustration with existing options, Stratis was conceived.
+
+### How Stratis is different
+
+One thing that ZFS and Btrfs have clearly shown is that writing a VMF as an in-kernel filesystem takes a tremendous amount of work and time to work out the bugs and stabilize. It's essential to get it right when it comes to precious data. Starting from scratch and taking the same approach with Stratis would probably also take a decade, which was not acceptable.
+
+Instead, Stratis chose to use some of the Linux kernel's other existing capabilities: The [device mapper][6] subsystem, which is most notably used by LVM to provide RAID, thin-provisioning, and other features on top of block devices; and the well-tested and high-performance [XFS][7] filesystem. Stratis builds its pool using layers of existing technology, with the goal of managing them to appear as a seamless whole to the user.
+
+### What Stratis learned from ZFS
+
+For many users, ZFS set the expectations for what a next-generation filesystem should be. Reading comments online from people talking about ZFS helped set Stratis's initial development goals. ZFS's design also implicitly highlighted things to avoid. For example, ZFS requires an "import" step when attaching a pool created on another system. There are a few reasons for this, and each reason was likely an issue that Stratis had to solve, either by taking the same approach or a different one.
+
+One thing we didn't like about ZFS was that it has some restrictions on adding new hard drives or replacing existing drives with bigger ones, especially if the pool is configured for redundancy. Of course, there is a reason for this, but we thought it was an area we could improve.
+
+Finally, using ZFS's tools at the command line, once learned, is a good experience. We wanted to have that same feeling with Stratis's command-line tool, and we also liked the tool's tendency to use positional parameters and limit the amount of typing required for each command.
+
+### What Stratis learned from Btrfs
+
+One thing we liked about Btrfs was the single command-line tool, with positional subcommands. Btrfs also treats redundancy (Btrfs profiles) as a property of the pool, which seems easier to understand than ZFS's approach and allows drives to be added and even removed.
+
+Finally, looking at the features that both ZFS and Btrfs offer, such as snapshot implementations and send/receive support, helped determine which features Stratis should include.
+
+### What Stratis learned from LVM
+
+From the early design stages of Stratis, we studied LVM extensively. LVM is currently the most significant user of the Linux device mapper (DM) subsystem—in fact, DM is maintained by the LVM core team. We examined LVM both for the possibility of actually using it as a layer of Stratis and as an example of using DM directly, which Stratis could do with LVM as a peer. We looked at LVM's on-disk metadata format (along with ZFS's and XFS's) for inspiration in defining Stratis's on-disk metadata format.
+
+Among the listed projects, LVM shares the most in common with Stratis internally, because they both use DM. However, from a usage standpoint, LVM is much more transparent about its inner workings. This gives expert users a great deal of control and options for precisely configuring the volume group (pool) layout in a way that Stratis doesn't.
+
+### A diversity of solutions
+
+One great thing about working on free software and open source is that there are no irreplaceable components. Every part—even the kernel—is open for view, modification, and even replacement if the current software isn't meeting users' needs. A new project doesn't need to end an existing one if there is enough support for both to be sustained in parallel.
+
+Stratis is an attempt to better meet some users' needs for local storage management—those looking for a no-hassle, easy-to-use, powerful solution. This means making design choices that might not be right for all users. Alternatives make tough choices possible since users have other options. All users ultimately benefit from their ability to use whichever tool works best for them.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/stratis-lessons-learned
+
+作者:[Andy Grover][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/agrover
+[1]:https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux
+[2]:https://en.wikipedia.org/wiki/ZFS
+[3]:https://en.wikipedia.org/wiki/Btrfs
+[4]:https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License
+[5]:https://en.wikipedia.org/wiki/GNU_General_Public_License
+[6]:https://en.wikipedia.org/wiki/Device_mapper
+[7]:https://en.wikipedia.org/wiki/XFS
From c4d90eb198cb1ca63cbdf355321d056910cfa7b9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:05:40 +0800
Subject: [PATCH 134/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?=
=?UTF-8?q?FIND=20in=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../tech/20180427 How to use FIND in Linux.md | 90 +++++++++++++++++++
1 file changed, 90 insertions(+)
create mode 100644 sources/tech/20180427 How to use FIND in Linux.md
diff --git a/sources/tech/20180427 How to use FIND in Linux.md b/sources/tech/20180427 How to use FIND in Linux.md
new file mode 100644
index 0000000000..829514dd3c
--- /dev/null
+++ b/sources/tech/20180427 How to use FIND in Linux.md
@@ -0,0 +1,90 @@
+How to use FIND in Linux
+======
+
+
+
+In [a recent Opensource.com article][1], Lewis Cowles introduced the `find` command.
+
+`find` is one of the more powerful and flexible command-line programs in the daily toolbox, so it's worth spending a little more time on it.
+
+At a minimum, `find` takes a path to find things. For example:
+```
+find /
+
+```
+
+will find (and print) every file on the system. And since everything is a file, you will get a lot of output to sort through. This probably doesn't help you find what you're looking for. You can change the path argument to narrow things down a bit, but it's still not really any more helpful than using the `ls` command. So you need to think about what you're trying to locate.
+
+Perhaps you want to find all the JPEG files in your home directory. The `-name` argument allows you to restrict your results to files that match the given pattern.
+```
+find ~ -name '*jpg'
+
+```
+
+But wait! What if some of them have an uppercase extension? `-iname` is like `-name`, but it is case-insensitive.
+```
+find ~ -iname '*jpg'
+
+```
+
+Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an "or," represented by `-o`.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
+
+```
+
+We're getting closer. But what if you have some directories that end in jpg? (Why you named a directory `bucketofjpg` instead of `pictures` is beyond me.) We can modify our command with the `-type` argument to look only for files.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
+
+```
+
+Or maybe you'd like to find those oddly named directories so you can rename them later:
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
+
+```
+
+It turns out you've been taking a lot of pictures lately, so let's narrow this down to files that have changed in the last week.
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
+
+```
+
+You can do time filters based on file status change time (`ctime`), modification time (`mtime`), or access time (`atime`). These are in days, so if you want finer-grained control, you can express it in minutes instead (`cmin`, `mmin`, and `amin`, respectively). Unless you know exactly the time you want, you'll probably prefix the number with `+` (more than) or `-` (less than).
+
+But maybe you don't care about your pictures. Maybe you're running out of disk space, so you want to find all the gigantic (let's define that as "greater than 1 gigabyte") files in the `log` directory:
+```
+find /var/log -size +1G
+
+```
+
+Or maybe you want to find all the files owned by bcotton in `/data`:
+```
+find /data -owner bcotton
+
+```
+
+You can also look for files based on permissions. Perhaps you want to find all the world-readable files in your home directory to make sure you're not oversharing.
+```
+find ~ -perm -o=r
+
+```
+
+This post only scratches the surface of what `find` can do. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you're looking for. And with arguments like `-exec` or `-delete`, you can have `find` take action on what it... finds. Have any favorite `find` expressions? Share them in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/4/how-use-find-linux
+
+作者:[Ben Cotton][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/bcotton
+[1]:https://opensource.com/article/18/4/how-find-files-linux
From b69cf7b047904ca5b2af531a0fed3bb3d63d3cd0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:06:51 +0800
Subject: [PATCH 135/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Things=20You=20Sh?=
=?UTF-8?q?ould=20Know=20About=20Ubuntu=2018.04?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ings You Should Know About Ubuntu 18.04.md | 154 ++++++++++++++++++
1 file changed, 154 insertions(+)
create mode 100644 sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md
diff --git a/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md b/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md
new file mode 100644
index 0000000000..5061c1f6e9
--- /dev/null
+++ b/sources/tech/20180424 Things You Should Know About Ubuntu 18.04.md
@@ -0,0 +1,154 @@
+Things You Should Know About Ubuntu 18.04
+======
+The [Ubuntu 18.04 release][1] is just around the corner. I can see lots of questions from Ubuntu users in various Facebook groups and forums. I also organized Q&A sessions on Facebook and Instagram to find out what Ubuntu users are wondering about Ubuntu 18.04.
+
+I have tried to answer those frequently asked questions about Ubuntu 18.04 here. I hope it helps clear your doubts if you had any. And if you still have questions, feel free to ask in the comment section below.
+
+### What to expect in Ubuntu 18.04
+
+![Ubuntu 18.04 Frequently Asked Questions][2]
+
+Just for clarification, some of the answers here are influenced by my personal opinion. If you are an experienced/aware Ubuntu user, some of the questions may sound silly to you. If that’s the case, just ignore them.
+
+#### Can I install Unity on Ubuntu 18.04?
+
+Yes, you can.
+
+Canonical knows that there are people who simply loved Unity. This is why it has made Unity 7 available in the Universe repository. This is a community-maintained edition, and Ubuntu doesn’t develop it directly.
+
+I advise using the default GNOME first and if you really cannot tolerate it, then go on [installing Unity on Ubuntu 18.04][3].
+
+#### What GNOME version does it have?
+
+At the time of its release, Ubuntu 18.04 has GNOME 3.28.
+
+#### Can I install vanilla GNOME on it?
+
+Yes, you can.
+
+Existing GNOME users might not like the Unity-resembling, customized GNOME desktop in Ubuntu 18.04. There are some packages available in Ubuntu’s main and universe repositories that allow you to [install vanilla GNOME on Ubuntu 18.04][4].
+
+#### Has the memory leak in GNOME been fixed?
+
+Yes. The [infamous memory leak in GNOME 3.28][5] has been fixed and [Ubuntu is already testing the fix][6].
+
+Just to clarify, the memory leak was not caused by Ubuntu. It was impacting all Linux distributions that use GNOME 3.28. A patch was released in GNOME 3.28.1 to fix this memory leak.
+
+#### How long will Ubuntu 18.04 be supported?
+
+It is a long-term support (LTS) release, and like any LTS release, it will be supported for five years, which means that Ubuntu 18.04 will get security and maintenance updates until April 2023. This is also true for all participating flavors except Ubuntu Studio.
+
+#### When will Ubuntu 18.04 be released?
+
+Ubuntu 18.04 LTS was released on 26th April. All the participating flavors like Kubuntu, Lubuntu, Xubuntu, Budgie, MATE, etc. will have their 18.04 releases available on the same day.
+
+It seems [Ubuntu Studio will not have 18.04 as LTS release][7].
+
+#### Is it possible to upgrade to Ubuntu 18.04 from 16.04/17.10? Can I upgrade from Ubuntu 16.04 with Unity to Ubuntu 18.04 with GNOME?
+
+Yes, absolutely. Once Ubuntu 18.04 LTS is released, you can easily upgrade to the new version.
+
+If you are using Ubuntu 17.10, make sure that in Software & Updates -> Updates, the ‘Notify me of a new Ubuntu version’ is set to ‘For any new version’.
+
+![Get notified for a new version in Ubuntu][8]
+
+If you are using Ubuntu 16.04, make sure that in Software & Updates -> Updates, the ‘Notify me of a new Ubuntu version’ is set to ‘For long-term support versions’.
+
+![Ubuntu 18.04 upgrade from Ubuntu 16.04][9]
+
+You should get a system notification about the availability of the new version. After that, upgrading to Ubuntu 18.04 is a matter of a few clicks.
+
+Even if your Ubuntu 16.04 uses Unity, you can still [upgrade to Ubuntu 18.04][10] with GNOME.
+
+#### What does upgrading to Ubuntu 18.04 mean? Will I lose my data?
+
+If you are using Ubuntu 17.10 or Ubuntu 16.04, sooner or later, Ubuntu will notify you that Ubuntu 18.04 is available. If you have a good internet connection that can download 1.5 GB of data, you can upgrade to Ubuntu 18.04 in a few clicks and in under 30 minutes.
+
+You don’t need to create a new USB and do a fresh install. Once the upgrade procedure finishes, you’ll have the new Ubuntu version available.
+
+Normally, your data, documents, etc. are safe during the upgrade procedure. However, keeping a backup of your important documents is always a good idea.
+
+#### When will I get to upgrade to Ubuntu 18.04?
+
+If you are using Ubuntu 17.10 and have the correct update settings in place (as mentioned in the previous section), you should be notified about upgrading to Ubuntu 18.04 within a few days of the Ubuntu 18.04 release. Since Ubuntu servers encounter heavy load on the release day, not everyone gets the upgrade the same day.
+
+For Ubuntu 16.04 users, it may take some weeks before they are officially notified of the availability of Ubuntu 18.04. Usually, this will happen after the first point release, Ubuntu 18.04.1, which fixes the newly discovered bugs in 18.04.
+
+#### If I upgrade to Ubuntu 18.04, can I downgrade to 17.10 or 16.04?
+
+No, you cannot. While upgrading to the newer version is easy, there is no option to downgrade. If you want to go back to Ubuntu 16.04, you’ll have to do a fresh install.
+
+#### Can I use Ubuntu 18.04 on 32-bit systems?
+
+Yes and no.
+
+If you are already using the 32-bit version of Ubuntu 16.04 or 17.10, you may still get to upgrade to Ubuntu 18.04. However, you won’t find a 32-bit ISO of Ubuntu 18.04 anymore. In other words, you cannot do a fresh install of the 32-bit version of Ubuntu 18.04 GNOME.
+
+The good news here is that other official flavors like Ubuntu MATE, Lubuntu etc still have the 32-bit ISO of their new versions.
+
+In any case, if you have a 32-bit system, chances are that your system is weak on hardware. You’ll be better off using the lightweight [Ubuntu MATE][11] or [Lubuntu][12] on such a system.
+
+#### Where can I download Ubuntu 18.04?
+
+Once 18.04 is released, you can get the ISO image of Ubuntu 18.04 from its website. You have both direct download and torrent options. Other official flavors will be available on their official websites.
+
+#### Should I do a fresh install of Ubuntu 18.04 or upgrade to it from 16.04/17.10?
+
+If you have a choice, make a backup of your data and do a fresh install of Ubuntu 18.04.
+
+Upgrading to 18.04 from an existing version is a convenient option. However, in my opinion, it still keeps some traces/packages of the older version. A fresh install is always cleaner.
+
+#### For a fresh install, should I install Ubuntu 16.04 or Ubuntu 18.04?
+
+If you are going to install Ubuntu on a system, go for Ubuntu 18.04 instead of 16.04.
+
+Both of them are long-term support releases and will be supported for a long time. Ubuntu 16.04 will get maintenance and security updates until 2021 and 18.04 until 2023.
+
+However, I would suggest that you use Ubuntu 18.04. Any LTS release gets [hardware updates for a limited time][13] (two and a half years I think). After that, it only gets maintenance updates. If you have newer hardware, you’ll get better support in 18.04.
+
+Also, many application developers will start focusing on Ubuntu 18.04 soon. Newly created PPAs might only support 18.04 in a few months. Using 18.04 has its advantages over 16.04.
+
+#### Will it be easier to install printer-scanner drivers instead of using the CLI?
+
+I am not an expert when it comes to printers so my opinion is based on my limited knowledge in this field. Most of the new printers support [IPP protocol][14] and thus they should be well supported in Ubuntu 18.04. I cannot say the same about older printers.
+
+#### Does Ubuntu 18.04 have better support for Realtek and other WiFi adapters?
+
+No specific information on this part.
+
+#### What are the system requirements for Ubuntu 18.04?
+
+For the default GNOME version, you should have [4 GB of RAM for comfortable use][15]. A processor released in the last 8 years will work as well. For anything older than that, use a [lightweight Linux distribution][16] such as [Lubuntu][12].
+
+#### Any other questions about Ubuntu 18.04?
+
+If you have any other doubts regarding Ubuntu 18.04, please feel free to leave a comment below. If you think some other information should be added to the list, please let me know.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-18-04-faq/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:https://itsfoss.com/ubuntu-18-04-release-features/
+[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubuntu-18-04-faq-800x450.png
+[3]:https://itsfoss.com/use-unity-ubuntu-17-10/
+[4]:https://itsfoss.com/vanilla-gnome-ubuntu/
+[5]:https://feaneron.com/2018/04/20/the-infamous-gnome-shell-memory-leak/
+[6]:https://community.ubuntu.com/t/help-test-memory-leak-fixes-in-18-04-lts/5251
+[7]:https://www.omgubuntu.co.uk/2018/04/ubuntu-studio-plans-to-reboot
+[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/03/upgrade-ubuntu-2.jpeg
+[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/ubuntu-18-04-upgrade-settings-800x379.png
+[10]:https://itsfoss.com/upgrade-ubuntu-version/
+[11]:https://ubuntu-mate.org/
+[12]:https://lubuntu.net/
+[13]:https://www.ubuntu.com/info/release-end-of-life
+[14]:https://www.pwg.org/ipp/everywhere.html
+[15]:https://help.ubuntu.com/community/Installation/SystemRequirements
+[16]:https://itsfoss.com/lightweight-linux-beginners/
From 202f39e6ac68dfb250139696493c172239b722bb Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 28 Apr 2018 13:08:00 +0800
Subject: [PATCH 136/220] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Check?=
=?UTF-8?q?=20System=20Hardware=20Manufacturer,=20Model=20And=20Serial=20N?=
=?UTF-8?q?umber=20In=20Linux?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...turer, Model And Serial Number In Linux.md | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md
diff --git a/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md b/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md
new file mode 100644
index 0000000000..da97e87fc6
--- /dev/null
+++ b/sources/tech/20180426 How To Check System Hardware Manufacturer, Model And Serial Number In Linux.md
@@ -0,0 +1,155 @@
+How To Check System Hardware Manufacturer, Model And Serial Number In Linux
+======
+Getting system hardware information is not a problem for Linux GUI and Windows users, but CLI users can have trouble getting these details.
+
+Most of us don’t even know the best command for this. There are many utilities available in Linux to get system hardware information such as the system hardware manufacturer, model, and serial number.
+
+We describe several possible ways to get these details, and you can choose the method that works best for you.
+
+It is important to know all of this information because you will need it when you raise a case with a hardware vendor for any kind of hardware issue.
+
+This can be achieved with six methods; let me show you how.
+
+### Method-1 : Using Dmidecode Command
+
+Dmidecode is a tool that reads a computer’s DMI (Desktop Management Interface; some say SMBIOS, for System Management BIOS) table contents and displays system hardware information in a human-readable format.
+
+This table contains a description of the system’s hardware components, as well as other useful information such as the serial number, manufacturer, release date, and BIOS revision.
+
+The DMI table doesn’t only describe what the system is currently made of; it can also report the possible evolution (such as the fastest supported CPU or the maximal amount of memory supported).
+
+This will help you analyze your hardware capability, such as whether it supports the latest application versions.
+```
+# dmidecode -t system
+
+# dmidecode 2.12
+# SMBIOS entry point at 0x7e7bf000
+SMBIOS 2.7 present.
+
+Handle 0x0024, DMI type 1, 27 bytes
+System Information
+ Manufacturer: IBM
+ Product Name: System x2530 M4: -[1214AC1]-
+ Version: 0B
+ Serial Number: MK2RL11
+ UUID: 762A99BF-6916-450F-80A6-B2E9E78FC9A1
+ Wake-up Type: Power Switch
+ SKU Number: Not Specified
+ Family: System X
+
+Handle 0x004B, DMI type 12, 5 bytes
+System Configuration Options
+ Option 1: JP20 pin1-2: TPM PP Disable, pin2-3: TPM PP Enable
+
+Handle 0x004D, DMI type 32, 20 bytes
+System Boot Information
+ Status: No errors detected
+
+```
+
+**Suggested Read :** [Dmidecode – Easy Way To Get Linux System Hardware Information][1]
+
+### Method-2 : Using inxi Command
+
+inxi is a nifty tool to check hardware information on Linux. It offers a wide range of options to get hardware details on a Linux system that I have not found in any other Linux utility. It was forked from the ancient and mindbendingly perverse yet ingenious infobash, by locsmif.
+
+inxi is a script that quickly shows system hardware, CPU, drivers, Xorg, desktop, kernel, GCC version(s), processes, RAM usage, and a wide variety of other useful information; it is also used as a forum technical support and debugging tool.
+```
+# inxi -M
+Machine: Device: server System: IBM product: N/A v: 0B serial: MK2RL11
+ Mobo: IBM model: 00Y8494 serial: 37M17D UEFI: IBM v: -[VVE134MUS-1.50]- date: 08/30/2013
+
+```
+
+**Suggested Read :** [inxi – A Great Tool to Check Hardware Information on Linux][2]
+
+### Method-3 : Using lshw Command
+
+lshw (stands for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine, such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, USB, network cards, graphics cards, multimedia, printers, and bus speed.
+
+It generates hardware information by reading various files under the /proc directory and the DMI table.
+
+lshw must be run as the superuser to detect the maximum amount of information, or it will only report partial information. lshw has a special option called class, which shows detailed information for a specific given hardware class.
+```
+# lshw -C system
+enal-dbo01t
+ description: Blade
+ product: System x2530 M4: -[1214AC1]-
+ vendor: IBM
+ version: 0B
+ serial: MK2RL11
+ width: 64 bits
+ capabilities: smbios-2.7 dmi-2.7 vsyscall32
+ configuration: boot=normal chassis=enclosure family=System X uuid=762A99BF-6916-450F-80A6-B2E9E78FC9A1
+
+```
+
+**Suggested Read :** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][3]
+
+### Method-4 : Using /sys file system
+
+The kernel exposes some DMI information in the /sys virtual filesystem, so we can easily get the machine details by running the grep command in the following format.
+```
+# grep "" /sys/class/dmi/id/[pbs]*
+
+```
+
+Alternatively, we can print only specific details by using the cat command.
+```
+# cat /sys/class/dmi/id/board_vendor
+IBM
+
+# cat /sys/class/dmi/id/product_name
+System x2530 M4: -[1214AC1]-
+
+# cat /sys/class/dmi/id/product_serial
+MK2RL11
+
+# cat /sys/class/dmi/id/bios_version
+-[VVE134MUS-1.50]-
+
+```
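+
+If you need these values in a script, the same sysfs files can be read directly. Below is a minimal Python sketch (an illustration, assuming a Linux system that exposes /sys/class/dmi/id; reading product_serial usually requires root):
+```
+#!/usr/bin/env python3
+# Read DMI identifiers straight from sysfs; some values may be
+# empty or restricted depending on the hardware and permissions.
+from pathlib import Path
+
+DMI = Path("/sys/class/dmi/id")
+
+for key in ("sys_vendor", "product_name", "product_serial", "bios_version"):
+    try:
+        print(key, "=", (DMI / key).read_text().strip())
+    except (FileNotFoundError, PermissionError) as err:
+        print(key, "= unavailable:", type(err).__name__)
+
+```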
+
+### Method-5 : Using dmesg Command
+
+The dmesg command is used to print the kernel messages (boot-time messages) in Linux that were written before syslogd or klogd started. It obtains its data by reading the kernel ring buffer. dmesg can be very useful when troubleshooting or just trying to obtain information about the hardware on a system.
+```
+# dmesg | grep -i DMI
+DMI: System x2530 M4: -[1214AC1]-/00Y8494, BIOS -[VVE134MUS-1.50]- 08/30/2013
+
+```
+
+### Method-6 : Using hwinfo Command
+
+hwinfo (stands for hardware information) is another great utility used to probe the hardware present in the system and display detailed information about various hardware components in a human-readable format.
+
+It reports information about the CPU, RAM, keyboard, mouse, graphics cards, sound, storage, network interfaces, disks, partitions, BIOS, bridges, and more. This tool can display more detailed information than others like lshw, dmidecode, and inxi.
+
+hwinfo uses the libhd library (libhd.so) to gather hardware information on the system. The tool was especially designed for openSUSE; later, other distributions included it in their official repositories.
+```
+# hwinfo | egrep "system.hardware.vendor|system.hardware.product"
+ system.hardware.vendor = 'IBM'
+ system.hardware.product = 'System x2530 M4: -[1214AC1]-'
+
+```
+
+**Suggested Read :** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][4]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/how-to-check-system-hardware-manufacturer-model-and-serial-number-in-linux/
+
+作者:[VINOTH KUMAR][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/vinoth/
+[1]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/
+[2]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
+[3]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/
+[4]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
From 6898c90a4e6c02f0df54dd862fdbc37fdb9912a9 Mon Sep 17 00:00:00 2001
From: Auk7F7 <34982730+Auk7F7@users.noreply.github.com>
Date: Sat, 28 Apr 2018 21:12:02 +0800
Subject: [PATCH 137/220] Update 20180219 Learn to code with Thonny - a Python
IDE for beginners.md
translating by Auk7F7
---
...19 Learn to code with Thonny - a Python IDE for beginners.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md
index 4ee603aa6d..078f9576e8 100644
--- a/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md
+++ b/sources/tech/20180219 Learn to code with Thonny - a Python IDE for beginners.md
@@ -1,3 +1,5 @@
+translating by Auk7F7
+
Learn to code with Thonny — a Python IDE for beginners
======
From 69fc6c34a95d159d52f400ca2265392511b8b111 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 22:50:14 +0800
Subject: [PATCH 138/220] PRF:20180215 Check Linux Distribution Name and
Version.md
@HankChow
---
...eck Linux Distribution Name and Version.md | 85 +++++++++----------
1 file changed, 41 insertions(+), 44 deletions(-)
diff --git a/translated/tech/20180215 Check Linux Distribution Name and Version.md b/translated/tech/20180215 Check Linux Distribution Name and Version.md
index 9a3b99eda5..a4b8d45c56 100644
--- a/translated/tech/20180215 Check Linux Distribution Name and Version.md
+++ b/translated/tech/20180215 Check Linux Distribution Name and Version.md
@@ -1,29 +1,28 @@
-查看 Linux 发行版名称和版本号的8种方法
+查看 Linux 发行版名称和版本号的 8 种方法
======
+
如果你加入了一家新公司,要为开发团队安装所需的软件并重启服务,这个时候首先要弄清楚它们运行在什么发行版以及哪个版本的系统上,你才能正确完成后续的工作。作为系统管理员,充分了解系统信息是首要的任务。
查看 Linux 发行版名称和版本号有很多种方法。你可能会问,为什么要去了解这些基本信息呢?
-因为对于诸如 RHEL、Debian、openSUSE、Arch Linux 这几种主流发行版来说,它们各自拥有不同的包管理器来管理系统上的软件包,如果不知道所使用的是哪一个发行版的系统,在包安装的时候就会无从下手,而且由于大多数发行版都是用 systemd 命令而不是 SysVinit 脚本,在重启服务的时候也难以执行正确的命令。
+因为对于诸如 RHEL、Debian、openSUSE、Arch Linux 这几种主流发行版来说,它们各自拥有不同的包管理器来管理系统上的软件包,如果不知道所使用的是哪一个发行版的系统,在软件包安装的时候就会无从下手,而且由于大多数发行版都是用 systemd 命令而不是 SysVinit 脚本,在重启服务的时候也难以执行正确的命令。
下面来看看可以使用那些基本命令来查看 Linux 发行版名称和版本号。
### 方法总览
- * lsb_release command
- * /etc/*-release file
- * uname command
- * /proc/version file
- * dmesg Command
- * YUM or DNF Command
- * RPM command
- * APT-GET command
+ * `lsb_release` 命令
+ * `/etc/*-release` 文件
+ * `uname` 命令
+ * `/proc/version` 文件
+ * `dmesg` 命令
+ * YUM 或 DNF 命令
+ * RPM 命令
+ * APT-GET 命令
+### 方法 1: lsb_release 命令
-
-### 方法1: lsb_release 命令
-
-LSB(Linux Standard Base,Linux 标准库)能够打印发行版的具体信息,包括发行版名称、版本号、代号等。
+LSB(Linux 标准库)能够打印发行版的具体信息,包括发行版名称、版本号、代号等。
```
# lsb_release -a
@@ -32,12 +31,11 @@ Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
-
```
-### 方法2: /etc/arch-release /etc/os-release File
+### 方法 2: /etc/*-release 文件
-版本文件通常被视为操作系统的标识。在 `/etc` 目录下放置了很多记录着发行版各种信息的文件,每个发行版都各自有一套这样记录着相关信息的文件。下面是一组在 Ubuntu/Debian 系统上显示出来的文件内容。
+release 文件通常被视为操作系统的标识。在 `/etc` 目录下放置了很多记录着发行版各种信息的文件,每个发行版都各自有一套这样记录着相关信息的文件。下面是一组在 Ubuntu/Debian 系统上显示出来的文件内容。
```
# cat /etc/issue
@@ -67,10 +65,10 @@ UBUNTU_CODENAME=xenial
# cat /etc/debian_version
9.3
-
```
下面这一组是在 RHEL/CentOS/Fedora 系统上显示出来的文件内容。其中 `/etc/redhat-release` 和 `/etc/system-release` 文件是指向 `/etc/[发行版名称]-release` 文件的一个连接。
+
```
# cat /etc/centos-release
CentOS release 6.9 (Final)
@@ -100,34 +98,34 @@ Fedora release 27 (Twenty Seven)
# cat /etc/system-release
Fedora release 27 (Twenty Seven)
-
```
-### 方法3: uname 命令
+### 方法 3: uname 命令
-uname(unix name) 是一个打印系统信息的工具,包括内核名称、版本号、系统详细信息以及所运行的操作系统等等。
+uname(unix name 的意思) 是一个打印系统信息的工具,包括内核名称、版本号、系统详细信息以及所运行的操作系统等等。
+
+- **建议阅读:** [6种查看系统 Linux 内核的方法][1]
-**建议阅读:** [6种查看系统 Linux 内核的方法][1]
```
# uname -a
Linux localhost.localdomain 4.12.14-300.fc26.x86_64 #1 SMP Wed Sep 20 16:28:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
-
```
以上运行结果说明使用的操作系统版本是 Fedora 26。
-### 方法4: /proc/version File
+### 方法 4: /proc/version 文件
这个文件记录了 Linux 内核的版本、用于编译内核的 gcc 的版本、内核编译的时间,以及内核编译者的用户名。
+
```
# cat /proc/version
Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017
-
```
-### Method-5: dmesg 命令
+### 方法 5: dmesg 命令
+
+dmesg(展示信息 或驱动程序信息)是大多数类 Unix 操作系统上的一个命令,用于打印内核的消息缓冲区的信息。
-dmesg(display message/driver message,展示信息/驱动程序信息)是大多数类 Unix 操作系统上的一个命令,用于打印内核上消息缓冲区的信息。
```
# dmesg | grep "Linux"
[ 0.000000] Linux version 4.12.14-300.fc26.x86_64 ([email protected]) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC) ) #1 SMP Wed Sep 20 16:28:07 UTC 2017
@@ -139,14 +137,14 @@ dmesg(display message/driver message,展示信息/驱动程序信息)是
[ 0.688949] usb usb2: Manufacturer: Linux 4.12.14-300.fc26.x86_64 ohci_hcd
[ 2.564554] SELinux: Disabled at runtime.
[ 2.564584] SELinux: Unregistering netfilter hooks
-
```
-### Method-6: Yum/Dnf 命令
+### 方法 6: Yum/Dnf 命令
-Yum(Yellowdog Updater Modified)是 Linux 操作系统上的一个包管理工具,而 `yum` 命令则是一些基于 RedHat 的 Linux 发行版上用于安装、更新、查找、删除软件包的命令。
+Yum(Yellowdog 更新器修改版)是 Linux 操作系统上的一个包管理工具,而 `yum` 命令被用于一些基于 RedHat 的 Linux 发行版上安装、更新、查找、删除软件包。
+
+- **建议阅读:** [在 RHEL/CentOS 系统上使用 yum 命令管理软件包][2]
-**建议阅读:** [在 RHEL/CentOS 系统上使用 yum 命令管理软件包][2]
```
# yum info nano
Loaded plugins: fastestmirror, ovl
@@ -165,10 +163,10 @@ Summary : A small text editor
URL : http://www.nano-editor.org
License : GPLv3+
Description : GNU nano is a small and friendly text editor.
-
```
下面的 `yum repolist` 命令执行后显示了 yum 的基础源仓库、额外源仓库、更新源仓库都来自 CentOS 7 仓库。
+
```
# yum repolist
Loaded plugins: fastestmirror, ovl
@@ -181,12 +179,12 @@ base/7/x86_64 CentOS-7 - Base 9591
extras/7/x86_64 CentOS-7 - Extras 388
updates/7/x86_64 CentOS-7 - Updates 1929
repolist: 11908
-
```
使用 `dnf` 命令也同样可以查看发行版名称和版本号。
-**建议阅读:** [在 Fedora 系统上使用 DNF(YUM 的一个分支)命令管理软件包][3]
+- **建议阅读:** [在 Fedora 系统上使用 DNF(YUM 的一个分支)命令管理软件包][3]
+
```
# dnf info nano
Last metadata expiration check: 0:01:25 ago on Thu Feb 15 01:59:31 2018.
@@ -203,25 +201,25 @@ Summary : A small text editor
URL : https://www.nano-editor.org
License : GPLv3+
Description : GNU nano is a small and friendly text editor.
-
```
-### Method-7: RPM 命令
+### 方法 7: RPM 命令
-RPM(RedHat Package Manager, RedHat 包管理器)是在 CentOS、Oracle Linux、Fedora 这些基于 RedHat 的操作系统上的一个强大的命令行包管理工具,同样也可以帮助我们查看系统的版本信息。
+RPM(红帽包管理器)是在 CentOS、Oracle Linux、Fedora 这些基于 RedHat 的操作系统上的一个强大的命令行包管理工具,同样也可以帮助我们查看系统的版本信息。
+
+- **建议阅读:** [在基于 RHEL 的系统上使用 RPM 命令管理软件包][4]
-**建议阅读:** [在基于 RHEL 的系统上使用 RPM 命令管理软件包][4]
```
# rpm -q nano
nano-2.8.7-1.fc27.x86_64
-
```
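
另外一个常见的技巧是直接用 `rpm` 查询发行版自带的 release 软件包,比如 CentOS 上是 `centos-release`,Fedora 上则是 `fedora-release`(下面的输出只是示例):

```
# rpm -q centos-release
centos-release-7-4.1708.el7.centos.x86_64
```
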
-### Method-8: APT-GET 命令
+### 方法 8: APT-GET 命令
-Apt-Get(Advanced Packaging Tool)是一个强大的命令行工具,可以自动下载安装新软件包、更新已有的软件包、更新软件包列表索引,甚至更新整个 Debian 系统。
+Apt-Get(高级打包工具)是一个强大的命令行工具,可以自动下载安装新软件包、更新已有的软件包、更新软件包列表索引,甚至更新整个 Debian 系统。
+
+- **建议阅读:** [在基于 Debian 的系统上使用 Apt-Get 和 Apt-Cache 命令管理软件包][5]
-**建议阅读:** [在基于 Debian 的系统上使用 Apt-Get 和 Apt-Cache 命令管理软件包][5]
```
# apt-cache policy nano
nano:
@@ -233,7 +231,6 @@ nano:
100 /var/lib/dpkg/status
2.5.3-2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
-
```
--------------------------------------------------------------------------------
@@ -242,7 +239,7 @@ via: https://www.2daygeek.com/check-find-linux-distribution-name-and-version/
作者:[Magesh Maruthamuthu][a]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 2c211ea44e99016b4925c650fe7980549d876b24 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 28 Apr 2018 22:51:26 +0800
Subject: [PATCH 139/220] PUB:20180215 Check Linux Distribution Name and
Version.md
@HankChow https://linux.cn/article-9586-1.html
---
.../20180215 Check Linux Distribution Name and Version.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180215 Check Linux Distribution Name and Version.md (100%)
diff --git a/translated/tech/20180215 Check Linux Distribution Name and Version.md b/published/20180215 Check Linux Distribution Name and Version.md
similarity index 100%
rename from translated/tech/20180215 Check Linux Distribution Name and Version.md
rename to published/20180215 Check Linux Distribution Name and Version.md
From efc5de22da12ab71ba8dc30a3e2fd3ec246a9429 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 29 Apr 2018 00:00:20 +0800
Subject: [PATCH 140/220] Update 20180122 A Simple Command-line Snippet
Manager.md
---
sources/tech/20180122 A Simple Command-line Snippet Manager.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
index 1c8ef14fb6..6f2b83f6d0 100644
--- a/sources/tech/20180122 A Simple Command-line Snippet Manager.md
+++ b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
@@ -1,3 +1,6 @@
+Translating by MjSeven
+
+
A Simple Command-line Snippet Manager
======
From e8f828ac0206e588e0e909e827f3d4f22474a20b Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 09:36:45 +0800
Subject: [PATCH 141/220] PRF:20180104 How does gdb call functions.md
@ucasFL
---
.../20180104 How does gdb call functions.md | 61 ++++++-------------
1 file changed, 20 insertions(+), 41 deletions(-)
diff --git a/translated/tech/20180104 How does gdb call functions.md b/translated/tech/20180104 How does gdb call functions.md
index 575563ad3d..28c26ba615 100644
--- a/translated/tech/20180104 How does gdb call functions.md
+++ b/translated/tech/20180104 How does gdb call functions.md
@@ -1,13 +1,13 @@
gdb 如何调用函数?
============================================================
-(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[通过 gdb 你能够做的三件事(2014)][5])
+(之前的 gdb 系列文章:[gdb 如何工作(2016)][4] 和[三步上手 gdb(2014)][5])
-在这个周,我发现,我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。
+在这周,我发现我可以从 gdb 上调用 C 函数。这看起来很酷,因为在过去我认为 gdb 最多只是一个只读调试工具。
我对 gdb 能够调用函数感到很吃惊。正如往常所做的那样,我在 [Twitter][6] 上询问这是如何工作的。我得到了大量的有用答案。我最喜欢的答案是 [Evan Klitzke 的示例 C 代码][7],它展示了 gdb 如何调用函数。代码能够运行,这很令人激动!
-我相信(通过一些跟踪和实验)那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。
+我(通过一些跟踪和实验)认为那个示例 C 代码和 gdb 实际上如何调用函数不同。因此,在这篇文章中,我将会阐述 gdb 是如何调用函数的,以及我是如何知道的。
关于 gdb 如何调用函数,还有许多我不知道的事情,并且,在这儿我写的内容有可能是错误的。
@@ -15,17 +15,14 @@ gdb 如何调用函数?
在开始讲解这是如何工作之前,我先快速的谈论一下我是如何发现这件令人惊讶的事情的。
-所以,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做:
+假如,你已经在运行一个 C 程序(目标程序)。你可以运行程序中的一个函数,只需要像下面这样做:
* 暂停程序(因为它已经在运行中)
-
* 找到你想调用的函数的地址(使用符号表)
-
* 使程序(目标程序)跳转到那个地址
-
* 当函数返回时,恢复之前的指令指针和寄存器
-通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 foo 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。
+通过符号表来找到想要调用的函数的地址非常容易。下面是一段非常简单但能够工作的代码,我在 Linux 上使用这段代码作为例子来讲解如何找到地址。这段代码使用 [elf crate][8]。如果我想找到 PID 为 2345 的进程中的 `foo` 函数的地址,那么我可以运行 `elf_symbol_value("/proc/2345/exe", "foo")`。
```
fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result<u64, Box<Error>> {
@@ -42,7 +39,6 @@ fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result<u64, Box<Error>>
@@ -66,7 +62,6 @@ int foo() {
int main() {
sleep(1000);
}
-
```
接下来,编译并运行它:
@@ -74,7 +69,6 @@ int main() {
```
$ gcc -o test test.c
$ ./test
-
```
最后,我们使用 gdb 来跟踪 `test` 这一程序:
@@ -84,54 +78,42 @@ $ sudo gdb -p $(pgrep -f test)
(gdb) p foo()
$1 = 3
(gdb) quit
-
```
我运行 `p foo()` 然后它运行了这个函数!这非常有趣。
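
顺带一提,同样的事情也可以不进入交互界面、用 gdb 的批处理模式一次性完成(这里假设目标程序仍然是上文的 `test`):

```
# -batch 表示执行完 -ex 给出的命令后直接退出
$ sudo gdb -p $(pgrep -f test) -batch -ex 'print foo()'
```
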
-### 为什么这是有用的?
+### 这有什么用?
下面是一些可能的用途:
-* 它使得你可以把 gdb 当成一个 C 应答式程序,这很有趣,我想对开发也会有用
-
+* 它使得你可以把 gdb 当成一个 C 应答式程序(REPL),这很有趣,我想对开发也会有用
* 在 gdb 中进行调试的时候展示/浏览复杂数据结构的功能函数(感谢 [@invalidop][1])
-
* [在进程运行时设置一个任意的名字空间][2](我的同事 [nelhage][3] 对此非常惊讶)
-
* 可能还有许多我所不知道的用途
### 它是如何工作的
-当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是”你从符号表中得到了函数的地址“,但这并不是完整的答案。
+当我在 Twitter 上询问从 gdb 中调用函数是如何工作的时,我得到了大量有用的回答。许多答案是“你从符号表中得到了函数的地址”,但这并不是完整的答案。
-有个人告诉了我两篇关于 gdb 如何工作的系列文章:[和本地人一起调试-第一部分][9],[和本地人一起调试-第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。
+有个人告诉了我两篇关于 gdb 如何工作的系列文章:[原生调试:第一部分][9],[原生调试:第二部分][10]。第一部分讲述了 gdb 是如何调用函数的(指出了 gdb 实际上完成这件事并不简单,但是我将会尽力)。
步骤列举如下:
1. 停止进程
-
2. 创建一个新的栈框(远离真实栈)
-
3. 保存所有寄存器
-
4. 设置你想要调用的函数的寄存器参数
-
-5. 设置栈指针指向新的栈框
-
+5. 设置栈指针指向新的栈框
6. 在内存中某个位置放置一条陷阱指令
-
7. 为陷阱指令设置返回地址
-
8. 设置指令寄存器的值为你想要调用的函数地址
-
9. 再次运行进程!
(LCTT 译注:如果将这个调用的函数看成一个单独的线程,gdb 实际上所做的事情就是一个简单的线程上下文切换)
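
如果你想亲眼观察这些步骤,一种可行的做法是用 `strace` 过滤出 gdb 发出的 `ptrace` 系统调用(下文展示的那些 `ptrace(...)` 输出很可能就是用类似方法得到的):

```
# -f 跟踪 gdb 的子进程,-e trace=ptrace 只显示 ptrace 调用
$ sudo strace -f -e trace=ptrace gdb -p $(pgrep -f test)
```
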
我不知道 gdb 是如何完成这所有事情的,但是今天晚上,我学到了其中的几件。
-**创建一个栈框**
+#### 创建一个栈框
如果你想要运行一个 C 函数,那么你需要一个栈来存储变量。你肯定不想继续使用当前的栈。准确来说,在 gdb 调用函数之前(通过设置函数指针并跳转),它需要设置栈指针到某个地方。
@@ -154,14 +136,13 @@ Breakpoint 1 at 0x40052a
Breakpoint 1, 0x000000000040052a in foo ()
(gdb) p $rsp
$8 = (void *) 0x7ffea3d0bc00
-
```
-这看起来符合”gdb 在当前栈的栈顶构造了一个新的栈框“这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` - 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣!
+这看起来符合“gdb 在当前栈的栈顶构造了一个新的栈框”这一理论。因为栈指针(`$rsp`)从 `0x7ffea3d0bca8` 变成了 `0x7ffea3d0bc00` —— 栈指针从高地址往低地址长。所以 `0x7ffea3d0bca8` 在 `0x7ffea3d0bc00` 的后面。真是有趣!
所以,看起来 gdb 只是在当前栈所在位置创建了一个新的栈框。这令我很惊讶!
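
顺便可以直接在 gdb 里验证这两个栈指针的差值(`$9` 这样的结果编号只是示意):

```
(gdb) p 0x7ffea3d0bca8 - 0x7ffea3d0bc00
$9 = 168
```
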
-**改变指令指针**
+#### 改变指令指针
让我们来看一看 gdb 是如何改变指令指针的!
@@ -181,7 +162,7 @@ $3 = (void (*)()) 0x40052a
我盯着输出看了很久,但仍然不理解它是如何改变指令指针的,但这并不影响什么。
-**如何设置断点**
+#### 如何设置断点
上面我写到 `break foo` 。我跟踪 gdb 运行程序的过程,但是没有任何发现。
@@ -202,10 +183,9 @@ $3 = (void (*)()) 0x40052a
// 将 0x400528 处的指令更改为之前的样子
25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0
25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0
-
```
-**在某处放置一条陷阱指令**
+#### 在某处放置一条陷阱指令
当 gdb 运行一个函数的时候,它也会在某个地方放置一条陷阱指令。这是其中一条。它基本上是用 `cc`(即 `int3` 指令的机器码)来替换原来的一条指令。
@@ -213,7 +193,6 @@ $3 = (void (*)()) 0x40052a
5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0
-
```
`0x7f6fa7c0b260` 是什么?我查看了进程的内存映射,发现它位于 `/lib/x86_64-linux-gnu/libc-2.23.so` 中的某个位置。这很奇怪,为什么 gdb 将陷阱指令放在 libc 中?
@@ -226,7 +205,7 @@ $3 = (void (*)()) 0x40052a
我将要在这儿停止了(现在已经凌晨 1 点),但是我知道的多一些了!
-看起来”gdb 如何调用函数“这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。
+看起来“gdb 如何调用函数”这一问题的答案并不简单。我发现这很有趣并且努力找出其中一些答案,希望你也能够找到。
关于 gdb 是如何完成这所有事情的,我依旧有很多未解的疑问,不过没关系。我不需要真的知道 gdb 工作的所有细节,但是我很开心,我有了一些进一步的理解。
@@ -236,7 +215,7 @@ via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
作者:[Julia Evans][a]
译者:[ucasFL](https://github.com/ucasFL)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -244,8 +223,8 @@ via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
[1]:https://twitter.com/invalidop/status/949161146526781440
[2]:https://github.com/baloo/setns/blob/master/setns.c
[3]:https://github.com/nelhage
-[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/
-[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+[4]:https://linux.cn/article-9491-1.html
+[5]:https://linux.cn/article-9276-1.html
[6]:https://twitter.com/b0rk/status/948060808243765248
[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c
[8]:https://cole14.github.io/rust-elf
From bfd7480b278957c7cb67253fbb0cd1ee88805d0c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 09:38:43 +0800
Subject: [PATCH 142/220] PUB:20180104 How does gdb call functions.md
@ucasFL https://linux.cn/article-9588-1.html
---
.../tech => published}/20180104 How does gdb call functions.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180104 How does gdb call functions.md (100%)
diff --git a/translated/tech/20180104 How does gdb call functions.md b/published/20180104 How does gdb call functions.md
similarity index 100%
rename from translated/tech/20180104 How does gdb call functions.md
rename to published/20180104 How does gdb call functions.md
From c88a3fade0bab7d46ce90e30439e9d23d8248727 Mon Sep 17 00:00:00 2001
From: FelixYFZ <33593534+FelixYFZ@users.noreply.github.com>
Date: Sun, 29 Apr 2018 09:58:13 +0800
Subject: [PATCH 143/220] Update 20171116 10 easy steps from proprietary to
open source.md
Translating by FelixYFZ
---
.../20171116 10 easy steps from proprietary to open source.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171116 10 easy steps from proprietary to open source.md b/sources/tech/20171116 10 easy steps from proprietary to open source.md
index 4e085872a5..a8bd86314d 100644
--- a/sources/tech/20171116 10 easy steps from proprietary to open source.md
+++ b/sources/tech/20171116 10 easy steps from proprietary to open source.md
@@ -1,4 +1,4 @@
-10 easy steps from proprietary to open source
+Translating by FelixYFZ 10 easy steps from proprietary to open source
======
"But surely open source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they've written." Hands up: who's heard this?1
From 41a45e580a6595796de9e8a4c87b7ec11e414583 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 10:10:14 +0800
Subject: [PATCH 144/220] PRF:20180226 5 keys to building open hardware.md
@kennethXia
---
...180226 5 keys to building open hardware.md | 49 +++++++++----------
1 file changed, 24 insertions(+), 25 deletions(-)
diff --git a/translated/talk/20180226 5 keys to building open hardware.md b/translated/talk/20180226 5 keys to building open hardware.md
index c470820f5c..e1ea822451 100644
--- a/translated/talk/20180226 5 keys to building open hardware.md
+++ b/translated/talk/20180226 5 keys to building open hardware.md
@@ -1,54 +1,53 @@
-构建开源硬件的5个关键点
+构建开源硬件的 5 个关键点
======
+
+> 最大化你的项目影响。
+

-科学社区正在加速拥抱自由和开源硬件([FOSH][1]). 研究员正忙于[改进他们自己的装备][2]并创造数以百计基于分布式数字制造模型的设备来推动他们的研究。
-热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节省90%到99%的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行年会][5]。
+科学社区正在加速拥抱自由及开源硬件([FOSH][1])。 研究员正忙于[改进他们自己的装备][2]并创造数以百计的基于分布式数字制造模型的设备来推动他们的研究。
-特别的是,不止一本,而是关于这个主题的[两本学术期刊]:[Journal of Open Hardware] (由Ubiquity出版,一个新的自由访问出版商,同时出版了[Journal of Open Research Software][8])以及[HardwareX][9](由Elsevier出版的一种[自由访问期刊][10],它是世界上最大的学术出版商之一)。
+热衷于 FOSH 的主要原因还是钱: 有研究表明,和专用设备相比,FOSH 可以[节省 90% 到 99% 的花费][3]。基于[开源硬件商业模式][4]的科学 FOSH 的商业化已经推动其快速地发展为一个新的工程领域,并为此定期[举行 GOSH 年会][5]。
+
+特别的是,不止一本,而是有关于这个主题的[两本学术期刊][6]:[Journal of Open Hardware][7](由 Ubiquity 出版,一个新的自由访问出版商,同时出版了 [Journal of Open Research Software][8] )以及 [HardwareX][9](由 Elsevier 出版的一种[自由访问期刊][10],它是世界上最大的学术出版商之一)。
由于学术社区的支持,科学 FOSH 的开发者在获取制作乐趣并推进科学快速发展的同时获得学术声望。
### 科学 FOSH 的 5 个步骤
-协恩 (Shane Oberloier)和我在名为Designes的自由问工程期刊上共同发表了一篇关于设计 FOSH 科学设备原则的[文章][11]。我们以滑动烘干机为例,制造成本低于20美元,仅是专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。
+Shane Oberloier 和我在名为 Designs 的自由访问工程期刊上共同发表了一篇关于设计 FOSH 科学设备原则的[文章][11]。我们以滑动式烘干机为例,制造成本低于 20 美元,仅是专用设备价格的三百分之一。[科学][1]和[医疗][12]设备往往比较复杂,开发 FOSH 替代品将带来巨大的回报。
-我总结了5个步骤(包括6条设计原则),它们在协恩和我发表的文章里有详细阐述。这些设计原则也推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。
+我总结了 5 个步骤(包括 6 条设计原则),它们在 Shane Oberloier 和我发表的文章里有详细阐述。这些设计原则也可以推广到非科学设备,而且制作越复杂的设计越能带来更大的潜在收益。
如果你对科学项目的开源硬件设计感兴趣,这些步骤将使你的项目的影响最大化。
- 1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(译者注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。
-
- 2. 使用下列设计原则:
-
- * 在设备生产中,仅适用自由和开源的软件工具链(比如,开源的CAD工具,例如[OpenSCAD][13], [FreeCAD][14], or [Blender][15])和开源硬件。
+1. 评估类似现有工具的功能,你的 FOSH 设计目标应该针对实际效果而不是现有的设计(LCTT 译注:作者的意思应该是不要被现有设计缚住手脚)。必要的时候需进行概念证明。
+2. 使用下列设计原则:
+ * 在设备生产中,仅使用自由和开源的软件工具链(比如,开源的 CAD 工具,例如 [OpenSCAD][13]、 [FreeCAD][14] 或 [Blender][15])和开源硬件。
 * 尝试减少部件的数量和类型,并降低工具的复杂度。
* 减少材料的数量和制造成本。
- * 尽量使用方便易得的工具(比如 [RepRap 3D 打印机][16])进行部件的分布式或数字化生产。
+ * 尽量使用能够分发的部件或使用方便易得的工具(比如 [RepRap 3D 打印机][16])进行部件的数字化生产。
* 对部件进行[参数化设计][17],这使他人可以对你的设计进行个性化改动。相较于特例化设计,参数化设计会更有用。在未来的项目中,使用者可以通过修改核心参数来继续利用它们。
- ?* 所有不能使用开源硬件进行分布制造的零件,必须选择现货产品以方便采购。
-
- 3. 验证功能设计
-
- ?4. 提供关于设计、生产、装配、校准和操作的详尽文档。包括原始设计文件而不仅仅是设计输出。开源硬件协会对于开源设计的发布和文档化有额外的[指南][18],总结如下:
-
- * 以通用的形式分享设计文件
- * 提供详尽的材料清单,包括价格和采购信息
- * 如果包含软件,确保代码对大众来说清晰易懂
+ * 所有不能使用现有的开源硬件以分布式的方式轻松且经济地制造的零件,必须选择现货产品以方便采购。
+3. 验证功能设计。
+4. 提供关于设计、生产、装配、校准和操作的详尽设备文档。包括原始设计文件而不仅仅是用于生产的。开源硬件协会对于开源设计的发布和文档化有额外的[指南][18],总结如下:
+ * 以通用的形式分享设计文件。
+ * 提供详尽的材料清单,包括价格和采购信息。
+ * 如果涉及软件,确保代码对大众来说清晰易懂。
* 作为生产时的参考,必须提供足够的照片,以确保没有任何被遮挡的部分。
* 在描述方法的章节,整个制作过程必须被细化成简单步骤以便复制此设计。
- * 在线上分享并指定许可证。这为用户提供了合理使用设计的信息。
-
- ?5. 主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享以提升他们的存在感。所有的文档应该在自由访问文献中发表,并与适当的社区共享。[开源科学框架][19]是一个值得考虑的优雅的通用存储库,它由开源科学中心主办,该中心设置为接受任何类型的文件并处理大型数据集。
+ * 在线上分享并指定许可证。这为用户提供了合理使用该设计的信息。
+5. 主动分享!为了使 FOSH 发扬光大,设计必须被广泛、频繁和有效地分享以提升它们的存在感。所有的文档应该在自由访问文献中发表,并与适当的社区共享。[开源科学框架][19]是一个值得考虑的优雅的通用存储库,它由开源科学中心主办,该中心设置为接受任何类型的文件并处理大型数据集。
这篇文章得到了 [Fulbright Finland][20] 的支持,该公司赞助了芬兰 Fulbright-Aalto 大学的特聘校席 Joshua Pearce 在开源科学硬件方面的研究工作。
+
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware
作者:[Joshua Pearce][a]
译者:[kennethXia](https://github.com/kennethXia)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From e8aea11be2d1d9fd665c25d0fbef202621d2f708 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 10:10:34 +0800
Subject: [PATCH 145/220] PUB:20180226 5 keys to building open hardware.md
@kennethXia https://linux.cn/article-9589-1.html
---
.../20180226 5 keys to building open hardware.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/talk => published}/20180226 5 keys to building open hardware.md (100%)
diff --git a/translated/talk/20180226 5 keys to building open hardware.md b/published/20180226 5 keys to building open hardware.md
similarity index 100%
rename from translated/talk/20180226 5 keys to building open hardware.md
rename to published/20180226 5 keys to building open hardware.md
From 4ede653e8dd3b6af2197dd9fa4fcb500e88555fe Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 29 Apr 2018 13:54:55 +0800
Subject: [PATCH 146/220] Delete 20180122 A Simple Command-line Snippet
Manager.md
---
...2 A Simple Command-line Snippet Manager.md | 322 ------------------
1 file changed, 322 deletions(-)
delete mode 100644 sources/tech/20180122 A Simple Command-line Snippet Manager.md
diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md
deleted file mode 100644
index 6f2b83f6d0..0000000000
--- a/sources/tech/20180122 A Simple Command-line Snippet Manager.md
+++ /dev/null
@@ -1,322 +0,0 @@
-Translating by MjSeven
-
-
-A Simple Command-line Snippet Manager
-======
-
-
-
-We can't remember all the commands, right? Yes. Except the frequently used commands, it is nearly impossible to remember some long commands that we rarely use. That's why we need to some external tools to help us to find the commands when we need them. In the past, we have reviewed two useful utilities named [**" Bashpast"**][1] and [**" Keep"**][2]. Using Bashpast, we can easily bookmark the Linux commands for easier repeated invocation. And, the Keep utility can be used to keep the some important and lengthy commands in your Terminal, so you can use them on demand. Today, we are going to see yet another tool in the series to help you remembering commands. Say hello to **" Pet"**, a simple command-line snippet manager written in **Go** language.
-
-Using Pet, you can;
-
- * Register/add your important, long and complex command snippets.
- * Search the saved command snippets interactively.
- * Run snippets directly without having to type over and over.
- * Edit the saved command snippets easily.
- * Sync the snippets via Gist.
- * Use variables in snippets.
- * And more yet to come.
-
-
-
-#### Installing Pet CLI Snippet Manager
-
-Since it is written in Go language, make sure you have installed Go in your system.
-
-After Go language, grab the latest binaries from [**the releases page**][3].
-```
-wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip
-```
-
-For 32 bit:
-```
-wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip
-```
-
-Extract the downloaded archive:
-```
-unzip pet_0.2.4_linux_amd64.zip
-```
-
-32 bit:
-```
-unzip pet_0.2.4_linux_386.zip
-```
-
-Copy the pet binary file to your PATH (i.e **/usr/local/bin** or the like).
-```
-sudo cp pet /usr/local/bin/
-```
-
-Finally, make it executable:
-```
-sudo chmod +x /usr/local/bin/pet
-```
-
-If you're using Arch based systems, then you can install it from AUR using any AUR helper tools.
-
-Using [**Pacaur**][4]:
-```
-pacaur -S pet-git
-```
-
-Using [**Packer**][5]:
-```
-packer -S pet-git
-```
-
-Using [**Yaourt**][6]:
-```
-yaourt -S pet-git
-```
-
-Using [**Yay** :][7]
-```
-yay -S pet-git
-```
-
-Also, you need to install **[fzf][8]** or [**peco**][9] tools to enable interactive search. Refer the official GitHub links to know how to install these tools.
-
-#### Usage
-
-Run 'pet' without any arguments to view the list of available commands and general options.
-```
-$ pet
-pet - Simple command-line snippet manager.
-
-Usage:
- pet [command]
-
-Available Commands:
- configure Edit config file
- edit Edit snippet file
- exec Run the selected commands
- help Help about any command
- list Show all snippets
- new Create a new snippet
- search Search snippets
- sync Sync snippets
- version Print the version number
-
-Flags:
- --config string config file (default is $HOME/.config/pet/config.toml)
- --debug debug mode
- -h, --help help for pet
-
-Use "pet [command] --help" for more information about a command.
-```
-
-To view the help section of a specific command, run:
-```
-$ pet [command] --help
-```
-
-**Configure Pet**
-
-It just works fine with default values. However, you can change the default directory to save snippets, choose the selector (fzf or peco) to use, the default text editor to edit snippets, add GIST id details etc.
-
-To configure Pet, run:
-```
-$ pet configure
-```
-
-This command will open the default configuration in the default text editor (for example **vim** in my case). Change/edit the values as per your requirements.
-```
-[General]
- snippetfile = "/home/sk/.config/pet/snippet.toml"
- editor = "vim"
- column = 40
- selectcmd = "fzf"
-
-[Gist]
- file_name = "pet-snippet.toml"
- access_token = ""
- gist_id = ""
- public = false
-~
-```
-
-**Creating Snippets**
-
-To create a new snippet, run:
-```
-$ pet new
-```
-
-Add the command and the description and hit ENTER to save it.
-```
-Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
-Description> Remove numbers from output.
-```
-
-[![][10]][11]
-
-This is a simple command to remove all numbers from the echo command output. You can easily remember it. But, if you rarely use it, you may forgot it completely after few days. Of course we can search the history using "CTRL+r", but "Pet" is much easier. Also, Pet can help you to add any number of entries.
-
-Another cool feature is we can easily add the previous command. To do so, add the following lines in your **.bashrc** or **.zshrc** file.
-```
-function prev() {
- PREV=$(fc -lrn | head -n 1)
- sh -c "pet new `printf %q "$PREV"`"
-}
-```
-
-Do the following command to take effect the saved changes.
-```
-source .bashrc
-```
-
-Or,
-```
-source .zshrc
-```
-
-Now, run any command, for example:
-```
-$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g"
-```
-
-To add the above command, you don't have to use "pet new" command. just do:
-```
-$ prev
-```
-
-Add the description to the command snippet and hit ENTER to save.
-
-[![][10]][12]
-
-**List snippets**
-
-To view the saved snippets, run:
-```
-$ pet list
-```
-
-[![][10]][13]
-
-**Edit Snippets**
-
-If you want to edit the description or the command of a snippet, run:
-```
-$ pet edit
-```
-
-This will open all saved snippets in your default text editor. You can edit or change the snippets as you wish.
-```
-[[snippets]]
- description = "Remove numbers from output."
- command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'"
- output = ""
-
-[[snippets]]
- description = "Alphabetically sort one line of text"
- command = "\t prev"
- output = ""
-```
-
-**Use Tags in snippets**
-
-To use tags to a snippet, use **-t** flag like below.
-```
-$ pet new -t
-Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9
-Description> Remove numbers from output.
-Tag> tr command examples
-
-```
-
-**Execute Snippets**
-
-To execute a saved snippet, run:
-```
-$ pet exec
-```
-
-Choose the snippet you want to run from the list and hit ENTER to run it.
-
-[![][10]][14]
-
-Remember you need to install fzf or peco to use this feature.
-
-**Search Snippets**
-
-If you have plenty of saved snippets, you can easily search them using a string or key word like below.
-```
-$ pet search
-```
-
-Enter the search term or keyword to narrow down the search results.
-
-[![][10]][15]
-
-**Sync Snippets**
-
-First, you need to obtain the access token. Go to this link and create access token (only need "gist" scope).
-
-Configure Pet using command:
-```
-$ pet configure
-```
-
-Set that token to **access_token** in **[Gist]** field.
-
-After setting, you can upload snippets to Gist like below.
-```
-$ pet sync -u
-Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869
-Upload success
-
-```
-
-You can also download snippets on another PC. To do so, edit configuration file and set **Gist ID** to **gist_id** in **[Gist]**.
-
-Then, download the snippets using command:
-```
-$ pet sync
-Download success
-
-```
-
-For more details, refer the help section:
-```
-pet -h
-```
-
-Or,
-```
-pet [command] -h
-```
-
-And, that's all. Hope this helps. As you can see, Pet usage is fairly simple and easy to use! If you're having hard time remembering lengthy commands, Pet utility can definitely be useful.
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/
-
-作者:[SK][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/
-[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/
-[3]:https://github.com/knqyf263/pet/releases
-[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
-[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
-[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
-[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[8]:https://github.com/junegunn/fzf
-[9]:https://github.com/peco/peco
-[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
-[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png ()
-[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png ()
-[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png ()
-[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png ()
-[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png ()
From 943affc3d212746b49044c8acc42833a41995ef1 Mon Sep 17 00:00:00 2001
From: MjSeven <33125422+MjSeven@users.noreply.github.com>
Date: Sun, 29 Apr 2018 13:55:26 +0800
Subject: [PATCH 147/220] Create 20180122 A Simple Command-line Snippet
Manager.md
---
...2 A Simple Command-line Snippet Manager.md | 318 ++++++++++++++++++
1 file changed, 318 insertions(+)
create mode 100644 translated/tech/20180122 A Simple Command-line Snippet Manager.md
diff --git a/translated/tech/20180122 A Simple Command-line Snippet Manager.md b/translated/tech/20180122 A Simple Command-line Snippet Manager.md
new file mode 100644
index 0000000000..f9e827024e
--- /dev/null
+++ b/translated/tech/20180122 A Simple Command-line Snippet Manager.md
@@ -0,0 +1,318 @@
+一个简单的命令行片段管理器
+=====
+
+
+我们不可能记住所有的命令,对吧?是的。除了经常使用的命令之外,我们几乎不可能记住一些很少使用的长命令。这就是为什么需要一些外部工具来帮助我们在需要时找到命令。在过去,我们介绍过两个有用的工具,名为 "Bashpast" 和 "Keep"。使用 Bashpast,我们可以轻松地为 Linux 命令添加书签,以便更轻松地重复调用。而且,Keep 实用程序可以用来在终端中保留一些重要且冗长的命令,以便你可以按需使用它们。今天,我们将看到该系列中的另一个工具,以帮助你记住命令。现在向 "Pet" 打个招呼,这是一个用 Go 语言编写的简单的命令行片段管理器。
+
+使用 Pet,你可以:
+
+ * 注册/添加你重要的,冗长和复杂的命令片段。
+ * 以交互方式来搜索保存的命令片段。
+ * 直接运行代码片段而无须一遍又一遍地输入。
+ * 轻松编辑保存的代码片段。
+ * 通过 Gist 同步片段。
+ * 在片段中使用变量
+ * 还有很多特性即将来临。
+
+
+#### 安装 Pet 命令行接口代码管理器
+
+由于它是用 Go 语言编写的,所以确保你在系统中已经安装了 Go。
+
+安装 Go 后,从 [**Pet 发布页面**][3] 获取最新的二进制文件。
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip
+```
+
+对于 32 位计算机:
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip
+```
+
+解压下载的文件:
+```
+unzip pet_0.2.4_linux_amd64.zip
+```
+
+对于 32 位:
+```
+unzip pet_0.2.4_linux_386.zip
+```
+
+将 pet 二进制文件复制到 PATH 路径下(比如 **/usr/local/bin** 之类的)。
+```
+sudo cp pet /usr/local/bin/
+```
+
+最后,让它可以执行:
+```
+sudo chmod +x /usr/local/bin/pet
+```
+
+如果你使用的是基于 Arch 的系统,那么你可以使用任何 AUR 帮助工具从 AUR 安装它。
+
+使用 [**Pacaur**][4]:
+```
+pacaur -S pet-git
+```
+
+使用 [**Packer**][5]:
+```
+packer -S pet-git
+```
+
+使用 [**Yaourt**][6]:
+```
+yaourt -S pet-git
+```
+
+使用 [**Yay** :][7]
+```
+yay -S pet-git
+```
+
+此外,你需要安装 **[fzf][8]** 或 [**peco**][9] 工具以启用交互式搜索。请参阅官方 GitHub 链接了解如何安装这些工具。
+
+#### 用法
+
+不带任何参数运行 'pet',可以查看可用命令和常规选项的列表。
+```
+$ pet
+pet - Simple command-line snippet manager.
+
+Usage:
+ pet [command]
+
+Available Commands:
+ configure Edit config file
+ edit Edit snippet file
+ exec Run the selected commands
+ help Help about any command
+ list Show all snippets
+ new Create a new snippet
+ search Search snippets
+ sync Sync snippets
+ version Print the version number
+
+Flags:
+ --config string config file (default is $HOME/.config/pet/config.toml)
+ --debug debug mode
+ -h, --help help for pet
+
+Use "pet [command] --help" for more information about a command.
+```
+
+要查看特定命令的帮助部分,运行:
+```
+$ pet [command] --help
+```
+
+**配置 Pet**
+
+Pet 使用默认配置即可正常工作。但是,你也可以更改保存片段的默认目录、选择要使用的选择器(fzf 或 peco)、用于编辑片段的默认文本编辑器,以及添加 GIST id 详细信息等。
+
+
+要配置 Pet,运行:
+```
+$ pet configure
+```
+
+该命令将在默认的文本编辑器中打开默认配置(例如我这里是 **vim**),根据你的要求更改或编辑特定值。
+```
+[General]
+ snippetfile = "/home/sk/.config/pet/snippet.toml"
+ editor = "vim"
+ column = 40
+ selectcmd = "fzf"
+
+[Gist]
+ file_name = "pet-snippet.toml"
+ access_token = ""
+ gist_id = ""
+ public = false
+~
+```
+
+**创建片段**
+
+为了创建一个新的片段,运行:
+```
+$ pet new
+```
+
+添加命令和描述,然后按下 ENTER 键保存它。
+```
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+```
+
+[![][10]][11]
+
+这是一个简单的命令,用于从 echo 命令输出中删除所有数字。你可以很轻松地记住它。但是,如果你很少使用它,几天后你可能会完全忘记它。当然,我们可以使用 "CTRL+r" 搜索历史记录,但 "Pet" 会更容易。另外,Pet 可以帮助你添加任意数量的条目。
+
+另一个很酷的功能是我们可以轻松添加以前的命令。为此,在你的 **.bashrc** 或 **.zshrc** 文件中添加以下行。
+```
+function prev() {
+    # 取出最近一条历史命令(fc -lrn 按时间倒序列出且不带编号)
+    PREV=$(fc -lrn | head -n 1)
+    # 将该命令安全转义后交给 pet new 保存
+    sh -c "pet new `printf %q "$PREV"`"
+}
+```
+
+执行以下命令来使保存的更改生效。
+```
+source .bashrc
+```
+
+或者
+```
+source .zshrc
+```
+
+现在,运行任何命令,例如:
+```
+$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g"
+```
+
+要添加上述命令,你不必使用 "pet new" 命令。只需要:
+```
+$ prev
+```
+
+将说明添加到命令代码片段中,然后按下 ENTER 键保存。
+
+![][12]
+
+**片段列表**
+
+要查看保存的片段,运行:
+```
+$ pet list
+```
+
+![][13]
+
+**编辑片段**
+
+如果你想编辑描述或代码片段的命令,运行:
+```
+$ pet edit
+```
+
+这将在你的默认文本编辑器中打开所有保存的代码片段,你可以根据需要编辑或更改片段。
+```
+[[snippets]]
+ description = "Remove numbers from output."
+ command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'"
+ output = ""
+
+[[snippets]]
+ description = "Alphabetically sort one line of text"
+ command = "\t prev"
+ output = ""
+```
+
+**在片段中使用标签**
+
+要给片段加上标签,使用下面的 **-t** 标志。
+```
+$ pet new -t
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+Tag> tr command examples
+
+```
+
+**执行片段**
+
+要执行一个保存的片段,运行:
+```
+$ pet exec
+```
+
+从列表中选择你要运行的代码段,然后按 ENTER 键来运行它:
+
+![][14]
+
+记住你需要安装 fzf 或 peco 才能使用此功能。
+
+**寻找片段**
+
+如果你保存了很多片段,你可以像下面这样使用字符串或关键词轻松搜索它们。
+```
+$ pet search
+```
+
+输入搜索字词或关键字以缩小搜索结果范围。
+
+![][15]
+
+**同步片段**
+
+首先,你需要获取访问令牌。转到此链接并创建访问令牌(只需要 "gist" 范围)。
+
+使用以下命令来配置 Pet:
+```
+$ pet configure
+```
+
+将该令牌设置到 **[Gist]** 部分的 **access_token** 中。
+
+设置完成后,你可以像下面一样将片段上传到 Gist。
+```
+$ pet sync -u
+Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869
+Upload success
+
+```
+
+你也可以在其他 PC 上下载这些片段。为此,编辑配置文件,将 **[Gist]** 部分的 **gist_id** 设置为你的 Gist ID。
+
+之后,使用以下命令下载片段:
+```
+$ pet sync
+Download success
+
+```
+
+要获取更多细节,请参阅帮助部分:
+```
+pet -h
+```
+
+或者
+```
+pet [command] -h
+```
+
+这就是全部了。希望这可以帮助到你。正如你所看到的,Pet 使用相当简单易用!如果你很难记住冗长的命令,Pet 实用程序肯定会有用。
+
+干杯!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/
+
+作者:[SK][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/
+[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/
+[3]:https://github.com/knqyf263/pet/releases
+[4]:https://www.ostechnix.com/install-pacaur-arch-linux/
+[5]:https://www.ostechnix.com/install-packer-arch-linux-2/
+[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
+[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[8]:https://github.com/junegunn/fzf
+[9]:https://github.com/peco/peco
+[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png
+[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png
+[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png
+[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png
+[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png
From 8974a852995a3853287a1a50d8aa676063a3ec53 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 17:42:35 +0800
Subject: [PATCH 148/220] PRF:20180321 The Command line Personal Assistant For
Your Linux System.md
@amwps290
---
...ersonal Assistant For Your Linux System.md | 110 +++++-------------
1 file changed, 32 insertions(+), 78 deletions(-)
diff --git a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md b/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md
index a96ed69bc4..c483ed94be 100644
--- a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md
+++ b/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md
@@ -1,37 +1,37 @@
-# 您的 Linux 系统命令行个人助理
+Yoda:您的 Linux 系统命令行个人助理
+===========

-不久前,我们写了一个名为 [**“Betty”**][1] 的命令行虚拟助手。今天,我偶然发现了一个类似的实用程序,叫做 **“Yoda”**。Yoda 是一个命令行个人助理,可以帮助您在 Linux 中完成一些琐碎的任务。它是用 Python 编写的一个免费的开源应用程序。在本指南中,我们将了解如何在 GNU/Linux 中安装和使用 Yoda。
+不久前,我们介绍了一个名为 [“Betty”][1] 的命令行虚拟助手。今天,我偶然发现了一个类似的实用程序,叫做 “Yoda”。Yoda 是一个命令行个人助理,可以帮助您在 Linux 中完成一些琐碎的任务。它是用 Python 编写的一个自由开源应用程序。在本指南中,我们将了解如何在 GNU/Linux 中安装和使用 Yoda。
### 安装命令行私人助理 Yoda
-Yoda 需要 **Python 2** 和 PIP 。如果在您的 Linux 中没有安装 PIP,请参考下面的指南来安装它。只要确保已经安装了 **python2-pip** 。Yoda 可能不支持 Python 3。
+Yoda 需要 Python 2 和 PIP。如果在您的 Linux 中没有安装 PIP,请参考下面的指南来安装它。只要确保已经安装了 python2-pip。Yoda 可能不支持 Python 3。
-**注意**:我建议你在虚拟环境下试用 Yoda。 不仅仅是 Yoda,总是在虚拟环境中尝试任何 Python 应用程序,让它们不会干扰全局安装的软件包。 您可以按照上文链接中标题为“创建虚拟环境”一节中所述设置虚拟环境。
+- [如何使用 pip 管理 Python 包](https://www.ostechnix.com/manage-python-packages-using-pip/)
-在您的系统上安装了 pip 之后,使用下面的命令克隆 Yoda 库。
+注意:我建议你在 Python 虚拟环境下试用 Yoda。不仅仅是 Yoda,任何 Python 应用程序都应该在虚拟环境中尝试,让它们不会干扰全局安装的软件包。您可以按照上文链接中标题为“创建虚拟环境”一节中所述设置虚拟环境。
+
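+下面是创建并进入一个 Python 2 虚拟环境的一种写法(`yoda-env` 这个目录名只是随意取的示例):
+
+```
+$ virtualenv -p python2 yoda-env
+$ source yoda-env/bin/activate
+```
+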
+在您的系统上安装了 `pip` 之后,使用下面的命令克隆 Yoda 库。
```
$ git clone https://github.com/yoda-pa/yoda
-
```
-上面的命令将在当前工作目录中创建一个名为 “yoda” 的目录,并在其中克隆所有内容。转到 Yoda 目录:
+上面的命令将在当前工作目录中创建一个名为 `yoda` 的目录,并在其中克隆所有内容。转到 `yoda` 目录:
```
$ cd yoda/
-
```
-运行以下命令安装Yoda应用程序。
+运行以下命令安装 Yoda 应用程序。
```
$ pip install .
-
```
-请注意最后的点(.)。 现在,所有必需的软件包将被下载并安装。
+请注意最后的点(`.`)。 现在,所有必需的软件包将被下载并安装。
### 配置 Yoda
@@ -41,7 +41,6 @@ $ pip install .
```
$ yoda setup new
-
```
填写下列的问题:
@@ -59,7 +58,6 @@ Where shall your config be stored? (Default: ~/.yoda/)
A configuration file already exists. Are you sure you want to overwrite it? (y/n)
y
-
```
你的密码在加密后保存在配置文件中,所以不用担心。
@@ -68,25 +66,22 @@ y
```
$ yoda setup check
-
```
你会看到如下的输出。
```
Name: Senthil Kumar
-Email: [email protected]
+Email: sk@senthilkumar.com
Github username: sk
-
```
-默认情况下,您的信息存储在 **~/.yoda** 目录中。
+默认情况下,您的信息存储在 `~/.yoda` 目录中。
要删除现有配置,请执行以下操作:
```
$ yoda setup delete
-
```
### 用法
@@ -95,7 +90,6 @@ Yoda 包含一个简单的聊天机器人。您可以使用下面的聊天命令
```
$ yoda chat who are you
-
```
样例输出:
@@ -107,14 +101,13 @@ I'm a virtual agent
$ yoda chat how are you
Yoda speaks:
I'm doing very well. Thanks!
-
```
-以下是我们可以用 Yoda 做的事情:
+以下是我们可以用 Yoda 做的事情:
-**测试网络速度**
+#### 测试网络速度
-让我们问一下 Yoda 关于互联网速度的问题。运行:
+让我们问一下 Yoda 关于互联网速度的问题。运行:
```
$ yoda speedtest
@@ -122,18 +115,16 @@ Speed test results:
Ping: 108.45 ms
Download: 0.75 Mb/s
Upload: 1.95 Mb/s
-
```
-**缩短并展开网址**
+#### 缩短和展开网址
-Yoda 还有助于缩短任何网址。
+Yoda 还有助于缩短任何网址:
```
$ yoda url shorten https://www.ostechnix.com/
Here's your shortened URL:
https://goo.gl/hVW6U0
-
```
要展开缩短的网址:
@@ -142,11 +133,9 @@ https://goo.gl/hVW6U0
$ yoda url expand https://goo.gl/hVW6U0
Here's your original URL:
https://www.ostechnix.com/
-
```
-
-**阅读黑客新闻**
+#### 阅读 Hacker News
我是 Hacker News 网站的常客。如果你像我一样,你可以像下面这样使用 Yoda 阅读 Hacker News 网站的新闻。
@@ -159,12 +148,11 @@ Description-- I came up with this idea "a Yelp for developers" when talking with
url-- https://news.ycombinator.com/item?id=16636071
Continue? [press-"y"]
-
```
-Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 “y” 并按下 ENTER。
+Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 `y` 并按下回车。
-**管理个人日记**
+#### 管理个人日记
我们也可以保留个人日记以记录重要事件。
@@ -174,7 +162,6 @@ Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 “y
$ yoda diary nn
Input your entry for note:
Today I learned about Yoda
-
```
要创建新笔记,请再次运行上述命令。
@@ -188,7 +175,6 @@ Today's notes:
Time | Note
--------|-----
16:41:41| Today I learned about Yoda
-
```
不仅仅是笔记,Yoda 还可以帮助你创建任务。
@@ -199,7 +185,6 @@ Today's notes:
$ yoda diary nt
Input your entry for task:
Write an article about Yoda and publish it on OSTechNix
-
```
要查看任务列表,请运行:
@@ -217,10 +202,9 @@ Summary:
----------------
Incomplete tasks: 1
Completed tasks: 0
-
```
-正如你在上面看到的,我有一个未完成的任务。 要将其标记为已完成,请运行以下命令并输入已完成的任务序列号并按下 ENTER 键:
+正如你在上面看到的,我有一个未完成的任务。 要将其标记为已完成,请运行以下命令并输入已完成的任务序列号并按下回车键:
```
$ yoda diary ct
@@ -231,7 +215,6 @@ Number | Time | Task
1 | 16:44:03: Write an article about Yoda and publish it on OSTechNix
Enter the task number that you would like to set as completed
1
-
```
您可以随时使用命令分析当前月份的任务:
@@ -241,18 +224,16 @@ $ yoda diary analyze
Percentage of incomplete task : 0
Percentage of complete task : 100
Frequency of adding task (Task/Day) : 3
-
```
有时候,你可能想要记录一个关于你爱的或者敬佩的人的个人资料。
-**记录关于爱人的笔记**
+#### 记录关于爱人的笔记
首先,您需要设置配置来存储朋友的详细信息。 请运行:
```
$ yoda love setup
-
```
输入你的朋友的详细信息:
@@ -264,7 +245,6 @@ Enter sex(M/F):
M
Where do they live?
Rameswaram
-
```
要查看此人的详细信息,请运行:
@@ -272,7 +252,6 @@ Rameswaram
```
$ yoda love status
{'place': 'Rameswaram', 'name': 'Abdul Kalam', 'sex': 'M'}
-
```
要添加你的爱人的生日:
@@ -281,7 +260,6 @@ $ yoda love status
$ yoda love addbirth
Enter birthday
15-10-1931
-
```
查看生日:
@@ -289,7 +267,6 @@ Enter birthday
```
$ yoda love showbirth
Birthday is 15-10-1931
-
```
你甚至可以添加关于该人的笔记:
@@ -297,7 +274,6 @@ Birthday is 15-10-1931
```
$ yoda love note
Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was the 11th President of India from 2002 to 2007.
-
```
您可以使用命令查看笔记:
@@ -306,7 +282,6 @@ Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was th
$ yoda love notes
Notes:
1: Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was the 11th President of India from 2002 to 2007.
-
```
你也可以写下这个人喜欢的东西:
@@ -317,7 +292,6 @@ Add things they like
Physics, Aerospace
Want to add more things they like? [y/n]
n
-
```
要查看他们喜欢的东西,请运行:
@@ -326,12 +300,9 @@ n
$ yoda love likes
Likes:
1: Physics, Aerospace
-
```
-****
-
-**跟踪资金费用**
+#### 跟踪资金费用
您不需要单独的工具来维护您的财务支出。 Yoda 会替您处理好。
@@ -339,7 +310,6 @@ Likes:
```
$ yoda money setup
-
```
输入您的货币代码和初始金额:
@@ -360,7 +330,6 @@ Enter initial amount:
```
$ yoda money status
{'initial_money': 10000, 'currency_code': 'INR'}
-
```
让我们假设你买了一本价值 250 卢比的书。 要添加此费用,请运行:
@@ -369,7 +338,6 @@ $ yoda money status
$ yoda money exp
Spend 250 INR on books
output:
-
```
要查看花费,请运行:
@@ -377,44 +345,35 @@ output:
```
$ yoda money exps
2018-03-21 17:12:31 INR 250 books
-
```
-****
+#### 创建想法列表
-**创建想法列表**
-
-创建一个新的想法:
+创建一个新的想法:
```
$ yoda ideas add --task --inside
-
```
-列出想法:
+列出想法:
```
$ yoda ideas show
-
```
-从任务中移除一个想法:
+从任务中移除一个想法:
```
$ yoda ideas remove --task --inside
-
```
-要完全删除这个想法,请运行:
+要完全删除这个想法,请运行:
```
$ yoda ideas remove --project
-
```
-****
-
-**学习英语词汇**
+#### 学习英语词汇
Yoda 帮助你学习随机英语单词并追踪你的学习进度。
@@ -422,36 +381,31 @@ Yoda 帮助你学习随机英语单词并追踪你的学习进度。
```
$ yoda vocabulary word
-
```
+它会随机显示一个单词。按回车键显示单词的含义。再一次,Yoda 会问你是否已经知道这个词的意思。如果您已经知道,请输入“是”。如果您不知道,请输入“否”。这可以帮助你跟踪你的进度。使用以下命令来了解您的进度。
+它会随机显示一个单词。 按回车键显示单词的含义。 再一次,Yoda 问你是否已经知道这个词的意思。 如果您已经知道,请输入“是”。 如果您不知道,请输入“否”。 这可以帮助你跟踪你的进度。 使用以下命令来了解您的进度。
```
$ yoda vocabulary accuracy
-
```
此外,Yoda 可以帮助您做其他一些事情,比如找到单词的定义和创建插卡以轻松学习任何内容。 有关更多详细信息和可用选项列表,请参阅帮助部分。
```
$ yoda --help
-
```
更多好的东西来了。请继续关注!
干杯!
-
-
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/yoda-the-command-line-personal-assistant-for-your-linux-system/
作者:[SK][a]
译者:[amwps290](https://github.com/amwps290)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From a1e5bdc28bd23cb00f5ae3d510a39e04702325bd Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 17:43:05 +0800
Subject: [PATCH 149/220] PUB:20180321 The Command line Personal Assistant For
Your Linux System.md
@amwps290 https://linux.cn/article-9590-1.html
---
...1 The Command line Personal Assistant For Your Linux System.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180321 The Command line Personal Assistant For Your Linux System.md (100%)
diff --git a/translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md b/published/20180321 The Command line Personal Assistant For Your Linux System.md
similarity index 100%
rename from translated/tech/20180321 The Command line Personal Assistant For Your Linux System.md
rename to published/20180321 The Command line Personal Assistant For Your Linux System.md
From 0a32be70ec7f4b03de26ca4d183dd1f7d7a116ea Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 18:29:39 +0800
Subject: [PATCH 150/220] PRF:20180208 Become a Hollywood movie hacker with
these three command line tools.md
@wyxplus
---
...ker with these three command line tools.md | 38 +++++++++----------
1 file changed, 17 insertions(+), 21 deletions(-)
diff --git a/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md b/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md
index 1e1c8aa032..a131f84969 100644
--- a/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md
+++ b/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md
@@ -1,53 +1,49 @@
-用这三个命令行工具成为好莱坞电影中的黑客
+假装很忙的三个命令行工具
======
+> 有时候你很忙。而有时候你只是需要看起来很忙,就像电影中的黑客一样。有一些开源工具就是干这个的。
+

-如果在你成长过程中有看过谍战片、动作片或犯罪片,那么你就会清楚地了解黑客的电脑屏幕是什么样子。就像是在《黑客帝国》电影中,[代码雨][1] 一样的十六进制数字流,又或是一排排快速移动的代码 。
+如果在你在消磨时光时看过谍战片、动作片或犯罪片,那么你就会清晰地在脑海中勾勒出黑客的电脑屏幕的样子。就像是在《黑客帝国》电影中,[代码雨][1] 一样的十六进制数字流,又或是一排排快速移动的代码。
-也许电影中出现一幅世界地图,其中布满了闪烁的光点除和一些快速更新的字符。 而且是3D旋转的几何图像,为什么不可能出现在现实中呢? 如果可能的话,那么就会出现数量多得不可思议的显示屏,以及不符合人体工学的电脑椅或其他配件。 在《剑鱼行动》电影中黑客就使用了七个显示屏。
+也许电影中出现一幅世界地图,其中布满了闪烁的光点和一些快速更新的图表。不可或缺的,也可能有 3D 旋转的几何形状。甚至,这一切都会显示在一些完全不符合人类习惯的数量荒谬的显示屏上。 在《剑鱼行动》电影中黑客就使用了七个显示屏。
-当然,我们这些从事计算机行业的人一下子就明白这完全是胡说八道。虽然在我们中,许多人都有双显示器(或更多),但一个闪烁的数据仪表盘通常和专注工作是相互矛盾的。编写代码、项目管理和系统管理与日常工作不同。我们遇到的大多数情况,为了解决问题,都需要大量的思考,与客户沟通所得到一些研究和组织的资料,然后才是少许的 [敲代码][7]。
+当然,我们这些从事计算机行业的人一下子就明白这完全是胡说八道。虽然在我们中,许多人都有双显示器(或更多),但一个闪烁的数据仪表盘、刷新的数据通常和专注工作是相互矛盾的。编写代码、项目管理和系统管理与日常工作不同。我们遇到的大多数情况,为了解决问题,都需要大量的思考,与客户沟通所得到一些研究和组织的资料,然后才是少许的 [敲代码][7]。
然而,这与我们想追求电影中的效果并不矛盾,也许,我们只是想要看起来“忙于工作”而已。
-**注:当然,我仅仅是在此胡诌。**如果实际上您公司是根据您繁忙程度来评估您的工作时,无论您是蓝领还是白领,都需要亟待解决这样的工作文化。假装工作很忙是一种有毒的文化,对公司和员工都有害无益。
-
-这就是说,让我们找些乐子,用一些老式的、毫无意义的数据和代码片段填充我们的屏幕。(当然,数据或许有意义,而不是没有上下文。)当然有许多有趣的图形界面,如 [hackertyper.net][8] 或是 [GEEKtyper.com][9] 网站(译者注:是在线模拟黑客网站),为什么不使用Linux终端程序呢?对于更老派的外观,可以考虑使用 [酷炫复古终端][10],这听起来确实如此:一个酷炫的复古终端程序。我将在下面的屏幕截图中使用酷炫复古终端,因为它看起来的确很酷。
+**注:当然,我仅仅是在此胡诌。**如果您公司实际上是根据您繁忙程度来评估您的工作时,无论您是蓝领还是白领,都需要亟待解决这样的工作文化。假装工作很忙是一种有毒的文化,对公司和员工都有害无益。
+这就是说,让我们找些乐子,用一些老式的、毫无意义的数据和代码片段填充我们的屏幕。(当然,数据或许有意义,但不是在这种没有上下文的环境中。)当然有一些用于此用途的有趣的图形界面程序,如 [hackertyper.net][8] 或是 [GEEKtyper.com][9] 网站(LCTT 译注:是在线假装黑客操作的网站),为什么不使用标准的 Linux 终端程序呢?对于更老派的外观,可以考虑使用 [酷炫复古终端][10],这听起来确实如此:一个酷炫的复古终端程序。我将在下面的屏幕截图中使用酷炫复古终端,因为它看起来的确很酷。
### Genact
-我们来看下第一个工具——Genact。Genact的原理很简单,就是慢慢地循环播放您选择的一个序列,让您的代码在您外出休息时“编译”。由您来决定播放顺序,但是其中默认包含数字货币挖矿模拟器、PHP管理依赖关系工具、内核编译器、下载器、内存转储等工具。其中我最喜欢的是其中类似《模拟城市》加载显示。所以只要没有人仔细检查,你可以花一整个下午等待您的电脑完成进度条。
+我们来看下第一个工具——Genact。Genact 的原理很简单,就是慢慢地无尽循环播放您选择的一个序列,让您的代码在您外出休息时“编译”。由您来决定播放顺序,但是其中默认包含数字货币挖矿模拟器、Composer PHP 依赖关系管理工具、内核编译器、下载器、内存转储等工具。其中我最喜欢的是其中类似《模拟城市》加载显示。所以只要没有人仔细检查,你可以花一整个下午等待您的电脑完成进度条。
-Genact[发行版][11]支持Linux、OS X和Windows。并且用Rust编写。[源代码][12] 在GitHub上开源(遵循[MIT许可证][13])
+Genact [发布了][11] 支持 Linux、OS X 和 Windows 的版本。并且其 Rust [源代码][12] 在 GitHub 上开源(遵循 [MIT 许可证][13])。

### Hollywood
+Hollywood 采取更直接的方法。它本质上是在终端中创建数量和布局都随机的分屏,并在其中启动那些看起来很繁忙的应用程序,如 htop、目录树、源代码文件等,并每隔几秒切换一次。它被组织成一个 shell 脚本,所以可以非常容易地根据需求进行修改。
-Hollywood采取更直接的方法。它本质上是在终端中创建一个随机数字和配置分屏,并启动跑个不停的应用程序,如htop,目录树,源代码文件等,并每隔几秒将其切换。它被放在一起作为一个shell脚本,所以可以非常容易地根据需求进行修改。
-
-
-Hollywood的 [源代码][14] 在GitHub上开源(遵循[Apache 2.0许可证][15])。
+Hollywood 的 [源代码][14] 在 GitHub 上开源(遵循 [Apache 2.0 许可证][15])。

### Blessed-contrib
-Blessed-contrib是我个人最喜欢的应用,实际上并不是为了表演而专门设计的应用。相反地,它是一个基于Node.js的后台终端构建库的演示文件。与其他两个不同,实际上我已经在工作中使用Blessed-contrib的库,而不是用于假装忙于工作。因为它是一个相当有用的库,并且可以使用一组在命令行显示信息的小部件。与此同时填充虚拟数据也很容易,所以可以很容易实现模拟《战争游戏》的想法。
-
-
-Blessed-contrib的[源代码][16]在GitHub上(遵循[MIT许可证][17])。
+Blessed-contrib 是我个人最喜欢的应用,实际上并不是为了这种表演而专门设计的应用。相反地,它是一个基于 Node.js 的终端仪表盘的构建库的演示文件。与其他两个不同,实际上我已经在工作中使用 Blessed-contrib 的库,而不是用于假装忙于工作。因为它是一个相当有用的库,并且可以使用一组在命令行显示信息的小部件。与此同时填充虚拟数据也很容易,所以可以很容易实现你在计算机上模拟《战争游戏》的想法。
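+
+如果想自己跑一下它自带的仪表盘演示,大致只需要下面几步(具体请以该仓库的 README 为准):
+
+```
+$ git clone https://github.com/yaronn/blessed-contrib.git
+$ cd blessed-contrib
+$ npm install
+$ node ./examples/dashboard.js
+```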
+Blessed-contrib 的[源代码][16]在 GitHub 上(遵循 [MIT 许可证][17])。

-当然,尽管这些工具很容易使用,但也有很多其他的方式使你的屏幕丰富。在你看到电影中最常用的工具之一就是Nmap,一个开源的网络安全扫描工具。实际上,它被广泛用作展示好莱坞电影中,黑客电脑屏幕上的工具。因此Nmap的开发者创建了一个 [页面][18],列出了它出现在其中的一些电影,从《黑客帝国2:重装上阵》到《谍影重重3》、《龙纹身的女孩》,甚至《虎胆龙威4》。
-
-当然,您可以创建自己的组合,使用终端多路复用器(如屏幕或tmux)启动您希望的任何数据分散应用程序。
+当然,尽管这些工具很容易使用,但也有很多其他的方式使你的屏幕丰富。你在电影中看到的最常用的工具之一就是 Nmap,这是一个开源的网络安全扫描工具。实际上,它被广泛用作好莱坞电影中黑客电脑屏幕上展示的工具。因此 Nmap 的开发者创建了一个 [页面][18],列出了它出现过的一些电影,从《黑客帝国 2:重装上阵》到《谍影重重3》、《龙纹身的女孩》,甚至《虎胆龙威 4》。
+当然,您可以创建自己的组合,使用终端多路复用器(如 `screen` 或 `tmux`)启动您希望使用的任何数据切分程序。
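+
+例如,用 `tmux` 拼一个自己的“仪表盘”可以像下面这样,各个窗格里跑什么程序、日志路径都只是示例:
+
+```
+$ tmux new-session -d 'htop'
+$ tmux split-window -h 'watch -n1 "dmesg | tail -n 20"'
+$ tmux split-window -v 'tail -f /var/log/syslog'
+$ tmux attach
+```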
那么,您是如何使用您的屏幕的呢?
@@ -57,7 +53,7 @@ via: https://opensource.com/article/18/2/command-line-tools-productivity
作者:[Jason Baker][a]
译者:[wyxplus](https://github.com/wyxplus)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 3d197e23f42e0245fe394ed8f3bfb2b718db177d Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 18:30:27 +0800
Subject: [PATCH 151/220] PUB:20180208 Become a Hollywood movie hacker with
these three command line tools.md
@wyxplus https://linux.cn/article-9591-1.html
---
... Hollywood movie hacker with these three command line tools.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180208 Become a Hollywood movie hacker with these three command line tools.md (100%)
diff --git a/translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md b/published/20180208 Become a Hollywood movie hacker with these three command line tools.md
similarity index 100%
rename from translated/tech/20180208 Become a Hollywood movie hacker with these three command line tools.md
rename to published/20180208 Become a Hollywood movie hacker with these three command line tools.md
From c2ab15f41e39a33f3f0f8eda90e29b4299697713 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 21:32:06 +0800
Subject: [PATCH 152/220] PRF:20180118 Securing the Linux filesystem with
Tripwire.md
@geekpi
---
...ring the Linux filesystem with Tripwire.md | 47 ++++++++-----------
1 file changed, 19 insertions(+), 28 deletions(-)
diff --git a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md b/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md
index 8d2f6db004..f82584a772 100644
--- a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md
+++ b/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md
@@ -1,54 +1,50 @@
使用 Tripwire 保护 Linux 文件系统
======
+> 如果恶意软件或其情况改变了你的文件系统,Linux 完整性检查工具会提示你。
+

-尽管 Linux 被认为是最安全的操作系统(在 Windows 和 MacOS 之前),但它仍然容易受到 rootkit 和其他恶意软件的影响。因此,Linux 用户需要知道如何保护他们的服务器或个人电脑免遭破坏,他们需要采取的第一步就是保护文件系统。
+尽管 Linux 被认为是最安全的操作系统(排在 Windows 和 MacOS 之前),但它仍然容易受到 rootkit 和其他恶意软件的影响。因此,Linux 用户需要知道如何保护他们的服务器或个人电脑免遭破坏,他们需要采取的第一步就是保护文件系统。
-在本文中,我们将看看 [Tripwire][1],这是保护 Linux 文件系统的绝佳工具。Tripwire 是一个完整性检查工具,使系统管理员、安全工程师和其他人能够检测系统文件的变更。虽然它不是唯一的选择([AIDE][2] 和 [Samhain][3] 提供类似功能),但 Tripwire 可以说是 Linux 系统文件中最常用的完整性检查程序,并在 GPLv2 许可证下开源。
+在本文中,我们将看看 [Tripwire][1],这是保护 Linux 文件系统的绝佳工具。Tripwire 是一个完整性检查工具,使得系统管理员、安全工程师和其他人能够检测系统文件的变更。虽然它不是唯一的选择([AIDE][2] 和 [Samhain][3] 提供类似功能),但 Tripwire 可以说是 Linux 系统文件中最常用的完整性检查程序,并在 GPLv2 许可证下开源。
### Tripwire 如何工作
了解 Tripwire 如何运行对了解 Tripwire 在安装后会做什么有所帮助。Tripwire 主要由两个部分组成:策略和数据库。策略列出了完整性检查器应该生成快照的所有文件和目录,还创建了用于识别对目录和文件更改违规的规则。数据库由 Tripwire 生成的快照组成。
-Tripwire 还有一个配置文件,它指定数据库、策略文件和 Tripwire 可执行文件的位置。它还提供两个加密密钥 - 站点密钥和本地密钥 - 以保护重要文件免遭篡改。站点密钥保护策略和配置文件,而本地密钥保护数据库和生成的报告。
+Tripwire 还有一个配置文件,它指定数据库、策略文件和 Tripwire 可执行文件的位置。它还提供两个加密密钥 —— 站点密钥和本地密钥 —— 以保护重要文件免遭篡改。站点密钥保护策略和配置文件,而本地密钥保护数据库和生成的报告。
-Tripwire 定期将目录和文件与数据库中的快照进行比较并报告所有的更改。
+Tripwire 会定期将目录和文件与数据库中的快照进行比较并报告所有的更改。
### 安装 Tripwire
要使用 Tripwire,我们需要先下载并安装它。Tripwire 适用于几乎所有的 Linux 发行版。你可以从 [Sourceforge][4] 下载一个开源版本,并根据你的 Linux 发行版按如下方式进行安装。
Debian 和 Ubuntu 用户可以使用 `apt-get` 直接从仓库安装 Tripwire。非 root 用户应该输入 `sudo` 命令通过 `apt-get` 安装 Tripwire。
+
```
-
-
sudo apt-get update
-
sudo apt-get install tripwire
```
-CentOS 和其他基于 rpm 的发行版使用类似的过程。为了最佳实践,请在安装新软件包(如 Tripwire)之前更新仓库。命令 `yum install epel-release` 意思是我们想要安装额外的存储库。 (`epel` 代表 Extra Packages for Enterprise Linux。)
+CentOS 和其他基于 RPM 的发行版使用类似的过程。为了最佳实践,请在安装新软件包(如 Tripwire)之前更新仓库。命令 `yum install epel-release` 意思是我们想要安装额外的存储库。 (`epel` 代表 Extra Packages for Enterprise Linux。)
+
```
-
-
yum update
-
yum install epel-release
-
yum install tripwire
```
此命令会在安装过程中完成让 Tripwire 正常运行所需的配置。另外,它会在安装过程中询问你是否使用密码,这两处都可以选择 “Yes”。
-另外,如果需要构建配置文件,请选择 “Yes”。选择并确认站点密钥和本地密钥的密码。(建议使用复杂的密码,例如 `Il0ve0pens0urce`。)
+另外,如果需要构建配置文件,请选择 “Yes”。选择并确认站点密钥和本地密钥的密码。(建议使用复杂的密码,例如 `Il0ve0pens0urce` 这样的。)
### 建立并初始化 Tripwire 数据库
接下来,按照以下步骤初始化 Tripwire 数据库:
+
```
-
-
tripwire --init
```
@@ -57,39 +53,34 @@ tripwire --init
### 使用 Tripwire 进行基本的完整性检查
你可以使用以下命令让 Tripwire 检查你的文件或目录是否已被修改。Tripwire 将文件和目录与数据库中的初始快照进行比较的能力依赖于你在活动策略中创建的规则。
+
```
-
-
tripwire --check
```
你还可以将 `--check` 命令限制为特定的文件或目录,如下所示:
+
```
-
-
tripwire --check /usr/tmp
```
另外,如果你需要关于 Tripwire `--check` 命令的更多帮助,可以用以下命令查阅 Tripwire 的手册:
+
```
-
-
tripwire --check --help
```
### 使用 Tripwire 生成报告
-要轻松生成每日系统完整性报告,请使用以下命令创建一个 “crontab”:
+要轻松生成每日系统完整性报告,请使用以下命令创建一个 crontab 任务:
+
```
-
-
crontab -e
```
-之后,你可以编辑此文件(使用你选择的文本编辑器)来引入由 cron 运行的任务。例如,你可以使用以下命令设置一个 cron 作业,在每天的 5:40 将 Tripwire 的报告发送到你的邮箱:
+之后,你可以编辑此文件(使用你选择的文本编辑器)来引入由 cron 运行的任务。例如,你可以使用以下命令设置一个 cron 任务,在每天的 5:40 将 Tripwire 的报告发送到你的邮箱:
+
```
-
-
40 5 * * * /usr/sbin/tripwire --check
```
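
在配置了本地邮件投递的系统上,cron 默认会把任务输出寄给 crontab 的属主;如果想明确指定收件邮箱,可以在 crontab 顶部再加一行 `MAILTO`(下面的地址只是示例):

```
MAILTO=admin@example.com
40 5 * * * /usr/sbin/tripwire --check
```
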
@@ -101,7 +92,7 @@ via: https://opensource.com/article/18/1/securing-linux-filesystem-tripwire
作者:[Michael Kwaku Aboagye][a]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 14cfa4f7618638a5573e11ba408512c3934a4677 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 21:32:23 +0800
Subject: [PATCH 153/220] PUB:20180118 Securing the Linux filesystem with
Tripwire.md
@geekpi
---
.../20180118 Securing the Linux filesystem with Tripwire.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {translated/tech => published}/20180118 Securing the Linux filesystem with Tripwire.md (100%)
diff --git a/translated/tech/20180118 Securing the Linux filesystem with Tripwire.md b/published/20180118 Securing the Linux filesystem with Tripwire.md
similarity index 100%
rename from translated/tech/20180118 Securing the Linux filesystem with Tripwire.md
rename to published/20180118 Securing the Linux filesystem with Tripwire.md
From d320cdfaf1574a2fe5b5c9de3ea26643924858d2 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 29 Apr 2018 23:01:20 +0800
Subject: [PATCH 154/220] PRF:20180125 BUILDING A FULL-TEXT SEARCH APP USING
DOCKER AND ELASTICSEARCH.md
@qhwdw
---
...ARCH APP USING DOCKER AND ELASTICSEARCH.md | 216 +++++++-----------
1 file changed, 83 insertions(+), 133 deletions(-)
diff --git a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md
index 3210c7d284..053f70bff3 100644
--- a/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md
+++ b/translated/tech/20180125 BUILDING A FULL-TEXT SEARCH APP USING DOCKER AND ELASTICSEARCH.md
@@ -1,59 +1,55 @@
-使用 DOCKER 和 ELASTICSEARCH 构建一个全文搜索应用程序
+使用 Docker 和 Elasticsearch 构建一个全文搜索应用程序
============================================================
- _如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_
+
- _如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_
+_如何在超过 500 万篇文章的 Wikipedia 上找到与你研究相关的文章?_
- _谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_
+_如何在超过 20 亿用户的 Facebook 中找到你的朋友(并且还拼错了名字)?_
-在本教程中,我们将带你探索如何配置我们自己的全文探索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,_Peter Pan_ , _Frankenstein_ , 和 _Treasure Island_ )中搜索完整的文本。
+_谷歌如何在整个因特网上搜索你的模糊的、充满拼写错误的查询?_
-你可以在这里([https://search.patricktriest.com][6])预览教程中应用程序的完整版本。
+在本教程中,我们将带你探索如何配置我们自己的全文搜索应用程序(与上述问题中的系统相比,它的复杂度要小很多)。我们的示例应用程序将提供一个 UI 和 API 去从 100 部经典文学(比如,《彼得·潘》 、 《弗兰肯斯坦》 和 《金银岛》)中搜索完整的文本。
+
+你可以在这里([https://search.patricktriest.com][6])预览该教程应用的完整版本。

-这个应用程序的源代码是 100% 公开的,可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][7]
+这个应用程序的源代码是 100% 开源的,可以在 GitHub 仓库上找到它们 —— [https://github.com/triestpa/guttenberg-search][7] 。
-在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],在它们的查询和索引结构中都提供一个有限的、基础的、文本搜索的功能。为实现高质量的全文搜索,通常的最佳选择是单独数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。
+在应用程序中添加一个快速灵活的全文搜索可能是个挑战。大多数的主流数据库,比如,[PostgreSQL][8] 和 [MongoDB][9],由于受其查询和索引结构的限制只能提供一个非常基础的文本搜索功能。为实现高质量的全文搜索,通常的最佳选择是单独的数据存储。[Elasticsearch][10] 是一个开源数据存储的领导者,它专门为执行灵活而快速的全文搜索进行了优化。
-我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14]、以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。
+我们将使用 [Docker][11] 去配置我们自己的项目环境和依赖。Docker 是一个容器化引擎,它被 [Uber][12]、[Spotify][13]、[ADP][14] 以及 [Paypal][15] 使用。构建容器化应用的一个主要优势是,项目的设置在 Windows、macOS、以及 Linux 上都是相同的 —— 这使我写这个教程快速又简单。如果你还没有使用过 Docker,不用担心,我们接下来将经历完整的项目配置。
-我也会使用 [Node.js][16] (使用 [Koa][17] 框架)、和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。
+我也会使用 [Node.js][16] (使用 [Koa][17] 框架)和 [Vue.js][18],用它们分别去构建我们自己的搜索 API 和前端 Web 应用程序。
-### 1 - ELASTICSEARCH 是什么?
+### 1 - Elasticsearch 是什么?
-全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库在基本的 `CONTAINS` 或 `LIKE` SQL 查询上有局限性,它仅提供大多数基本的字符串匹配功能。
+全文搜索在现代应用程序中是一个有大量需求的特性。搜索也可能是最难的一项特性 —— 许多流行的网站的搜索功能都不合格,要么返回结果太慢,要么找不到精确的结果。通常,这种情况是被底层的数据库所局限:大多数标准的关系型数据库局限于基本的 `CONTAINS` 或 `LIKE` SQL 查询上,它仅提供最基本的字符串匹配功能。
我们的搜索应用程序将具备:
1. **快速** - 搜索结果将快速返回,为用户提供一个良好的体验。
-
-2. **灵活** - 我们希望能够去修改搜索如何执行,这是为了便于在不同的数据库和用户场景下进行优化。
-
-3. **容错** - 如果搜索内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。
-
-4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本域)。
+2. **灵活** - 我们希望能够去修改搜索如何执行的方式,这是为了便于在不同的数据库和用户场景下进行优化。
+3. **容错** - 如果所搜索的内容有拼写错误,我们将仍然会返回相关的结果,而这个结果可能正是用户希望去搜索的结果。
+4. **全文** - 我们不想限制我们的搜索只能与指定的关键字或者标签相匹配 —— 我们希望它可以搜索在我们的数据存储中的任何东西(包括大的文本字段)。

-为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的用户数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。
+为了构建一个功能强大的搜索功能,通常最理想的方法是使用一个为全文搜索任务优化过的数据存储。在这里我们使用 [Elasticsearch][19],Elasticsearch 是一个开源的内存中的数据存储,它是用 Java 写的,最初是在 [Apache Lucene][20] 库上构建的。
这里有一些来自 [Elastic 官方网站][21] 上的 Elasticsearch 真实使用案例。
* Wikipedia 使用 Elasticsearch 去提供带高亮搜索片断的全文搜索功能,并且提供按类型搜索和 “did-you-mean” 建议。
-
-* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供大家对新文章的实时的反馈。
-
+* Guardian 使用 Elasticsearch 把社交网络数据和访客日志相结合,为编辑去提供新文章的公众意见的实时反馈。
* Stack Overflow 将全文搜索和地理查询相结合,并使用 “类似” 的方法去找到相关的查询和回答。
-
* GitHub 使用 Elasticsearch 对 1300 亿行代码进行查询。
### 与 “普通的” 数据库相比,Elasticsearch 有什么不一样的地方?
-Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用 _反转索引_ 。
+Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它使用反转索引。
-“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B树][22])中排序索引,在优化过的查询中,数据库能够达到接近线速的时间(比如,“使用 ID=5 查找行)。
+“索引” 是数据库中的一种数据结构,它能够以超快的速度进行数据查询和检索操作。数据库通过存储与表中行相关联的字段来生成索引。在一种可搜索的数据结构(一般是 [B 树][22])中排序索引,在优化过的查询中,数据库能够达到接近线性的时间(比如,“使用 ID=5 查找行”)。

@@ -63,41 +59,38 @@ Elasticsearch 之所以能够提供快速灵活的全文搜索,秘密在于它

-这种反转索引数据结构可以使我们非常快地查询到,所有出现 ”football" 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。
+这种反转索引数据结构可以使我们非常快地查询到,所有出现 “football” 的文档。通过使用大量优化过的内存中的反转索引,Elasticsearch 可以让我们在存储的数据上,执行一些非常强大的和自定义的全文搜索。
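+
+作为一个预览,下面是一个带模糊匹配的 `match` 查询的大致样子(这里假设索引名为 `library`、正文字段为 `text`,查询内容只是示意,具体请以后文的设置为准),即使把 “football” 拼成 “footbal” 也能命中:
+
+```
+$ curl -X POST 'localhost:9200/library/_search' \
+  -H 'Content-Type: application/json' \
+  -d '{ "query": { "match": { "text": { "query": "footbal", "fuzziness": "AUTO" } } } }'
+```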
### 2 - 项目设置
-### 2.0 - Docker
+#### 2.0 - Docker
我们在这个项目上使用 [Docker][23] 管理环境和依赖。Docker 是个容器引擎,它允许应用程序运行在一个独立的环境中,不会受到来自主机操作系统和本地开发环境的影响。现在,许多公司将它们的大规模 Web 应用程序主要运行在容器架构上。这样将提升灵活性和容器化应用程序组件的可组合性。

-对我们来说,使用 Docker 的优势是,它对本教程非常友好,它的本地环境设置量最小,并且跨 Windows、macOS、和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系,而不是按安装说明分别去安装 Node.js、Elasticsearch、和 Nginx,然后,就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且,因为每个应用程序组件都运行在它自己的独立容器中,它们受本地机器上的其它 “垃圾” 干扰的可能性非常小,因此,在调试问题时,像 "But it works on my machine!" 这类的问题将非常少。
+对本教程来说，使用 Docker 的优势是：本地环境的设置量最小，并且跨 Windows、macOS 和 Linux 系统的一致性很好。我们只需要在 Docker 配置文件中定义这些依赖关系，而不是按各自的安装说明分别去安装 Node.js、Elasticsearch 和 Nginx，然后就可以使用这个配置文件在任何其它地方运行我们的应用程序。而且，因为每个应用程序组件都运行在它自己的独立容器中，它们受本地机器上其它 “垃圾” 干扰的可能性非常小，因此在调试问题时，像 “它在我这里可以工作！” 这类的问题将非常少。
-### 2.1 - 安装 Docker & Docker-Compose
+#### 2.1 - 安装 Docker & Docker-Compose
这个项目只依赖 [Docker][24] 和 [docker-compose][25],docker-compose 是 Docker 官方支持的一个工具,它用来将定义的多个容器配置 _组装_ 成单一的应用程序栈。
-安装 Docker - [https://docs.docker.com/engine/installation/][26]
-安装 Docker Compose - [https://docs.docker.com/compose/install/][27]
+- 安装 Docker - [https://docs.docker.com/engine/installation/][26]
+- 安装 Docker Compose - [https://docs.docker.com/compose/install/][27]
-### 2.2 - 设置项目主目录
+#### 2.2 - 设置项目主目录
为项目创建一个主目录（名为 `guttenberg_search`）。我们将在主目录下的以下两个子目录中进行开发。
* `/public` - 保存前端 Vue.js Web 应用程序。
-
* `/server` - 服务器端 Node.js 源代码。
-### 2.3 - 添加 Docker-Compose 配置
+#### 2.3 - 添加 Docker-Compose 配置
接下来,我们将创建一个 `docker-compose.yml` 文件来定义我们的应用程序栈中的每个容器。
1. `gs-api` - 后端应用程序逻辑使用的 Node.js 容器
-
2. `gs-frontend` - 前端 Web 应用程序使用的 Nginx 容器。
-
3. `gs-search` - 保存和搜索数据的 Elasticsearch 容器。
```
@@ -140,12 +133,11 @@ services:
volumes: # Define separate volume for Elasticsearch data
esdata:
-
```
-这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node、和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch instance、和前端 Web 应用程序。
+这个文件定义了我们全部的应用程序栈 —— 不需要在你的本地系统上安装 Elasticsearch、Node 和 Nginx。每个容器都将端口转发到宿主机系统(`localhost`)上,以便于我们在宿主机上去访问和调试 Node API、Elasticsearch 实例和前端 Web 应用程序。
-### 2.4 - 添加 Dockerfile
+#### 2.4 - 添加 Dockerfile
对于 Nginx 和 Elasticsearch,我们使用了官方预构建的镜像,而 Node.js 应用程序需要我们自己去构建。
@@ -169,7 +161,6 @@ COPY . .
# Start app
CMD [ "npm", "start" ]
-
```
这个 Docker 配置扩展了官方的 Node.js 镜像，拷贝我们的应用程序源代码，并在容器内安装 NPM 依赖。
@@ -181,12 +172,11 @@ node_modules/
npm-debug.log
books/
public/
-
```
-> 请注意:我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器中运行 `npm install` 来构建这个进程。从宿主机系统拷贝 `node_modules` 可能会引起错误,因为一些包需要在某些操作系统上专门构建。比如说,在 macOS 上安装 `bcrypt` 包,然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作,因为 `bcyrpt` 需要为每个操作系统构建一个特定的二进制文件。
+> 请注意：我们之所以不拷贝 `node_modules` 目录到我们的容器中 —— 是因为我们要在容器构建过程里面运行 `npm install`。从宿主机系统拷贝 `node_modules` 到容器里面可能会引起错误，因为一些包需要为某些操作系统专门构建。比如说，在 macOS 上安装 `bcrypt` 包，然后尝试将这个模块直接拷贝到一个 Ubuntu 容器上将不能工作，因为 `bcrypt` 需要为每个操作系统构建一个特定的二进制文件。
-### 2.5 - 添加基本文件
+#### 2.5 - 添加基本文件
为了测试我们的配置,我们需要添加一些占位符文件到应用程序目录中。
@@ -194,7 +184,6 @@ public/
```
Hello World From The Frontend Container
-
```
接下来,在 `server/app.js` 中添加 Node.js 占位符文件。
@@ -213,10 +202,9 @@ app.listen(port, err => {
if (err) console.error(err)
console.log(`App Listening on Port ${port}`)
})
-
```
-最后,添加我们的 `package.json` 节点应用配置。
+最后,添加我们的 `package.json` Node 应用配置。
```
{
@@ -244,14 +232,13 @@ app.listen(port, err => {
"koa-router": "7.2.1"
}
}
-
```
这个文件定义了应用程序启动命令和 Node.js 包依赖。
-> 注意:不要运行 `npm install` —— 当它构建时,这个依赖将在容器内安装。
+> 注意:不要运行 `npm install` —— 当它构建时,依赖会在容器内安装。
-### 2.6 - 测试它的输出
+#### 2.6 - 测试它的输出
现在一切就绪，我们来测试应用程序的每个组件的输出。从应用程序的主目录运行 `docker-compose build`，它将构建我们的 Node.js 应用程序容器。
@@ -261,13 +248,13 @@ app.listen(port, err => {

-> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像,接着再去运行,启动应用程序非常快,因为所需要的镜像已经下载完成了。
+> 这一步可能需要几分钟时间,因为 Docker 要为每个容器去下载基础镜像。以后再次运行,启动应用程序会非常快,因为所需要的镜像已经下载完成了。
-在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World" Web 页面。
+在你的浏览器中尝试访问 `localhost:8080` —— 你将看到简单的 “Hello World” Web 页面。

-访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 "Hello World" 信息。
+访问 `localhost:3000` 去验证我们的 Node 服务器,它将返回 “Hello World” 信息。

@@ -289,16 +276,15 @@ app.listen(port, err => {
},
"tagline" : "You Know, for Search"
}
-
```
-如果三个 URLs 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。
+如果三个 URL 都显示成功,祝贺你!整个容器栈已经正常运行了,接下来我们进入最有趣的部分。
-### 3 - 连接到 ELASTICSEARCH
+### 3 - 连接到 Elasticsearch
我们要做的第一件事情是,让我们的应用程序连接到我们本地的 Elasticsearch 实例上。
-### 3.0 - 添加 ES 连接模块
+#### 3.0 - 添加 ES 连接模块
在新文件 `server/connection.js` 中添加如下的 Elasticsearch 初始化代码。
@@ -328,7 +314,6 @@ async function checkConnection () {
}
checkConnection()
-
```
由于我们做了改动，需要运行 `docker-compose build` 来重新构建我们的 Node 应用程序。接下来，运行 `docker-compose up -d` 去启动应用程序栈，它将以守护进程的方式在后台运行。
@@ -351,12 +336,11 @@ checkConnection()
number_of_in_flight_fetch: 0,
task_max_waiting_in_queue_millis: 0,
active_shards_percent_as_number: 50 }
-
```
继续之前，我们先删除最下面的 `checkConnection()` 调用，因为在我们最终的应用程序中，该调用将在连接模块之外进行。
-### 3.1 - 添加函数去重置索引
+#### 3.1 - 添加函数去重置索引
在 `server/connection.js` 中的 `checkConnection` 下面添加如下的函数,以便于重置 Elasticsearch 索引。
@@ -370,12 +354,11 @@ async function resetIndex (index) {
await client.indices.create({ index })
await putBookMapping()
}
-
```
-### 3.2 - 添加图书模式
+#### 3.2 - 添加图书模式
-接下来,我们将为图书的数据模式添加一个 "mapping"。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。
+接下来,我们将为图书的数据模式添加一个 “映射”。在 `server/connection.js` 中的 `resetIndex` 函数下面添加如下的函数。
```
/** Add book section schema mapping to ES */
@@ -389,12 +372,11 @@ async function putBookMapping () {
return client.indices.putMapping({ index, type, body: { properties: schema } })
}
-
```
-这是为 `book` 索引定义了一个 mapping。一个 Elasticsearch 的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的 `collection`。我们通过添加 mapping 来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的,因此,从技术角度来看,我们是不需要添加 mapping 的,但是,这样做,我们可以更好地控制如何处理数据。
+这里为 `book` 索引定义了一个映射。Elasticsearch 中的 `index` 大概类似于 SQL 的 `table` 或者 MongoDB 的 `collection`。我们通过添加映射来为存储的文档指定每个字段和它的数据类型。Elasticsearch 是无模式的，因此从技术角度来看，我们并不需要添加映射，但是这样做可以让我们更好地控制数据的处理方式。
-比如,我们给 "title" 和 ”author" 字段分配 `keyword` 类型,给 “text" 字段分配 `text` 类型。之所以这样做的原因是,搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候,搜索引擎将在 `text` 字段中搜索可能的匹配项,而对于 `keyword` 类型字段,将对它们进行全文匹配。这看上去差别很小,但是它们对在不同的搜索上的速度和行为的影响非常大。
+比如，我们给 `title` 和 `author` 字段分配 `keyword` 类型，给 `text` 字段分配 `text` 类型。之所以这样做，是因为搜索引擎可以区别处理这些字符串字段 —— 在搜索的时候，搜索引擎将在 `text` 字段中搜索可能的匹配项，而对于 `keyword` 类型字段，将对它们进行完整字符串的精确匹配。这看上去差别很小，但是它们对不同搜索的速度和行为的影响非常大。
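与上面这段描述对应的 `schema` 对象大致如下（根据文中描述整理的示意，实际代码中可能还包含段落位置等其它字段）：

```
// 示意：与上文描述对应的最简字段映射（实际代码可能还有其它字段）
const schema = {
  title: { type: 'keyword' },  // 按完整字符串精确匹配
  author: { type: 'keyword' }, // 同上
  text: { type: 'text' }       // 分词后用于全文搜索
}
```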
在文件的底部,导出对外发布的属性和函数,这样我们的应用程序中的其它模块就可以访问它们了。
@@ -402,31 +384,29 @@ async function putBookMapping () {
module.exports = {
client, index, type, checkConnection, resetIndex
}
-
```
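导出之后，应用中的其它模块就可以像下面这样使用这个连接模块了（示意代码，假设调用方与 `connection.js` 位于同一目录）。

```
// 示意：在其它模块中复用连接模块（假设与 connection.js 在同一目录）
const { client, index, type, checkConnection } = require('./connection')

async function main () {
  await checkConnection() // 由调用方决定何时检查连接
  const { count } = await client.count({ index, type })
  console.log(`Index "${index}" holds ${count} documents`)
}

main().catch(console.error)
```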
### 4 - 加载原始数据
-我们将使用来自 [Gutenberg 项目][28] 的数据 —— 它致力于为公共提供免费的线上电子书。在这个项目中,我们将使用 100 本经典图书来充实我们的图书馆,包括_《The Adventures of Sherlock Holmes》_、_《Treasure Island》_、_《The Count of Monte Cristo》_、_《Around the World in 80 Days》_、_《Romeo and Juliet》_ 、和_《The Odyssey》_。
+我们将使用来自 [古登堡项目][28] 的数据 —— 它致力于为公众提供免费的线上电子书。在这个项目中，我们将使用 100 本经典图书来充实我们的图书馆，包括《福尔摩斯探案集》、《金银岛》、《基督山复仇记》、《环游世界八十天》、《罗密欧与朱丽叶》和《奥德赛》。

-### 4.1 - 下载图书文件
+#### 4.1 - 下载图书文件
我将这 100 本书打包成一个文件,你可以从这里下载它 ——
[https://cdn.patricktriest.com/data/books.zip][29]
将这个文件解压到你的项目的 `books/` 目录中。
-你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 ["The Unarchiver"][31])。
+你可以使用以下的命令来完成(需要在命令行下使用 [wget][30] 和 [The Unarchiver][31])。
```
wget https://cdn.patricktriest.com/data/books.zip
unar books.zip
-
```
-### 4.2 - 预览一本书
+#### 4.2 - 预览一本书
尝试打开其中一本书的文件，假设打开的是 `219-0.txt`。你将注意到它开头是一个开放访问许可协议，接下来是一些标识这本书的书名、作者、发行日期、语言和字符编码的行。
@@ -441,7 +421,6 @@ Last Updated: September 7, 2016
Language: English
Character set encoding: UTF-8
-
```
在 `*** START OF THIS PROJECT GUTENBERG EBOOK HEART OF DARKNESS ***` 这些行后面,是这本书的正式内容。
@@ -450,7 +429,7 @@ Character set encoding: UTF-8
下一步，我们将使用程序从文件头部解析书的元数据，并提取 `*** START OF` 和 `*** END OF` 之间的正文内容。
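这一步的思路可以用下面几行代码来示意（假想的简化版本，实际使用的正则以后文 `parseBookFile` 中的实现为准）。

```
// 假想的简化示意：解析古登堡电子书的元数据与正文
const fs = require('fs')
const text = fs.readFileSync('books/219-0.txt', 'utf8')

// 从文件头部的 "Title:" 和 "Author:" 行提取元数据
const title = text.match(/^Title:\s(.+)$/m)[1]
const author = text.match(/^Author:\s(.+)$/m)[1]

// 截取 "*** START OF" 与 "*** END OF" 标记之间的正文
const start = text.indexOf('*** START OF')
const end = text.indexOf('*** END OF')
const body = text.slice(text.indexOf('\n', start) + 1, end)

console.log(title, author, `${body.length} chars`)
```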
-### 4.3 - 读取数据目录
+#### 4.3 - 读取数据目录
我们将写一个脚本来读取每本书的内容，并将这些数据添加到 Elasticsearch。我们将定义一个新的 JavaScript 文件 `server/load_data.js` 来执行这些操作。
@@ -486,7 +465,6 @@ async function readAndInsertBooks () {
}
readAndInsertBooks()
-
```
我们将使用一个快捷命令来重新构建我们的 Node.js 应用程序，并更新运行中的容器。
@@ -501,7 +479,7 @@ readAndInsertBooks()

-### 4.4 - 读取数据文件
+#### 4.4 - 读取数据文件
接下来,我们读取元数据和每本书的内容。
@@ -536,32 +514,26 @@ function parseBookFile (filePath) {
console.log(`Parsed ${paragraphs.length} Paragraphs\n`)
return { title, author, paragraphs }
}
-
```
这个函数执行几个重要的任务。
1. 从文件系统中读取书的文本。
-
2. 使用正则表达式(关于正则表达式,请参阅 [这篇文章][1] )解析书名和作者。
-
-3. 通过匹配 ”Guttenberg 项目“ 头部和尾部,识别书的正文内容。
-
+3. 通过匹配 “古登堡项目” 的头部和尾部,识别书的正文内容。
4. 提取书的内容文本。
-
5. 将正文按段落分割成数组。
-
6. 清理文本并删除空白行（第 5、6 步的做法可参考列表后的示意代码）。
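第 5、6 步的思路大致如下（示意代码，具体实现以原文仓库为准）。

```
// 示意：把正文按空行切分成段落数组，并清理空白（对应第 5、6 步）
function toParagraphs (body) {
  return body
    .split('\n\n')                            // 以空行作为段落分隔符
    .map(p => p.replace(/\s+/g, ' ').trim())  // 合并换行和多余空白
    .filter(p => p.length > 0)                // 删除空白段落
}
```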
-它的返回值,我们将构建一个对象,这个对象包含书名、作者、以及书中各段落的数据。
+作为返回值，我们将构建一个对象，这个对象包含书名、作者以及书中各段落的数组。
-再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到如下的输出,在输出的末尾有三个额外的行。
+再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"`,你将看到输出同之前一样,在输出的末尾有三个额外的行。

成功!我们的脚本从文本文件中成功解析出了书名和作者。脚本再次以错误结束,因为到现在为止,我们还没有定义辅助函数。
-### 4.5 - 在 ES 中索引数据文件
+#### 4.5 - 在 ES 中索引数据文件
最后一步，我们将批量上传每本书的段落数组到 Elasticsearch 索引中。
@@ -596,12 +568,11 @@ async function insertBookData (title, author, paragraphs) {
await esConnection.client.bulk({ body: bulkOps })
console.log(`Indexed Paragraphs ${paragraphs.length - (bulkOps.length / 2)} - ${paragraphs.length}\n\n\n`)
}
-
```
-这个函数将使用书名、作者、和附加元数据的段落位置来索引书中的每个段落。我们通过批量操作来插入段落,它比逐个段落插入要快的多。
+这个函数将索引书中的每个段落，并附上书名、作者以及段落位置的元数据。我们通过批量操作来插入段落，这比逐个段落插入要快得多。
-> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的、内存稍有点小(1.7 GB)的服务器 `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。
+> 我们分批索引段落,而不是一次性插入全部,是为运行这个应用程序的内存稍有点小(1.7 GB)的服务器 `search.patricktriest.com` 上做的一个重要优化。如果你的机器内存还行(4 GB 以上),你或许不用分批上传。
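作为参考，bulk 请求体的结构大致如下：它是由 “操作描述行” 和 “文档行” 交替组成的数组（示意代码，字段名可能与实际实现略有出入）。

```
// 示意：bulk 请求体由 “操作描述 + 文档内容” 成对组成（字段名仅作演示）
const esConnection = require('./connection')

async function bulkInsert (title, author, paragraphs) {
  const { client, index, type } = esConnection
  const body = []
  paragraphs.forEach((text, i) => {
    body.push({ index: { _index: index, _type: type } }) // 操作描述
    body.push({ title, author, location: i, text })      // 文档本身
  })
  return client.bulk({ body }) // 一次请求写入全部段落（内存紧张时应分批）
}
```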
再次运行 `docker-compose up -d --build` 和 `docker exec gs-api "node" "server/load_data.js"` —— 现在你将看到前面解析的 100 本书的完整输出，并将其插入到了 Elasticsearch。这可能需要几分钟时间，甚至更长。
@@ -611,13 +582,13 @@ async function insertBookData (title, author, paragraphs) {
现在，Elasticsearch 中已经有了 100 本书（大约 230000 个段落），我们来尝试一些搜索查询。
-### 5.0 - 简单的 HTTP 查询
+#### 5.0 - 简单的 HTTP 查询
首先,我们使用 Elasticsearch 的 HTTP API 对它进行直接查询。
在你的浏览器上访问这个 URL - `http://localhost:9200/library/_search?q=text:Java&pretty`
-在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 ”Java" 这个词。
+在这里,我们将执行一个极简的全文搜索,在我们的图书馆的书中查找 “Java” 这个词。
你将看到类似于下面的一个 JSON 格式的响应。
@@ -663,12 +634,11 @@ async function insertBookData (title, author, paragraphs) {
]
}
}
-
```
用 Elasticsearch 的 HTTP 接口可以测试我们插入的数据是否成功，但是如果直接将这个 API 暴露给 Web 应用程序将有极大的风险。这个 API 将会暴露管理功能（比如直接添加和删除文档），最理想的情况是完全不对外暴露它。而是写一个简单的 Node.js API 去接收来自客户端的请求，然后（在我们的本地网络中）生成一个正确的查询发送给 Elasticsearch。
-### 5.1 - 查询脚本
+#### 5.1 - 查询脚本
我们现在尝试从我们写的 Node.js 脚本中查询 Elasticsearch。
@@ -694,7 +664,6 @@ module.exports = {
return client.search({ index, type, body })
}
}
-
```
我们的搜索模块定义了一个简单的 `search` 函数，它对输入的词执行一个 `match` 查询。
@@ -702,13 +671,9 @@ module.exports = {
下面是对查询中各个字段的分解说明：
* `from` - 允许我们对结果分页。默认每个查询返回 10 个结果，因此指定 `from: 10` 将允许我们取回第 10 到 20 条结果。
-
* `query` - 这里我们指定要查询的词。
-
-* `operator` - 我们可以修改搜索行为;在本案例中,我们使用 "and" 操作去对查询中包含所有 tokens(要查询的词)的结果来确定优先顺序。
-
+* `operator` - 我们可以修改搜索行为；在本案例中，我们使用 `and` 操作符来优先匹配包含查询中所有词元（token，即要查询的词）的结果。
* `fuzziness` - 对拼写错误的容错调整，设置为 `auto` 时默认相当于 `fuzziness: 2`。模糊值越高，允许的校正就越多。比如，`fuzziness: 1` 将允许以 `Patricc` 为关键字的查询返回与 `Patrick` 匹配的结果。
-
* `highlights` - 为结果返回一个额外的字段，这个字段包含 HTML，用来高亮显示匹配到的精确文本片段和关键词。
你可以去浏览 [Elastic Full-Text Query DSL][32],学习如何随意调整这些参数,以进一步自定义搜索查询。
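把上面这些字段组合起来，一个与文中描述一致的查询体大致如下（示意代码，具体以文中的搜索模块实现为准）。

```
// 示意：与上文各字段对应的 match 查询体
function buildQuery (term, offset = 0) {
  return {
    from: offset, // 分页偏移量
    query: {
      match: {
        text: {
          query: term,      // 要搜索的词
          operator: 'and',  // 优先返回包含全部词元的结果
          fuzziness: 'auto' // 自动容忍拼写错误
        }
      }
    },
    highlight: { fields: { text: {} } } // 附带高亮片段
  }
}
```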
@@ -717,7 +682,7 @@ module.exports = {
为了能够从前端应用程序中访问我们的搜索功能,我们来写一个快速的 HTTP API。
-### 6.0 - API 服务器
+#### 6.0 - API 服务器
用以下的内容替换现有的 `server/app.js` 文件。
@@ -761,7 +726,6 @@ app
if (err) throw err
console.log(`App Listening on Port ${port}`)
})
-
```
这些代码为 [Koa.js][33] Node API 服务器导入了依赖，并设置了简单的日志记录和错误处理。
@@ -782,10 +746,9 @@ router.get('/search', async (ctx, next) => {
ctx.body = await search.queryTerm(term, offset)
}
)
-
```
-使用 `docker-compose up -d --build` 重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Jave" 的内容。
+使用 `docker-compose up -d --build` 重启动应用程序。之后在你的浏览器中尝试调用这个搜索端点。比如,`http://localhost:3000/search?term=java` 这个请求将搜索整个图书馆中提到 “Java” 的内容。
结果与前面直接调用 Elasticsearch HTTP 接口的结果非常类似。
@@ -835,7 +798,6 @@ router.get('/search', async (ctx, next) => {
]
}
}
-
```
#### 6.2 - 输入校验
@@ -864,7 +826,6 @@ router.get('/search',
ctx.body = await search.queryTerm(term, offset)
}
)
-
```
现在重启服务器，如果你发送一个没有搜索关键字的请求（`http://localhost:3000/search`），你将收到一个带相关消息的 HTTP 400 错误，比如 `Invalid URL Query - child "term" fails because ["term" is required]`。
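从这条报错信息来看，校验使用的是 joi 风格的规则。下面是一个用 joi 描述这些规则的示意（假设使用 2018 年前后的旧版 joi API；中间件的接入方式以原文代码为准）。

```
// 示意：用 joi 描述 /search 的查询参数（假设使用旧版 joi 的 API）
const joi = require('joi')

const searchQuerySchema = joi.object({
  term: joi.string().max(60).required(),           // 搜索关键字，必填
  offset: joi.number().integer().min(0).default(0) // 分页偏移量，可选
})

// 缺少 term 时，error 中会带有与上面类似的提示信息
const { error, value } = joi.validate({ offset: 0 }, searchQuerySchema)
console.log(error ? error.message : value)
```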
@@ -875,7 +836,7 @@ router.get('/search',
现在我们的 `/search` 端点已经就绪,我们来连接到一个简单的 Web 应用程序来测试这个 API。
-### 7.0 - Vue.js 应用程序
+#### 7.0 - Vue.js 应用程序
我们将使用 Vue.js 去协调我们的前端。
@@ -934,14 +895,13 @@ const vm = new Vue ({
}
}
})
-
```
-这个应用程序非常简单 —— 我们只定义了一些共享的数据属性,以及添加了检索和分页搜索结果的方法。为防止每按键一次都调用 API,搜索输入有一个 100 毫秒的除颤功能。
+这个应用程序非常简单 —— 我们只定义了一些共享的数据属性，以及添加了检索和分页搜索结果的方法。为防止每次按键都调用 API，搜索输入有一个 100 毫秒的防抖（debounce）处理。
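“防抖” 的通用写法大致如下（通用示意，并非项目源码）：把高频触发的函数包一层，只有停止输入 100 毫秒后才真正执行。

```
// 通用示意：100 毫秒防抖，停止输入一段时间后才真正触发搜索
function debounce (fn, delay = 100) {
  let timer = null
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), delay)
  }
}

// 用法示意：把搜索处理函数包一层，再绑定到输入框的 input 事件上
const onSearchInput = debounce(term => console.log('searching:', term), 100)
```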
解释 Vue.js 是如何工作的已经超出了本教程的范围，但是如果你使用过 Angular 或者 React，这些代码应该不会让你觉得太陌生。如果你完全不熟悉 Vue，想快速了解它的功能，我建议你从官方的快速指南入手 —— [https://vuejs.org/v2/guide/][36]
-### 7.1 - HTML
+#### 7.1 - HTML
使用以下的内容替换 `/public/index.html` 文件中的占位符,以便于加载我们的 Vue.js 应用程序和设计一个基本的搜索界面。
@@ -1004,10 +964,9 @@ const vm = new Vue ({