From e6fb3720b55e6b1d21bcdf8e5440a49d9e80d2f6 Mon Sep 17 00:00:00 2001
From: Hank <280630620@qq.com>
Date: Wed, 17 Apr 2019 10:28:51 +0800
Subject: [PATCH 0001/1154] hankchow translated
---
...2 Using Square Brackets in Bash- Part 2.md | 168 ------------------
...2 Using Square Brackets in Bash- Part 2.md | 158 ++++++++++++++++
2 files changed, 158 insertions(+), 168 deletions(-)
delete mode 100644 sources/tech/20190402 Using Square Brackets in Bash- Part 2.md
create mode 100644 translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
diff --git a/sources/tech/20190402 Using Square Brackets in Bash- Part 2.md b/sources/tech/20190402 Using Square Brackets in Bash- Part 2.md
deleted file mode 100644
index cfab28025a..0000000000
--- a/sources/tech/20190402 Using Square Brackets in Bash- Part 2.md
+++ /dev/null
@@ -1,168 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (HankChow)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Using Square Brackets in Bash: Part 2)
-[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
-[#]: author: (Paul Brown https://www.linux.com/users/bro66)
-
-Using Square Brackets in Bash: Part 2
-======
-
-![square brackets][1]
-
-We continue our tour of square brackets in Bash with a look at how they can act as a command.
-
-[Creative Commons Zero][2]
-
-Welcome back to our mini-series on square brackets. In the [previous article][3], we looked at various ways square brackets are used at the command line, including globbing. If you've not read that article, you might want to start there.
-
-Square brackets can also be used as a command. Yep, for example, in:
-
-```
-[ "a" = "a" ]
-```
-
-which is, by the way, a valid command that you can execute. `[ ... ]` is itself a command. Notice that there are spaces between the opening bracket `[` and the parameters `"a" = "a"`, and then between the parameters and the closing bracket `]`. That is precisely because the brackets here act as a command, and you are separating the command from its parameters.
-
-You would read the above line as " _test whether the string "a" is the same as string "a"_ ". If the premise is true, the `[ ... ]` command finishes with an exit status of 0. If not, the exit status is 1. [We talked about exit statuses in a previous article][4], and there you saw that you could access the value by checking the `$?` variable.
-
-Try it out:
-
-```
-[ "a" = "a" ]
-echo $?
-```
-
-And now try:
-
-```
-[ "a" = "b" ]
-echo $?
-```
-
-In the first case, you will get a 0 (the premise is true), and running the second will give you a 1 (the premise is false). Remember that, in Bash, an exit status of 0 means the command exited normally with no errors, and that makes it `true`. If there were any errors, the exit value would be non-zero (`false`). The `[ ... ]` command follows the same rules so that it is consistent with the rest of the commands.
-
-The `[ ... ]` command comes in handy in `if ... then` constructs and also in loops that require a certain condition to be met (or not) before exiting, like the `while` and `until` loops.
-
-The logical operators for testing stuff are pretty straightforward:
-
-```
-[ STRING1 = STRING2 ] => checks to see if the strings are equal
-[ STRING1 != STRING2 ] => checks to see if the strings are not equal
-[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
-[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
-[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
-[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
-[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
-[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
-etc...
-```
-
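-For instance, a minimal sketch of how these tests plug into the `if ... then` and `while` constructs mentioned above (the `$answer` variable and the loop bound are purely illustrative):
-
-```
-answer="yes"
-if [ "$answer" = "yes" ]; then
-    echo "Proceeding..."
-fi
-
-# Loop until the counter reaches 3
-count=0
-while [ "$count" -lt 3 ]; do
-    echo "count is $count"
-    count=$((count + 1))
-done
-```
-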
-You can also test for some very shell-specific things. The `-f` option, for example, tests whether a file exists or not:
-
-```
-for i in {000..099}; \
- do \
- if [ -f file$i ]; \
- then \
- echo file$i exists; \
- else \
- touch file$i; \
- echo I made file$i; \
- fi; \
-done
-```
-
-If you run this in your test directory, line 3 will test whether each file in your long list of files exists. If a file does exist, the loop will just print a message; but if it doesn't exist, the loop will create it, to make sure the whole set is complete.
-
-You could write the loop more compactly like this:
-
-```
-for i in {000..099};\
-do\
- if [ ! -f file$i ];\
- then\
- touch file$i;\
- echo I made file$i;\
- fi;\
-done
-```
-
-The `!` modifier in the condition inverts the premise, thus line 3 would translate to " _if the file `file$i` does not exist_ ".
-
-Try it: delete some random files from the bunch you have in your test directory. Then run the loop shown above and watch how it rebuilds the list.
-
-There are plenty of other tests you can try, including `-d`, which tests to see if the name belongs to a directory, and `-h`, which tests to see if it is a symbolic link. You can also test whether a file belongs to a certain group of users (`-G`), whether one file is older than another (`-ot`), or even whether a file contains something or is, on the other hand, empty.
-
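-For instance, a quick sketch combining two of these tests (the directory name is purely illustrative):
-
-```
-# True only if /tmp/testdir is a directory and not a symbolic link
-if [ -d /tmp/testdir ] && [ ! -h /tmp/testdir ]; then
-    echo "/tmp/testdir is a real directory"
-fi
-```
-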
-Try the following for example. Add some content to some of your files:
-
-```
-echo "Hello World" >> file023
-echo "This is a message" >> file065
-echo "To humanity" >> file010
-```
-
-and then run this:
-
-```
-for i in {000..099};\
-do\
- if [ ! -s file$i ];\
- then\
- rm file$i;\
- echo I removed file$i;\
- fi;\
-done
-```
-
-And you'll remove all the files that are empty, leaving only the ones you added content to.
-
-To find out more, check the manual page for the `test` command (a synonym for `[ ... ]`) with `man test`.
-
-You may also see double brackets (`[[ ... ]]`) sometimes used in a similar way to single brackets. This is because double brackets give you a wider range of comparison operators. You can use `==`, for example, to compare a string to a pattern instead of just another string; or `<` and `>` to test whether a string would come before or after another in a dictionary.
-
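-Here is a minimal sketch of both of those comparisons (the strings are arbitrary examples):
-
-```
-string="hello world"
-# Pattern match: true because $string starts with "hello"
-[[ $string == hello* ]] && echo "matches the pattern"
-# Lexicographic comparison: true because "apple" sorts before "banana"
-[[ "apple" < "banana" ]] && echo "apple comes first"
-```
-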
-To find out more about extended operators [check out this full list of Bash expressions][5].
-
-### Next Time
-
-In an upcoming article, we'll continue our tour and take a look at the role of parentheses `()` in Linux command lines. See you then!
-
-_Read more:_
-
- 1. [The Meaning of Dot (`.`)][6]
- 2. [Understanding Angle Brackets in Bash (`<...>`)][7]
- 3. [More About Angle Brackets in Bash (`<` and `>`)][8]
- 4. [And, Ampersand, and & in Linux (`&`)][9]
- 5. [Ampersands and File Descriptors in Bash (`&`)][10]
- 6. [Logical & in Bash (`&`)][4]
- 7. [All about {Curly Braces} in Bash (`{}`)][11]
- 8. [Using Square Brackets in Bash: Part 1][3]
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
-
-作者:[Paul Brown][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/users/bro66
-[b]: https://github.com/lujun9972
-[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy (square brackets)
-[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
-[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
-[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
-[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
-[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
-[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
-[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
-[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
-[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
-[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
diff --git a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
new file mode 100644
index 0000000000..70652f894c
--- /dev/null
+++ b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
@@ -0,0 +1,158 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using Square Brackets in Bash: Part 2)
+[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+在 Bash 中使用方括号(二)
+======
+
+![square brackets][1]
+
+> 我们继续来看方括号的用法,它们甚至还可以在 Bash 当中作为一个命令使用。
+
+[Creative Commons Zero][2]
+
+欢迎回到我们的方括号专题。在[前一篇文章][3]当中,我们介绍了方括号在命令行中可以用于通配操作,如果你已经读过前一篇文章,就可以从这里继续了。
+
+方括号还可以以一个命令的形式使用,就像这样:
+
+```
+[ "a" = "a" ]
+```
+
+上面这种 `[ ... ]` 的形式就可以看成是一个可执行的命令。要注意,方括号内部的内容 `"a" = "a"` 和方括号 `[`、`]` 之间是有空格隔开的。因为这里的方括号被视作一个命令,因此要用空格将命令和它的参数隔开。
+
+上面这个命令的含义是“判断字符串 `"a"` 和字符串 `"a"` 是否相同”,如果判断结果为真,那么 `[ ... ]` 就会以状态码 0 退出,否则以状态码 1 退出。在之前的文章中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
+
+分别执行
+
+```
+[ "a" = "a" ]
+echo $?
+```
+
+以及
+
+```
+[ "a" = "b" ]
+echo $?
+```
+
+这两段命令中,前者会输出 0(判断结果为真),后者则会输出 1(判断结果为假)。在 Bash 当中,如果一个命令的状态码是 0,表示这个命令正常执行完成并退出,而且其中没有出现错误,对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]` 也同样遵循这样的规则。
+
+因此,`[ ... ]` 很适合在 `if ... then`、`while` 或 `until` 这种在代码块结束前需要判断是否满足某个条件的结构中使用。
+
+对应使用的逻辑判断运算符也相当直观:
+
+```
+[ STRING1 = STRING2 ] => checks to see if the strings are equal
+[ STRING1 != STRING2 ] => checks to see if the strings are not equal
+[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
+[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
+[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
+[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
+[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
+[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
+etc...
+```
+
+方括号的这种用法也可以很有 shell 风格,例如通过带上 `-f` 参数可以判断某个文件是否存在:
+
+```
+for i in {000..099}; \
+ do \
+ if [ -f file$i ]; \
+ then \
+ echo file$i exists; \
+ else \
+ touch file$i; \
+ echo I made file$i; \
+ fi; \
+done
+```
+
+如果你在上一篇文章使用到的测试目录中运行以上这串命令,其中的第 3 行会判断那几十个文件当中的某个文件是否存在。如果文件存在,会输出一条提示信息;如果文件不存在,就会把对应的文件创建出来。最终,这个目录中会完整存在从 `file000` 到 `file099` 这一百个文件。
+
+上面这段命令还可以写得更加简洁:
+
+```
+for i in {000..099};\
+do\
+ if [ ! -f file$i ];\
+ then\
+ touch file$i;\
+ echo I made file$i;\
+ fi;\
+done
+```
+
+其中 `!` 运算符表示将判断结果取反,因此第 3 行的含义就是“如果文件 `file$i` 不存在”。
+
+可以尝试一下将测试目录中那几十个文件随意删除几个,然后运行上面的命令,你就可以看到它是如何把被删除的文件重新创建出来的。
+
+除了 `-f` 之外,还有很多有用的参数。`-d` 参数可以判断某个目录是否存在,`-h` 参数可以判断某个文件是不是一个符号链接。可以用 `-G` 参数判断某个文件是否属于某个用户组,用 `-ot` 参数判断某个文件的最后更新时间是否早于另一个文件,甚至还可以判断某个文件是否为空文件。
+
+运行下面的几条命令,可以向几个文件中写入一些内容:
+
+```
+echo "Hello World" >> file023
+echo "This is a message" >> file065
+echo "To humanity" >> file010
+```
+
+然后运行:
+
+```
+for i in {000..099};\
+do\
+ if [ ! -s file$i ];\
+ then\
+ rm file$i;\
+ echo I removed file$i;\
+ fi;\
+done
+```
+
+你就会发现所有空文件都被删除了,只剩下少数几个非空的文件。
+
+如果你还想了解更多别的参数,可以执行 `man test` 来查看 `test` 命令的 man 手册(`test` 是 `[ ... ]` 的命令别名)。
+
+有时候你还会看到 `[[ ... ]]` 这种双方括号的形式,使用起来和单方括号差别不大。但双方括号支持的比较运算符更加丰富:例如可以使用 `==` 来判断某个字符串是否符合某个模式,也可以使用 `<`、`>` 来判断两个字符串的出现顺序。
+
+可以在 [Bash 表达式文档][5]中了解到双方括号支持的更多运算符。
+
+### 下一集
+
+在下一篇文章中,我们会开始介绍圆括号 `()` 在 Linux 命令行中的用法,敬请关注!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
+[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
+[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
+[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
+[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
+[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
+[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
+[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
+[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
+
From 5752ba4bf47d2bd7de79391b401ac5909b1b30b9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:46:26 +0800
Subject: [PATCH 0002/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20HTTP?=
=?UTF-8?q?ie=20=E2=80=93=20A=20Modern=20Command=20Line=20HTTP=20Client=20?=
=?UTF-8?q?For=20Curl=20And=20Wget=20Alternative=20sources/tech/20190417?=
=?UTF-8?q?=20HTTPie=20-=20A=20Modern=20Command=20Line=20HTTP=20Client=20F?=
=?UTF-8?q?or=20Curl=20And=20Wget=20Alternative.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...TP Client For Curl And Wget Alternative.md | 312 ++++++++++++++++++
1 file changed, 312 insertions(+)
create mode 100644 sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
diff --git a/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
new file mode 100644
index 0000000000..46298a6fa0
--- /dev/null
+++ b/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -0,0 +1,312 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
+[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative
+======
+
+Most of the time, we use the curl command or the wget command for file downloads and other purposes.
+
+We have written about the **[best command line download managers][1]** in the past. You can navigate to those articles by clicking the corresponding URLs.
+
+ * **[aria2 – A Command Line Multi-Protocol Download Tool For Linux][2]**
+ * **[Axel – A Lightweight Command Line Download Accelerator For Linux][3]**
+ * **[Wget – A Standard Command Line Download Utility For Linux][4]**
+ * **[curl – A Nifty Command Line Download Tool For Linux][5]**
+
+
+
+Today, we are going to discuss a utility in the same category. Its name is HTTPie.
+
+It's a modern command line HTTP client and a good alternative to the curl and wget commands.
+
+### What Is HTTPie?
+
+HTTPie (pronounced aitch-tee-tee-pie) is a command line HTTP client.
+
+The httpie tool is a modern command line HTTP client that makes CLI interaction with web services simple and human-friendly.
+
+It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output.
+
+HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
+
+### Main Features
+
+ * Expressive and intuitive syntax
+ * Formatted and colorized terminal output
+ * Built-in JSON support (see the example after this list)
+ * Forms and file uploads
+ * HTTPS, proxies, and authentication
+ * Arbitrary request data
+ * Custom headers
+ * Persistent sessions
+ * Wget-like downloads
+ * Python 2.7 and 3.x support
+
+
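+As a quick taste of the built-in JSON support, the sketch below (using the public httpbin.org test service) sends a JSON body: `=` creates a string field, while `:=` passes raw JSON such as a number.
+
+```
+$ http PUT httpbin.org/put name=John age:=29
+```
+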
+
+### How To Install HTTPie In Linux?
+
+Most Linux distributions provide a package that can be installed using the system package manager.
+
+For **`Fedora`** system, use **[DNF Command][6]** to install httpie.
+
+```
+$ sudo dnf install httpie
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install httpie.
+
+```
+$ sudo apt install httpie
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install httpie.
+
+```
+$ sudo pacman -S httpie
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install httpie.
+
+```
+$ sudo yum install httpie
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][11]** to install httpie.
+
+```
+$ sudo zypper install httpie
+```
+
+### 1) How To Request A URL Using HTTPie?
+
+The basic usage of httpie is to request a website URL as an argument.
+
+```
+# http 2daygeek.com
+HTTP/1.1 301 Moved Permanently
+CF-RAY: 4c4a618d0c02ce6d-LHR
+Cache-Control: max-age=3600
+Connection: keep-alive
+Date: Tue, 09 Apr 2019 06:21:28 GMT
+Expires: Tue, 09 Apr 2019 07:21:28 GMT
+Location: https://2daygeek.com/
+Server: cloudflare
+Transfer-Encoding: chunked
+Vary: Accept-Encoding
+```
+
+### 2) How To Download A File Using HTTPie?
+
+You can download a file using HTTPie with the `--download` parameter. This is similar to the wget command.
+
+```
+# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+CF-Cache-Status: HIT
+CF-RAY: 4c4a65d5ca360a66-LHR
+Cache-Control: public, max-age=7200
+Connection: keep-alive
+Content-Length: 32066
+Content-Type: image/png
+Date: Tue, 09 Apr 2019 06:24:23 GMT
+Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
+Expires: Tue, 09 Apr 2019 08:24:23 GMT
+Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
+Server: cloudflare
+Set-Cookie: __cfduid=dd2034b2f95ae42047e082f59f2b964f71554791063; expires=Wed, 08-Apr-20 06:24:23 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
+Vary: Accept-Encoding
+
+Downloading 31.31 kB to "Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png"
+Done. 31.31 kB in 0.01187s (2.58 MB/s)
+```
+
+Alternatively, you can save the output to a file with a different name by using the `-o` parameter.
+
+```
+# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png -o Anbox-1.png
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+CF-Cache-Status: HIT
+CF-RAY: 4c4a68194daa0a66-LHR
+Cache-Control: public, max-age=7200
+Connection: keep-alive
+Content-Length: 32066
+Content-Type: image/png
+Date: Tue, 09 Apr 2019 06:25:56 GMT
+Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
+Expires: Tue, 09 Apr 2019 08:25:56 GMT
+Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
+Server: cloudflare
+Set-Cookie: __cfduid=d3eea753081690f9a2d36495a74407dd71554791156; expires=Wed, 08-Apr-20 06:25:56 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
+Vary: Accept-Encoding
+
+Downloading 31.31 kB to "Anbox-1.png"
+Done. 31.31 kB in 0.01551s (1.97 MB/s)
+```
+
+### 3) How To Resume A Partial Download Using HTTPie?
+
+You can resume a partial download using HTTPie with the `--continue` (or `-c`) parameter.
+
+```
+# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
+HTTP/1.1 206 Partial Content
+Connection: keep-alive
+Content-Length: 100442112
+Content-Range: bytes 4415488-104857599/104857600
+Content-Type: application/octet-stream
+Date: Tue, 09 Apr 2019 06:32:52 GMT
+ETag: "5253f0fd-6400000"
+Last-Modified: Tue, 08 Oct 2013 11:48:13 GMT
+Server: nginx
+Strict-Transport-Security: max-age=15768000; includeSubDomains
+
+Downloading 100.00 MB to "100MB.bin"
+ | 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
+```
+
+You can verify the size of the partially downloaded file in the output below.
+
+```
+# ls -lhtr 100MB.bin
+-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
+```
+
+### 4) How To Upload A File Using HTTPie?
+
+You can upload a file using HTTPie with the less-than symbol (`<`):
+
+```
+$ http https://transfer.sh < Anbox-1.png
+```
+
+### 5) How To Download A File Using HTTPie With The Redirect Symbol ">"?
+
+You can download a file using HTTPie with the redirect symbol (`>`):
+
+```
+# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
+
+# ls -ltrh Flatpak.png
+-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
+```
+
+### 6) How To Send An HTTP GET Request?
+
+You can explicitly send an HTTP GET request. The GET method is used to retrieve information from the given server using a given URI.
+
+```
+# http GET httpie.org
+HTTP/1.1 301 Moved Permanently
+CF-RAY: 4c4a83a3f90dcbe6-SIN
+Cache-Control: max-age=3600
+Connection: keep-alive
+Date: Tue, 09 Apr 2019 06:44:44 GMT
+Expires: Tue, 09 Apr 2019 07:44:44 GMT
+Location: https://httpie.org/
+Server: cloudflare
+Transfer-Encoding: chunked
+Vary: Accept-Encoding
+```
+
+### 7) How To Submit A Form?
+
+Use the following format to submit a form. A POST request is used to send data, for example customer information or a file upload, to the server using HTML forms.
+
+```
+# http -f POST Ubuntu18.2daygeek.com hello='World'
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+Connection: Keep-Alive
+Content-Encoding: gzip
+Content-Length: 3138
+Content-Type: text/html
+Date: Tue, 09 Apr 2019 06:48:12 GMT
+ETag: "2aa6-5844bf1b047fc-gzip"
+Keep-Alive: timeout=5, max=100
+Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
+Server: Apache/2.4.29 (Ubuntu)
+Vary: Accept-Encoding
+```
+
+Run the following command to see the request that is being sent.
+
+```
+# http -v Ubuntu18.2daygeek.com
+GET / HTTP/1.1
+Accept: */*
+Accept-Encoding: gzip, deflate
+Connection: keep-alive
+Host: ubuntu18.2daygeek.com
+User-Agent: HTTPie/0.9.8
+
+hello=World
+
+HTTP/1.1 200 OK
+Accept-Ranges: bytes
+Connection: Keep-Alive
+Content-Encoding: gzip
+Content-Length: 3138
+Content-Type: text/html
+Date: Tue, 09 Apr 2019 06:48:30 GMT
+ETag: "2aa6-5844bf1b047fc-gzip"
+Keep-Alive: timeout=5, max=100
+Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
+Server: Apache/2.4.29 (Ubuntu)
+Vary: Accept-Encoding
+```
+
+### 8) How To Use HTTP Authentication?
+
+The currently supported authentication schemes are Basic and Digest.
+
+Basic auth
+
+```
+$ http -a username:password example.org
+```
+
+Digest auth
+
+```
+$ http -A digest -a username:password example.org
+```
+
+Password prompt
+
+```
+$ http -a username example.org
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/best-4-command-line-download-managers-accelerators-for-linux/
+[2]: https://www.2daygeek.com/aria2-linux-command-line-download-utility-tool/
+[3]: https://www.2daygeek.com/axel-linux-command-line-download-accelerator/
+[4]: https://www.2daygeek.com/wget-linux-command-line-download-utility-tool/
+[5]: https://www.2daygeek.com/curl-linux-command-line-download-manager/
+[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
From d1bd15536a71101f8096bdbcc19108bfabaf064a Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:46:37 +0800
Subject: [PATCH 0003/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Kube?=
=?UTF-8?q?rnetes=20on=20Fedora=20IoT=20with=20k3s=20sources/tech/20190415?=
=?UTF-8?q?=20Kubernetes=20on=20Fedora=20IoT=20with=20k3s.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...90415 Kubernetes on Fedora IoT with k3s.md | 211 ++++++++++++++++++
1 file changed, 211 insertions(+)
create mode 100644 sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
diff --git a/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
new file mode 100644
index 0000000000..5650e80aee
--- /dev/null
+++ b/sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
@@ -0,0 +1,211 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Kubernetes on Fedora IoT with k3s)
+[#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/)
+[#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/)
+
+Kubernetes on Fedora IoT with k3s
+======
+
+![][1]
+
+Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article [How to turn on an LED with Fedora IoT][2]. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.
+
+Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.
+
+### Why Kubernetes?
+
+While Kubernetes is all the rage in the cloud, it may not be immediately obvious why you would run it on a small single board computer. But there are certainly reasons for doing so. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are [tons of applications][3] that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can provide help if you ever get stuck.
+
+Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent while tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn't matter if it's a single node Raspberry Pi cluster or a large scale machine learning farm.
+
+#### K3s – a lightweight Kubernetes
+
+A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is [k3s][4] – a lightweight Kubernetes distribution.
+
+K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!
+
+### What you will need
+
+ 1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide [here][5]. One machine is enough but two will allow you to test adding more nodes to the cluster.
+ 2. [Configure the firewall][6] to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
+
+
+
+### Install k3s
+
+Installing k3s is very easy. Simply run the installation script:
+
+```
+curl -sfL https://get.k3s.io | sh -
+```
+
+This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:
+
+```
+kubectl get nodes
+```
+
+Note that there are several options that can be passed to the installation script through environment variables. These can be found in the [documentation][7]. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.
+
+While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s:
+
+```
+curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
+ K3S_TOKEN=XXX sh -
+```
+
+The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
+
+### Deploy some containers
+
+Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.
+
+```
+kubectl create deployment my-server --image nginx
+```
+
+This will create a [Deployment][8] named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.
+
+```
+kubectl get pods
+```
+
+In order to access the nginx server running in the pod, first expose the Deployment through a [Service][9]. The following command will create a Service with the same name as the deployment.
+
+```
+kubectl expose deployment my-server --port 80
+```
+
+The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to _curl_ the nginx server just by specifying _my-server_ (the name of the Service). See the example below for how to do this.
+
+```
+# Start a pod and run bash interactively in it
+kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
+# Wait for the bash prompt to appear
+curl my-server
+# You should get the "Welcome to nginx!" page as output
+```
+
+### Ingress controller and external IP
+
+By default, a Service only gets a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to [LoadBalancer][10]. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an [Ingress][11], and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.
+
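+As an aside, requesting an external IP this way can be a hedged one-liner (the Service name my-server-lb is illustrative; per the README quote below, it will stay Pending if no host port is free):
+
+```
+kubectl expose deployment my-server --port 80 --type LoadBalancer --name my-server-lb
+```
+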
+Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes [Traefik][12] for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The [documentation][13] describes the service like this:
+
+> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
+>
+> k3s README
+
+The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.
+
+```
+$ kubectl get svc --all-namespaces
+NAMESPACE     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
+default       kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                      33d
+default       my-server    ClusterIP      10.43.174.38    <none>        80/TCP                       30m
+kube-system   kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       33d
+kube-system   traefik      LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d
+```
+
+Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
+
+### Route incoming requests
+
+Let’s create an Ingress that routes requests to our web server based on the host header. This example uses [xip.io][14] to avoid having to set up DNS records. It works by including the IP address as a subdomain: any subdomain of 10.0.0.8.xip.io can be used to reach the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place you should reach the “default backend”, which is just a page showing “404 page not found”.
+
+We can tell the ingress controller to route requests to our web server Service with the following Ingress.
+
+```
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: my-server
+spec:
+ rules:
+ - host: my-server.10.0.0.8.xip.io
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: my-server
+ servicePort: 80
+```
+
+Save the above snippet in a file named _my-ingress.yaml_ and add it to the cluster by running this command:
+
+```
+kubectl apply -f my-ingress.yaml
+```
+
+You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).
+
+### What about IoT then?
+
+Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.
+
+In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?
+
+The simple answer is labels. You can label the nodes according to capabilities, like this:
+
+```
+kubectl label nodes <node-name> <label-key>=<label-value>
+# Example
+kubectl label nodes node2 camera=available
+```
+
+Once they are labeled, it is easy to select suitable nodes for your workload with [nodeSelectors][15]. The final piece of the puzzle, if you want to run your Pods on _all_ suitable nodes, is to use [DaemonSets][16] instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor, and use nodeSelectors to make sure they only run on nodes with the proper hardware.
+
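+To make that concrete, here is a minimal sketch of such a DaemonSet (the name and image are hypothetical); it runs only on nodes labeled camera=available:
+
+```
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: camera-collector
+spec:
+  selector:
+    matchLabels:
+      app: camera-collector
+  template:
+    metadata:
+      labels:
+        app: camera-collector
+    spec:
+      # Schedule only onto nodes that carry the camera label
+      nodeSelector:
+        camera: available
+      containers:
+      - name: collector
+        image: example/camera-collector:latest
+```
+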
+The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.
+
+#### Utilize spare resources
+
+With the cluster up and running, collecting data and controlling your lights and climate control you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.
+
+You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.
+
+Why not run your own [NextCloud][17] instance? Or maybe [gitea][18]? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?
+
+The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add [resource requests][19] to your workloads.
+
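+A resource request is just a short stanza in the container spec, for example (the values are illustrative):
+
+```
+resources:
+  requests:
+    memory: "64Mi"
+    cpu: "100m"
+```
+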
+### Summary
+
+While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.
+
+Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/
+
+作者:[Lennart Jern][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/lennartj/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png
+[2]: https://fedoramagazine.org/turnon-led-fedora-iot/
+[3]: https://hub.helm.sh/
+[4]: https://k3s.io
+[5]: https://docs.fedoraproject.org/en-US/iot/getting-started/
+[6]: https://github.com/rancher/k3s#open-ports--network-security
+[7]: https://github.com/rancher/k3s#systemd
+[8]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+[9]: https://kubernetes.io/docs/concepts/services-networking/service/
+[10]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
+[11]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[12]: https://traefik.io/
+[13]: https://github.com/rancher/k3s/blob/master/README.md#service-load-balancer
+[14]: http://xip.io/
+[15]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+[16]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
+[17]: https://nextcloud.com/
+[18]: https://gitea.io/en-us/
+[19]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
From c515cf9e3d57492cf38c8e63ac59df49fd56d9c6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:46:48 +0800
Subject: [PATCH 0004/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Linu?=
=?UTF-8?q?x=20Server=20Hardening=20Using=20Idempotency=20with=20Ansible:?=
=?UTF-8?q?=20Part=203=20sources/tech/20190416=20Linux=20Server=20Hardenin?=
=?UTF-8?q?g=20Using=20Idempotency=20with=20Ansible-=20Part=203.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Using Idempotency with Ansible- Part 3.md | 118 ++++++++++++++++++
1 file changed, 118 insertions(+)
create mode 100644 sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md
diff --git a/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md
new file mode 100644
index 0000000000..50f4981c08
--- /dev/null
+++ b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 3)
+[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3)
+[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
+
+Linux Server Hardening Using Idempotency with Ansible: Part 3
+======
+
+![][1]
+
+[Creative Commons Zero][2]
+
+In the previous articles, we introduced idempotency as a way to approach your server’s security posture and looked at some specific Ansible examples, including the kernel, system accounts, and IPtables. In this final article of the series, we’ll look at a few more server-hardening examples and talk a little more about how the idempotency playbook might be used.
+
+#### **Time**
+
+Due to its reduced functionality, and therefore attack surface, the preference amongst a number of OSs has been to introduce “chronyd” over “ntpd”. If you’re new to “chrony” then fret not. It’s still using the NTP (Network Time Protocol) that we all know and love but in a more secure fashion.
+
+The first thing I do with Ansible within the “chrony.conf” file is alter the “bind address” and if my memory serves there’s also a “command port” option. These config options allow Chrony to only listen on the localhost. In other words you are still syncing as usual with other upstream time servers (just as NTP does) but no remote servers can query your time services; only your local machine has access.
+
+There’s more information on the “bindcmdaddress 127.0.0.1” and “cmdport 0” options on the Chrony FAQ page, under “2.5. How can I make chronyd more secure?”, which you should read for clarity. The premise behind the comment on that page is a good idea: “you can disable the internet command sockets completely by adding cmdport 0 to the configuration file”.
+
+Additionally, I would focus on securing the file permissions for Chrony and insist that the service starts as expected, just like the syslog config above. Otherwise, make sure that your time sources are sane, have a degree of redundancy with multiple sources set up, and then copy the whole config file over using Ansible.
+
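+A minimal sketch of those steps as Ansible tasks (the source path and handler name are assumptions for illustration):
+
+```
+- name: Deploy a hardened chrony.conf with tight permissions
+  copy:
+    src: files/chrony.conf
+    dest: /etc/chrony.conf
+    owner: root
+    group: root
+    mode: '0644'
+  notify: restart chronyd
+
+- name: Insist that chronyd is enabled and running
+  service:
+    name: chronyd
+    state: started
+    enabled: yes
+```
+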
+#### **Logging**
+
+You can clearly affect the level of detail included in the logs from a number of pieces of software on a server. Thinking back to what we’ve looked at in relation to syslog already, you can also tweak that application’s config to your needs using Ansible, and then apply the example Ansible above in addition.
+
+#### **PAM**
+
+Apparently, PAM (Pluggable Authentication Modules) has been a part of Linux since 1997. It is undeniably useful (a common use is forcing SSH to use it for password logins, as per the SSH YAML file above). It is extensible and sophisticated, and can perform useful functions such as preventing brute force attacks on password logins using a clever rate limiting system. The syntax varies a little between OSes, but if you have the time then getting PAM working well (even if you’re only using SSH keys and not passwords for your logins) is a worthwhile effort. Attackers like to have their own users on a system; among lots of usernames, something innocuous such as “webadmin” might be easy to miss on a server, and PAM can help you out in this respect.
+
+#### **Auditd**
+
+We’ve looked at logging a little already, but what about capturing every “system call” that a kernel makes? The Linux kernel is a super-busy component of any system, and logging almost every single thing that a system does is an excellent way of providing post-event forensics. The auditd documentation will hopefully shed some light on where to begin. Note the usual caveats about performance: there’s little point in paying extra for compute and disk IO resource because you’ve misconfigured your logging, so spending some time getting it correct would be my advice.
+
+For concerns over disk space I will usually change a few lines in the file “/etc/audit/auditd.conf” in order to prevent, firstly, too many log files being created and, secondly, log files that grow very large without being rotated. This is on the proviso that logs are being ingested upstream via another mechanism too. Clearly, the file permissions and the service starting are also basics you need to cover here. Generally, file permissions for auditd are already tight, as it is a “root” oriented service, so there are fewer changes needed here.
+
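+A hedged sketch of the kind of auditd.conf tweaks described above (the numbers are illustrative; max_log_file is measured in megabytes):
+
+```
+- name: Cap the size of each audit log file
+  lineinfile:
+    dest: /etc/audit/auditd.conf
+    regexp: '^max_log_file '
+    line: 'max_log_file = 50'
+
+- name: Keep a fixed number of rotated audit logs
+  lineinfile:
+    dest: /etc/audit/auditd.conf
+    regexp: '^num_logs '
+    line: 'num_logs = 5'
+```
+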
+#### **Filesystems**
+
+With a little reading you can discover which filesystems are made available to your OS by default. You should disable the ones you don’t need (at the “modprobe.d” file level) with Ansible to prevent weird and wonderful things being attached unwittingly to your servers. You are reducing the attack surface with this approach. The Ansible might look something like the example below.
+
+```
+- name: Make sure filesystems which are not needed are forced off
+  lineinfile:
+    dest: /etc/modprobe.d/harden.conf
+    line: 'install squashfs /bin/true'
+    state: present
+```
+
+#### **SELinux**
+
+The old security favourite, SELinux, sometimes avoided due to its complexity, should be set to “enforcing” mode. Or, at the very least, set to log sensibly using “permissive” mode. Permissive mode will at least fill your auditd logs up with any correct rule matches nicely. In terms of what the Ansible looks like, it’s simple and is along these lines:
+
+```
+- name: Configure SELinux to be running in permissive mode
+  replace:
+    path: /etc/selinux/config
+    regexp: 'SELINUX=disabled'
+    replace: 'SELINUX=permissive'
+```
+
+#### **Packages**
+
+Needless to say, the compliance hardening playbook is also a good place to upgrade all the packages (with some selective exclusions) on the system. Pay attention to the section relating to reboots and idempotency in a moment, however. With other mechanisms in place you might not want to update packages here, but instead do so as per the automation article mentioned in a moment.
+
+### **Idempotency**
+
+Now we’ve run through some of the aspects you would want to look at when hardening on a server, let’s think a little more about how the playbook might be used.
+
+When it comes to cloud platforms, most of my professional work has been on AWS, and therefore, more often than not, a fresh AMI is launched and then a playbook is run over the top of it. There’s a mountain of detail on one way of doing that in a previous article, which you may be pleased to discover accommodates a mechanism to spawn a script or playbook.
+
+It is important to note, when it comes to idempotency, that it may take a little more effort initially to get your head around the logic involved in being able to re-run Ansible repeatedly without disturbing the required status quo of your server estate.
+
+One thing to be absolutely certain of however (barring rare edge cases) is that after you apply your hardening for the very first time, on a new AMI or server build, you will require a reboot. This is an important element due to a number of system facets not being altered correctly without a reboot. These include applying kernel changes so alterations become live, writing auditd rules as immutable config and also starting or stopping services to improve the security posture.
+
+Note though that you’re probably not going to want to execute all plays in a playbook every twenty or thirty minutes, such as updating all packages and stopping and restarting key customer-facing services. As a result you should factor the logic into your Ansible so that some tasks only run once initially, and then maybe write a “completed” placeholder file to the filesystem afterwards for referencing. There are a million different ways of achieving a status checker.
+
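+One minimal sketch of such a status checker, using a hypothetical placeholder path and task file:
+
+```
+- name: Check whether the one-off hardening has already completed
+  stat:
+    path: /var/lib/hardening/.completed
+  register: harden_done
+
+- name: Run the one-off hardening tasks only on the first pass
+  include_tasks: first_run.yml
+  when: not harden_done.stat.exists
+```
+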
+The nice thing about Ansible is that the logic for rerunning playbooks is implicit, unlike shell scripts, where coding that logic for this type of task can be arduous. Sometimes, such as when updating the GRUB bootloader, trying to guess the many permutations of a system change can be painful.
+
+### **Bedtime Reading**
+
+I still think that you can’t beat trial and error when it comes to computing. Experience is valued for good reason.
+
+Be warned that you’ll find contradictory advice sometimes from the vast array of online resources in this area. Advice differs probably because of the different use cases. The only way to harden the varying flavours of OS to my mind is via a bespoke approach. This is thanks to the environments that servers are used within and the requirements of the security framework or standard that an organisation needs to meet.
+
+For OS hardening details you can check with resources such as the NSA ([https://www.nsa.gov][3]), the Cloud Security Alliance, proprietary training organisations such as GIAC ([https://www.giac.org][4]), the diverse CIS Benchmarks ([https://www.cisecurity.org][5]) for industry consensus-based benchmarking, the SANS Institute, NIST’s Computer Security Research ([https://csrc.nist.gov][6]) and, of course, print media too.
+
+### **Conclusion**
+
+Hopefully, you can see how powerful an idempotent server infrastructure is and are tempted to try it for yourself.
+
+The ever-present threat of APT (Advanced Persistent Threat) attacks on infrastructure, where a successful attacker will sit silently monitoring events and then when it’s opportune infiltrate deeper into an estate, makes this type of configuration highly valuable.
+
+The amount of detail that goes into the tests and configuration changes is key to the value that such an approach will bring to an organisation. Like the tests in a CI/CD pipeline, they’re only ever as good as their coverage.
+
+Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][7]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3
+
+作者:[Chris Binnie][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/chrisbinnie
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tech-1495181_1280.jpg?itok=5WcwApNN
+[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
+[3]: https://www.nsa.gov/
+[4]: https://www.giac.org/
+[5]: https://www.cisecurity.org/
+[6]: https://csrc.nist.gov/
+[7]: https://www.devsecops.cc/
From 0db46a2024eb5670599a9d855f9d0058e976d7e0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:46:58 +0800
Subject: [PATCH 0005/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Trou?=
=?UTF-8?q?bleshooting=20slow=20WiFi=20on=20Linux=20sources/tech/20190415?=
=?UTF-8?q?=20Troubleshooting=20slow=20WiFi=20on=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0415 Troubleshooting slow WiFi on Linux.md | 39 +++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
diff --git a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
new file mode 100644
index 0000000000..52af44459a
--- /dev/null
+++ b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
@@ -0,0 +1,39 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Troubleshooting slow WiFi on Linux)
+[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
+[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
+
+Troubleshooting slow WiFi on Linux
+======
+
+I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
+
+Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
+
+Read more at: [OpenSource.com][2]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
+
+作者:[OpenSource.com][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/USERS/OPENSOURCECOM
+[b]: https://github.com/lujun9972
+[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
+[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
From 32fe47d1f95e97150a3ca5f644a53225bd9b2e83 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:47:14 +0800
Subject: [PATCH 0006/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Linu?=
=?UTF-8?q?x=20Foundation=20Training=20Courses=20Sale=20&=20Discount=20Cou?=
=?UTF-8?q?pon=20sources/tech/20190416=20Linux=20Foundation=20Training=20C?=
=?UTF-8?q?ourses=20Sale=20-=20Discount=20Coupon.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...Training Courses Sale - Discount Coupon.md | 68 +++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
diff --git a/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
new file mode 100644
index 0000000000..04c1feb5ba
--- /dev/null
+++ b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Foundation Training Courses Sale & Discount Coupon)
+[#]: via: (https://itsfoss.com/linux-foundation-discount-coupon/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Linux Foundation Training Courses Sale & Discount Coupon
+======
+
+The Linux Foundation is the non-profit organization that employs Linux creator Linus Torvalds and manages the development of the Linux kernel. The Linux Foundation aims to promote the adoption of Linux and Open Source in the industry, and it is doing a great job in this regard.
+
+Open Source jobs are in demand and no one knows this better than the Linux Foundation, the official Linux organization. This is why the Linux Foundation provides a number of training and certification courses on Linux related technology. You can browse the [entire course offering on the Linux Foundation’s training webpage][1].
+
+### Linux Foundation Latest Offer: 40% off on all courses [Limited Time]
+
+At present, the Linux Foundation has some great offers for sysadmin, devops and cloud professionals.
+
+It is offering a massive discount of 40% on the entire range of its e-learning courses and certification bundles, including the growing catalog of cloud and devops e-learning courses like Kubernetes!
+
+Just use coupon code **APRIL40** at checkout to get your discount.
+
+[Linux Foundation 40% Off (Coupon Code APRIL40)][2]
+
+_Do note that this offer is valid till 22nd April 2019 only._
+
+### Linux Foundation Discount Coupon [Valid all the time]
+
+You can get 16% off on any training or certification course provided by The Linux Foundation at any given time. All you have to do is use the coupon code **FOSS16** on the checkout page.
+
+Note that it might not be combinable with other offers, such as the sysadmin day offer.
+
+[Get 16% off on Linux Foundation Courses with FOSS16 Code][1]
+
+This article contains affiliate links. Please read our [affiliate policy][3].
+
+#### Should you get certified?
+
+![][4]
+
+This is a question I get asked regularly: are Linux certifications worth it? The short answer is yes.
+
+As per the [open source jobs report in 2018][5], over 80% of open source professionals said that certifications helped with their careers. Certifications enable you to demonstrate technical knowledge to potential employers and thus certifications make you more employable in general.
+
+Almost half of the hiring managers said that employing certified open source professionals is a priority for them.
+
+Certifications from a reputed authority such as the Linux Foundation, Red Hat, or LPI are particularly helpful when you are a fresh graduate or want to switch to a new domain in your career.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-foundation-discount-coupon/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://shareasale.com/u.cfm?d=507759&m=59485&u=747593&afftrack=
+[2]: http://shrsl.com/1k5ug
+[3]: https://itsfoss.com/affiliate-policy/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/07/linux-foundation-training-certification-discount.png?ssl=1
+[5]: https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
From 98791f80324dd11d02c065c763a95f5238c8395a Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:47:32 +0800
Subject: [PATCH 0007/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20How?=
=?UTF-8?q?=20to=20Install=20MySQL=20in=20Ubuntu=20Linux=20sources/tech/20?=
=?UTF-8?q?190416=20How=20to=20Install=20MySQL=20in=20Ubuntu=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...16 How to Install MySQL in Ubuntu Linux.md | 238 ++++++++++++++++++
1 file changed, 238 insertions(+)
create mode 100644 sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
diff --git a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
new file mode 100644
index 0000000000..ee3a82ca03
--- /dev/null
+++ b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
@@ -0,0 +1,238 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install MySQL in Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How to Install MySQL in Ubuntu Linux
+======
+
+_**Brief: This tutorial teaches you to install MySQL in Ubuntu based Linux distributions. You’ll also learn how to verify your install and how to connect to MySQL for the first time.**_
+
+**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open-source**.
+
+**MySQL** uses **relational databases** (basically **tabular data**). This makes it really easy to store, organize, and access data. **SQL** (**Structured Query Language**) is used to manage the data.
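+
+For example, a generic SQL sketch (the table and column names here are purely illustrative) that stores a row in a table and reads it back looks like this:
+
+```
+CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50));
+INSERT INTO users VALUES (1, 'alice');
+SELECT name FROM users WHERE id = 1;
+```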
+
+In this article I’ll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let’s get to it!
+
+### Installing MySQL in Ubuntu
+
+![][3]
+
+I’ll be covering two ways you can install **MySQL** in Ubuntu 18.04:
+
+ 1. Install MySQL from the Ubuntu repositories. Very basic, but not the latest version (5.7).
+ 2. Install MySQL using the official repository. This adds a bigger step to the process, but nothing to worry about, and you'll get the latest version (8.0).
+
+
+
+When needed, I’ll provide screenshots to guide you. For most of this guide, I’ll be entering commands in the **terminal** ( **default hotkey** : CTRL+ALT+T). Don’t be scared of it!
+
+#### Method 1. Installing MySQL from the Ubuntu repositories
+
+First of all, make sure your repositories are updated by entering:
+
+```
+sudo apt update
+```
+
+Now, to install **MySQL 5.7**, simply type:
+
+```
+sudo apt install mysql-server -y
+```
+
+That’s it! Simple and efficient.
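+
+If you want to double-check which version of the client was installed (an optional sanity check), you can run:
+
+```
+mysql --version
+```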
+
+#### Method 2. Installing MySQL using the official repository
+
+Although this method has a few more steps, I'll go through them one by one, with clear notes along the way.
+
+The first step is browsing to the [download page][4] of the official MySQL website.
+
+![][5]
+
+Here, go down to the **download link** for the **DEB Package**.
+
+![][6]
+
+Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.
+
+Now go back to the terminal. We'll [use the][7] **[curl][7]** [command][7] to download the package:
+
+```
+curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
+```
+
+The URL at the end of the command is the link I copied from the website. It might be different based on the current version of MySQL. Let's use **dpkg** to start installing MySQL:
+
+```
+sudo dpkg -i mysql-apt-config*
+```
+
+Update your repositories:
+
+```
+sudo apt update
+```
+
+To actually install MySQL, we’ll use the same command as in the first method:
+
+```
+sudo apt install mysql-server -y
+```
+
+Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.
+
+![][8]
+
+Press **Enter**. This should prompt you to enter a **password**: you are basically setting the root password for MySQL. Don't confuse it with the [root password][9] of your Ubuntu system.
+
+![][10]
+
+Type in a password and press **Tab** to select **\<Ok\>**. Press **Enter**. You'll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **\<Ok\>**. Press **Enter**.
+
+![][11]
+
+Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **\<Ok\>** and press **Enter** again:
+
+![][12]
+
+Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.
+
+That’s it! You have successfully installed MySQL.
+
+#### Verify your MySQL installation
+
+To **verify** that MySQL installed correctly, use:
+
+```
+sudo systemctl status mysql.service
+```
+
+This will show some information about the service:
+
+![][13]
+
+You should see **Active: active (running)** in there somewhere. If you don't, use the following command to start the **service**:
+
+```
+sudo systemctl start mysql.service
+```
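+
+If you also want MySQL to start automatically at boot (this assumes a systemd-based setup, which Ubuntu 18.04 is), you can enable the service too:
+
+```
+sudo systemctl enable mysql.service
+```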
+
+#### Configuring/Securing MySQL
+
+For a new install, you should run the provided command to apply some security-related settings. That's:
+
+```
+sudo mysql_secure_installation
+```
+
+Doing so will first ask whether you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you'll have to select a minimum password strength (**0 – Low, 1 – Medium, 2 – High**), and you won't be able to set any password that doesn't respect the selected rules. If you don't have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter**, then choose a **strength level** for your password and input the one you want to use. If successful, you'll continue the **securing** process; otherwise, you'll have to re-enter a password.
+
+If, however, you do not want this feature (I don't), just press **Enter** or **any other key** to skip using it.
+
+For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). In order, they are: **remove anonymous users, disallow root login remotely, remove test database and access to it, reload privilege tables now** (see the abridged transcript below).
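+
+To give you an idea of what to expect, here is an abridged, illustrative transcript of those prompts (the exact wording may differ slightly between MySQL versions):
+
+```
+Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
+Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
+Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
+Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
+All done!
+```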
+
+#### Connecting to & Disconnecting from the MySQL Server
+
+To run SQL queries, you'll first have to connect to the server with the MySQL client and use the MySQL prompt. The command for doing this is:
+
+```
+mysql -h host_name -u user -p
+```
+
+ * **-h** is used to specify a **host name** (if the server is located on another machine; if it isn’t, just omit it)
+ * **-u** mentions the **user**
+ * **-p** specifies that you want to input a **password**.
+
+
+
+Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:
+
+```
+mysql -u test_user -p1234
+```
+
+If you entered the required parameters correctly, you'll be greeted by the **MySQL shell prompt** (**mysql>**):
+
+![][14]
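+
+Once you're at the prompt, a quick sanity check never hurts. As a minimal sketch (the `test_db` name is just an example), you could list the existing databases and create one of your own:
+
+```
+mysql> SHOW DATABASES;
+mysql> CREATE DATABASE test_db;
+mysql> USE test_db;
+mysql> SELECT VERSION();
+```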
+
+To **disconnect** from the server and **leave** the mysql prompt, type:
+
+```
+QUIT
+```
+
+Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.
+
+You can also output info about the **version** with a simple command:
+
+```
+sudo mysqladmin -u root version -p
+```
+
+If you want to see a **list of options** , use:
+
+```
+mysql --help
+```
+
+#### Uninstalling MySQL
+
+If you decide that you want to use a newer release, or just want to stop using MySQL, you can uninstall it.
+
+First, disable the service:
+
+```
+sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
+```
+
+Make sure you back up your databases first, in case you want to use them later on. As a minimal sketch (assuming a database named `mydb`; swap in your own database name), you can dump one with **mysqldump**:
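+
+```
+mysqldump -u root -p mydb > mydb_backup.sql
+```
+
+You can then uninstall MySQL by running: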
+
+```
+sudo apt purge mysql*
+```
+
+To clean up dependencies:
+
+```
+sudo apt autoremove
+```
+
+**Wrapping Up**
+
+In this article, I’ve covered **installing MySQL** in Ubuntu Linux. I’d be glad if this guide helps struggling users and beginners.
+
+Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We're eager to receive any feedback, impressions, or suggestions. Thanks for reading, and don't hesitate to experiment with this incredible tool!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-mysql-ubuntu/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://www.mysql.com/
+[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
+[4]: https://dev.mysql.com/downloads/repo/apt/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
+[7]: https://linuxhandbook.com/curl-command-examples/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
+[9]: https://itsfoss.com/change-password-ubuntu/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
From bfa5835c255f02ff07fff75c64179e560f8d1631 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:47:44 +0800
Subject: [PATCH 0008/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=2012?=
=?UTF-8?q?=20Single=20Board=20Computers:=20Alternative=20to=20Raspberry?=
=?UTF-8?q?=20Pi=20sources/tech/20190415=2012=20Single=20Board=20Computers?=
=?UTF-8?q?-=20Alternative=20to=20Raspberry=20Pi.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... Computers- Alternative to Raspberry Pi.md | 354 ++++++++++++++++++
1 file changed, 354 insertions(+)
create mode 100644 sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md
diff --git a/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md b/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md
new file mode 100644
index 0000000000..c30c286142
--- /dev/null
+++ b/sources/tech/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md
@@ -0,0 +1,354 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (12 Single Board Computers: Alternative to Raspberry Pi)
+[#]: via: (https://itsfoss.com/raspberry-pi-alternatives/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+12 Single Board Computers: Alternative to Raspberry Pi
+======
+
+_**Brief: Looking for a Raspberry Pi alternative? Here are some other single board computers to satisfy your DIY cravings.**_
+
+Raspberry Pi is the most popular single board computer right now. You can use it for your DIY projects, as a cost-effective system to learn coding, or as a [media server][1] to stream media at your convenience.
+
+You can do a lot of things with Raspberry Pi but it is not the ultimate solution for all kinds of tinkerers. Some might be looking for a cheaper board and some might be on the lookout for a powerful one.
+
+Whatever the case, we do need Raspberry Pi alternatives for a variety of reasons. So, in this article, we will talk about twelve single board computers that we think are the best Raspberry Pi alternatives.
+
+![][2]
+
+### Raspberry Pi alternatives to satisfy your DIY craving
+
+The list is in no particular order of ranking. Some of the links here are affiliate links. Please read our [affiliate policy][3].
+
+#### 1\. Onion Omega2+
+
+![][4]
+
+For just **$13**, the Omega2+ is one of the cheapest IoT single board computers you can find out there. It runs the LEDE (Linux Embedded Development Environment) Linux OS – a distribution based on [OpenWRT][5].
+
+Its form factor, cost, and the flexibility that comes from running a customized version of Linux make it a perfect fit for almost any type of IoT application.
+
+You can find the [Onion Omega kit on Amazon][6] or order it from their own website, though the latter will cost you extra in shipping.
+
+**Key Specifications**
+
+ * MT7688 SoC
+ * 2.4 GHz IEEE 802.11 b/g/n WiFi
+ * 128 MB DDR2 RAM
+ * 32 MB on-board flash storage
+ * MicroSD Slot
+ * USB 2.0
+ * 12 GPIO Pins
+
+
+
+[Visit Website][7]
+
+#### 2\. NVIDIA Jetson Nano Developer Kit
+
+![][8]
+
+This is a very unique and interesting Raspberry Pi alternative from NVIDIA for just **$99**. Granted, it's not something that everyone can make use of; it is tailored for a specific group of tinkerers and developers.
+
+NVIDIA explains it for the following use-case:
+
+> NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts.
+>
+> nvidia
+
+So, basically, if you are into AI and deep learning, you can make use of the developer kit. If you are curious, the production compute module of this will be arriving in June 2019.
+
+**Key Specifications:**
+
+ * CPU: Quad-core ARM A57 @ 1.43 GHz
+ * GPU: 128-core Maxwell
+ * RAM: 4 GB 64-bit LPDDR4 25.6 GB/s
+ * Display: HDMI 2.0
+ * 4 x USB 3.0 and eDP 1.4
+
+
+
+[Visit Website][9]
+
+#### 3\. ASUS Tinker Board S
+
+![][10]
+
+ASUS Tinker Board S isn't the most affordable Raspberry Pi alternative at **$82** (on [Amazon][11]), but it is a powerful one. It features the same 40-pin connector that you'd normally find on a standard Raspberry Pi 3 but offers a more powerful processor and GPU. Also, the Tinker Board S is exactly the same size as a standard Raspberry Pi 3.
+
+The main highlight of this board is the presence of 16 GB of [eMMC][12] storage (in layman's terms, it has SSD-like storage on board, which makes it faster to work with).
+
+**Key Specifications:**
+
+ * Rockchip Quad-Core RK3288 processor
+ * 2 GB DDR3 RAM
+ * Integrated Graphics Processor
+ * ARM® Mali™-T764 GPU
+ * 16 GB eMMC
+ * MicroSD Card Slot
+ * 802.11 b/g/n, Bluetooth V4.0 + EDR
+ * USB 2.0
+ * 28 GPIO pins
+ * HDMI Interface
+
+
+
+[Visit Website][13]
+
+#### 4\. ClockworkPi
+
+![][14]
+
+Clockwork Pi is usually a part of the [GameShell Kit][15] if you are looking to assemble a modular retro gaming console. However, you can purchase the board separately for $49.
+
+Its compact size, WiFi connectivity, and the presence of a micro HDMI port make it a great choice for a lot of things.
+
+**Key Specifications:**
+
+ * Allwinner R16-J Quad-core Cortex-A7 CPU @1.2GHz
+ * Mali-400 MP2 GPU
+ * RAM: 1GB DDR3
+ * WiFi & Bluetooth v4.0
+ * Micro HDMI output
+ * MicroSD Card Slot
+
+
+
+[Visit Website][16]
+
+#### 5\. Arduino Mega 2560
+
+![][17]
+
+If you are into robotics projects or want something for a 3D printer, the Arduino Mega 2560 will be a handy replacement for Raspberry Pi. Unlike Raspberry Pi, it is based on a microcontroller, not a microprocessor.
+
+It would cost you $38.50 on their [official site][18] and around [$33 on Amazon][19].
+
+**Key Specifications:**
+
+ * Microcontroller: ATmega2560
+ * Clock Speed: 16 MHz
+ * Digital I/O Pins: 54
+ * Analog Input Pins: 16
+ * Flash Memory: 256 KB of which 8 KB used by bootloader
+
+
+
+[Visit Website][18]
+
+#### 6\. Rock64 Media Board
+
+![][20]
+
+For the same investment as a Raspberry Pi 3 B+, you get a faster processor and double the memory with the Rock64 Media Board. It also offers a cheaper alternative to Raspberry Pi if you opt for the 1 GB RAM model, which costs $10 less.
+
+Unlike Raspberry Pi, it has no wireless connectivity support, but the presence of USB 3.0 and HDMI 2.0 does make a good difference if that matters to you.
+
+**Key Specifications:**
+
+ * Rockchip RK3328 Quad-Core ARM Cortex A53 64-Bit Processor
+ * Supports up to 4GB 1600MHz LPDDR3 RAM
+ * eMMC module socket
+ * MicroSD Card slot
+ * USB 3.0
+ * HDMI 2.0
+
+
+
+[Visit Website][21]
+
+#### 7\. Odroid-XU4
+
+![][22]
+
+Odroid-XU4 is the perfect alternative to Raspberry Pi if you have room to spend a little more ($80-$100 or even lower, depending on the store/availability).
+
+It is indeed a powerful replacement and technically a bit smaller in size. The support for eMMC and USB 3.0 makes it faster to work with.
+
+**Key Specifications:**
+
+ * Samsung Exynos 5422 Octa ARM Cortex™-A15 Quad 2Ghz and Cortex™-A7 Quad 1.3GHz CPUs
+ * 2Gbyte LPDDR3 RAM
+ * GPU: Mali-T628 MP6
+ * USB 3.0
+ * HDMI 1.4a
+ * eMMC 5.0 module socket
+ * MicroSD Card Slot
+
+
+
+[Visit Website][23]
+
+#### 8\. PocketBeagle
+
+![][24]
+
+It is an incredibly small SBC – similar in size to the Raspberry Pi Zero. However, it costs about the same as a full-sized Raspberry Pi 3. The main highlight here is that you can use it as a USB key fob and then access the Linux terminal to work on it.
+
+**Key Specifications:**
+
+ * Processor: Octavo Systems OSD3358 1GHz ARM® Cortex-A8
+ * RAM: 512 MB DDR3
+ * 72 expansion pin headers
+ * microUSB
+ * USB 2.0
+
+
+
+[Visit Website][25]
+
+#### 9\. Le Potato
+
+![][26]
+
+Le Potato, by [Libre Computer][27], is also identified by its model number, AML-S905X-CC. It would [cost you $45][28].
+
+If you want double the memory along with an HDMI 2.0 interface for a bit more than a Raspberry Pi, this would be the perfect choice. However, you won't find wireless connectivity baked in.
+
+**Key Specifications:**
+
+ * Amlogic S905X SoC
+ * 2GB DDR3 SDRAM
+ * USB 2.0
+ * HDMI 2.0
+ * microUSB
+ * MicroSD Card Slot
+ * eMMC Interface
+
+
+
+[Visit Website][29]
+
+#### 10\. Banana Pi M64
+
+![][30]
+
+It comes loaded with 8 GB of eMMC, which is the key highlight of this Raspberry Pi alternative. For the very same reason, it costs $60.
+
+The presence of an HDMI interface makes it 4K-ready. In addition, Banana Pi offers a wide variety of other open source SBCs as alternatives to Raspberry Pi.
+
+**Key Specifications:**
+
+ * 1.2 Ghz Quad-Core ARM Cortex A53 64-Bit Processor-R18
+ * 2GB DDR3 SDRAM
+ * 8 GB eMMC
+ * WiFi & Bluetooth
+ * USB 2.0
+ * HDMI
+
+
+
+[Visit Website][31]
+
+#### 11\. Orange Pi Zero
+
+![][32]
+
+The Orange Pi Zero is an incredibly cheap alternative to Raspberry Pi. You can get it for about $10 on AliExpress or Amazon. For a [little more investment, you can get 512 MB of RAM][33].
+
+If that isn't sufficient, you can also go for the Orange Pi 3, which has better specifications and will cost you around $25.
+
+**Key Specifications:**
+
+ * H2 Quad-core Cortex-A7
+ * Mali400MP2 GPU
+ * RAM: Up to 512 MB
+ * TF Card support
+ * WiFi
+ * USB 2.0
+
+
+
+[Visit Website][34]
+
+#### 12\. VIM 2 SBC by Khadas
+
+![][35]
+
+VIM 2 by Khadas is one of the latest SBCs that you can grab with Bluetooth 5.0 on board. It [starts from $99 (the basic model) and goes up to $140][36].
+
+The basic model includes 2 GB RAM, 16 GB eMMC and Bluetooth 4.1. However, the Pro/Max versions would include Bluetooth 5.0, more memory, and more eMMC storage.
+
+**Key Specifications:**
+
+ * Amlogic S912 1.5GHz 64-bit Octa-Core CPU
+ * T820MP3 GPU
+ * Up to 3 GB DDR4 RAM
+ * Up to 64 GB eMMC
+ * Bluetooth 5.0 (Pro/Max)
+ * Bluetooth 4.1 (Basic)
+ * HDMI 2.0a
+ * WiFi
+
+
+
+**Wrapping Up**
+
+We do know that there are many different types of single board computers. Some are more powerful than Raspberry Pi, and some are scaled-down versions of it with a cheaper price tag. Also, SBCs like the Jetson Nano are tailored for a specific use. So, depending on what you require, you should verify the specifications of the single board computer before buying.
+
+If you think that you know about something that is better than the ones mentioned above, feel free to let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/raspberry-pi-alternatives/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-media-server/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-alternatives.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/affiliate-policy/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/omega-2-plus-e1555306748755-800x444.jpg?resize=800%2C444&ssl=1
+[5]: https://openwrt.org/
+[6]: https://amzn.to/2Xj8pkn
+[7]: https://onion.io/store/omega2p/
+[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Jetson-Nano-e1555306350976-800x590.jpg?resize=800%2C590&ssl=1
+[9]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/asus-tinker-board-s-e1555304945760-800x450.jpg?resize=800%2C450&ssl=1
+[11]: https://amzn.to/2XfkOFT
+[12]: https://en.wikipedia.org/wiki/MultiMediaCard
+[13]: https://www.asus.com/in/Single-Board-Computer/Tinker-Board-S/
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/clockwork-pi-e1555305016242-800x506.jpg?resize=800%2C506&ssl=1
+[15]: https://itsfoss.com/gameshell-console/
+[16]: https://www.clockworkpi.com/product-page/cpi-v3-1
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/arduino-mega-2560-e1555305257633.jpg?ssl=1
+[18]: https://store.arduino.cc/usa/mega-2560-r3
+[19]: https://amzn.to/2KCi041
+[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ROCK64_board-e1555306092845-800x440.jpg?resize=800%2C440&ssl=1
+[21]: https://www.pine64.org/?product=rock64-media-board-computer
+[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/odroid-xu4.jpg?fit=800%2C354&ssl=1
+[23]: https://www.hardkernel.com/shop/odroid-xu4-special-price/
+[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/PocketBeagle.jpg?fit=800%2C450&ssl=1
+[25]: https://beagleboard.org/p/products/pocketbeagle
+[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/aml-libre.-e1555306237972-800x514.jpg?resize=800%2C514&ssl=1
+[27]: https://libre.computer/
+[28]: https://amzn.to/2DpG3xl
+[29]: https://libre.computer/products/boards/aml-s905x-cc/
+[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/banana-pi-m6.jpg?fit=800%2C389&ssl=1
+[31]: http://www.banana-pi.org/m64.html
+[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/orange-pi-zero.jpg?fit=800%2C693&ssl=1
+[33]: https://amzn.to/2IlI81g
+[34]: http://www.orangepi.org/orangepizero/index.html
+[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/khadas-vim-2-e1555306505640-800x563.jpg?resize=800%2C563&ssl=1
+[36]: https://amzn.to/2UDvrFE
From f388493920f45b52e3749824bf0833a65ee5a094 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:48:01 +0800
Subject: [PATCH 0009/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Buil?=
=?UTF-8?q?ding=20a=20DNS-as-a-service=20with=20OpenStack=20Designate=20so?=
=?UTF-8?q?urces/tech/20190416=20Building=20a=20DNS-as-a-service=20with=20?=
=?UTF-8?q?OpenStack=20Designate.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...S-as-a-service with OpenStack Designate.md | 263 ++++++++++++++++++
1 file changed, 263 insertions(+)
create mode 100644 sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md
diff --git a/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md b/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md
new file mode 100644
index 0000000000..2dc628a49c
--- /dev/null
+++ b/sources/tech/20190416 Building a DNS-as-a-service with OpenStack Designate.md
@@ -0,0 +1,263 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
+[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
+[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
+
+Building a DNS-as-a-service with OpenStack Designate
+======
+Learn how to install and configure Designate, a multi-tenant
+DNS-as-a-service (DNSaaS) for OpenStack.
+![Command line prompt][1]
+
+[Designate][2] is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with [Neutron][3], and integration support for Bind9.
+
+You might want to consider a DNSaaS for the following reasons:
+
+ * A clean REST API for managing zones and records
+ * Automatic record generation (with OpenStack integration)
+ * Support for multiple authoritative name servers
+ * Hosting multiple projects/organizations
+
+
+
+![Designate's architecture][4]
+
+This article explains how to manually install and configure the latest release of Designate service on CentOS or Red Hat Enterprise Linux 7 (RHEL 7), but you can use the same configuration on other distributions.
+
+### Install Designate on OpenStack
+
+I have Ansible roles for bind and Designate that demonstrate the setup in my [GitHub repository][5].
+
+This setup presumes that the bind service is external to the OpenStack controller node (even though you can install bind locally on it).
+
+ 1. Install Designate's packages and bind (on the OpenStack controller):
+
+```
+# yum install openstack-designate-* bind bind-utils -y
+```
+
+ 2. Create the Designate database and user:
+
+```
+MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO \
+'designate'@'localhost' IDENTIFIED BY 'rhlab123';
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' \
+IDENTIFIED BY 'rhlab123';
+```
+
+Note: Bind packages must be installed on the controller side for Remote Name Daemon Control (RNDC) to function properly.
+
+### Configure bind (DNS server)
+
+ 1. Generate the RNDC files:
+
+```
+rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
+
+cat <<EOF > /etc/rndc.conf
+include "/etc/rndc.key";
+options {
+  default-key "designate";
+  default-server {{ DNS_SERVER_IP }};
+  default-port 953;
+};
+EOF
+```
+ 2. Add the following into **named.conf**:
+
+```
+include "/etc/rndc.key";
+controls {
+  inet {{ DNS_SERVER_IP }} allow { localhost; {{ CONTROLLER_SERVER_IP }}; } keys { "designate"; };
+};
+```
+
+In the **options** section, add:
+
+```
+options {
+  ...
+  allow-new-zones yes;
+  request-ixfr no;
+  listen-on port 53 { any; };
+  recursion no;
+  allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
+};
+```
+
+Add the right permissions:
+
+```
+chown named:named /etc/rndc.key
+chown named:named /etc/rndc.conf
+chmod 600 /etc/rndc.key
+chown -v root:named /etc/named.conf
+chmod g+w /var/named
+
+# systemctl restart named
+# setsebool named_write_master_zones 1
+```
+
+ 3. Push **rndc.key** and **rndc.conf** to the OpenStack controller:
+
+```
+# scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/
+```
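+
+Before moving on, it is worth confirming that RNDC can actually reach the bind server. A minimal check (assuming the rndc.conf generated above, with its default-server entry, is in place on the machine you run this from):
+
+```
+# rndc -c /etc/rndc.conf status
+```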
+
+### Create OpenStack Designate service and endpoints
+
+Enter:
+
+```
+# openstack user create --domain default --password-prompt designate
+# openstack role add --project services --user designate admin
+# openstack service create --name designate --description "DNS" dns
+
+# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
+# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
+# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
+```
+
+### Configure Designate service
+
+ 1. Edit **/etc/designate/designate.conf**:
+
+ * In the **[service:api]** section, configure **auth_strategy**:
+
+```
+[service:api]
+listen = 0.0.0.0:9001
+auth_strategy = keystone
+api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
+enable_api_v2 = True
+enabled_extensions_v2 = quotas, reports
+```
+
+ * In the **[keystone_authtoken]** section, configure the following options:
+
+```
+[keystone_authtoken]
+auth_type = password
+username = designate
+password = rhlab123
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
+auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/
+```
+
+ * In the **[service:worker]** section, enable the worker model:
+
+```
+enabled = True
+notify = True
+```
+
+ * In the **[storage:sqlalchemy]** section, configure database access:
+
+```
+[storage:sqlalchemy]
+connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate
+```
+
+ * Populate the Designate database:
+
+```
+# su -s /bin/sh -c "designate-manage database sync" designate
+```
+
+
+ 2. Create Designate's **pools.yaml** file (has target and bind details):
+ * Edit **/etc/designate/pools.yaml** : [code] - name: default
+# The name is immutable. There will be no option to change the name after
+# creation and the only way will to change it will be to delete it
+# (and all zones associated with it) and recreate it.
+description: Default Pool
+
+attributes: {}
+
+# List out the NS records for zones hosted within this pool
+# This should be a record that is created outside of designate, that
+# points to the public IP of the controller node.
+ns_records:
+\- hostname: {{Controller_FQDN}}. # Thisis mDNS
+priority: 1
+
+# List out the nameservers for this pool. These are the actual BIND servers.
+# We use these to verify changes have propagated to all nameservers.
+nameservers:
+\- host: {{ DNS_SERVER_IP }}
+port: 53
+
+# List out the targets for this pool. For BIND there will be one
+# entry for each BIND server, as we have to run rndc command on each server
+targets:
+\- type: bind9
+description: BIND9 Server 1
+
+# List out the designate-mdns servers from which BIND servers should
+# request zone transfers (AXFRs) from.
+# This should be the IP of the controller node.
+# If you have multiple controllers you can add multiple masters
+# by running designate-mdns on them, and adding them here.
+masters:
+\- host: {{ CONTROLLER_SERVER_IP }}
+port: 5354
+
+# BIND Configuration options
+options:
+host: {{ DNS_SERVER_IP }}
+port: 53
+rndc_host: {{ DNS_SERVER_IP }}
+rndc_port: 953
+rndc_key_file: /etc/rndc.key
+rndc_config_file: /etc/rndc.conf
+```
+* Populate Designate's pools: [code]`su -s /bin/sh -c "designate-manage pool update" designate`
+```
+
+
+
+ 3. Start the Designate central and API services:
+
+```
+systemctl enable --now designate-central designate-api
+```
+ 4. Verify that Designate's services are up:
+
+```
+# openstack dns service list
+
++--------------+--------+-------+--------------+
+| service_name | status | stats | capabilities |
++--------------+--------+-------+--------------+
+| central      | UP     | -     | -            |
+| api          | UP     | -     | -            |
+| mdns         | UP     | -     | -            |
+| worker       | UP     | -     | -            |
+| producer     | UP     | -     | -            |
++--------------+--------+-------+--------------+
+```
+
+
+
+
+### Configure OpenStack Neutron with external DNS
+
+ 1. Configure iptables for the Designate services:
+
+```
+# iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
+# iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
+# iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
+# iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
+# iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
+
+# service iptables save; service iptables restart
+# setsebool named_write_master_zones 1
+```
+ 2. Edit the **[default]** section of **/etc/neutron/neutron.conf**:
+
+```
+external_dns_driver = designate
+```
+
+ 3. Add the **[designate]** section in **/etc/neutron/neutron.conf**:
+
+```
+[designate]
+url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2  # This is the Designate endpoint
+auth_type = password
+auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
+username = designate
+password = rhlab123
+project_name = services
+project_domain_name = Default
+user_domain_name = Default
+allow_reverse_dns_lookup = True
+ipv4_ptr_zone_prefix_size = 24
+ipv6_ptr_zone_prefix_size = 116
+```
+ 4. Edit **dns_domain** in **neutron.conf**, then restart the Neutron services:
+
+```
+dns_domain = rhlab.dev.
+
+# systemctl restart neutron-*
+```
+
+ 5. Add **dns** to the list of Modular Layer 2 (ML2) drivers in **/etc/neutron/plugins/ml2/ml2_conf.ini**:
+
+```
+extension_drivers=port_security,qos,dns
+```
+ 6. Add a **zone** in Designate:
+
+```
+# openstack zone create --email=admin@rhlab.dev rhlab.dev.
+```
+
+Add a new record in the zone **rhlab.dev.**:
+
+```
+# openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test
+```
+
+
+
+
+Designate should now be installed and configured.
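+
+As a final check (a sketch using the example zone and record created above; it assumes `dig` is installed on the machine you query from), you can confirm that the new record resolves through your BIND server:
+
+```
+# openstack recordset list rhlab.dev.
+# dig @{{ DNS_SERVER_IP }} Test.rhlab.dev. A +short
+```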
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/getting-started-openstack-designate
+
+作者:[Amjad Yaseen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ayaseen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://docs.openstack.org/designate/latest/
+[3]: /article/19/3/openstack-neutron
+[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
+[5]: https://github.com/ayaseen/designate
From 3141b55cf0d19e1a4ebbcaaffea274f81474018d Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:48:15 +0800
Subject: [PATCH 0010/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Dete?=
=?UTF-8?q?cting=20malaria=20with=20deep=20learning=20sources/tech/2019041?=
=?UTF-8?q?6=20Detecting=20malaria=20with=20deep=20learning.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...16 Detecting malaria with deep learning.md | 792 ++++++++++++++++++
1 file changed, 792 insertions(+)
create mode 100644 sources/tech/20190416 Detecting malaria with deep learning.md
diff --git a/sources/tech/20190416 Detecting malaria with deep learning.md b/sources/tech/20190416 Detecting malaria with deep learning.md
new file mode 100644
index 0000000000..77df4a561b
--- /dev/null
+++ b/sources/tech/20190416 Detecting malaria with deep learning.md
@@ -0,0 +1,792 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Detecting malaria with deep learning)
+[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
+[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
+
+Detecting malaria with deep learning
+======
+Artificial intelligence combined with open source tools can improve
+diagnosis of the fatal disease malaria.
+![][1]
+
+Artificial intelligence (AI) and open source tools, technologies, and frameworks are a powerful combination for improving society. _"Health is wealth"_ is perhaps a cliche, yet it's very accurate! In this article, we will examine how AI can be leveraged for detecting the deadly disease malaria with a low-cost, effective, and accurate open source deep learning solution.
+
+While I am neither a doctor nor a healthcare researcher and I'm nowhere near as qualified as they are, I am interested in applying AI to healthcare research. My intent in this article is to showcase how AI and open source solutions can help malaria detection and reduce manual labor.
+
+![Python and TensorFlow][2]
+
+Python and TensorFlow: A great combo to build open source deep learning solutions
+
+Thanks to the power of Python and deep learning frameworks like TensorFlow, we can build robust, scalable, and effective deep learning solutions. Because these tools are free and open source, we can build solutions that are very cost-effective and easily adopted and used by anyone. Let's get started!
+
+### Motivation for the project
+
+Malaria is a deadly, infectious, mosquito-borne disease caused by _Plasmodium_ parasites that are transmitted by the bites of infected female _Anopheles_ mosquitoes. There are five parasites that cause malaria, but two types— _P. falciparum_ and _P. vivax_ —cause the majority of the cases.
+
+![Malaria heat map][3]
+
+This map shows that malaria is prevalent around the globe, especially in tropical regions, and the nature and fatality of the disease are the primary motivation for this project.
+
+If an infected mosquito bites you, parasites carried by the mosquito enter your blood and start destroying oxygen-carrying red blood cells (RBC). Typically, the first symptoms of malaria are similar to a virus like the flu and they usually begin within a few days or weeks after the mosquito bite. However, these deadly parasites can live in your body for over a year without causing symptoms, and a delay in treatment can lead to complications and even death. Therefore, early detection can save lives.
+
+The World Health Organization's (WHO) [malaria facts][4] indicate that nearly half the world's population is at risk from malaria, and there are over 200 million malaria cases and approximately 400,000 deaths due to malaria every year. This is a motivation to make malaria detection and diagnosis fast, easy, and effective.
+
+### Methods of malaria detection
+
+There are several methods that can be used for malaria detection and diagnosis. The paper on which our project is based, "[Pre-trained convolutional neural networks as feature extractors toward improved Malaria parasite detection in thin blood smear images][5]," by Rajaraman, et al., introduces some of the methods, including polymerase chain reaction (PCR) and rapid diagnostic tests (RDT). These two tests are typically used where high-quality microscopy services are not readily available.
+
+The standard malaria diagnosis is typically based on a blood-smear workflow, according to Carlos Ariza's article "[Malaria Hero: A web app for faster malaria diagnosis][6]," which I learned about in Adrian Rosebrock's "[Deep learning and medical image analysis with Keras][7]." I appreciate the authors of these excellent resources for giving me more perspective on malaria prevalence, diagnosis, and treatment.
+
+![Blood smear workflow for Malaria detection][8]
+
+A blood smear workflow for Malaria detection
+
+According to WHO protocol, diagnosis typically involves intensive examination of the blood smear at 100X magnification. Trained people manually count how many red blood cells contain parasites out of 5,000 cells. As the Rajaraman, et al., paper cited above explains:
+
+> Thick blood smears assist in detecting the presence of parasites while thin blood smears assist in identifying the species of the parasite causing the infection (Centers for Disease Control and Prevention, 2012). The diagnostic accuracy heavily depends on human expertise and can be adversely impacted by the inter-observer variability and the liability imposed by large-scale diagnoses in disease-endemic/resource-constrained regions (Mitiku, Mengistu, and Gelaw, 2003). Alternative techniques such as polymerase chain reaction (PCR) and rapid diagnostic tests (RDT) are used; however, PCR analysis is limited in its performance (Hommelsheim, et al., 2014) and RDTs are less cost-effective in disease-endemic regions (Hawkes, Katsuva, and Masumbuko, 2009).
+
+Thus, malaria detection could benefit from automation using deep learning.
+
+### Deep learning for malaria detection
+
+Manual diagnosis of blood smears is an intensive process that requires expertise in classifying and counting parasitized and uninfected cells. This process may not scale well, especially in regions where the right expertise is hard to find. Some advancements have been made in leveraging state-of-the-art image processing and analysis techniques to extract hand-engineered features and build machine learning-based classification models. However, these models do not scale well as more data becomes available for training, given that hand-engineering features takes a lot of time.
+
+Deep learning models, or more specifically convolutional neural networks (CNNs), have proven very effective in a wide variety of computer vision tasks. (If you would like additional background knowledge on CNNs, I recommend reading [CS231n Convolutional Neural Networks for Visual Recognition][9].) Briefly, the key layers in a CNN model include convolution and pooling layers, as shown in the following figure.
+
+![A typical CNN architecture][10]
+
+A typical CNN architecture
+
+Convolution layers learn spatial hierarchical patterns from data, which are also translation-invariant, so they are able to learn different aspects of images. For example, the first convolution layer will learn small and local patterns, such as edges and corners, a second convolution layer will learn larger patterns based on the features from the first layers, and so on. This allows CNNs to automate feature engineering and learn effective features that generalize well on new data points. Pooling layers help with downsampling and dimension reduction.
+
+Thus, CNNs help with automated and scalable feature engineering. Also, plugging in dense layers at the end of the model enables us to perform tasks like image classification. Automated malaria detection using deep learning models like CNNs could be very effective, cheap, and scalable, especially with the advent of transfer learning and pre-trained models that work quite well, even with constraints like less data.
+
+The Rajaraman, et al., paper leverages six pre-trained models on a dataset to obtain an impressive accuracy of 95.9% in detecting malaria vs. non-infected samples. Our focus is to try some simple CNN models from scratch and a couple of pre-trained models using transfer learning to see the results we can get on the same dataset. We will use open source tools and frameworks, including Python and TensorFlow, to build our models.
+
+### The dataset
+
+The data for our analysis comes from researchers at the Lister Hill National Center for Biomedical Communications (LHNCBC), part of the National Library of Medicine (NLM), who have carefully collected and annotated the [publicly available dataset][11] of healthy and infected blood smear images. These researchers have developed a mobile [application for malaria detection][12] that runs on a standard Android smartphone attached to a conventional light microscope. They used Giemsa-stained thin blood smear slides from 150 _P. falciparum_ -infected and 50 healthy patients, collected and photographed at Chittagong Medical College Hospital, Bangladesh. The smartphone's built-in camera acquired images of slides for each microscopic field of view. The images were manually annotated by an expert slide reader at the Mahidol-Oxford Tropical Medicine Research Unit in Bangkok, Thailand.
+
+Let's briefly check out the dataset's structure. First, I will install some basic dependencies (based on the operating system being used).
+
+![Installing dependencies][13]
+
+I am using a Debian-based system on the cloud with a GPU so I can run my models faster. To view the directory structure, we must install the tree dependency (if we don't have it) using **sudo apt install tree**.
+
+![Installing the tree dependency][14]
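+
+For instance, assuming the dataset has been extracted into a `cell_images` directory (the path is an assumption), a call like the following prints the top-level layout:
+
+```
+tree cell_images -L 1
+```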
+
+We have two folders that contain images of cells, infected and healthy. We can get further details about the total number of images by entering:
+
+
+```
+import os
+import glob
+
+base_dir = os.path.join('./cell_images')
+infected_dir = os.path.join(base_dir,'Parasitized')
+healthy_dir = os.path.join(base_dir,'Uninfected')
+
+infected_files = glob.glob(infected_dir+'/*.png')
+healthy_files = glob.glob(healthy_dir+'/*.png')
+len(infected_files), len(healthy_files)
+
+# Output
+(13779, 13779)
+```
+
+It looks like we have a balanced dataset with 13,779 malaria and 13,779 non-malaria (uninfected) cell images. Let's build a data frame from this, which we will use when we start building our datasets.
+
+
+```
+import numpy as np
+import pandas as pd
+
+np.random.seed(42)
+
+files_df = pd.DataFrame({
+    'filename': infected_files + healthy_files,
+    'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files)
+}).sample(frac=1, random_state=42).reset_index(drop=True)
+
+files_df.head()
+```
+
+![Datasets][15]
+
+### Build and explore image datasets
+
+To build deep learning models, we need training data, but we also need to test the model's performance on unseen data. We will use a 60:10:30 split for train, validation, and test datasets, respectively. We will leverage the train and validation datasets during training and check the performance of the model on the test dataset.
+
+
+```
+from sklearn.model_selection import train_test_split
+from collections import Counter
+
+train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values,
+                                                                      files_df['label'].values,
+                                                                      test_size=0.3, random_state=42)
+train_files, val_files, train_labels, val_labels = train_test_split(train_files,
+                                                                    train_labels,
+                                                                    test_size=0.1, random_state=42)
+
+print(train_files.shape, val_files.shape, test_files.shape)
+print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels))
+
+# Output
+(17361,) (1929,) (8268,)
+Train: Counter({'healthy': 8734, 'malaria': 8627})
+Val: Counter({'healthy': 970, 'malaria': 959})
+Test: Counter({'malaria': 4193, 'healthy': 4075})
+```
+
+The images will not be of equal dimensions because blood smears and cell images vary based on the human, the test method, and the orientation of the photo. Let's get some summary statistics of our training dataset to determine the optimal image dimensions (remember, we don't touch the test dataset at all!).
+
+
+```
+import cv2
+from concurrent import futures
+import threading
+
+def get_img_shape_parallel(idx, img, total_imgs):
+    if idx % 5000 == 0 or idx == (total_imgs - 1):
+        print('{}: working on img num: {}'.format(threading.current_thread().name,
+                                                  idx))
+    return cv2.imread(img).shape
+
+ex = futures.ThreadPoolExecutor(max_workers=None)
+data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
+print('Starting Img shape computation:')
+train_img_dims_map = ex.map(get_img_shape_parallel,
+                            [record[0] for record in data_inp],
+                            [record[1] for record in data_inp],
+                            [record[2] for record in data_inp])
+train_img_dims = list(train_img_dims_map)
+print('Min Dimensions:', np.min(train_img_dims, axis=0))
+print('Avg Dimensions:', np.mean(train_img_dims, axis=0))
+print('Median Dimensions:', np.median(train_img_dims, axis=0))
+print('Max Dimensions:', np.max(train_img_dims, axis=0))
+
+# Output
+Starting Img shape computation:
+ThreadPoolExecutor-0_0: working on img num: 0
+ThreadPoolExecutor-0_17: working on img num: 5000
+ThreadPoolExecutor-0_15: working on img num: 10000
+ThreadPoolExecutor-0_1: working on img num: 15000
+ThreadPoolExecutor-0_7: working on img num: 17360
+Min Dimensions: [46 46 3]
+Avg Dimensions: [132.77311215 132.45757733 3.]
+Median Dimensions: [130. 130. 3.]
+Max Dimensions: [385 394 3]
+```
+
+We apply parallel processing to speed up the image-read operations and, based on the summary statistics, we will resize each image to 125x125 pixels. Let's load up all of our images and resize them to these fixed dimensions.
+
+
+```
+IMG_DIMS = (125, 125)
+
+def get_img_data_parallel(idx, img, total_imgs):
+    if idx % 5000 == 0 or idx == (total_imgs - 1):
+        print('{}: working on img num: {}'.format(threading.current_thread().name,
+                                                  idx))
+    img = cv2.imread(img)
+    img = cv2.resize(img, dsize=IMG_DIMS,
+                     interpolation=cv2.INTER_CUBIC)
+    img = np.array(img, dtype=np.float32)
+    return img
+
+ex = futures.ThreadPoolExecutor(max_workers=None)
+train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
+val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)]
+test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)]
+
+print('Loading Train Images:')
+train_data_map = ex.map(get_img_data_parallel,
+                        [record[0] for record in train_data_inp],
+                        [record[1] for record in train_data_inp],
+                        [record[2] for record in train_data_inp])
+train_data = np.array(list(train_data_map))
+
+print('\nLoading Validation Images:')
+val_data_map = ex.map(get_img_data_parallel,
+                      [record[0] for record in val_data_inp],
+                      [record[1] for record in val_data_inp],
+                      [record[2] for record in val_data_inp])
+val_data = np.array(list(val_data_map))
+
+print('\nLoading Test Images:')
+test_data_map = ex.map(get_img_data_parallel,
+                       [record[0] for record in test_data_inp],
+                       [record[1] for record in test_data_inp],
+                       [record[2] for record in test_data_inp])
+test_data = np.array(list(test_data_map))
+
+train_data.shape, val_data.shape, test_data.shape
+
+# Output
+Loading Train Images:
+ThreadPoolExecutor-1_0: working on img num: 0
+ThreadPoolExecutor-1_12: working on img num: 5000
+ThreadPoolExecutor-1_6: working on img num: 10000
+ThreadPoolExecutor-1_10: working on img num: 15000
+ThreadPoolExecutor-1_3: working on img num: 17360
+
+Loading Validation Images:
+ThreadPoolExecutor-1_13: working on img num: 0
+ThreadPoolExecutor-1_18: working on img num: 1928
+
+Loading Test Images:
+ThreadPoolExecutor-1_5: working on img num: 0
+ThreadPoolExecutor-1_19: working on img num: 5000
+ThreadPoolExecutor-1_8: working on img num: 8267
+((17361, 125, 125, 3), (1929, 125, 125, 3), (8268, 125, 125, 3))
+```
+
+We leverage parallel processing again to speed up computations pertaining to image load and resizing. Finally, we get our image tensors of the desired dimensions, as depicted in the preceding output. We can now view some sample cell images to get an idea of how our data looks.
+
+
+```
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+plt.figure(1, figsize=(8, 8))
+n = 0
+for i in range(16):
+    n += 1
+    r = np.random.randint(0, train_data.shape[0], 1)
+    plt.subplot(4, 4, n)
+    plt.subplots_adjust(hspace=0.5, wspace=0.5)
+    plt.imshow(train_data[r[0]]/255.)
+    plt.title('{}'.format(train_labels[r[0]]))
+    plt.xticks([]), plt.yticks([])
+```
+
+![Malaria cell samples][16]
+
+Based on these sample images, we can see some subtle differences between malaria and healthy cell images. We will make our deep learning models try to learn these patterns during model training.
+
+Before we can start training our models, we must set up some basic configuration settings.
+
+
+```
+BATCH_SIZE = 64
+NUM_CLASSES = 2
+EPOCHS = 25
+INPUT_SHAPE = (125, 125, 3)
+
+train_imgs_scaled = train_data / 255.
+val_imgs_scaled = val_data / 255.
+
+# encode text category labels
+from sklearn.preprocessing import LabelEncoder
+
+le = LabelEncoder()
+le.fit(train_labels)
+train_labels_enc = le.transform(train_labels)
+val_labels_enc = le.transform(val_labels)
+
+print(train_labels[:6], train_labels_enc[:6])
+
+# Output
+['malaria' 'malaria' 'malaria' 'healthy' 'healthy' 'malaria'] [1 1 1 0 0 1]
+```
+
+We fix our image dimensions, batch size, and epochs and encode our categorical class labels. The alpha version of TensorFlow 2.0 was released in March 2019, and this exercise is the perfect excuse to try it out.
+
+
+```
+import tensorflow as tf
+
+# Load the TensorBoard notebook extension (optional)
+%load_ext tensorboard.notebook
+
+tf.random.set_seed(42)
+tf.__version__
+
+# Output
+'2.0.0-alpha0'
+```
+
+### Deep learning model training
+
+In the model training phase, we will build three deep learning models, train them with our training data, and compare their performance using the validation data. We will then save these models and use them later in the model evaluation phase.
+
+#### Model 1: CNN from scratch
+
+Our first malaria detection model will build and train a basic CNN from scratch. First, let's define our model architecture.
+
+
+```
+inp = tf.keras.layers.Input(shape=INPUT_SHAPE)
+
+conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
+                               activation='relu', padding='same')(inp)
+pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
+conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
+                               activation='relu', padding='same')(pool1)
+pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
+conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3),
+                               activation='relu', padding='same')(pool2)
+pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)
+
+flat = tf.keras.layers.Flatten()(pool3)
+
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(flat)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=inp, outputs=out)
+model.compile(optimizer='adam',
+              loss='binary_crossentropy',
+              metrics=['accuracy'])
+model.summary()
+
+# Output
+Model: "model"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+input_1 (InputLayer) [(None, 125, 125, 3)] 0
+_________________________________________________________________
+conv2d (Conv2D) (None, 125, 125, 32) 896
+_________________________________________________________________
+max_pooling2d (MaxPooling2D) (None, 62, 62, 32) 0
+_________________________________________________________________
+conv2d_1 (Conv2D) (None, 62, 62, 64) 18496
+_________________________________________________________________
+...
+...
+_________________________________________________________________
+dense_1 (Dense) (None, 512) 262656
+_________________________________________________________________
+dropout_1 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_2 (Dense) (None, 1) 513
+=================================================================
+Total params: 15,102,529
+Trainable params: 15,102,529
+Non-trainable params: 0
+_________________________________________________________________
+```
+
+Based on the architecture in this code, our CNN model has three convolution and pooling layers, followed by two dense layers, and dropouts for regularization. Let's train our model.
+
+
+```
+import datetime
+
+logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs',
+                      datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
+tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
+reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
+                                                 patience=2, min_lr=0.000001)
+callbacks = [reduce_lr, tensorboard_callback]
+
+history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
+                    batch_size=BATCH_SIZE,
+                    epochs=EPOCHS,
+                    validation_data=(val_imgs_scaled, val_labels_enc),
+                    callbacks=callbacks,
+                    verbose=1)
+
+
+# Output
+Train on 17361 samples, validate on 1929 samples
+Epoch 1/25
+17361/17361 [====] - 32s 2ms/sample - loss: 0.4373 - accuracy: 0.7814 - val_loss: 0.1834 - val_accuracy: 0.9393
+Epoch 2/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.1725 - accuracy: 0.9434 - val_loss: 0.1567 - val_accuracy: 0.9513
+...
+...
+Epoch 24/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.0036 - accuracy: 0.9993 - val_loss: 0.3693 - val_accuracy: 0.9565
+Epoch 25/25
+17361/17361 [====] - 30s 2ms/sample - loss: 0.0034 - accuracy: 0.9994 - val_loss: 0.3699 - val_accuracy: 0.9559
+```
+
+We get a validation accuracy of 95.6%, which is pretty good, although our model looks to be overfitting slightly (its training accuracy is 99.9%). We can get a clearer perspective on this by plotting the training and validation accuracy and loss curves.
+
+
+```
+f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
+t = f.suptitle('Basic CNN Performance', fontsize=12)
+f.subplots_adjust(top=0.85, wspace=0.3)
+
+max_epoch = len(history.history['accuracy'])+1
+epoch_list = list(range(1,max_epoch))
+ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
+ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
+ax1.set_xticks(np.arange(1, max_epoch, 5))
+ax1.set_ylabel('Accuracy Value')
+ax1.set_xlabel('Epoch')
+ax1.set_title('Accuracy')
+l1 = ax1.legend(loc="best")
+
+ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
+ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
+ax2.set_xticks(np.arange(1, max_epoch, 5))
+ax2.set_ylabel('Loss Value')
+ax2.set_xlabel('Epoch')
+ax2.set_title('Loss')
+l2 = ax2.legend(loc="best")
+```
+
+![Learning curves for basic CNN][17]
+
+Learning curves for basic CNN
+
+We can see that things don't seem to improve much after the fifth epoch. Let's save this model for future evaluation.
+
+
+```
+model.save('basic_cnn.h5')
+```
+
+#### Deep transfer learning
+
+Just like humans have an inherent capability to transfer knowledge across tasks, transfer learning enables us to utilize knowledge from previously learned tasks and apply it to newer, related ones, even in the context of machine learning or deep learning. If you are interested in doing a deep-dive on transfer learning, you can read my article "[A comprehensive hands-on guide to transfer learning with real-world applications in deep learning][18]" and my book [_Hands-On Transfer Learning with Python_][19].
+
+![Ideas for deep transfer learning][20]
+
+The idea we want to explore in this exercise is:
+
+> Can we leverage a pre-trained deep learning model (which was trained on a large dataset, like ImageNet) to solve the problem of malaria detection by applying and transferring its knowledge in the context of our problem?
+
+We will apply the two most popular strategies for deep transfer learning.
+
+ * Pre-trained model as a feature extractor
+ * Pre-trained model with fine-tuning
+
+
+
+We will be using the pre-trained VGG-19 deep learning model, developed by the Visual Geometry Group (VGG) at the University of Oxford, for our experiments. A pre-trained model like VGG-19 is trained on a huge dataset ([ImageNet][21]) with a lot of diverse image categories. Therefore, the model should have learned a robust hierarchy of features, which are spatial-, rotational-, and translation-invariant with regard to features learned by CNN models. Hence, having learned a good representation of features from over a million images, the model can act as a good feature extractor for new images in computer vision problems like malaria detection. Let's discuss the VGG-19 model architecture before unleashing the power of transfer learning on our problem.
+
+##### Understanding the VGG-19 model
+
+The VGG-19 model is a 19-layer (convolution and fully connected) deep learning network built on the ImageNet database, which was developed for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is described in their paper "[Very deep convolutional networks for large-scale image recognition][22]." The architecture of the VGG-19 model is:
+
+![VGG-19 Model Architecture][23]
+
+You can see that we have a total of 16 convolution layers using 3x3 convolution filters along with max pooling layers for downsampling, and two fully connected hidden layers of 4,096 units each, followed by a dense layer of 1,000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers since we will be using our own fully connected dense layers to predict malaria. We are more concerned with the first five blocks, so we can leverage the VGG model as an effective feature extractor.
+
+We will use one of the models as a simple feature extractor by freezing the five convolution blocks to make sure their weights aren't updated after each epoch. For the last model, we will apply fine-tuning to the VGG model, where we will unfreeze the last two blocks (Block 4 and Block 5) so that their weights will be updated in each epoch (per batch of data) as we train our own model.
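+
+If you want to check where these blocks begin and end, a minimal sketch like the following (assuming the same **tf.keras** setup used throughout this article) prints the VGG-19 layer names:
+
+```
+# a minimal sketch: list the VGG-19 layer names (block1_conv1 ... block5_pool)
+# to see exactly which layers the freezing and fine-tuning will touch
+vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
+                                        input_shape=INPUT_SHAPE)
+print([layer.name for layer in vgg.layers])
+```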
+
+#### Model 2: Pre-trained model as a feature extractor
+
+For building this model, we will leverage TensorFlow to load up the VGG-19 model and freeze the convolution blocks so we can use them as an image feature extractor. We will plug in our own dense layers at the end to perform the classification task.
+
+
+```
+vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
+                                        input_shape=INPUT_SHAPE)
+vgg.trainable = False
+# Freeze the layers
+for layer in vgg.layers:
+    layer.trainable = False
+
+base_vgg = vgg
+base_out = base_vgg.output
+pool_out = tf.keras.layers.Flatten()(base_out)
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
+model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4),
+              loss='binary_crossentropy',
+              metrics=['accuracy'])
+model.summary()
+
+# Output
+Model: "model_1"
+_________________________________________________________________
+Layer (type) Output Shape Param #
+=================================================================
+input_2 (InputLayer) [(None, 125, 125, 3)] 0
+_________________________________________________________________
+block1_conv1 (Conv2D) (None, 125, 125, 64) 1792
+_________________________________________________________________
+block1_conv2 (Conv2D) (None, 125, 125, 64) 36928
+_________________________________________________________________
+...
+...
+_________________________________________________________________
+block5_pool (MaxPooling2D) (None, 3, 3, 512) 0
+_________________________________________________________________
+flatten_1 (Flatten) (None, 4608) 0
+_________________________________________________________________
+dense_3 (Dense) (None, 512) 2359808
+_________________________________________________________________
+dropout_2 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_4 (Dense) (None, 512) 262656
+_________________________________________________________________
+dropout_3 (Dropout) (None, 512) 0
+_________________________________________________________________
+dense_5 (Dense) (None, 1) 513
+=================================================================
+Total params: 22,647,361
+Trainable params: 2,622,977
+Non-trainable params: 20,024,384
+_________________________________________________________________
+```
+
+It is evident from this output that we have a lot of layers in our model and we will be using the frozen layers of the VGG-19 model as feature extractors only. You can use the following code to verify how many layers in our model are indeed trainable and how many total layers are present in our network.
+
+
+```
+print("Total Layers:", len(model.layers))
+print("Total trainable layers:",
+sum([1 for l in model.layers if l.trainable]))
+
+# Output
+Total Layers: 28
+Total trainable layers: 6
+```
+
+We will now train our model using configurations and callbacks similar to the ones we used in our previous model; refer to [my GitHub repository][24] for the complete training code.
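+
+Here is a minimal sketch of that training call, assuming the same **callbacks** list and scaled arrays defined for the basic CNN:
+
+```
+# a minimal sketch: train the frozen VGG-19 model with the same
+# configuration (callbacks, scaled arrays) used for the basic CNN
+history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
+                    batch_size=BATCH_SIZE,
+                    epochs=EPOCHS,
+                    validation_data=(val_imgs_scaled, val_labels_enc),
+                    callbacks=callbacks,
+                    verbose=1)
+```
+
+We observe the following plots showing the model's accuracy and loss.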
+
+![Learning curves for frozen pre-trained CNN][25]
+
+Learning curves for frozen pre-trained CNN
+
+This shows that our model is not overfitting as much as our basic CNN model, but its performance is slightly lower than that of our basic CNN. Let's save this model for future evaluation.
+
+
+```
+model.save('vgg_frozen.h5')
+```
+
+#### Model 3: Fine-tuned pre-trained model with image augmentation
+
+In our final model, we will fine-tune the weights of the layers in the last two blocks of our pre-trained VGG-19 model. We will also introduce the concept of image augmentation. The idea behind image augmentation is exactly as the name sounds. We load in existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of existing images. Due to these random transformations, we don't get the same images each time. We will leverage an excellent utility called **ImageDataGenerator** in **tf.keras** that can help build image augmentors.
+
+
+```
+train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
+                                                                zoom_range=0.05,
+                                                                rotation_range=25,
+                                                                width_shift_range=0.05,
+                                                                height_shift_range=0.05,
+                                                                shear_range=0.05,
+                                                                horizontal_flip=True,
+                                                                fill_mode='nearest')
+
+val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
+
+# build image augmentation generators
+train_generator = train_datagen.flow(train_data, train_labels_enc, batch_size=BATCH_SIZE, shuffle=True)
+val_generator = val_datagen.flow(val_data, val_labels_enc, batch_size=BATCH_SIZE, shuffle=False)
+```
+
+We will not apply any transformations on our validation dataset (except for scaling the images, which is mandatory) since we will be using it to evaluate our model performance per epoch. For a detailed explanation of image augmentation in the context of transfer learning, feel free to check out my [article][18] cited above. Let's look at some sample results from a batch of image augmentation transforms.
+
+
+```
+img_id = 0
+sample_generator = train_datagen.flow(train_data[img_id:img_id+1], train_labels[img_id:img_id+1],
+                                      batch_size=1)
+sample = [next(sample_generator) for i in range(0, 5)]
+fig, ax = plt.subplots(1, 5, figsize=(16, 6))
+print('Labels:', [item[1][0] for item in sample])
+l = [ax[i].imshow(sample[i][0][0]) for i in range(0, 5)]
+```
+
+![Sample augmented images][26]
+
+You can clearly see the slight variations of our images in the preceding output. We will now build our deep learning model, making sure the last two blocks of the VGG-19 model are trainable.
+
+
+```
+vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
+                                        input_shape=INPUT_SHAPE)
+# Unfreeze blocks 4 and 5; keep the earlier blocks frozen
+vgg.trainable = True
+
+set_trainable = False
+for layer in vgg.layers:
+    if layer.name in ['block5_conv1', 'block4_conv1']:
+        set_trainable = True
+    if set_trainable:
+        layer.trainable = True
+    else:
+        layer.trainable = False
+
+base_vgg = vgg
+base_out = base_vgg.output
+pool_out = tf.keras.layers.Flatten()(base_out)
+hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
+drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
+hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
+drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
+
+out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
+
+model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
+model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5),
+              loss='binary_crossentropy',
+              metrics=['accuracy'])
+
+print("Total Layers:", len(model.layers))
+print("Total trainable layers:", sum([1 for l in model.layers if l.trainable]))
+
+# Output
+Total Layers: 28
+Total trainable layers: 16
+```
+
+We reduce the learning rate in our model since we don't want to make too-large weight updates to the pre-trained layers when fine-tuning. The model's training process will be slightly different since we are using data generators, so we will be leveraging the **fit_generator(…)** function.
+
+
+```
+tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
+reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
+                                                 patience=2, min_lr=0.000001)
+
+callbacks = [reduce_lr, tensorboard_callback]
+train_steps_per_epoch = train_generator.n // train_generator.batch_size
+val_steps_per_epoch = val_generator.n // val_generator.batch_size
+history = model.fit_generator(train_generator, steps_per_epoch=train_steps_per_epoch, epochs=EPOCHS,
+                              validation_data=val_generator, validation_steps=val_steps_per_epoch,
+                              callbacks=callbacks, verbose=1)
+
+# Output
+Epoch 1/25
+271/271 [====] - 133s 489ms/step - loss: 0.2267 - accuracy: 0.9117 - val_loss: 0.1414 - val_accuracy: 0.9531
+Epoch 2/25
+271/271 [====] - 129s 475ms/step - loss: 0.1399 - accuracy: 0.9552 - val_loss: 0.1292 - val_accuracy: 0.9589
+...
+...
+Epoch 24/25
+271/271 [====] - 128s 473ms/step - loss: 0.0815 - accuracy: 0.9727 - val_loss: 0.1466 - val_accuracy: 0.9682
+Epoch 25/25
+271/271 [====] - 128s 473ms/step - loss: 0.0792 - accuracy: 0.9729 - val_loss: 0.1127 - val_accuracy: 0.9641
+```
+
+This looks to be our best model yet. It gives us a validation accuracy of almost 96.5% and, based on the training accuracy, it doesn't look like our model is overfitting as much as our first model. This can be verified with the following learning curves.
+
+![Learning curves for fine-tuned pre-trained CNN][27]
+
+Learning curves for fine-tuned pre-trained CNN
+
+Let's save this model so we can use it for model evaluation on our test dataset.
+
+
+```
+model.save('vgg_finetuned.h5')
+```
+
+This completes our model training phase. We are now ready to test the performance of our models on the actual test dataset!
+
+### Deep learning model performance evaluation
+
+We will evaluate the three models we built in the training phase by making predictions with them on the data from our test dataset—because just validation is not enough! We have also built a nifty utility module called **model_evaluation_utils** , which we can use to evaluate the performance of our deep learning models with relevant classification metrics. The first step is to scale our test data.
+
+
+```
+test_imgs_scaled = test_data / 255.
+test_imgs_scaled.shape, test_labels.shape
+
+# Output
+((8268, 125, 125, 3), (8268,))
+```
+
+The next step involves loading our saved deep learning models and making predictions on the test data.
+
+
+```
+# Load Saved Deep Learning Models
+basic_cnn = tf.keras.models.load_model('./basic_cnn.h5')
+vgg_frz = tf.keras.models.load_model('./vgg_frozen.h5')
+vgg_ft = tf.keras.models.load_model('./vgg_finetuned.h5')
+
+# Make Predictions on Test Data
+basic_cnn_preds = basic_cnn.predict(test_imgs_scaled, batch_size=512)
+vgg_frz_preds = vgg_frz.predict(test_imgs_scaled, batch_size=512)
+vgg_ft_preds = vgg_ft.predict(test_imgs_scaled, batch_size=512)
+
+basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+                                              for pred in basic_cnn_preds.ravel()])
+vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+                                            for pred in vgg_frz_preds.ravel()])
+vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
+                                           for pred in vgg_ft_preds.ravel()])
+```
+
+The final step is to leverage our **model_evaluation_utils** module and check the performance of each model with relevant classification metrics.
+
+
+```
+import model_evaluation_utils as meu
+import pandas as pd
+
+basic_cnn_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=basic_cnn_pred_labels)
+vgg_frz_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_frz_pred_labels)
+vgg_ft_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_ft_pred_labels)
+
+pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics],
+             index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned'])
+```
+
+![Model accuracy][28]
+
+It looks like our third model performs best on the test dataset, giving a model accuracy and an F1-score of 96%, which is pretty good and quite comparable to the more complex models cited in the research paper and articles mentioned earlier.
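+
+If you don't have the **model_evaluation_utils** module handy, a minimal sketch of equivalent metrics computed directly with scikit-learn (my substitution, not part of the original pipeline) would be:
+
+```
+# a minimal sketch: weighted accuracy/precision/recall/F1 via scikit-learn,
+# standing in for the custom model_evaluation_utils module
+from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+
+def get_basic_metrics(true_labels, predicted_labels):
+    precision, recall, f1, _ = precision_recall_fscore_support(
+        true_labels, predicted_labels, average='weighted')
+    return {'Accuracy': accuracy_score(true_labels, predicted_labels),
+            'Precision': precision, 'Recall': recall, 'F1 Score': f1}
+
+print(get_basic_metrics(test_labels, vgg_ft_pred_labels))
+```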
+
+### Conclusion
+
+Malaria detection is not an easy procedure, and the availability of qualified personnel around the globe is a serious concern in the diagnosis and treatment of cases. We looked at an interesting real-world medical imaging case study of malaria detection. Easy-to-build, open source techniques leveraging AI can give us state-of-the-art accuracy in detecting malaria, thus enabling AI for social good.
+
+I encourage you to check out the articles and research papers mentioned in this article, without which it would have been impossible for me to conceptualize and write it. If you are interested in running or adopting these techniques, all the code used in this article is available on [my GitHub repository][24]. Remember to download the data from the [official website][11].
+
+Let's hope for more adoption of open source AI capabilities in healthcare to make it less expensive and more accessible for everyone around the world!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/detecting-malaria-deep-learning
+
+作者:[Dipanjan (DJ) Sarkar (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/djsarkar
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC
+[2]: https://opensource.com/sites/default/files/uploads/malaria1_python-tensorflow.png (Python and TensorFlow)
+[3]: https://opensource.com/sites/default/files/uploads/malaria2_malaria-heat-map.png (Malaria heat map)
+[4]: https://www.who.int/features/factfiles/malaria/en/
+[5]: https://peerj.com/articles/4568/
+[6]: https://blog.insightdatascience.com/https-blog-insightdatascience-com-malaria-hero-a47d3d5fc4bb
+[7]: https://www.pyimagesearch.com/2018/12/03/deep-learning-and-medical-image-analysis-with-keras/
+[8]: https://opensource.com/sites/default/files/uploads/malaria3_blood-smear.png (Blood smear workflow for Malaria detection)
+[9]: http://cs231n.github.io/convolutional-networks/
+[10]: https://opensource.com/sites/default/files/uploads/malaria4_cnn-architecture.png (A typical CNN architecture)
+[11]: https://ceb.nlm.nih.gov/repositories/malaria-datasets/
+[12]: https://www.ncbi.nlm.nih.gov/pubmed/29360430
+[13]: https://opensource.com/sites/default/files/uploads/malaria5_dependencies.png (Installing dependencies)
+[14]: https://opensource.com/sites/default/files/uploads/malaria6_tree-dependency.png (Installing the tree dependency)
+[15]: https://opensource.com/sites/default/files/uploads/malaria7_dataset.png (Datasets)
+[16]: https://opensource.com/sites/default/files/uploads/malaria8_cell-samples.png (Malaria cell samples)
+[17]: https://opensource.com/sites/default/files/uploads/malaria9_learningcurves.png (Learning curves for basic CNN)
+[18]: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
+[19]: https://github.com/dipanjanS/hands-on-transfer-learning-with-python
+[20]: https://opensource.com/sites/default/files/uploads/malaria10_transferideas.png (Ideas for deep transfer learning)
+[21]: http://image-net.org/index
+[22]: https://arxiv.org/pdf/1409.1556.pdf
+[23]: https://opensource.com/sites/default/files/uploads/malaria11_vgg-19-model-architecture.png (VGG-19 Model Architecture)
+[24]: https://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/tree/master/os_malaria_detection/
+[25]: https://opensource.com/sites/default/files/uploads/malaria12_learningcurves.png (Learning curves for frozen pre-trained CNN)
+[26]: https://opensource.com/sites/default/files/uploads/malaria13_sampleimages.png (Sample augmented images)
+[27]: https://opensource.com/sites/default/files/uploads/malaria14_learningcurves.png (Learning curves for fine-tuned pre-trained CNN)
+[28]: https://opensource.com/sites/default/files/uploads/malaria15_modelaccuracy.png (Model accuracy)
From a2845c04025bc1f63401ba085648fb2106a2c3b5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:48:27 +0800
Subject: [PATCH 0011/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Can?=
=?UTF-8?q?=20schools=20be=20agile=3F=20sources/tech/20190416=20Can=20scho?=
=?UTF-8?q?ols=20be=20agile.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
sources/tech/20190416 Can schools be agile.md | 79 +++++++++++++++++++
1 file changed, 79 insertions(+)
create mode 100644 sources/tech/20190416 Can schools be agile.md
diff --git a/sources/tech/20190416 Can schools be agile.md b/sources/tech/20190416 Can schools be agile.md
new file mode 100644
index 0000000000..065b313c05
--- /dev/null
+++ b/sources/tech/20190416 Can schools be agile.md
@@ -0,0 +1,79 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Can schools be agile?)
+[#]: via: (https://opensource.com/open-organization/19/4/education-culture-agile)
+[#]: author: (Ben Owens https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins)
+
+Can schools be agile?
+======
+We certainly don't need to run our schools like businesses—but we could
+benefit from educational organizations more focused on continuous
+improvement.
+![][1]
+
+We've all had those _deja vu_ moments that make us think "I've seen this before!" I experienced them often in the late 1980s, when I first began my career in industry. I was caught up in a wave of organizational change, where the U.S. manufacturing sector was experimenting with various models that asked leaders, managers, and engineers like me to rethink how we approached things like quality, cost, innovation, and shareholder value. It seemed as if every year (sometimes, more frequently) we'd study yet another book to identify the "best practices" necessary for making us leaner, flatter, more nimble, and more responsive to the needs of the customer.
+
+Many of the approaches were so transformational that their core principles still resonate with me today. Specific ideas and methods from thought leaders such as John Kotter, Peter Drucker, W. Edwards Deming, and Peter Senge were truly pivotal for our ability to rethink our work, as were the adoption of process improvement methods such as Six Sigma and those embodied in the "Toyota Way."
+
+But others seemed to simply repackage these same ideas with a sexy new twist—hence my _deja vu_.
+
+And yet when I began my career as a teacher, I encountered a context that _didn't_ give me that feeling: education. In fact, I was surprised to find that "getting better all the time" was _not_ the same high priority in my new profession that it was in my old one (particularly at the level of my role as a classroom teacher).
+
+Why aren't more educational organizations working to create cultures of continuous improvement? I can think of several reasons, but let me address two.
+
+### Widgets no more
+
+The first barrier to a culture of continuous improvement is education's general reticence to look at other professions for ideas it can adapt and adopt—especially ideas from the business community. The second is education's prevailing leadership model, which remains top-down and rooted in hierarchy. Conversations about systemic, continuous improvement tend to be the purview of a relatively small group of school or district leaders: principals, assistant principals, superintendents, and the like. But widespread organizational culture change can't occur if only one small group is involved in it.
+
+Before unpacking these points a bit further, I'd like to emphasize that there are certainly exceptions to the above generalization (many of which I have seen firsthand) and that there are two basic assumptions that I think any education stakeholder should be able to agree with:
+
+ 1. Continuous improvement must be an essential mindset for _anyone_ involved in the work of providing high-quality and equitable teaching and learning systems for students, and
+ 2. Decisions by leaders of our schools will more greatly benefit students and the communities in which they live when those decisions are informed and influenced by those who work closest with students.
+
+
+
+So why a tendency to ignore (or be outright hostile toward) ideas that come from outside the education space?
+
+I, for example, have certainly faced criticism in the past for suggesting that we look to other professions for ideas and inspiration that can help us better meet the needs of students. A common refrain is something like: "You're trying to treat our students like widgets!" But how could our students be treated any more like widgets than they already are? They matriculate through school in age-based cohorts, going from siloed class to class each day by the sound of a shrill bell, and receive grades based on arbitrary tests that emphasize sameness over individuality.
+
+It may be news to many inside of education, but widgets—abstract units of production that evoke the idea of assembly line standardization—are not a significant part of the modern manufacturing sector. Thanks to the culture of continuous improvement described above, modern, advanced manufacturing delivers just what the individual customer wants, at a competitive price, exactly when she wants it. If we adapted this model to our schools, teachers would be more likely to collaborate and constantly refine their unique paths of growth for all students based on just-in-time needs and desires—regardless of the time, subject, or any other traditional norm.
+
+What I'm advocating is a clear-eyed and objective look at any idea from any sector with potential to help us better meet the needs of individual students, not that we somehow run our schools like businesses. In order for this to happen effectively, however, we need to scrutinize a leadership structure that has frankly remained stagnant for over 100 years.
+
+### Toward continuous improvement
+
+While I certainly appreciate the argument that education is an animal significantly different from other professions, I also believe that rethinking an organizational and leadership structure is an applicable exercise for any entity wanting to remain responsible (and responsive) to the needs of its stakeholders. Most other professions have taken a hard look at their traditional, closed, hierarchical structures and moved to ones that encourage collective autonomy per shared goals of excellence—organizational elements essential for continuous improvement. It's time our schools and districts do the same by expanding their horizon beyond sources that, while well intended, are developed from a lens of the current paradigm.
+
+Not surprisingly, a go-to resource I recommend to any school wanting to begin or accelerate this process is _The Open Organization_ by Jim Whitehurst. Not only does the book provide a window into how educators can create more open, inclusive leadership structures—where mutual respect enables nimble decisions to be made per real-time data—but it does so in language easily adaptable to the rather strange lexicon that's second nature to educators. Open organization thinking provides pragmatic ways any organization can empower members to be more open: sharing ideas and resources, embracing a culture of collaborative participation as a top priority, developing an innovation mindset through rapid prototyping, valuing ideas based on merit rather than the rank of the person proposing them, and building a strong sense of community that's baked into the organization's DNA. Such an open organization crowd-sources ideas from both inside and outside its formal structure and creates the type of environment that enables localized, student-centered innovations to thrive.
+
+Here's the bottom line: Essential to a culture of continuous improvement is recognizing that what we've done in the past may not be suitable in a rapidly changing future. For educators, that means we simply can't rely on solutions and practices we developed in a factory-model paradigm. We must acknowledge countless examples of best practices from other sectors—such as non-profits, the military, the medical profession, and yes, even business—that can at least _inform_ how we rethink what we do in the best interest of students. By moving beyond the traditionally sanctioned "eduspeak" world, we create opportunities for considering perspectives. We can better see the forest for the trees, taking a more objective look at the problems we face, as well as acknowledging what we do very well.
+
+Intentionally considering ideas from all sources—from first-year classroom teachers to the latest NYT Business & Management Leadership bestseller—offers us a powerful way to engage existing talent within our schools to help overcome the institutionalized inertia that has prevented more positive change from taking hold in our schools and districts.
+
+Relentlessly pursuing methods of continuous improvement should not be a behavior confined to organizations fighting to remain competitive in a global, innovation economy, nor should it be left to a select few charged with the operation of our schools. When everyone in an organization is always thinking about what they can do differently _today_ to improve what they did _yesterday_ , then you have an organization living a culture of excellence. That's the kind of radically collaborative and innovative culture we should especially expect for organizations focused on changing the lives of young people.
+
+I'm eagerly awaiting the day when I enter a school, recognize that spirit, and smile to myself as I say, "I've seen this before."
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/4/education-culture-agile
+
+作者:[Ben Owens][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_network.png?itok=ySEHuAQ8
From 0362111d4ef181ffb3586d960a035b5db0433a31 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:48:37 +0800
Subject: [PATCH 0012/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Inte?=
=?UTF-8?q?r-process=20communication=20in=20Linux:=20Using=20pipes=20and?=
=?UTF-8?q?=20message=20queues=20sources/tech/20190416=20Inter-process=20c?=
=?UTF-8?q?ommunication=20in=20Linux-=20Using=20pipes=20and=20message=20qu?=
=?UTF-8?q?eues.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...n Linux- Using pipes and message queues.md | 531 ++++++++++++++++++
1 file changed, 531 insertions(+)
create mode 100644 sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md
diff --git a/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md b/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md
new file mode 100644
index 0000000000..a2472dbc92
--- /dev/null
+++ b/sources/tech/20190416 Inter-process communication in Linux- Using pipes and message queues.md
@@ -0,0 +1,531 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Using pipes and message queues
+======
+Learn how processes synchronize with each other in Linux.
+![Chat bubbles][1]
+
+This is the second article in a series about [interprocess communication][2] (IPC) in Linux. The [first article][3] focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a _write end_ for writing bytes, and a _read end_ for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.
+
+Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at message queues, which have fallen out of fashion—but undeservedly so.
+
+The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and message queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.
+
+Consider the [man pages for the **mq_open**][4] function, which belongs to the message queue API. These pages include a section on [Attributes][5] with this small table:
+
+Interface | Attribute | Value
+---|---|---
+mq_open() | Thread safety | MT-Safe
+
+The value **MT-Safe** (with **MT** for multi-threaded) means that the **mq_open** function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the _same_ process, such a condition cannot arise among threads in different processes. The **MT-Safe** attribute assures that a race condition does not arise in invocations of **mq_open**. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
+
+### Unnamed pipes
+
+Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar **|** represents an unnamed pipe at the command line. Assume **%** is the command line prompt, and consider this command:
+
+
+```
+% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
+```
+
+The _sleep_ and _echo_ utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting _Hello, world!_ appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the _sleep_ and _echo_ processes have exited. What's going on?
+
+In the vertical-bar syntax from the command line, the process to the left ( _sleep_ ) is the writer, and the process to the right ( _echo_ ) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.
+
+In the contrived example, the _sleep_ process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the _echo_ process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the _sleep_ and _echo_ processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.
+
+Here is a more useful example using two unnamed pipes. Suppose that the file _test.dat_ looks like this:
+
+
+```
+this
+is
+the
+way
+the
+world
+ends
+```
+
+The command:
+
+
+```
+% cat test.dat | sort | uniq
+```
+
+pipes the output from the _cat_ (concatenate) process into the _sort_ process to produce sorted output, and then pipes the sorted output into the _uniq_ process to eliminate duplicate records (in this case, the two occurrences of **the** reduce to one):
+
+
+```
+ends
+is
+the
+this
+way
+world
+```
+
+The scene now is set for a program with two processes that communicate through an unnamed pipe.
+
+#### Example 1. Two processes communicating through an unnamed pipe.
+
+
+```
+#include <sys/wait.h> /* wait */
+#include <stdio.h>
+#include <stdlib.h>   /* exit functions */
+#include <unistd.h>   /* read, write, pipe, _exit */
+#include <string.h>
+
+#define ReadEnd  0
+#define WriteEnd 1
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /** failure **/
+}
+
+int main() {
+  int pipeFDs[2]; /* two file descriptors */
+  char buf;       /* 1-byte buffer */
+  const char* msg = "Nature's first green is gold\n"; /* bytes to write */
+
+  if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
+  pid_t cpid = fork();                   /* fork a child process */
+  if (cpid < 0) report_and_exit("fork"); /* check for failure */
+
+  if (0 == cpid) {            /*** child ***/
+    close(pipeFDs[WriteEnd]); /* child reads, doesn't write */
+
+    while (read(pipeFDs[ReadEnd], &buf, 1) > 0) /* read until end of byte stream */
+      write(STDOUT_FILENO, &buf, sizeof(buf));  /* echo to the standard output */
+
+    close(pipeFDs[ReadEnd]); /* close the ReadEnd: all done */
+    _exit(0);                /* exit and notify parent at once */
+  }
+  else {                     /*** parent ***/
+    close(pipeFDs[ReadEnd]); /* parent writes, doesn't read */
+
+    write(pipeFDs[WriteEnd], msg, strlen(msg)); /* write the bytes to the pipe */
+    close(pipeFDs[WriteEnd]);                   /* done writing: generate eof */
+
+    wait(NULL); /* wait for child to exit */
+    exit(0);    /* exit normally */
+  }
+  return 0;
+}
+```
+
+The _pipeUN_ program above uses the system function **fork** to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function **fork** works:
+
+ * The **fork** function, called in the _parent_ process, returns **-1** to the parent in case of failure. In the _pipeUN_ example, the call is `pid_t cpid = fork(); /* called in parent */` and the returned value is stored, in this example, in the variable **cpid** of integer type **pid_t**. (Every process has its own _process ID_ , a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full _process table_ , a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if they are not harvested.
+ * If the **fork** call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the _same_ code that follows the call to **fork**. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to **fork** returns:
+ * Zero to the child process
+ * The child's process ID to the parent
+ * An _if/else_ or equivalent construct typically is used after a successful **fork** call to segregate code meant for the parent from code meant for the child. In this example, the construct is:
+
+```
+if (0 == cpid) { /*** child ***/
+  ...
+}
+else {           /*** parent ***/
+  ...
+}
+```
+
+If forking a child succeeds, the _pipeUN_ program proceeds as follows. There is an integer array:
+```
+int pipeFDs[2]; /* two file descriptors */
+```
+to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element **pipeFDs[0]** is the file descriptor for the read end, and the array element **pipeFDs[1]** is the file descriptor for the write end.) A successful call to the system **pipe** function, made immediately before the call to **fork** , populates the array with the two file descriptors:
+```
+if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
+```
+The parent and the child now have copies of both file descriptors, but the _separation of concerns_ pattern means that each process requires exactly one of the descriptors. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child _if_ -clause code, therefore, closes the pipe's write end:
+```
+close(pipeFDs[WriteEnd]); /* called in child code */
+```
+and the first statement in the parent _else_ -clause code closes the pipe's read end:
+```
+close(pipeFDs[ReadEnd]); /* called in parent code */
+```
+The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.
+
+One more aspect of the program needs clarification: the call to the **wait** function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short _pipeUN_ program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.
+
+What if the parent terminates before the child? In this case, unless precautions are taken, the child becomes and remains a _zombie_ process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination:
+```
+signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
+```
+A second approach is to have the parent execute a **wait** on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the _pipeUN_ program, where the parent code has this call:
+```
+wait(NULL); /* called in parent */
+```
+This call to **wait** means _wait until the termination of any child occurs_ , and in the _pipeUN_ program, there is only one child process. (The **NULL** argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible **waitpid** function for fine-grained control, e.g., for specifying a particular child process among several.
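+
+As a small sketch (hypothetical, not part of the _pipeUN_ program), waiting on one particular child and inspecting its exit status with **waitpid** would look like this:
+
+```
+/* a minimal sketch, not from pipeUN: wait on a specific child
+   and inspect how it exited */
+int status;
+pid_t done = waitpid(cpid, &status, 0); /* 0: no special options */
+if (done == cpid && WIFEXITED(status))
+  printf("child exited with status %i\n", WEXITSTATUS(status));
+```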
+
+The _pipeUN_ program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular **exit** function. By contrast, the child terminates with a call to the **_exit** variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.
+
+If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes:
+```
+foo bar
+```
+to a pipe and process P2 concurrently writes:
+```
+baz baz
+```
+to the same pipe, it seems that the pipe contents might be something arbitrary, such as:
+```
+baz foo baz bar
+```
+The POSIX standard ensures that writes are not interleaved so long as no write exceeds **PIPE_BUF** bytes. On Linux systems, **PIPE_BUF** is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
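+
+As a quick check (a minimal sketch, not part of the original programs), you can print this limit on your own system, since Linux defines **PIPE_BUF** in the standard header **limits.h**:
+
+```
+#include <limits.h> /* defines PIPE_BUF */
+#include <stdio.h>
+
+int main() {
+  printf("PIPE_BUF on this system: %i bytes\n", PIPE_BUF); /* 4096 on Linux */
+  return 0;
+}
+```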
+
+### Named pipes
+
+An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.
+
+Let's look at another command line example to get the gist of named pipes. Here are the steps:
+
+ * Open two terminals. The working directory should be the same for both.
+ * In one of the terminals, enter these two commands (the prompt again is **%** , and my comments start with **##** ):
+
+```
+% mkfifo tester  ## creates a backing file named tester
+% cat tester     ## type the pipe's contents to stdout
+```
+
+ At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.
+ * In the second terminal, enter the command:
+
+```
+% cat > tester  ## redirect keyboard input to the pipe
+hello, world!   ## then hit Return key
+bye, bye        ## ditto
+                ## terminate session with a Control-C
+```
+
+ Whatever is typed into this terminal is echoed in the other. Once **Ctrl+C** is entered, the regular command line prompt returns in both terminals: the pipe has been closed.
+ * Clean up by removing the file that implements the named pipe:
+
+```
+% unlink tester
+```
+
+
+
+As the utility's name _mkfifo_ implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named **mkfifo** that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.
+
+#### Example 2. The _fifoWriter_ program
+
+
+```
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <stdio.h>
+
+#define MaxLoops 12000   /* outer loop */
+#define ChunkSize 16     /* how many chunks written at a time */
+#define IntsPerChunk 4   /* four 4-byte ints per chunk */
+#define MaxZs 250        /* max microseconds to sleep */
+
+int main() {
+  const char* pipeName = "./fifoChannel";
+  mkfifo(pipeName, 0666);                      /* read/write for user/group/others */
+  int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
+  if (fd < 0) return -1;                       /* can't go on */
+
+  int i;
+  for (i = 0; i < MaxLoops; i++) {    /* write MaxLoops times */
+    int j;
+    for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize chunks */
+      int k;
+      int chunk[IntsPerChunk];
+      for (k = 0; k < IntsPerChunk; k++)
+        chunk[k] = rand();
+      write(fd, chunk, sizeof(chunk));
+    }
+    usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
+  }
+
+  close(fd);        /* close pipe: generates an end-of-stream marker */
+  unlink(pipeName); /* unlink from the implementing file */
+  printf("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
+
+  return 0;
+}
+```
+
+The _fifoWriter_ program above can be summarized as follows:
+
+ * The program creates a named pipe for writing:
+
+```
+mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
+int fd = open(pipeName, O_CREAT | O_WRONLY);
+```
+
+ where **pipeName** is the name of the backing file passed to **mkfifo** as the first argument. The named pipe then is opened with the by-now familiar call to the **open** function, which returns a file descriptor.
+ * For a touch of realism, the _fifoWriter_ does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.
+ * After closing the named pipe, the _fifoWriter_ also unlinks the file:
+
+```
+close(fd);        /* close pipe: generates end-of-stream marker */
+unlink(pipeName); /* unlink from the implementing file */
+```
+
+ The system reclaims the backing file once every process connected to the pipe has performed the unlink operation. In this example, there are only two such processes: the _fifoWriter_ and the _fifoReader_ , both of which do an _unlink_ operation.
+
+
+
+The two programs should be executed in different terminals with the same working directory. However, the _fifoWriter_ should be started before the _fifoReader_ , as the former creates the pipe. The _fifoReader_ then accesses the already created named pipe.
+
+#### Example 3. The _fifoReader_ program
+
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+unsigned is_prime(unsigned n) { /* not pretty, but efficient */
+  if (n <= 3) return n > 1;
+  if (0 == (n % 2) || 0 == (n % 3)) return 0;
+
+  unsigned i;
+  for (i = 5; (i * i) <= n; i += 6)
+    if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
+
+  return 1; /* found a prime! */
+}
+
+int main() {
+  const char* file = "./fifoChannel";
+  int fd = open(file, O_RDONLY);
+  if (fd < 0) return -1; /* no point in continuing */
+  unsigned total = 0, primes_count = 0;
+
+  while (1) {
+    int next;
+
+    ssize_t count = read(fd, &next, sizeof(int));
+    if (0 == count) break;           /* end of stream */
+    else if (count == sizeof(int)) { /* read a 4-byte int value */
+      total++;
+      if (is_prime(next)) primes_count++;
+    }
+  }
+
+  close(fd);    /* close pipe from read end */
+  unlink(file); /* unlink from the underlying file */
+  printf("Received ints: %u, primes: %u\n", total, primes_count);
+
+  return 0;
+}
+```
+
+The _fifoReader_ program above can be summarized as follows:
+
+ * Because the _fifoWriter_ creates the named pipe, the _fifoReader_ needs only the standard call **open** to access the pipe through the backing file:
+
+```
+const char* file = "./fifoChannel";
+int fd = open(file, O_RDONLY);
+```
+
+ The file opens as read-only.
+ * The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The **read** call `ssize_t count = read(fd, &next, sizeof(int));` returns 0 to indicate end-of-stream, in which case the _fifoReader_ breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.
+ * After reading a 4-byte integer, the _fifoReader_ checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
+
+
+
+On repeated sample runs, the _fifoReader_ successfully read all of the bytes that the _fifoWriter_ wrote. This is not surprising. The two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and, therefore, in wide use.
+
+Here is the output from the two programs, each launched from a separate terminal but with the same working directory:
+
+
+```
+% ./fifoWriter
+768000 ints sent to the pipe.
+###
+% ./fifoReader
+Received ints: 768000, primes: 37682
+```
+
+### Message queues
+
+Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.
+
+As the name suggests, a message queue is a sequence of messages, each of which has two parts:
+
+ * The payload, which is an array of bytes ( **char** in C)
+ * A type, given as a positive integer value; types categorize messages for flexible retrieval
+
+
+
+Consider the following depiction of a message queue, with each message labeled with an integer type:
+
+
+```
+          +-+    +-+    +-+    +-+
+sender--->|3|--->|2|--->|2|--->|1|--->receiver
+          +-+    +-+    +-+    +-+
+```
+
+Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
+
+The _mqueue_ example consists of two programs, the _sender_ that writes to the message queue and the _receiver_ that reads from this queue. Both programs include the header file _queue.h_ shown below:
+
+#### Example 4. The header file _queue.h_
+
+
+```
+#define ProjectId 123
+#define PathName "queue.h" /* any existing, accessible file would do */
+#define MsgLen 4
+#define MsgCount 6
+
+typedef struct {
+  long type;                /* must be of type long */
+  char payload[MsgLen + 1]; /* bytes in the message */
+} queuedMessage;
+```
+
+The header file defines a structure type named **queuedMessage** , with **payload** (byte array) and **type** (integer) fields. This file also defines symbolic constants (the **#define** statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The **ProjectId** can be any positive integer value, and the **PathName** must be an existing, accessible file—in this case, the file _queue.h_. The setup statements in both the _sender_ and the _receiver_ programs are:
+
+
+```
+key_t key = ftok(PathName, ProjectId); /* generate key */
+int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
+```
+
+The ID **qid** is, in effect, the counterpart of a file descriptor for message queues.
+
+#### Example 5. The message _sender_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include "queue.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  key_t key = ftok(PathName, ProjectId);
+  if (key < 0) report_and_exit("couldn't get key...");
+
+  int qid = msgget(key, 0666 | IPC_CREAT);
+  if (qid < 0) report_and_exit("couldn't get queue id...");
+
+  char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
+  int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
+  int i;
+  for (i = 0; i < MsgCount; i++) {
+    /* build the message */
+    queuedMessage msg;
+    msg.type = types[i];
+    strcpy(msg.payload, payloads[i]);
+
+    /* send the message */
+    msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT); /* don't block */
+    printf("%s sent as type %i\n", msg.payload, (int) msg.type);
+  }
+  return 0;
+}
+```
+
+The _sender_ program above sends out six messages, two each of a specified type: the first two messages are of type 1, the next two of type 2, and the last two of type 3. The sending statement:
+
+
+```
+msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
+```
+
+is configured to be non-blocking (the flag **IPC_NOWAIT** ) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure. The _receiver_ program below also receives messages using the **IPC_NOWAIT** flag.
+
+#### Example 6. The message _receiver_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+#include "queue.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
+  if (key < 0) report_and_exit("key not gotten...");
+
+  int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
+  if (qid < 0) report_and_exit("no access to queue...");
+
+  int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
+  int i;
+  for (i = 0; i < MsgCount; i++) {
+    queuedMessage msg; /* defined in queue.h */
+    if (msgrcv(qid, &msg, sizeof(msg), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
+      puts("msgrcv trouble...");
+    printf("%s received as type %i\n", msg.payload, (int) msg.type);
+  }
+
+  /** remove the queue **/
+  if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
+    report_and_exit("trouble removing queue...");
+
+  return 0;
+}
+```
+
+The _receiver_ program does not create the message queue, although the API suggests as much. In the _receiver_ , the call:
+
+
+```
+int qid = msgget(key, 0666 | IPC_CREAT);
+```
+
+is misleading because of the **IPC_CREAT** flag, but this flag really means _create if needed, otherwise access_. The _sender_ program calls **msgsnd** to send messages, whereas the _receiver_ calls **msgrcv** to retrieve them. In this example, the _sender_ sends the messages in the order 1-1-2-2-3-3, but the _receiver_ then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior:
+
+
+```
+% ./sender
+msg1 sent as type 1
+msg2 sent as type 1
+msg3 sent as type 2
+msg4 sent as type 2
+msg5 sent as type 3
+msg6 sent as type 3
+
+% ./receiver
+msg5 received as type 3
+msg1 received as type 1
+msg3 received as type 2
+msg2 received as type 1
+msg6 received as type 3
+msg4 received as type 2
+```
+
+The output above shows that the _sender_ and the _receiver_ can be launched from the same terminal. The output also shows that the message queue persists even after the _sender_ process creates the queue, writes to it, and exits. The queue goes away only after the _receiver_ process explicitly removes it with the call to **msgctl** :
+
+
+```
+if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
+```
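+
+One related point: because **IPC_CREAT** by itself means _create if needed, otherwise access_, a program that must know whether it is the queue's creator can add the **IPC_EXCL** flag, which makes **msgget** fail with **EEXIST** when the queue already exists. A minimal sketch of this idiom, which the example programs do not use (the helper's name is illustrative):
+
+
+```
+#include <errno.h>
+#include <sys/ipc.h>
+#include <sys/msg.h>
+
+int create_queue_exclusively(key_t key) {
+  int qid = msgget(key, 0666 | IPC_CREAT | IPC_EXCL);
+  if (qid < 0 && errno == EEXIST) /* another process created it first */
+    qid = msgget(key, 0666);      /* fall back to plain access */
+  return qid;
+}
+```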
+
+### Wrapping up
+
+The pipes and message queue APIs are fundamentally _unidirectional_: one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen out of favor, but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/interprocess-communication-linux-channels
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
+[2]: https://en.wikipedia.org/wiki/Inter-process_communication
+[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
+[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
+[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
+[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
+[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
+[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
+[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
From 40f861baf382a096edbd3922c2833d7a5d35c0e6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:48:49 +0800
Subject: [PATCH 0013/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Blen?=
=?UTF-8?q?der=20short=20film,=20new=20license=20for=20Chef,=20ethics=20in?=
=?UTF-8?q?=20open=20source,=20and=20more=20news=20sources/tech/20190415?=
=?UTF-8?q?=20Blender=20short=20film,=20new=20license=20for=20Chef,=20ethi?=
=?UTF-8?q?cs=20in=20open=20source,=20and=20more=20news.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...f, ethics in open source, and more news.md | 75 +++++++++++++++++++
1 file changed, 75 insertions(+)
create mode 100644 sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
diff --git a/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
new file mode 100644
index 0000000000..5bc9aaf92f
--- /dev/null
+++ b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
+[#]: via: (https://opensource.com/article/15/4/news-april-15)
+[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
+
+Blender short film, new license for Chef, ethics in open source, and more news
+======
+Here are some of the biggest headlines in open source in the last two weeks.
+![][1]
+
+In this edition of our open source news roundup, we take a look at the 12th Blender short film, Chef's shift away from open core toward a 100% open source license, SuperTuxKart's latest release candidate with online multiplayer support, and more.
+
+### Blender Animation Studio releases Spring
+
+[Spring][2], the latest short film from [Blender Animation Studio][3], premiered on April 4th. The [press release on Blender.org][4] describes _Spring_ as "the story of a shepherd girl and her dog, who face ancient spirits in order to continue the cycle of life." The development version of Blender 2.80 and other open source tools were used to create this animated short film. The character and asset files for the film are available from [Blender Cloud][5], and tutorials, walkthroughs, and other instructional material are coming soon.
+
+### The importance of ethics in open source
+
+Reuven M. Lerner, writing for [Linux Journal][6], shares his thoughts about the need to teach programmers about ethics in an article titled [Open Source Is Winning, and Now It's Time for People to Win Too][7]. Part retrospective on the history of open source and part call to action for moving forward, Lerner's article discusses many issues relevant to open source beyond just coding. He argues that when we teach kids about open source, "[w]e also need to inform them of the societal parts of their work, and the huge influence and power that today's programmers have." He continues: "It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal." Overall, it is a very thought-provoking piece, and Lerner makes a solid case for remembering that the open source movement is about more than free code.
+
+### Chef transitions from open core to open source
+
+Chef, the company behind the well-known DevOps automation tool, [announced][8] that it will release 100% of its software as open source under the Apache 2.0 license. This move marks a departure from its current [open core model][9]. Given the tendency for companies to move in the opposite direction, Chef's decision is a big one. By operating under a fully open source model, Chef builds a stronger relationship with the community, and the community benefits from full access to all the source code. Even developers of competing projects (and the commercial products based on them) benefit from being able to learn from Chef's code, just as Chef can learn from its open source competitors. That is one of the greatest advantages of open source: the best ideas win, and business relationships are built on trust and quality of service rather than on proprietary secrets. For a more detailed look at this development, read Steven J. Vaughan-Nichols's [article for ZDNet][10].
+
+### SuperTuxKart releases version 0.10 RC1 for testing
+
+SuperTuxKart, the open source Mario Kart clone featuring open source mascots, is getting very close to releasing a version that supports online multiplayer. On April 5th, the SuperTuxKart blog announced the release of [SuperTuxKart 0.10 Release Candidate 1][11], which needs testing before the final release. Users who want to help test the online and LAN multiplayer options can [download the game from SourceForge][12]. In addition to the new online and LAN features, SuperTuxKart 0.10 features a couple of new tracks to race on: Ravenbridge Mansion replaces the old Mansion track, and Black Forest, which was an add-on track in earlier versions, is now part of the official track set.
+
+#### In other news
+
+ * [My code is your code: Embracing the power of open sourcing][13]
+ * [FOSS means kids can have a big impact][14]
+ * [Open-source textbooks lighten students’ financial load][15]
+ * [Developing the ultimate open source radio control transmitter][16]
+ * [How does open source tech transform Government?][17]
+
+
+
+_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/15/4/news-april-15
+
+作者:[Joshua Allen Holm (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/holmja
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
+[2]: https://www.youtube.com/watch?v=WhWc3b3KhnY (Spring)
+[3]: https://blender.studio/ (Blender Animation Studio)
+[4]: https://www.blender.org/press/spring-open-movie/ (Spring Open Movie)
+[5]: https://cloud.blender.org/p/spring/ (Spring on Blender Cloud)
+[6]: https://www.linuxjournal.com/ (Linux Journal)
+[7]: https://www.linuxjournal.com/content/open-source-winning-and-now-its-time-people-win-too (Open Source Is Winning, and Now It's Time for People to Win Too)
+[8]: https://blog.chef.io/2019/04/02/chef-software-announces-the-enterprise-automation-stack/ (Introducing the New Chef: 100% Open, Always)
+[9]: https://en.wikipedia.org/wiki/Open-core_model (Wikipedia: Open-core model)
+[10]: https://www.zdnet.com/article/leading-devops-program-chef-goes-all-in-with-open-source/ (Leading DevOps program Chef goes all in with open source)
+[11]: http://blog.supertuxkart.net/2019/04/supertuxkart-010-release-candidate-1.html (SuperTuxKart 0.10 Release Candidate 1 Released)
+[12]: https://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.10-rc1/ (SourceForge: SuperTuxKart)
+[13]: https://www.forbes.com/sites/forbestechcouncil/2019/04/10/my-code-is-your-code-embracing-the-power-of-open-sourcing/ (My code is your code: Embracing the power of open sourcing)
+[14]: https://www.linuxjournal.com/content/foss-means-kids-can-have-big-impact (FOSS means kids can have a big impact)
+[15]: https://www.schoolnewsnetwork.org/2019/04/09/open-source-textbooks-lighten-students-financial-load/ (Open-source textbooks lighten students’ financial load)
+[16]: https://hackaday.com/2019/04/03/developing-the-ultimate-open-source-radio-control-transmitter/ (Developing the ultimate open source radio control transmitter)
+[17]: https://www.openaccessgovernment.org/open-source-tech-transform/62059/ (How does open source tech transform Government?)
From 14e725c85caa504ade46672b1bddf2621a32c563 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:49:10 +0800
Subject: [PATCH 0014/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Gett?=
=?UTF-8?q?ing=20started=20with=20Mercurial=20for=20version=20control=20so?=
=?UTF-8?q?urces/tech/20190415=20Getting=20started=20with=20Mercurial=20fo?=
=?UTF-8?q?r=20version=20control.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...rted with Mercurial for version control.md | 127 ++++++++++++++++++
1 file changed, 127 insertions(+)
create mode 100644 sources/tech/20190415 Getting started with Mercurial for version control.md
diff --git a/sources/tech/20190415 Getting started with Mercurial for version control.md b/sources/tech/20190415 Getting started with Mercurial for version control.md
new file mode 100644
index 0000000000..c2e451fd06
--- /dev/null
+++ b/sources/tech/20190415 Getting started with Mercurial for version control.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Mercurial for version control)
+[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
+[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+
+Getting started with Mercurial for version control
+======
+Learn the basics of Mercurial, a distributed version control system
+written in Python.
+![][1]
+
+[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
+
+There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**, which is the most convenient way to develop local extensions!
+
+For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
+
+
+```
+python2 -m virtualenv mercurial-env
+./mercurial-env/bin/pip install mercurial
+```
+
+To keep it short, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**, after the chemical symbol for mercury.
+
+
+```
+$ source mercurial-env/bin/activate
+(mercurial-env)$ mkdir test-dir
+(mercurial-env)$ cd test-dir
+(mercurial-env)$ hg init
+(mercurial-env)$ hg status
+(mercurial-env)$
+```
+
+The status is empty since you do not have any files. Add a couple of files:
+
+
+```
+(mercurial-env)$ echo 1 > one
+(mercurial-env)$ echo 2 > two
+(mercurial-env)$ hg status
+? one
+? two
+(mercurial-env)$ hg addremove
+adding one
+adding two
+(mercurial-env)$ hg commit -m 'Adding stuff'
+(mercurial-env)$ hg log
+changeset: 0:1f1befb5d1e9
+tag: tip
+user: Moshe Zadka <moshez@zadka.club>
+date: Fri Mar 29 12:42:43 2019 -0700
+summary: Adding stuff
+```
+
+The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and stops tracking any files that have been deleted.
+
+As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
+
+This is an example of a short Mercurial extension:
+
+
+```
+from mercurial import registrar
+from mercurial.i18n import _
+
+cmdtable = {}
+command = registrar.command(cmdtable)
+
+@command('say-hello',
+         [('w', 'whom', '', _('Whom to greet'))])
+def say_hello(ui, repo, **opts):
+    ui.write("hello ", opts['whom'], "\n")
+```
+
+A simple way to test it is to put it in a file in the virtual environment manually:
+
+
+```
+$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
+```
+
+Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
+
+
+```
+$ cat >> .hg/hgrc
+[extensions]
+hello_ext =
+```
+
+Now, a greeting is possible:
+
+
+```
+(mercurial-env)$ hg say-hello --whom world
+hello world
+```
+
+Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
+
+Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/getting-started-mercurial
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
+[2]: https://www.mercurial-scm.org/
+[3]: https://www.mercurial-scm.org/wiki/UnixInstall
+[4]: mailto:moshez@zadka.club
+[5]: https://www.mercurial-scm.org/wiki/MercurialApi#Repositories
+[6]: https://www.mercurial-scm.org/repo/hg/file/tip/hgext
From 2e776e212f573b14b2a07defc3c6a1567d07048f Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:49:21 +0800
Subject: [PATCH 0015/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Inte?=
=?UTF-8?q?r-process=20communication=20in=20Linux:=20Shared=20storage=20so?=
=?UTF-8?q?urces/tech/20190415=20Inter-process=20communication=20in=20Linu?=
=?UTF-8?q?x-=20Shared=20storage.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... communication in Linux- Shared storage.md | 419 ++++++++++++++++++
1 file changed, 419 insertions(+)
create mode 100644 sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
diff --git a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
new file mode 100644
index 0000000000..bf6c2c07cc
--- /dev/null
+++ b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
@@ -0,0 +1,419 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Shared storage)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Shared storage
+======
+Learn how processes synchronize with each other in Linux.
+![Filing papers and documents][1]
+
+This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
+
+ * Shared files
+ * Shared memory (with semaphores)
+ * Pipes (named and unnamed)
+ * Message queues
+ * Sockets
+ * Signals
+
+
+
+This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
+
+### Core concepts
+
+A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one thread. Threads within a process share various resources, in particular, address space. Accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
+
+There are various ways to launch processes that then communicate, and two ways dominate in the examples that follow:
+
+ * A terminal is used to start one process, and perhaps a different terminal is used to start another.
+ * The system function **fork** is called within one process (the parent) to spawn another process (the child).
+
+
+
+The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
+
+### Shared files
+
+Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process ( _producer_ ) creates and writes to a file, and another process ( _consumer_ ) reads from this same file:
+
+
+```
+           writes  +-----------+  reads
+producer---------->| disk file |<----------consumer
+                   +-----------+
+```
+
+The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
+
+ * A producer should gain an exclusive lock on the file before writing to the file. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
+ * A consumer should gain at least a shared lock on the file before reading from the file. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access a file when even a single _reader_ holds a shared lock.
+
+
+
+A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to a file.
+
+The **fcntl** function, part of the standard POSIX system interface, can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_, a non-negative integer value that, within a process, identifies a file. (Different file descriptors in different processes may identify the same physical file.) Linux also provides **flock**, a simpler whole-file locking facility. The first example uses the **fcntl** function to expose API details.
+
+#### Example 1. The _producer_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#define FileName "data.dat"
+#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  struct flock lock;
+  lock.l_type = F_WRLCK;    /* read/write (exclusive versus shared) lock */
+  lock.l_whence = SEEK_SET; /* base for seek offsets */
+  lock.l_start = 0;         /* 1st byte in file */
+  lock.l_len = 0;           /* 0 here means 'until EOF' */
+  lock.l_pid = getpid();    /* process id */
+
+  int fd; /* file descriptor to identify a file within a process */
+  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
+    report_and_exit("open failed...");
+
+  if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
+    report_and_exit("fcntl failed to get lock...");
+  else {
+    write(fd, DataString, strlen(DataString)); /* populate data file */
+    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
+  }
+
+  /* Now release the lock explicitly. */
+  lock.l_type = F_UNLCK;
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("explicit unlocking failed...");
+
+  close(fd); /* close the file: would unlock if needed */
+  return 0;  /* terminating the process would unlock as well */
+}
+```
+
+The main steps in the _producer_ program above can be summarized as follows:
+
+ * The program declares a variable of type **struct flock** , which represents a lock, and initializes the structure's five fields. The first initialization: [code]`lock.l_type = F_WRLCK; /* exclusive lock */`[/code] makes the lock an exclusive ( _read-write_ ) rather than a shared ( _read-only_ ) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files would be closed automatically, thereby releasing the lock.)
+ * The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API allows only designated bytes to be locked. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
+ * The first call to **fcntl** : [code]`if (fcntl(fd, F_SETLK, &lock) < 0)`[/code] tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_ ) were used instead, the call to **fcntl** would block until gaining the lock was possible. In the calls to **fcntl** , the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **& lock**).
+ * If the _producer_ gains the lock, the program writes two text records to the file.
+ * After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value: [code]`lock.l_type = F_UNLCK;`[/code] and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
+
+
+
+#### Example 2. The _consumer_ program
+
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#define FileName "data.dat"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  struct flock lock;
+  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
+  lock.l_whence = SEEK_SET; /* base for seek offsets */
+  lock.l_start = 0;         /* 1st byte in file */
+  lock.l_len = 0;           /* 0 here means 'until EOF' */
+  lock.l_pid = getpid();    /* process id */
+
+  int fd; /* file descriptor to identify a file within a process */
+  if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
+    report_and_exit("open to read failed...");
+
+  /* If the file is write-locked, we can't continue. */
+  fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
+  if (lock.l_type != F_UNLCK)
+    report_and_exit("file is still write locked...");
+
+  lock.l_type = F_RDLCK; /* prevents any writing during the reading */
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("can't get a read-only lock...");
+
+  /* Read the bytes (they happen to be ASCII codes) one at a time. */
+  char c; /* buffer for read bytes */
+  while (read(fd, &c, 1) > 0)    /* 0 signals EOF */
+    write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
+
+  /* Release the lock explicitly. */
+  lock.l_type = F_UNLCK;
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("explicit unlocking failed...");
+
+  close(fd);
+  return 0;
+}
+```
+
+The _consumer_ program is deliberately more complicated than necessary in order to highlight features of the locking API. In particular, the _consumer_ program first checks whether the file is exclusively locked and only then tries to gain a shared lock. The relevant code is:
+
+
+```
+lock.l_type = F_WRLCK;
+...
+fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
+if (lock.l_type != F_UNLCK)
+report_and_exit("file is still write locked...");
+```
+
+The **F_GETLK** operation specified in the **fcntl** call checks for a lock, in this case, an exclusive lock given as **F_WRLCK** in the first statement above. If the specified lock does not exist, then the **fcntl** call automatically changes the lock type field to **F_UNLCK** to indicate this fact. If the file is exclusively locked, the _consumer_ terminates. (A more robust version of the program might have the _consumer_ **sleep** a bit and try again several times.)
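+
+A hedged sketch of such a retry loop follows; the **wait_for_unlock** helper and the **MaxTries** constant are illustrative, not part of the example programs:
+
+
+```
+#include <unistd.h>
+#include <fcntl.h>
+
+#define MaxTries 5
+
+/* Poll for the write lock to clear instead of exiting on the first failure. */
+int wait_for_unlock(int fd, struct flock* lock) {
+  int tries;
+  for (tries = 0; tries < MaxTries; tries++) {
+    lock->l_type = F_WRLCK;                /* ask about exclusive locks */
+    fcntl(fd, F_GETLK, lock);              /* l_type becomes F_UNLCK if none */
+    if (lock->l_type == F_UNLCK) return 0; /* no writer holds the file */
+    sleep(1);                              /* back off, then try again */
+  }
+  return -1; /* still write locked after MaxTries attempts */
+}
+```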
+
+If the file is not currently locked, then the _consumer_ tries to gain a shared (_read-only_) lock (**F_RDLCK**). To shorten the program, the **F_GETLK** call to **fcntl** could be dropped because the **F_RDLCK** call would fail if a _read-write_ lock were already held by some other process. Recall that a _read-only_ lock does prevent any other process from writing to the file but allows other processes to read from it. In short, a _shared_ lock can be held by multiple processes. After gaining a shared lock, the _consumer_ program reads the bytes one at a time from the file, prints the bytes to the standard output, releases the lock, closes the file, and terminates.
+
+Here is the output from the two programs launched from the same terminal with **%** as the command line prompt:
+
+
+```
+% ./producer
+Process 29255 has written to data file...
+
+% ./consumer
+Now is the winter of our discontent
+Made glorious summer by this sun of York
+```
+
+In this first code example, the data shared through IPC is text: two lines from Shakespeare's play _Richard III_. Yet, the shared file's contents could be voluminous, arbitrary bytes (e.g., a digitized movie), which makes file sharing an impressively flexible IPC mechanism. The downside is that file access is relatively slow, whether the access involves reading or writing. As always, programming comes with tradeoffs. The next example has the upside of IPC through shared memory, rather than shared files, with a corresponding boost in performance.
+
+### Shared memory
+
+Linux systems provide two separate APIs for shared memory: the legacy System V API and the more recent POSIX one. These APIs should never be mixed in a single application, however. A downside of the POSIX approach is that features are still in development and dependent upon the installed kernel version, which impacts code portability. For example, the POSIX API, by default, implements shared memory as a _memory-mapped file_ : for a shared memory segment, the system maintains a _backing file_ with corresponding contents. Shared memory under POSIX can be configured without a backing file, but this may impact portability. My example uses the POSIX API with a backing file, which combines the benefits of memory access (speed) and file storage (persistence).
+
+The shared-memory example has two programs, named _memwriter_ and _memreader_ , and uses a _semaphore_ to coordinate their access to the shared memory. Whenever shared memory comes into the picture with a _writer_ , whether in multi-processing or multi-threading, so does the risk of a memory-based race condition; hence, the semaphore is used to coordinate (synchronize) access to the shared memory.
+
+The _memwriter_ program should be started first in its own terminal. The _memreader_ program then can be started (within a dozen seconds) in its own terminal. The output from the _memreader_ is:
+
+
+```
+This is the way the world ends...
+```
+
+Each source file has documentation at the top explaining the link flags to be included during compilation.
+
+Let's start with a review of how semaphores work as a synchronization mechanism. A general semaphore is also called a _counting semaphore_, as it has a value (typically initialized to zero) that can be incremented. Consider a shop that rents bicycles, with a hundred of them in stock, and a program that clerks use to do the rentals. Every time a bike is rented, the semaphore is incremented by one; when a bike is returned, the semaphore is decremented by one. Rentals can continue until the value hits 100 but then must halt until at least one bike is returned, thereby decrementing the semaphore to 99.
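+
+Here is a hedged sketch of the bicycle-shop analogy using a POSIX counting semaphore. Note one inversion: POSIX semaphores count _down_ on **sem_wait** and _up_ on **sem_post**, so the natural encoding initializes the semaphore to the stock of 100 bikes and lets a rental decrement it. The names and the 100-bike figure come from the analogy, not from the example programs:
+
+
+```
+/** Compilation: gcc -o bikes bikes.c -lpthread **/
+#include <stdio.h>
+#include <semaphore.h>
+
+#define BikesInStock 100
+
+int main() {
+  sem_t bikes;
+  sem_init(&bikes, 0, BikesInStock); /* 0 = shared among threads, not processes */
+
+  sem_wait(&bikes); /* rent a bike: blocks if none are left */
+  sem_post(&bikes); /* return a bike: frees one for the next renter */
+
+  int available;
+  sem_getvalue(&bikes, &available);
+  printf("bikes available: %d\n", available); /* back to 100 */
+
+  sem_destroy(&bikes);
+  return 0;
+}
+```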
+
+A _binary semaphore_ is a special case requiring only two values: 0 and 1. In this situation, a semaphore acts as a _mutex_ : a mutual exclusion construct. The shared-memory example uses a semaphore as a mutex. When the semaphore's value is 0, the _memwriter_ alone can access the shared memory. After writing, this process increments the semaphore's value, thereby allowing the _memreader_ to read the shared memory.
+
+#### Example 3. Source code for the _memwriter_ process
+
+
+```
+/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <semaphore.h>
+#include "shmem.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1);
+}
+
+int main() {
+  int fd = shm_open(BackingFile,      /* name from shmem.h */
+                    O_RDWR | O_CREAT, /* read/write, create if needed */
+                    AccessPerms);     /* access permissions (0644) */
+  if (fd < 0) report_and_exit("Can't open shared mem segment...");
+
+  ftruncate(fd, ByteSize); /* get the bytes */
+
+  caddr_t memptr = mmap(NULL,       /* let system pick where to put segment */
+                        ByteSize,   /* how many bytes */
+                        PROT_READ | PROT_WRITE, /* access protections */
+                        MAP_SHARED, /* mapping visible to other processes */
+                        fd,         /* file descriptor */
+                        0);         /* offset: start at 1st byte */
+  if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
+
+  fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
+  fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile);
+
+  /* semaphore code to lock the shared mem */
+  sem_t* semptr = sem_open(SemaphoreName, /* name */
+                           O_CREAT,       /* create the semaphore */
+                           AccessPerms,   /* protection perms */
+                           0);            /* initial value */
+  if (semptr == SEM_FAILED) report_and_exit("sem_open");
+
+  strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
+
+  /* increment the semaphore so that memreader can read */
+  if (sem_post(semptr) < 0) report_and_exit("sem_post");
+
+  sleep(12); /* give reader a chance */
+
+  /* clean up */
+  munmap(memptr, ByteSize); /* unmap the storage */
+  close(fd);
+  sem_close(semptr);
+  shm_unlink(BackingFile); /* unlink from the backing file */
+  return 0;
+}
+```
+
+Here's an overview of how the _memwriter_ and _memreader_ programs communicate through shared memory:
+
+ * The _memwriter_ program, shown above, calls the **shm_open** function to get a file descriptor for the backing file that the system coordinates with the shared memory. At this point, no memory has been allocated. The subsequent call to the misleadingly named function **ftruncate** : [code]`ftruncate(fd, ByteSize); /* get the bytes */`[/code] allocates **ByteSize** bytes, in this case, a modest 512 bytes. The _memwriter_ and _memreader_ programs access the shared memory only, not the backing file. The system is responsible for synchronizing the shared memory and the backing file.
+ * The _memwriter_ then calls the **mmap** function: [code] caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
+ByteSize, /* how many bytes */
+PROT_READ | PROT_WRITE, /* access protections */
+MAP_SHARED, /* mapping visible to other processes */
+fd, /* file descriptor */
+0); /* offset: start at 1st byte */ [/code] to get a pointer to the shared memory. (The _memreader_ makes a similar call.) The pointer type **caddr_t** is a legacy alias for a _core address_, essentially a **char** pointer to the start of the segment. The _memwriter_ uses the **memptr** for the later _write_ operation, using the library **strcpy** (string copy) function.
+ * At this point, the _memwriter_ is ready for writing, but it first creates a semaphore to ensure exclusive access to the shared memory. A race condition would occur if the _memwriter_ were writing while the _memreader_ was reading. If the call to **sem_open** succeeds: [code] sem_t* semptr = sem_open(SemaphoreName, /* name */
+O_CREAT, /* create the semaphore */
+AccessPerms, /* protection perms */
+0); /* initial value */ [/code] then the writing can proceed. The **SemaphoreName** (any unique non-empty name will do) identifies the semaphore in both the _memwriter_ and the _memreader_. The initial value of zero gives the semaphore's creator, in this case, the _memwriter_ , the right to proceed, in this case, to the _write_ operation.
+ * After writing, the _memwriter_ increments the semaphore value to 1: [code]`if (sem_post(semptr) < 0) ..`[/code] with a call to the **sem_post** function. Incrementing the semaphore releases the mutex lock and enables the _memreader_ to perform its _read_ operation. For good measure, the _memwriter_ also unmaps the shared memory from the _memwriter_ address space: [code]`munmap(memptr, ByteSize); /* unmap the storage */`[/code] This bars the _memwriter_ from further access to the shared memory.
+
+
+
+#### Example 4. Source code for the _memreader_ process
+
+
+```
+/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <semaphore.h>
+#include "shmem.h"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1);
+}
+
+int main() {
+  int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
+  if (fd < 0) report_and_exit("Can't get file descriptor...");
+
+  /* get a pointer to memory */
+  caddr_t memptr = mmap(NULL,       /* let system pick where to put segment */
+                        ByteSize,   /* how many bytes */
+                        PROT_READ | PROT_WRITE, /* access protections */
+                        MAP_SHARED, /* mapping visible to other processes */
+                        fd,         /* file descriptor */
+                        0);         /* offset: start at 1st byte */
+  if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
+
+  /* open the semaphore used for mutual exclusion */
+  sem_t* semptr = sem_open(SemaphoreName, /* name */
+                           O_CREAT,       /* create the semaphore if needed */
+                           AccessPerms,   /* protection perms */
+                           0);            /* initial value */
+  if (semptr == SEM_FAILED) report_and_exit("sem_open");
+
+  /* use semaphore as a mutex (lock) by waiting for writer to increment it */
+  if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
+    int i;
+    for (i = 0; i < strlen(MemContents); i++)
+      write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
+    sem_post(semptr);
+  }
+
+  /* cleanup */
+  munmap(memptr, ByteSize);
+  close(fd);
+  sem_close(semptr);
+  shm_unlink(BackingFile); /* the name is a POSIX shm name, not a file path */
+  return 0;
+}
+```
+
+In both the _memwriter_ and _memreader_ programs, the shared-memory functions of main interest are **shm_open** and **mmap** : on success, the first call returns a file descriptor for the backing file, which the second call then uses to get a pointer to the shared memory segment. The calls to **shm_open** are similar in the two programs except that the _memwriter_ program creates the shared memory, whereas the _memreader_ only accesses this already created memory:
+
+
+```
+int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
+int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
+```
+
+With a file descriptor in hand, the calls to **mmap** are the same:
+
+
+```
+caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+```
+
+The first argument to **mmap** is **NULL** , which means that the system determines where to allocate the memory in virtual address space. It's possible (but tricky) to specify an address instead. The **MAP_SHARED** flag indicates that the allocated memory is shareable among processes, and the last argument (in this case, zero) means that the offset for the shared memory should be the first byte. The **size** argument specifies the number of bytes to be allocated (in this case, 512), and the protection argument indicates that the shared memory can be written and read.
+
+When the _memwriter_ program executes successfully, the system creates and maintains the backing file; on my system, the file is _/dev/shm/shMemEx_ , with _shMemEx_ as my name (given in the header file _shmem.h_ ) for the shared storage. In the current version of the _memwriter_ and _memreader_ programs, the statement:
+
+
+```
+shm_unlink(BackingFile); /* removes backing file */
+```
+
+removes the backing file. If the **shm_unlink** statement is omitted, then the backing file persists after the programs terminate.
+
+The _memreader_ , like the _memwriter_ , accesses the semaphore through its name in a call to **sem_open**. But the _memreader_ then goes into a wait state until the _memwriter_ increments the semaphore, whose initial value is 0:
+
+
+```
+if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
+```
+
+Once the wait is over, the _memreader_ reads the ASCII bytes from the shared memory, cleans up, and terminates.
+
+The shared-memory API includes operations explicitly to synchronize the shared memory segment and the backing file. These operations have been omitted from the example to reduce clutter and keep the focus on the memory-sharing and semaphore code.
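+
+For reference, the main such operation is **msync**. A hedged sketch of how the _memwriter_ might force its bytes to the backing file after the **strcpy** (again, omitted from the actual example; the helper's name is illustrative):
+
+
+```
+#include <sys/mman.h>
+
+/* Assumes memptr and ByteSize from the memwriter example. MS_SYNC
+   blocks until the segment's bytes reach the backing file. */
+void flush_segment(void* memptr, int bytes) {
+  msync(memptr, bytes, MS_SYNC);
+}
+```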
+
+The _memwriter_ and _memreader_ programs are likely to execute without inducing a race condition even if the semaphore code is removed: the _memwriter_ creates the shared memory segment and writes immediately to it; the _memreader_ cannot even access the shared memory until this has been created. However, best practice requires that shared-memory access be synchronized whenever a _write_ operation is in the mix, and the semaphore API is important enough to be highlighted in a code example.
+
+### Wrapping up
+
+The shared-file and shared-memory examples show how processes can communicate through _shared storage_, files in one case and memory segments in the other. The APIs for both approaches are relatively straightforward. Do these approaches have a common downside? Modern applications often deal with streaming data, indeed with massively large streams of data. Neither the shared-file nor the shared-memory approach is well suited for massive data streams. Channels of one type or another are better suited. Part 2 thus introduces channels and message queues, again with code examples in C.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
+[2]: https://en.wikipedia.org/wiki/Inter-process_communication
+[3]: http://condor.depaul.edu/mkalin
+[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
+[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
From 9183f68968c1504ee3b70a541edfaff33f76bffc Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:49:38 +0800
Subject: [PATCH 0016/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Two?=
=?UTF-8?q?=20tools=20to=20help=20visualize=20and=20simplify=20your=20data?=
=?UTF-8?q?-driven=20operations=20sources/talk/20190416=20Two=20tools=20to?=
=?UTF-8?q?=20help=20visualize=20and=20simplify=20your=20data-driven=20ope?=
=?UTF-8?q?rations.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...nd simplify your data-driven operations.md | 59 +++++++++++++++++++
1 file changed, 59 insertions(+)
create mode 100644 sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md
diff --git a/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md b/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md
new file mode 100644
index 0000000000..8a44c56ca7
--- /dev/null
+++ b/sources/talk/20190416 Two tools to help visualize and simplify your data-driven operations.md
@@ -0,0 +1,59 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
+[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
+[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )
+
+Two tools to help visualize and simplify your data-driven operations
+======
+Amid the rising complexity of networks and the influx of data, service providers are striving to keep operational complexity under control. Blue Planet's Kent McNeil explains how they can turn this challenge into an opportunity, and in fact reduce operational effort, by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
+![danleap][1]
+
+**Build the picture: Visualize your data**
+
+The Internet of Things (IoT), 5G, smart technology, virtual reality – all these applications guarantee one thing for communications service providers (CSPs): more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.
+
+Customer analytics has gone some way towards fulfilling this need for greater insights, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data but also device and network data, adding complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data, and, ultimately, to leverage it for a better customer experience.
+
+**See the upward graphical trend with graph databases**
+
+Traditional relational databases certainly have their uses, but graph databases offer a novel perspective. The visual representation of the relationships between component parts enables CSPs to understand and analyze the data's characteristics, as well as to act in a timely manner when confronted with any discrepancies.
+
+Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored but also processed and analyzed. They enable complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.
+
+The use of graph databases has started to become more mainstream as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. Planned and current use of graph technology is highest for network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.
+
+Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory in a clear, visualized picture – this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. This need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up. Operational complexity also grows as the influx of generated data produced by IoT will further challenge the scalability of existing operational environments.
+
+**Change the tide of data with delta-based federation**
+
+New data, updated data, corrected data, deleted data – all of it needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSPs' Operational Support Systems (OSS). Many still battle with updating data and rely on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively hone in on only what is required.
+
+Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.
+
+A delta-based federation model ensures that an accurate picture is presented, and only essential changes are conducted reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating and minimizing the system load and risks. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.
+
+**Ride the wave**
+
+Gartner predicts 25 billion connected things globally by 2021, and CSPs are already struggling with the current levels, which Gartner estimates at 14.2 billion in 2019. Over the last decade, CSPs have faced significant rises in the levels of data consumed as demand for new services and higher-bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. First, CSPs have specialist legacy OSS already in place, which they can leverage as a basis for integrating data and implementing optimized systems. Second, they can utilize new technologies in database inventory management: graph databases and delta-based federation. Effectively integrating network data, visualizing it, and creating a clear map of the interconnections enables CSPs to make critical decisions more quickly and accurately, resulting in better optimized and more informed service operations.
+
+[Watch this video to learn more about Blue Planet][2]
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all
+
+作者:[Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
+[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4
From ba64b81f285338f08d36a4315ce0e7e0de76b6cb Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:49:51 +0800
Subject: [PATCH 0017/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20What?=
=?UTF-8?q?=20SDN=20is=20and=20where=20it=E2=80=99s=20going=20sources/talk?=
=?UTF-8?q?/20190416=20What=20SDN=20is=20and=20where=20it-s=20going.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...190416 What SDN is and where it-s going.md | 146 ++++++++++++++++++
1 file changed, 146 insertions(+)
create mode 100644 sources/talk/20190416 What SDN is and where it-s going.md
diff --git a/sources/talk/20190416 What SDN is and where it-s going.md b/sources/talk/20190416 What SDN is and where it-s going.md
new file mode 100644
index 0000000000..381c227b65
--- /dev/null
+++ b/sources/talk/20190416 What SDN is and where it-s going.md
@@ -0,0 +1,146 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What SDN is and where it’s going)
+[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+What SDN is and where it’s going
+======
+Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.
+![seedkin / Getty Images][1]
+
+Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
+
+SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.
+
+**[Learn more about the [difference between SDN and NFV][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**
+
+OpenFlow is only one of the first SDN standards, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches, no matter which vendor made the underlying router or switch.
+
+In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.
+
+"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022_][5]*. "*Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."
+
+SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.
+
+IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
+
+In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.
+
+“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”
+
+### What is SDN?
+
+The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.
+
+IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent- or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and, where needed, direct integration with cloud orchestration platforms.”
+
+The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.
+
+Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].
+
+“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”
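+
+To make the idea of centralized flow control concrete, here is a minimal sketch of pushing a flow rule with Open vSwitch's ovs-ofctl utility. The bridge name (br0), the address and the port number are invented for illustration; an SDN controller would push rules like this across the whole fabric rather than one switch at a time:
+
+```
+# steer traffic destined for 10.0.0.5 out port 2 on bridge br0
+$ sudo ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.0.5,actions=output:2"
+
+# inspect the flow table the controller now governs
+$ sudo ovs-ofctl dump-flows br0
+```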
+
+### How does SDN support edge computing, IoT and remote access?
+
+A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments – each of these efforts can be made easier and more cost efficient via a properly configured SDN environment.
+
+Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.
+
+Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components don’t suffer latency.
+
+SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.
+
+“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.
+
+### **How does SDN support intent-based networking?**
+
+Intent-based networking ([IBN][9]) has a variety of components, but at its core it's about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to make sure that what the business wants actually happens.
+
+“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”
+
+IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”
+
+For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.
+
+“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.
+
+While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.
+
+### **How does SDN help customers with security?**
+
+SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low security network that does not touch any sensitive information. Another segment could have much more fine-grained remote access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it.
+
+“For example, if a customer has an IoT group it doesn’t feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge and if you do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper than traditional gear.”
+
+The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco’s Nexus and ACI product lines.
+
+"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.
+
+A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.
+
+“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.
+
+### **What is SDN’s role in cloud computing?**
+
+SDN’s role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural fit. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together enterprise data center and cloud worlds.
+
+Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.
+
+“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Cisco’s ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”
+
+Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDC’s Casemore.
+
+“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.
+
+### Where does SD-WAN fit in?
+
+The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.
+
+At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.
+
+SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
+
+[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
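+
+SD-WAN products expose this as centrally managed policy rather than per-device commands, but the underlying mechanics resemble ordinary policy-based routing. A rough Linux sketch of the same idea, where the SaaS prefix, gateway address and uplink interface are placeholders:
+
+```
+# route traffic for an example SaaS prefix out the local broadband uplink (eth1)
+$ sudo ip route add 52.96.0.0/14 via 192.168.1.1 dev eth1 table 100
+$ sudo ip rule add to 52.96.0.0/14 lookup 100
+```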
+
+"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business, said a Network World [article][18] earlier this year.
+
+It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.
+
+IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
+[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
+[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
+[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
+[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
+[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
+[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
+[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
+[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
+[14]: https://www.linkedin.com/in/boblaliberte90/
+[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
+[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
+[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
From 9ae34a19864da61ccf031b7c3c54c78cca97d030 Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:50:17 +0800
Subject: [PATCH 0018/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20How?=
=?UTF-8?q?=20to=20identify=20duplicate=20files=20on=20Linux=20sources/tec?=
=?UTF-8?q?h/20190415=20How=20to=20identify=20duplicate=20files=20on=20Lin?=
=?UTF-8?q?ux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ow to identify duplicate files on Linux.md | 127 ++++++++++++++++++
1 file changed, 127 insertions(+)
create mode 100644 sources/tech/20190415 How to identify duplicate files on Linux.md
diff --git a/sources/tech/20190415 How to identify duplicate files on Linux.md b/sources/tech/20190415 How to identify duplicate files on Linux.md
new file mode 100644
index 0000000000..9bdc92a591
--- /dev/null
+++ b/sources/tech/20190415 How to identify duplicate files on Linux.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to identify duplicate files on Linux)
+[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How to identify duplicate files on Linux
+======
+Some files on a Linux system can appear in more than one location. Follow these instructions to find and identify these "identical twins" and learn why hard links can be so advantageous.
+![Archana Jarajapu \(CC BY 2.0\)][1]
+
+Identifying files that share disk space relies on making use of the fact that the files share the same inode — the data structure that stores all the information about a file except its name and content. If two or more files have different names and file system locations, yet share an inode, they also share content, ownership, permissions, etc.
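+
+For a quick check of any one file, the stat command will print the inode number and hard-link count directly (the output below assumes the example files used throughout this article):
+
+```
+$ stat -c '%n: inode %i, %h hard links' myfile mytwin
+myfile: inode 788000, 4 hard links
+mytwin: inode 788000, 4 hard links
+```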
+
+These files are often referred to as "hard links" — unlike symbolic links that simply point to other files by containing their names. Symbolic links are easy to pick out in a file listing by the "l" in the first position and the **->** symbol that refers to the file being referenced.
+
+```
+$ ls -l my*
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
+lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
+```
+
+Identifying hard links in a single directory is not as obvious, but it is still quite easy. If you list the files using the **ls -i** command and sort them by inode number, you can pick out the hard links fairly easily. In this type of ls output, the first column shows the inode numbers.
+
+```
+$ ls -i | sort -n | more
+ ...
+ 788000 myfile <==
+ 788000 mytwin <==
+ 801865 Name_Labels.pdf
+ 786692 never leave home angry
+ 920242 NFCU_Docs
+ 800247 nmap-notes
+```
+
+Scan your output looking for identical inode numbers and any matches will tell you what you want to know.
+
+**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][2] ]**
+
+If, on the other hand, you simply want to know if one particular file is hard-linked to another file, there's an easier way than scanning through a list of what may be hundreds of files. The find command's **-samefile** option will do the work for you.
+
+```
+$ find . -samefile myfile
+./myfile
+./save/mycopy
+./mytwin
+```
+
+Notice that the starting location provided to the find command will determine how much of the file system is scanned for matches. In the above example, we're looking in the current directory and subdirectories.
+
+Adding output details using find's **-ls** option might be more convincing:
+
+```
+$ find . -samefile myfile -ls
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
+```
+
+The first column shows the inode number. Then we see file permissions, links, owner, file size, date information, and the names of the files that refer to the same disk content. Note that the link field in this case is a "4" not the "3" we might expect, telling us that there's another link to this same inode as well (but outside our search range).
+
+If you want to look for all instances of hard links in a single directory, you could try a script like this that will create the list and look for the duplicates for you:
+
+```
+#!/bin/bash
+
+# searches for files sharing inodes in the current directory
+
+tmpfile=/tmp/$(basename "$0").$$
+prev=""
+
+# list files by inode number
+ls -i | sort -n > "$tmpfile"
+
+# report every file whose inode number appears more than once
+while read -r line
+do
+    inode=$(echo "$line" | awk '{print $1}')
+    if [ "$inode" == "$prev" ]; then
+        grep -w "$inode" "$tmpfile"
+    fi
+    prev=$inode
+done < "$tmpfile"
+
+# clean up
+rm "$tmpfile"
+
+$ ./findHardLinks
+ 788000 myfile
+ 788000 mytwin
+```
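+
+If you'd rather not use a temporary file at all, find's **-links** option matches files whose hard-link count exceeds a threshold; combined with **-ls** and a numeric sort, it groups linked files by inode:
+
+```
+$ find . -maxdepth 1 -type f -links +1 -ls | sort -n
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
+```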
+
+You can also use the find command to look for files by inode number as in this command. However, this search could involve more than one file system, so it is possible that you will get false results, since the same inode number might be used in another file system where it would not represent the same file. If that's the case, other file details will not be identical.
+
+```
+$ find / -inum 788000 -ls 2> /dev/null
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
+```
+
+Note that error output was shunted off to /dev/null so that we didn't have to look at all the "Permission denied" errors that would have otherwise been displayed for other directories that we're not allowed to look through.
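+
+To rule out those cross-file-system false positives entirely, find's **-xdev** option keeps the search on the file system where it started:
+
+```
+$ find /home -xdev -inum 788000 -ls 2> /dev/null
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
+```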
+
+Also, scanning for files that contain the same content but don't share inodes (i.e., simply file copies) would take considerably more time and effort.
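+
+If you do need to hunt down same-content copies, one rough approach is to checksum every file and print only the lines whose hashes repeat. This sketch assumes GNU coreutils and will be slow on large directory trees:
+
+```
+$ find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
+```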
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
+[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
From e4be2725ebbb895789e601faea8d4148045fb9dc Mon Sep 17 00:00:00 2001
From: darksun
Date: Wed, 17 Apr 2019 11:50:35 +0800
Subject: [PATCH 0019/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190415=20Nyan?=
=?UTF-8?q?sa=E2=80=99s=20Voyance=20expands=20to=20the=20IoT=20sources/tal?=
=?UTF-8?q?k/20190415=20Nyansa-s=20Voyance=20expands=20to=20the=20IoT.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...415 Nyansa-s Voyance expands to the IoT.md | 75 +++++++++++++++++++
1 file changed, 75 insertions(+)
create mode 100644 sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
diff --git a/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md b/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
new file mode 100644
index 0000000000..e893c86d53
--- /dev/null
+++ b/sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Nyansa’s Voyance expands to the IoT)
+[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+Nyansa’s Voyance expands to the IoT
+======
+
+![Brandon Mowinkel \(CC0\)][1]
+
+Nyansa announced today that its flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.
+
+Voyance – a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior – has been around for two years now, and Nyansa says that it’s being used to observe a total of 25 million client devices operating across roughly 200 customer networks.
+
+**More on IoT:**
+
+ * [Most powerful Internet of Things companies][3]
+ * [10 Hot IoT startups to watch][4]
+ * [The 6 ways to make money in IoT][5]
+ * [Blockchain, service-centric networking key to IoT success][7]
+ * [Getting grounded in IoT networking and security][8]
+ * [Building IoT-ready networks must become a priority][9]
+ * [What is the Industrial IoT? [And why the stakes are so high]][10]
+
+It’s a software-only product (available either via public SaaS or private cloud) that works by scanning a customer’s network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.
+
+The process doesn’t happen instantaneously, particularly the creation of the baseline, but it’s designed to be minimally invasive to existing network management frameworks and easy to implement.
+
+Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer – Baptist Health, a Florida-based healthcare company that runs four hospitals and several other clinics and practices – said that Voyance IoT has offered a new level of visibility into the business’ complex array of connected diagnostic and treatment machines.
+
+“In the past we didn’t have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now we’re able to do that,” said CISO Thad Phillips.
+
+While spiraling network complexity isn’t an issue confined to the IoT, there’s a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, particularly in light of the fact that many such devices aren’t particularly secure.
+
+“They’re not manufactured by networking vendors or security vendors, so from a performance standpoint, they have a lot of quirks … and on the security side, that’s sort of a big problem there as well,” said Anand Srinivas, Nyansa’s co-founder and CTO.
+
+Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the system’s designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that they’re functioning correctly.
+
+“Anything based on machine learning is going to take a while to make sure it understands your environment and you might have to retrain it,” he said. “There’s always going to be more and more things connecting to IP networks, and it’s just going to be a question of building up a database.”
+
+Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
+[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
From 9d952da1de6f797ace72514f3007758366093e6b Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 12:14:06 +0800
Subject: [PATCH 0020/1154] PRF:20190413 The Fargate Illusion.md
---
.../tech/20190413 The Fargate Illusion.md | 26 +++++++++----------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/translated/tech/20190413 The Fargate Illusion.md b/translated/tech/20190413 The Fargate Illusion.md
index 36fe768e9a..c5591968a3 100644
--- a/translated/tech/20190413 The Fargate Illusion.md
+++ b/translated/tech/20190413 The Fargate Illusion.md
@@ -30,9 +30,9 @@
我有一个干净的 AWS 账户,并决定从零到部署一个 webapp。与 AWS 中的其它基础设施一样,我必须首先使基本的基础设施正常工作起来,因此我需要先定义一个 VPC。
-遵循最佳实践,因此我将这个 VPC 划分为跨可用区(AZ)的子网,一个公共子网和私有子网。这时我想到,只要这种设置基础设施的需求存在,我就能找到一份这种工作。AWS 是运维上“免费”这一概念一直让我感到愤怒。开发者社区中的许多人理所当然地认为在设置和定义一个设计良好的 AWS 账户和基础设施是不需要付出多少工作和努力的。在我们甚至开始谈论多帐户架构*之前*(现在我仍然使用单一帐户),我必须已经定义好基础设施和传统的网络设备。
+遵循最佳实践,因此我将这个 VPC 划分为跨可用区(AZ)的子网,一个公共子网和私有子网。这时我想到,只要这种设置基础设施的需求存在,我就能找到一份这种工作。AWS 是"免"运维的这一概念一直让我感到愤怒。开发者社区中的许多人理所当然地认为在设置和定义一个设计良好的 AWS 账户和基础设施是不需要付出多少工作和努力的。而这种想当然甚至发生在开始谈论多帐户架构*之前*就有了——现在我仍然使用单一帐户,我已经必须定义好基础设施和传统的网络设备。
-这里也值得记住,我已经做了很多次,所以我*很清楚*该做什么。我可以在我的帐户中使用默认的 VPC 以及预先提供的子网,我觉得很多刚开始的人也可以使用它。这大概花了我半个小时才运行起来,但我不禁想到,即使我想运行 lambda 函数,我仍然需要某种连接和网络。在 VPC 中定义 NAT 网关和路由根本不会让你觉得很“无服务器”,但要往下进行这就是必须要做的。
+这里也值得记住,我已经做了很多次,所以我*很清楚*该做什么。我可以在我的帐户中使用默认的 VPC 以及预先提供的子网,我觉得很多刚开始的人也可以使用它。这大概花了我半个小时才运行起来,但我不禁想到,即使我想运行 lambda 函数,我仍然需要某种连接和网络。定义 NAT 网关和在 VPC 中路由根本不会让你觉得很“Serverless”,但要往下进行这就是必须要做的。
### 运行简单的容器
@@ -44,7 +44,7 @@
#### 任务定义
-“任务定义”用来定义要运行的实际容器。我在这里遇到的问题是,任务定义这件事非常复杂。这里有很多选项都很简单,比如指定 Docker 镜像和内存限制,但我还必须定义一个网络模型以及我并不熟悉的其它各种选项。真需要这样吗?如果我完全没有 AWS 方面的知识就进入到这个过程里,那么在这个阶段我会感觉非常的不知所措。可以在 AWS 页面上找到这些 [参数][5] 的完整列表,这个列表很长。我知道我的容器需要一些环境变量,它需要暴露一个端口。所以我首先在一个神奇的 [terraform 模块][6] 的帮助下定义了这一点,这真的让这件事更容易了。如果没有这个模块,我就得手工编写 JSON 来定义我的容器定义。
+“<ruby>任务定义<rt>Task Definition</rt></ruby>”用来定义要运行的实际容器。我在这里遇到的问题是,任务定义这件事非常复杂。这里有很多选项都很简单,比如指定 Docker 镜像和内存限制,但我还必须定义一个网络模型以及我并不熟悉的其它各种选项。真需要这样吗?如果我完全没有 AWS 方面的知识就进入到这个过程里,那么在这个阶段我会感觉非常的不知所措。可以在 AWS 页面上找到这些 [参数][5] 的完整列表,这个列表很长。我知道我的容器需要一些环境变量,它需要暴露一个端口。所以我首先在一个神奇的 [terraform 模块][6] 的帮助下定义了这一点,这真的让这件事更容易了。如果没有这个模块,我就得手写 JSON 来定义我的容器定义。
首先我定义了一些环境变量:
@@ -173,7 +173,7 @@ resource "aws_ecs_service" "app" {
##### 负载均衡器从未远离
-老实说,我很满意,我甚至不确定为什么。我已经习惯了 Kubernetes 的服务和 Ingress 对象,我一心认为用 Kubernetes 将我的应用程序放到网上是多么容易。当然,我们在 $work 花了几个月的时间建立一个平台,以便更轻松。我是 [external-dns][8] 和 [cert-manager][9] 的重度用户,它们可以自动填充 Ingress 对象上的 DNS 条目并自动化 TLS 证书,我非常了解进行这些设置所需的工作,但老实说,我认为在 Fargate 上做这件事会更容易。我认识到 Fargate 并没有声称自己是“如何运行应用程序”这件事的全部和最终目的,它只是抽象出节点管理,但我一直被告知这比 Kubernetes *更加容易*。我真的很惊讶。定义负载均衡器(即使你不想使用 Ingress 和 Ingress 控制器)也是向 Kubernetes 部署服务的重要组成部分,我不得不在这里再次做同样的事情。这一切都让人觉得如此熟悉。
+老实说,我很满意,我甚至不确定为什么。我已经习惯了 Kubernetes 的服务和 Ingress 对象,我一心认为用 Kubernetes 将我的应用程序放到网上是多么容易。当然,我们在 $work 花了几个月的时间建立一个平台,以便更轻松。我是 [external-dns][8] 和 [cert-manager][9] 的重度用户,它们可以自动填充 Ingress 对象上的 DNS 条目并自动化 TLS 证书,我非常了解进行这些设置所需的工作,但老实说,我认为在 Fargate 上做这件事会更容易。我认识到 Fargate 并没有声称自己是“如何运行应用程序”这件事的全部和最终目的,它只是抽象出节点管理,但我一直被告知这比 Kubernetes *更加容易*。我真的很惊讶。定义负载均衡器(即使你不想使用 Ingress 和 Ingress Controller)也是向 Kubernetes 部署服务的重要组成部分,我不得不在这里再次做同样的事情。这一切都让人觉得如此熟悉。
我现在意识到我需要:
@@ -296,9 +296,9 @@ module "ecs" {
```
这里让我感到惊讶的是为什么我必须定义一个集群。作为一个相当熟悉 ECS 的人,你会觉得你需要一个集群,但我试图从一个必须经历这个过程的新人的角度来考虑这一点 —— 对我来说,Fargate 标榜自己“
-无服务器”而你仍需要定义集群,这似乎很令人惊讶。当然这是一个小细节,但它确实盘旋在我的脑海里。
+Serverless”而你仍需要定义集群,这似乎很令人惊讶。当然这是一个小细节,但它确实盘旋在我的脑海里。
-### 告诉我你的秘密
+### 告诉我你的 Secret
在这个阶段,我很高兴我成功地运行了一些东西。然而,我的原始的成功标准缺少一些东西。如果我们回到任务定义那里,你会记得我的应用程序有一个存放密码的环境变量:
@@ -315,11 +315,11 @@ container_environment_variables = [
]
```
-如果我在 AWS 控制台中查看我的任务定义,我的密码就在那里,明晃晃的明文。我希望不要这样,所以我开始尝试将其转化为其他东西,类似于 [Kubernetes 的秘密信息管理][11]。
+如果我在 AWS 控制台中查看我的任务定义,我的密码就在那里,明晃晃的明文。我希望不要这样,所以我开始尝试将其转化为其他东西,类似于 [Kubernetes 的 Secret 管理][11]。
#### AWS SSM
-Fargate / ECS 执行<ruby>秘密信息管理<rt>secret management</rt></ruby>部分的方式是使用 [AWS SSM][12](此服务的全名是 AWS 系统管理器参数存储库,但我不想使用这个名称,因为坦率地说这个名字太愚蠢了)。
+Fargate / ECS 执行<ruby>secret 管理<rt>secret management</rt></ruby>部分的方式是使用 [AWS SSM][12](此服务的全名是 <ruby>AWS 系统管理器参数存储库<rt>Systems Manager Parameter Store</rt></ruby>,但我不想使用这个名称,因为坦率地说这个名字太愚蠢了)。
AWS 文档很好的[涵盖了这个内容][13],因此我开始将其转换为 terraform。
@@ -335,9 +335,9 @@ resource "aws_ssm_parameter" "app_password" {
}
```
-显然,这里的关键部分是 “SecureString” 类型。这会使用默认的 AWS KMS 密钥来加密数据,这对我来说并不是很直观。这比 Kubernetes 的秘密信息管理具有巨大优势,默认情况下,这些秘密信息在 etcd 中是不加密的。
+显然,这里的关键部分是 “SecureString” 类型。这会使用默认的 AWS KMS 密钥来加密数据,这对我来说并不是很直观。这比 Kubernetes 的 Secret 管理具有巨大优势,默认情况下,这些 Secret 在 etcd 中是不加密的。
-然后我为 ECS 指定了另一个本地值映射,并将其作为秘密参数传递:
+然后我为 ECS 指定了另一个本地值映射,并将其作为 Secret 参数传递:
```
container_secrets = [
@@ -372,7 +372,7 @@ module "container_definition_app" {
##### 出了个问题
-此刻,我重新部署了我的任务定义,并且非常困惑。为什么任务没有正确拉起?当新的任务定义(版本 8)可用时,我一直在控制台中看到正在运行的应用程序仍在使用先前的任务定义(版本 7)。解决这件事花费的时间比我预期的要长,但是在控制台的事件屏幕上,我注意到了 IAM 错误。我错过了一个步骤,容器无法从 AWS SSM 中读取秘密信息,因为它没有正确的 IAM 权限。这是我第一次真正对整个这件事情感到沮丧。从用户体验的角度来看,这里的反馈非常*糟糕*。如果我没有发觉的话,我会认为一切都很好,因为仍然有一个任务正在运行,我的应用程序仍然可以通过正确的 URL 访问 —— 只不过是旧的配置而已。
+此刻,我重新部署了我的任务定义,并且非常困惑。为什么任务没有正确拉起?当新的任务定义(版本 8)可用时,我一直在控制台中看到正在运行的应用程序仍在使用先前的任务定义(版本 7)。解决这件事花费的时间比我预期的要长,但是在控制台的事件屏幕上,我注意到了 IAM 错误。我错过了一个步骤,容器无法从 AWS SSM 中读取 Secret 信息,因为它没有正确的 IAM 权限。这是我第一次真正对整个这件事情感到沮丧。从用户体验的角度来看,这里的反馈非常*糟糕*。如果我没有发觉的话,我会认为一切都很好,因为仍然有一个任务正在运行,我的应用程序仍然可以通过正确的 URL 访问 —— 只不过是旧的配置而已。
在 Kubernetes 里,我会清楚地看到 pod 定义中的错误。Fargate 可以确保我的应用不会停止,这绝对是太棒了,但作为一名运维,我需要一些关于发生了什么的实际反馈。这真的不够好。我真的希望 Fargate 团队的人能够读到这篇文章,改善这种体验。
@@ -400,7 +400,7 @@ module "container_definition_app" {
#### Kubernetes 的争议
-最后就是:如果你将 Kubernetes 纯粹视为一个容器编排工具,你可能会喜欢 Fargate。然而,随着我对 Kubernetes 越来越熟悉,我开始意识到它作为一种技术的重要性 —— 不仅因为它是一个伟大的容器编排工具,而且因为它的设计模式 —— 它是声明性的、API 驱动的平台。 在*整个* Fargate 过程期间发生的一个简单的事情是,如果我删除这里某个东西,Fargate 不一定会为我重新创建它。自动缩放很不错,不需要管理服务器和操作系统的补丁及更新也很棒,但我辉觉得因为无法使用 Kubernetes 自我修复和 API 驱动模型而失去了很多。当然,Kubernetes 有一个学习曲线,但从这里的体验来看,Fargate 也是如此。
+最后就是:如果你将 Kubernetes 纯粹视为一个容器编排工具,你可能会喜欢 Fargate。然而,随着我对 Kubernetes 越来越熟悉,我开始意识到它作为一种技术的重要性 —— 不仅因为它是一个伟大的容器编排工具,而且因为它的设计模式 —— 它是声明性的、API 驱动的平台。 在*整个* Fargate 过程期间发生的一个简单的事情是,如果我删除这里某个东西,Fargate 不一定会为我重新创建它。自动缩放很不错,不需要管理服务器和操作系统的补丁及更新也很棒,但我觉得因为无法使用 Kubernetes 自我修复和 API 驱动模型而失去了很多。当然,Kubernetes 有一个学习曲线,但从这里的体验来看,Fargate 也是如此。
### 总结
@@ -419,7 +419,7 @@ via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
作者:[Lee Briggs][a]
选题:[lujun9972][b]
译者:[Bestony](https://github.com/Bestony)
-校对:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy), 临石(阿里云智能技术专家)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 5d4af3ce24b3a9e09a12e3b5b4ce923f2ceef3ad Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 12:14:58 +0800
Subject: [PATCH 0021/1154] PUB:20190413 The Fargate Illusion.md
https://linux.cn/article-10740-1.html
---
.../tech => published}/20190413 The Fargate Illusion.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190413 The Fargate Illusion.md (99%)
diff --git a/translated/tech/20190413 The Fargate Illusion.md b/published/20190413 The Fargate Illusion.md
similarity index 99%
rename from translated/tech/20190413 The Fargate Illusion.md
rename to published/20190413 The Fargate Illusion.md
index c5591968a3..ef0cc6153e 100644
--- a/translated/tech/20190413 The Fargate Illusion.md
+++ b/published/20190413 The Fargate Illusion.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10740-1.html)
[#]: subject: (The Fargate Illusion)
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
[#]: author: (Lee Briggs https://leebriggs.co.uk/)
From 8a1da3c25dd86987653ef52106f1cd9961f257db Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 12:41:26 +0800
Subject: [PATCH 0022/1154] PRF:20190314 A Look Back at the History of
Firefox.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
修正链接错误 @lujun9972 这个应该是 md 制作过程的问题,可以回头看看。
---
published/20190314 A Look Back at the History of Firefox.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/published/20190314 A Look Back at the History of Firefox.md b/published/20190314 A Look Back at the History of Firefox.md
index 6be471d64d..ac9341d9a0 100644
--- a/published/20190314 A Look Back at the History of Firefox.md
+++ b/published/20190314 A Look Back at the History of Firefox.md
@@ -92,7 +92,7 @@ via: https://itsfoss.com/history-of-firefox
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
-[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser
+[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
@@ -110,7 +110,7 @@ via: https://itsfoss.com/history-of-firefox
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
-[24]: https://en.wikipedia.org/wiki/Flock_(web_browser
+[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
From 037724e45e90fd9b4a41660ba4ed580246e0728b Mon Sep 17 00:00:00 2001
From: zgj
Date: Wed, 17 Apr 2019 13:21:39 +0800
Subject: [PATCH 0023/1154] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?=
=?UTF-8?q?=20Why=20DevOps=20is=20the=20most=20important=20=20tech=20strat?=
=?UTF-8?q?egy=20today?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... the most important tech strategy today.md | 129 ++++++++++++++++++
1 file changed, 129 insertions(+)
create mode 100644 translated/talk/20190327 Why DevOps is the most important tech strategy today.md
diff --git a/translated/talk/20190327 Why DevOps is the most important tech strategy today.md b/translated/talk/20190327 Why DevOps is the most important tech strategy today.md
new file mode 100644
index 0000000000..fe014a243a
--- /dev/null
+++ b/translated/talk/20190327 Why DevOps is the most important tech strategy today.md
@@ -0,0 +1,129 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zgj1024 "
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+[#]: subject: "Why DevOps is the most important tech strategy today"
+[#]: via: "https://opensource.com/article/19/3/devops-most-important-tech-strategy"
+[#]: author: "Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht"
+
+为何 DevOps 是如今最重要的技术策略
+======
+消除一些关于 DevOps 的疑惑
+![CICD with gears][1]
+
+很多人初学 [DevOps][2] 时,是先看到了它的某个成果,然后才问这是如何实现的。其实,要实施 DevOps,并不需要先理解某个东西为什么属于 DevOps;但理解这一点,以及理解 DevOps 策略为何重要,正是成为行业领导者还是追随者的差别。
+
+你可能听说过一些 DevOps 的令人难以置信的成果,例如生产环境非常有弹性,即便“混世猴子”([Chaos Monkey][3])程序正在其中随机切断各处的连接,每天仍可以处理数千次发布。这是令人印象深刻的,但就其本身而言,这是一个乏力的商业案例,本质上背负着[证明否定命题][4]的负担:DevOps 环境有弹性,是因为还没有观察到严重的故障……目前还没有。
+
+有很多关于 DevOps 的疑惑,并且许多人还在尝试弄清楚它的意义。下面是来自我 LinkedIn Feed 中的某个人的一个案例:
+
+> 最近我参加了一些 #DevOps 的交流会,那里一些演讲人好像在倡导 #敏捷开发是 DevOps 的子集。不知为何,我的理解恰恰相反。
+>
+> 能听一下你们的想法吗?你认为敏捷开发和 DevOps 之间是什么关系呢?
+>
+> 1. DevOps 是敏捷开发的子集
+> 2. 敏捷开发是 DevOps 的子集
+> 3. DevOps 是敏捷开发的扩展,从敏捷开发结束的地方开始
+> 4. DevOps 是敏捷开发的新版本
+>
+
+科技行业的专业人士在那篇 LinkedIn 帖子上给出了各种各样的答案,你会怎样回复呢?
+
+### DevOps 源于精益和敏捷
+
+如果我们从亨利·福特的战略以及丰田生产系统对福特模式的改进讲起,DevOps 就更有意义了。精益制造就诞生在那段历史中,人们已经对精益制造进行了深入的研究。James P. Womack 和 Daniel T. Jones 将精益思维([Lean Thinking][5])提炼为五个原则:
+ 1. 指明客户所需的价值
+ 2. 确定提供该价值的每个产品的价值流,并对当前提供该价值所需的所有浪费步骤提出挑战
+ 3. 使产品通过剩余的增值步骤持续流动
+ 4. 在可以连续流动的所有步骤之间引入拉力
+ 5. 管理要尽善尽美,以便为客户服务所需的步骤数量和时间以及信息量持续下降
+
+
+精益致力于持续消除浪费,并增加流向客户的价值。通过精益的一个核心原则(单件流)很容易认识和理解这一点。我们可以做一些游戏去了解为何一次移动单件要比批量移动快得多,其中的两个游戏是[硬币游戏][6]和[飞机游戏][7]。在硬币游戏中,如果一批 20 枚硬币到顾客手中要用 2 分钟,顾客要等 2 分钟后才能拿到整批硬币。如果一次只移动一枚硬币,顾客会在大约 5 秒内得到第一枚硬币,并会持续获得硬币,直到大约 25 秒后第 20 枚硬币到达。(译者注:可参考相关演示视频)
+
+这是巨大的差异,但生活中并非所有事情都像硬币游戏中的硬币那样简单且可预测,这就是敏捷出现的原因。我们当然能在高绩效敏捷团队身上看到精益原则,但这些团队需要的不仅仅是精益。
+
+为了能够应对典型软件开发任务中的不可预见性和变化,敏捷开发的方法论会将重点放在意识、审议、决策和行动上,以便在不断变化的现实中调整方向。例如,敏捷框架(如 scrum)通过每日站立会议和冲刺评审会议等仪式提高意识。如果 scrum 团队意识到新的事实,框架允许并鼓励他们在必要时及时调整路线。
+
+要使团队做出这类决策,他们需要在高度信任的环境中具备自组织能力。以这种方式工作的高绩效敏捷团队在不断调整方向的同时实现快速的价值流动,消除走错方向所造成的浪费。
+
+### 最佳批量大小
+
+要了解 DevOps 在软件开发中的强大功能,先理解批量大小的经济学会很有帮助。请考虑以下来自 Donald Reinertsen 的[产品开发流程原则][8]的 U 型曲线优化示例:
+
+![U-curve optimization illustration of optimal batch size][9]
+
+这可以用杂货店购物来类比解释。假设你需要买一些鸡蛋,而你住的地方离商店有 30 分钟的路程。每次买一个鸡蛋(图中最左边)意味着每次要花 30 分钟的路程,这就是你的_交易成本_。_持有成本_可能是鸡蛋变质和在你的冰箱中持续地占用空间。_总成本_是_交易成本_加上你的_持有成本_。这条 U 型曲线解释了为什么对大部分人来说,一次买一打鸡蛋是他们的_最佳批量大小_。如果你就住在商店的旁边,步行到那里几乎不花费任何时间,你可能每次只会买一小盒鸡蛋,以此来节省冰箱的空间并享受新鲜的鸡蛋。
+
+这条 U 型优化曲线可以说明为什么在成功的敏捷转型中生产力会显著提高。考虑敏捷转型对组织决策的影响:在传统的层级组织中,决策权是集中的,这导致更少的人以更低的频率做出更大的决策。敏捷方法论将决策分散到认识和信息最充分的位置,即高度信任、自组织的敏捷团队,从而有效地降低了组织决策的交易成本。
+
+下面的动画演示了交易成本降低后,最佳批量大小是如何向左移动的。更频繁地做出更快的决策,这对组织的价值无论怎样强调都不为过。
+
+![U-curve optimization illustration][10]
+
+### DevOps 适合哪些地方
+
+自动化是 DevOps 最为人熟知的部分之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是免费进行测试和部署,这使我们可以利用越来越小的工作批量。较小批量的工作更容易理解、提交、测试和审查,也更容易判断何时完成。较小的批量也意味着更少的差异和风险,使其更易于部署,出现问题时也更容易排除故障和恢复。将自动化与扎实的敏捷实践相结合,我们可以使功能开发非常接近单件流,从而快速、持续地为客户提供价值。
+
+更传统地说,DevOps 被理解为一种打破开发团队和运营团队之间混乱局面的方法。在这个模型中,开发团队开发新的功能,而运营团队则保持系统的稳定和平稳运行。摩擦的发生是因为开发过程中的新功能将更改引入到系统中,从而增加了停机的风险,运营团队并不认为要对此负责,但无论如何都必须处理这一问题。DevOps 不仅仅尝试让人们一起工作,更重要的是尝试在复杂的环境中安全地进行更频繁的更改。
+
+我们可以参考 [Ron Westrum][11] 有关在复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化类型:病态型、官僚型和生成型。他发现病态型文化预示着较低的安全性,而生成型文化预示着更高的安全性(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
+
+![Three types of culture identified by Ron Westrum][12]
+
+高绩效的 DevOps 团队通过精益和敏捷的实践实现了生成型文化,这表明速度和安全性是互补的,或者说是同一枚硬币的两面。通过将决策和功能的最佳批量大小减少到非常小,DevOps 实现了更快的信息流和价值流,同时消除了浪费并降低了风险。
+
+与 Westrum 的研究一致,在提高安全性和可靠性的同时,变更也可以轻松发生。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们将获得 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都进一步降低,并实现了近乎单件流的精益流程,创造出每天数千个决策和发布的潜力,正如我们在高绩效的 DevOps 组织中看到的那样。
+
+### 流动、反馈、学习
+
+DevOps 并不止于此。我们主要讨论了 DevOps 实现的革命性流动,但通过类似的努力,精益和敏捷实践还能得到进一步放大,从而实现更快的反馈循环和更快的学习。在 [_DevOps 手册_][13] 中,作者详细解释了 DevOps 除了实现快速流动之外,还如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用精益的[改善][14](kaizen)和 scrum 的[回顾][15],高绩效的 DevOps 团队将不断推动学习和持续改进,使之深入组织的根基,实现软件产品开发行业的精益制造革命。
+
+
+### 从 DevOps 评估开始
+
+利用 DevOps 的第一步是,经过大量研究或在 DevOps 顾问和教练的帮助下,对高绩效 DevOps 团队中普遍存在的一系列维度进行评估。评估应找出需要改进的薄弱或缺失的团队规范。然后分析评估结果,找出成功几率高、能产生高影响力改进的快速见效领域。快速见效非常重要,能让团队获得解决更具挑战性领域所需的动力。团队应该提出可以快速尝试的想法,并开始推动 DevOps 转型取得实际进展。
+
+一段时间后,团队应在相同的维度上重新评估,以衡量改进情况并确立新的高影响力重点领域,并再次采纳团队的新想法。一位好的教练将根据需要进行咨询、培训、指导和支持,直到团队掌握了自己的持续改进方法,并通过不断地重新评估、试验和学习,在所有维度上都达到稳定一致的水平。
+
+在本文的[第二部分][16]中,我们将查看 Drupal 社区中 DevOps 调查的结果,并了解最有可能快速见效的领域在哪里。
+
+* * *
+
+_Rob Bayliss and Kelly Albrecht will present [DevOps: Why, How, and What][17] and host a follow-up [Birds of a Feather discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
+
+作者:[Kelly AlbrechtWilly-Peter Schaub][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/zgj1024)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc "CICD with gears"
+[2]: https://opensource.com/resources/devops
+[3]: https://github.com/Netflix/chaosmonkey
+[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
+[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
+[6]: https://youtu.be/5t6GhcvKB8o?t=54
+[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
+[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
+[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif "U-curve optimization illustration of optimal batch size"
+[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif "U-curve optimization illustration"
+[11]: https://en.wikipedia.org/wiki/Ron_Westrum
+[12]: https://opensource.com/sites/default/files/uploads/information_flow.png "Three types of culture identified by Ron Westrum"
+[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
+[14]: https://en.wikipedia.org/wiki/Kaizen
+[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
+[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
+[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
+[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
+[19]: https://events.drupal.org/seattle2019
From 8d756eb35466f3627354f105696c4d5469343336 Mon Sep 17 00:00:00 2001
From: zgj
Date: Wed, 17 Apr 2019 13:32:57 +0800
Subject: [PATCH 0024/1154] =?UTF-8?q?=E5=8E=BB=E6=8E=89=E5=8E=9F=E6=96=87?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... the most important tech strategy today.md | 130 ------------------
1 file changed, 130 deletions(-)
delete mode 100644 sources/talk/20190327 Why DevOps is the most important tech strategy today.md
diff --git a/sources/talk/20190327 Why DevOps is the most important tech strategy today.md b/sources/talk/20190327 Why DevOps is the most important tech strategy today.md
deleted file mode 100644
index 7ad9db59ea..0000000000
--- a/sources/talk/20190327 Why DevOps is the most important tech strategy today.md
+++ /dev/null
@@ -1,130 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (zgj1024 )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Why DevOps is the most important tech strategy today)
-[#]: via: (https://opensource.com/article/19/3/devops-most-important-tech-strategy)
-[#]: author: (Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht)
-
-Why DevOps is the most important tech strategy today
-======
-Clearing up some of the confusion about DevOps.
-![CICD with gears][1]
-
-Many people first learn about [DevOps][2] when they see one of its outcomes and ask how it happened. It's not necessary to understand why something is part of DevOps to implement it, but knowing that—and why a DevOps strategy is important—can mean the difference between being a leader or a follower in an industry.
-
-Maybe you've heard some the incredible outcomes attributed to DevOps, such as production environments that are so resilient they can handle thousands of releases per day while a "[Chaos Monkey][3]" is running around randomly unplugging things. This is impressive, but on its own, it's a weak business case, essentially burdened with [proving a negative][4]: The DevOps environment is resilient because a serious failure hasn't been observed… yet.
-
-There is a lot of confusion about DevOps and many people are still trying to make sense of it. Here's an example from someone in my LinkedIn feed:
-
-> Recently attended few #DevOps sessions where some speakers seemed to suggest #Agile is a subset of DevOps. Somehow, my understanding was just the opposite.
->
-> Would like to hear your thoughts. What do you think is the relationship between Agile and DevOps?
->
-> 1. DevOps is a subset of Agile
-> 2. Agile is a subset of DevOps
-> 3. DevOps is an extension of Agile, starts where Agile ends
-> 4. DevOps is the new version of Agile
->
-
-
-Tech industry professionals have been weighing in on the LinkedIn post with a wide range of answers. How would you respond?
-
-### DevOps' roots in lean and agile
-
-DevOps makes a lot more sense if we start with the strategies of Henry Ford and the Toyota Production System's refinements of Ford's model. Within this history is the birthplace of lean manufacturing, which has been well studied. In [_Lean Thinking_][5], James P. Womack and Daniel T. Jones distill it into five principles:
-
- 1. Specify the value desired by the customer
- 2. Identify the value stream for each product providing that value and challenge all of the wasted steps currently necessary to provide it
- 3. Make the product flow continuously through the remaining value-added steps
- 4. Introduce pull between all steps where continuous flow is possible
- 5. Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
-
-
-
-Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is magnitudes faster than batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
-
-This is a huge difference, but not everything in life is as simple and predictable as the penny in the Penny Game. This is where agile comes in. We certainly see lean principles on high-performing agile teams, but these teams need more than lean to do what they do.
-
-To be able to handle the unpredictability and variance of typical software development tasks, agile methodology focuses on awareness, deliberation, decision, and action to adjust course in the face of a constantly changing reality. For example, agile frameworks (like scrum) increase awareness with ceremonies like the daily standup and the sprint review. If the scrum team becomes aware of a new reality, the framework allows and encourages them to adjust course if necessary.
-
-For teams to make these types of decisions, they need to be self-organizing in a high-trust environment. High-performing agile teams working this way achieve a fast flow of value while continuously adjusting course, removing the waste of going in the wrong direction.
-
-### Optimal batch size
-
-To understand the power of DevOps in software development, it helps to understand the economics of batch size. Consider the following U-curve optimization illustration from Donald Reinertsen's _[Principles of Product Development Flow][8]:_
-
-![U-curve optimization illustration of optimal batch size][9]
-
-This can be explained with an analogy about grocery shopping. Suppose you need to buy some eggs and you live 30 minutes from the store. Buying one egg (far left on the illustration) at a time would mean a 30-minute trip each time. This is your _transaction cost_. The _holding cost_ might represent the eggs spoiling and taking up space in your refrigerator over time. The _total cost_ is the _transaction cost_ plus your _holding cost_. This U-curve explains why, for most people, buying a dozen eggs at a time is their _optimal batch size_. If you lived next door to the store, it'd cost you next to nothing to walk there, and you'd probably buy a smaller carton each time to save room in your refrigerator and enjoy fresher eggs.
-
-This U-curve optimization illustration can shed some light on why productivity increases significantly in successful agile transformations. Consider the effect of agile transformation on decision making in an organization. In traditional hierarchical organizations, decision-making authority is centralized. This leads to larger decisions made less frequently by fewer people. An agile methodology will effectively reduce an organization's transaction cost for making decisions by decentralizing the decisions to where the awareness and information is the best known: across the high-trust, self-organizing agile teams.
-
-The following animation shows how reducing transaction cost shifts the optimal batch size to the left. You can't understate the value to an organization in making faster decisions more frequently.
-
-![U-curve optimization illustration][10]
-
-### Where does DevOps fit in?
-
-Automation is one of the things DevOps is most known for. The previous illustration shows the value of automation in great detail. Through automation, we reduce our transaction costs to nearly zero, essentially getting our testing and deployments for free. This lets us take advantage of smaller and smaller batch sizes of work. Smaller batches of work are easier to understand, commit to, test, review, and know when they are done. These smaller batch sizes also contain less variance and risk, making them easier to deploy and, if something goes wrong, to troubleshoot and recover from. With automation combined with a solid agile practice, we can get our feature development very close to single piece flow, providing value to customers quickly and continuously.
-
-More traditionally, DevOps is understood as a way to knock down the walls of confusion between the dev and ops teams. In this model, development teams develop new features, while operations teams keep the system stable and running smoothly. Friction occurs because new features from development introduce change into the system, increasing the risk of an outage, which the operations team doesn't feel responsible for—but has to deal with anyway. DevOps is not just trying to get people working together, it's more about trying to make more frequent changes safely in a complex environment.
-
-We can look to [Ron Westrum][11] for research about achieving safety in complex organizations. In researching why some organizations are safer than others, he found that an organization's culture is predictive of its safety. He identified three types of culture: Pathological, Bureaucratic, and Generative. He found that the Pathological culture was predictive of less safety and the Generative culture was predictive of more safety (e.g., far fewer plane crashes or accidental hospital deaths in his main areas of research).
-
-![Three types of culture identified by Ron Westrum][12]
-
-Effective DevOps teams achieve a Generative culture with lean and agile practices, showing that speed and safety are complementary, or two sides of the same coin. By reducing the optimal batch sizes of decisions and features to become very small, DevOps achieves a faster flow of information and value while removing waste and reducing risk.
-
-In line with Westrum's research, change can happen easily with safety and reliability improving at the same time. When an agile DevOps team is trusted to make its own decisions, we get the tools and techniques DevOps is most known for today: automation and continuous delivery. Through this automation, transaction costs are reduced further than ever, and a near single piece lean flow is achieved, creating the potential for thousands of decisions and releases per day, as we've seen happen in high-performing DevOps organizations.
-
-### Flow, feedback, learning
-
-DevOps doesn't stop there. We've mainly been talking about DevOps achieving a revolutionary flow, but lean and agile practices are further amplified through similar efforts that achieve faster feedback loops and faster learning. In the [_DevOps Handbook_][13], the authors explain in detail how, beyond its fast flow, DevOps achieves telemetry across its entire value stream for fast and continuous feedback. Further, leveraging the [kaizen][14] bursts of lean and the [retrospectives][15] of scrum, high-performing DevOps teams will continuously drive learning and continuous improvement deep into the foundations of their organizations, achieving a lean manufacturing revolution in the software product development industry.
-
-### Start with a DevOps assessment
-
-The first step in leveraging DevOps is, either after much study or with the help of a DevOps consultant and coach, to conduct an assessment across a suite of dimensions consistently found in high-performing DevOps teams. The assessment should identify weak or non-existent team norms that need improvement. Evaluate the assessment's results to find quick wins—focus areas with high chances for success that will produce high-impact improvement. Quick wins are important for gaining the momentum needed to tackle more challenging areas. The teams should generate ideas that can be tried quickly and start to move the needle on the DevOps transformation.
-
-After some time, the team should reassess on the same dimensions to measure improvements and identify new high-impact focus areas, again with fresh ideas from the team. A good coach will consult, train, mentor, and support as needed until the team owns its own continuous improvement and achieves near consistency on all dimensions by continually reassessing, experimenting, and learning.
-
-In the [second part][16] of this article, we'll look at results from a DevOps survey in the Drupal community and see where the quick wins are most likely to be found.
-
-* * *
-
-_Rob_ _Bayliss and Kelly Albrecht will present[DevOps: Why, How, and What][17] and host a follow-up [Birds of a][18]_ [_Feather_][18] _[discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
-
-作者:[Kelly Albrecht][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
-[2]: https://opensource.com/resources/devops
-[3]: https://github.com/Netflix/chaosmonkey
-[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
-[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
-[6]: https://youtu.be/5t6GhcvKB8o?t=54
-[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
-[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
-[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif (U-curve optimization illustration of optimal batch size)
-[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif (U-curve optimization illustration)
-[11]: https://en.wikipedia.org/wiki/Ron_Westrum
-[12]: https://opensource.com/sites/default/files/uploads/information_flow.png (Three types of culture identified by Ron Westrum)
-[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
-[14]: https://en.wikipedia.org/wiki/Kaizen
-[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
-[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
-[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
-[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
-[19]: https://events.drupal.org/seattle2019
From 7587de3a6dda21471555f066c8753070d5144892 Mon Sep 17 00:00:00 2001
From: MjSeven
Date: Wed, 17 Apr 2019 15:36:58 +0800
Subject: [PATCH 0025/1154] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... programming languages should you learn.md | 46 ------------------
... programming languages should you learn.md | 47 +++++++++++++++++++
2 files changed, 47 insertions(+), 46 deletions(-)
delete mode 100644 sources/talk/20190208 Which programming languages should you learn.md
create mode 100644 translated/talk/20190208 Which programming languages should you learn.md
diff --git a/sources/talk/20190208 Which programming languages should you learn.md b/sources/talk/20190208 Which programming languages should you learn.md
deleted file mode 100644
index ed334b7761..0000000000
--- a/sources/talk/20190208 Which programming languages should you learn.md
+++ /dev/null
@@ -1,46 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (MjSeven)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Which programming languages should you learn?)
-[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
-[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
-
-Which programming languages should you learn?
-======
-Learning a new programming language is a great way to get ahead in your career. But which one?
-
-
-If you want to get started or get ahead in your programming career, learning a new language is a smart idea. But the huge number of languages in active use invites the question: Which programming language is the best one to know? To answer that, let's start with a simplifying question: What sort of programming do you want to do?
-
-If you want to do web programming on the client side, then the specialized languages HTML, CSS, and JavaScript—in one of its seemingly infinite dialects—are de rigueur.
-
-If you want to do web programming on the server side, the options include all of the familiar general-purpose languages: C++, Golang, Java, C#, Node.js, Perl, Python, Ruby, and so on. As a matter of course, server-side programs interact with datastores, such as relational and other databases, which means query languages such as SQL may come into play.
-
-If you're writing native apps for mobile devices, knowing the target platform is important. For Apple devices, Swift has supplanted Objective C as the language of choice. For Android devices, Java (with dedicated libraries and toolsets) remains the dominant language. There are special languages such as Xamarin, used with C#, that can generate platform-specific code for Apple, Android, and Windows devices.
-
-What about general-purpose languages? There are various choices within the usual pigeonholes. Among the dynamic or scripting languages (e.g., Perl, Python, and Ruby), there are newer offerings such as Node.js. Java and C#, which are more alike than their fans like to admit, remain the dominant statically compiled languages targeted at a virtual machine (the JVM and CLR, respectively). Among languages that compile into native executables, C++ is still in the mix, along with later arrivals such as Golang and Rust. General-purpose functional languages abound (e.g., Clojure, Haskell, Erlang, F#, Lisp, and Scala), often with passionately devoted communities. It's worth noting that object-oriented languages such as Java and C# have added functional constructs (in particular, lambdas), and the dynamic languages have had functional constructs from the start.
-
-Let me end with a pitch for C, which is a small, elegant, and extensible language not to be confused with C++. Modern operating systems are written mostly in C, with the rest in assembly language. The standard libraries on any platform are likewise mostly in C. For example, any program that issues the Hello, world! greeting does so through a call to the C library function named **write**.
-
-C serves as a portable assembly language, exposing details about the underlying system that other high-level languages deliberately hide. To understand C is thus to gain a better grasp of how programs contend for the shared system resources (processors, memory, and I/O devices) required for execution. C is at once high-level and close-to-the-metal, so unrivaled in performance—except, of course, for assembly language. Finally, C is the lingua franca among programming languages, and almost every general-purpose language supports C calls in one form or another.
-
-For a modern introduction to C, consider my book [C Programming: Introducing Portable Assembler][1]. No matter how you go about it, learn C and you'll learn a lot more than just another programming language.
-
-What programming languages do you think are important to know? Do you agree or disagree with these recommendations? Let us know in the comments!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
-
-作者:[Marty Kalin][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mkalindepauledu
-[b]: https://github.com/lujun9972
-[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
diff --git a/translated/talk/20190208 Which programming languages should you learn.md b/translated/talk/20190208 Which programming languages should you learn.md
new file mode 100644
index 0000000000..8806b8cfc0
--- /dev/null
+++ b/translated/talk/20190208 Which programming languages should you learn.md
@@ -0,0 +1,47 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Which programming languages should you learn?)
+[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+应该学习哪种编程语言?
+======
+学习一门新的编程语言是在你的职业生涯中继续前进的好方法,但是应该学习哪一门呢?
+
+
+如果你想要在编程生涯中起步或更进一步,那么学习一门新语言是一个聪明的主意。但是,大量正在活跃使用的语言引出了一个问题:哪种编程语言是最值得学习的?要回答这个问题,让我们从一个简化的问题开始:你想做什么类型的编程?
+
+如果你想在客户端进行网络编程,那么 HTML、CSS 和 JavaScript(它有着看似无穷无尽的方言)这些专用语言是必须要学习的。
+
+
+如果你想在服务器端进行 Web 编程,那么选项包括所有常见的通用语言:C++、Golang、Java、C#、Node.js、Perl、Python、Ruby 等等。当然,服务器端程序要与数据存储(例如关系数据库和其他数据库)打交道,这意味着 SQL 等查询语言可能会派上用场。
+
+如果你正在为移动设备编写原生应用程序,那么了解目标平台非常重要。对于 Apple 设备,Swift 已经取代 Objective C 成为首选语言。对于 Android 设备,Java(带有专用库和工具集)仍然是主要语言。还有一些特殊语言,如与 C# 一起使用的 Xamarin,可以为 Apple、Android 和 Windows 设备生成平台特定的代码。
+
+那么通用语言呢?在常见的分类中有各种各样的选择。在*动态*或*脚本*语言(如 Perl、Python 和 Ruby)中,有一些较新的选择,如 Node.js。Java 和 C# 的相似之处比它们的粉丝愿意承认的还要多,它们仍然是针对虚拟机(分别是 JVM 和 CLR)的主要*静态编译*语言。在编译为*原生可执行文件*的语言中,C++ 仍然占有一席之地,还有后来出现的 Golang 和 Rust 等。通用*函数式*语言比比皆是(如 Clojure、Haskell、Erlang、F#、Lisp 和 Scala),它们通常都有热情投入的社区。值得注意的是,面向对象语言(如 Java 和 C#)已经添加了函数式构造(特别是 lambda),而动态语言从一开始就有函数式构造。
+
+让我以 C 语言结尾,它是一种小巧、优雅、可扩展的语言,不要与 C++ 混淆。现代操作系统主要用 C 语言编写,其余部分用汇编语言编写。任何平台上的标准库大多也是用 C 语言编写的。例如,任何打印 `Hello, world!` 这种问候的程序,都是通过调用名为 **write** 的 C 库函数来实现的。
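+
+你可以在 Linux 命令行上自行验证这一点。下面是一个小示意,假设系统上安装了 strace(注释中的输出仅为示意,具体数值可能因系统而异):
+
+```
+# 跟踪 echo 发出的 write 系统调用
+strace -e trace=write echo 'Hello, world!'
+# 跟踪输出中会包含类似这样的一行:
+# write(1, "Hello, world!\n", 14) = 14
+```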
+
+C 作为一种可移植的汇编语言,公开了其他高级语言有意隐藏的底层系统细节。因此,理解 C 可以让你更好地掌握程序如何竞争执行所需的共享系统资源(如处理器、内存和 I/O 设备)。C 语言既高级又贴近硬件,因此在性能方面无与伦比,当然,汇编语言除外。最后,C 是编程语言中的通用语,几乎所有通用语言都支持某种形式的 C 调用。
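+
+C 作为通用语这一点很容易演示。下面是一个小示意,假设在装有 Python 3 的 Linux 系统上运行,通过 Python 的 ctypes 模块直接调用 C 标准库的 write 函数:
+
+```
+# ctypes.CDLL(None) 可以访问进程中已加载的 C 标准库符号(适用于 Linux)
+python3 -c 'import ctypes; libc = ctypes.CDLL(None); libc.write(1, b"Hello from C library\n", 21)'
+```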
+
+有关现代 C 语言的介绍,可以参考我的书 [C Programming: Introducing Portable Assembler][1]。无论你选择怎样的学习方式,学会 C 语言后,你学到的将远不止是又一门编程语言。
+
+你认为学习哪些编程语言很重要?你是否同意这些建议?请在评论中告知我们!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
From b6327abe7f8c57f0275a145ac631b25671c1d756 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 19:26:37 +0800
Subject: [PATCH 0026/1154] PRF:20190405 Streaming internet radio with
RadioDroid.md
@tomjlw
---
...treaming internet radio with RadioDroid.md | 41 ++++++++++---------
1 file changed, 22 insertions(+), 19 deletions(-)
diff --git a/translated/tech/20190405 Streaming internet radio with RadioDroid.md b/translated/tech/20190405 Streaming internet radio with RadioDroid.md
index 7dcb940ed6..3a70a29b28 100644
--- a/translated/tech/20190405 Streaming internet radio with RadioDroid.md
+++ b/translated/tech/20190405 Streaming internet radio with RadioDroid.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Streaming internet radio with RadioDroid)
@@ -9,45 +9,48 @@
使用 RadioDroid 流传输网络广播
======
-通过简单的设置使用你家中的音响收听你最爱的网络电台
-![编程中的女士][1]
-最近在线新闻通讯社对 [Google 的 Chromecast 音频设备的推出][2]发出哀叹。该设备在音频媒体界受到[好评][3],因此我已经在考虑入手一个。基于 Chromecast 毁灭的消息,我决定在它们全部被打包扔进垃圾堆之前以一个合理价位买一个。
+> 通过简单的设置使用你家中的音响收听你最爱的网络电台。
-我在 [MobileFun][4] 上找到一个放进我的订单中。这个设备最终到货了。它被包在一个常用但是最简的 Google 包装袋中,外面打印着非常简短的开始指南。
+![][1]
+
+最近网络媒体对 [Google 的 Chromecast 音频设备的下架][2]发出叹息。该设备在音频媒体界备受[好评][3],因此我已经在考虑入手一个。基于 Chromecast 退场的消息,我决定在它们全部被打包扔进垃圾堆之前以一个合理价位买一个。
+
+我在 [MobileFun][4] 上找到一个放进我的订单中。这个设备最终到货了。它被包在一个普普通通、简简单单的 Google 包装袋中,外面打印着非常简短的使用指南。
![Google Chromecast 音频][5]
-我用光缆把它通过我的数模转换器的光学 S/PDIF 连接接入家庭音响,希望以此能提供最佳的音质。
+我把它通过我的数模转换器的光纤 S/PDIF 连接接入到家庭音响,希望以此能提供最佳的音质。
-安装过程并无纰漏,在五分钟后我就准备好播放一些音乐了。我知道一些安卓应用支持 Chromecast,因此我决定用 Google Play 音乐测试它。意料之中,它工作得不错,音乐效果听上去也相当好。然而作为一个具有开源精神的人,我决定看看我能找到什么开源播放器能兼容 Chromecast。
+安装过程并无纰漏,在五分钟后我就可以播放一些音乐了。我知道一些安卓应用支持 Chromecast,因此我决定用 Google Play Music 测试它。意料之中,它工作得不错,音乐效果听上去也相当好。然而作为一个具有开源精神的人,我决定看看我能找到什么开源播放器能兼容 Chromecast。
### RadioDroid 的救赎
-[RadioDroid Android application][6] 满足条件。它开源并且可从 [GitHub][7],Google Play 以及 [F-Droid][8] 上获取。根据帮助文档,RadioDroid 从 [Community Radio Browser][9] 网页寻找播放流。因此我决定在我的手机上安装尝试一下。
+
+[RadioDroid 安卓应用][6] 满足条件。它是开源的,并且可从 [GitHub][7]、Google Play 以及 [F-Droid][8] 上获取。根据帮助文档,RadioDroid 从 [Community Radio Browser][9] 网页寻找播放流。因此我决定在我的手机上安装尝试一下。
![RadioDroid][10]
-安装过程快速平滑,RadioDroid 打开展示当地电台十分迅速。你可以在这个屏幕截图的右上方附近看到 Chromecast 按钮(看上去像一个有着波阵面的长方形图标)。
+安装过程快速顺利,RadioDroid 打开展示当地电台十分迅速。你可以在这个屏幕截图的右上方附近看到 Chromecast 按钮(看上去像一个有着波阵面的长方形图标)。
-我尝试了几个当地电台。应用可靠地在我手机喇叭上播放了音乐。但是我不得不摆弄 Chromecast 按钮来通过 Chromecast 把音乐传到流上。但是它确实可以做到流传输。
+我尝试了几个当地电台。这个应用可靠地在我手机喇叭上播放了音乐。但是我不得不摆弄 Chromecast 按钮,才能让音乐通过 Chromecast 流传输。不过它确实可以做到流传输。
-我决定找一些我最爱的网络广播电台,在法国马赛的 [格雷诺耶广播电台][11]。在 RadioDroid 上有许多找到电台的方法。其中一种是使用标签——当地,最流行等——就在电台列表上方。其中一个标签是国家,我找到法国,在其1500个电台中划来划去寻找格雷诺耶广播电台。另一种办法是使用屏幕上方的查询按钮;查询迅速找到了那家美妙的电台。我尝试了其它几次查询它们都返回了合理的信息。
+我决定找一下我喜爱的网络广播电台:法国马赛的 [格雷诺耶广播电台][11]。在 RadioDroid 上有许多找到电台的方法。其中一种是使用标签——“当地”、“最流行”等——就在电台列表上方。其中一个标签是国家,我找到法国,在其 1500 个电台中划来划去寻找格雷诺耶广播电台。另一种办法是使用屏幕上方的查询按钮;查询迅速找到了那家美妙的电台。我尝试了其它几次查询它们都返回了合理的信息。
-回到当地标签,我在列表中翻来覆去,发现“当地”的定义似乎是“在同一个国家”。因此尽管西雅图,波特兰,旧金山,洛杉矶和朱诺比多伦多更靠近我的家,我并没有在当地标签中看到它们。然而通过使用查询功能,我可以发现所有名字中带有西雅图的电台。
+回到“当地”标签,我在列表中翻来覆去,发现“当地”的定义似乎是“在同一个国家”。因此尽管西雅图、波特兰、旧金山、洛杉矶和朱诺比多伦多更靠近我的家,我并没有在“当地”标签中看到它们。然而通过使用查询功能,我可以发现所有名字中带有西雅图的电台。
-语言标签使我找到所有用葡语(及葡语方言)播报的电台。我很快发现了另一个最爱的电台 [91 Rock Curitiba][12]。
+“语言”标签使我找到所有用葡语(及葡语方言)播报的电台。我很快发现了另一个最爱的电台 [91 Rock Curitiba][12]。
-接着灵感来了,现在是春天了但又如何呢?让我们听一些圣诞音乐。意料之中,搜寻圣诞把我引到了 [181.FM – Christmas Blender][13]。不错,一两分钟的欣赏对我就够了。
+接着灵感来了,虽然现在是春天了,但又如何呢?让我们听一些圣诞音乐。意料之中,搜寻圣诞把我引到了 [181.FM – Christmas Blender][13]。不错,一两分钟的欣赏对我就够了。
-因此总的来说,我推荐 RadioDroid 和 Chromecast 的组合作为一种用家庭音响以合理价位播放网络电台的良好方式、
+因此总的来说,我推荐把 RadioDroid 和 Chromecast 的组合作为一种用家庭音响以合理价位播放网络电台的良好方式。
-### 对于音乐方面。。。
+### 对于音乐方面……
最近我从 [Blue Coast Music][16] 商店里选了一个 [Qua Continuum][15] 创作的叫作 [Continuum One][14] 的有趣的氛围(甚至无节拍)音乐专辑。
-Blue Coast 有许多可提供给开源音乐爱好者的。音乐可以无需通过那些奇怪的平台专用下载管理器下载(有时以物理形式)。它通常提供几种形式,包括 WAV,FLAC 和 DSD;为 WAV 和 FLAC 提供不同的字长和比特率,包括 16/44.1,24/96和24/192,针对 DSD 则有2.8,5.6,和11.2 MHz。音乐是用优秀的仪器精心录制的。不幸的是,我并没有找到许多符合我口味的音乐尽管我喜欢 Blue Coast 上能获取的几个艺术家,包括 Qua Continuum,[Art Lande][17] 以及 [Alex De Grassi][18]。
+Blue Coast 有许多可提供给开源音乐爱好者的。音乐可以无需通过那些奇怪的平台专用下载管理器下载(有时以物理形式)。它通常提供几种形式,包括 WAV、FLAC 和 DSD;WAV 和 FLAC 还提供不同的字长和比特率,包括 16/44.1、24/96 和 24/192,针对 DSD 则有 2.8、5.6 和 11.2 MHz。音乐是用优秀的仪器精心录制的。不幸的是,我并没有找到许多符合我口味的音乐,尽管我喜欢 Blue Coast 上能获取的几个艺术家,包括 Qua Continuum,[Art Lande][17] 以及 [Alex De Grassi][18]。
-在 [Bandcamp][19] 上,我挑选了 [Emancipator's Baralku][20] 和 [Framework's Tides][21],两个都是我喜欢的。两位艺术家创作的音乐符合我的口味——电音但又不(总体来说)是舞蹈,他们的音乐旋律优美,副歌也很好听。有许多可以让开源音乐发烧友爱上 Bandcamp 的东西比如买前试听整首歌的服务;没有垃圾软件下载器;大量与音乐家的合作;以及对 [Creative Commons music][22] 的支持。
+在 [Bandcamp][19] 上,我挑选了 [Emancipator's Baralku][20] 和 [Framework's Tides][21],两个都是我喜欢的。两位艺术家创作的音乐符合我的口味——电音但又(总体来说)不是舞蹈,它们的音乐旋律优美,副歌也很好听。有许多可以让开源音乐发烧友爱上 Bandcamp 的东西,比如买前试听整首歌的服务;没有垃圾软件下载器;与大量音乐家的合作;以及对 [Creative Commons music][22] 的支持。
--------------------------------------------------------------------------------
@@ -56,7 +59,7 @@ via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From b2d2f718c6d4b11d439e3d5efc75b50766dbfd98 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 19:28:12 +0800
Subject: [PATCH 0027/1154] PUB:20190405 Streaming internet radio with
RadioDroid.md
@tomjlw https://linux.cn/article-10741-1.html
---
.../20190405 Streaming internet radio with RadioDroid.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190405 Streaming internet radio with RadioDroid.md (98%)
diff --git a/translated/tech/20190405 Streaming internet radio with RadioDroid.md b/published/20190405 Streaming internet radio with RadioDroid.md
similarity index 98%
rename from translated/tech/20190405 Streaming internet radio with RadioDroid.md
rename to published/20190405 Streaming internet radio with RadioDroid.md
index 3a70a29b28..801098b3a1 100644
--- a/translated/tech/20190405 Streaming internet radio with RadioDroid.md
+++ b/published/20190405 Streaming internet radio with RadioDroid.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10741-1.html)
[#]: subject: (Streaming internet radio with RadioDroid)
[#]: via: (https://opensource.com/article/19/4/radiodroid-internet-radio-player)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
From 885f38addaf58b68f037b53a3d3846092e9de4d5 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 22:24:57 +0800
Subject: [PATCH 0028/1154] PRF:20190402 Parallel computation in Python with
Dask.md
@geekpi
---
...arallel computation in Python with Dask.md | 22 +++++++++++--------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/translated/tech/20190402 Parallel computation in Python with Dask.md b/translated/tech/20190402 Parallel computation in Python with Dask.md
index ed29af7b9a..a1376d3f8b 100644
--- a/translated/tech/20190402 Parallel computation in Python with Dask.md
+++ b/translated/tech/20190402 Parallel computation in Python with Dask.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Parallel computation in Python with Dask)
@@ -9,14 +9,17 @@
使用 Dask 在 Python 中进行并行计算
======
-Dask 库将 Python 计算扩展到多个核心甚至是多台机器。
+
+> Dask 库可以将 Python 计算扩展到多个核心甚至是多台机器。
+
![Pair programming][1]
-关于 Python 性能的一个常见抱怨是[全局解释器锁][2](GIL)。由于 GIL,一次只能有一个线程执行 Python 字节码。因此,即使在现代的多核机器上,使用线程也不会加速计算。
+关于 Python 性能的一个常见抱怨是[全局解释器锁][2](GIL)。由于 GIL,同一时刻只能有一个线程执行 Python 字节码。因此,即使在现代的多核机器上,使用线程也不会加速计算。
-但当你需要并行化到多核时,你不需要停止使用 Python:**[Dask][3]** 库可以将计算扩展到多个内核甚至多个机器。某些设置在数千台机器上配置 Dask,每台机器都有多个内核。虽然存在扩展限制,但并不容易达到。
+但当你需要并行化到多核时,你不需要放弃使用 Python:[Dask][3] 库可以将计算扩展到多个内核甚至多个机器。某些设置可以在数千台机器上配置 Dask,每台机器都有多个内核。虽然存在扩展规模的限制,但一般达不到。
虽然 Dask 有许多内置的数组操作,但举一个非内置的例子,我们可以计算[偏度][4]:
+
```
import numpy
import dask
@@ -31,11 +34,12 @@ skewness = ((unnormalized_moment - (3 * mean * stddev ** 2) - mean ** 3) /
stddev ** 3)
```
-请注意,每个操作将根据需要使用尽可能多的内核。这将在所有核心上并行化,即使在计算数十亿个元素时也是如此。
+请注意,每个操作将根据需要使用尽可能多的内核。这将在所有核心上并行化执行,即使在计算数十亿个元素时也是如此。
-当然,并不是我们所有的操作都可由库并行化,有时我们需要自己实现并行性。
+当然,并不是我们所有的操作都可由这个库并行化,有时我们需要自己实现并行性。
为此,Dask 有一个“延迟”功能:
+
```
import dask
@@ -47,9 +51,9 @@ total = dask.delayed(sum)(palindromes)
result = total.compute()
```
-这将计算字符串是否是回文并返回回回文的数量。
+这将计算字符串是否是回文并返回回文的数量。
-虽然 Dask 是为数据科学家创建的,但它绝不仅限于数据科学。每当我们需要在 Python 中并行化任务时,我们可以使用 Dask-有 GIL 或没有 GIL。
+虽然 Dask 是为数据科学家创建的,但它绝不仅限于数据科学。每当我们需要在 Python 中并行化任务时,我们可以使用 Dask —— 无论有没有 GIL。
--------------------------------------------------------------------------------
@@ -58,7 +62,7 @@ via: https://opensource.com/article/19/4/parallel-computation-python-dask
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 4f939961b7abbd23619f6f50d14f4cd386e20379 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 17 Apr 2019 22:26:02 +0800
Subject: [PATCH 0029/1154] PUB:20190402 Parallel computation in Python with
Dask.md
@geekpi https://linux.cn/article-10742-1.html
---
.../20190402 Parallel computation in Python with Dask.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190402 Parallel computation in Python with Dask.md (97%)
diff --git a/translated/tech/20190402 Parallel computation in Python with Dask.md b/published/20190402 Parallel computation in Python with Dask.md
similarity index 97%
rename from translated/tech/20190402 Parallel computation in Python with Dask.md
rename to published/20190402 Parallel computation in Python with Dask.md
index a1376d3f8b..818b242cc6 100644
--- a/translated/tech/20190402 Parallel computation in Python with Dask.md
+++ b/published/20190402 Parallel computation in Python with Dask.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10742-1.html)
[#]: subject: (Parallel computation in Python with Dask)
[#]: via: (https://opensource.com/article/19/4/parallel-computation-python-dask)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
From 6ed00012baf3ea1b9bebe9803edc08618df45d5a Mon Sep 17 00:00:00 2001
From: lctt-bot
Date: Wed, 17 Apr 2019 17:00:52 +0000
Subject: [PATCH 0030/1154] Revert "Translate Request"
This reverts commit 8275cae67ee808f56f3fdbcfd0f41a261715af75.
---
sources/tech/20180629 100 Best Ubuntu Apps.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md
index 487ebd6e7d..581d22b527 100644
--- a/sources/tech/20180629 100 Best Ubuntu Apps.md
+++ b/sources/tech/20180629 100 Best Ubuntu Apps.md
@@ -1,4 +1,3 @@
-DaivdMax2006 is translating
100 Best Ubuntu Apps
======
From 88f2973806672b1feee8518d73042cbcd4916773 Mon Sep 17 00:00:00 2001
From: Liwen Jiang
Date: Wed, 17 Apr 2019 15:24:24 -0500
Subject: [PATCH 0031/1154] Submit Translated Passage for Review
Submit Translated Passage for Review
---
...lticloud-hybrid cloud joint development.md | 78 -------------------
...lticloud-hybrid cloud joint development.md | 78 +++++++++++++++++++
2 files changed, 78 insertions(+), 78 deletions(-)
delete mode 100644 sources/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
create mode 100644 translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
diff --git a/sources/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md b/sources/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
deleted file mode 100644
index 93e135224c..0000000000
--- a/sources/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
+++ /dev/null
@@ -1,78 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (tomjlw)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
-[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Cisco, Google reenergize multicloud/hybrid cloud joint development
-======
-Cisco, VMware, HPE and others tap into new Google Cloud Anthos cloud technology
-![Thinkstock][1]
-
-Cisco and Google have expanded their joint cloud-development activities to help customers more easily build secure multicloud and hybrid applications everywhere from on-premises data centers to public clouds.
-
-**[Check out [what hybrid cloud computing is][2] and learn [what you need to know about multi-cloud][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**
-
-The expansion centers around Google’s new open-source hybrid cloud package called Anthos, which was introduced at the company’s Google Next event this week. Anthos is based on – and supplants – the company's existing Google Cloud Service beta. Anthos will let customers run applications, unmodified, on existing on-premises hardware or in the public cloud and will be available on [Google Cloud Platform][5] (GCP) with [Google Kubernetes Engine][6] (GKE), and in data centers with [GKE On-Prem][7], the company says. Anthos will also let customers for the first time manage workloads running on third-party clouds such as AWS and Azure from the Google platform without requiring administrators and developers to learn different environments and APIs, Google said.
-
-Essentially, Anthos offers a single managed service that promises to let customers manage and deploy workloads across clouds, without having to worry about dissimilar environments or APIs.
-
-As part of the rollout, Google also announced a beta program called [Anthos Migrate][8] that Google says auto-migrates VMs from on-premises, or other clouds, directly into containers in GKE. “This unique migration technology lets you migrate and modernize your infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications,” Google said. It gives companies the flexibility to move on-prem apps to a cloud environment at the customer's pace, Google said.
-
-### Cisco and Google
-
-For its part Cisco announced support of Anthos and promised to tightly integrate it with Cisco data center-technologies, such as its HyperFlex hyperconverged package, Application Centric Infrastructure (Cisco’s flagship SDN offering), SD-WAN and Stealthwatch Cloud. The integrations will enable a consistent, cloud-like experience whether on-prem or in the cloud with automatic upgrades to the latest versions and security patches, Cisco stated.
-
-"Google Cloud’s expertise in containerization and service mesh – Kubernetes and Istio, respectively – as well as their leadership in the developer community, combined with Cisco’s enterprise-class networking, compute, storage and security products and services makes for a winning combination for our customers," [wrote][9] Kip Compton, Cisco senior vice president, Cloud Platform and Solutions Group. “The Cisco integrations for Anthos will help customers build and manage multicloud and hybrid applications across their on-premises datacenters and public clouds and let them focus on innovation and agility without compromising security or increasing complexity.”
-
-### Google Cloud and Cisco
-
-Eyal Manor, vice president, engineering at Google Cloud, [wrote][10] that with Cisco’s support for Anthos, customers will be able to:
-
- * Benefit from a fully-managed service, like GKE, and Cisco’s hyperconverged infrastructure, networking, and security technologies.
- * Operate consistently across an enterprise data center and the cloud.
- * Consume cloud services from an enterprise data center.
- * Modernize now on premises with the latest cloud technologies.
-
-
-
-Cisco and Google have been working closely together since October 2017, when the companies said they were working on an open hybrid cloud platform that bridges on-premises and cloud environments. That package, [Cisco Hybrid Cloud Platform for Google Cloud][11], became generally available in September 2018. It lets customers develop enterprise-grade capabilities from Google Cloud-managed Kubernetes containers that include Cisco networking and security technology as well as service mesh monitoring from Istio.
-
-Google says Istio’s open-source, container- and microservice-optimized technology offers developers a uniform way to connect, secure, manage and monitor microservices across clouds through service-to-service level mTLS [Mutual Transport Layer Security] authentication access control. As a result, customers can easily implement new, portable services and centrally configure and manage those services.
-
-Cisco wasn’t the only vendor to announce support for Anthos. At least 30 other big Google partners including [VMware][12], [Dell EMC][13], [HPE][14], Intel, and Lenovo committed to delivering Anthos on their own hyperconverged infrastructure for their customers, Google stated.
-
-Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
-[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
-[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
-[4]: https://www.networkworld.com/newsletters/signup.html
-[5]: https://cloud.google.com/
-[6]: https://cloud.google.com/kubernetes-engine/
-[7]: https://cloud.google.com/gke-on-prem/
-[8]: https://cloud.google.com/contact/
-[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
-[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
-[11]: https://cloud.google.com/cisco/
-[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
-[13]: https://www.dellemc.com/en-us/index.htm
-[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
-[15]: https://www.facebook.com/NetworkWorld/
-[16]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md b/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
new file mode 100644
index 0000000000..1fc3d0c230
--- /dev/null
+++ b/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: (tomjlw)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
+[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+思科、谷歌重新赋能多云/混合云联合开发
+======
+思科、VMware、HPE 与其它公司采用新的 Google Cloud Anthos 云技术
+![Thinkstock][1]
+
+思科与谷歌已经扩展了它们的联合云开发活动,以帮助其客户在从本地数据中心到公共云的任何地方,更简便地搭建安全的多云及混合云应用。
+
+**[查看[什么是混合云计算][2],并了解[关于多云你所需要知道的][3]。[注册 Network World 时事通讯][4]以定期接收资讯。]**
+
+这次扩展围绕着谷歌在本周的 Google Next 活动上发布的新开源混合云包 Anthos 展开。Anthos 基于并取代了谷歌现有的谷歌云服务测试版。Anthos 将让客户无须修改应用,就能在现有的本地硬件或公共云上运行应用。据谷歌说,它可以在[谷歌云平台][5](GCP)上与[谷歌 Kubernetes 引擎][6](GKE)一起使用,也可以在数据中心中与 [GKE On-Prem][7] 一起使用。谷歌说,Anthos 首次让客户无需让管理员和开发者学习不同的环境和 API,就能从谷歌平台管理运行在 AWS 和 Azure 等第三方云上的工作负荷。
+
+从本质上讲,Anthos 提供了一个单一的托管服务,使客户无须担心不同的环境或 API,就能跨云管理和部署工作负荷。
+
+作为此次发布的一部分,谷歌还宣布了一个叫做 [Anthos Migrate][8] 的测试计划,它能够将虚拟机从本地环境或者其它云自动迁移至 GKE 上的容器中。谷歌说,“这种独特的迁移技术使你无须修改原来的虚拟机或者应用,就能以一种流畅的方式迁移并现代化你的基础架构”。谷歌称,它给予了公司按自己的节奏将本地应用迁移到云环境的灵活性。
+
+### 思科和谷歌
+
+就思科而言,它宣布了对 Anthos 的支持,并承诺将其与思科的数据中心技术紧密集成,例如 HyperFlex 超融合包、应用中心基础架构(思科的旗舰 SDN 方案)、SD-WAN 和 Stealthwatch 云。思科表示,无论是在本地还是在云端,这种集成都将通过自动升级到最新版本和安全补丁,提供一致的、云般的体验。
+
+“谷歌云在容器和服务网格方面(分别是 Kubernetes 和 Istio)的专业知识,以及它们在开发者社区的领导力,结合思科的企业级网络、计算、存储和安全产品及服务,将为我们的客户促成一次强强联合。”思科云平台和解决方案集团资深副总裁 Kip Compton 这样[写道][9],“思科针对 Anthos 的集成将会帮助客户跨本地数据中心和公共云搭建、管理多云及混合云应用,让他们专注于创新和灵活性,同时不会影响安全性或增加复杂性。”
+
+### 谷歌云和思科
+
+谷歌云工程副总裁 Eyal Manor [写道][10],通过思科对 Anthos 的支持,客户将能够:
+
+* 受益于全托管的服务,例如 GKE,以及思科的超融合基础架构、网络和安全技术。
+* 在企业数据中心和云上进行一致的操作。
+* 在企业数据中心使用云服务。
+* 用最新的云技术实现本地基础架构的现代化。
+
+
+
+思科和谷歌自 2017 年 10 月起就开始紧密合作,当时两家公司表示正在开发一个可以连接本地架构和云环境的开放混合云平台。该平台,即[为谷歌云打造的思科混合云平台][11],已于 2018 年 9 月正式可用。它让客户能够基于谷歌云托管的 Kubernetes 容器开发企业级能力,其中包含思科的网络和安全技术,以及来自 Istio 的服务网格监控。
+
+谷歌表示,Istio 的开源的、为容器和微服务优化的技术,为开发者提供了一种统一的方式,通过服务到服务级的 mTLS(双向传输层安全)认证访问控制,来跨云连接、保护、管理和监控微服务。因此,客户能够轻松地部署新的可移植的服务,并集中地配置和管理这些服务。
+
+思科并不是唯一宣布支持 Anthos 的供应商。谷歌表示,至少 30 家大型谷歌合作伙伴,包括 [VMware][12]、[Dell EMC][13]、[HPE][14]、Intel 和 Lenovo,都承诺在它们自己的超融合基础架构上为其客户交付 Anthos。
+
+在 [Facebook][15] 和 [LinkedIn][16] 上加入 Network World 社区,就热门话题发表评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
+[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
+[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: https://cloud.google.com/
+[6]: https://cloud.google.com/kubernetes-engine/
+[7]: https://cloud.google.com/gke-on-prem/
+[8]: https://cloud.google.com/contact/
+[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
+[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
+[11]: https://cloud.google.com/cisco/
+[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
+[13]: https://www.dellemc.com/en-us/index.htm
+[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
+[15]: https://www.facebook.com/NetworkWorld/
+[16]: https://www.linkedin.com/company/network-world
+
From 1a61451939ce69d9d4125b65335b691712e3cbd7 Mon Sep 17 00:00:00 2001
From: "GJ.Zhang"
Date: Thu, 18 Apr 2019 05:12:46 +0800
Subject: [PATCH 0032/1154] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...rn Command Line HTTP Client For Curl And Wget Alternative.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
index 46298a6fa0..71224c4917 100644
--- a/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
+++ b/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (zgj1024)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 19dbb915201fc37940af1cf6600e3f8054785d7a Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 18 Apr 2019 08:52:03 +0800
Subject: [PATCH 0033/1154] translated
---
...iles and Folders in Linux -Beginner Tip.md | 106 ------------------
1 file changed, 106 deletions(-)
delete mode 100644 sources/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
diff --git a/sources/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md b/sources/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
deleted file mode 100644
index d13fd912f8..0000000000
--- a/sources/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
+++ /dev/null
@@ -1,106 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to Zip Files and Folders in Linux [Beginner Tip])
-[#]: via: (https://itsfoss.com/linux-zip-folder/)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-How to Zip Files and Folders in Linux [Beginner Tip]
-======
-
-_**Brief: This quick tip shows you how to create a zip folder in Ubuntu and other Linux distributions. Both terminal and GUI methods have been discussed.**_
-
-Zip is one of the most popular archive file formats out there. With zip, you can compress multiple files into one file. This not only saves disk space, it also saves network bandwidth. This is why you’ll encounter zip files almost all the time.
-
-As a normal user, mostly you’ll unzip folders in Linux. But how do you zip a folder in Linux? This article helps you answer that question.
-
-**Prerequisite: Verify if zip is installed**
-
-Normally [zip][1] support is installed, but there’s no harm in verifying. You can run the below command to install zip and unzip support. If it’s not installed already, it will be installed now.
-
-```
-sudo apt install zip unzip
-```
-
-Now that you know your system has zip support, you can read on to learn how to zip a directory in Linux.
-
-![][2]
-
-### Zip a folder in Linux Command Line
-
-The syntax for using the zip command is pretty straightforward.
-
-```
-zip [option] output_file_name input1 input2
-```
-
-While there could be several options, I don’t want you to get confused by them. If your only aim is to create a zip folder from a bunch of files and directories, use the command like this:
-
-```
-zip -r output_file.zip file1 folder1
-```
-
-The -r option will recurse into directories and compress their contents as well. The .zip extension in the output file name is optional, as .zip is added by default.
-
-You should see the files being added to the compressed folder during the zip operation.
-
-```
-zip -r myzip abhi-1.txt abhi-2.txt sample_directory
- adding: abhi-1.txt (stored 0%)
- adding: abhi-2.txt (stored 0%)
- adding: sample_directory/ (stored 0%)
- adding: sample_directory/newfile.txt (stored 0%)
- adding: sample_directory/agatha.txt (deflated 41%)
-```
-
-You can use the -e option to [create a password protect zip folder in Linux][3].
-
-You are not always restricted to the terminal for creating zip archive files. You can do that graphically as well. Here’s how!
-
-### Zip a folder in Ubuntu Linux Using GUI
-
-_Though I have used Ubuntu here, the method should be pretty much the same in other distributions using GNOME or other desktop environment._
-
-If you want to compress a file or folder in desktop Linux, it’s just a matter of a few clicks.
-
-Go to the folder where you have the desired files (and folders) you want to compress into one zip folder.
-
-In here, select the files and folders. Now, right click and select Compress. You can do the same for a single file as well.
-
-![Select the files, right click and click compress][4]
-
-Now you can create a compressed archive file in zip, tar xz or 7z format. In case you are wondering, all these three are various compression algorithms that you can use for compressing your files.
-
-Give it the name you desire and click on Create.
-
-![Create archive file][5]
-
-It shouldn’t take long and you should see an archive file in the same directory.
-
-![][6]
-
-Well, that’s it. You successfully created a zip folder in Linux.
-
-I hope this quick little tip helped you with the zip files. Please feel free to share your suggestions.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/linux-zip-folder/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
-[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-folder-linux.png?resize=800%2C450&ssl=1
-[3]: https://itsfoss.com/password-protect-zip-file/
-[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-file-ubuntu.jpg?resize=800%2C428&ssl=1
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-folder-ubuntu-1.jpg?ssl=1
-[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-file-created-in-ubuntu.png?resize=800%2C277&ssl=1
From 519702354b6924a6f37ca6892983f400e2bd5875 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 18 Apr 2019 08:52:13 +0800
Subject: [PATCH 0034/1154] translated
---
...iles and Folders in Linux -Beginner Tip.md | 106 ++++++++++++++++++
1 file changed, 106 insertions(+)
create mode 100644 translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
diff --git a/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md b/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
new file mode 100644
index 0000000000..49857445e1
--- /dev/null
+++ b/translated/tech/20190413 How to Zip Files and Folders in Linux -Beginner Tip.md
@@ -0,0 +1,106 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Zip Files and Folders in Linux [Beginner Tip])
+[#]: via: (https://itsfoss.com/linux-zip-folder/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+如何在 Linux 中 zip 压缩文件和文件夹(初学者提示)
+======
+
+_**简介:本文向你展示了如何在 Ubuntu 和其他 Linux 发行版中创建一个 zip 文件夹。终端和 GUI 方法都有。**_
+
+zip 是最流行的归档文件格式之一。使用 zip,你可以将多个文件压缩到一个文件中。这不仅节省了磁盘空间,还节省了网络带宽。这就是为什么你几乎一直会看到 zip 文件的原因。
+
+作为普通用户,大多数情况下你会在 Linux 中解压缩文件夹。但是如何在 Linux 中压缩文件夹?本文可以帮助你回答这个问题。
+
+**先决条件:验证是否安装了 zip**
+
+通常 [zip][1] 已经安装,但验证下也没坏处。你可以运行以下命令来安装 zip 和 unzip。如果它尚未安装,它将立即安装。
+
+```
+sudo apt install zip unzip
+```
+
+现在你知道你的系统有 zip 支持,你可以继续了解如何在 Linux 中压缩一个目录。
+
+![][2]
+
+### 在 Linux 命令行中压缩文件夹
+
+zip 命令的语法非常简单。
+
+```
+zip [option] output_file_name input1 input2
+```
+
+虽然有几个选项,但我不希望你被它们弄糊涂。如果你只是想从一堆文件和目录创建一个 zip 文件夹,请使用如下命令:
+
+```
+zip -r output_file.zip file1 folder1
+```
+
+-r 选项会递归进入目录并压缩其中的内容。输出文件名中的 .zip 扩展名是可选的,因为默认情况下会添加 .zip。
+
+你应该会在 zip 操作期间看到要添加到压缩文件夹中的文件。
+
+```
+zip -r myzip abhi-1.txt abhi-2.txt sample_directory
+ adding: abhi-1.txt (stored 0%)
+ adding: abhi-2.txt (stored 0%)
+ adding: sample_directory/ (stored 0%)
+ adding: sample_directory/newfile.txt (stored 0%)
+ adding: sample_directory/agatha.txt (deflated 41%)
+```
+
+你可以使用 -e 选项[在 Linux 中创建密码保护的 zip 文件夹][3]。
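+
+下面给出一个简单的示意(其中的文件名仅为示例):`-e` 可以与 `-r` 组合使用,zip 会提示你输入并确认密码:
+
+```
+# 创建一个受密码保护的 zip 文件
+zip -er secure.zip file1 folder1
+```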
+
+你并不是只能通过终端创建 zip 归档文件。你也可以用图形方式做到这一点。下面是如何做的!
+
+### 在 Ubuntu Linux 中使用 GUI 压缩文件夹
+
+_虽然我在这里使用 Ubuntu,但在使用 GNOME 或其他桌面环境的其他发行版中,方法应该基本相同。_
+
+如果要在 Linux 桌面中压缩文件或文件夹,只需点击几下即可。
+
+进入包含你想要压缩成一个 zip 文件夹的文件(和文件夹)的目录。
+
+在这里,选择文件和文件夹。现在,右键单击并选择“压缩”。你也可以对单个文件执行相同操作。
+
+![Select the files, right click and click compress][4]
+
+现在,你可以使用 zip、tar xz 或 7z 格式创建压缩归档文件。如果你好奇的话,这三种都是不同的压缩算法,你都可以用来压缩文件。
+
+给它起一个你想要的名字,然后点击“创建”。
+
+![Create archive file][5]
+
+这不会花很长时间,你会在同一目录中看到一个归档文件。
+
+![][6]
+
+好了,就是这些。你已经成功地在 Linux 中创建了一个 zip 文件夹。
+
+我希望这篇文章能帮助你了解 zip 文件。请随时分享你的建议。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-zip-folder/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-folder-linux.png?resize=800%2C450&ssl=1
+[3]: https://itsfoss.com/password-protect-zip-file/
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-file-ubuntu.jpg?resize=800%2C428&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-folder-ubuntu-1.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-file-created-in-ubuntu.png?resize=800%2C277&ssl=1
From 11c408c17ff76908d6d609adb63cddf8e5791886 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 18 Apr 2019 08:54:38 +0800
Subject: [PATCH 0035/1154] translating
---
sources/tech/20190415 Troubleshooting slow WiFi on Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
index 52af44459a..6c3db30f25 100644
--- a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
+++ b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From adc9ab3c582c4b295f07c03fb8c404c8be972cee Mon Sep 17 00:00:00 2001
From: MjSeven
Date: Thu, 18 Apr 2019 12:55:16 +0800
Subject: [PATCH 0036/1154] Translating by MjSeven
---
.../tech/20190415 How to identify duplicate files on Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190415 How to identify duplicate files on Linux.md b/sources/tech/20190415 How to identify duplicate files on Linux.md
index 9bdc92a591..3ddbb4baaa 100644
--- a/sources/tech/20190415 How to identify duplicate files on Linux.md
+++ b/sources/tech/20190415 How to identify duplicate files on Linux.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 5e284b1a440dc2a5fa7cd9302183a2d10306cf2e Mon Sep 17 00:00:00 2001
From: zgj
Date: Thu, 18 Apr 2019 16:14:33 +0800
Subject: [PATCH 0037/1154] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...TP Client For Curl And Wget Alternative.md | 119 +++++++++---------
1 file changed, 58 insertions(+), 61 deletions(-)
rename {sources => translated}/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md (65%)
diff --git a/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
similarity index 65%
rename from sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
rename to translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
index 71224c4917..a05948c9af 100644
--- a/sources/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
+++ b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -7,86 +7,83 @@
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative
+HTTPie – 替代 Curl 和 Wget 的现代 HTTP 命令行客户端
======
-Most of the time we use Curl Command or Wget Command for file download and other purpose.
+大多数时候,我们会使用 curl 命令或 wget 命令来下载文件或用于其他用途。
-We had written **[best command line download manager][1]** in the past. You can navigate those articles by clicking the corresponding URLs.
+我们以前曾写过 **[最佳命令行下载管理器][1]** 的文章。你可以点击相应的 URL 链接来浏览这些文章。
- * **[aria2 – A Command Line Multi-Protocol Download Tool For Linux][2]**
- * **[Axel – A Lightweight Command Line Download Accelerator For Linux][3]**
- * **[Wget – A Standard Command Line Download Utility For Linux][4]**
- * **[curl – A Nifty Command Line Download Tool For Linux][5]**
+ * **[aria2 – Linux 下的多协议命令行下载工具][2]**
+ * **[Axel – Linux 下的轻量级命令行下载加速器][3]**
+ * **[Wget – Linux 下的标准命令行下载工具][4]**
+ * **[curl – Linux 下的实用的命令行下载工具][5]**
+今天我们将讨论同类话题。这个实用程序的名字叫 HTTPie。
-Today we are going to discuss about the same kind of topic. The utility name is HTTPie.
+它是一个现代命令行 HTTP 客户端,是 curl 和 wget 命令的最佳替代品。
-It’s modern command line http client and best alternative for curl and wget commands.
+### 什么是 HTTPie?
-### What Is HTTPie?
+HTTPie(发音是 aitch-tee-tee-pie)是一个命令行 HTTP 客户端。
-HTTPie (pronounced aitch-tee-tee-pie) is a command line HTTP client.
+httpie 工具是一个现代的命令行 HTTP 客户端,它让命令行界面与 Web 服务的交互变得简单。
-The httpie tool is a modern command line http client which makes CLI interaction with web services.
+它提供一个简单的 http 命令,允许使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
-It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output.
+HTTPie 能用于测试、调试以及与 HTTP 服务器交互。
-HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
+### 主要特点
-### Main Features
+ * 富有表达力的、直观的语法
+ * 格式化的、彩色化的终端输出
+ * 内置 JSON 支持(见列表后的示例)
+ * 表单和文件上传
+ * HTTPS、代理和认证
+ * 任意请求数据
+ * 自定义头部
+ * 持久化会话
+ * 类似 wget 的下载
+ * 支持 Python 2.7 和 3.x
- * Expressive and intuitive syntax
- * Formatted and colorized terminal output
- * Built-in JSON support
- * Forms and file uploads
- * HTTPS, proxies, and authentication
- * Arbitrary request data
- * Custom headers
- * Persistent sessions
- * Wget-like downloads
- * Python 2.7 and 3.x support
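+
+作为内置 JSON 支持的一个小示意(其中的 URL 和字段均为假设):`=` 表示字符串字段,`:=` 表示原始 JSON 类型(数字、布尔值等),HTTPie 默认会以 JSON 请求体发送数据:
+
+```
+http PUT example.org/api/user name=John age:=29 married:=false
+```
+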
+### 在 Linux 下如何安装 HTTPie
+大部分 Linux 发行版都提供了系统包管理器,可以用它来安装。
-
-### How To Install HTTPie In Linux?
-
-Most Linux distributions provide a package that can be installed using the system package manager.
-
-For **`Fedora`** system, use **[DNF Command][6]** to install httpie.
+**`Fedora`** 系统,使用 **[DNF 命令][6]** 来安装 httpie。
```
$ sudo dnf install httpie
```
-For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install httpie.
+**`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][7]** 或 **[APT 命令][8]** 来安装 httpie。
```
$ sudo apt install httpie
```
-For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install httpie.
+基于 **`Arch Linux`** 的系统,使用 **[Pacman 命令][9]** 来安装 httpie。
```
$ sudo pacman -S httpie
```
-For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install httpie.
+**`RHEL/CentOS`** 系统,使用 **[YUM 命令][10]** 来安装 httpie。
```
$ sudo yum install httpie
```
-For **`openSUSE Leap`** system, use **[Zypper Command][11]** to install httpie.
+**`openSUSE Leap`** 系统,使用 **[Zypper 命令][11]** 来安装 httpie。
```
$ sudo zypper install httpie
```
-### 1) How To Request A URL Using HTTPie?
+### 1) 如何使用 HTTPie 请求 URL?
-The basic usage of httpie is to request a website URL as an argument.
+httpie 的基本用法是将网站的 URL 作为参数。
```
# http 2daygeek.com
@@ -102,9 +99,9 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
-### 2) How To Download A File Using HTTPie?
+### 2) 如何使用 HTTPie 下载文件?
-You can download a file using HTTPie with the `--download` parameter. This is similar to wget command.
+你可以使用带 `--download` 参数的 HTTPie 命令下载文件。类似于 wget 命令。
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
@@ -128,7 +125,7 @@ Downloading 31.31 kB to "Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png"
Done. 31.31 kB in 0.01187s (2.58 MB/s)
```
-Alternatively you can save the output file with different name by using `-o` parameter.
+你还可以使用 `-o` 参数用不同的名称保存输出文件。
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png -o Anbox-1.png
@@ -151,11 +148,10 @@ Vary: Accept-Encoding
Downloading 31.31 kB to "Anbox-1.png"
Done. 31.31 kB in 0.01551s (1.97 MB/s)
```
+
+### 3) 如何使用 HTTPie 恢复部分下载?
-### 3) How To Resume Partial Download Using HTTPie?
-
-You can resume the download using HTTPie with the `-c` parameter.
-
+你可以使用带 `-c` 参数的 HTTPie 继续下载。
```
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
HTTP/1.1 206 Partial Content
@@ -173,24 +169,24 @@ Downloading 100.00 MB to "100MB.bin"
| 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
```
-You can verify the same in the below output.
-
+你可以根据下面的输出验证是否是同一个文件:
+
```
[email protected]:/var/log# ls -lhtr 100MB.bin
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
```
-### 5) How To Upload A File Using HTTPie?
+### 5) 如何使用 HTTPie 上传文件?
+你可以通过使用带有小于号 `<` 的 HTTPie 命令上传文件。
-You can upload a file using HTTPie with the `less-than symbol "<"` symbol.
```
$ http https://transfer.sh < Anbox-1.png
```
-### 6) How To Download A File Using HTTPie With Redirect Symbol ">"?
+### 6) 如何使用带有重定向符号">" 的 HTTPie 下载文件?
-You can download a file using HTTPie with the `redirect ">"` symbol.
+你可以使用带有 `重定向 ">"` 符号的 HTTPie 命令下载文件。
```
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
@@ -199,9 +195,10 @@ You can download a file using HTTPie with the `redirect ">"` symbol.
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
```
-### 7) Send a HTTP GET Method?
+### 7) 发送一个 HTTP GET 请求?
+
+你可以在请求中发送 HTTP GET 方法。GET 方法使用给定的 URI 从指定服务器检索信息。
-You can send a HTTP GET method in the request. The GET method is used to retrieve information from the given server using a given URI.
```
# http GET httpie.org
@@ -217,9 +214,9 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
-### 8) Submit A Form?
+### 8) 提交表单?
-Use the following format to Submit a forms. A POST request is used to send data to the server, for example, customer information, file upload, etc. using HTML forms.
+使用以下格式提交表单。POST 请求用于向服务器发送数据,例如通过 HTML 表单提交客户信息、上传文件等。
```
# http -f POST Ubuntu18.2daygeek.com hello='World'
@@ -237,7 +234,7 @@ Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
-Run the following command to see the request that is being sent.
+运行下面的指令以查看正在发送的请求。
```
# http -v Ubuntu18.2daygeek.com
@@ -264,24 +261,24 @@ Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
-### 9) HTTP Authentication?
+### 9) HTTP 认证?
+当前支持的认证方案是基本认证(Basic)和摘要认证(Digest)。
-The currently supported authentication schemes are Basic and Digest
-Basic auth
+基本认证
```
$ http -a username:password example.org
```
-Digest auth
+摘要认证
```
$ http -A digest -a username:password example.org
```
-Password prompt
-
+提示输入密码
```
$ http -a username example.org
```
@@ -292,7 +289,7 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[zgj1024](https://github.com/zgj1024)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -309,4 +306,4 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
-[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
\ No newline at end of file
From 62b9888b3413db756456d575d05c3c2eeaffd244 Mon Sep 17 00:00:00 2001
From: MjSeven
Date: Thu, 18 Apr 2019 22:00:51 +0800
Subject: [PATCH 0038/1154] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ow to identify duplicate files on Linux.md | 127 ------------------
...ow to identify duplicate files on Linux.md | 124 +++++++++++++++++
2 files changed, 124 insertions(+), 127 deletions(-)
delete mode 100644 sources/tech/20190415 How to identify duplicate files on Linux.md
create mode 100644 translated/tech/20190415 How to identify duplicate files on Linux.md
diff --git a/sources/tech/20190415 How to identify duplicate files on Linux.md b/sources/tech/20190415 How to identify duplicate files on Linux.md
deleted file mode 100644
index 3ddbb4baaa..0000000000
--- a/sources/tech/20190415 How to identify duplicate files on Linux.md
+++ /dev/null
@@ -1,127 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (MjSeven)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to identify duplicate files on Linux)
-[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-How to identify duplicate files on Linux
-======
-Some files on a Linux system can appear in more than one location. Follow these instructions to find and identify these "identical twins" and learn why hard links can be so advantageous.
-![Archana Jarajapu \(CC BY 2.0\)][1]
-
-Identifying files that share disk space relies on making use of the fact that the files share the same inode — the data structure that stores all the information about a file except its name and content. If two or more files have different names and file system locations, yet share an inode, they also share content, ownership, permissions, etc.
-
-These files are often referred to as "hard links" — unlike symbolic links that simply point to other files by containing their names. Symbolic links are easy to pick out in a file listing by the "l" in the first position and **- >** symbol that refers to the file being referenced.
-
-```
-$ ls -l my*
--rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
-lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
--rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
-```
-
-Identifying hard links in a single directory is not as obvious, but it is still quite easy. If you list the files using the **ls -i** command and sort them by inode number, you can pick out the hard links fairly easily. In this type of ls output, the first column shows the inode numbers.
-
-```
-$ ls -i | sort -n | more
- ...
- 788000 myfile <==
- 788000 mytwin <==
- 801865 Name_Labels.pdf
- 786692 never leave home angry
- 920242 NFCU_Docs
- 800247 nmap-notes
-```
-
-Scan your output looking for identical inode numbers and any matches will tell you what you want to know.
-
-**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][2] ]**
-
-If, on the other hand, you simply want to know if one particular file is hard-linked to another file, there's an easier way than scanning through a list of what may be hundreds of files. The find command's **-samefile** option will do the work for you.
-
-```
-$ find . -samefile myfile
-./myfile
-./save/mycopy
-./mytwin
-```
-
-Notice that the starting location provided to the find command will determine how much of the file system is scanned for matches. In the above example, we're looking in the current directory and subdirectories.
-
-Adding output details using find's **-ls** option might be more convincing:
-
-```
-$ find . -samefile myfile -ls
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
-```
-
-The first column shows the inode number. Then we see file permissions, links, owner, file size, date information, and the names of the files that refer to the same disk content. Note that the link field in this case is a "4" not the "3" we might expect, telling us that there's another link to this same inode as well (but outside our search range).
-
-If you want to look for all instances of hard links in a single directory, you could try a script like this that will create the list and look for the duplicates for you:
-
-```
-#!/bin/bash
-
-# seaches for files sharing inodes
-
-prev=""
-
-# list files by inode
-ls -i | sort -n > /tmp/$0
-
-# search through file for duplicate inode #s
-while read line
-do
- inode=`echo $line | awk '{print $1}'`
- if [ "$inode" == "$prev" ]; then
- grep $inode /tmp/$0
- fi
- prev=$inode
-done < /tmp/$0
-
-# clean up
-rm /tmp/$0
-
-$ ./findHardLinks
- 788000 myfile
- 788000 mytwin
-```
-
-You can also use the find command to look for files by inode number as in this command. However, this search could involve more than one file system, so it is possible that you will get false results, since the same inode number might be used in another file system where it would not represent the same file. If that's the case, other file details will not be identical.
-
-```
-$ find / -inum 788000 -ls 2> /dev/null
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
- 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
-```
-
-Note that error output was shunted off to /dev/null so that we didn't have to look at all the "Permission denied" errors that would have otherwise been displayed for other directories that we're not allowed to look through.
-
-Also, scanning for files that contain the same content but don't share inodes (i.e., simply file copies) would take considerably more time and effort.
-
-Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
-[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
-[3]: https://www.facebook.com/NetworkWorld/
-[4]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190415 How to identify duplicate files on Linux.md b/translated/tech/20190415 How to identify duplicate files on Linux.md
new file mode 100644
index 0000000000..033c3d85a1
--- /dev/null
+++ b/translated/tech/20190415 How to identify duplicate files on Linux.md
@@ -0,0 +1,124 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MjSeven)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to identify duplicate files on Linux)
+[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何识别 Linux 上的重复文件
+======
+Linux 系统上的一些文件可能出现在多个位置。按照本文指示查找并识别这些“同卵双胞胎”,还可以了解为什么硬链接会如此有利。
+![Archana Jarajapu \(CC BY 2.0\)][1]
+
+识别共享磁盘空间的文件依赖于利用文件共享相同 `inode` 这一事实。inode 是一种数据结构,它存储了一个文件除名称和内容之外的所有信息。如果两个或多个文件具有不同的名称和文件系统位置,但共享一个 inode,则它们还共享内容、所有权、权限等。
+
+这些文件通常被称为“硬链接”,它们不像符号链接(即软链接)那样仅仅通过包含其名称来指向其他文件。符号链接很容易在文件列表中识别出来:第一个位置的 “l” 以及指向被引用文件的 **->** 符号。
+
+```
+$ ls -l my*
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
+lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
+-rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
+```
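+
+顺带一提,还可以用 `stat` 命令同时查看硬链接数和 inode 编号(这是一个补充示例,假设系统带有 GNU coreutils 的 `stat`):
+
+```
+# %h 为硬链接数,%i 为 inode 编号,%n 为文件名
+$ stat -c '%h %i %n' my*
+```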
+
+识别单个目录中的硬链接并不是很明显,但它仍然非常容易。如果使用 **ls -i** 命令列出文件并按 `inode` 编号排序,则可以非常容易地挑选出硬链接。在这种类型的 `ls` 输出中,第一列显示 `inode` 编号。
+
+```
+$ ls -i | sort -n | more
+ ...
+ 788000 myfile <==
+ 788000 mytwin <==
+ 801865 Name_Labels.pdf
+ 786692 never leave home angry
+ 920242 NFCU_Docs
+ 800247 nmap-notes
+```
+
+扫描输出,查找相同的 `inode` 编号,任何匹配都会告诉你想知道的内容。
+
+**[另请参考:[Linux 疑难解答的宝贵提示和技巧][2]]**
+
+另一方面,如果你只是想知道某个特定文件是否是另一个文件的硬链接,那么有一种方法比浏览可能多达数百个文件的列表更简单:`find` 命令的 **-samefile** 选项可以帮你完成这项工作。
+
+```
+$ find . -samefile myfile
+./myfile
+./save/mycopy
+./mytwin
+```
+
+注意,提供给 `find` 命令的起始位置决定了要扫描多大范围的文件系统来进行匹配。在上面的示例中,我们查找的是当前目录及其子目录。
+
+使用 find 的 **-ls** 选项添加输出的详细信息可能更有说服力:
+
+```
+$ find . -samefile myfile -ls
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
+```
+
+第一列显示 `inode` 编号,然后我们会看到文件权限、链接、所有者、文件大小、日期信息以及引用相同磁盘内容的文件的名称。注意,在这种情况下,`link` 字段是 “4” 而不是我们可能期望的 “3”。这告诉我们还有另一个指向同一个 `inode` 的链接(但不在我们的搜索范围内)。
+
+如果你想在一个目录中查找所有硬链接的实例,可以尝试以下的脚本来创建列表并为你查找副本:
+
+```
+#!/bin/bash
+
+# searches for files sharing inodes
+
+prev=""
+
+# list files by inode
+ls -i | sort -n > /tmp/$0
+
+# search through file for duplicate inode #s
+while read line
+do
+ inode=`echo $line | awk '{print $1}'`
+ if [ "$inode" == "$prev" ]; then
+ grep $inode /tmp/$0
+ fi
+ prev=$inode
+done < /tmp/$0
+
+# clean up
+rm /tmp/$0
+
+$ ./findHardLinks
+ 788000 myfile
+ 788000 mytwin
+```
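+
+作为上面脚本的一个替代思路(补充示意,假设使用的是 GNU find),也可以直接让 `find` 筛选出硬链接数大于 1 的文件并打印其 inode 编号,再经 `sort` 排序,使共享同一 inode 的文件排在一起:
+
+```
+# -links +1 只匹配硬链接数大于 1 的文件
+# -printf '%i %p\n' 打印 inode 编号和文件路径
+$ find . -maxdepth 1 -type f -links +1 -printf '%i %p\n' | sort -n
+```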
+
+你还可以使用 `find` 命令按 `inode` 编号查找文件,如下面的命令所示。但是,此搜索可能涉及多个文件系统,因此可能会得到错误的结果。因为相同的 `inode` 编号可能会在另一个文件系统中使用,代表的却不是同一个文件。如果是这种情况,文件的其他详细信息将不相同。
+
+```
+$ find / -inum 788000 -ls 2> /dev/null
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
+ 788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
+```
+
+注意,错误输出被重定向到了 `/dev/null`,这样我们就不必查看那些因为无权浏览某些目录而产生的 “Permission denied” 错误。
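+
+针对上面提到的跨文件系统误报问题,一种可行的缓解办法(补充示例,假设使用的是 GNU find)是加上 `-xdev` 选项,把搜索限制在起始目录所在的单个文件系统内:
+
+```
+# -xdev 使 find 不跨越文件系统边界,从而避免其它文件系统中相同 inode 编号造成的误报
+$ find / -xdev -inum 788000 -ls 2> /dev/null
+```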
+
+此外,扫描包含相同内容但不共享 `inode` 的文件(即,单纯的文件拷贝)将花费更多的时间和精力。
+
+加入 [Facebook][3] 和 [LinkedIn][4] 上的 Network World 社区,对重要的话题发表评论。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
+[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
+[3]: https://www.facebook.com/NetworkWorld/
+[4]: https://www.linkedin.com/company/network-world
From aa4be1337cfce506686ca7b0926823afeee400ca Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 18 Apr 2019 22:45:21 +0800
Subject: [PATCH 0039/1154] PRF:20190317 How To Configure sudo Access In
Linux.md
@liujing97
---
...7 How To Configure sudo Access In Linux.md | 86 +++++++++----------
1 file changed, 41 insertions(+), 45 deletions(-)
diff --git a/translated/tech/20190317 How To Configure sudo Access In Linux.md b/translated/tech/20190317 How To Configure sudo Access In Linux.md
index 1a23afdd69..b4de951c9b 100644
--- a/translated/tech/20190317 How To Configure sudo Access In Linux.md
+++ b/translated/tech/20190317 How To Configure sudo Access In Linux.md
@@ -1,18 +1,16 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Configure sudo Access In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-Linux 中如何配置 sudo 访问权限?
+如何在 Linux 中配置 sudo 访问权限
======
-Linux 系统中 root 用户拥有所有的控制权力。
-
-Linux 系统中 root 是拥有最高权力的用户,可以在系统中实施任意的行为。
+Linux 系统中 root 用户拥有 Linux 中全部控制权力。Linux 系统中 root 是拥有最高权力的用户,可以在系统中实施任意的行为。
如果其他用户想去实施一些行为,不能为所有人都提供 root 访问权限。因为如果他或她做了一些错误的操作,没有办法去纠正它。
@@ -20,43 +18,40 @@ Linux 系统中 root 是拥有最高权力的用户,可以在系统中实施
我们可以把 sudo 权限发放给相应的用户来克服这种情况。
-sudo 命令提供了一种机制,它可以在不用分享 root 用户的密码的前提下,为信任的用户提供系统的管理权限。
+`sudo` 命令提供了一种机制,它可以在不用分享 root 用户的密码的前提下,为信任的用户提供系统的管理权限。
他们可以执行大部分的管理操作,但又不像 root 一样有全部的权限。
### 什么是 sudo?
-sudo 是一个程序,普通用户可以使用它以超级用户或其他用户的身份执行命令,是由安全策略指定的。
+`sudo` 是一个程序,普通用户可以使用它以超级用户或其他用户的身份执行命令,是由安全策略指定的。
sudo 用户的访问权限是由 `/etc/sudoers` 文件控制的。
### sudo 用户有什么优点?
-在 Linux 系统中,如果你不熟悉一个命令,sudo 是运行它的一个安全方式。
+在 Linux 系统中,如果你不熟悉一个命令,`sudo` 是运行它的一个安全方式。
- * Linux 系统在 `/var/log/secure` 和 `/var/log/auth.log` 文件中保留日志,并且你可以验证 sudo 用户实施了哪些行为操作。
- * 每一次它都为当前的操作提示输入密码。所以,你将会有时间去验证这个操作是不是你想要执行的。如果你发觉它是不正确的行为,你可以安全地退出而且没有执行此操作。
+* Linux 系统在 `/var/log/secure` 和 `/var/log/auth.log` 文件中保留日志,并且你可以验证 sudo 用户实施了哪些行为操作。
+* 每一次它都为当前的操作提示输入密码。所以,你将会有时间去验证这个操作是不是你想要执行的。如果你发觉它是不正确的行为,你可以安全地退出而且没有执行此操作。
+基于 RHEL 的系统(如 Redhat (RHEL)、CentOS 和 Oracle Enterprise Linux (OEL))和基于 Debian 的系统(如 Debian、Ubuntu 和 LinuxMint)在这点是不一样的。
-基于 RHEL 的系统(如 Redhat (RHEL), CentOS 和 Oracle Enterprise Linux (OEL))和基于 Debian 的系统(如 Debian, Ubuntu 和 LinuxMint)在这点是不一样的。
-
-我们将会教你如何在本文中的两种发行版中执行该操作。
+我们将会教你如何在本文中提及的两种发行版中执行该操作。
这里有三种方法可以应用于两个发行版本。
- * 增加用户到相应的组。基于 RHEL 的系统,我们需要添加用户到 `wheel` 组。基于 Debain 的系统,我们添加用户到 `sudo` 或 `admin` 组。
- * 手动添加用户到 `/etc/group` 文件中。
- * 用 visudo 命令添加用户到 `/etc/sudoers` 文件中。
-
-
+* 增加用户到相应的组。基于 RHEL 的系统,我们需要添加用户到 `wheel` 组。基于 Debain 的系统,我们添加用户到 `sudo` 或 `admin` 组。
+* 手动添加用户到 `/etc/group` 文件中。
+* 用 `visudo` 命令添加用户到 `/etc/sudoers` 文件中。
### 如何在 RHEL/CentOS/OEL 系统中配置 sudo 访问权限?
-在基于 RHEL 的系统中(如 Redhat (RHEL), CentOS 和 Oracle Enterprise Linux (OEL)),使用下面的三个方法就可以做到。
+在基于 RHEL 的系统中(如 Redhat (RHEL)、CentOS 和 Oracle Enterprise Linux (OEL)),使用下面的三个方法就可以做到。
-### 方法 1:在 Linux 中如何使用 wheel 组为普通用户授予超级用户访问权限?
+#### 方法 1:在 Linux 中如何使用 wheel 组为普通用户授予超级用户访问权限?
-Wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
+wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
注意,应该在 `/etc/sudoers` 文件中激活 `wheel` 组来获得该访问权限。
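+
+下面是一个简单的示意(假设是默认的 RHEL/CentOS 配置),可以用来确认 `wheel` 组对应的行已被启用,即行首没有 `#` 注释符:
+
+```
+# 查看 /etc/sudoers 中 wheel 组的配置是否已取消注释
+# grep '^%wheel' /etc/sudoers
+%wheel  ALL=(ALL)       ALL
+```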
@@ -70,7 +65,7 @@ Wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限
假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `daygeek` 这个用户账号。
-执行下面的命令,添加用户到 wheel 组。
+执行下面的命令,添加用户到 `wheel` 组。
```
# usermod -aG wheel daygeek
@@ -87,10 +82,10 @@ wheel:x:10:daygeek
```
$ tail -5 /var/log/secure
-tail: cannot open _/var/log/secure_ for reading: Permission denied
+tail: cannot open /var/log/secure for reading: Permission denied
```
-当我试图以普通用户身份访问 `/var/log/secure` 文件时出现错误。 我将使用 sudo 访问同一个文件,让我们看看这个魔术。
+当我试图以普通用户身份访问 `/var/log/secure` 文件时出现错误。 我将使用 `sudo` 访问同一个文件,让我们看看这个魔术。
```
$ sudo tail -5 /var/log/secure
@@ -102,9 +97,9 @@ Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=roo
Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
```
-### 方法 2:在 RHEL/CentOS/OEL 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
+#### 方法 2:在 RHEL/CentOS/OEL 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
-我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 wheel 组。
+我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `wheel` 组。
只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
@@ -115,7 +110,7 @@ wheel:x:10:daygeek,user1
在该例中,我将使用 `user1` 这个用户账号。
-我将要通过在系统中重启 `Apache` 服务来检查用户 `user1` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+我将要通过在系统中重启 Apache httpd 服务来检查用户 `user1` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
```
$ sudo systemctl restart httpd
@@ -128,11 +123,11 @@ Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ;
Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
```
-### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
+#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
-sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 wheel 组下的 sudoers 文件中即可。
+sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 `sudoers` 文件中的 `wheel` 组下即可。
-只需通过 visudo 命令将期望的用户追加到 /etc/sudoers 文件中。
+只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
```
# grep -i user2 /etc/sudoers
@@ -141,7 +136,7 @@ user2 ALL=(ALL) ALL
在该例中,我将使用 `user2` 这个用户账号。
-我将要通过在系统中重启 `MariaDB` 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+我将要通过在系统中重启 MariaDB 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
```
$ sudo systemctl restart mariadb
@@ -155,11 +150,11 @@ Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ;
### 在 Debian/Ubuntu 系统中如何配置 sudo 访问权限?
-在基于 Debian 的系统中(如 Debian, Ubuntu 和 LinuxMint),使用下面的三个方法就可以做到。
+在基于 Debian 的系统中(如 Debian、Ubuntu 和 LinuxMint),使用下面的三个方法就可以做到。
-### 方法 1:在 Linux 中如何使用 sudo 或 admin 组为普通用户授予超级用户访问权限?
+#### 方法 1:在 Linux 中如何使用 sudo 或 admin 组为普通用户授予超级用户访问权限?
-sudo 或 admin 是基于 Debian 的系统中的特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
+`sudo` 或 `admin` 是基于 Debian 的系统中的特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
注意,应该在 `/etc/sudoers` 文件中激活 `sudo` 或 `admin` 组来获得该访问权限。
@@ -175,7 +170,7 @@ sudo 或 admin 是基于 Debian 的系统中的特殊组,它提供额外的权
假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `2gadmin` 这个用户账号。
-执行下面的命令,添加用户到 sudo 组。
+执行下面的命令,添加用户到 `sudo` 组。
```
# usermod -aG sudo 2gadmin
@@ -195,7 +190,7 @@ $ less /var/log/auth.log
/var/log/auth.log: Permission denied
```
-当我试图以普通用户身份访问 `/var/log/auth.log` 文件时出现错误。 我将要使用 sudo 访问同一个文件,让我们看看这个魔术。
+当我试图以普通用户身份访问 `/var/log/auth.log` 文件时出现错误。 我将要使用 `sudo` 访问同一个文件,让我们看看这个魔术。
```
$ sudo tail -5 /var/log/auth.log
@@ -209,7 +204,7 @@ Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user r
或者,我们可以通过添加用户到 `admin` 组来执行相同的操作。
-运行下面的命令,添加用户到 admin 组。
+运行下面的命令,添加用户到 `admin` 组。
```
# usermod -aG admin user1
@@ -231,9 +226,9 @@ Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ;
Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
```
-### 方法 2:在 Debian/Ubuntu 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
+#### 方法 2:在 Debian/Ubuntu 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
-我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 sudo 组或 admin 组。
+我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `sudo` 组或 `admin` 组。
只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
@@ -244,7 +239,7 @@ sudo:x:27:2gadmin,user2
在该例中,我将使用 `user2` 这个用户账号。
-我将要通过在系统中重启 `Apache` 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+我将要通过在系统中重启 Apache httpd 服务来检查用户 `user2` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
```
$ sudo systemctl restart apache2
@@ -257,11 +252,11 @@ Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened
Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
```
-### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
+#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
-sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 sudo 或 admin 组下的 sudoers 文件中即可。
+sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 `sudoers` 文件中的 `sudo` 或 `admin` 组下即可。
-只需通过 visudo 命令将期望的用户追加到 /etc/sudoers 文件中。
+只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
```
# grep -i user3 /etc/sudoers
@@ -270,7 +265,7 @@ user3 ALL=(ALL:ALL) ALL
在该例中,我将使用 `user3` 这个用户账号。
-我将要通过在系统中重启 `MariaDB` 服务来检查用户 `user3` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
+我将要通过在系统中重启 MariaDB 服务来检查用户 `user3` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
```
$ sudo systemctl restart mariadb
@@ -285,6 +280,7 @@ Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user r
Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
```
+
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
@@ -292,7 +288,7 @@ via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 4e8f27794c8bf8a0be9162aa1630f14df109a700 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Thu, 18 Apr 2019 22:46:58 +0800
Subject: [PATCH 0040/1154] PUB:20190317 How To Configure sudo Access In
Linux.md
@liujing97 https://linux.cn/article-10746-1.html
---
.../20190317 How To Configure sudo Access In Linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190317 How To Configure sudo Access In Linux.md (99%)
diff --git a/translated/tech/20190317 How To Configure sudo Access In Linux.md b/published/20190317 How To Configure sudo Access In Linux.md
similarity index 99%
rename from translated/tech/20190317 How To Configure sudo Access In Linux.md
rename to published/20190317 How To Configure sudo Access In Linux.md
index b4de951c9b..efbd663b44 100644
--- a/translated/tech/20190317 How To Configure sudo Access In Linux.md
+++ b/published/20190317 How To Configure sudo Access In Linux.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10746-1.html)
[#]: subject: (How To Configure sudo Access In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
From ca5a737d86cb018556e7b2dae6d1316f397e9ae3 Mon Sep 17 00:00:00 2001
From: HALO Feng <289716347@qq.com>
Date: Fri, 19 Apr 2019 00:02:10 +0800
Subject: [PATCH 0041/1154] Update and rename sources/tech/20190328 How to run
PostgreSQL on Kubernetes.md to translated/tech/20190328 How to run PostgreSQL
on Kubernetes.md
---
...328 How to run PostgreSQL on Kubernetes.md | 108 ------------------
...328 How to run PostgreSQL on Kubernetes.md | 102 +++++++++++++++++
2 files changed, 102 insertions(+), 108 deletions(-)
delete mode 100644 sources/tech/20190328 How to run PostgreSQL on Kubernetes.md
create mode 100644 translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
diff --git a/sources/tech/20190328 How to run PostgreSQL on Kubernetes.md b/sources/tech/20190328 How to run PostgreSQL on Kubernetes.md
deleted file mode 100644
index 2d259db48d..0000000000
--- a/sources/tech/20190328 How to run PostgreSQL on Kubernetes.md
+++ /dev/null
@@ -1,108 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (arrowfeng)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to run PostgreSQL on Kubernetes)
-[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
-[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
-
-How to run PostgreSQL on Kubernetes
-======
-Create uniformly managed, cloud-native production deployments with the
-flexibility to deploy a personalized database-as-a-service.
-![cubes coming together to create a larger cube][1]
-
-By running a [PostgreSQL][2] database on [Kubernetes][3], you can create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service tailored to your specific needs.
-
-Using an Operator allows you to provide additional context to Kubernetes to [manage a stateful application][4]. An Operator is also helpful when using an open source database like PostgreSQL to help with actions including provisioning, scaling, high availability, and user management.
-
-Let's explore how to get PostgreSQL up and running on Kubernetes.
-
-### Set up the PostgreSQL operator
-
-The first step to using PostgreSQL with Kubernetes is installing an Operator. You can get up and running with the open source [Crunchy PostgreSQL Operator][5] on any Kubernetes-based environment with the help of Crunchy's [quickstart script][6] for Linux.
-
-The quickstart script has a few prerequisites:
-
- * The [Wget][7] utility installed
- * [kubectl][8] installed
- * A [StorageClass][9] defined on your Kubernetes cluster
- * Access to a Kubernetes user account with cluster-admin privileges. This is required to install the Operator [RBAC][10] rules
- * A [namespace][11] to hold the PostgreSQL Operator
-
-
-
-Executing the script will give you a default PostgreSQL Operator deployment that assumes [dynamic storage][12] and a StorageClass named **standard**. User-provided values are allowed by the script to override these defaults.
-
-You can download the quickstart script and set it to be executable with the following commands:
-
-```
-wget
-chmod +x ./quickstart.sh
-```
-
-Then you can execute the quickstart script:
-
-```
-./examples/quickstart.sh
-```
-
-After the script prompts you for some basic information about your Kubernetes cluster, it performs the following operations:
-
- * Downloads the Operator configuration files
- * Sets the **$HOME/.pgouser** file to default settings
- * Deploys the Operator as a Kubernetes [Deployment][13]
- * Sets your **.bashrc** to include the Operator environmental variables
- * Sets your **$HOME/.bash_completion** file to be the **pgo bash_completion** file
-
-
-
-During the quickstart's execution, you'll be prompted to set up the RBAC rules for your Kubernetes cluster. In a separate terminal, execute the command the quickstart command tells you to use.
-
-Once the script completes, you'll get information on setting up a port forward to the PostgreSQL Operator pod. In a separate terminal, execute the port forward; this will allow you to begin executing commands to the PostgreSQL Operator! Try creating a cluster by entering:
-
-```
-pgo create cluster mynewcluster
-```
-
-You can test that your cluster is up and running with by entering:
-
-```
-pgo test mynewcluster
-```
-
-You can now manage your PostgreSQL databases in your Kubernetes environment! You can find a full reference to commands, including those for scaling, high availability, backups, and more, in the [documentation][14].
-
-* * *
-
-_Parts of this article are based on[Get Started Running PostgreSQL on Kubernetes][15] that the author wrote for the Crunchy blog._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes
-
-作者:[Jonathan S. Katz][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jkatz05
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
-[2]: https://www.postgresql.org/
-[3]: https://kubernetes.io/
-[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
-[5]: https://github.com/CrunchyData/postgres-operator
-[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
-[7]: https://www.gnu.org/software/wget/
-[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
-[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
-[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
-[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
-[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
-[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
-[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes
diff --git a/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md b/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
new file mode 100644
index 0000000000..f29110281e
--- /dev/null
+++ b/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
@@ -0,0 +1,102 @@
+[#]: collector: (lujun9972)
+[#]: translator: (arrowfeng)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to run PostgreSQL on Kubernetes)
+[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
+[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
+
+怎样在 Kubernetes 上运行 PostgreSQL
+======
+创建统一管理的、具备灵活性的云原生生产部署,来部署一个个性化的数据库即服务。
+![cubes coming together to create a larger cube][1]
+
+通过在 [Kubernetes][3] 上运行 [PostgreSQL][2] 数据库,你能创建统一管理的、具备灵活性的云原生生产部署,并部署一个为你的特定需求量身定制的个性化数据库即服务。
+
+使用 Operator 允许你为 Kubernetes 提供额外的上下文去[管理有状态应用][4]。在使用像 PostgreSQL 这样的开源数据库进行供给、扩容、高可用和用户管理等操作时,Operator 也很有帮助。
+
+让我们来探索如何在 Kubernetes 上启动并运行 PostgreSQL。
+
+### 安装 PostgreSQL Operator
+
+将 PostgreSQL 和 Kubernetes 结合使用的第一步是安装一个 Operator。在 Crunchy 针对 Linux 系统的[快速启动脚本][6]的帮助下,你可以在任意基于 Kubernetes 的环境中启动和运行开源的 [Crunchy PostgreSQL Operator][5]。
+
+快速启动脚本有一些必要前提:
+
+ * [Wget][7]工具已安装。
+ * [kubectl][8]工具已安装。
+ * 一个[StorageClass][9]已经定义在你的Kubernetes中。
+ * 拥有集群权限的可访问Kubernetes的用户账号。安装Operator的[RBAC][10]规则是必要的。
+ * 拥有一个[namespace][11]去管理PostgreSQL Operator。
+
+
+
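+在运行脚本之前,可以先做一个快速自检(补充示例,假设 kubectl 已配置为指向目标集群),确认上述前提条件中的工具和 StorageClass 已经就绪:
+
+```
+# 确认 wget 和 kubectl 已安装
+$ which wget kubectl
+
+# 列出集群中已定义的 StorageClass
+$ kubectl get storageclass
+```
+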
+执行这个脚本将为你提供一个默认的 PostgreSQL Operator 部署,它假设你使用[动态存储][12],并且有一个名为 **standard** 的存储类。该脚本允许用户提供自定义的值来覆盖这些默认值。
+
+通过下列命令,你能下载这个快速启动脚本并把它的权限设置为可执行:
+
+```
+wget
+chmod +x ./quickstart.sh
+```
+
+然后你运行快速启动脚本:
+
+```
+./examples/quickstart.sh
+```
+
+在脚本提示你相关的Kubernetes集群基本信息后,它将执行下列操作:
+ * 下载Operator配置文件
+ * 将 **$HOME/.pgouser** 这个文件设置为默认设置
+ * 以Kubernetes [Deployment][13]部署Operator
+ * 设置你的 **.bashrc** 文件包含Operator环境变量
+ * 设置你的 **$HOME/.bash_completion** 文件为 **pgo bash_completion**文件
+
+在快速启动脚本的执行期间,你将会被提示为你的 Kubernetes 集群设置 RBAC 规则。请在另一个终端中,执行快速启动脚本提示你使用的命令。
+
+一旦这个脚本执行完成,你将会得到如何设置一个到 PostgreSQL Operator pod 的端口转发的信息。在另一个终端中执行该端口转发;这将允许你开始对 PostgreSQL Operator 执行命令!尝试输入下列命令创建集群:
+
+```
+pgo create cluster mynewcluster
+```
+
+你能输入下列命令测试你的集群运行状况:
+
+```
+pgo test mynewcluster
+```
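+
+除了 `pgo test`,你也可以直接用 kubectl 查看相关的 Pod 是否在正常运行(补充示例;Pod 所在的命名空间取决于安装 Operator 时的配置,这里仅用 grep 粗略过滤):
+
+```
+# 在所有命名空间中查找与该集群相关的 Pod
+$ kubectl get pods --all-namespaces | grep mynewcluster
+```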
+
+现在,你能在 Kubernetes 环境中管理你的 PostgreSQL 数据库了!你可以在[官方文档][14]中找到非常全面的命令参考,包括扩容、高可用、备份等等。
+
+* * *
+
+这篇文章部分基于该作者为 Crunchy 博客撰写的《[在 Kubernetes 上开始运行 PostgreSQL][15]》一文。
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes
+
+作者:[Jonathan S. Katz][a]
+选题:[lujun9972][b]
+译者:[arrowfeng](https://github.com/arrowfeng)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jkatz05
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
+[2]: https://www.postgresql.org/
+[3]: https://kubernetes.io/
+[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
+[5]: https://github.com/CrunchyData/postgres-operator
+[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
+[7]: https://www.gnu.org/software/wget/
+[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
+[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
+[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
+[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
+[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes
From d7bad386f8885898634c5b11ae360f261526385b Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 19 Apr 2019 00:10:26 +0800
Subject: [PATCH 0042/1154] PRF:20190409 Cisco, Google reenergize
multicloud-hybrid cloud joint development.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@tomjlw 下回建议选择更熟悉的内容来翻译。
---
...lticloud-hybrid cloud joint development.md | 42 +++++++++----------
1 file changed, 19 insertions(+), 23 deletions(-)
diff --git a/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md b/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
index 1fc3d0c230..51451afcee 100644
--- a/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
+++ b/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
@@ -9,51 +9,47 @@
思科、谷歌重新赋能多/混合云共同开发
======
-思科,VMware,HPE 与其它公司踏入新的 Google Cloud Athos 云技术
+> 思科、VMware、HPE 等公司开始采用了新的 Google Cloud Anthos 云技术。
+
![Thinkstock][1]
-思科与谷歌已扩展它们的混合云开发活动,以帮助其客户在从预置的数据中心到公共云上的任何地方更简便地搭建安全的多云以及混合云应用。
+思科与谷歌已扩展它们的混合云开发活动,以帮助其客户可以在从本地数据中心到公共云上的任何地方更轻松地搭建安全的多云以及混合云应用。
-**[查看[什么是混合云计算][2]并了解[关于多云你所需要知道的][3]。[注册 Network World 速报][4]]以定时接受资讯**
+这次扩张围绕着谷歌被称作 Anthos 的新的开源混合云包展开,它是在这周的 Google Next 活动上推出的。Anthos 基于并取代了谷歌现有的谷歌云服务测试版。Anthos 将让客户们无须修改应用就可以在现有的本地硬件或公共云上运行应用。据谷歌说,它可以在[谷歌云平台][5] (GCP) 与 [谷歌 Kubernetes 引擎][6] (GKE) 或者在数据中心中与 [GKE On-Prem][7] 一同使用。谷歌说,Anthos 首次让客户们可以无需管理员和开发者了解不同的环境和 API 就能从谷歌平台上管理在第三方云上(如 AWS 和 Azure)的工作负荷。
-这次扩张围绕着谷歌在这周的 Google Next 活动上介绍的新的称作 Anthos 的开源混合云包展开。Anthos 基于并取代了谷歌现有的谷歌云服务测试版。Anthos 将让客户们无须修改就可以在现有的预置硬件或公共云上运行应用。据谷歌说,它可以在[谷歌云平台][5] (GCP) 与 [谷歌 Kubernetes 引擎][6] (GKE) 或者在数据中心中与 [GKE On-Prem][7] 一同被获取。谷歌说,Anthos 首次让客户们可以无需管理员和开发者了解不同的坏境和 API 就能从谷歌平台上管理在第三方云上的工作负荷如 AWS 和 Azure。
+关键在于,Anthos 提供了一个单一的托管服务,它使得客户们无须担心不同的环境或 API 就能跨云管理、部署工作负荷。
-关键在于,Athos 提供一个单独的受管理的服务,它使得客户们无须忧虑不相似的环境或 API 就能跨云管理、部署工作负荷。
-
-作为首秀的一部分,谷歌也宣布一个能够从预置环境或者其它云自动移植虚拟机到 GKE 上的容器的叫做 [Anthos Migrate[8] 的测试项目。谷歌说,“这种独特的移植技术使你无须修改原来的虚拟机或者应用就能以一种行云流水般的方式迁移、更新你的基础架构”。谷歌称它给予了公司按客户节奏转移预置应用到云环境的灵活性。
+作为首秀的一部分,谷歌也宣布一个叫做 [Anthos Migrate][8] 的测试计划,它能够从本地环境或者其它云自动迁移虚拟机到 GKE 上的容器中。谷歌说,“这种独特的迁移技术使你无须修改原来的虚拟机或者应用就能以一种行云流水般的方式迁移、更新你的基础设施”。谷歌称它给予了公司按客户节奏转移本地应用到云环境的灵活性。
### 思科和谷歌
-就思科来说,它宣布对 Anthos 的支持并承诺将它紧密集成进思科的数据中心科技中,例如 HyperFlex 超融合包,应用中心基础架构(思科的旗舰 SDN 方案), SD-WAN 和 StealthWatch 云。思科说,无论是预置的还是在云端的,这次集成将通过自动更新到最新版本和安全补丁,给予一种一致的、云般的感觉。
+就思科来说,它宣布对 Anthos 的支持并承诺将它紧密集成进思科的数据中心技术中,例如 HyperFlex 超融合包、应用中心基础设施(思科的旗舰 SDN 方案)、SD-WAN 和 StealthWatch 云。思科说,无论是本地的还是在云端的,这次集成将通过自动更新到最新版本和安全补丁,给予一种一致的、云般的感觉。
-“谷歌云在容器和服务网——分别在 Kubernetes 和 Istio——上的专业与它们在开发者社区的领导力,混合思科的企业级网络,计算,存储和安全产品及服务,将为我们的顾客促成一次强强联合。”思科的资深副总裁,云平台和解决方案小组的 Kip Compton 这样[写道][9],“思科对于 Anthos 的集成将会帮助顾客跨预置数据中心和公共云搭建、管理多云/混合云应用,并使得他们无须在安全上让步或者增加复杂性就能集中精力在创新与敏捷上。”
+“谷歌云在容器(Kubernetes)和服务网格service mesh(Istio)上的专业与它们在开发者社区的领导力,再加上思科的企业级网络、计算、存储和安全产品及服务,将为我们的顾客促成一次强强联合。”思科的云平台和解决方案集团资深副总裁 Kip Compton 这样[写道][9],“思科对于 Anthos 的集成将会帮助顾客跨本地数据中心和公共云搭建、管理多云/混合云应用,让他们专注于创新和灵活性,同时不会影响安全性或增加复杂性。”
### 谷歌云和思科
-Eyal Manor,在谷歌云工作的副总裁,[写道][10] 通过思科对 Anthos 的支持,客户将能够:
-* 从完全受管理的服务例如 GKE 以及思科的超融合基础架构,网络和安全科技中收益。
-* 具有一致性地跨企业数据中心和云操作
-* 从一个企业数据中心消耗云服务
-* 用最新的云技术更新预置架构
+谷歌云工程副总裁 Eyal Manor [写道][10] 通过思科对 Anthos 的支持,客户将能够:
+* 受益于例如 GKE 这样的全托管服务,以及思科的超融合基础设施、网络和安全技术;
+* 在企业数据中心和云中获得一致的运行体验;
+* 在企业数据中心使用云服务;
+* 用最新的云技术更新本地基础设施。
+思科和谷歌从 2017 年 10 月就在紧密合作,当时他们表示正在开发一个能够连接本地基础设施和云环境的开放混合云平台。该套件,即[思科为谷歌云打造的混合云平台][11],大致在 2018 年 9 月上市。它使得客户们能通过谷歌云托管 Kubernetes 容器开发企业级功能,包含思科网络和安全技术以及来自 Istio 的服务网格监控。
-思科和谷歌从2017年10月两公司说他们将从事能够连接预置架构和云环境的开放混合云平台就开始紧密合作。那个包,[为谷歌云打造的思科混合云平台],大致在2018年9月可以获取。它使得客户们能通过谷歌云管理的包含思科网络和安全以及来自 Istio 的服务网络监听技术的 Kubernetes 容器拥有企业级开发的能力。
+谷歌说,开源的 Istio 的容器和微服务优化技术给开发者提供了一种一致的方式,通过服务级的 mTLS(双向传输层安全)身份验证访问控制来跨云连接、保护、管理和监控微服务。因此,客户能够轻松实施新的可移植的服务,并集中配置和管理这些服务。
-谷歌说 Istio 的开源容器和微服务优化科技给开发者提供了通过服务到服务级的 mTLS [双向传输层安全]访问控制认证进行跨云连接,保护,管理和监听微服务的同一方式。其结果就是,客户能够轻松落实新的便携的服务同时也能够中心化地配置管理那些服务。
-
-思科不是唯一宣布对 Anthos 的支持的供应商。谷歌宣称至少30家大的谷歌合作商包括 [VMware][12],[Dell EMC][13], [HPE][14], Intel 和 Lenovo 致力于为他们的客户在它们自己的超融合基础架构上分发 Anthos 服务。
-
-在 [Facebook][15] 和 [LinkedIn][16] 上加入 Network World 社区以对高端话题评论。
+思科不是唯一宣布对 Anthos 支持的供应商。谷歌表示,至少 30 家大型合作商包括 [VMware][12]、[Dell EMC][13]、[HPE][14]、Intel 和联想致力于为他们的客户在它们自己的超融合基础设施上提供 Anthos 服务。
--------------------------------------------------------------------------------
-via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all
+via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 1930b5dda7a88511717a188ffbd50e74c5c9f1d9 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Fri, 19 Apr 2019 00:11:33 +0800
Subject: [PATCH 0043/1154] PUB:20190409 Cisco, Google reenergize
multicloud-hybrid cloud joint development.md
@tomjlw https://linux.cn/article-10747-1.html
---
...le reenergize multicloud-hybrid cloud joint development.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md (98%)
diff --git a/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md b/published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
similarity index 98%
rename from translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
rename to published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
index 51451afcee..2e63380c76 100644
--- a/translated/tech/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
+++ b/published/20190409 Cisco, Google reenergize multicloud-hybrid cloud joint development.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10747-1.html)
[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
From 6edc7a63bd012a84ba22fa678f3bc2647a8e943c Mon Sep 17 00:00:00 2001
From: lctt-bot
Date: Thu, 18 Apr 2019 17:00:36 +0000
Subject: [PATCH 0044/1154] Revert "Translating by ustblixin"
This reverts commit 95894b45db0ce03e5579e7762f2e5af0d59dc571.
---
...stall Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
index 13b441f85d..7ce1201c4f 100644
--- a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
+++ b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (ustblixin)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From ac9a01de0c0baa08708d64a9d36fc3cd1c46e928 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 19 Apr 2019 08:41:00 +0800
Subject: [PATCH 0045/1154] translated
---
...0190410 Managing Partitions with sgdisk.md | 94 -------------------
...0190410 Managing Partitions with sgdisk.md | 94 +++++++++++++++++++
2 files changed, 94 insertions(+), 94 deletions(-)
delete mode 100644 sources/tech/20190410 Managing Partitions with sgdisk.md
create mode 100644 translated/tech/20190410 Managing Partitions with sgdisk.md
diff --git a/sources/tech/20190410 Managing Partitions with sgdisk.md b/sources/tech/20190410 Managing Partitions with sgdisk.md
deleted file mode 100644
index b42fef82af..0000000000
--- a/sources/tech/20190410 Managing Partitions with sgdisk.md
+++ /dev/null
@@ -1,94 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Managing Partitions with sgdisk)
-[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
-[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
-
-Managing Partitions with sgdisk
-======
-
-![][1]
-
-[Roderick W. Smith][2]‘s _sgdisk_ command can be used to manage the partitioning of your hard disk drive from the command line. The basics that you need to get started with it are demonstrated below.
-
-The following six parameters are all that you need to know to make use of sgdisk’s most basic features:
-
- 1. **-p**
-_Print_ the partition table:
-### sgdisk -p /dev/sda
- 2. **-d x**
-_Delete_ partition x:
-### sgdisk -d 1 /dev/sda
- 3. **-n x:y:z**
-Create a _new_ partition numbered x, starting at y and ending at z:
-### sgdisk -n 1:1MiB:2MiB /dev/sda
- 4. **-c x:y**
-_Change_ the name of partition x to y:
-### sgdisk -c 1:grub /dev/sda
- 5. **-t x:y**
-Change the _type_ of partition x to y:
-### sgdisk -t 1:ef02 /dev/sda
- 6. **–list-types**
-List the partition type codes:
-### sgdisk --list-types
-
-
-
-![The SGDisk Command][3]
-
-As you can see in the above examples, most of the commands require that the [device file name][4] of the hard disk drive to operate on be specified as the last parameter.
-
-The parameters shown above can be combined so that you can completely define a partition with a single run of the sgdisk command:
-
-### sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda
-
-Relative values can be specified for some fields by prefixing the value with a **+** or **–** symbol. If you use a relative value, sgdisk will do the math for you. For example, the above example could be written as:
-
-### sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda
-
-The value **0** has a special-case meaning for several of the fields:
-
- * In the _partition number_ field, 0 indicates that the next available number should be used (numbering starts at 1).
- * In the _starting address_ field, 0 indicates that the start of the largest available block of free space should be used. Some space at the start of the hard drive is always reserved for the partition table itself.
- * In the _ending address_ field, 0 indicates that the end of the largest available block of free space should be used.
-
-
-
-By using **0** and relative values in the appropriate fields, you can create a series of partitions without having to pre-calculate any absolute values. For example, the following sequence of sgdisk commands would create all the basic partitions that are needed for a typical Linux installation if in run sequence against a blank hard drive:
-
-### sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
-### sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
-### sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
-### sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
-
-The above example shows how to partition a hard disk for a BIOS-based computer. The [grub partition][5] is not needed on a UEFI-based computer. Because sgdisk is calculating all the absolute values for you in the above example, you can just skip running the first command on a UEFI-based computer and the remaining commands can be run without modification. Likewise, you could skip creating the swap partition and the remaining commands would not need to be modified.
-
-There is also a short-cut for deleting all the partitions from a hard disk with a single command:
-
-### sgdisk --zap-all /dev/sda
-
-For the most up-to-date and detailed information, check the man page:
-
-$ man sgdisk
-
---------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/managing-partitions-with-sgdisk/
-
-作者:[Gregory Bartholomew][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/glb/
-[b]: https://github.com/lujun9972
-[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
-[2]: https://www.rodsbooks.com/
-[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
-[4]: https://en.wikipedia.org/wiki/Device_file
-[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition
diff --git a/translated/tech/20190410 Managing Partitions with sgdisk.md b/translated/tech/20190410 Managing Partitions with sgdisk.md
new file mode 100644
index 0000000000..19f2752245
--- /dev/null
+++ b/translated/tech/20190410 Managing Partitions with sgdisk.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Managing Partitions with sgdisk)
+[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+使用 sgdisk 管理分区
+======
+
+![][1]
+
+[Roderick W. Smith][2] 的 _sgdisk_ 命令可在命令行中管理硬盘的分区。下面将介绍使用它所需的基础知识。
+
+要使用 sgdisk 的大多数基本功能,你只需要了解以下六个参数:
+
+ 1. **-p**
+_打印_ 分区表:
+### sgdisk -p /dev/sda
+ 2. **-d x**
+_删除_分区 x:
+### sgdisk -d 1 /dev/sda
+ 3. **-n x:y:z**
+创建一个编号 x 的_新_分区,从 y 开始,到 z 结束:
+### sgdisk -n 1:1MiB:2MiB /dev/sda
+ 4. **-c x:y**
+_更改_分区 x 的名称为 y:
+### sgdisk -c 1:grub /dev/sda
+ 5. **-t x:y**
+将分区 x 的_类型_更改为 y:
+### sgdisk -t 1:ef02 /dev/sda
+ 6. **–list-types**
+列出分区类型代码:
+### sgdisk --list-types
+
+
+
+![The SGDisk Command][3]
+
+如你在上面的例子中所见,大多数命令都要求将要操作的硬盘的[设备文件名][4]指定为最后一个参数。
+
+上面的参数可以组合使用,这样你运行一次 sgdisk 命令就能完整地定义一个分区:
+
+### sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda
+
+在值的前面加上 **+** 或 **–** 符号,可以为某些字段指定相对值。如果你使用相对值,sgdisk 会为你做数学运算。例如,上面的例子可以写成:
+
+### sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda
+
+**0** 值对于以下几个字段是特殊情况:
+
+ * 对于_分区号_字段,0 表示应使用下一个可用编号(编号从 1 开始)。
+ * 对于_起始地址_字段,0 表示使用最大可用空闲块的起始位置。硬盘开头的一些空间始终保留给分区表本身。
+ * 对于_结束地址_字段,0 表示使用最大可用空闲块的末尾。
+
+
+
+通过在适当的字段中使用 **0** 和相对值,你可以创建一系列分区,而无需预先计算任何绝对值。例如,在一块空白硬盘上按顺序运行以下 sgdisk 命令,将创建典型 Linux 安装所需的所有基本分区:
+
+### sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
+### sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
+### sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
+### sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
+
+上面的例子展示了如何为基于 BIOS 的计算机分区硬盘。基于 UEFI 的计算机上不需要 [grub 分区][5]。由于 sgdisk 在上面的示例中为你计算了所有绝对值,因此在基于 UEFI 的计算机上你可以直接跳过第一个命令,其余命令无需修改即可运行。同样,你也可以跳过创建交换分区,其余命令同样不需要修改。
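+
+在 UEFI 的情形下,通常需要的是一个 EFI 系统分区(ESP)。作为一个补充示意(分区大小与名称仅为假设),可以用 sgdisk 的类型代码 ef00 来创建它:
+
+### sgdisk -n 0:0:+512MiB -t 0:ef00 -c 0:efi /dev/sda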
+
+还有使用一个命令删除硬盘上所有分区的快捷方式:
+
+### sgdisk --zap-all /dev/sda
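+
+由于 `--zap-all` 会删除硬盘上的所有分区,在执行之前先备份分区表是个稳妥的做法(补充示例,备份文件的路径仅为假设):
+
+### sgdisk --backup=/root/sda-table.bak /dev/sda
+
+如有需要,之后可以从备份恢复:
+
+### sgdisk --load-backup=/root/sda-table.bak /dev/sda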
+
+要获取最新的详细信息,请查看手册页:
+
+$ man sgdisk
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/managing-partitions-with-sgdisk/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
+[2]: https://www.rodsbooks.com/
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
+[4]: https://en.wikipedia.org/wiki/Device_file
+[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition
From 1d973dab6e130f862fb76069ee08d4a6de008236 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Fri, 19 Apr 2019 08:44:27 +0800
Subject: [PATCH 0046/1154] translating
---
...190415 Getting started with Mercurial for version control.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190415 Getting started with Mercurial for version control.md b/sources/tech/20190415 Getting started with Mercurial for version control.md
index c2e451fd06..10812affed 100644
--- a/sources/tech/20190415 Getting started with Mercurial for version control.md
+++ b/sources/tech/20190415 Getting started with Mercurial for version control.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From b8ceec274aa74cfebbbce2af6dbca9e320b87df5 Mon Sep 17 00:00:00 2001
From: HALO Feng <289716347@qq.com>
Date: Fri, 19 Apr 2019 09:21:58 +0800
Subject: [PATCH 0047/1154] Update 20190416 How to Install MySQL in Ubuntu
Linux.md
---
sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
index ee3a82ca03..87d7c98172 100644
--- a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
+++ b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From d1fd268fb4c2e6d4b3df2cb7dd977064c075b75a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E9=83=91?=
Date: Fri, 19 Apr 2019 09:32:40 +0800
Subject: [PATCH 0048/1154] Translated
---
...ork with Python using the module Pygame.md | 283 ------------------
...ork with Python using the module Pygame.md | 279 +++++++++++++++++
2 files changed, 279 insertions(+), 283 deletions(-)
delete mode 100644 sources/tech/20171214 Build a game framework with Python using the module Pygame.md
create mode 100644 translated/tech/20171214 Build a game framework with Python using the module Pygame.md
diff --git a/sources/tech/20171214 Build a game framework with Python using the module Pygame.md b/sources/tech/20171214 Build a game framework with Python using the module Pygame.md
deleted file mode 100644
index 1acdd12a7c..0000000000
--- a/sources/tech/20171214 Build a game framework with Python using the module Pygame.md
+++ /dev/null
@@ -1,283 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (robsean)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Build a game framework with Python using the module Pygame)
-[#]: via: (https://opensource.com/article/17/12/game-framework-python)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-Build a game framework with Python using the module Pygame
-======
-The first part of this series explored Python by creating a simple dice game. Now it's time to make your own game from scratch.
-
-
-In my [first article in this series][1], I explained how to use Python to create a simple, text-based dice game. This time, I'll demonstrate how to use the Python module Pygame to create a graphical game. It will take several articles to get a game that actually does anything, but by the end of the series, you will have a better understanding of how to find and learn new Python modules and how to build an application from the ground up.
-
-Before you start, you must install [Pygame][2].
-
-### Installing new Python modules
-
-There are several ways to install Python modules, but the two most common are:
-
- * From your distribution's software repository
- * Using the Python package manager, pip
-
-
-
-Both methods work well, and each has its own set of advantages. If you're developing on Linux or BSD, leveraging your distribution's software repository ensures automated and timely updates.
-
-However, using Python's built-in package manager gives you control over when modules are updated. Also, it is not OS-specific, meaning you can use it even when you're not on your usual development machine. Another advantage of pip is that it allows local installs of modules, which is helpful if you don't have administrative rights to a computer you're using.
-
-### Using pip
-
-If you have both Python and Python3 installed on your system, the command you want to use is probably `pip3`, which differentiates it from Python 2.x's `pip` command. If you're unsure, try `pip3` first.
-
-The `pip` command works a lot like most Linux package managers. You can search for Python modules with `search`, then install them with `install`. If you don't have permission to install software on the computer you're using, you can use the `--user` option to just install the module into your home directory.
-
-```
-$ pip3 search pygame
-[...]
-Pygame (1.9.3) - Python Game Development
-sge-pygame (1.5) - A 2-D game engine for Python
-pygame_camera (0.1.1) - A Camera lib for PyGame
-pygame_cffi (0.2.1) - A cffi-based SDL wrapper that copies the pygame API.
-[...]
-$ pip3 install Pygame --user
-```
-
-Pygame is a Python module, which means that it's just a set of libraries that can be used in your Python programs. In other words, it's not a program that you launch, like [IDLE][3] or [Ninja-IDE][4] are.
-
-### Getting started with Pygame
-
-A video game needs a setting; a world in which it takes place. In Python, there are two different ways to create your setting:
-
- * Set a background color
- * Set a background image
-
-
-
-Your background is only an image or a color. Your video game characters can't interact with things in the background, so don't put anything too important back there. It's just set dressing.
-
-### Setting up your Pygame script
-
-To start a new Pygame project, create a folder on your computer. All your game files go into this directory. It's vitally important that you keep all the files needed to run your game inside of your project folder.
-
-
-
-A Python script starts with the file type, your name, and the license you want to use. Use an open source license so your friends can improve your game and share their changes with you:
-
-```
-#!/usr/bin/env python3
-# by Seth Kenlon
-
-## GPLv3
-# This program is free software: you can redistribute it and/or
-# modify it under the terms of the GNU General Public License as
-# published by the Free Software Foundation, either version 3 of the
-# License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-```
-
-Then you tell Python what modules you want to use. Some of the modules are common Python libraries, and of course, you want to include the one you just installed, Pygame.
-
-```
-import pygame # load pygame keywords
-import sys # let python use your file system
-import os # help python identify your OS
-```
-
-Since you'll be working a lot with this script file, it helps to make sections within the file so you know where to put stuff. You do this with block comments, which are comments that are visible only when looking at your source code. Create three blocks in your code.
-
-```
-'''
-Objects
-'''
-
-# put Python classes and functions here
-
-'''
-Setup
-'''
-
-# put run-once code here
-
-'''
-Main Loop
-'''
-
-# put game loop here
-```
-
-Next, set the window size for your game. Keep in mind that not everyone has a big computer screen, so it's best to use a screen size that fits on most people's computers.
-
-There is a way to toggle full-screen mode, the way many modern video games do, but since you're just starting out, keep it simple and just set one size.
-
-```
-'''
-Setup
-'''
-worldx = 960
-worldy = 720
-```
-
-The Pygame engine requires some basic setup before you can use it in a script. You must set the frame rate, start its internal clock, and start (`init`) Pygame.
-
-```
-fps = 40 # frame rate
-ani = 4 # animation cycles
-clock = pygame.time.Clock()
-pygame.init()
-```
-
-Now you can set your background.
-
-### Setting the background
-
-Before you continue, open a graphics application and create a background for your game world. Save it as `stage.png` inside a folder called `images` in your project directory.
-
-There are several free graphics applications you can use.
-
- * [Krita][5] is a professional-level paint materials emulator that can be used to create beautiful images. If you're very interested in creating art for video games, you can even purchase a series of online [game art tutorials][6].
- * [Pinta][7] is a basic, easy to learn paint application.
- * [Inkscape][8] is a vector graphics application. Use it to draw with shapes, lines, splines, and Bézier curves.
-
-
-
-Your graphic doesn't have to be complex, and you can always go back and change it later. Once you have it, add this code in the setup section of your file:
-
-```
-world = pygame.display.set_mode([worldx,worldy])
-backdrop = pygame.image.load(os.path.join('images','stage.png').convert())
-backdropbox = world.get_rect()
-```
-
-If you're just going to fill the background of your game world with a color, all you need is:
-
-```
-world = pygame.display.set_mode([worldx,worldy])
-```
-
-You also must define a color to use. In your setup section, create some color definitions using values for red, green, and blue (RGB).
-
-```
-'''
-Setup
-'''
-
-BLUE = (25,25,200)
-BLACK = (23,23,23 )
-WHITE = (254,254,254)
-```
-
-At this point, you could theoretically start your game. The problem is, it would only last for a millisecond.
-
-To prove this, save your file as `your-name_game.py` (replace `your-name` with your actual name). Then launch your game.
-
-If you are using IDLE, run your game by selecting `Run Module` from the Run menu.
-
-If you are using Ninja, click the `Run file` button in the left button bar.
-
-
-
-You can also run a Python script straight from a Unix terminal or a Windows command prompt.
-
-```
-$ python3 ./your-name_game.py
-```
-
-If you're using Windows, use this command:
-
-```
-py.exe your-name_game.py
-```
-
-However you launch it, don't expect much, because your game only lasts a few milliseconds right now. You can fix that in the next section.
-
-### Looping
-
-Unless told otherwise, a Python script runs once and only once. Computers are very fast these days, so your Python script runs in less than a second.
-
-To force your game to stay open and active long enough for someone to see it (let alone play it), use a `while` loop. To make your game remain open, you can set a variable to some value, then tell a `while` loop to keep looping for as long as the variable remains unchanged.
-
-This is often called a "main loop," and you can use the term `main` as your variable. Add this anywhere in your setup section:
-
-```
-main = True
-```
-
-During the main loop, use Pygame keywords to detect if keys on the keyboard have been pressed or released. Add this to your main loop section:
-
-```
-'''
-Main loop
-'''
-while main == True:
-    for event in pygame.event.get():
-        if event.type == pygame.QUIT:
-            main = False    # set before exiting, or this line is never reached
-            pygame.quit()
-            sys.exit()
-
-        if event.type == pygame.KEYDOWN:
-            if event.key == ord('q'):
-                main = False
-                pygame.quit()
-                sys.exit()
-```
-
-Also in your main loop, refresh your world's background.
-
-If you are using an image for the background:
-
-```
-world.blit(backdrop, backdropbox)
-```
-
-If you are using a color for the background:
-
-```
-world.fill(BLUE)
-```
-
-Finally, tell Pygame to refresh everything on the screen and advance the game's internal clock.
-
-```
- pygame.display.flip()
- clock.tick(fps)
-```
-
-Save your file, and run it again to see the most boring game ever created.
-
-To quit the game, press `q` on your keyboard.
-
-In the [next article][9] of this series, I'll show you how to add to your currently empty game world, so go ahead and start creating some graphics to use!
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/17/12/game-framework-python
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/article/17/10/python-101
-[2]: http://www.pygame.org/wiki/about
-[3]: https://en.wikipedia.org/wiki/IDLE
-[4]: http://ninja-ide.org/
-[5]: http://krita.org
-[6]: https://gumroad.com/l/krita-game-art-tutorial-1
-[7]: https://pinta-project.com/pintaproject/pinta/releases
-[8]: http://inkscape.org
-[9]: https://opensource.com/article/17/12/program-game-python-part-3-spawning-player
diff --git a/translated/tech/20171214 Build a game framework with Python using the module Pygame.md b/translated/tech/20171214 Build a game framework with Python using the module Pygame.md
new file mode 100644
index 0000000000..7ce7402959
--- /dev/null
+++ b/translated/tech/20171214 Build a game framework with Python using the module Pygame.md
@@ -0,0 +1,279 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Build a game framework with Python using the module Pygame)
+[#]: via: (https://opensource.com/article/17/12/game-framework-python)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+使用 Python 和 Pygame 模块构建一个游戏框架
+======
+这个系列的第一篇文章通过创建一个简单的骰子游戏来探究 Python。现在,是时候从零开始制作你自己的游戏了。
+
+
+在我的 [这系列的第一篇文章][1] 中,我讲解了如何使用 Python 创建一个简单的、基于文本的骰子游戏。这次,我将展示如何使用 Python 和 Pygame 模块来创建一个图形化游戏。要做出一个真正有点内容的游戏,还需要好几篇文章,但是到这个系列结束时,你会更好地理解如何查找和学习新的 Python 模块,以及如何在其基础上构建应用程序。
+
+在开始前,你必须安装 [Pygame][2]。
+
+### 安装新的 Python 模块
+
+安装 Python 模块有很多方法,但最常用的两种是:
+
+ * 从你的发行版的软件存储库
+ * 使用 Python 的软件包管理器 pip
+
+两种方法都很好用,并且各有优势。如果你是在 Linux 或 BSD 上开发,依靠你的发行版的软件存储库可以确保自动而及时的更新。
+
+然而,使用 Python 自带的软件包管理器可以让你自己掌控模块的更新时机。而且,它不限定于特定操作系统,这意味着即使不在你常用的开发机器上,你也可以使用它。pip 的另一个优势是支持把模块安装到用户目录,当你没有所用计算机的管理员权限时,这会很有用。
+
+### 使用 pip
+
+如果你的系统上同时安装了 Python 和 Python3,你要用的命令很可能是 `pip3`,以便与 Python 2.x 的 `pip` 命令区分开。如果不确定,先试试 `pip3`。
+
+`pip` 命令的用法和大多数 Linux 软件包管理器类似。你可以使用 `search` 来搜索 Python 模块,然后使用 `install` 来安装它们。如果你没有所用计算机的管理员权限,可以使用 `--user` 选项,把模块只安装到你的 home 目录。
+
+```
+$ pip3 search pygame
+[...]
+Pygame (1.9.3) - Python Game Development
+sge-pygame (1.5) - A 2-D game engine for Python
+pygame_camera (0.1.1) - A Camera lib for PyGame
+pygame_cffi (0.2.1) - A cffi-based SDL wrapper that copies the pygame API.
+[...]
+$ pip3 install Pygame --user
+```
+
+Pygame 是一个 Python 模块,这意味着它只是一套可以在你的 Python 程序中使用的库。换句话说,它不是一个像 [IDLE][3] 或 [Ninja-IDE][4] 那样可以直接启动的程序。
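+
+要验证 Pygame 是否安装成功,可以运行下面这个最小的示意脚本(打印版本号只是一种假设的检查方式,重点在于 `import` 不报错):
+
+```
+# 最小的验证示意:如果 Pygame 没有安装成功,
+# 下面这行 import 会抛出 ModuleNotFoundError
+import pygame
+
+print(pygame.version.ver)  # 打印已安装的 Pygame 版本号
+```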
+
+### Pygame 新手入门
+
+一个电子游戏需要一个背景设定,也就是故事发生的地点。在 Python 中,有两种不同的方法来创建你的游戏背景:
+
+ * 设置一种背景颜色
+ * 设置一张背景图片
+
+无论哪种方式,你的背景都只是一张图片或一种颜色。游戏人物不能与背景中的东西互动,所以不要在背景里放置太重要的东西,它仅仅是布景装饰。
+
+### 设置你的 Pygame 脚本
+
+要开始一个新的 Pygame 脚本,先在计算机上创建一个文件夹,游戏的全部文件都放在这个目录中。把运行游戏所需的全部文件都保存在工程文件夹内,这一点极其重要。
+
+
+
+一个 Python 脚本以文件类型声明、你的姓名以及你想使用的许可证开头。使用开源许可证,以便你的朋友可以改进你的游戏并与你分享他们的更改:
+
+```
+#!/usr/bin/env python3
+# Seth Kenlon 编写
+
+## GPLv3
+# This program is free software: you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+```
+
+然后,告诉 Python 你想使用哪些模块。其中一些是常见的 Python 库,当然,你还要包括刚刚安装的 Pygame。
+
+```
+import pygame # 加载 pygame 关键字
+import sys # 让 python 使用你的文件系统
+import os # 帮助 python 识别你的操作系统
+```
+
+由于你会频繁地使用这个脚本文件,把文件划分成几个区段会很有帮助,这样你就知道代码该放到哪里。划分时使用块注释,这些注释只有在查看源代码时才能看到。在你的代码中创建三个区段:
+
+```
+'''
+Objects
+'''
+
+# 在这里放置 Python 类和函数
+
+'''
+Setup
+'''
+
+# 在这里放置一次性的运行代码
+
+'''
+Main Loop
+'''
+
+# 在这里放置游戏的循环代码指令
+```
+
+接下来,为你的游戏设置窗口大小。注意,不是每个人都有大屏幕,所以最好使用一个在大多数人的计算机上都合适的窗口大小。
+
+有一种方法可以像许多现代电子游戏那样切换全屏模式(下面的代码块之后给出了一个示意),但由于你才刚刚入门,还是保持简单,只设置一个固定大小即可。
+
+```
+'''
+Setup
+'''
+worldx = 960
+worldy = 720
+```
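+
+下面是一个切换全屏的最小示意:`pygame.FULLSCREEN` 标志和 `pygame.display.set_mode()` 都是 Pygame 的真实接口,但其中的状态变量和绑定的按键纯属假设,等后面写好主循环之后再尝试:
+
+```
+fullscreen = False   # 假设的状态变量:记录当前是否处于全屏
+
+# 在主循环处理 KEYDOWN 事件时加入(示意):
+# if event.key == ord('f'):
+#     fullscreen = not fullscreen
+#     if fullscreen:
+#         world = pygame.display.set_mode([worldx, worldy], pygame.FULLSCREEN)
+#     else:
+#         world = pygame.display.set_mode([worldx, worldy])
+```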
+
+在脚本中使用 Pygame 引擎之前,你需要做一些基本设置:必须设置帧率,启动它的内部时钟,然后启动(`init`)Pygame。
+
+```
+fps = 40 # 帧率
+ani = 4 # 动画循环
+clock = pygame.time.Clock()
+pygame.init()
+```
+
+现在你可以设置你的背景。
+
+### 设置背景
+
+在继续之前,打开一个图形应用程序,为你的游戏世界创建一张背景图。将它命名为 `stage.png`,保存在工程目录下名为 `images` 的文件夹中。
+
+下面是一些你可以使用的自由的图形应用程序。
+
+ * [Krita][5] 是一个专业级的绘画材料模拟器,可以用来创建漂亮的图片。如果你对为电子游戏创作美术作品非常感兴趣,你甚至可以购买一系列的 [游戏美术教程][6]。
+ * [Pinta][7] 是一个基本的,易于学习的绘图应用程序。
+ * [Inkscape][8] 是一个矢量图形应用程序。使用它来绘制形状、线条、样条曲线和 Bézier 曲线。
+
+
+
+你的图像不必很复杂,以后随时可以回头修改它。有了背景图之后,在你文件的 setup 区段添加这些代码:
+
+```
+world = pygame.display.set_mode([worldx,worldy])
+backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
+backdropbox = world.get_rect()
+```
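+
+顺带一提,`convert()` 会把图片转换成适合快速绘制的内部格式。如果你的 `stage.png` 尺寸与窗口不一致,下面是一个把背景缩放到窗口大小的示意(`pygame.transform.scale()` 是 Pygame 的真实接口):
+
+```
+# 示意:把背景图缩放到窗口大小,避免背景与窗口尺寸不匹配
+backdrop = pygame.transform.scale(backdrop, (worldx, worldy))
+```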
+
+如果你只想用一种颜色来填充游戏背景,你只需要:
+
+```
+world = pygame.display.set_mode([worldx,worldy])
+```
+
+你还必须定义一个要使用的颜色。在你的 setup 区段中,使用红、绿、蓝(RGB)的值来创建一些颜色定义。
+
+```
+'''
+Setup
+'''
+
+BLUE = (25,25,200)
+BLACK = (23,23,23)
+WHITE = (254,254,254)
+```
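+
+RGB 三个分量的取值范围都是 0 到 255。需要的话,你可以用同样的方式定义更多颜色,比如下面这些示意值:
+
+```
+# 示意:更多颜色定义,每个分量的取值范围是 0 - 255
+RED = (200, 25, 25)
+GREEN = (25, 200, 25)
+GREY = (128, 128, 128)
+```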
+
+此时,理论上你已经可以启动你的游戏了。问题是,它只会持续一毫秒。
+
+为了证明这一点,把你的文件保存为 `your-name_game.py`(把 `your-name` 替换成你的真实姓名),然后启动你的游戏。
+
+如果你使用的是 IDLE,在 Run 菜单中选择 `Run Module` 来运行你的游戏。
+
+如果你使用的是 Ninja,单击左侧按钮栏中的 `Run file` 按钮。
+
+
+
+你也可以直接在 Unix 终端或 Windows 命令提示符中运行 Python 脚本。
+
+```
+$ python3 ./your-name_game.py
+```
+
+如果你使用的是 Windows,使用这条命令:
+
+```
+py.exe your-name_game.py
+```
+
+无论你以哪种方式启动它,都不要期望太多,因为你的游戏现在只会持续几毫秒。你可以在下一节中修复这个问题。
+
+### 循环
+
+除非另有指示,Python 脚本只会运行一次。如今的计算机运行速度非常快,所以你的 Python 脚本不到一秒钟就运行完了。
+
+要想强制让你的游戏保持打开和活跃的状态,长到足以让人看到它(更不用说玩它了),可以使用 `while` 循环。为了让游戏保持打开,你可以把一个变量设置为某个值,然后告诉 `while` 循环,只要这个变量保持不变,就一直循环下去。
+
+这通常被称为"主循环",你可以用 `main` 这个词作为变量名。把这些代码添加到你的 setup 区段的任意位置:
+
+```
+main = True
+```
+
+在主循环中,使用 Pygame 关键字来检测键盘按键是否被按下或释放。把这些代码添加到你的主循环区段:
+
+```
+'''
+Main loop
+'''
+while main == True:
+    for event in pygame.event.get():
+        if event.type == pygame.QUIT:
+            main = False    # 先置为 False,否则 sys.exit() 之后这行永远不会执行
+            pygame.quit()
+            sys.exit()
+
+        if event.type == pygame.KEYDOWN:
+            if event.key == ord('q'):
+                main = False
+                pygame.quit()
+                sys.exit()
+```
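+
+顺带一提,除了 `ord('q')`,Pygame 还为每个按键提供了常量(`pygame.K_q` 的值就等于 `ord('q')`)。下面是一个等价的示意写法:
+
+```
+# 等价的示意写法:使用 Pygame 的按键常量检测 q 键
+if event.type == pygame.KEYDOWN:
+    if event.key == pygame.K_q:
+        main = False
+        pygame.quit()
+        sys.exit()
+```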
+
+同样在主循环中,刷新你的游戏世界的背景。
+
+如果你使用图片作为背景:
+
+```
+world.blit(backdrop, backdropbox)
+```
+
+如果你使用颜色作为背景:
+
+```
+world.fill(BLUE)
+```
+
+最后,告诉 Pygame 刷新屏幕上的所有内容,并推进游戏的内部时钟。
+
+```
+ pygame.display.flip()
+ clock.tick(fps)
+```
+
+保存你的文件,再次运行它,来见识一下有史以来最无聊的游戏。
+
+要退出游戏,按下键盘上的 `q` 键。
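+
+为了便于对照,下面把本文的各个部分组合成一个最小的完整示意脚本(这里假设使用纯色背景,这样没有 `stage.png` 也能直接运行):
+
+```
+#!/usr/bin/env python3
+# 最小的完整示意脚本:把本文各区段组合在一起(纯色背景)
+
+import pygame
+
+'''
+Setup
+'''
+worldx = 960
+worldy = 720
+fps = 40
+BLUE = (25, 25, 200)
+
+clock = pygame.time.Clock()
+pygame.init()
+world = pygame.display.set_mode([worldx, worldy])
+main = True
+
+'''
+Main loop
+'''
+while main:
+    for event in pygame.event.get():
+        if event.type == pygame.QUIT:
+            main = False
+        if event.type == pygame.KEYDOWN:
+            if event.key == ord('q'):
+                main = False
+
+    world.fill(BLUE)         # 刷新背景
+    pygame.display.flip()    # 把绘制结果显示到屏幕上
+    clock.tick(fps)          # 维持帧率
+
+pygame.quit()                # 循环结束后干净地退出
+```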
+
+在这个系列的 [下一篇文章][9] 中,我将向你演示如何充实你目前还空无一物的游戏世界,那么,现在就开始创作一些后面要用到的图形吧!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/12/game-framework-python
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/article/17/10/python-101
+[2]: http://www.pygame.org/wiki/about
+[3]: https://en.wikipedia.org/wiki/IDLE
+[4]: http://ninja-ide.org/
+[5]: http://krita.org
+[6]: https://gumroad.com/l/krita-game-art-tutorial-1
+[7]: https://pinta-project.com/pintaproject/pinta/releases
+[8]: http://inkscape.org
+[9]: https://opensource.com/article/17/12/program-game-python-part-3-spawning-player
From 4b38cff0bd1a12b053551232383cedce019b48b5 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 20 Apr 2019 00:09:28 +0800
Subject: [PATCH 0049/1154] PRF:20190409 How To Install And Enable Flatpak
Support On Linux.md
@MjSeven
---
...all And Enable Flatpak Support On Linux.md | 48 ++++++++-----------
1 file changed, 21 insertions(+), 27 deletions(-)
diff --git a/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md b/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
index fe10d72bd7..2256a9497b 100644
--- a/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
+++ b/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Install And Enable Flatpak Support On Linux?)
@@ -10,35 +10,30 @@
如何在 Linux 上安装并启用 Flatpak 支持?
======
-
-
目前,我们都在使用 Linux 发行版的官方软件包管理器来安装所需的软件包。
-在 Linux 中,它做得很好,没有任何问题。(它很好地完成了它应该做的工作,同时它没有任何妥协)
+在 Linux 中,它做得很好,没有任何问题。(它不打折扣地很好地完成了它应该做的工作)
-在一些方面它也有一些限制,所以会让我们考虑其他替代解决方案来解决。
+但它在一些方面也有一些限制,这会让我们考虑其他的替代解决方案。
-是的,默认情况下,我们不会从发行版官方软件包管理器获取最新版本的软件包,因为这些软件包是在构建当前 OS 版本时构建的。它们只会提供安全更新,直到下一个主要版本发布。
+是的,默认情况下,我们不能从发行版官方软件包管理器获取到最新版本的软件包,因为这些软件包是在构建当前 OS 版本时构建的。它们只会提供安全更新,直到下一个主要版本发布。
-那么,这种情况有什么解决办法吗?
-
-是的,我们有多种解决方案,而且我们大多数人已经开始使用其中的一些了。
+那么,这种情况有什么解决办法吗?是的,我们有多种解决方案,而且我们大多数人已经开始使用其中的一些了。
都有些什么呢?它们又有什么好处?
- * **对于基于 Ubuntu 的系统:** PPAs
- * **对于基于 RHEL 的系统:** [EPEL Repository][1]、[ELRepo Repository][2]、[nux-dextop Repository][3]、[IUS Community Repo][4]、[RPMfusion Repository][5] 和 [Remi Repository][6]
+ * **对于基于 Ubuntu 的系统:** PPA
+ * **对于基于 RHEL 的系统:** [EPEL 仓库][1]、[ELRepo 仓库][2]、[nux-dextop 仓库][3]、[IUS 社区仓库][4]、[RPMfusion 仓库][5] 和 [Remi 仓库][6]
-
-使用上面的仓库,我们将获得最新的软件包。这些软件包通常都得到了很好的维护,还有大多数社区的建议。但这对于操作系统来说应该是适当的,因为它们可能并不安全。
+使用上面的仓库,我们将获得最新的软件包。这些软件包通常都得到了很好的维护,还有大多数社区的推荐。但这些只是建议,可能并不总是安全的。
近年来,出现了一些通用软件包封装格式,并且得到了广泛的应用。
- * **`Flatpak:`** 它是独立于发行版的包格式,主要贡献者是 Fedora 项目团队。大多数主要的 Linux 发行版都采用了 Flatpak 框架。
- * **`Snaps:`** Snappy 是一种通用的软件包封装格式,最初由 Canonical 为 Ubuntu 手机及其操作系统设计和构建的。后来,大多数发行版都进行了改编。
- * **`AppImage:`** AppImage 是一种可移植的包格式,可以在不安装或不需要 root 权限的情况下运行。
+ * Flatpak:它是独立于发行版的包格式,主要贡献者是 Fedora 项目团队。大多数主要的 Linux 发行版都采用了 Flatpak 框架。
+ * Snaps:Snappy 是一种通用的软件包封装格式,最初由 Canonical 为 Ubuntu 手机及其操作系统设计和构建的。后来,更多的发行版都接纳了它。
+ * AppImage:AppImage 是一种可移植的包格式,可以在不安装和不需要 root 权限的情况下运行。
-我们之前已经介绍过 **[Snap 包管理器和包封装格式][7]**。今天我们将讨论 Flatpak 包封装格式。
+我们之前已经介绍过 [Snap 包管理器和包封装格式][7]。今天我们将讨论 Flatpak 包封装格式。
### 什么是 Flatpak?
@@ -56,13 +51,13 @@ Flatpak 的一个缺点是不像 Snap 和 AppImage 那样支持服务器操作
大多数 Linux 发行版官方仓库都提供 Flatpak 软件包。因此,可以使用它们来进行安装。
-对于 **`Fedora`** 系统,使用 **[DNF 命令][8]** 来安装 flatpak。
+对于 Fedora 系统,使用 [DNF 命令][8] 来安装 flatpak。
```
$ sudo dnf install flatpak
```
-对于 **`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][9]** 或 **[APT 命令][10]** 来安装 flatpak。
+对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][9] 或 [APT 命令][10] 来安装 flatpak。
```
$ sudo apt install flatpak
@@ -76,19 +71,19 @@ $ sudo apt update
$ sudo apt install flatpak
```
-对于基于 **`Arch Linux`** 的系统,使用 **[Pacman 命令][11]** 来安装 flatpak。
+对于基于 Arch Linux 的系统,使用 [Pacman 命令][11] 来安装 flatpak。
```
$ sudo pacman -S flatpak
```
-对于 **`RHEL/CentOS`** 系统,使用 **[YUM 命令][12]** 来安装 flatpak。
+对于 RHEL/CentOS 系统,使用 [YUM 命令][12] 来安装 flatpak。
```
$ sudo yum install flatpak
```
-对于 **`openSUSE Leap`** 系统,使用 **[Zypper 命令][13]** 来安装 flatpak。
+对于 openSUSE Leap 系统,使用 [Zypper 命令][13] 来安装 flatpak。
```
$ sudo zypper install flatpak
@@ -96,9 +91,7 @@ $ sudo zypper install flatpak
### 如何在 Linux 上启用 Flathub 支持?
-Flathub 网站是一个应用程序商店,你可以在其中找到 flatpak。
-
-它是一个中央仓库,所有的 flatpak 应用程序都可供用户使用。
+Flathub 网站是一个应用程序商店,你可以在其中找到 flatpak 软件包。它是一个中央仓库,所有的 flatpak 应用程序都可供用户使用。
运行以下命令在 Linux 上启用 Flathub 支持:
@@ -226,7 +219,7 @@ org.gnome.Platform/x86_64/3.30 system,runtime
### 如何查看有关已安装应用程序的详细信息?
-运行以下命令以查看有关已安装应用程序的详细信息。
+运行以下命令以查看有关已安装应用程序的详细信息:
```
$ flatpak info com.github.muriloventuroso.easyssh
@@ -264,6 +257,7 @@ $ flatpak update com.github.muriloventuroso.easyssh
### 如何移除已安装的应用程序?
运行以下命令来移除已安装的应用程序:
+
```
$ sudo flatpak uninstall com.github.muriloventuroso.easyssh
```
@@ -281,7 +275,7 @@ via: https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 1b5c31cc5c83b15ccdfa55798a5f7c1375a7f9ad Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 20 Apr 2019 00:10:02 +0800
Subject: [PATCH 0050/1154] PUB:20190409 How To Install And Enable Flatpak
Support On Linux.md
@MjSeven https://linux.cn/article-10751-1.html
---
...0409 How To Install And Enable Flatpak Support On Linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190409 How To Install And Enable Flatpak Support On Linux.md (99%)
diff --git a/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md b/published/20190409 How To Install And Enable Flatpak Support On Linux.md
similarity index 99%
rename from translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
rename to published/20190409 How To Install And Enable Flatpak Support On Linux.md
index 2256a9497b..246405796b 100644
--- a/translated/tech/20190409 How To Install And Enable Flatpak Support On Linux.md
+++ b/published/20190409 How To Install And Enable Flatpak Support On Linux.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10751-1.html)
[#]: subject: (How To Install And Enable Flatpak Support On Linux?)
[#]: via: (https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
From 92cd1f682601b91c84cca0a65c61a4e2496bf867 Mon Sep 17 00:00:00 2001
From: FSSlc
Date: Sat, 20 Apr 2019 09:47:55 +0800
Subject: [PATCH 0051/1154] Update 20190415 Inter-process communication in
Linux- Shared storage.md
---
...15 Inter-process communication in Linux- Shared storage.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
index bf6c2c07cc..de0a8ffdc1 100644
--- a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
+++ b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (FSSlc)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -402,7 +402,7 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-storag
作者:[Marty Kalin][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 23059a4f81299c60114084ad3b9027492fcff56c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=BB=98=E5=B3=A5?=
<24203166+fuzheng1998@users.noreply.github.com>
Date: Sat, 20 Apr 2019 10:07:30 +0800
Subject: [PATCH 0052/1154] apply to translate
---
sources/tech/20190409 5 open source mobile apps.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
index 15378c29b8..b606e50526 100644
--- a/sources/tech/20190409 5 open source mobile apps.md
+++ b/sources/tech/20190409 5 open source mobile apps.md
@@ -1,3 +1,5 @@
+fuzheng1998 applying
+
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
From b34b63afd3d93655e082528351a004e534fa3293 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 20 Apr 2019 10:45:32 +0800
Subject: [PATCH 0053/1154] PRF:20190325 Getting started with Vim- The
basics.md
@Modrisco
---
...25 Getting started with Vim- The basics.md | 67 ++++++++++---------
1 file changed, 34 insertions(+), 33 deletions(-)
diff --git a/translated/tech/20190325 Getting started with Vim- The basics.md b/translated/tech/20190325 Getting started with Vim- The basics.md
index 87b2ed01f5..db884d298d 100644
--- a/translated/tech/20190325 Getting started with Vim- The basics.md
+++ b/translated/tech/20190325 Getting started with Vim- The basics.md
@@ -1,28 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Vim: The basics)
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
-[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
+[#]: author: (Bryant Son https://opensource.com/users/brson)
Vim 入门:基础
======
-为工作或者新项目学习足够的 Vim 知识。
+> 为工作或者新项目学习足够的 Vim 知识。
![Person standing in front of a giant computer screen with numbers, data][1]
-我还清晰地记得我第一次接触 Vim 的时候。那时我还是一名大学生,计算机学院的机房里都装着 Ubuntu 系统。尽管我在上大学前也曾接触过不同的 Linux 发行版(比如 RHEL,Red Hat 在百思买出售它的 CD),但这却是我第一次要在日常中频繁使用 Linux 系统,因为我的课程要求我这样做。当我开始使用 Linux 时,正如我的前辈和将来的后继者们一样,我感觉自己像是一名“真正的程序员”了。
+我还清晰地记得我第一次接触 Vim 的时候。那时我还是一名大学生,计算机学院的机房里都装着 Ubuntu 系统。尽管我在上大学前也曾接触过不同的 Linux 发行版(比如 RHEL —— Red Hat 在百思买出售它的 CD),但这却是我第一次要在日常中频繁使用 Linux 系统,因为我的课程要求我这样做。当我开始使用 Linux 时,正如我的前辈和将来的后继者们一样,我感觉自己像是一名“真正的程序员”了。
![Real Programmers comic][2]
-真正的程序员,来自 [xkcd][3]
+*真正的程序员,来自 [xkcd][3]*
学生们可以使用像 [Kate][4] 一样的图形文本编辑器,这也安装在学校的电脑上了。对于那些可以使用 shell 但不习惯使用控制台编辑器的学生,最流行的选择是 [Nano][5],它提供了很好的交互式菜单和类似于 Windows 图形文本编辑器的体验。
-我有时会用 Nano,但当我听说 [Vi/Vim][6] 和 [Emacs][7] 能做一些很棒的事情时我决定试一试它们(主要是因为它们看起来很酷,而且我也很好奇它们有什么特别之处)。第一次使用 Vim 时吓到我了 —— 我不想搞砸任何事情!但是,一旦我掌握了它的诀窍,事情就变得容易得多,我可以欣赏编辑器的强大功能。至于 Emacs,呃,我有点放弃了,但我很高兴我坚持和 Vim 在一起。
+我有时会用 Nano,但当我听说 [Vi/Vim][6] 和 [Emacs][7] 能做一些很棒的事情时我决定试一试它们(主要是因为它们看起来很酷,而且我也很好奇它们有什么特别之处)。第一次使用 Vim 时吓到我了 —— 我不想搞砸任何事情!但是,一旦我掌握了它的诀窍,事情就变得容易得多,我就可以欣赏这个编辑器的强大功能了。至于 Emacs,呃,我有点放弃了,但我很高兴我坚持和 Vim 在一起。
在本文中,我将介绍一下 Vim(基于我的个人经验),这样你就可以在 Linux 系统上用它来作为编辑器使用了。这篇文章不会让你变成 Vim 的专家,甚至不会触及 Vim 许多强大功能的皮毛。但是起点总是很重要的,我想让开始的经历尽可能简单,剩下的则由你自己去探索。
@@ -40,23 +40,23 @@ Vim 入门:基础
还记得我一开始说过我不敢使用 Vim 吗?我当时在害怕“如果我改变了一个现有的文件,把事情搞砸了怎么办?”毕竟,一些计算机科学作业要求我修改现有的文件。我想知道:_如何在不保存更改的情况下打开和关闭文件?_
-好消息是你可以使用相同的命令在 Vim 中创建或打开文件:`vim `,其中 **** 表示要创建或修改的目标文件名。让我们通过输入 `vim HelloWorld.java` 来创建一个名为 `HelloWorld.java` 的文件。
+好消息是你可以使用相同的命令在 Vim 中创建或打开文件:`vim <FILE_NAME>`,其中 `<FILE_NAME>` 表示要创建或修改的目标文件名。让我们通过输入 `vim HelloWorld.java` 来创建一个名为 `HelloWorld.java` 的文件。
你好,Vim!现在,讲一下 Vim 中一个非常重要的概念,可能也是最需要记住的:Vim 有多种模式,下面是 Vim 基础中需要知道的三种:
模式 | 描述
---|---
正常模式 | 默认模式,用于导航和简单编辑
-插入模式 | 用于插入和修改文本
-命令模式 | 用于执行如保存,退出等命令
+插入模式 | 用于直接插入和修改文本
+命令行模式 | 用于执行如保存,退出等命令
-Vim 也有其他模式,例如可视模式、选择模式和命令模式。不过上面的三种模式对我们来说已经足够好了。
+Vim 也有其他模式,例如可视模式、选择模式和命令模式。不过上面的三种模式对我们来说已经足够用了。
你现在正处于正常模式,如果有文本,你可以用箭头键移动或使用其他导航键(将在稍后看到)。要确定你正处于正常模式,只需按下 `esc` (Escape)键即可。
-> **提示:** **Esc** 切换到正常模式。即使你已经在正常模式下,点击 **Esc** 只是为了练习。
+> **提示:** `Esc` 切换到正常模式。即使你已经在正常模式下,点击 `Esc` 只是为了练习。
-现在,有趣的事情发生了。输入 `:` (冒号键)并接着 `q!` (完整命令:`:q!`)。你的屏幕将显示如下:
+现在,有趣的事情发生了。输入 `:` (冒号键)并接着 `q!` (完整命令:`:q!`)。你的屏幕将显示如下:
![Editing Vim][9]
@@ -68,7 +68,7 @@ Vim 也有其他模式,例如可视模式、选择模式和命令模式。不
通过输入 `vim HelloWorld.java` 和回车键来再次打开这个文件。你可以在插入模式中修改文件。首先,通过 `Esc` 键来确定你正处于正常模式。接着输入 `i` 来进入插入模式(没错,就是字母 **i**)。
-在左下角,你将看到 `\-- INSERT --`,这标志着你这处于插入模式。
+在左下角,你将看到 `-- INSERT --`,这标志着你正处于插入模式。
![Vim insert mode][10]
@@ -80,9 +80,10 @@ public class HelloWorld {
}
}
```
+
非常漂亮!注意文本是如何在 Java 语法中高亮显示的。因为这是个 Java 文件,所以 Vim 将自动检测语法并高亮颜色。
-保存文件:按下 `Esc` 来退出插入模式并进入命令模式。输入 `:` 并接着 `x!` (完整命令:`:x!`),按回车键来保存文件。你也可以输入 `wq` 来执行相同的操作。
+保存文件:按下 `Esc` 来退出插入模式并进入命令行模式。输入 `:` 并接着 `x!` (完整命令:`:x!`),按回车键来保存文件。你也可以输入 `wq` 来执行相同的操作。
现在,你知道了如何使用插入模式输入文本并使用以下命令保存文件:`:x!` 或者 `:wq`。
@@ -96,7 +97,7 @@ public class HelloWorld {
![Showing Line Numbers][12]
-好,你也许会说,“这确实很酷,不过我该怎么跳到某一行呢?”再一次的,确认你正处于正常模式。接着输入 `: `,在这里 **< LINE_NUMBER>** 是你想去的那一行的行数。按下回车键来试着移动到第二行。
+好,你也许会说,"这确实很酷,不过我该怎么跳到某一行呢?"再一次的,确认你正处于正常模式。接着输入 `:<LINE_NUMBER>`,在这里 `<LINE_NUMBER>` 是你想去的那一行的行数。按下回车键来试着移动到第二行。
```
:2
@@ -116,17 +117,17 @@ public class HelloWorld {
你现在来到这行的最后一个字节了。在此示例中,高亮左大括号以显示光标移动到的位置,右大括号被高亮是因为它是高亮的左大括号的匹配字符。
-这就是 Vim 中的基本导航功能。等等,别急着退出文件。让我们转到 Vim 中的基本编辑。不过,你可以暂时随便喝杯咖啡或茶休息一下。
+这就是 Vim 中的基本导航功能。等等,别急着退出文件。让我们转到 Vim 中的基本编辑。不过,你可以暂时顺便喝杯咖啡或茶休息一下。
### 第 4 步:Vim 中的基本编辑
现在,你已经知道如何通过跳到想要的一行来在文件中导航,你可以使用这个技能在 Vim 中进行一些基本编辑。切换到插入模式。(还记得怎么做吗?是不是输入 `i` ?)当然,你可以使用键盘逐一删除或插入字符来进行编辑,但是 Vim 提供了更快捷的方法来编辑文件。
-来到第三行,这里的代码是 **public static void main(String[] args) {**。双击 `d` 键,没错,就是 `dd`。如果你成功做到了,你将会看到,第三行消失了,剩下的所有行都向上移动了一行。(例如,第四行变成了第三行)。
+来到第三行,这里的代码是 `public static void main(String[] args) {`。双击 `d` 键,没错,就是 `dd`。如果你成功做到了,你将会看到,第三行消失了,剩下的所有行都向上移动了一行。(例如,第四行变成了第三行)。
![Deleting A Line][15]
-这就是 _删除_(delete) 命令。不要担心,键入 `u`,你会发现这一行又回来了。喔,这就是 _撤销_(undo) 命令。
+这就是删除(delete)命令。不要担心,键入 `u`,你会发现这一行又回来了。喔,这就是撤销(undo)命令。
![Undoing a change in Vim][16]
@@ -134,7 +135,7 @@ public class HelloWorld {
![Highlighting text in Vim][17]
-来到第四行,这里的代码是 **System.out.println("Hello, Opensource");**。高亮这一行的所有内容。好了吗?当第四行的内容处于高亮时,按下 `y`。这就叫做 _复制_(yank)模式,文本将会被复制到剪贴板上。接下来,输入 `o` 来创建新的一行。注意,这将让你进入插入模式。通过按 `Esc` 退出插入模式,然后按下 `p`,代表 _粘贴_。这将把复制的文本从第三行粘贴到第四行。
+来到第四行,这里的代码是 `System.out.println("Hello, Opensource");`。高亮这一行的所有内容。好了吗?当第四行的内容处于高亮时,按下 `y`。这就叫做复制(yank)模式,文本将会被复制到剪贴板上。接下来,输入 `o` 来创建新的一行。注意,这将让你进入插入模式。通过按 `Esc` 退出插入模式,然后按下 `p`,代表粘贴(paste)。这将把复制的文本从第三行粘贴到第四行。
![Pasting in Vim][18]
@@ -148,50 +149,50 @@ public class HelloWorld {
假设你的团队领导希望你更改项目中的文本字符串。你该如何快速完成任务?你可能希望使用某个关键字来搜索该行。
-Vim 的搜索功能非常有用。通过 `Esc` 键来进入命令模式,然后输入冒号 `:`,我们可以通过输入 `/ ` 来搜索关键词, **< SEARCH_KEYWORD>** 指你希望搜索的字符串。在这里,我们搜索关键字符串 “Hello”。在面的图示中缺少冒号,但这是必需的。
+Vim 的搜索功能非常有用。通过 `Esc` 键来进入命令模式,然后输入冒号 `:`,我们可以通过输入 `/<SEARCH_KEYWORD>` 来搜索关键词,`<SEARCH_KEYWORD>` 指你希望搜索的字符串。在这里,我们搜索关键字符串 `Hello`。在下面的图示中没有显示冒号,但这是必须输入的。
![Searching in Vim][19]
-但是,一个关键字可以出现不止一次,而这可能不是你想要的那一个。那么,如何找到下一个匹配项呢?只需按 `n` 键即可,这代表 _下一个_(next)。执行此操作时,请确保你没有处于插入模式!
+但是,一个关键字可以出现不止一次,而这可能不是你想要的那一个。那么,如何找到下一个匹配项呢?只需按 `n` 键即可,这代表下一个(next)。执行此操作时,请确保你没有处于插入模式!
-### 附加步骤:Vim中的分割模式
+### 附加步骤:Vim 中的分割模式
-以上几乎涵盖了所有的 Vim 基础知识。但是,作为一个额外奖励,我想给你展示 Vim 一个很酷的特性,叫做 _分割_(split)模式。
+以上几乎涵盖了所有的 Vim 基础知识。但是,作为一个额外奖励,我想给你展示 Vim 的一个很酷的特性,叫做分割(split)模式。
-退出 _HelloWorld.java_ 并创建一个新文件。在控制台窗口中,输入 `vim GoodBye.java` 并按回车键来创建一个名为 _GoodBye.java_ 的新文件。
+退出 `HelloWorld.java` 并创建一个新文件。在控制台窗口中,输入 `vim GoodBye.java` 并按回车键来创建一个名为 `GoodBye.java` 的新文件。
-输入任何你想输入的让内容,我选择输入“Goodbye”。保存文件(记住你可以在命令模式中使用 `:x!` 或者 `:wq`)。
+输入任何你想输入的内容,我选择输入 `Goodbye`。保存文件(记住你可以在命令模式中使用 `:x!` 或者 `:wq`)。
在命令模式中,输入 `:split HelloWorld.java`,来看看发生了什么。
![Split mode in Vim][20]
-Wow!快看!**split** 命令将控制台窗口水平分割成了两个部分,上面是 _HelloWorld.java_,下面是 _GoodBye.java_。该怎么能在窗口之间切换呢?按住 `Control` 键 (在 Mac 上)或 `Ctrl` 键(在PC上),然后按下 `ww` (即双击 `w` 键)。
+Wow!快看! `split` 命令将控制台窗口水平分割成了两个部分,上面是 `HelloWorld.java`,下面是 `GoodBye.java`。该怎么能在窗口之间切换呢? 按住 `Control` 键(在 Mac 上)或 `Ctrl` 键(在 PC 上),然后按下 `ww` (即双击 `w` 键)。
-作为最后一个练习,尝试通过复制和粘贴 _HelloWorld.java_ 来编辑 _GoodBye.java_ 以匹配下面屏幕上的内容。
+作为最后一个练习,尝试通过复制和粘贴 `HelloWorld.java` 来编辑 `GoodBye.java` 以匹配下面屏幕上的内容。
![Modify GoodBye.java file in Split Mode][21]
保存两份文件,成功!
-> **提示 1:** 如果你想将两个文件窗口垂直分割,使用 `:vsplit ` 命令。(代替 `:split ` 命令,**< FILE_NAME>** 指你想要使用分割模式打开的文件名)。
+> **提示 1:** 如果你想将两个文件窗口垂直分割,使用 `:vsplit <FILE_NAME>` 命令(代替 `:split <FILE_NAME>` 命令,`<FILE_NAME>` 指你想要使用分割模式打开的文件名)。
>
-> **提示 2:** 你可以通过调用任意数量的 **split** 或者 **vsplit** 命令来打开两个以上的文件。试一试,看看它效果如何。
+> **提示 2:** 你可以通过调用任意数量的 `split` 或者 `vsplit` 命令来打开两个以上的文件。试一试,看看它效果如何。
### Vim 速查表
-在本文中,您学会了如何使用 Vim 来完成工作或项目。但这只是你开启 Vim 强大功能之旅的开始。请务必在 Opensource.com 上查看其他很棒的教程和技巧。
+在本文中,你学会了如何使用 Vim 来完成工作或项目,但这只是你开启 Vim 强大功能之旅的开始,可以继续查看其他很棒的教程和技巧。
-为了让一切变得简单些,我已经将你学到的一切总结到了 [a handy cheat sheet][22] 中。
+为了让一切变得简单些,我已经将你学到的一切总结到了 [一份方便的速查表][22] 中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/getting-started-vim
-作者:[Bryant Son (Red Hat, Community Moderator)][a]
+作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[Modrisco](https://github.com/Modrisco)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 18c1d007c9c7692aea7fc3c988d23903290cc760 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sat, 20 Apr 2019 10:46:12 +0800
Subject: [PATCH 0054/1154] PUB:20190325 Getting started with Vim- The
basics.md
@Modrisco https://linux.cn/article-10752-1.html
---
.../20190325 Getting started with Vim- The basics.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190325 Getting started with Vim- The basics.md (99%)
diff --git a/translated/tech/20190325 Getting started with Vim- The basics.md b/published/20190325 Getting started with Vim- The basics.md
similarity index 99%
rename from translated/tech/20190325 Getting started with Vim- The basics.md
rename to published/20190325 Getting started with Vim- The basics.md
index db884d298d..bcdc0ffe20 100644
--- a/translated/tech/20190325 Getting started with Vim- The basics.md
+++ b/published/20190325 Getting started with Vim- The basics.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10752-1.html)
[#]: subject: (Getting started with Vim: The basics)
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
[#]: author: (Bryant Son https://opensource.com/users/brson)
From 65051c17b18367622710a9765056b236676a9b18 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=BB=98=E5=B3=A5?=
<24203166+fuzheng1998@users.noreply.github.com>
Date: Sat, 20 Apr 2019 10:53:06 +0800
Subject: [PATCH 0055/1154] Update 20190409 5 open source mobile apps.md
---
sources/tech/20190409 5 open source mobile apps.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
index b606e50526..ef570bc07b 100644
--- a/sources/tech/20190409 5 open source mobile apps.md
+++ b/sources/tech/20190409 5 open source mobile apps.md
@@ -1,4 +1,4 @@
-fuzheng1998 applying
+translator:(fuzheng1998)
[#]: collector: (lujun9972)
[#]: translator: ( )
From 2f96e5fe5a08a0de97abf9f46933776f4a8e33c4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E6=95=A6=E9=94=8B?= <289716347@qq.com>
Date: Sat, 20 Apr 2019 18:09:52 +0800
Subject: [PATCH 0056/1154] finish translated
---
...16 How to Install MySQL in Ubuntu Linux.md | 238 -----------------
...16 How to Install MySQL in Ubuntu Linux.md | 244 ++++++++++++++++++
2 files changed, 244 insertions(+), 238 deletions(-)
delete mode 100644 sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
create mode 100644 translated/tech/20190416 How to Install MySQL in Ubuntu Linux.md
diff --git a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md b/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
deleted file mode 100644
index 87d7c98172..0000000000
--- a/sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
+++ /dev/null
@@ -1,238 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (arrowfeng)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to Install MySQL in Ubuntu Linux)
-[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
-[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
-
-How to Install MySQL in Ubuntu Linux
-======
-
-_**Brief: This tutorial teaches you to install MySQL in Ubuntu based Linux distributions. You’ll also learn how to verify your install and how to connect to MySQL for the first time.**_
-
-**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open-source**.
-
-**MySQL** uses **relational databases** (basically **tabular data** ). It is really easy to store, organize and access data this way. For managing data, **SQL** ( **Structured Query Language** ) is used.
-
-In this article I’ll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let’s get to it!
-
-### Installing MySQL in Ubuntu
-
-![][3]
-
-I’ll be covering two ways you can install **MySQL** in Ubuntu 18.04:
-
- 1. Install MySQL from the Ubuntu repositories. Very basic, not the latest version (5.7)
- 2. Install MySQL using the official repository. There is a bigger step that you’ll have to add to the process, but nothing to worry about. Also, you’ll have the latest version (8.0)
-
-
-
-When needed, I’ll provide screenshots to guide you. For most of this guide, I’ll be entering commands in the **terminal** ( **default hotkey** : CTRL+ALT+T). Don’t be scared of it!
-
-#### Method 1. Installing MySQL from the Ubuntu repositories
-
-First of all, make sure your repositories are updated by entering:
-
-```
-sudo apt update
-```
-
-Now, to install **MySQL 5.7** , simply type:
-
-```
-sudo apt install mysql-server -y
-```
-
-That’s it! Simple and efficient.
-
-#### Method 2. Installing MySQL using the official repository
-
-Although this method has a few more steps, I’ll go through them one by one and I’ll try writing down clear notes.
-
-The first step is browsing to the [download page][4] of the official MySQL website.
-
-![][5]
-
-Here, go down to the **download link** for the **DEB Package**.
-
-![][6]
-
-Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.
-
-Now go back to the terminal. We’ll [use][7] **[Curl][7]** [command][7] to the download the package:
-
-```
-curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
-```
-
-The URL above is the link I copied from the website. It might be different based on the current version of MySQL. Let's use **dpkg** to start installing MySQL:
-
-```
-sudo dpkg -i mysql-apt-config*
-```
-
-Update your repositories:
-
-```
-sudo apt update
-```
-
-To actually install MySQL, we’ll use the same command as in the first method:
-
-```
-sudo apt install mysql-server -y
-```
-
-Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.
-
-![][8]
-
-Press **Enter**. This should prompt you to enter a **password**: you are basically setting the root password for MySQL. Don't confuse it with the [root password of the Ubuntu][9] system.
-
-![][10]
-
-Type in a password and press **Tab** to select **< Ok>**. Press **Enter.** You’ll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **< Ok>**. Press **Enter**.
-
-![][11]
-
-Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **< Ok>** and **Enter** again:
-
-![][12]
-
-Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.
-
-That’s it! You have successfully installed MySQL.
-
-#### Verify your MySQL installation
-
-To **verify** that MySQL installed correctly, use:
-
-```
-sudo systemctl status mysql.service
-```
-
-This will show some information about the service:
-
-![][13]
-
-You should see **Active: active (running)** in there somewhere. If you don’t, use the following command to start the **service** :
-
-```
-sudo systemctl start mysql.service
-```
-
-#### Configuring/Securing MySQL
-
-For a new install, you should run the provided command for security-related updates. That’s:
-
-```
-sudo mysql_secure_installation
-```
-
-Doing so will first of all ask you if you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you'll have to select a minimum password strength ( **0 – Low, 1 – Medium, 2 – High** ). You won't be able to input any password that doesn't respect the selected rules. If you don't have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter** , then choose a **strength level** for your password and input the one you want to use. If successful, you'll continue the **securing** process; otherwise you'll have to re-enter a password.
-
-If, however, you do not want this feature (I won’t), just press **Enter** or **any other key** to skip using it.
-
-For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). They are (in this order): **remove anonymous user, disallow root login remotely, remove test database and access to it, reload privilege tables now**.
-
-#### Connecting to & Disconnecting from the MySQL Server
-
-To be able to run SQL queries, you’ll first have to connect to the server using MySQL and use the MySQL prompt. The command for doing this is:
-
-```
-mysql -h host_name -u user -p
-```
-
- * **-h** is used to specify a **host name** (if the server is located on another machine; if it isn’t, just omit it)
- * **-u** mentions the **user**
- * **-p** specifies that you want to input a **password**.
-
-
-
-Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:
-
-```
-mysql -u test_user -p1234
-```
-
-If you successfully inputted the required parameters, you’ll be greeted by the **MySQL shell prompt** ( **mysql >**):
-
-![][14]
-
-To **disconnect** from the server and **leave** the mysql prompt, type:
-
-```
-QUIT
-```
-
-Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.
-
-You can also output info about the **version** with a simple command:
-
-```
-sudo mysqladmin -u root version -p
-```
-
-If you want to see a **list of options** , use:
-
-```
-mysql --help
-```
-
-#### Uninstalling MySQL
-
-If you decide that you want to use a newer release or just want to stop using MySQL, you can uninstall it.
-
-First, disable the service:
-
-```
-sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
-```
-
-Make sure you backed up your databases, in case you want to use them later on. You can uninstall MySQL by running:
-
-```
-sudo apt purge mysql*
-```
-
-To clean up dependencies:
-
-```
-sudo apt autoremove
-```
-
-**Wrapping Up**
-
-In this article, I’ve covered **installing MySQL** in Ubuntu Linux. I’d be glad if this guide helps struggling users and beginners.
-
-Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We're eager to receive any feedback, impressions or suggestions. Thanks for reading, and don't hesitate to experiment with this incredible tool!
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/install-mysql-ubuntu/
-
-作者:[Sergiu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/sergiu/
-[b]: https://github.com/lujun9972
-[1]: https://www.mysql.com/
-[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
-[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
-[4]: https://dev.mysql.com/downloads/repo/apt/
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
-[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
-[7]: https://linuxhandbook.com/curl-command-examples/
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
-[9]: https://itsfoss.com/change-password-ubuntu/
-[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
-[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
-[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
-[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
-[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
diff --git a/translated/tech/20190416 How to Install MySQL in Ubuntu Linux.md b/translated/tech/20190416 How to Install MySQL in Ubuntu Linux.md
new file mode 100644
index 0000000000..27b2fba58e
--- /dev/null
+++ b/translated/tech/20190416 How to Install MySQL in Ubuntu Linux.md
@@ -0,0 +1,244 @@
+[#]: collector: (lujun9972)
+[#]: translator: (arrowfeng)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Install MySQL in Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+怎样在 Ubuntu Linux 上安装 MySQL
+======
+
+_**简要:本教程教你如何在基于 Ubuntu 的 Linux 发行版上安装 MySQL。你还将学习如何验证安装,以及如何第一次连接到 MySQL。**_
+
+**[MySQL][1]** 是最典型的数据库管理系统。它被用于许多技术栈中,包括流行的 **[LAMP][2]**(Linux、Apache、MySQL、PHP)技术栈。它的稳定性已经得到了证明。另一个让 **MySQL** 如此出色的原因是它是**开源**的。
+
+**MySQL** 使用**关系型数据库**(基本上就是**表格数据**)。以这种方式存储、组织和访问数据都很容易。它使用 **SQL**(**结构化查询语言**)来管理数据。
+
+在这篇文章中,我将向你展示如何在 Ubuntu 18.04 上安装和使用 MySQL 8.0。让我们开始吧!
+
+
+### 在 Ubuntu 上安装 MySQL
+
+![][3]
+
+我将会介绍两种在 Ubuntu 18.04 上安装 **MySQL** 的方法:
+
+ 1. 从 Ubuntu 仓库安装 MySQL。非常简单,但不是最新版(5.7)
+ 2. 从官方仓库安装 MySQL。过程中需要额外增加一些步骤,但不用担心,你将会得到最新版的 MySQL(8.0)
+
+
+必要的时候,我会提供屏幕截图来引导你。这篇教程的大部分步骤,我都会直接在**终端**(**默认热键**:CTRL+ALT+T)中输入命令。别害怕!
+
+#### 方法 1. 从Ubuntu仓库安装MySQL
+
+首先,输入下列命令确保你的仓库已经被更新:
+
+```
+sudo apt update
+```
+
+现在,要安装 **MySQL 5.7**,只需输入下列命令:
+
+```
+sudo apt install mysql-server -y
+```
+
+就是这样!简单且高效。
+
+#### 方法 2. 使用官方仓库安装 MySQL
+
+虽然这个方法多了一些步骤,但我会逐一介绍,并尽量写得清晰明了。
+
+第一步是浏览 MySQL 官方网站的 [下载页面][4]。
+
+![][5]
+
+在这里,往下找到 **DEB Package** 的**下载链接**并点击它。
+
+![][6]
+
+滚动过有关 Oracle Web 的信息,右键点击 **No thanks, just start my download.**,然后选择 **Copy link location**。
+
+
+现在回到终端,我们将使用 [curl 命令][7] 来下载这个软件包:
+
+```
+curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
+```
+
+上面命令中的链接是我刚刚从网页上复制的,它可能随 MySQL 的当前版本而有所不同。让我们使用 **dpkg** 来开始安装 MySQL:
+
+```
+sudo dpkg -i mysql-apt-config*
+```
+
+更新你的仓库:
+
+```
+sudo apt update
+```
+
+要实际安装 MySQL,我们使用与第一种方法中相同的命令:
+
+```
+sudo apt install mysql-server -y
+```
+
+这样做会在你的终端中打开**软件包配置**提示。使用**向下箭头**选择 **Ok** 选项。
+
+![][8]
+
+按下 **Enter**。这应该会提示你输入**密码**:你实际上是在为 MySQL 设置 root 密码。不要把它与 [Ubuntu 系统的 root 密码][9] 混淆。
+
+![][10]
+
+输入密码,然后按 **Tab** 键选择 **< Ok>**,按下 **Enter** 键。现在你需要**重新输入**这个**密码**,之后再次按 **Tab** 选择 **< Ok>**,按下 **Enter** 键。
+
+![][11]
+
+接着会显示一些关于配置 MySQL 服务器的信息。再次按 **Tab** 选择 **< Ok>**,并按下 **Enter** 键:
+
+![][12]
+
+这里你需要选择一个**默认身份验证插件**。确保选中了 **Use Strong Password Encryption**,然后按 **Tab** 键和 **Enter** 键。
+
+就是这样!你已经成功地安装了MySQL。
+
+#### 验证你的 MySQL 安装
+
+要**验证** MySQL 是否已正确安装,使用下列命令:
+
+```
+sudo systemctl status mysql.service
+```
+
+这将展示一些关于MySQL服务的信息:
+
+![][13]
+
+你应该能在其中看到 **Active: active (running)**。如果没有,使用下列命令启动该**服务**:
+
+```
+sudo systemctl start mysql.service
+```
+
+#### 配置/保护 MySQL
+
+对于新安装的 MySQL,你应该运行它提供的安全相关的命令:
+
+```
+sudo mysql_secure_installation
+```
+
+这样做首先会询问你是否想使用**密码验证组件**(VALIDATE PASSWORD COMPONENT)。如果想使用,你需要选择一个最低密码强度(**0 – 低、1 – 中、2 – 高**),之后你将无法输入任何不符合所选规则的密码。如果你没有使用强密码的习惯(其实应该养成!),这会派上用场。如果你觉得它有帮助,就键入 **y** 或者 **Y** 并按下 **Enter** 键,然后为你的密码选择一个**强度等级**,并输入你想使用的密码。如果成功,你将继续**安全设置**过程;否则你需要重新输入密码。
+
+
+但是,如果你不想要此功能(我就不用),只需按 **Enter** 或**任何其他键**即可跳过。
+
+对于其他选项,我建议**启用**它们(对每一项输入 **y** 或者 **Y** 并按下 **Enter**)。它们依次是:**移除匿名用户、禁止 root 远程登录、移除 test 数据库及对它的访问、立即重新加载权限表**。
+#### 连接与断开 MySQL 服务器
+
+要运行 SQL 查询,你首先需要使用 MySQL 连接到服务器并进入 MySQL 提示符。执行此操作的命令是:
+
+```
+mysql -h host_name -u user -p
+```
+
+ * **-h** 用来指定**主机名**(如果服务器位于另一台机器上;若不是,直接省略)
+ * **-u** 指定登录的**用户名**
+ * **-p** 表示你要输入**密码**。
+
+
+
+虽然出于安全原因并不推荐,但你可以在命令中紧跟 **-p** 之后直接输入密码。例如,如果用户 **test_user** 的密码是 **1234**,而你想连接的是你正在使用的这台机器上的 MySQL,你可以这样:
+
+```
+mysql -u test_user -p1234
+```
+
+如果你成功输入了必要的参数,你将会看到 **MySQL shell 提示符**(**mysql >**)的欢迎信息:
+
+![][14]
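+
+如果你以后想在 Python 程序里做同样的事情,下面是一个最小的连接示意。它假设你已经用 `pip3 install mysql-connector-python` 安装了官方连接器,其中的主机名、用户名和密码也都是假设值:
+
+```
+# 最小示意:用 Python 连接 MySQL(假设已安装 mysql-connector-python)
+import mysql.connector
+
+conn = mysql.connector.connect(
+    host="localhost",   # 假设:服务器在本机
+    user="test_user",   # 假设的用户名
+    password="1234",    # 假设的密码(仅作演示,别在代码里硬编码密码)
+)
+cursor = conn.cursor()
+cursor.execute("SELECT VERSION()")  # 查询服务器版本
+print(cursor.fetchone()[0])
+cursor.close()
+conn.close()                        # 相当于命令行中的 QUIT
+```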
+
+要断开与服务器的连接并离开 mysql 提示符,输入:
+
+```
+QUIT
+```
+
+输入 **quit**(MySQL 不区分大小写)或者 **\q** 也可以。按下 **Enter** 退出。
+
+你也可以用一条简单的命令输出**版本**信息:
+
+```
+sudo mysqladmin -u root version -p
+```
+
+如果你想看 **选项列表**,使用:
+
+```
+mysql --help
+```
+
+#### 卸载 MySQL
+
+如果你决定使用更新的版本,或者只是想停止使用 MySQL,可以将其卸载。
+
+首先,关闭服务:
+
+```
+sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
+```
+
+确保你备份了你的数据库,以防之后还想使用它们。你可以通过运行下列命令卸载 MySQL:
+
+```
+sudo apt purge mysql*
+```
+
+清理依赖:
+
+```
+sudo apt autoremove
+```
+
+**小结**
+
+在这篇文章中,我介绍了如何在 Ubuntu Linux 上**安装 MySQL**。如果这篇指南能帮助到那些正为此苦恼的用户和初学者,我会很高兴。
+
+如果你觉得这篇文章有用,请在评论里告诉我们。你用 MySQL 来做什么?我们期待你的任何反馈、感想和建议。感谢阅读,请放心大胆地尝试这个不可思议的工具吧!
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-mysql-ubuntu/
+
+作者:[Sergiu][a]
+选题:[lujun9972][b]
+译者:[arrowfeng](https://github.com/arrowfeng)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sergiu/
+[b]: https://github.com/lujun9972
+[1]: https://www.mysql.com/
+[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
+[4]: https://dev.mysql.com/downloads/repo/apt/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
+[7]: https://linuxhandbook.com/curl-command-examples/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
+[9]: https://itsfoss.com/change-password-ubuntu/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
From 8f7b1d2ebff57ce899fa0d02db53603a7eb0a4dd Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 20 Apr 2019 21:29:54 +0800
Subject: [PATCH 0057/1154] patch
---
.../tech/20170414 5 projects for Raspberry Pi at home.md | 6 +++---
... notes.md => 20180718 3 Emacs modes for taking notes.md} | 0
...15 How to create portable documents with CBZ and DjVu.md | 2 +-
...319 How to set up a homelab from hardware to firewall.md | 2 +-
...ng an open messenger client- Alternatives to WhatsApp.md | 2 +-
...ng started with Jaeger to build an Istio service mesh.md | 2 +-
.../20190321 How to use Spark SQL- A hands-on tutorial.md | 2 +-
... 12 open source tools for natural language processing.md | 2 +-
.../tech/20190326 How to use NetBSD on a Raspberry Pi.md | 2 +-
.../20190329 How to submit a bug report with Bugzilla.md | 2 +-
sources/tech/20190401 Build and host a website with Git.md | 2 +-
.../tech/20190402 Manage your daily schedule with Git.md | 2 +-
sources/tech/20190403 Use Git as the backend for chat.md | 2 +-
sources/tech/20190405 File sharing with Git.md | 2 +-
sources/tech/20190406 Run a server with Git.md | 2 +-
sources/tech/20190407 Manage multimedia files with Git.md | 2 +-
...e to building DevOps pipelines with open source tools.md | 2 +-
...08 Getting started with Python-s cryptography library.md | 2 +-
sources/tech/20190409 5 Linux rookie mistakes.md | 2 +-
sources/tech/20190409 5 open source mobile apps.md | 2 +-
...0410 How to enable serverless computing in Kubernetes.md | 2 +-
sources/tech/20190411 Be your own certificate authority.md | 2 +-
...411 How do you contribute to open source without code.md | 2 +-
...d- 3 dimensions of leadership in an open organization.md | 2 +-
.../20190411 Testing Small Scale Scrum in the real world.md | 2 +-
.../tech/20190412 How libraries are adopting open source.md | 2 +-
...icense for Chef, ethics in open source, and more news.md | 2 +-
...15 Getting started with Mercurial for version control.md | 2 +-
.../tech/20190416 Detecting malaria with deep learning.md | 2 +-
29 files changed, 30 insertions(+), 30 deletions(-)
rename sources/tech/{20190718 3 Emacs modes for taking notes.md => 20180718 3 Emacs modes for taking notes.md} (100%)
diff --git a/sources/tech/20170414 5 projects for Raspberry Pi at home.md b/sources/tech/20170414 5 projects for Raspberry Pi at home.md
index 37c9fde3db..69aeaf32ac 100644
--- a/sources/tech/20170414 5 projects for Raspberry Pi at home.md
+++ b/sources/tech/20170414 5 projects for Raspberry Pi at home.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (5 projects for Raspberry Pi at home)
[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
-[#]: author: (Ben Nuttall (Community Moderator) )
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
5 projects for Raspberry Pi at home
======
@@ -100,14 +100,14 @@ Let us know in the comments.
via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
-作者:[Ben Nuttall (Community Moderator)][a]
+作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-[a]:
+[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
[2]: https://www.raspberrypi.org/
diff --git a/sources/tech/20190718 3 Emacs modes for taking notes.md b/sources/tech/20180718 3 Emacs modes for taking notes.md
similarity index 100%
rename from sources/tech/20190718 3 Emacs modes for taking notes.md
rename to sources/tech/20180718 3 Emacs modes for taking notes.md
diff --git a/sources/tech/20190315 How to create portable documents with CBZ and DjVu.md b/sources/tech/20190315 How to create portable documents with CBZ and DjVu.md
index 70f292e827..1700970688 100644
--- a/sources/tech/20190315 How to create portable documents with CBZ and DjVu.md
+++ b/sources/tech/20190315 How to create portable documents with CBZ and DjVu.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to create portable documents with CBZ and DjVu)
[#]: via: (https://opensource.com/article/19/3/comic-book-archive-djvu)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to create portable documents with CBZ and DjVu
======
diff --git a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
index 28a50d8a43..d8bb34395b 100644
--- a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
+++ b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to set up a homelab from hardware to firewall)
[#]: via: (https://opensource.com/article/19/3/home-lab)
-[#]: author: (Michael Zamot (Red Hat) https://opensource.com/users/mzamot)
+[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
How to set up a homelab from hardware to firewall
======
diff --git a/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md
index cb590455a5..5f940e9b0b 100644
--- a/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md
+++ b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Choosing an open messenger client: Alternatives to WhatsApp)
[#]: via: (https://opensource.com/article/19/3/open-messenger-client)
-[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Choosing an open messenger client: Alternatives to WhatsApp
======
diff --git a/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md
index d8366df720..c4200355e4 100644
--- a/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md
+++ b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Getting started with Jaeger to build an Istio service mesh)
[#]: via: (https://opensource.com/article/19/3/getting-started-jaeger)
-[#]: author: (Daniel Oh (Red Hat) https://opensource.com/users/daniel-oh)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
Getting started with Jaeger to build an Istio service mesh
======
diff --git a/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md
index fee3a8cc4c..0e4be0aa01 100644
--- a/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md
+++ b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to use Spark SQL: A hands-on tutorial)
[#]: via: (https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial)
-[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
+[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar)
How to use Spark SQL: A hands-on tutorial
======
diff --git a/sources/tech/20190322 12 open source tools for natural language processing.md b/sources/tech/20190322 12 open source tools for natural language processing.md
index 99031acf68..9d2822926f 100644
--- a/sources/tech/20190322 12 open source tools for natural language processing.md
+++ b/sources/tech/20190322 12 open source tools for natural language processing.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
-[#]: author: (Dan Barker (Community Moderator) https://opensource.com/users/barkerd427)
+[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
12 open source tools for natural language processing
======
diff --git a/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md
index e3bd5f5e26..37c14fec39 100644
--- a/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md
+++ b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to use NetBSD on a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/netbsd-raspberry-pi)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to use NetBSD on a Raspberry Pi
======
diff --git a/sources/tech/20190329 How to submit a bug report with Bugzilla.md b/sources/tech/20190329 How to submit a bug report with Bugzilla.md
index e1fe583d96..ee778410e7 100644
--- a/sources/tech/20190329 How to submit a bug report with Bugzilla.md
+++ b/sources/tech/20190329 How to submit a bug report with Bugzilla.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to submit a bug report with Bugzilla)
[#]: via: (https://opensource.com/article/19/3/bug-reporting)
-[#]: author: (David Both (Community Moderator) https://opensource.com/users/dboth)
+[#]: author: (David Both https://opensource.com/users/dboth)
How to submit a bug report with Bugzilla
======
diff --git a/sources/tech/20190401 Build and host a website with Git.md b/sources/tech/20190401 Build and host a website with Git.md
index 0878047c1d..32a07d3490 100644
--- a/sources/tech/20190401 Build and host a website with Git.md
+++ b/sources/tech/20190401 Build and host a website with Git.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Build and host a website with Git)
[#]: via: (https://opensource.com/article/19/4/building-hosting-website-git)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Build and host a website with Git
======
diff --git a/sources/tech/20190402 Manage your daily schedule with Git.md b/sources/tech/20190402 Manage your daily schedule with Git.md
index 5d3b7a195e..8f5d7d89bb 100644
--- a/sources/tech/20190402 Manage your daily schedule with Git.md
+++ b/sources/tech/20190402 Manage your daily schedule with Git.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Manage your daily schedule with Git)
[#]: via: (https://opensource.com/article/19/4/calendar-git)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Manage your daily schedule with Git
======
diff --git a/sources/tech/20190403 Use Git as the backend for chat.md b/sources/tech/20190403 Use Git as the backend for chat.md
index 2a7ac6d28a..e564bbc6e7 100644
--- a/sources/tech/20190403 Use Git as the backend for chat.md
+++ b/sources/tech/20190403 Use Git as the backend for chat.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Use Git as the backend for chat)
[#]: via: (https://opensource.com/article/19/4/git-based-chat)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Use Git as the backend for chat
======
diff --git a/sources/tech/20190405 File sharing with Git.md b/sources/tech/20190405 File sharing with Git.md
index 6b51d11600..13f95b8287 100644
--- a/sources/tech/20190405 File sharing with Git.md
+++ b/sources/tech/20190405 File sharing with Git.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (File sharing with Git)
[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
File sharing with Git
======
diff --git a/sources/tech/20190406 Run a server with Git.md b/sources/tech/20190406 Run a server with Git.md
index 650d5672af..2d7749a465 100644
--- a/sources/tech/20190406 Run a server with Git.md
+++ b/sources/tech/20190406 Run a server with Git.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth)
Run a server with Git
======
diff --git a/sources/tech/20190407 Manage multimedia files with Git.md b/sources/tech/20190407 Manage multimedia files with Git.md
index 81bc0d02ca..340c356aa9 100644
--- a/sources/tech/20190407 Manage multimedia files with Git.md
+++ b/sources/tech/20190407 Manage multimedia files with Git.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
-[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Manage multimedia files with Git
======
diff --git a/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md
index e5f772e8ca..2110c17606 100644
--- a/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md
+++ b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (A beginner's guide to building DevOps pipelines with open source tools)
[#]: via: (https://opensource.com/article/19/4/devops-pipeline)
-[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)
+[#]: author: (Bryant Son https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)
A beginner's guide to building DevOps pipelines with open source tools
======
diff --git a/sources/tech/20190408 Getting started with Python-s cryptography library.md b/sources/tech/20190408 Getting started with Python-s cryptography library.md
index 63eab6104f..9ed9211adf 100644
--- a/sources/tech/20190408 Getting started with Python-s cryptography library.md
+++ b/sources/tech/20190408 Getting started with Python-s cryptography library.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Getting started with Python's cryptography library)
[#]: via: (https://opensource.com/article/19/4/cryptography-python)
-[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Getting started with Python's cryptography library
======
diff --git a/sources/tech/20190409 5 Linux rookie mistakes.md b/sources/tech/20190409 5 Linux rookie mistakes.md
index ae7a0a2969..2e2c25a9cf 100644
--- a/sources/tech/20190409 5 Linux rookie mistakes.md
+++ b/sources/tech/20190409 5 Linux rookie mistakes.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
-[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
+[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
5 Linux rookie mistakes
======
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
index 15378c29b8..e4ca791149 100644
--- a/sources/tech/20190409 5 open source mobile apps.md
+++ b/sources/tech/20190409 5 open source mobile apps.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (5 open source mobile apps)
[#]: via: (https://opensource.com/article/19/4/mobile-apps)
-[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
5 open source mobile apps
======
diff --git a/sources/tech/20190410 How to enable serverless computing in Kubernetes.md b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md
index 52b75df6e2..75e5a5868d 100644
--- a/sources/tech/20190410 How to enable serverless computing in Kubernetes.md
+++ b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to enable serverless computing in Kubernetes)
[#]: via: (https://opensource.com/article/19/4/enabling-serverless-kubernetes)
-[#]: author: (Daniel Oh (Red Hat, Community Moderator) https://opensource.com/users/daniel-oh/users/daniel-oh)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh/users/daniel-oh)
How to enable serverless computing in Kubernetes
======
diff --git a/sources/tech/20190411 Be your own certificate authority.md b/sources/tech/20190411 Be your own certificate authority.md
index de35385097..f6ea26aba4 100644
--- a/sources/tech/20190411 Be your own certificate authority.md
+++ b/sources/tech/20190411 Be your own certificate authority.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Be your own certificate authority)
[#]: via: (https://opensource.com/article/19/4/certificate-authority)
-[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez/users/elenajon123)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123)
Be your own certificate authority
======
diff --git a/sources/tech/20190411 How do you contribute to open source without code.md b/sources/tech/20190411 How do you contribute to open source without code.md
index 0b04f7e87d..659fd9064e 100644
--- a/sources/tech/20190411 How do you contribute to open source without code.md
+++ b/sources/tech/20190411 How do you contribute to open source without code.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How do you contribute to open source without code?)
[#]: via: (https://opensource.com/article/19/4/contribute-without-code)
-[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)
How do you contribute to open source without code?
======
diff --git a/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md
index 4efcdd17a7..890b934ef1 100644
--- a/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md
+++ b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Managed, enabled, empowered: 3 dimensions of leadership in an open organization)
[#]: via: (https://opensource.com/open-organization/19/4/managed-enabled-empowered)
-[#]: author: (Heidi Hess von Ludewig (Red Hat) https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)
+[#]: author: (Heidi Hess von Ludewig https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)
Managed, enabled, empowered: 3 dimensions of leadership in an open organization
======
diff --git a/sources/tech/20190411 Testing Small Scale Scrum in the real world.md b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md
index c39a787482..0e4016435e 100644
--- a/sources/tech/20190411 Testing Small Scale Scrum in the real world.md
+++ b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Testing Small Scale Scrum in the real world)
[#]: via: (https://opensource.com/article/19/4/next-steps-small-scale-scrum)
-[#]: author: (Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat) https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)
+[#]: author: (Agnieszka Gancarczyk Leigh Griffin https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)
Testing Small Scale Scrum in the real world
======
diff --git a/sources/tech/20190412 How libraries are adopting open source.md b/sources/tech/20190412 How libraries are adopting open source.md
index 481c317ead..2a8c8806e5 100644
--- a/sources/tech/20190412 How libraries are adopting open source.md
+++ b/sources/tech/20190412 How libraries are adopting open source.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How libraries are adopting open source)
[#]: via: (https://opensource.com/article/19/4/software-libraries)
-[#]: author: (Don Watkins (Community Moderator) https://opensource.com/users/don-watkins)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
How libraries are adopting open source
======
diff --git a/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
index 5bc9aaf92f..f33d614f86 100644
--- a/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
+++ b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
[#]: via: (https://opensource.com/article/15/4/news-april-15)
-[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
+[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
Blender short film, new license for Chef, ethics in open source, and more news
======
diff --git a/sources/tech/20190415 Getting started with Mercurial for version control.md b/sources/tech/20190415 Getting started with Mercurial for version control.md
index 10812affed..a682517b5f 100644
--- a/sources/tech/20190415 Getting started with Mercurial for version control.md
+++ b/sources/tech/20190415 Getting started with Mercurial for version control.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Getting started with Mercurial for version control)
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
-[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Getting started with Mercurial for version control
======
diff --git a/sources/tech/20190416 Detecting malaria with deep learning.md b/sources/tech/20190416 Detecting malaria with deep learning.md
index 77df4a561b..9e6ea2ba01 100644
--- a/sources/tech/20190416 Detecting malaria with deep learning.md
+++ b/sources/tech/20190416 Detecting malaria with deep learning.md
@@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Detecting malaria with deep learning)
[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
-[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
+[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar)
Detecting malaria with deep learning
======
From 494af0bc70600ba45857f3a9534592ea5781fc62 Mon Sep 17 00:00:00 2001
From: jdh8383 <4565726+jdh8383@users.noreply.github.com>
Date: Sat, 20 Apr 2019 21:46:04 +0800
Subject: [PATCH 0058/1154] Update 20171226 The shell scripting trap.md
---
sources/tech/20171226 The shell scripting trap.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/sources/tech/20171226 The shell scripting trap.md
index f91620ce98..72a94dc441 100644
--- a/sources/tech/20171226 The shell scripting trap.md
+++ b/sources/tech/20171226 The shell scripting trap.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 3292a83f42b23948e6992b1d97fadac0535fcefe Mon Sep 17 00:00:00 2001
From: jdh8383 <4565726+jdh8383@users.noreply.github.com>
Date: Sat, 20 Apr 2019 22:39:15 +0800
Subject: [PATCH 0059/1154] Update 20171226 The shell scripting trap.md
---
sources/tech/20171226 The shell scripting trap.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/sources/tech/20171226 The shell scripting trap.md
index 72a94dc441..fe78d5f47a 100644
--- a/sources/tech/20171226 The shell scripting trap.md
+++ b/sources/tech/20171226 The shell scripting trap.md
@@ -8,19 +8,21 @@
[#]: author: (Martin Tournoij https://arp242.net/)
The shell scripting trap
+Shell 脚本的陷阱
======
Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:
-
+Shell 脚本很棒,你可以非常轻轻地写出有用的东西来。甚至是下面这个傻瓜式的命令:
```
# Official way of naming Go-related things:
+# 用含有 Go 的词汇起名字:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```
Takes several lines of code and a lot more brainpower in many programming languages. For example in Ruby:
-
+如果用其他编程语言,就需要花费更多的脑力用多行代码实现,比如用 Ruby 的话:
```
puts(Dir['/usr/share/dict/*-english'].map do |f|
File.open(f)
From e1b8d7ca8cf01c2be0826832526798d938734071 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=BB=98=E5=B3=A5?=
<24203166+fuzheng1998@users.noreply.github.com>
Date: Sun, 21 Apr 2019 08:52:05 +0800
Subject: [PATCH 0060/1154] Update 20190409 5 open source mobile apps.md
---
sources/tech/20190409 5 open source mobile apps.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md
index ef570bc07b..875f79947a 100644
--- a/sources/tech/20190409 5 open source mobile apps.md
+++ b/sources/tech/20190409 5 open source mobile apps.md
@@ -1,7 +1,5 @@
-tranlator:(fuzheng1998)
-
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (fuzheng1998 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From a4c01b2d4a1388b0288265c30ae69428de8b2f9c Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 21 Apr 2019 09:52:19 +0800
Subject: [PATCH 0061/1154] PRF:20190407 Fixing Ubuntu Freezing at Boot Time.md
@Raverstern
---
...407 Fixing Ubuntu Freezing at Boot Time.md | 98 +++++++++----------
1 file changed, 49 insertions(+), 49 deletions(-)
diff --git a/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md b/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
index eed2f478ff..08f91944e1 100644
--- a/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
+++ b/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (Raverstern)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
@@ -10,9 +10,9 @@
解决 Ubuntu 在启动时冻结的问题
======
-_**本文将向您一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其他情况下也理应可用。**_
+> 本文将向你一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其它情况下也理应可用。
-不久前我买了台[宏碁掠夺者][1](此为[广告联盟][2]链接)笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。
+不久前我买了台[宏碁掠夺者][1]笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。
我即便不打游戏也选择这台电竞笔记本电脑的原因,就是为了 [NVIDIA 的显卡][4]。宏碁掠夺者 Helios 300 上搭载了一块 [NVIDIA Geforce][5] GTX 1050Ti 显卡。
@@ -20,35 +20,35 @@ NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It’s FOSS
所以当我决定搞一台专门的设备来测试 Linux 发行版时,我选择了带有 NVIDIA 显卡的笔记本电脑。
-这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个的安装过程舒适,方便,快捷。
+这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个的安装过程舒适、方便、快捷。
随后我启动了 [Ubuntu][7]。那熟悉的紫色界面展现了出来,然后我就发现它卡在那儿了。鼠标一动不动,我也输入不了任何东西,然后除了长按电源键强制关机以外我啥事儿都做不了。
然后再次尝试启动,结果一模一样。整个系统就一直卡在那个紫色界面,随后的登录界面也出不来。
-这听起来很耳熟吧?下面就让我来告诉您如何解决这个 Ubuntu 在启动过程中冻结的问题。
+这听起来很耳熟吧?下面就让我来告诉你如何解决这个 Ubuntu 在启动过程中冻结的问题。
-要不您考虑考虑抛弃 Ubuntu?
+> 如果你用的不是 Ubuntu
-请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mint,elementary OS 等等。关于这点我已经在 Zorin OS 上确认过。
+> 请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mint、elementary OS 等等。关于这点我已经在 Zorin OS 上确认过。
### 解决 Ubuntu 启动中由 NVIDIA 驱动引起的冻结问题
![][8]
-我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为您所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。
+我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为你所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。
事不宜迟,让我们马上来看看如何解决这个问题。
#### 步骤 1:编辑 Grub
-在启动系统的过程中,请您在如下图所示的 Grub 界面上停下。如果您没看到这个界面,在启动电脑时请按住 Shift 键。
+在启动系统的过程中,请你在如下图所示的 Grub 界面上停下。如果你没看到这个界面,在启动电脑时请按住 `Shift` 键。
-在这个界面上,按“E”键进入编辑模式。
+在这个界面上,按 `E` 键进入编辑模式。
![按“E”按键][10]
-您应该看到一些如下图所示的代码。此刻您应关注于以 Linux 开头的那一行。
+你应该看到一些如下图所示的代码。此刻你应关注于以 “linux” 开头的那一行。
![前往 Linux 开头的那一行][11]
@@ -56,97 +56,97 @@ NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It’s FOSS
回忆一下,我们的问题出在 NVIDIA 显卡驱动上,是开源版 NVIDIA 驱动的不适配导致了我们的问题。所以此处我们能做的就是禁用这些驱动。
-此刻,您有多种方式可以禁用这些驱动。我最喜欢的方式是通过 nomodeset 来禁用所有显卡的驱动。
+此刻,你有多种方式可以禁用这些驱动。我最喜欢的方式是通过 `nomodeset` 来禁用所有显卡的驱动。
-请把下列文本添加到以 Linux 开头的那一行的末尾。此处您应该可以正常输入。请确保您把这段文本加到了行末。
+请把下列文本添加到以 “linux” 开头的那一行的末尾。此处你应该可以正常输入。请确保你把这段文本加到了行末。
```
-nomodeset
+ nomodeset
```
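作为示意(并非原文内容;内核版本与 UUID 均为虚构,实际内容因系统而异),加上 `nomodeset` 之后,以 “linux” 开头的那一行大致形如:

```
# 示例行(内核版本与 UUID 均为虚构,仅演示 nomodeset 所在的位置)
linux /boot/vmlinuz-4.18.0-15-generic root=UUID=1234-abcd ro quiet splash $vt_handoff nomodeset
```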
-现在您屏幕上的显示应如下图所示:
+现在你屏幕上的显示应如下图所示:
![通过向内核添加 nomodeset 来禁用显卡驱动][12]
-按 Ctrl+X 或 F10 保存并退出。下次您就将以修改后的内核参数来启动。
+按 `Ctrl+X` 或 `F10` 保存并退出。下次你就将以修改后的内核参数来启动。
-对以上操作的解释(点击展开)
+> 对以上操作的解释
-所以我们究竟做了些啥?那个 nomodeset 又是个什么玩意儿?让我来向您简单地解释一下。
+> 所以我们究竟做了些啥?那个 `nomodeset` 又是个什么玩意儿?让我来向你简单地解释一下。
-通常来说,显卡是在 X 或者是其他显示服务开始执行后才被启用的,也就是在您登录系统并看到图形界面以后。
+> 通常来说,显卡是在 X 或者是其他显示服务器开始执行后才被启用的,也就是在你登录系统并看到图形界面以后。
-但最近,视频模式的设置被移植进了内核。这么做的众多优点之一就是能您看到一个漂亮且高清的启动画面。
+> 但近来,视频模式的设置被移进了内核。这么做的众多优点之一就是能让你看到一个漂亮且高清的启动画面。
-若您往内核中加入 nomodeset 参数,它就会指示内核在显示服务启动后才加载显卡驱动。
+> 若你往内核中加入 `nomodeset` 参数,它就会指示内核在显示服务启动后才加载显卡驱动。
-换句话说,您在此时禁止视频驱动的加载,由此产生的冲突也会随之消失。您在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。
+> 换句话说,你在此时禁止视频驱动的加载,由此产生的冲突也会随之消失。你在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。
-#### 步骤 3:更新您的系统并安装 NVIDIA 专有驱动
+#### 步骤 3:更新你的系统并安装 NVIDIA 专有驱动
-别因为现在可以登录系统了就过早地高兴起来。您之前所做的只是临时措施,在下次启动的时候,您的系统依旧会尝试加载 Nouveau 驱动而因此冻结。
+别因为现在可以登录系统了就过早地高兴起来。你之前所做的只是临时措施,在下次启动的时候,你的系统依旧会尝试加载 Nouveau 驱动而因此冻结。
-这是否意味着您将不得不在 Grub 界面上不断地编辑内核?可喜可贺,答案是否定的。
+这是否意味着你将不得不在 Grub 界面上不断地编辑内核?可喜可贺,答案是否定的。
-您可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后,Ubuntu 将不会在启动过程中冻结。
+你可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后,Ubuntu 将不会在启动过程中冻结。
-我假设这是您第一次登录到一个新安装的系统。这意味着在做其他事情之前您必须先[更新 Ubuntu][14]。通过 Ubuntu 的 Ctrl+Alt+T [系统快捷键][15]打开一个终端,并输入以下命令:
+我假设这是你第一次登录到一个新安装的系统。这意味着在做其他事情之前你必须先[更新 Ubuntu][14]。通过 Ubuntu 的 `Ctrl+Alt+T` [系统快捷键][15]打开一个终端,并输入以下命令:
```
sudo apt update && sudo apt upgrade -y
```
-在上述命令执行完以后,您可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前您需要先重启一下您的系统。在您重启时,您还是需要按我们之前做的那样修改内核参数。
+在上述命令执行完以后,你可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前你需要先重启一下你的系统。在你重启时,你还是需要按我们之前做的那样修改内核参数。
-当您的系统已经更新和重启完毕,按下 Windows 键打开一个菜单栏,并搜索“软件与更新”(Software & Updates)。
+当你的系统已经更新和重启完毕,按下 `Windows` 键打开一个菜单栏,并搜索“软件与更新Software & Updates”。
![点击“软件与更新”(Software & Updates)][16]
-然后切换到“额外驱动”(Additional Drivers)标签页,并等待数秒。然后您就能看到可供系统使用的专有驱动了。在这个列表上您应该可以找到 NVIDIA。
+然后切换到“额外驱动Additional Drivers”标签页,并等待数秒。然后你就能看到可供系统使用的专有驱动了。在这个列表上你应该可以找到 NVIDIA。
-选择专有驱动并点击“应用更改”(Apply Changes)。
+选择专有驱动并点击“应用更改Apply Changes”。
![NVIDIA 驱动安装中][17]
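顺带一提(此为补充示意,并非原文步骤;假设系统中带有 ubuntu-drivers 工具),同样的事情也可以在命令行中完成:

```
# 列出检测到的硬件及推荐的驱动
ubuntu-drivers devices
# 自动安装推荐的专有驱动(假设性用法,请以系统文档为准)
sudo ubuntu-drivers autoinstall
```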
-新驱动的安装会费点时间。若您的系统启用了 UEFI 安全启动,您将被要求设置一个密码。_您可以将其设置为任何容易记住的密码_。它的用处我将在步骤 4 中说明。
+新驱动的安装会费点时间。若你的系统启用了 UEFI 安全启动,你将被要求设置一个密码。*你可以将其设置为任何容易记住的密码*。它的用处我将在步骤 4 中说明。
-![您可能需要设置一个安全启动密码][18]
+![你可能需要设置一个安全启动密码][18]
-安装完成后,您会被要求重启系统以令之前的更改生效。
+安装完成后,你会被要求重启系统以令之前的更改生效。
-![在新驱动安装好后重启您的系统][19]
+![在新驱动安装好后重启你的系统][19]
#### 步骤 4:处理 MOK(仅针对启用了 UEFI 安全启动的设备)
-如果您之前被要求设置安全启动密码,此刻您会看到一个蓝色界面,上面写着“MOK management”。这是个复杂的概念,我试着长话短说。
+如果你之前被要求设置安全启动密码,此刻你会看到一个蓝色界面,上面写着 “MOK management”。这是个复杂的概念,我试着长话短说。
-对 MOK([设备所有者密码][20])的要求是因为安全启动的功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于您安装了一个新模块(也就是那额外的驱动),或者您对内核模块做了修改,您的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
+对 MOK([设备所有者密码][20])的要求是因为安全启动的功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于你安装了一个新模块(也就是那个额外的驱动),或者你对内核模块做了修改,你的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
-因此,您可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是您做的),或者您也可以简单粗暴地[禁用安全启动][21]。
+因此,你可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是你做的),或者你也可以简单粗暴地[禁用安全启动][21]。
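+
+作为补充示意(并非原文步骤;假设系统中已安装 mokutil,具体请以发行版文档为准),也可以用 mokutil 查看或关闭安全启动的签名校验:
+
+```
+# 查看当前安全启动状态
+mokutil --sb-state
+# 关闭 Shim 层的签名校验(重启时需输入此前设置的密码确认)
+sudo mokutil --disable-validation
+```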
-现在你对[安全启动和 MOK ][22]有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
+现在你对[安全启动和 MOK][22] 有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
-如果您选择“继续启动”,您的系统将有很大概率如往常一样启动,并且您啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
+如果你选择“继续启动”,你的系统将有很大概率如往常一样启动,并且你啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
-这就是为什么,您应该**选择注册 MOK **。
+这就是为什么,你应该“选择注册 MOK”。
![][23]
-它会在下一个页面让您点击“继续”,然后要您输入一串密码。请输入在上一步中,在安装额外驱动时设置的密码。
+它会在下一个页面让你点击“继续”,然后要你输入一串密码。请输入在上一步中,在安装额外驱动时设置的密码。
-别担心!
+> 别担心!
-如果您错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”,不必惊慌。您的主要目的是能够成功启动系统,而通过禁用 Nouveau 显卡驱动,您已经成功地实现了这一点。
+> 如果你错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”,不必惊慌。你的主要目的是能够成功启动系统,而通过禁用 Nouveau 显卡驱动,你已经成功地实现了这一点。
-最坏的情况也不过就是您的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。您可以之后的任何时间安装 NVIDIA 显卡驱动。您的首要任务是启动系统。
+> 最坏的情况也不过就是你的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。你可以在之后的任何时间安装 NVIDIA 显卡驱动。你的首要任务是启动系统。
#### 步骤 5:享受安装了专有 NVIDIA 驱动的 Linux 系统
-当新驱动被安装好后,您需要再次重启系统。别担心!目前的情况应该已经好起来了,并且您不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。
+当新驱动被安装好后,你需要再次重启系统。别担心!目前的情况应该已经好起来了,并且你不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。
-我希望本教程帮助您解决了 Ubuntu 系统在启动中冻结的问题,并让您能够成功启动 Ubuntu 系统。
+我希望本教程帮助你解决了 Ubuntu 系统在启动中冻结的问题,并让你能够成功启动 Ubuntu 系统。
-如果您有任何问题或建议,请在下方评论区给我留言。
+如果你有任何问题或建议,请在下方评论区给我留言。
--------------------------------------------------------------------------------
@@ -155,7 +155,7 @@ via: https://itsfoss.com/fix-ubuntu-freezing/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[Raverstern](https://github.com/Raverstern)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 6fd35aab7af8f1d5e5d087601a031a7556f33229 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 21 Apr 2019 09:53:20 +0800
Subject: [PATCH 0062/1154] PUB:20190407 Fixing Ubuntu Freezing at Boot Time.md
@Raverstern https://linux.cn/article-10756-1.html
---
.../20190407 Fixing Ubuntu Freezing at Boot Time.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190407 Fixing Ubuntu Freezing at Boot Time.md (99%)
diff --git a/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md b/published/20190407 Fixing Ubuntu Freezing at Boot Time.md
similarity index 99%
rename from translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
rename to published/20190407 Fixing Ubuntu Freezing at Boot Time.md
index 08f91944e1..b19c1a1a42 100644
--- a/translated/tech/20190407 Fixing Ubuntu Freezing at Boot Time.md
+++ b/published/20190407 Fixing Ubuntu Freezing at Boot Time.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (Raverstern)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10756-1.html)
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
From 01020263858ca105177148c3c49269ba416350fa Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 21 Apr 2019 10:51:32 +0800
Subject: [PATCH 0063/1154] PRF:20190403 5 useful open source log analysis
tools.md
@MjSeven
---
...5 useful open source log analysis tools.md | 42 +++++++++----------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/translated/tech/20190403 5 useful open source log analysis tools.md b/translated/tech/20190403 5 useful open source log analysis tools.md
index cc80f12590..9a86daff8a 100644
--- a/translated/tech/20190403 5 useful open source log analysis tools.md
+++ b/translated/tech/20190403 5 useful open source log analysis tools.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 useful open source log analysis tools)
@@ -9,56 +9,56 @@
5 个有用的开源日志分析工具
======
-监控网络活动既重要又繁琐,以下这些工具可以使它更容易。
+
+> 监控网络活动既重要又繁琐,以下这些工具可以使它更容易。
+
![People work on a computer server][1]
-监控网络活动是一项繁琐的工作,但有充分的理由这样做。例如,它允许你查找和调查工作站、连接到网络的设备和服务器上的可疑登录,同时确定管理员滥用了什么。你还可以跟踪软件安装和数据传输,以实时识别潜在问题,而不是在损坏发生后才进行跟踪。
+监控网络活动是一项繁琐的工作,但有充分的理由这样做。例如,它允许你查找和调查工作站和连接到网络的设备及服务器上的可疑登录,同时确定管理员滥用了什么。你还可以跟踪软件安装和数据传输,以实时识别潜在问题,而不是在损坏发生后才进行跟踪。
-这些日志还有助于使你的公司遵守适用于在欧盟范围内运营的任何实体的[通用数据保护条例][2](GFPR)。如果你的网站在欧盟可以浏览,那么你就有资格。
+这些日志还有助于使你的公司遵守适用于在欧盟范围内运营的任何实体的[通用数据保护条例][2](GFPR)。如果你的网站在欧盟可以浏览,那么你就有遵守的该条例的资格。
-日志记录,包括跟踪和分析,应该是任何监控基础设置中的一个基本过程。要从灾难中恢复 SQL Server 数据库,需要事务日志文件。此外,通过跟踪日志文件,DevOps 团队和数据库管理员(DBA)可以保持最佳的数据库性能,或者,在网络攻击的情况下找到未经授权活动的证据。因此,定期监视和分析系统日志非常重要。这是一种可靠的方式来重新创建导致出现任何问题的事件链。
+日志记录,包括跟踪和分析,应该是任何监控基础设施中的一个基本过程。要从灾难中恢复 SQL Server 数据库,需要事务日志文件。此外,通过跟踪日志文件,DevOps 团队和数据库管理员(DBA)可以保持最佳的数据库性能,又或者,在网络攻击的情况下找到未经授权活动的证据。因此,定期监视和分析系统日志非常重要。这是一种重新创建导致出现任何问题的事件链的可靠方式。
-现在有很多开源日志跟踪器和分析工具可供使用,这使得为活动日志选择合适的资源比你想象的更容易。免费和开源软件社区提供的日志设计适用于各种站点和操作系统。以下是五个我用过的最好的,它们并没有特别的顺序。
+现在有很多开源日志跟踪器和分析工具可供使用,这使得为活动日志选择合适的资源比你想象的更容易。自由和开源软件社区提供的日志设计适用于各种站点和操作系统。以下是五个我用过的最好的工具,它们并没有特别的顺序。
### Graylog
-[Graylog][3] 于 2011 年在德国启动,现在作为开源工具或商业解决方案提供。它被设计成一个集中式日志管理系统,接受来自不同服务器或端点的数据流,并允许你快速浏览或分析该信息。
+[Graylog][3] 于 2011 年在德国创立,现在作为开源工具或商业解决方案提供。它被设计成一个集中式日志管理系统,接受来自不同服务器或端点的数据流,并允许你快速浏览或分析该信息。
![Graylog screenshot][4]
-Graylog 在系统管理员中建立了良好的声誉,因为它易于扩展。大多数 Web 项目都是从小规模开始的,‘但它们可能指数级增长。Graylog 可以平衡后端服务网络中的负载,每天可以处理几 TB 的日志数据。
+Graylog 在系统管理员中有着良好的声誉,因为它易于扩展。大多数 Web 项目都是从小规模开始的,但它们可能指数级增长。Graylog 可以均衡后端服务网络中的负载,每天可以处理几 TB 的日志数据。
IT 管理员会发现 Graylog 的前端界面易于使用,而且功能强大。Graylog 是围绕仪表板的概念构建的,它允许你选择你认为最有价值的指标或数据源,并快速查看一段时间内的趋势。
-当发生安全或性能事件时,IT 管理员希望能够尽可能地将症状追根溯源。Graylog 的搜索功能使这变得容易。它有内置的容错功能,可运行多线程搜索,因此你可以同时分析多个潜在的威胁。
+当发生安全或性能事件时,IT 管理员希望能够尽可能地根据症状追根溯源。Graylog 的搜索功能使这变得容易。它有内置的容错功能,可运行多线程搜索,因此你可以同时分析多个潜在的威胁。
### Nagios
-[Nagios][5] 于 1999 年开始由一个开发人员开发,现在已经发展成为管理日志数据最可靠的开源工具之一。当前版本的 Nagios 可以与运行 Microsoft Windows, Linux 或 Unix 的服务器集成。
+[Nagios][5] 始于 1999 年,最初是由一个开发人员开发的,现在已经发展成为管理日志数据最可靠的开源工具之一。当前版本的 Nagios 可以与运行 Microsoft Windows、Linux 或 Unix 的服务器集成。
![Nagios Core][6]
-它的主要产品是日志服务器,旨在简化数据收集并使系统管理员更容易访问信息。Nagios 日志服务器引擎将实时捕获数据并将其提供给一个强大的搜索工具。通过内置的设置向导,可以轻松地与新端点或应用程序集成。
+它的主要产品是日志服务器,旨在简化数据收集并使系统管理员更容易访问信息。Nagios 日志服务器引擎将实时捕获数据,并将其提供给一个强大的搜索工具。通过内置的设置向导,可以轻松地与新端点或应用程序集成。
Nagios 最常用于需要监控其本地网络安全性的组织。它可以审核一系列与网络相关的事件,并帮助自动分发警报。如果满足特定条件,甚至可以将 Nagios 配置为运行预定义的脚本,从而允许你在人员介入之前解决问题。
-作为网络审核的一部分,Nagios 将根据日志数据来源的地理位置过滤日志数据。这意味着你可以使用映射技术构建全面的仪表板,以了解 Web 流量是如何流动的。
+作为网络审计的一部分,Nagios 将根据日志数据来源的地理位置过滤日志数据。这意味着你可以使用地图技术构建全面的仪表板,以了解 Web 流量是如何流动的。
-### Elastic Stack ("ELK Stack")
+### Elastic Stack (ELK Stack)
[Elastic Stack][7],通常称为 ELK Stack,是需要筛选大量数据并理解其日志系统的组织中最受欢迎的开源工具之一(这也是我个人的最爱)。
![ELK Stack][8]
-它的主要产品由三个独立的产品组成:Elasticsearch, Kibana 和 Logstash:
+它的主要产品由三个独立的产品组成:Elasticsearch、Kibana 和 Logstash:
- * 顾名思义, _**Elasticsearch**_ 旨在帮助用户使用多种查询语言和类型在数据集中找到匹配项。速度是它最大的优势。它可以扩展成由数百个服务器节点组成的集群,轻松处理 PB 级的数据。
+ * 顾名思义, Elasticsearch 旨在帮助用户使用多种查询语言和类型在数据集之中找到匹配项。速度是它最大的优势。它可以扩展成由数百个服务器节点组成的集群,轻松处理 PB 级的数据。
+ * Kibana 是一个可视化工具,与 Elasticsearch 一起工作,允许用户分析他们的数据并构建强大的报告。当你第一次在服务器集群上安装 Kibana 引擎时,你会看到一个显示着统计数据、图表甚至是动画的界面。
+ * ELK Stack 的最后一部分是 Logstash,它作为一个纯粹的服务端管道进入 Elasticsearch 数据库。你可以将 Logstash 与各种编程语言和 API 集成,这样你的网站和移动应用程序中的信息就可以直接送入强大的 Elastic Stack 搜索引擎中。
- * _**Kibana**_ 是一个可视化工具,与 Elasticsearch 一起工作,允许用户分析他们的数据并构建强大的报告。当你第一次在服务器集群上安装 Kibana 引擎时,你将访问一个显示统计数据、图表甚至是动画的界面。
-
- * ELK Stack 的最后一部分是 _**Logstash**_ ,它作为一个纯粹的服务端管道进入 Elasticsearch 数据库。你可以将 Logstash 与各种编程语言和 API 集成,这样你的网站和移动应用程序中的信息就可以直接提供给强大的 Elastic Stalk 搜索引擎中。
-
-ELK Stack 的一个独特功能是,它允许你监视构建在 WordPress 开源安装上的应用程序。与[跟踪管理员和 PHP 日志][9]的大多数开箱即用的安全审计日志工具相比,ELK Stack 可以筛选 Web 服务器和数据库日志。
+ELK Stack 的一个独特功能是,它允许你监视构建在 WordPress 开源网站上的应用程序。与[跟踪管理日志和 PHP 日志][9]的大多数开箱即用的安全审计日志工具相比,ELK Stack 可以筛选 Web 服务器和数据库日志。
糟糕的日志跟踪和数据库管理是导致网站性能不佳的最常见原因之一。没有定期检查、优化和清空数据库日志,不仅会降低站点的运行速度,还可能导致其完全崩溃。因此,ELK Stack 对于每个 WordPress 开发人员的工具包来说都是一个优秀的工具。
@@ -97,7 +97,7 @@ via: https://opensource.com/article/19/4/log-analysis-tools
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 51e278eeea79e5137708bd3619cf3f4bf82d5095 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 21 Apr 2019 10:52:20 +0800
Subject: [PATCH 0064/1154] PUB:20190403 5 useful open source log analysis
tools.md
@MjSeven https://linux.cn/article-10757-1.html
---
.../20190403 5 useful open source log analysis tools.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190403 5 useful open source log analysis tools.md (99%)
diff --git a/translated/tech/20190403 5 useful open source log analysis tools.md b/published/20190403 5 useful open source log analysis tools.md
similarity index 99%
rename from translated/tech/20190403 5 useful open source log analysis tools.md
rename to published/20190403 5 useful open source log analysis tools.md
index 9a86daff8a..4a87e64f90 100644
--- a/translated/tech/20190403 5 useful open source log analysis tools.md
+++ b/published/20190403 5 useful open source log analysis tools.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10757-1.html)
[#]: subject: (5 useful open source log analysis tools)
[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
From 990214ae1e320a6f8b8af6b566b73e184a61912a Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Sun, 21 Apr 2019 15:08:08 +0800
Subject: [PATCH 0065/1154] PRF:20190403 5 useful open source log analysis
tools
---
published/20190403 5 useful open source log analysis tools.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/published/20190403 5 useful open source log analysis tools.md b/published/20190403 5 useful open source log analysis tools.md
index 4a87e64f90..13618cda41 100644
--- a/published/20190403 5 useful open source log analysis tools.md
+++ b/published/20190403 5 useful open source log analysis tools.md
@@ -16,7 +16,7 @@
监控网络活动是一项繁琐的工作,但有充分的理由这样做。例如,它允许你查找和调查工作站和连接到网络的设备及服务器上的可疑登录,同时确定管理员滥用了什么。你还可以跟踪软件安装和数据传输,以实时识别潜在问题,而不是在损坏发生后才进行跟踪。
-这些日志还有助于使你的公司遵守适用于在欧盟范围内运营的任何实体的[通用数据保护条例][2](GFPR)。如果你的网站在欧盟可以浏览,那么你就有遵守的该条例的资格。
+这些日志还有助于使你的公司遵守适用于在欧盟范围内运营的任何实体的[通用数据保护条例][2](GDPR)。如果你的网站在欧盟可以浏览,那么你就需要遵守该条例。
日志记录,包括跟踪和分析,应该是任何监控基础设施中的一个基本过程。要从灾难中恢复 SQL Server 数据库,需要事务日志文件。此外,通过跟踪日志文件,DevOps 团队和数据库管理员(DBA)可以保持最佳的数据库性能,又或者,在网络攻击的情况下找到未经授权活动的证据。因此,定期监视和分析系统日志非常重要。这是一种重新创建导致出现任何问题的事件链的可靠方式。
From d5722e28a8ca937474cc5fba23c05d7bd0edf30f Mon Sep 17 00:00:00 2001
From: sanfusu <34563541+sanfusu@users.noreply.github.com>
Date: Sun, 21 Apr 2019 15:39:50 +0800
Subject: [PATCH 0066/1154] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md
index 9e85b82f2c..56f8b8d8c1 100644
--- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md
+++ b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (sanfusu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 05bc043bd0eb675f8bf4dec2e22e457b91b6ed68 Mon Sep 17 00:00:00 2001
From: jdh8383 <4565726+jdh8383@users.noreply.github.com>
Date: Sun, 21 Apr 2019 22:14:47 +0800
Subject: [PATCH 0067/1154] Update 20171226 The shell scripting trap.md
---
sources/tech/20171226 The shell scripting trap.md | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/sources/tech/20171226 The shell scripting trap.md
index fe78d5f47a..32bfff26cc 100644
--- a/sources/tech/20171226 The shell scripting trap.md
+++ b/sources/tech/20171226 The shell scripting trap.md
@@ -13,7 +13,7 @@ Shell 脚本的陷阱
Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:
-Shell 脚本很棒,你可以非常轻轻地写出有用的东西来。甚至是下面这个傻瓜式的命令:
+Shell 脚本很棒,你可以非常轻松地写出有用的东西来。甚至是下面这个傻瓜式的命令:
```
# Official way of naming Go-related things:
# 用含有 Go 的词汇起名字:
@@ -32,8 +32,10 @@ end.flatten.sample.chomp)
```
The Ruby version isn’t that long, or even especially complicated. But the shell script version was so simple that I didn’t even need to actually test it to make sure it is correct, whereas I did have to test the Ruby version to ensure I didn’t make a mistake. It’s also twice as long and looks a lot more dense.
+Ruby 版本的代码虽然不是那么长,也没有那么复杂。但是 Shell 版是如此简单,我甚至不用实际测试就可以确保它是正确的。而 Ruby 版的我就没法确定它不会出错了,必须得测试一下。而且它要长一倍,看起来也更复杂。
This is why people use shell scripts: it's so easy to make something useful. Here is another example:
+这就是人们使用 Shell 脚本的原因,它简单却实用。下面是另一个例子:
```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
@@ -43,11 +45,14 @@ curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
```
This gets a list of all Dutch municipalities. I actually wrote this as a quick one-shot script to populate a database years ago, but it still works fine today, and it took me a minimum of effort to make it. Doing this in e.g. Ruby would take a lot more effort.
+这个脚本可以从维基百科上获取荷兰基层政权的列表。几年前我写了这个临时的脚本来快速生成一个数据库,到现在它仍然可以正常运行,当时写它并没有花费我多少精力。要使用 Ruby 完成同样的功能则会麻烦得多。
But there's a downside: as your script grows, it becomes increasingly harder to maintain, yet you don't really want to rewrite it in something else, as you've already spent so much time on the shell script version.
+现在来说说 Shell 的缺点吧。随着代码量的增加,你的脚本会变得越来越难维护,但你也不会想用别的语言来重写一遍,因为你已经在这个 Shell 版上花费了很多时间。
This is what I call ‘the shell script trap’, which is a special case of the [sunk cost fallacy][1].
+这就是我所说的“Shell 脚本陷阱”,它是[沉没成本谬误][1]的一种特例。
And many scripts do grow beyond their original intended size, and often you will spend a lot more time than you should on “fixing that one bug”, or “adding just one small feature”. Rinse, repeat.
If you had written it in Python or Ruby or another similar language from the start, you would have spent some more time writing the original version, but would have spent much less time maintaining it, while almost certainly having fewer bugs.
From adc00b0b008a251e929b088b4f16feefbe6ea1fe Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 22 Apr 2019 08:54:01 +0800
Subject: [PATCH 0068/1154] translated
---
...0415 Troubleshooting slow WiFi on Linux.md | 39 -------------------
...0415 Troubleshooting slow WiFi on Linux.md | 33 ++++++++++++++++
2 files changed, 33 insertions(+), 39 deletions(-)
delete mode 100644 sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
create mode 100644 translated/tech/20190415 Troubleshooting slow WiFi on Linux.md
diff --git a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md b/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
deleted file mode 100644
index 6c3db30f25..0000000000
--- a/sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
+++ /dev/null
@@ -1,39 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Troubleshooting slow WiFi on Linux)
-[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
-[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
-
-Troubleshooting slow WiFi on Linux
-======
-
-I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
-
-Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
-
-Read more at: [OpenSource.com][2]
-
-I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
-
-Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
-
-Read more at: [OpenSource.com][2]
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
-
-作者:[OpenSource.com][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.linux.com/USERS/OPENSOURCECOM
-[b]: https://github.com/lujun9972
-[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
-[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
diff --git a/translated/tech/20190415 Troubleshooting slow WiFi on Linux.md b/translated/tech/20190415 Troubleshooting slow WiFi on Linux.md
new file mode 100644
index 0000000000..ee78f64d14
--- /dev/null
+++ b/translated/tech/20190415 Troubleshooting slow WiFi on Linux.md
@@ -0,0 +1,33 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Troubleshooting slow WiFi on Linux)
+[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
+[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
+
+排查 Linux 上的 WiFi 慢速问题
+======
+
+我对 [Linux 系统][1]上的硬件问题进行诊断并不陌生。虽然我过去几年大部分工作都涉及虚拟化,但我仍然喜欢蹲在桌子下,摸索设备和内存模块。好吧,除开“蹲在桌子下”的部分。但这些都不意味着持久而神秘的 bug 不令人沮丧。我最近遇到了我的 Ubuntu 18.04 工作站上的一个 bug,这个 bug 几个月来一直没有解决。
+
+在这里,我将分享我遇到的问题和我尝试过的许多方法。即使你可能永远不会遇到我的具体问题,但故障排查的过程可能会有所帮助。此外,你会对我在这些无用的线索上浪费的时间和精力感到好笑。
+
+更多阅读: [OpenSource.com][2]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
+
+作者:[OpenSource.com][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/USERS/OPENSOURCECOM
+[b]: https://github.com/lujun9972
+[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
+[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
From 963f6e788b1497431e8f2825654cca760bf9b332 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 22 Apr 2019 09:02:34 +0800
Subject: [PATCH 0069/1154] translating
---
... Linux Foundation Training Courses Sale - Discount Coupon.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
index 04c1feb5ba..73222de0b6 100644
--- a/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
+++ b/sources/tech/20190416 Linux Foundation Training Courses Sale - Discount Coupon.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 084fe7ffd4641b88f8f402e8b294506667792787 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 10:10:14 +0800
Subject: [PATCH 0070/1154] PRF:20190402 Using Square Brackets in Bash- Part
2.md
@HankChow
---
...2 Using Square Brackets in Bash- Part 2.md | 53 +++++++++++--------
1 file changed, 30 insertions(+), 23 deletions(-)
diff --git a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
index 70652f894c..b782595c61 100644
--- a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
+++ b/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
@@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 2)
@@ -14,8 +14,6 @@
> 我们继续来看方括号的用法,它们甚至还可以在 Bash 当中作为一个命令使用。
-[Creative Commons Zero][2]
-
欢迎回到我们的方括号专题。在[前一篇文章][3]当中,我们介绍了方括号在命令行中可以用于通配操作,如果你已经读过前一篇文章,就可以从这里继续了。
方括号还可以以一个命令的形式使用,就像这样:
@@ -26,7 +24,7 @@
上面这种 `[ ... ]` 的形式就可以看成是一个可执行的命令。要注意,方括号内部的内容 `"a" = "a"` 和方括号 `[`、`]` 之间是有空格隔开的。因为这里的方括号被视作一个命令,因此要用空格将命令和它的参数隔开。
-上面这个命令的含义是“判断字符串 `"a"` 和字符串 `"a"` 是否相同”,如果判断结果为真,那么 `[ ... ]` 就会以状态码status code 0 退出,否则以状态码 1 退出。在之前的文章中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
+上面这个命令的含义是“判断字符串 `"a"` 和字符串 `"a"` 是否相同”,如果判断结果为真,那么 `[ ... ]` 就会以状态码status code 0 退出,否则以状态码 1 退出。在[之前的文章][4]中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
分别执行
@@ -42,22 +40,22 @@ echo $?
echo $?
```
-这两段命令中,前者会输出 0(判断结果为真),后者则会输出 1(判断结果为假)。在 Bash 当中,如果一个命令的状态码是 0,表示这个命令正常执行完成并退出,而且其中没有出现错误,对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]`也同样遵循这样的规则。
+这两段命令中,前者会输出 0(判断结果为真),后者则会输出 1(判断结果为假)。在 Bash 当中,如果一个命令的状态码是 0,表示这个命令正常执行完成并退出,而且其中没有出现错误,对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]` 也同样遵循这样的规则。
因此,`[ ... ]` 很适合在 `if ... then`、`while` 或 `until` 这种在代码块结束前需要判断是否达到某个条件的结构中使用。
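下面是一个简单的示意(并非原文内容,仅为演示这类结构的假设性示例;其中用到的 `-lt` 运算符见下文列表):

```
# 假设性示例:在 while 循环中用 [ ... ] 作为循环条件
i=0
while [ $i -lt 3 ]; do
    echo "i = $i"
    i=$((i+1))
done
```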
对应使用的逻辑判断运算符也相当直观:
```
-[ STRING1 = STRING2 ] => checks to see if the strings are equal
-[ STRING1 != STRING2 ] => checks to see if the strings are not equal
-[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
-[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
-[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
-[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
-[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
-[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
-etc...
+[ STRING1 = STRING2 ] => 检查字符串是否相等
+[ STRING1 != STRING2 ] => 检查字符串是否不相等
+[ INTEGER1 -eq INTEGER2 ] => 检查整数 INTEGER1 是否等于 INTEGER2
+[ INTEGER1 -ge INTEGER2 ] => 检查整数 INTEGER1 是否大于等于 INTEGER2
+[ INTEGER1 -gt INTEGER2 ] => 检查整数 INTEGER1 是否大于 INTEGER2
+[ INTEGER1 -le INTEGER2 ] => 检查整数 INTEGER1 是否小于等于 INTEGER2
+[ INTEGER1 -lt INTEGER2 ] => 检查整数 INTEGER1 是否小于 INTEGER2
+[ INTEGER1 -ne INTEGER2 ] => 检查整数 INTEGER1 是否不等于 INTEGER2
+等等……
```
方括号的这种用法也可以很有 shell 风格,例如通过带上 `-f` 参数可以判断某个文件是否存在:
@@ -129,7 +127,16 @@ done
在下一篇文章中,我们会开始介绍圆括号 `()` 在 Linux 命令行中的用法,敬请关注!
+### 更多
+- [Linux 工具:点的含义][6]
+- [理解 Bash 中的尖括号][7]
+- [Bash 中尖括号的更多用法][8]
+- [Linux 中的 &][9]
+- [Bash 中的 & 符号和文件描述符][10]
+- [Bash 中的逻辑和(&)][4]
+- [浅析 Bash 中的 {花括号}][11]
+- [在 Bash 中使用[方括号] (一)][3]
--------------------------------------------------------------------------------
@@ -138,7 +145,7 @@ via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -146,13 +153,13 @@ via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
-[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
-[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
+[3]: https://linux.cn/article-10717-1.html
+[4]: https://linux.cn/article-10596-1.html
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
-[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
-[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
-[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
-[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
-[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
-[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
+[6]: https://linux.cn/article-10465-1.html
+[7]: https://linux.cn/article-10502-1.html
+[8]: https://linux.cn/article-10529-1.html
+[9]: https://linux.cn/article-10587-1.html
+[10]: https://linux.cn/article-10591-1.html
+[11]: https://linux.cn/article-10624-1.html
From f502d04c6ee2f657af6dcac1c6fdf870facfb938 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 10:10:54 +0800
Subject: [PATCH 0071/1154] PUB:20190402 Using Square Brackets in Bash- Part
2.md
@HankChow https://linux.cn/article-10761-1.html
---
.../20190402 Using Square Brackets in Bash- Part 2.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190402 Using Square Brackets in Bash- Part 2.md (98%)
diff --git a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md b/published/20190402 Using Square Brackets in Bash- Part 2.md
similarity index 98%
rename from translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
rename to published/20190402 Using Square Brackets in Bash- Part 2.md
index b782595c61..22ca8b3ee9 100644
--- a/translated/tech/20190402 Using Square Brackets in Bash- Part 2.md
+++ b/published/20190402 Using Square Brackets in Bash- Part 2.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10761-1.html)
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
From 9fb14d92426b45aa33780e74e206fb225540d686 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 10:57:46 +0800
Subject: [PATCH 0072/1154] PRF:20190328 How to run PostgreSQL on Kubernetes.md
@arrowfeng
---
...328 How to run PostgreSQL on Kubernetes.md | 58 ++++++++++---------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md b/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
index f29110281e..f45404efc8 100644
--- a/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
+++ b/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
@@ -1,60 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
-怎样在Kubernetes上运行PostgreSQL
+怎样在 Kubernetes 上运行 PostgreSQL
======
-创建统一管理的,具备灵活性的云原生生产部署来部署一个人性化的数据库即服务。
+
+> 创建统一管理的、具备灵活性的云原生生产部署,来部署一个个性化的数据库即服务(DBaaS)。
+
![cubes coming together to create a larger cube][1]
-通过在[Kubernetes][2]上运行[PostgreSQL][3]数据库,你能创建统一管理的,具备灵活性的云原生生产部署应用来部署一个个性化的数据库即服务为你的特定需求进行量身定制。
+通过在 [Kubernetes][2] 上运行 [PostgreSQL][3] 数据库,你能创建统一管理的、具备灵活性的云原生生产部署应用,来部署一个为你的特定需求量身定制的个性化的数据库即服务。
-对于Kubernetes,使用Operator允许你提供额外的上下文去[管理有状态应用][4]。当使用像PostgreSQL这样开源的数据库去执行包括配置,扩张,高可用和用户管理时,Operator也很有帮助。
+对于 Kubernetes,使用 Operator 允许你提供额外的上下文去[管理有状态应用][4]。当使用像 PostgreSQL 这样开源的数据库去执行包括配置、扩展、高可用和用户管理时,Operator 也很有帮助。
-让我们来探索如何在Kubernetes上启动并运行PostgreSQL。
+让我们来探索如何在 Kubernetes 上启动并运行 PostgreSQL。
### 安装 PostgreSQL Operator
-将PostgreSQL和Kubernetes结合使用的第一步是安装一个Operator。在针对Linux系统的Crunchy's [快速启动脚本][6]的帮助下,你可以在任意基于Kubernetes的环境下启动和运行开源的[Crunchy PostgreSQL Operator][5]。
+将 PostgreSQL 和 Kubernetes 结合使用的第一步是安装一个 Operator。在针对 Linux 系统的 Crunchy 的[快速启动脚本][6]的帮助下,你可以在任意基于 Kubernetes 的环境下启动和运行开源的 [Crunchy PostgreSQL Operator][5]。
快速启动脚本有一些必要前提:
- * [Wget][7]工具已安装。
- * [kubectl][8]工具已安装。
- * 一个[StorageClass][9]已经定义在你的Kubernetes中。
- * 拥有集群权限的可访问Kubernetes的用户账号。安装Operator的[RBAC][10]规则是必要的。
- * 拥有一个[namespace][11]去管理PostgreSQL Operator。
+ * [Wget][7] 工具已安装。
+ * [kubectl][8] 工具已安装。
+ * 在你的 Kubernetes 中已经定义了一个 [StorageClass][9]。
+ * 拥有集群权限的可访问 Kubernetes 的用户账号,以安装 Operator 的 [RBAC][10] 规则。
+ * 一个 PostgreSQL Operator 的 [命名空间][11]。
-
-
-执行这个脚本将提供给你一个默认的PostgreSQL Operator deployment,它假设你的[dynamic storage][12]和存储类的名字为**standard**。通过这个脚本允许用户自定义的值去覆盖这些默认值。
+执行这个脚本将提供给你一个默认的 PostgreSQL Operator 部署,其默认假设你采用 [动态存储][12]和一个名为 `standard` 的 StorageClass。这个脚本允许用户采用自定义的值去覆盖这些默认值。
通过下列命令,你能下载这个快速启动脚本并把它的权限设置为可执行:
+
```
wget
chmod +x ./quickstart.sh
```
然后你运行快速启动脚本:
+
```
./examples/quickstart.sh
```
-在脚本提示你相关的Kubernetes集群基本信息后,它将执行下列操作:
- * 下载Operator配置文件
- * 将 **$HOME/.pgouser** 这个文件设置为默认设置
- * 以Kubernetes [Deployment][13]部署Operator
- * 设置你的 **.bashrc** 文件包含Operator环境变量
- * 设置你的 **$HOME/.bash_completion** 文件为 **pgo bash_completion**文件
-
-在快速启动脚本的执行期间,你将会被提示在你的Kubernetes集群设置RBAC规则。在另一个终端,执行快速启动命令所提示你的命令。
+在脚本提示你相关的 Kubernetes 集群基本信息后,它将执行下列操作:
-一旦这个脚本执行完成,你将会得到关于打开一个端口转发到PostgreSQL Operator pod的信息。在另一个终端,执行端口转发;这将允许你开始对PostgreSQL Operator执行命令!尝试输入下列命令创建集群:
+ * 下载 Operator 配置文件
+ * 将 `$HOME/.pgouser` 这个文件设置为默认设置
+ * 以 Kubernetes [Deployment][13] 部署 Operator
+ * 设置你的 `.bashrc` 文件包含 Operator 环境变量
+ * 设置你的 `$HOME/.bash_completion` 文件为 `pgo bash_completion` 文件
+
+在快速启动脚本的执行期间,你将会被提示在你的 Kubernetes 集群设置 RBAC 规则。在另一个终端,执行快速启动命令所提示你的命令。
+
+一旦这个脚本执行完成,你将会得到提示设置一个端口以转发到 PostgreSQL Operator pod。在另一个终端,执行这个端口转发操作;这将允许你开始对 PostgreSQL Operator 执行命令!尝试输入下列命令创建集群:
```
pgo create cluster mynewcluster
@@ -66,10 +69,9 @@ pgo create cluster mynewcluster
pgo test mynewcluster
```
-现在,你能在Kubernetes环境下管理你的PostgreSQL数据库!你可以在[官方文档][14]找到非常全面的命令,包括扩容,高可用,备份等等。
+现在,你能在 Kubernetes 环境下管理你的 PostgreSQL 数据库了!你可以在[官方文档][14]找到非常全面的命令,包括扩容、高可用、备份等等。
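+
+例如(以下子命令仅为示意,是否可用及具体参数请以官方文档为准):
+
+```
+pgo scale mynewcluster    # 为集群增加一个副本
+pgo backup mynewcluster   # 为集群创建一次备份
+```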
-* * *
-这篇文章部分参考了该作者为了Crunchy博客而写的[在Kubernetes上开始运行PostgreSQL][15]
+这篇文章部分参考了该作者为 Crunchy 博客而写的[在 Kubernetes 上开始运行 PostgreSQL][15]。
--------------------------------------------------------------------------------
@@ -79,7 +81,7 @@ via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes
作者:[Jonathan S. Katz][a]
选题:[lujun9972][b]
译者:[arrowfeng](https://github.com/arrowfeng)
-校对:[校对ID](https://github.com/校对ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 801ecd16af9b7e1dbbc2de88e9cc543e5646188f Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 10:58:24 +0800
Subject: [PATCH 0073/1154] PUB:20190328 How to run PostgreSQL on Kubernetes.md
@arrowfeng https://linux.cn/article-10762-1.html
---
.../20190328 How to run PostgreSQL on Kubernetes.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190328 How to run PostgreSQL on Kubernetes.md (98%)
diff --git a/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md b/published/20190328 How to run PostgreSQL on Kubernetes.md
similarity index 98%
rename from translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
rename to published/20190328 How to run PostgreSQL on Kubernetes.md
index f45404efc8..e8c4ceb539 100644
--- a/translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
+++ b/published/20190328 How to run PostgreSQL on Kubernetes.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10762-1.html)
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
From b36258b9f273e12ec132d30a3781d11c51d735c1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:12:45 +0800
Subject: [PATCH 0074/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=20How?=
=?UTF-8?q?=20To=20Monitor=20Disk=20I/O=20Activity=20Using=20iotop=20And?=
=?UTF-8?q?=20iostat=20Commands=20In=20Linux=3F=20sources/tech/20190422=20?=
=?UTF-8?q?How=20To=20Monitor=20Disk=20I-O=20Activity=20Using=20iotop=20An?=
=?UTF-8?q?d=20iostat=20Commands=20In=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...sing iotop And iostat Commands In Linux.md | 341 ++++++++++++++++++
1 file changed, 341 insertions(+)
create mode 100644 sources/tech/20190422 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
diff --git a/sources/tech/20190422 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md b/sources/tech/20190422 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
new file mode 100644
index 0000000000..379aba5be9
--- /dev/null
+++ b/sources/tech/20190422 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
@@ -0,0 +1,341 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
+[#]: via: (https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
+======
+
+Do you know what tools we can use for troubleshooting or monitoring real-time disk activity in Linux?
+
+If **[Linux system performance][1]** slows down, we can use the **[top command][2]** to inspect it.
+
+It is used to check which processes are consuming the most resources on the server.
+
+It's a familiar tool for most Linux administrators and is widely used in the real world.
+
+If you don't see much difference in the process output, you still have other things you can check.
+
+I would advise you to check the `wa` (I/O wait) value in the top output, because server performance is often degraded by heavy read and write I/O on the hard disk.
+
+If it is high or fluctuating, that could be the cause, so we need to check the I/O activity on the hard drive.
+
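+As a quick illustration (a minimal sketch, not part of the original article), you can grab the CPU line from top in batch mode and read the `wa` field directly:
+
+```
+# Single batch-mode snapshot of top; the "wa" value is the percentage
+# of CPU time spent waiting for I/O to complete.
+$ top -bn1 | grep -i 'cpu(s)'
+```
+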
+We can monitor disk I/O statistics for all disks and file systems on a Linux system using the `iotop` and `iostat` commands.
+
+### What Is iotop?
+
+iotop is a top-like utility for displaying real-time disk activity.
+
+iotop watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.
+
+It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.
+
+Total DISK READ and Total DISK WRITE values represent total read and write bandwidth between processes and kernel threads on the one side and kernel block device subsystem on the other.
+
+Actual DISK READ and Actual DISK WRITE values represent corresponding bandwidths for actual disk I/O between kernel block device subsystem and underlying hardware (HDD, SSD, etc.).
+
+### How To Install iotop In Linux?
+
+We can easily install it with the help of a package manager, since the package is available in all Linux distribution repositories.
+
+For **`Fedora`** system, use **[DNF Command][3]** to install iotop.
+
+```
+$ sudo dnf install iotop
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install iotop.
+
+```
+$ sudo apt install iotop
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install iotop.
+
+```
+$ sudo pacman -S iotop
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install iotop.
+
+```
+$ sudo yum install iotop
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install iotop.
+
+```
+$ sudo zypper install iotop
+```
+
+### How To Monitor Disk I/O Activity/Statistics In Linux Using iotop Command?
+
+There are many options available in the iotop command to check various statistics about disk I/O.
+
+Run the iotop command without any arguments to see the current I/O usage of each process or thread.
+
+```
+# iotop
+```
+
+[![][9]![][9]][10]
+
+If you would like to check which processes are actually doing I/O, run the iotop command with the `-o` or `--only` option.
+
+```
+# iotop --only
+```
+
+[![][9]![][9]][11]
+
+**Details:**
+
+ * **`IO:`** It shows I/O utilization for each process, which includes disk and swap.
+ * **`SWAPIN:`** It shows only the swap usage of each process.
+
+
+
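+iotop can also run non-interactively, which is handy for capturing I/O activity into a log. As a minimal sketch (assuming your iotop build supports the batch options, which current versions do), the following takes three batch samples of only the processes doing I/O:
+
+```
+# iotop -b -o -n 3 > iotop.log    # -b batch mode, -o active processes only, -n 3 samples
+```
+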
+### What Is iostat?
+
+iostat is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
+
+The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
+
+The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks.
+
+All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics.
+
+On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured.
+
+The iostat command generates two types of reports, the CPU Utilization report and the Device Utilization report.
+
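+Each report can also be requested on its own: the `-c` option limits iostat to the CPU report, while `-d` limits it to the device report, as shown later in this article.
+
+```
+# iostat -c    # CPU utilization report only
+# iostat -d    # device utilization report only
+```
+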
+### How To Install iostat In Linux?
+
+The iostat tool is part of the sysstat package, so we can easily install it with the help of a package manager, since the package is available in every Linux distribution’s repository.
+
+For **`Fedora`** system, use **[DNF Command][3]** to install sysstat.
+
+```
+$ sudo dnf install sysstat
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install sysstat.
+
+```
+$ sudo apt install sysstat
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install sysstat.
+
+```
+$ sudo pacman -S sysstat
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install sysstat.
+
+```
+$ sudo yum install sysstat
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install sysstat.
+
+```
+$ sudo zypper install sysstat
+```
+
+### How To Monitor Disk I/O Activity/Statistics In Linux Using iostat Command?
+
+Many options are available in the iostat command to check various statistics about disk I/O and CPU.
+
+Run the iostat command without any arguments to see the complete statistics of the system.
+
+```
+# iostat
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.45 0.02 16.47 0.12 0.00 53.94
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.95 124.97 0.00 58420014 57507206 0
+sda 0.18 6.77 80.24 0.00 3115036 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-d` option to see I/O statistics for all the devices.
+
+```
+# iostat -d
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.95 124.97 0.00 58420030 57509090 0
+sda 0.18 6.77 80.24 0.00 3115292 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-p` option to see I/O statistics for all the devices and their partitions.
+
+```
+# iostat -p
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.42 0.02 16.45 0.12 0.00 53.99
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.94 124.96 0.00 58420062 57512278 0
+nvme0n1p1 6.40 124.46 118.36 0.00 57279753 54474898 0
+nvme0n1p2 0.27 2.47 6.60 0.00 1138069 3037380 0
+sda 0.18 6.77 80.23 0.00 3116060 36924764 0
+sda1 0.00 0.01 0.00 0.00 3224 0 0
+sda2 0.18 6.76 80.23 0.00 3111508 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-x` option to see detailed I/O statistics for all the devices.
+
+```
+# iostat -x
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.41 0.02 16.45 0.12 0.00 54.00
+
+Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
+nvme0n1 2.45 126.93 0.60 19.74 0.40 51.74 4.23 124.96 5.12 54.76 3.16 29.54 0.00 0.00 0.00 0.00 0.00 0.00 0.31 30.28
+sda 0.06 6.77 0.00 0.00 8.34 119.20 0.12 80.23 19.94 99.40 31.84 670.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13
+loop0 0.00 0.00 0.00 0.00 0.08 19.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+```
+
+Run the iostat command with the `-p [Device_Name]` option to see the I/O statistics of a particular device and its partitions.
+
+```
+# iostat -p [Device_Name]
+
+# iostat -p sda
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.38 0.02 16.43 0.12 0.00 54.05
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+sda 0.18 6.77 80.21 0.00 3117468 36924764 0
+sda2 0.18 6.76 80.21 0.00 3112916 36924764 0
+sda1 0.00 0.01 0.00 0.00 3224 0 0
+```
+
+Run the iostat command with the `-m` option to see I/O statistics in `MB` instead of `KB` for all the devices. By default it shows the output in KB.
+
+```
+# iostat -m
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.36 0.02 16.41 0.12 0.00 54.09
+
+Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
+nvme0n1 6.68 0.12 0.12 0.00 57050 56176 0
+sda 0.18 0.01 0.08 0.00 3045 36059 0
+loop0 0.00 0.00 0.00 0.00 2 0 0
+loop1 0.00 0.00 0.00 0.00 1 0 0
+loop2 0.00 0.00 0.00 0.00 1 0 0
+```
+
+To run the iostat command at a certain interval, use the following format. In this example, we are going to capture a total of two reports at five-second intervals.
+
+```
+# iostat [Interval] [Number Of Reports]
+
+# iostat 5 2
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.35 0.02 16.41 0.12 0.00 54.10
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.89 124.95 0.00 58420116 57525344 0
+sda 0.18 6.77 80.20 0.00 3118492 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 3.71 0.00 2.51 0.05 0.00 93.73
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 19.00 0.20 311.40 0.00 1 1557 0
+sda 0.20 25.60 0.00 0.00 128 0 0
+loop0 0.00 0.00 0.00 0.00 0 0 0
+loop1 0.00 0.00 0.00 0.00 0 0 0
+loop2 0.00 0.00 0.00 0.00 0 0 0
+```
+
+Run the iostat command with the `-N` option to see the LVM disk I/O statistics report.
+
+```
+# iostat -N
+
+Linux 4.15.0-47-generic (Ubuntu18.2daygeek.com) Thursday 18 April 2019 _x86_64_ (2 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 0.38 0.07 0.18 0.26 0.00 99.12
+
+Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 3.60 57.07 69.06 968729 1172340
+sdb 0.02 0.33 0.00 5680 0
+sdc 0.01 0.12 0.00 2108 0
+2g-2gvol1 0.00 0.07 0.00 1204 0
+```
+
+Run the nfsiostat command to see the I/O statistics for the Network File System (NFS).
+
+```
+# nfsiostat
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/category/monitoring-tools/
+[2]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
+[3]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[10]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-1.jpg
+[11]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-2.jpg
From 3133b517e8bf883735e74942f8510e7b48419890 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:12:59 +0800
Subject: [PATCH 0075/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190419=204=20?=
=?UTF-8?q?cool=20new=20projects=20to=20try=20in=20COPR=20for=20April=2020?=
=?UTF-8?q?19=20sources/tech/20190419=204=20cool=20new=20projects=20to=20t?=
=?UTF-8?q?ry=20in=20COPR=20for=20April=202019.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... projects to try in COPR for April 2019.md | 110 ++++++++++++++++++
1 file changed, 110 insertions(+)
create mode 100644 sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
diff --git a/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md b/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
new file mode 100644
index 0000000000..e8d7dd9f7c
--- /dev/null
+++ b/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
@@ -0,0 +1,110 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 cool new projects to try in COPR for April 2019)
+[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/)
+[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
+
+4 cool new projects to try in COPR for April 2019
+======
+
+![][1]
+
+COPR is a [collection][2] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
+
+Here’s a set of new and interesting projects in COPR.
+
+### Joplin
+
+[Joplin][3] is a note-taking and to-do app. Notes are written in the Markdown format, and organized by sorting them into various notebooks and using tags.
+Joplin can import notes from any Markdown source or from Evernote exports. In addition to the desktop app, there’s an Android version with the ability to synchronize notes between them using Nextcloud, Dropbox or other cloud services. Finally, there’s a browser extension for Chrome and Firefox to save web pages and screenshots.
+
+![][4]
+
+#### Installation instructions
+
+The [repo][5] currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands [with sudo][6]:
+
+```
+sudo dnf copr enable taw/joplin
+sudo dnf install joplin
+```
+
+### Fzy
+
+[Fzy][7] is a command-line utility for fuzzy string searching. It reads from standard input and sorts the lines based on what is most likely the sought-after text, then prints the selected line. In addition to the command line, fzy can also be used within vim. You can try fzy in this online [demo][8].
+
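+For example (just an illustration; any list of lines on standard input works), you can fuzzy-pick a file from the current directory tree and print the chosen path:
+
+```
+$ find . -type f | fzy    # type to filter, Enter prints the selection
+```
+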
+#### Installation instructions
+
+The [repo][9] currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:
+
+```
+sudo dnf copr enable lehrenfried/fzy
+sudo dnf install fzy
+```
+
+### Fondo
+
+Fondo is a program for browsing many photographs from the [unsplash.com][10] website. It has a simple interface that allows you to look for pictures of one of several themes, or all of them at once. You can then set a found picture as a wallpaper with a single click, or share it.
+
+![][11]
+
+
+
+#### Installation instructions
+
+The [repo][12] currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:
+
+```
+sudo dnf copr enable atim/fondo
+sudo dnf install fondo
+```
+
+### YACReader
+
+[YACReader][13] is a digital comic book reader that supports many comic and image formats, such as _cbz_, _cbr_, _pdf_ and others. YACReader keeps track of reading progress, and can download comics’ information from [Comic Vine][14]. It also comes with YACReader Library for organizing and browsing your comic book collection.
+
+![][15]
+
+
+
+#### Installation instructions
+
+The [repo][16] currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:
+
+```
+sudo dnf copr enable atim/yacreader
+sudo dnf install yacreader
+```
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/
+
+作者:[Dominik Turecek][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/dturecek/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
+[2]: https://copr.fedorainfracloud.org/
+[3]: https://joplin.cozic.net/
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/04/joplin.png
+[5]: https://copr.fedorainfracloud.org/coprs/taw/joplin/
+[6]: https://fedoramagazine.org/howto-use-sudo/
+[7]: https://github.com/jhawthorn/fzy
+[8]: https://jhawthorn.github.io/fzy-demo/
+[9]: https://copr.fedorainfracloud.org/coprs/lehrenfried/fzy/
+[10]: https://unsplash.com/
+[11]: https://fedoramagazine.org/wp-content/uploads/2019/04/fondo.png
+[12]: https://copr.fedorainfracloud.org/coprs/atim/fondo/
+[13]: https://www.yacreader.com/
+[14]: https://comicvine.gamespot.com/
+[15]: https://fedoramagazine.org/wp-content/uploads/2019/04/yacreader.png
+[16]: https://copr.fedorainfracloud.org/coprs/atim/yacreader/
From b20f610abbbf78f0acdf4a0a169e685355e70688 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:13:17 +0800
Subject: [PATCH 0076/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Mana?=
=?UTF-8?q?ging=20RAID=20arrays=20with=20mdadm=20sources/tech/20190417=20M?=
=?UTF-8?q?anaging=20RAID=20arrays=20with=20mdadm.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...0190417 Managing RAID arrays with mdadm.md | 99 +++++++++++++++++++
1 file changed, 99 insertions(+)
create mode 100644 sources/tech/20190417 Managing RAID arrays with mdadm.md
diff --git a/sources/tech/20190417 Managing RAID arrays with mdadm.md b/sources/tech/20190417 Managing RAID arrays with mdadm.md
new file mode 100644
index 0000000000..5f29608b8a
--- /dev/null
+++ b/sources/tech/20190417 Managing RAID arrays with mdadm.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Managing RAID arrays with mdadm)
+[#]: via: (https://fedoramagazine.org/managing-raid-arrays-with-mdadm/)
+[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
+
+Managing RAID arrays with mdadm
+======
+
+![][1]
+
+Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software [RAID][2] arrays on your Linux PC. This article outlines the basics you need to get started with it.
+
+The following five commands allow you to make use of mdadm’s most basic features:
+
+ 1. **Create a RAID array**:
+ `# mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1`
+ 2. **Assemble (and start) a RAID array**:
+ `# mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1`
+ 3. **Stop a RAID array**:
+ `# mdadm --stop /dev/md/test`
+ 4. **Delete a RAID array**:
+ `# mdadm --zero-superblock /dev/sda1 /dev/sdb1`
+ 5. **Check the status of all assembled RAID arrays**:
+ `# cat /proc/mdstat`
+
+
+
+#### Notes on features
+
+##### `mdadm --create`
+
+The _create_ command shown above includes the following four parameters in addition to the create parameter itself and the device names:
+
+ 1. **–homehost** :
+By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name _any_ is reserved and disables the _homehost_ restriction.
+ 2. **–metadata** :
+_mdadm_ reserves a small portion of each RAID device to store information about the RAID array itself. The _metadata_ parameter specifies the format and location of the information. The value _1.0_ indicates to use version-1 formatting and store the metadata at the end of the device.
+ 3. **–level** :
+The _level_ parameter specifies how the data should be distributed among the underlying devices. Level _1_ indicates each device should contain a complete copy of all the data. This level is also known as [disk mirroring][3].
+ 4. **–raid-devices** :
+The _raid-devices_ parameter specifies the number of devices that will be used to create the RAID array.
+
+
+
+By using _level=1_ (mirroring) in combination with _metadata=1.0_ (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs _read-only_ access to the underlying device before mdadm is available. For example, the [UEFI][4] firmware in a computer may need to read the bootloader from the [ESP][5] before mdadm is started.
+
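+For instance (a minimal sketch, assuming the member partition holds a filesystem your rescue environment understands and that /mnt is free), you could read one half of such a mirror directly:
+
+```
+# mount -o ro /dev/sda1 /mnt    # read-only, so the mirror halves stay in sync
+```
+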
+##### `mdadm --assemble`
+
+The _assemble_ command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:
+
+```
+# mdadm --assemble --run /dev/md/test /dev/sda1
+```
+
+#### Other important notes
+
+Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out of sync, and mdadm won’t know that they are out of sync. If you access a RAID1 array through a device that’s been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array, as demonstrated below:
+
+```
+# mdadm --zero-superblock /dev/sdb1
+# mdadm --assemble --run /dev/md/test /dev/sda1
+# mdadm /dev/md/test --add /dev/sdb1
+```
+
+These commands completely overwrite the contents of sdb1 with the contents of sda1.
+
+To specify any RAID arrays to automatically activate when your computer starts, create an _/etc/mdadm.conf_ configuration file.
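+
+A common way to seed that file (a sketch; the scan output will differ from system to system) is to append the details of the currently assembled arrays:
+
+```
+# mdadm --detail --scan >> /etc/mdadm.conf
+```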
+
+For the most up-to-date and detailed information, check the man pages:
+
+```
+$ man mdadm
+$ man mdadm.conf
+```
+
+The next article in this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation, which will continue running even if one of its hard drives suddenly stops working!
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
+
+作者:[Gregory Bartholomew][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/glb/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/mdadm-816x345.jpg
+[2]: https://en.wikipedia.org/wiki/RAID
+[3]: https://en.wikipedia.org/wiki/Disk_mirroring
+[4]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
+[5]: https://en.wikipedia.org/wiki/EFI_system_partition
From e40b2c8b79525454ad8f731d3b0fbf8a3e794b74 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:13:29 +0800
Subject: [PATCH 0077/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190416=20Move?=
=?UTF-8?q?=20Data=20to=20the=20Cloud=20with=20Azure=20Data=20Migration=20?=
=?UTF-8?q?sources/tech/20190416=20Move=20Data=20to=20the=20Cloud=20with?=
=?UTF-8?q?=20Azure=20Data=20Migration.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... to the Cloud with Azure Data Migration.md | 34 +++++++++++++++++++
1 file changed, 34 insertions(+)
create mode 100644 sources/tech/20190416 Move Data to the Cloud with Azure Data Migration.md
diff --git a/sources/tech/20190416 Move Data to the Cloud with Azure Data Migration.md b/sources/tech/20190416 Move Data to the Cloud with Azure Data Migration.md
new file mode 100644
index 0000000000..379c53cd97
--- /dev/null
+++ b/sources/tech/20190416 Move Data to the Cloud with Azure Data Migration.md
@@ -0,0 +1,34 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Move Data to the Cloud with Azure Data Migration)
+[#]: via: (https://www.linux.com/blog/move-data-cloud-azure-data-migration)
+[#]: author: (InfoWorld https://www.linux.com/users/infoworld)
+
+Move Data to the Cloud with Azure Data Migration
+======
+
+Despite more than a decade of cloud migration, there’s still a vast amount of data running on-premises. That’s not surprising since data migrations, even between similar systems, are complex, slow, and add risk to your day-to-day operations. Moving to the cloud adds additional management overhead, raising questions of network connectivity and bandwidth, as well as the variable costs associated with running cloud databases.
+
+Read more at: [InfoWorld][1]
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/move-data-cloud-azure-data-migration
+
+作者:[InfoWorld][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linux.com/users/infoworld
+[b]: https://github.com/lujun9972
+[1]: https://www.infoworld.com/article/3388312/move-data-to-the-cloud-with-azure-data-migration.html
From 90ae80e61740e6fd8b93c58d8fded36a1ff53bdc Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:13:41 +0800
Subject: [PATCH 0078/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190420=20New?=
=?UTF-8?q?=20Features=20Coming=20to=20Debian=2010=20Buster=20Release=20so?=
=?UTF-8?q?urces/tech/20190420=20New=20Features=20Coming=20to=20Debian=201?=
=?UTF-8?q?0=20Buster=20Release.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ures Coming to Debian 10 Buster Release.md | 147 ++++++++++++++++++
1 file changed, 147 insertions(+)
create mode 100644 sources/tech/20190420 New Features Coming to Debian 10 Buster Release.md
diff --git a/sources/tech/20190420 New Features Coming to Debian 10 Buster Release.md b/sources/tech/20190420 New Features Coming to Debian 10 Buster Release.md
new file mode 100644
index 0000000000..aa1ee9df46
--- /dev/null
+++ b/sources/tech/20190420 New Features Coming to Debian 10 Buster Release.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (New Features Coming to Debian 10 Buster Release)
+[#]: via: (https://itsfoss.com/new-features-coming-to-debian-10-buster-release/)
+[#]: author: (Shirish https://itsfoss.com/author/shirish/)
+
+New Features Coming to Debian 10 Buster Release
+======
+
+Debian 10 Buster is nearing its release. The first release candidate is already out and we should see the final release, hopefully, in a few weeks.
+
+If you are excited about this major new release, let me tell you what’s in it for you.
+
+### Debian 10 Buster Release Schedule
+
+There is no set release date for [Debian 10 Buster][1]. Why is that so? Unlike other distributions, [Debian][2] doesn’t do time-based releases. It instead focuses on fixing release-critical bugs. Release-critical bugs are bugs that involve either security issues ([CVEs][3]) or other critical problems that prevent Debian from releasing.
+
+Debian’s archive has three parts, called main, contrib and non-free. Of the three, Debian Developers and Release Managers are most concerned that main, the part whose packages form the bedrock of the distribution, is rock stable. So they make sure that there aren’t any major functional or security issues. Packages are also given priority values such as Essential, Required, Important, Standard, Optional and Extra. More on this in a later Debian article.
+
+This is necessary because Debian is used as a server in many different environments, and people have come to depend on it. Debian also looks at upgrade cycles to make sure nothing breaks, asking people to test upgrades and report back anything that breaks along the way.
+
+This commitment to stability is one of the [many reasons why I love to use Debian][4].
+
+### What’s new in Debian 10 Buster Release
+
+Here are a few visual and under the hood changes in the upcoming major release of Debian.
+
+#### New theme and wallpaper
+
+The Debian theme for Buster is called [FuturePrototype][5] and can be seen below:
+
+![Debian Buster FuturePrototype Theme][6]
+
+#### 1\. GNOME Desktop 3.30
+
+The GNOME desktop, which was version 3.22 in Debian Stretch, is updated to 3.30 in Buster. Some of the new packages included in this GNOME desktop release are gnome-todo and tracker (instead of tracker-gui), and a dependency on gstreamer1.0-packagekit, so there is automatic codec installation for playing movies etc. The big move has been porting all packages from libgtk2+ to libgtk3+.
+
+#### 2\. Linux Kernel 4.19.0-4
+
+Debian uses LTS kernel versions, so you can expect much better hardware support and a long, 5-year maintenance and support cycle from Debian. From kernel 4.9.0-3 we have come to 4.19.0-4.
+
+```
+$ uname -r
+4.19.0-4-amd64
+```
+
+#### 3\. OpenJDK 11.0
+
+For a long time Debian was stuck on OpenJDK 8.0. Now in Debian Buster we have moved to OpenJDK 11.0 and have a team which will take care of new versions.
+
+#### 4\. AppArmor Enabled by Default
+
+In Debian Buster, [AppArmor][7] will be enabled by default. While this is a good thing, system administrators will have to take care to enable the correct policies. This is only the first step, and a lot of profiles will probably need fixing before AppArmor is as useful for users as envisioned.
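+
+To see what AppArmor is doing on a running system (a quick sketch, assuming the AppArmor userspace utilities are installed), you can list the loaded profiles and their modes:
+
+```
+$ sudo aa-status
+```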
+
+#### 5\. Nodejs 10.15.2
+
+For a long time Debian had Nodejs 4.8 in the repo. In this cycle, Debian has moved to Nodejs 10.15.2. In fact, Debian Buster has many JavaScript libraries, such as yarnpkg (an npm alternative) and many others.
+
+Of course, you can [install the latest Nodejs in Debian][8] from the project’s repository, but it’s good to see a newer version in the Debian repository.
+
+#### 6\. NFtables replaces iptables
+
+Debian Buster provides nftables as a full replacement for iptables, which means better and easier syntax, better support for dual-stack IPv4/IPv6 firewalls and more.
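+
+As a small taste of the syntax (a minimal sketch, assuming the nftables userspace tools are installed; the table and chain names are just examples), the following creates a table with an input chain and then prints the ruleset:
+
+```
+# nft add table inet filter
+# nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
+# nft list ruleset
+```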
+
+#### 7\. Support for many ARM64 and ARMHF SBC boards
+
+There has been a constant stream of new SBC boards that Debian supports. The latest among these are pine64_plus and pinebook for ARM64, and Firefly-RK3288 and u-boot-rockchip for ARMHF, as well as the Odroid HC1/HC2 boards, the SolidRun Cubox-i Dual/Quad (1.5som) and SolidRun Cubox-i Dual/Quad (1.5som+emmc) boards, and the Cubietruck Plus. There is also support for Rock 64, Banana Pi M2 Berry, Pine A64 LTS Board and Olimex A64 Teres-1, as well as Raspberry Pi 1, Zero and Pi 3. Support will be out of the box for RISC-V systems as well.
+
+#### 8\. Python 2 is dead, long live Python 3
+
+Python 2 will be [deprecated][9] by python.org on January 1, 2020. While Debian does still have Python 2.7, efforts are underway to move all packages to Python 3 and then remove Python 2 from the repo. This may happen either at the Buster release or in a future point release, but it is imminent. So Python developers are encouraged to make their code bases compatible with Python 3. At the time of writing, both python2 and python3 are supported in Debian Buster.
+
+#### 9\. Mailman 3
+
+Mailman3 is finally available in Debian, and [Mailman][10] has been further subdivided into components. To install the whole stack, install mailman3-full to get all the components.
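+
+On a Buster system that would look like this (a sketch, assuming the package keeps the name mailman3-full in the final release):
+
+```
+$ sudo apt install mailman3-full
+```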
+
+#### 10\. Any existing PostgreSQL databases will need to be reindexed
+
+Due to updates in glibc locale data, the way information is sorted in text indexes will change, so it is beneficial to reindex the data so that no data corruption arises in the near future.
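+
+A sketch of one way to do this with the stock PostgreSQL client tools (assuming you run it as the postgres superuser after upgrading):
+
+```
+$ sudo -u postgres reindexdb --all
+```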
+
+#### 11\. Bash 5.0 by Default
+
+You have probably already heard about the [new features in Bash 5.0][11]; this version is already in Debian.
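+
+A quick way to confirm which version you are running:
+
+```
+$ echo $BASH_VERSION
+```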
+
+#### 12\. Debian implementing /usr/merge
+
+An excellent freedesktop [primer][12] on what /usr/merge brings has already been shared. A couple of things to note, though: while Debian would like to do the whole transition, there is a possibility that, due to unforeseen circumstances, some binaries may not be in a position to make the change. One point to note is that /var and /etc will be left alone, so people who are using containers or the cloud won’t have to worry too much. :)
+
+#### 13\. Secure-boot support
+
+With Buster RC1, Debian now has Secure Boot support. This means machines that have Secure Boot turned on should be able to install Debian easily. No need to disable or work around Secure Boot anymore. :)
+
+#### 14\. Calamares live installer for Debian Live images
+
+For the Debian Buster live images, Debian introduces the [Calamares installer][13] instead of the plain old debian-installer. While debian-installer has many more features than Calamares, for newbies Calamares provides a fresh, simpler way to install than debian-installer. Some screenshots from the installation process:
+
+![Calamares Partitioning Stage][14]
+
+As can be seen, it is pretty easy to install Debian with Calamares: there are only five stages to go through, and you can have Debian installed in no time.
+
+### Download Debian 10 Live Images (only for testing)
+
+Don’t use it on production machines just yet. Try it on a test machine or a virtual machine.
+
+You can get the Debian 64-bit and 32-bit images from the Debian Live [directory][15]: look in the 64-bit directory if you want the 64-bit image, and the 32-bit directory if you want the 32-bit one.
+
+[Debian 10 Buster Live Images][15]
+
+If you upgrade from the existing stable release and something breaks, see if the issue has been reported against the [upgrade-reports][16] pseudo-package using [reportbug][17]. If the bug has not been reported against the package, then report it and share as much information as you can.
+
+**In Conclusion**
+
+Thousands of packages have been updated, and it is virtually impossible to list them all, but I have tried to list some of the major changes that you can look for in Debian Buster. What do you think of it?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/new-features-coming-to-debian-10-buster-release/
+
+作者:[Shirish][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/shirish/
+[b]: https://github.com/lujun9972
+[1]: https://wiki.debian.org/DebianBuster
+[2]: https://www.debian.org/
+[3]: https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
+[4]: https://itsfoss.com/reasons-why-i-love-debian/
+[5]: https://wiki.debian.org/DebianArt/Themes/futurePrototype
+[6]: https://itsfoss.com/wp-content/uploads/2019/04/debian-buster-theme-800x450.png
+[7]: https://wiki.debian.org/AppArmor
+[8]: https://itsfoss.com/install-nodejs-ubuntu/
+[9]: https://www.python.org/dev/peps/pep-0373/
+[10]: https://www.list.org/
+[11]: https://itsfoss.com/bash-5-release/
+[12]: https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
+[13]: https://calamares.io/about/
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/calamares-partitioning-wizard.jpg?fit=800%2C538&ssl=1
+[15]: https://cdimage.debian.org/cdimage/weekly-live-builds/
+[16]: https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=upgrade-reports;dist=unstable
+[17]: https://itsfoss.com/bug-report-debian/
From d4714bb04b7678a769e810e39bcc77bf300ae88c Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:13:53 +0800
Subject: [PATCH 0079/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Ubun?=
=?UTF-8?q?tu=2019.04=20=E2=80=98Disco=20Dingo=E2=80=99=20Has=20Arrived:?=
=?UTF-8?q?=20Downloads=20Available=20Now!=20sources/tech/20190418=20Ubunt?=
=?UTF-8?q?u=2019.04=20=E2=80=98Disco=20Dingo-=20Has=20Arrived-=20Download?=
=?UTF-8?q?s=20Available=20Now.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ngo- Has Arrived- Downloads Available Now.md | 105 ++++++++++++++++++
1 file changed, 105 insertions(+)
create mode 100644 sources/tech/20190418 Ubuntu 19.04 ‘Disco Dingo- Has Arrived- Downloads Available Now.md
diff --git a/sources/tech/20190418 Ubuntu 19.04 ‘Disco Dingo- Has Arrived- Downloads Available Now.md b/sources/tech/20190418 Ubuntu 19.04 ‘Disco Dingo- Has Arrived- Downloads Available Now.md
new file mode 100644
index 0000000000..8098db45f2
--- /dev/null
+++ b/sources/tech/20190418 Ubuntu 19.04 ‘Disco Dingo- Has Arrived- Downloads Available Now.md
@@ -0,0 +1,105 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Ubuntu 19.04 ‘Disco Dingo’ Has Arrived: Downloads Available Now!)
+[#]: via: (https://itsfoss.com/ubuntu-19-04-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Ubuntu 19.04 ‘Disco Dingo’ Has Arrived: Downloads Available Now!
+======
+
+It’s time to disco! Why? Well, Ubuntu 19.04 ‘Disco Dingo’ is here and finally available to download. Although we are already aware of the [new features in Ubuntu 19.04][1], I will mention a few important things below and also point you to the official links to download it and get started.
+
+### Ubuntu 19.04: What You Need To Know
+
+Here are a few things you should know about Ubuntu 19.04 Disco Dingo release.
+
+#### Ubuntu 19.04 is not an LTS Release
+
+Unlike Ubuntu 18.04 LTS, this will not be [supported for 10 years][2]. Instead, the non-LTS 19.04 will be supported for **9 months until January 2020.**
+
+So, if you have a production environment, we would not recommend upgrading it right away. For example, if you have a server that runs on Ubuntu 18.04 LTS, it may not be a good idea to upgrade it to 19.04 just because it is an exciting release.
+
+However, users who want the latest and greatest installed on their machines can try it out.
+
+![][3]
+
+#### Ubuntu 19.04 is a sweet update for NVIDIA GPU Owners
+
+_Martin Wimpress_ (from Canonical) mentioned in the final release notes of Ubuntu MATE 19.04 (one of the Ubuntu flavors) on [GitHub][4] that Ubuntu 19.04 is a particularly big deal for NVIDIA GPU owners.
+
+In other words, while installing the proprietary graphics driver, it now selects the best driver compatible with your particular GPU model.
+
+#### Ubuntu 19.04 Features
+
+Even though we have already discussed the [best features of Ubuntu 19.04][1] Disco Dingo, it is worth mentioning that I’m excited about the desktop updates (GNOME 3.32) and the Linux kernel (5.0) that come as some of the major changes in this release.
+
+#### Upgrading from Ubuntu 18.10 to 19.04
+
+If you have Ubuntu 18.10 installed, you should upgrade, for obvious reasons: 18.10 will reach its end of life in July 2019, so we recommend you upgrade to 19.04.
+
+To do that, you can simply head to the “**Software and Updates**” settings and then navigate to the “**Updates**” tab.
+
+Now, change the option **Notify me of a new Ubuntu version** to “_For any new version_”.
+
+When you run the update manager, you should see that Ubuntu 19.04 is available.
+
+![][5]
+
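+If you prefer the terminal, a minimal sketch of the same upgrade (assuming the update-manager-core package is installed, and that `Prompt=normal` is set in /etc/update-manager/release-upgrades so that non-LTS releases are offered):
+
+```
+$ sudo apt update && sudo apt upgrade
+$ sudo do-release-upgrade
+```
+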
+#### Upgrading from Ubuntu 18.04 to 19.04
+
+It is not recommended to upgrade directly from 18.04 to 19.04, because you would have to upgrade the OS to 18.10 first and then proceed to 19.04.
+
+Instead, you can simply download the official ISO image of Ubuntu 19.04 and then re-install Ubuntu on your system.
+
+### Ubuntu 19.04: Downloads Available for all flavors
+
+As per the [release notes][6], Ubuntu 19.04 is available to download now. You can get the torrent or the ISO file on its official release download page.
+
+[Download Ubuntu 19.04][7]
+
+If you need a different desktop environment or need something specific, you should check out the official flavors of Ubuntu available:
+
+ * [Ubuntu MATE][8]
+ * [Kubuntu][9]
+ * [Lubuntu][10]
+ * [Ubuntu Budgie][11]
+ * [Ubuntu Studio][12]
+ * [Xubuntu][13]
+
+
+
+Some of the above-mentioned Ubuntu flavors haven’t put the 19.04 release on their download pages yet, but you can [still find the ISOs on Ubuntu’s release notes webpage][6]. Personally, I use Ubuntu with the GNOME desktop. You can choose whatever you like.
+
+**Wrapping Up**
+
+What do you think about Ubuntu 19.04 Disco Dingo? Are the new features exciting enough? Have you tried it yet? Let us know in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/ubuntu-19-04-release/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/ubuntu-19-04-release-features/
+[2]: https://itsfoss.com/ubuntu-18-04-ten-year-support/
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/ubuntu-19-04-Disco-Dingo-default-wallpaper.jpg?resize=800%2C450&ssl=1
+[4]: https://github.com/ubuntu-mate/ubuntu-mate.org/blob/master/blog/20190418-ubuntu-mate-disco-final-release.md
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ubuntu-19-04-upgrade-available.jpg?ssl=1
+[6]: https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes
+[7]: https://www.ubuntu.com/download/desktop
+[8]: https://ubuntu-mate.org/download/
+[9]: https://kubuntu.org/getkubuntu/
+[10]: https://lubuntu.me/cosmic-released/
+[11]: https://ubuntubudgie.org/downloads
+[12]: https://ubuntustudio.org/2019/04/ubuntu-studio-19-04-released/
+[13]: https://xubuntu.org/download/
From c0cc9f1bb216bccfb4198dfdc08ba3511135f6db Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:14:05 +0800
Subject: [PATCH 0080/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190419=20Buil?=
=?UTF-8?q?ding=20scalable=20social=20media=20sentiment=20analysis=20servi?=
=?UTF-8?q?ces=20in=20Python=20sources/tech/20190419=20Building=20scalable?=
=?UTF-8?q?=20social=20media=20sentiment=20analysis=20services=20in=20Pyth?=
=?UTF-8?q?on.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...a sentiment analysis services in Python.md | 294 ++++++++++++++++++
1 file changed, 294 insertions(+)
create mode 100644 sources/tech/20190419 Building scalable social media sentiment analysis services in Python.md
diff --git a/sources/tech/20190419 Building scalable social media sentiment analysis services in Python.md b/sources/tech/20190419 Building scalable social media sentiment analysis services in Python.md
new file mode 100644
index 0000000000..35321f1c9d
--- /dev/null
+++ b/sources/tech/20190419 Building scalable social media sentiment analysis services in Python.md
@@ -0,0 +1,294 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building scalable social media sentiment analysis services in Python)
+[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
+[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
+
+Building scalable social media sentiment analysis services in Python
+======
+Learn how you can use spaCy, vaderSentiment, Flask, and Python to add
+sentiment analysis capabilities to your work.
+![Tall building with windows][1]
+
+The [first part][2] of this series provided some background on how sentiment analysis works. Now let's investigate how to add these capabilities to your designs.
+
+### Exploring spaCy and vaderSentiment in Python
+
+#### Prerequisites
+
+ * A terminal shell
+ * Python language binaries (version 3.4+) in your shell
+ * The **pip** command for installing Python packages
+ * (optional) A [Python Virtualenv][3] to keep your work isolated from the system
+
+
+
+#### Configure your environment
+
+Before you begin writing code, you will need to set up the Python environment by installing the [spaCy][4] and [vaderSentiment][5] packages and downloading a language model to assist your analysis. Thankfully, most of this is relatively easy to do from the command line.
+
+In your shell, type the following command to install the spaCy and vaderSentiment packages:
+
+
+```
+pip install spacy vaderSentiment
+```
+
+After the command completes, install a language model that spaCy can use for text analysis. The following command will use the spaCy module to download and install the English language [model][6]:
+
+
+```
+python -m spacy download en_core_web_sm
+```
+
+With these libraries and models installed, you are now ready to begin coding.
+
+#### Do a simple text analysis
+
+Use the [Python interpreter interactive mode][7] to write some code that will analyze a single text fragment. Begin by starting the Python environment:
+
+
+```
+$ python
+Python 3.6.8 (default, Jan 31 2019, 09:38:34)
+[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>>
+```
+
+_(Your Python interpreter version print might look different than this.)_
+
+ 1. Import the necessary modules:
+
+```
+>>> import spacy
+>>> from vaderSentiment import vaderSentiment
+```
+
+ 2. Load the English language model from spaCy:
+
+```
+>>> english = spacy.load("en_core_web_sm")
+```
+
+ 3. Process a piece of text. This example shows a very simple sentence that we expect to return a slightly positive sentiment:
+
+```
+>>> result = english("I like to eat applesauce with sugar and cinnamon.")
+```
+
+ 4. Gather the sentences from the processed result. SpaCy has identified and processed the entities within the phrase; this step generates sentiment for each sentence (even though there is only one sentence in this example):
+
+```
+>>> sentences = [str(s) for s in result.sents]
+```
+
+ 5. Create an analyzer using vaderSentiment:
+
+```
+>>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()
+```
+
+ 6. Perform the sentiment analysis on the sentences:
+
+```
+>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
+```
+
+The sentiment variable now contains the polarity scores for the example sentence. Print out the value to see how it analyzed the sentence.
+
+
+```
+>>> print(sentiment)
+[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
+```
+
+What does this structure mean?
+
+On the surface, this is an array with a single dictionary object; had there been multiple sentences, there would be a dictionary for each one. There are four keys in the dictionary that correspond to different types of sentiment. The **neg** key represents negative sentiment, of which none has been reported in this text, as evidenced by the **0.0** value. The **neu** key represents neutral sentiment, which has gotten a fairly high score of **0.737** (with a maximum of **1.0**). The **pos** key represents positive sentiment, which has a moderate score of **0.263**. Last, the **compound** key represents an overall score for the text; this can range from negative to positive, with the value **0.3612** representing a sentiment more on the positive side.
+
+To see how these values might change, you can run a small experiment using the code you already entered. The following block demonstrates an evaluation of sentiment scores on a similar sentence.
+
+
+```
+>>> result = english("I love applesauce!")
+>>> sentences = [str(s) for s in result.sents]
+>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
+>>> print(sentiment)
+[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
+```
+
+You can see that by changing the example sentence to something overwhelmingly positive, the sentiment values have changed dramatically.
+
+### Building a sentiment analysis service
+
+Now that you have assembled the basic building blocks for doing sentiment analysis, let's turn that knowledge into a simple service.
+
+For this demonstration, you will create a [RESTful][8] HTTP server using the Python [Flask package][9]. This service will accept text data in English and return the sentiment analysis. Please note that this example service is for learning the technologies involved and not something to put into production.
+
+#### Prerequisites
+
+ * A terminal shell
+ * The Python language binaries (version 3.4+) in your shell.
+ * The **pip** command for installing Python packages
+ * The **curl** command
+ * A text editor
+ * (optional) A [Python Virtualenv][3] to keep your work isolated from the system
+
+
+
+#### Configure your environment
+
+This environment is nearly identical to the one in the previous section. The only difference is the addition of the Flask package to Python.
+
+ 1. Install the necessary dependencies:
+
+```
+pip install spacy vaderSentiment flask
+```
+
+ 2. Install the English language model for spaCy:
+
+```
+python -m spacy download en_core_web_sm
+```
+
+#### Create the application file
+
+Open your editor and create a file named **app.py**. Add the following contents to it _(don't worry, we will review every line)_ :
+
+
+```
+import flask
+import spacy
+import vaderSentiment.vaderSentiment as vader
+
+app = flask.Flask(__name__)
+analyzer = vader.SentimentIntensityAnalyzer()
+english = spacy.load("en_core_web_sm")
+
+def get_sentiments(text):
+    result = english(text)
+    sentences = [str(sent) for sent in result.sents]
+    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
+    return sentiments
+
+@app.route("/", methods=["POST", "GET"])
+def index():
+    if flask.request.method == "GET":
+        return "To access this service send a POST request to this URL with" \
+               " the text you want analyzed in the body."
+    body = flask.request.data.decode("utf-8")
+    sentiments = get_sentiments(body)
+    return flask.json.dumps(sentiments)
+```
+
+Although this is not an overly large source file, it is quite dense. Let's walk through the pieces of this application and describe what they are doing.
+
+
+```
+import flask
+import spacy
+import vaderSentiment.vaderSentiment as vader
+```
+
+The first three lines bring in the packages needed for performing the language analysis and the HTTP framework.
+
+
+```
+app = flask.Flask(__name__)
+analyzer = vader.SentimentIntensityAnalyzer()
+english = spacy.load("en_core_web_sm")
+```
+
+The next three lines create a few global variables. The first variable, **app** , is the main entry point that Flask uses for creating HTTP routes. The second variable, **analyzer** , is the same type used in the previous example, and it will be used to generate the sentiment scores. The last variable, **english** , is also the same type used in the previous example, and it will be used to annotate and tokenize the initial text input.
+
+You might be wondering why these variables have been declared globally. In the case of the **app** variable, this is standard procedure for many Flask applications. But, in the case of the **analyzer** and **english** variables, the decision to make them global is based on the load times associated with the classes involved. Although the load time might appear minor, when it's run in the context of an HTTP server, these delays can negatively impact performance.
+
+
+```
+def get_sentiments(text):
+    result = english(text)
+    sentences = [str(sent) for sent in result.sents]
+    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
+    return sentiments
+```
+
+The next piece is the heart of the service—a function for generating sentiment values from a string of text. You can see that the operations in this function correspond to the commands you ran in the Python interpreter earlier. Here they're wrapped in a function definition with the source **text** being passed in as the variable text and finally the **sentiments** variable returned to the caller.
+
+
+```
+@app.route("/", methods=["POST", "GET"])
+def index():
+    if flask.request.method == "GET":
+        return "To access this service send a POST request to this URL with" \
+               " the text you want analyzed in the body."
+    body = flask.request.data.decode("utf-8")
+    sentiments = get_sentiments(body)
+    return flask.json.dumps(sentiments)
+```
+
+The last function in the source file contains the logic that will instruct Flask how to configure the HTTP server for the service. It starts with a line that will associate an HTTP route **/** with the request methods **POST** and **GET**.
+
+After the function definition line, the **if** clause will detect if the request method is **GET**. If a user sends this request to the service, the following line will return a text message instructing how to access the server. This is largely included as a convenience to end users.
+
+The next line uses the **flask.request** object to acquire the body of the request, which should contain the text string to be processed. The **decode** function will convert the array of bytes into a usable, formatted string. The decoded text message is now passed to the **get_sentiments** function to generate the sentiment scores. Last, the scores are returned to the user through the HTTP framework.
+
+You should now save the file, if you have not done so already, and return to the shell.
+
+#### Run the sentiment service
+
+With everything in place, running the service is quite simple with Flask's built-in debugging server. To start the service, enter the following command from the same directory as your source file:
+
+
+```
+FLASK_APP=app.py flask run
+```
+
+You will now see some output from the server in your shell, and the server will be running. To test that the server is running, you will need to open a second shell and use the **curl** command.
+
+First, check to see that the instruction message is printed by entering this command:
+
+
+```
+curl http://localhost:5000
+```
+
+You should see the instruction message:
+
+
+```
+To access this service send a POST request to this URL with the text you want analyzed in the body.
+```
+
+Next, send a test message to see the sentiment analysis by running the following command:
+
+
+```
+curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"
+```
+
+The response you get from the server should be similar to the following:
+
+
+```
+[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]
+```
+
+Congratulations! You have now implemented a RESTful HTTP sentiment analysis service. You can find a link to a [reference implementation of this service and all the code from this article on GitHub][10].
+
+### Continue exploring
+
+Now that you have an understanding of the principles and mechanics behind natural language processing and sentiment analysis, here are some ways to further your discovery of this topic.
+
+#### Create a streaming sentiment analyzer on OpenShift
+
+While creating local applications to explore sentiment analysis is a convenient first step, having the ability to deploy your applications for wider usage is a powerful next step. By following the instructions and code in this [workshop from Radanalytics.io][11], you will learn how to create a sentiment analyzer that can be containerized and deployed to a Kubernetes platform. You will also see how Apache Kafka is used as a framework for event-driven messaging and how Apache Spark can be used as a distributed computing platform for sentiment analysis.
+
+#### Discover live data with the Twitter API
+
+Although the [Radanalytics.io][12] lab generated synthetic tweets to stream, you are not limited to synthetic data. In fact, anyone with a Twitter account can access the Twitter streaming API and perform sentiment analysis on tweets with the [Tweepy Python][13] package.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable
+
+作者:[Michael McCune ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/elmiko/users/jschlessman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
+[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
+[3]: https://virtualenv.pypa.io/en/stable/
+[4]: https://pypi.org/project/spacy/
+[5]: https://pypi.org/project/vaderSentiment/
+[6]: https://spacy.io/models
+[7]: https://docs.python.org/3.6/tutorial/interpreter.html
+[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
+[9]: http://flask.pocoo.org/
+[10]: https://github.com/elmiko/social-moments-service
+[11]: https://github.com/radanalyticsio/streaming-lab
+[12]: http://Radanalytics.io
+[13]: https://github.com/tweepy/tweepy
From fdeb89afe0d5a3b69d9625cb835e2fdd9741093b Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:14:19 +0800
Subject: [PATCH 0081/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190419=20Gett?=
=?UTF-8?q?ing=20started=20with=20social=20media=20sentiment=20analysis=20?=
=?UTF-8?q?in=20Python=20sources/tech/20190419=20Getting=20started=20with?=
=?UTF-8?q?=20social=20media=20sentiment=20analysis=20in=20Python.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...cial media sentiment analysis in Python.md | 117 ++++++++++++++++++
1 file changed, 117 insertions(+)
create mode 100644 sources/tech/20190419 Getting started with social media sentiment analysis in Python.md
diff --git a/sources/tech/20190419 Getting started with social media sentiment analysis in Python.md b/sources/tech/20190419 Getting started with social media sentiment analysis in Python.md
new file mode 100644
index 0000000000..e70b263acc
--- /dev/null
+++ b/sources/tech/20190419 Getting started with social media sentiment analysis in Python.md
@@ -0,0 +1,117 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with social media sentiment analysis in Python)
+[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
+[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
+
+Getting started with social media sentiment analysis in Python
+======
+Learn the basics of natural language processing and explore two useful
+Python packages.
+![Raspberry Pi and Python][1]
+
+Natural language processing (NLP) is a type of machine learning that addresses the correlation between spoken/written languages and computer-aided analysis of those languages. We experience numerous innovations from NLP in our daily lives, from writing assistance and suggestions to real-time speech translation and interpretation.
+
+This article examines one specific area of NLP: sentiment analysis, with an emphasis on determining the positive, negative, or neutral nature of the input language. This part will explain the background behind NLP and sentiment analysis and explore two open source Python packages. [Part 2][2] will demonstrate how to begin building your own scalable sentiment analysis services.
+
+When learning sentiment analysis, it is helpful to have an understanding of NLP in general. This article won't dig into the mathematical guts; rather, our goal is to clarify the key concepts in NLP that are crucial to incorporating these methods into your solutions in practical ways.
+
+### Natural language and text data
+
+A reasonable place to begin is defining: "What is natural language?" It is the means by which we, as humans, communicate with one another. The primary modalities for communication are verbal and text. We can take this a step further and focus solely on text communication; after all, living in an age of pervasive Siri, Alexa, etc., we know speech is a group of computations away from text.
+
+### Data landscape and challenges
+
+Limiting ourselves to textual data, what can we say about language and text? First, language, particularly English, is fraught with exceptions to rules, plurality of meanings, and contextual differences that can confuse even a human interpreter, let alone a computational one. In elementary school, we learn parts of speech and punctuation, and from speaking our native language, we acquire intuition about which words carry less significance when searching for meaning. Examples of the latter would be words such as "a," "the," and "or," which in NLP are referred to as _stop words_, since traditionally an NLP algorithm's search for meaning stops when reaching one of these words in a sequence.
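+
+For instance, a first pass at cleaning input text might simply drop stop words before any further processing. Here is a minimal sketch in Python; the stop list is illustrative only, as real systems use curated lists.
+
+
+```
+# A hypothetical stop-word filter; the stop list here is illustrative.
+stop_words = {"a", "the", "or", "and", "of"}
+
+text = "the cat and the dog or a bird"
+content = [word for word in text.split() if word not in stop_words]
+print(content)  # ['cat', 'dog', 'bird']
+```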
+
+Since our goal is to automate the classification of text as belonging to a sentiment class, we need a way to work with text data in a computational fashion. Therefore, we must consider how to represent text data to a machine. As we know, the rules for utilizing and interpreting language are complicated, and the size and structure of input text can vary greatly. We'll need to transform the text data into numeric data, the form of choice for machines and math. This transformation falls under the area of _feature extraction_.
+
+Upon extracting numeric representations of input text data, one refinement might be, given an input body of text, to determine a set of quantitative statistics for the articles of speech listed above and perhaps classify documents based on them. For example, a glut of adverbs might make a copywriter bristle, or excessive use of stop words might be helpful in identifying term papers with content padding. Admittedly, this may not have much bearing on our goal of sentiment analysis.
+
+### Bag of words
+
+When you assess a text statement as positive or negative, what are some contextual clues you use to assess its polarity (i.e., whether the text has positive, negative, or neutral sentiment)? One way is connotative adjectives: something called "disgusting" is viewed as negative, but if the same thing were called "beautiful," you would judge it as positive. Colloquialisms, by definition, give a sense of familiarity and often positivity, whereas curse words could be a sign of hostility. Text data can also include emojis, which carry inherent sentiments.
+
+Understanding the polarity influence of individual words provides a basis for the [_bag-of-words_][3] (BoW) model of text. It considers a set of words or vocabulary and extracts measures about the presence of those words in the input text. The vocabulary is formed by considering text where the polarity is known, referred to as _labeled training data_. Features are extracted from this set of labeled data, then the relationships between the features are analyzed and labels are associated with the data.
+
+The name "bag of words" illustrates what it utilizes: namely, individual words without consideration of spatial locality or context. A vocabulary typically is built from all words appearing in the training set, which tends to be pruned afterward. Stop words, if not cleaned prior to training, are removed due to their high frequency and low contextual utility. Rarely used words can also be removed, given the lack of information they provide for general input cases.
+
+It is important to note, however, that you can (and should) go further and consider the appearance of words beyond their use in an individual instance of training data, or what is called [_term frequency_][4] (TF). You should also consider the counts of a word through all instances of input data; typically the infrequency of words among all documents is notable, which is called the [_inverse document frequency_][5] (IDF). These metrics are bound to be mentioned in other articles and software packages on this subject, so having an awareness of them can only help.
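+
+To make these ideas concrete, here is a minimal sketch, assuming the scikit-learn library (which this article does not prescribe), of extracting bag-of-words counts and TF-IDF weights from a tiny corpus:
+
+
+```
+# A hypothetical feature-extraction example with scikit-learn.
+from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
+
+docs = [
+    "the cat sat on the mat",
+    "the dog sat on the log",
+]
+
+bow = CountVectorizer(stop_words="english")    # prune high-frequency stop words
+counts = bow.fit_transform(docs)               # sparse integer count vectors
+print(bow.get_feature_names(), counts.toarray())
+
+tfidf = TfidfVectorizer(stop_words="english")  # reweight counts by rarity (IDF)
+print(tfidf.fit_transform(docs).toarray())
+```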
+
+BoW is useful in a number of document classification applications; however, in the case of sentiment analysis, things can be gamed when the lack of contextual awareness is leveraged. Consider the following sentences:
+
+ * We are not enjoying this war.
+ * I loathe rainy days, good thing today is sunny.
+ * This is not a matter of life and death.
+
+
+
+The sentiment of these phrases is questionable for human interpreters, and by strictly focusing on instances of individual vocabulary words, it's difficult for a machine interpreter as well.
+
+Groupings of words, called _n-grams_, can also be considered in NLP. A bigram considers groups of two adjacent words instead of (or in addition to) the single BoW. This should alleviate situations such as "not enjoying" above, but it will remain open to gaming due to its loss of contextual awareness. Furthermore, in the second sentence above, the sentiment context of the second half of the sentence could be perceived as negating the first half. Thus, spatial locality of contextual clues also can be lost in this approach.
+
+Complicating matters from a pragmatic perspective is the sparsity of features extracted from a given input text. For a thorough and large vocabulary, a count is maintained for each word, which can be considered an integer vector. Most documents will have a large number of zero counts in their vectors, which adds unnecessary space and time complexity to operations. While a number of clever approaches have been proposed for reducing this complexity, it remains an issue.
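+
+To make the bigram idea concrete, here is a short sketch, again assuming scikit-learn (a hypothetical choice), that extracts bigram features from the first sentence above, so that "not enjoying" survives as a single feature:
+
+
+```
+# A hypothetical bigram extraction; ngram_range=(2, 2) requests pairs only.
+from sklearn.feature_extraction.text import CountVectorizer
+
+bigrams = CountVectorizer(ngram_range=(2, 2))
+bigrams.fit_transform(["We are not enjoying this war."])
+print(bigrams.get_feature_names())
+# ['are not', 'enjoying this', 'not enjoying', 'this war', 'we are']
+```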
+
+### Word embeddings
+
+Word embeddings are a distributed representation that allows words with a similar meaning to have a similar representation. This is based on using a real-valued vector to represent words in connection with the company they keep, as it were. The focus is on the manner that words are used, as opposed to simply their existence. In addition, a huge pragmatic benefit of word embeddings is their focus on dense vectors; by moving away from a word-counting model with commensurate amounts of zero-valued vector elements, word embeddings provide a more efficient computational paradigm with respect to both time and storage.
+
+Following are two prominent word embedding approaches.
+
+#### Word2vec
+
+The first of these word embeddings, [Word2vec][6], was developed at Google. You'll probably see this embedding method mentioned as you go deeper in your study of NLP and sentiment analysis. It utilizes either a _continuous bag of words_ (CBOW) or a _continuous skip-gram_ model. In CBOW, a word's context is learned during training based on the words surrounding it. Continuous skip-gram learns the words that tend to surround a given word. Although this is probably more than you'll need to tackle, if you're ever faced with having to generate your own word embeddings, the author of Word2vec advocates the CBOW method for speed and assessment of frequent words, while the skip-gram approach is better suited for embeddings where rare words are more important.
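+
+If you ever do need to train your own vectors, here is a minimal sketch, assuming the gensim library (which this article does not mention), showing how the CBOW and skip-gram variants are selected:
+
+
+```
+# A hypothetical training run with gensim 3.x; the sg flag switches
+# between CBOW (sg=0) and skip-gram (sg=1).
+from gensim.models import Word2Vec
+
+sentences = [["we", "love", "open", "source"],
+             ["we", "love", "linux"]]
+
+cbow = Word2Vec(sentences, size=50, sg=0, min_count=1)
+skipgram = Word2Vec(sentences, size=50, sg=1, min_count=1)
+print(cbow.wv["linux"][:5])  # a dense, real-valued embedding vector
+```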
+
+#### GloVe
+
+The second word embedding, [_Global Vectors for Word Representation_][7] (GloVe), was developed at Stanford. It's an extension to the Word2vec method that attempts to combine the information gained through classical global text statistical feature extraction with the local contextual information determined by Word2vec. In practice, GloVe has outperformed Word2vec for some applications, while falling short of Word2vec's performance in others. Ultimately, the targeted dataset for your word embedding will dictate which method is optimal; as such, it's good to know the existence and high-level mechanics of each, as you'll likely come across them.
+
+#### Creating and using word embeddings
+
+Finally, it's useful to know how to obtain word embeddings; in part 2, you'll see that we are standing on the shoulders of giants, as it were, by leveraging the substantial work of others in the community. This is one method of acquiring a word embedding: namely, using an existing trained and proven model. Indeed, myriad models exist for English and other languages, and it's possible that one does what your application needs out of the box!
+
+If not, the opposite end of the spectrum in terms of development effort is training your own standalone model without consideration of your application. In essence, you would acquire substantial amounts of labeled training data and likely use one of the approaches above to train a model. Even then, you are still only at the point of acquiring understanding of your input-text data; you then need to develop a model specific for your application (e.g., analyzing sentiment valence in software version-control messages) which, in turn, requires its own time and effort.
+
+You also could train a word embedding on data specific to your application; while this could reduce time and effort, the word embedding would be application-specific, which would reduce reusability.
+
+### Available tooling options
+
+You may wonder how you'll ever get to a point of having a solution for your problem, given the intensive time and computing power needed. Indeed, the complexities of developing solid models can be daunting; however, there is good news: there are already many proven models, tools, and software libraries available that may provide much of what you need. We will focus on [Python][8], which conveniently has a plethora of tooling in place for these applications.
+
+#### SpaCy
+
+[SpaCy][9] provides a number of language models for parsing input text data and extracting features. It is highly optimized and touted as the fastest library of its kind. Best of all, it's open source! SpaCy performs tokenization, parts-of-speech classification, and dependency annotation. It contains word embedding models for performing this and other feature extraction operations for over 46 languages. You will see how it can be used for text analysis and feature extraction in the second article in this series.
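+
+As a quick taste, here is a minimal sketch of SpaCy's tokenization, part-of-speech tagging, and dependency annotation; it assumes the small English model has been installed with `python -m spacy download en_core_web_sm` (a hypothetical choice of model):
+
+
+```
+# A hypothetical spaCy session on a short sentence.
+import spacy
+
+nlp = spacy.load("en_core_web_sm")
+doc = nlp("We are not enjoying this war.")
+for token in doc:
+    print(token.text, token.pos_, token.dep_)  # token, POS tag, dependency label
+```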
+
+#### vaderSentiment
+
+The [vaderSentiment][10] package provides a measure of positive, negative, and neutral sentiment. As the [original paper][11]'s title ("VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text") indicates, the models were developed and tuned specifically for social media text data. VADER was trained on a thorough set of human-labeled data, which included common emoticons, UTF-8 encoded emojis, and colloquial terms and abbreviations (e.g., meh, lol, sux).
+
+For given input text data, vaderSentiment returns a 3-tuple of polarity score percentages. It also provides a single scoring measure, referred to as _vaderSentiment's compound metric_. This is a real-valued measurement within the range **[-1, 1]**, wherein sentiment is considered positive for values greater than **0.05**, negative for values less than **-0.05**, and neutral otherwise.
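+
+A minimal sketch of that scoring flow, using one of the example sentences from earlier, might look like this:
+
+
+```
+# A hypothetical scoring example using vaderSentiment's compound metric.
+from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
+
+analyzer = SentimentIntensityAnalyzer()
+scores = analyzer.polarity_scores("I loathe rainy days, good thing today is sunny.")
+
+compound = scores["compound"]
+if compound > 0.05:
+    label = "positive"
+elif compound < -0.05:
+    label = "negative"
+else:
+    label = "neutral"
+print(scores, label)
+```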
+
+In [part 2][2], you will learn how to use these tools to add sentiment analysis capabilities to your designs.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python
+
+作者:[Michael McCune ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/elmiko/users/jschlessman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
+[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-2
+[3]: https://en.wikipedia.org/wiki/Bag-of-words_model
+[4]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
+[5]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency
+[6]: https://en.wikipedia.org/wiki/Word2vec
+[7]: https://en.wikipedia.org/wiki/GloVe_(machine_learning)
+[8]: https://www.python.org/
+[9]: https://pypi.org/project/spacy/
+[10]: https://pypi.org/project/vaderSentiment/
+[11]: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf
From b09ee31069d3a55f8efdd435a806aedb1add6727 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:14:35 +0800
Subject: [PATCH 0082/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190419=20This?=
=?UTF-8?q?=20is=20how=20System76=20does=20open=20hardware=20sources/tech/?=
=?UTF-8?q?20190419=20This=20is=20how=20System76=20does=20open=20hardware.?=
=?UTF-8?q?md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...This is how System76 does open hardware.md | 78 +++++++++++++++++++
1 file changed, 78 insertions(+)
create mode 100644 sources/tech/20190419 This is how System76 does open hardware.md
diff --git a/sources/tech/20190419 This is how System76 does open hardware.md b/sources/tech/20190419 This is how System76 does open hardware.md
new file mode 100644
index 0000000000..7f0ca3e479
--- /dev/null
+++ b/sources/tech/20190419 This is how System76 does open hardware.md
@@ -0,0 +1,78 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (This is how System76 does open hardware)
+[#]: via: (https://opensource.com/article/19/4/system76-hardware)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+This is how System76 does open hardware
+======
+What sets the new Thelio line of desktops apart from the rest.
+![metrics and data shown on a computer screen][1]
+
+Most people know very little about the hardware in their computers. As a long-time Linux user, I've had my share of frustration while getting my wireless cards, video cards, displays, and other hardware working with my chosen distribution. Proprietary hardware often makes it difficult to determine why an Ethernet controller, wireless controller, or mouse performs differently than we expect. As Linux distributions have matured, this has become less of a problem, but we still see some quirks with touchpads and other peripherals, especially when we don't know much—if anything—about our underlying hardware.
+
+Companies like [System76][2] aim to take these types of problems out of the Linux user experience. System76 manufactures a line of Linux laptops, desktops, and servers, and even offers its own Linux distro, [Pop! OS][3], as an option for buyers. Recently, I had the privilege of visiting System76's plant in Denver for [the unveiling][4] of [Thelio][5], its new desktop product line.
+
+### About Thelio
+
+System76 says Thelio's open hardware daughterboard, named Thelio Io after the fifth moon of Jupiter, is one thing that makes the computer unique in the marketplace. Thelio Io is certified [OSHWA #us000145][6] and has four SATA ports for storage and an embedded controller for fan and power button control. Thelio Io SAS is certified [OSHWA #us000146][7] and has four U.2 ports for storage and no embedded controller. During a demonstration, System76 showed how these components adjust fans throughout the chassis to optimize the unit's performance.
+
+The controller also runs the power button and the LED ring around the button, which glows at 100% brightness when it is pressed. This provides both tactile and visual confirmation that the unit is being powered on. While the computer is in use, the button is set to 35% brightness, and when it's in suspend mode, it pulses between 2.35% and 25%. When the computer is off, the LED remains dimly lit so that it's easy to find the power control in a dark room.
+
+Thelio's embedded controller is a low-power [ATmega32U4][8] microcontroller, and the controller's setup can be prototyped with an Arduino Micro. The number of Thelio Io boards changes depending on which Thelio model you purchase.
+
+Thelio is also perhaps the best-designed computer case and system I have ever seen. You'll probably agree if you have ever skinned your knuckles trying to operate inside a typical PC case. I have done this a number of times, and I have the scars to prove it.
+
+### Why open hardware?
+
+The boards were designed in [KiCAD][9], and you can access all of Thelio's design files under GPL on [GitHub][10]. So, why would a company that competes with other PC manufacturers design a unique interface and then license it openly? It's because the company recognizes the value of open design and the ability to share and adjust an I/O board to your needs, even if you're a competitor in the marketplace.
+
+![Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.][11]
+
+Don Watkins speaks with System76 CEO Carl Richell at the [Thelio launch event][12].
+
+I asked [Carl Richell][13], System76's founder and CEO, whether the company is concerned that openly licensing its hardware designs means someone could take its unique design and use it to drive System76 out of business. He said:
+
+> Open hardware benefits all of us. It's how we further advance technology and make it more available to everyone. We welcome anyone who wishes to improve on Thelio's design to do so. Opening the hardware not only helps advance improvements of our computers more quickly, but it also empowers our customers to truly own 100% of their device. Our goal is to remove as much proprietary functioning as we can, while still producing a competitive Linux computer for customers.
+>
+> We've been working with the Linux community for over 13 years to create a flawless and digestible experience on all of our laptops, desktops, and servers. Our long tenure serving the Linux community, providing our customers with a high level of service, and our personability are what makes System76 unique.
+
+I also asked Carl why open hardware makes sense for System76 and the PC business in general. He replied:
+
+> System76 was founded on the idea that technology should be open and accessible to everyone. We're not yet at the point where we can create a computer that is 100% open source, but with open hardware, we're one large, essential step closer to reaching that point.
+>
+> We live in an era where technology has become a utility. Computers are tools for people at every level of education and across many industries. With everyone's needs specific, each person has their own ideas on how they might improve the computer and its software as their primary tool. Having our computers open allows these ideas to come to fruition, which in turn makes the technology a more powerful tool. In an open environment, we constantly get to iterate a better PC. And that's kind of cool.
+
+We wrapped up our conversation by talking about System76's roadmap, which includes open hardware mini PCs and, eventually, laptops. Existing mini PCs and laptops sold under the System76 brand are manufactured by other vendors and are not based on open hardware (although their Linux software is, of course, open source).
+
+Designing and supporting open hardware is a game-changer in the PC business, and it is what sets System76's new Thelio line of desktop computers apart.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/system76-hardware
+
+作者:[Don Watkins ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
+[2]: https://system76.com/
+[3]: https://opensource.com/article/18/1/behind-scenes-popos-linux
+[4]: /article/18/11/system76-thelio-desktop-computer
+[5]: https://system76.com/desktops
+[6]: https://certification.oshwa.org/us000145.html
+[7]: https://certification.oshwa.org/us000146.html
+[8]: https://www.microchip.com/wwwproducts/ATmega32u4
+[9]: http://kicad-pcb.org/
+[10]: https://github.com/system76/thelio-io
+[11]: https://opensource.com/sites/default/files/uploads/don_system76_ceo.jpg (Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.)
+[12]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-FKWFxFv
+[13]: https://www.linkedin.com/in/carl-richell-9435781
From 44aa6eec02d658a39d82df83af8c9bf95f82091a Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:14:49 +0800
Subject: [PATCH 0083/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20How?=
=?UTF-8?q?=20to=20organize=20with=20Calculist:=20Ideas,=20events,=20and?=
=?UTF-8?q?=20more=20sources/tech/20190418=20How=20to=20organize=20with=20?=
=?UTF-8?q?Calculist-=20Ideas,=20events,=20and=20more.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...with Calculist- Ideas, events, and more.md | 120 ++++++++++++++++++
1 file changed, 120 insertions(+)
create mode 100644 sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md
diff --git a/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md b/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md
new file mode 100644
index 0000000000..7c9d844315
--- /dev/null
+++ b/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md
@@ -0,0 +1,120 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to organize with Calculist: Ideas, events, and more)
+[#]: via: (https://opensource.com/article/19/4/organize-calculist)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+How to organize with Calculist: Ideas, events, and more
+======
+Give structure to your ideas and plans with Calculist, an open source
+web app for creating outlines.
+![Team checklist][1]
+
+Thoughts. Ideas. Plans. We all have a few of them. Often, more than a few. And all of us want to make some or all of them a reality.
+
+Far too often, however, those thoughts and ideas and plans are a jumble inside our heads. They refuse to take a discernable shape, preferring instead to rattle around here, there, and everywhere in our brains.
+
+One solution to that problem is to put everything into [an outline][2]. An outline can be a great way to organize what you need to organize and give it the shape you need to take it to the next step.
+
+A number of people I know rely on a popular web-based tool called WorkFlowy for their outlining needs. If you prefer your applications (including web ones) to be open source, you'll want to take a look at [Calculist][3].
+
+The brainchild of [Dan Allison][4], Calculist is billed as _the thinking tool for problem solvers_. It does much of what WorkFlowy does, and it has a few features that its rival is missing.
+
+Let's take a look at using Calculist to organize your ideas (and more).
+
+### Getting started
+
+If you have a server, you can try to [install Calculist][5] on it. If, like me, you don't have a server or just don't have the technical chops, you can turn to the [hosted version][6] of Calculist.
+
+[Sign up][7] for a no-cost account, then log in. Once you've done that, you're ready to go.
+
+### Creating a basic outline
+
+What you use Calculist for really depends on your needs. I use Calculist to create outlines for articles and essays, to create lists of various sorts, and to plan projects. Regardless of what I'm doing, every outline I create follows the same pattern.
+
+To get started, click the **New List** button. This creates a blank outline (which Calculist calls a _list_ ).
+
+![Create a new list in Calculist][8]
+
+The outline is a blank slate waiting for you to fill it up. Give the outline a name, then press Enter. When you do that, Calculist adds the first blank line for your outline. Use that as your starting point.
+
+![A new outline in Calculist][9]
+
+Add a new line by pressing Enter. To indent a line, press the Tab key while on that line. If you need to create a hierarchy, you can indent lines as far as you need to indent them. Press Shift+Tab to outdent a line.
+
+Keep adding lines until you have a completed outline. Calculist saves your work every few seconds, so you don't need to worry about that.
+
+![Calculist outline][10]
+
+### Editing an outline
+
+Outlines are fluid. They morph. They grow and shrink. Individual items in an outline change. Calculist makes it easy for you to adapt and make those changes.
+
+You already know how to add an item to an outline. If you don't, go back a few paragraphs for a refresher. To edit text, click on an item and start typing. Don't double-click (more on this in a few moments). If you accidentally double-click on an item, press Esc on your keyboard and all will be well.
+
+Sometimes you need to move an item somewhere else in the outline. Do that by clicking and holding the bullet for that item. Drag the item and drop it wherever you want it. Anything indented below the item moves with it.
+
+At the moment, Calculist doesn't support adding notes or comments to an item in an outline. A simple workaround I use is to add a line indented one level deeper than the item where I want to add the note. That's not the most elegant solution, but it works.
+
+### Let your keyboard do the walking
+
+Not everyone likes to use their mouse to perform actions in an application. As with a good desktop application, you're not at the mercy of your mouse when you use Calculist. It has many keyboard shortcuts that you can use to move around your outlines and manipulate them.
+
+The keyboard shortcuts I mentioned a few paragraphs ago are just the beginning. There are a couple of dozen keyboard shortcuts that you can use.
+
+For example, you can focus on a single portion of an outline by pressing Ctrl+Right Arrow key. To get back to the full outline, press Ctrl+Left Arrow key. There are also shortcuts for moving up and down in your outline, expanding and collapsing lists, and deleting items.
+
+You can view the list of shortcuts by clicking on your user name in the upper-right corner of the Calculist window and clicking **Preferences**. You can also find a list of [keyboard shortcuts][11] in the Calculist GitHub repository.
+
+If you need or want to, you can change the shortcuts on the **Preferences** page. Click on the shortcut you want to change—you can, for example, change the shortcut for zooming in on an item to Ctrl+0.
+
+### The power of commands
+
+Calculist's keyboard shortcuts are useful, but they're only the beginning. The application has a command mode that enables you to perform basic actions and do some interesting and complex tasks.
+
+To use a command, double-click an item in your outline or press Ctrl+Enter while on it. The item turns black. Type a letter or two, and a list of commands displays. Scroll down to find the command you want to use, then press Enter. There's also a [list of commands][12] in the Calculist GitHub repository.
+
+![Calculist commands][13]
+
+The commands are quite comprehensive. While in command mode, you can, for example, delete an item in an outline or delete an entire outline. You can import or export outlines, sort and group items in an outline, or change the application's theme or font.
+
+### Final thoughts
+
+I've found that Calculist is a quick, easy, and flexible way to create and view outlines. It works equally well on my laptop and my phone, and it packs not only the features I regularly use but many others (including support for [LaTeX math expressions][14] and a [table/spreadsheet mode][15]) that more advanced users will find useful.
+
+That said, Calculist isn't for everyone. If you prefer your outlines on the desktop, then check out [TreeLine][16], [Leo][17], or [Emacs org-mode][18].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/organize-calculist
+
+作者:[Scott Nesbitt ][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
+[2]: https://en.wikipedia.org/wiki/Outline_(list)
+[3]: https://calculist.io/
+[4]: https://danallison.github.io/
+[5]: https://github.com/calculist/calculist-web
+[6]: https://app.calculist.io/
+[7]: https://app.calculist.io/join
+[8]: https://opensource.com/sites/default/files/uploads/calculist-new-list.png (Create a new list in Calculist)
+[9]: https://opensource.com/sites/default/files/uploads/calculist-getting-started.png (A new outline in Calculist)
+[10]: https://opensource.com/sites/default/files/uploads/calculist-outline.png (Calculist outline)
+[11]: https://github.com/calculist/calculist/wiki/Keyboard-Shortcuts
+[12]: https://github.com/calculist/calculist/wiki/Command-Mode
+[13]: https://opensource.com/sites/default/files/uploads/calculist-commands.png (Calculist commands)
+[14]: https://github.com/calculist/calculist/wiki/LaTeX-Expressions
+[15]: https://github.com/calculist/calculist/issues/32
+[16]: https://opensource.com/article/18/1/creating-outlines-treeline
+[17]: http://www.leoeditor.com/
+[18]: https://orgmode.org/
From 70931abd23e3cd13367a787a0b0d5cfb91e3f96f Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:14:59 +0800
Subject: [PATCH 0084/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Elec?=
=?UTF-8?q?tronics=20designed=20in=205=20different=20countries=20with=20op?=
=?UTF-8?q?en=20hardware=20sources/tech/20190418=20Electronics=20designed?=
=?UTF-8?q?=20in=205=20different=20countries=20with=20open=20hardware.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... different countries with open hardware.md | 119 ++++++++++++++++++
1 file changed, 119 insertions(+)
create mode 100644 sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md
diff --git a/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md b/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md
new file mode 100644
index 0000000000..5c81f2d8bc
--- /dev/null
+++ b/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Electronics designed in 5 different countries with open hardware)
+[#]: via: (https://opensource.com/article/19/4/hardware-international)
+[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)
+
+Electronics designed in 5 different countries with open hardware
+======
+This month's open source hardware column looks at certified open
+hardware from five countries that may surprise you.
+![Gadgets and open hardware][1]
+
+The Open Source Hardware Association's [Hardware Registry][2] lists hardware from 29 different countries on five continents, demonstrating the broad, international footprint of certified open source hardware.
+
+![Open source hardware map][3]
+
+In some ways, this international reach shouldn't be a surprise. Like many other open source communities, the open source hardware community is built on top of the internet, not grounded in any specific geographical location. The focus on documentation, sharing, and openness makes it easy for people in different places with different backgrounds to connect and work together to develop new hardware. Even the community-developed open source hardware [definition][4] has been translated into 11 languages from the original English.
+
+Even if you're familiar with the international nature of open source hardware, it can still be refreshing to step back and remember what it means in practice. While it may not surprise you that there are many certifications from the United States, Germany, and India, some of the other countries boasting certifications might be a bit less expected. Let's look at six such projects from five of those countries.
+
+### Bulgaria
+
+Bulgaria may have the highest per-capita open source hardware certification rate of any country on earth. That distinction is mostly due to the work of two companies: [ANAVI Technology][5] and [Olimex][6].
+
+ANAVI focuses mostly on IoT projects built on top of the Raspberry Pi and ESP8266. The concept of "creator contribution" means that these projects can be certified open source even though they are built upon non-open bases. That is because all of ANAVI's work to develop the hardware on top of these platforms (ANAVI's "creator contribution") has been open sourced in compliance with the certification requirements.
+
+The [ANAVI Light pHAT][7] was the first piece of Bulgarian hardware to be certified by OSHWA. The Light pHAT makes it easy to add a 12V RGB LED strip to a Raspberry Pi.
+
+![ANAVI-Light-pHAT][8]
+
+[ANAVI-Light-pHAT][9]
+
+Olimex's first OSHWA certification was for the [ESP32-PRO][10], a highly connectable IoT board built around an ESP32 microcontroller.
+
+![Olimex ESP32-PRO][11]
+
+[Olimex ESP32-PRO][12]
+
+### China
+
+While most people know China is a hotbed for hardware development, fewer realize that it is also the home to a thriving _open source_ hardware culture. One of the reasons is the tireless advocacy of Naomi Wu (also known as [SexyCyborg][13]). It is fitting that the first piece of certified hardware from China is one she helped develop: the [sino:bit][14]. The sino:bit is designed to help introduce students to programming and includes China-specific features like an LED matrix big enough to represent Chinese characters.
+
+![sino:bit][15]
+
+[sino:bit][16]
+
+### Mexico
+
+Mexico has also produced a range of certified open source hardware. A recent certification is the [Meow Meow][17], a capacitive touch interface from [Electronic Cats][18]. Meow Meow makes it easy to use a wide range of objects—bananas are always a favorite—as controllers for your computer.
+
+![Meow Meow][19]
+
+[Meow Meow][20]
+
+### Saudi Arabia
+
+Saudi Arabia jumped into open source hardware earlier this year with the [M1 Rover][21]. The robot is an unmanned vehicle that you can build (and build upon). It is compatible with a number of different packages designed for specific purposes, so you can customize it for a wide range of applications.
+
+![M1-Rover ][22]
+
+[M1-Rover][23]
+
+### Sri Lanka
+
+This project from Sri Lanka is part of a larger effort to improve traffic flow in urban areas. The team behind the [Traffic Wave Disruptor][24] read research about how many traffic jams are caused by drivers slamming on their brakes when they drive too close to the car in front of them, producing a ripple of rapid braking on the road behind them. This stop/start effect can be avoided if cars maintain a consistent, optimal distance from one another. If you reduce the stop/start pattern, you also reduce the number of traffic jams.
+
+![Traffic Wave Disruptor][25]
+
+[Traffic Wave Disruptor][26]
+
+But how can drivers know if they are keeping an optimal distance? The prototype Traffic Wave Disruptor aims to give drivers feedback when they fail to keep optimal spacing. Wider adoption could help increase traffic flow without building new highways or reducing the number of cars using them.
+
+* * *
+
+You may have noticed that all the hardware featured here is based on electronics. In next month's open source hardware column, we will take a look at open source hardware for the outdoors, away from batteries and plugs. Until then, [certify][27] your open source hardware project (especially if your country is not yet on the registry). It might be featured in a future column.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/hardware-international
+
+作者:[Michael Weinberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mweinberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools_0.png?itok=NUIvc-R1 (Gadgets and open hardware)
+[2]: https://certification.oshwa.org/list.html
+[3]: https://opensource.com/sites/default/files/uploads/opensourcehardwaremap.jpg (Open source hardware map)
+[4]: https://www.oshwa.org/definition/
+[5]: http://anavi.technology/
+[6]: https://www.olimex.com/
+[7]: https://certification.oshwa.org/bg000001.html
+[8]: https://opensource.com/sites/default/files/uploads/anavi-light-phat.png (ANAVI-Light-pHAT)
+[9]: http://anavi.technology/#products
+[10]: https://certification.oshwa.org/bg000010.html
+[11]: https://opensource.com/sites/default/files/uploads/olimex-esp32-pro.png (Olimex ESP32-PRO)
+[12]: https://www.olimex.com/Products/IoT/ESP32/ESP32-PRO/open-source-hardware
+[13]: https://www.youtube.com/channel/UCh_ugKacslKhsGGdXP0cRRA
+[14]: https://certification.oshwa.org/cn000001.html
+[15]: https://opensource.com/sites/default/files/uploads/sinobit.png (sino:bit)
+[16]: https://github.com/sinobitorg/hardware
+[17]: https://certification.oshwa.org/mx000003.html
+[18]: https://electroniccats.com/
+[19]: https://opensource.com/sites/default/files/uploads/meowmeow.png (Meow Meow)
+[20]: https://electroniccats.com/producto/meowmeow/
+[21]: https://certification.oshwa.org/sa000001.html
+[22]: https://opensource.com/sites/default/files/uploads/m1-rover.png (M1-Rover )
+[23]: https://www.hackster.io/AhmedAzouz/m1-rover-362c05
+[24]: https://certification.oshwa.org/lk000001.html
+[25]: https://opensource.com/sites/default/files/uploads/traffic-wave-disruptor.png (Traffic Wave Disruptor)
+[26]: https://github.com/Aightm8/Traffic-wave-disruptor
+[27]: https://certification.oshwa.org/
From 3758bd69e6d508c987e19c9436494aee0b4886bf Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:15:11 +0800
Subject: [PATCH 0085/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Leve?=
=?UTF-8?q?l=20up=20command-line=20playgrounds=20with=20WebAssembly=20sour?=
=?UTF-8?q?ces/tech/20190418=20Level=20up=20command-line=20playgrounds=20w?=
=?UTF-8?q?ith=20WebAssembly.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...mmand-line playgrounds with WebAssembly.md | 196 ++++++++++++++++++
1 file changed, 196 insertions(+)
create mode 100644 sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md
diff --git a/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md b/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md
new file mode 100644
index 0000000000..411adc44fa
--- /dev/null
+++ b/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md
@@ -0,0 +1,196 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Level up command-line playgrounds with WebAssembly)
+[#]: via: (https://opensource.com/article/19/4/command-line-playgrounds-webassembly)
+[#]: author: (Robert Aboukhalil https://opensource.com/users/robertaboukhalil)
+
+Level up command-line playgrounds with WebAssembly
+======
+WebAssembly is a powerful tool for bringing command line utilities to
+the web and giving people the chance to tinker with tools.
+![Various programming languages in use][1]
+
+[WebAssembly][2] (Wasm) is a new low-level language designed with the web in mind. Its main goal is to enable developers to compile code written in other languages—such as C, C++, and Rust—into WebAssembly and run that code in the browser. In an environment where JavaScript has traditionally been the only option, WebAssembly is an appealing counterpart, and it enables portability along with the promise for near-native runtimes. WebAssembly has also already been used to port lots of tools to the web, including [desktop applications][3], [games][4], and even [data science tools written in Python][5]!
+
+Another application of WebAssembly is command line playgrounds, where users are free to play with a simulated version of a command line tool. In this article, we'll explore a concrete example of leveraging WebAssembly for this purpose, specifically to port the tool **[jq][6]** —which is normally confined to the command line—to run directly in the browser.
+
+If you haven't heard, jq is a very powerful command line tool for querying, modifying, and wrangling JSON objects on the command line.
+
+### Why WebAssembly?
+
+Aside from WebAssembly, there are two other approaches we can take to build a jq playground:
+
+ 1. **Set up a sandboxed environment** on your server that executes queries and returns the result to the user via API calls. Although this means your users get to play with the real thing, the thought of hosting, securing, and sanitizing user inputs for such an application is worrisome. Aside from security, the other concern is responsiveness; the additional round trips to the server can introduce noticeable latencies and negatively impact the user experience.
+ 2. **Simulate the command line environment using JavaScript** , where you define a series of steps that the user can take. Although this approach is more secure than option 1, it involves _a lot_ more work, as you need to rewrite the logic of the tool in JavaScript. This method is also limiting: when I'm learning a new tool, I'm not just interested in the "happy path"; I want to break things!
+
+
+
+These two solutions are not ideal because we have to choose between security and a meaningful learning experience. Ideally, we could simply run the command line tool directly in the browser, with no servers and no simulations. Lucky for us, WebAssembly is just the solution we need to achieve that.
+
+### Set up your environment
+
+In this article, we'll use the [Emscripten tool][7] to port jq from C to WebAssembly. Conveniently, it provides us with drop-in replacements for the most common C/C++ build tools, including gcc, make, and configure.
+
+Instead of [installing Emscripten from scratch][8] (the build process can take a long time), we'll use a Docker image I put together that comes prepackaged with everything you'll need for this article (and beyond!).
+
+Let's start by pulling the image and creating a container from it:
+
+
+```
+# Fetch docker image containing Emscripten
+docker pull robertaboukhalil/emsdk:1.38.26
+
+# Create container from that image
+docker run -dt --name wasm robertaboukhalil/emsdk:1.38.26
+
+# Enter the container
+docker exec -it wasm bash
+
+# Make sure we can run emcc, Emscripten's wrapper around gcc
+emcc --version
+```
+
+If you see the Emscripten version on the screen, you're good to go!
+
+### Porting jq to WebAssembly
+
+Next, let's clone the jq repository:
+
+
+```
+git clone https://github.com/stedolan/jq.git
+cd jq
+git checkout 9fa2e51
+```
+
+Note that we're checking out a specific commit, just in case the jq code changes significantly after this article is published.
+
+Before we compile jq to WebAssembly, let's first consider how we would normally compile jq to binary for use on the command line.
+
+From the [README file][9], here is what we need to build jq to binary (don't type this in yet):
+
+
+```
+# Fetch jq dependencies
+git submodule update --init
+
+# Generate ./configure file
+autoreconf -fi
+
+# Run ./configure
+./configure \
+--with-oniguruma=builtin \
+--disable-maintainer-mode
+
+# Build jq executable
+make LDFLAGS=-all-static
+```
+
+Instead, to compile jq to WebAssembly, we'll leverage Emscripten's drop-in replacements for the configure and make build tools (note the differences here from the previous entry: **emconfigure** and **emmake** in the Run and Build statements, respectively):
+
+
+```
+# Fetch jq dependencies
+git submodule update --init
+
+# Generate ./configure file
+autoreconf -fi
+
+# Run ./configure
+emconfigure ./configure \
+--with-oniguruma=builtin \
+--disable-maintainer-mode
+
+# Build jq executable
+emmake make LDFLAGS=-all-static
+```
+
+If you type the commands above inside the Wasm container we created earlier, you'll notice that emconfigure and emmake will make sure jq is compiled using emcc instead of gcc (Emscripten also has a g++ replacement called em++).
+
+So far, this was surprisingly easy: we just prepended a handful of commands with Emscripten tools and ported a codebase—comprising tens of thousands of lines—from C to WebAssembly. Note that it won't always be this easy, especially for more complex codebases and graphical applications, but that's for [another article][10].
+
+Another advantage of Emscripten is that it can generate some JavaScript glue code for us that handles initializing the WebAssembly module, calling C functions from JavaScript, and even providing a [virtual filesystem][11].
+
+Let's generate that glue code from the executable file jq that emmake outputs:
+
+
+```
+# But first, rename the jq executable to a .o file; otherwise,
+# emcc complains that the "file has an unknown suffix"
+mv jq jq.o
+
+# Generate .js and .wasm files from jq.o
+# Disable errors on undefined symbols to avoid warnings about llvm_fma_f64
+emcc jq.o -o jq.js \
+-s ERROR_ON_UNDEFINED_SYMBOLS=0
+```
+
+To make sure it works, let's try an example from the [jq tutorial][12] directly on the command line:
+
+
+```
+# Output the description of the latest commit on the jq repo
+$ curl -s "https://api.github.com/repos/stedolan/jq/commits?per_page=5" | \
+node jq.js '.[0].commit.message'
+"Restore cfunction arity in builtins/0\n\nCount arguments up-front at definition/invocation instead of doing it at\nbind time, which comes after generating builtins/0 since e843a4f"
+```
+
+And just like that, we are now ready to run jq in the browser!
+
+### The result
+
+Using the output of emcc above, we can put together a user interface that calls jq on a JSON blob the user provides. This is the approach I took to build [jqkungfu][13] (source code [available on GitHub][14]):
+
+![jqkungfu screenshot][15]
+
+jqkungfu, a playground built by compiling jq to WebAssembly
+
+Although there are similar web apps that let you execute arbitrary jq queries in the browser, they are generally implemented as server-side applications that execute user queries in a sandbox (option #1 above).
+
+Instead, by compiling jq from C to WebAssembly, we get the best of both worlds: the flexibility of the server and the security of the browser. Specifically, the benefits are:
+
+ 1. **Flexibility** : Users can "choose their own adventure" and use the app with fewer limitations
+ 2. **Speed** : Once the Wasm module is loaded, executing queries is extremely fast because all the magic happens in the browser
+ 3. **Security** : No backend means we don't have to worry about our servers being compromised or used to mine Bitcoins
+ 4. **Convenience** : Since we don't need a backend, jqkungfu is simply hosted as static files on a cloud storage platform
+
+
+
+### Conclusion
+
+WebAssembly is a powerful tool for bringing existing command line utilities to the web. When included as part of a tutorial, such playgrounds can become powerful teaching tools. They can even allow your users to test-drive your tool before they bother installing it.
+
+If you want to dive further into WebAssembly and learn how to build applications like jqkungfu (or games like Pacman!), check out my book [_Level up with WebAssembly_][16].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/command-line-playgrounds-webassembly
+
+作者:[Robert Aboukhalil][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/robertaboukhalil
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_language_c.png?itok=mPwqDAD9 (Various programming languages in use)
+[2]: https://webassembly.org/
+[3]: https://www.figma.com/blog/webassembly-cut-figmas-load-time-by-3x/
+[4]: http://www.continuation-labs.com/projects/d3wasm/
+[5]: https://hacks.mozilla.org/2019/03/iodide-an-experimental-tool-for-scientific-communicatiodide-for-scientific-communication-exploration-on-the-web/
+[6]: https://stedolan.github.io/jq/
+[7]: https://emscripten.org/
+[8]: https://emscripten.org/docs/getting_started/downloads.html
+[9]: https://github.com/stedolan/jq/blob/9fa2e51099c55af56e3e541dc4b399f11de74abe/README.md
+[10]: https://medium.com/@robaboukhalil/porting-games-to-the-web-with-webassembly-70d598e1a3ec?sk=20c835664031227eae5690b8a12514f0
+[11]: https://emscripten.org/docs/porting/files/file_systems_overview.html
+[12]: https://stedolan.github.io/jq/tutorial/
+[13]: http://jqkungfu.com
+[14]: https://github.com/robertaboukhalil/jqkungfu/
+[15]: https://opensource.com/sites/default/files/uploads/jqkungfu.gif (jqkungfu screenshot)
+[16]: http://levelupwasm.com/
From a1975bb3fe887c458d942c6e05423e62ffb6e1c9 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:15:25 +0800
Subject: [PATCH 0086/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Simp?=
=?UTF-8?q?lifying=20organizational=20change:=20A=20guide=20for=20the=20pe?=
=?UTF-8?q?rplexed=20sources/tech/20190418=20Simplifying=20organizational?=
=?UTF-8?q?=20change-=20A=20guide=20for=20the=20perplexed.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ional change- A guide for the perplexed.md | 167 ++++++++++++++++++
1 file changed, 167 insertions(+)
create mode 100644 sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md
diff --git a/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md b/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md
new file mode 100644
index 0000000000..e9fa0cb7fd
--- /dev/null
+++ b/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md
@@ -0,0 +1,167 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Simplifying organizational change: A guide for the perplexed)
+[#]: via: (https://opensource.com/open-organization/19/4/simplifying-change)
+[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
+
+Simplifying organizational change: A guide for the perplexed
+======
+Here's a 4-step, open process for making change easier—both for you and
+your organization.
+![][1]
+
+Most organizational leaders have encountered a certain paralysis around efforts to implement culture change—perhaps because of perceived difficulty or the time necessary for realizing our work. But change is only as difficult as we choose to make it. In order to lead successful change efforts, we must simplify our understanding and approach to change.
+
+Change isn't something rare. We live everyday life in a continuous state of change—from grappling with the speed of innovation to simply interacting with the environment around us. Quite simply, *change is how we process, disseminate, and adopt new information.* And whether you're leading a team or an organization—or are simply breathing—you'll benefit from a more focused, simplified approach to change. Here's a process that can save you time and reduce frustration.
+
+### Three interactions with change
+
+Everyone interacts with change in different ways. Those differences are based on who we are, our own unique experiences, and our core beliefs. In fact, [only 5% of decision making involves conscious processing][2]. Even when you don't _think_ you're making a decision, you are actually making a decision (that is, to not take action).
+
+So you see, two actors are at play in situations involving change. The first is the human decision maker. The second is the information _coming to_ the decision maker. Both are present in three sets of interactions at varying stages in the decision-making process.
+
+#### **Engaging change**
+
+First, we must understand that uncertainty is really the result of "new information" we must process. We must accept where we are, at that moment, while waiting for additional information. Engaging with change requires us to trust—at the very least, ourselves and our capacity to manage—as new information continues to arrive. Everyone will respond to new information differently, and those responses are based on multiple factors: general hardwiring, unconscious needs that need to be met to feel safe, and so on. How do you feel safe in periods of uncertainty? Are you routine driven? Do you need details or need to assess risk? Are you good with figuring it out on the fly? Or does safety feel like creating something brand new?
+
+#### **Navigating change**
+
+"Navigating" doesn't necessarily mean "going around" something safely. It's knowing how to "get through it." Navigating change truly requires "all hands on deck" in order to keep everything intact and moving forward as we encounter each oncoming wave of new information. Everyone around you has something to contribute to the process of navigation; leverage them for “smooth sailing."
+
+#### **Adopting change**
+
+Only a small set of members in your organization will be truly comfortable with adopting change. But that committed and confident minority can spread the fire of change and help you grow some innovative ideas within your organization. Consider taking advantage of what researchers call "[the pendulum effect][3]," which holds that a group as small as 5% of an organization's population can influence a crowd's direction (the other 95% will follow along without realizing it). Moreover, [scientists at Rensselaer Polytechnic Institute have found][4] that when just 10% of a population holds an unshakable belief, that belief will always be adopted by a majority. Findings from this cognitive study have implications for the spread of innovations and movements within a collective group of people. Opportunities for mass adoption are directly related to your influence with the external parties around you.
+
+### A useful matrix to guide culture change
+
+So far, we've identified three "interactions" every person, team, or department will experience with change: "engaging," "navigating," and "adopting." When we examine the work of _implementing_ change in the broader context of an organization (any kind), we can also identify _three relationships_ that drive the success of each interaction: "people," "capacity," and "information."
+
+Here's a brief list of considerations you should make—at every moment and with every relationship—to help you build roadmaps thoughtfully.
+
+#### **Engaging—People**
+
+Organizational success comes from the overlap of awareness and action of the "I" and the "We."
+
+ * _Individuals (I)_ are aware of and engage based on their [natural response strength][5].
+ * _Teams (We)_ are aware of and balance their responsibilities, initiative by initiative, based on individual strengths.
+ * _Leaders (I/We)_ leverage insight based on knowing themselves (I) and the collective (We).
+
+
+
+#### **Engaging—Capacity**
+
+"Capacity" applies to skills, processes, and culture that is clearly structured, documented, and accessible with your organization. It is the “space” within which you operate and achieve solutions.
+
+ * _Current state_ awareness allows you to use what and who you have available and accessible through your known operational capacity.
+ * _Future state_ needs will show you what is required of you to learn, _or stretch_, in order to bridge any gaps; essentially, you will design the recoding of your organization.
+
+
+
+#### **Engaging—Information**
+
+ * _Access to information_ is readily available to all based on appropriate needs within protocols.
+ * _Communication flows_ easily and is reciprocated at all levels.
+ * _Communication flow_ is timely and transparent.
+
+
+
+#### **Navigating—People**
+
+ * Balanced responses from both individuals and the collective will impact your outcomes.
+ * Balance the _I_ with the _We_. This allows responses to co-exist in a seamless, collaborative way, which fuels every project.
+
+
+
+#### **Navigating—Capacity**
+
+ * _Skills:_ Assuring a continuous state of assessment and learning through various modalities allows you to navigate with ease as each person advances their understanding in preparation for the next iteration of change.
+ * _Culture:_ Be clear on goals and mission, and support an ecosystem in which your teams can operate and contribute their best efforts when working together.
+ * _Processes:_ Review existing processes and let go of anything that prevents you from evolving. Open practices and methodologies allow for a higher rate of adaptability and faster decision making.
+ * _Utilize talent:_ Discover who is already in your organization and how you can leverage their talent in new ways. Go beyond your known teams and seek out sources of new perspectives.
+
+
+
+#### **Navigating—Information**
+
+ * Be clear on your mission.
+ * Be very clear on your desired endgame so everyone knows what you are navigating toward (without clearly defined and posted directions, it's easy to waste time, money, and effort, resulting in missed targets).
+
+
+
+#### **Adopting—People**
+
+ * _Behaviors_ have a critical impact on influence and adoption.
+ * For _internal adoption_ , consider the [pendulum of thought][3] swung by the committed few.
+
+
+
+#### **Adopting—Capacity**
+
+ * _Sustainability:_ Leverage people who are more routine- and legacy-oriented to help stabilize and embed your new initiatives.
+ * This frees your innovators and co-creators to move into the next phase of development and begin solving problems while other team members perform follow-through efforts.
+
+
+
+#### **Adopting—Information**
+
+ * Be open and transparent with your external communication.
+ * Lead the way in _what_ you do and _how_ you do it to create a tidal wave of change.
+ * Remember that mass adoption has a tipping point of 10%.
+
+
+
+[**Download a one-page guide to this model on GitHub.**][6]
+
+---
+
+### Four steps to simplify change
+
+You now understand what change is and how you are processing it. You've seen how you and your organization can reframe various interactions with it. Now, let's examine the four steps to simplify how you interact with and implement change as an individual, team leader, or organizational executive.
+
+#### **1\. Understand change**
+
+Change is receiving and processing new information and determining how to respond and participate with it (think personal or organizational operating system). Change is a _reciprocal_ action between yourself and incoming new information (think system interface). Change is an evolutionary process that happens in layers and stages in a continuous cycle (think data processing, bug fixes, and program iterations).
+
+#### **2\. Know your people**
+
+Change is personal, and responses vary by context. People's responses to change are not indicators of the speed of adoption. Knowing how your people and your teams interact with change allows you to balance and optimize your efforts at solving problems, building solutions, and sustaining implementations. Are they change makers, fast followers, innovators, stabilizers? When you know how you, _or others_, process change, you can leverage your risk mitigators to sweep for potential pitfalls and your routine-minded folks to be responsible for implementation follow-through.
+
+#### **3\. Know your capacity**
+
+Your capacity to implement widespread change will depend on your culture, your processes, and your decision-making models. Get familiar with your operational capacity and guardrails (process and policy).
+
+#### **4\. Prepare for interaction**
+
+Each interaction draws on your people, your operational capacity, and your information flow. Working through the stages of change is not always a linear process; stages may overlap at certain points along the way. Understand that [_people_ feed all engagement, navigation, and adoption actions][7].
+
+Humans are built to adapt to our environments. Yes, any kind of change can be scary at first. But it need not involve some major new implementation with a large, looming deadline that throws you off. Knowing that you can take a simplified approach to change, you'll hopefully be able to engage new information with ease. Using this approach over time, and integrating it as habit, allows both the _I_ and the _We_ to experience continuous cycles of change without the tensions of old.
+
+_Want to learn more about simplifying change?[View additional resources on GitHub][8]._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/4/simplifying-change
+
+作者:[Jen Kelchner][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jenkelchner
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_2dot0.png?itok=bKJ41T85
+[2]: http://www.simplifyinginterfaces.com/2008/08/01/95-percent-of-brain-activity-is-beyond-our-conscious-awareness/
+[3]: http://www.leeds.ac.uk/news/article/397/sheep_in_human_clothing__scientists_reveal_our_flock_mentality
+[4]: https://news.rpi.edu/luwakkey/2902
+[5]: https://opensource.com/open-organization/18/7/transformation-beyond-digital-2
+[6]: https://github.com/jenkelchner/simplifying-change/blob/master/Visual_%20Simplifying%20Change%20(1).pdf
+[7]: https://opensource.com/open-organization/17/7/digital-transformation-people-1
+[8]: https://github.com/jenkelchner/simplifying-change
From a4101e0540db636d2b96c4bff21fd27f0d22f237 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:15:51 +0800
Subject: [PATCH 0087/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20How?=
=?UTF-8?q?=20to=20use=20Ansible=20to=20document=20procedures=20sources/te?=
=?UTF-8?q?ch/20190417=20How=20to=20use=20Ansible=20to=20document=20proced?=
=?UTF-8?q?ures.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...w to use Ansible to document procedures.md | 132 ++++++++++++++++++
1 file changed, 132 insertions(+)
create mode 100644 sources/tech/20190417 How to use Ansible to document procedures.md
diff --git a/sources/tech/20190417 How to use Ansible to document procedures.md b/sources/tech/20190417 How to use Ansible to document procedures.md
new file mode 100644
index 0000000000..51eddfe92c
--- /dev/null
+++ b/sources/tech/20190417 How to use Ansible to document procedures.md
@@ -0,0 +1,132 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to use Ansible to document procedures)
+[#]: via: (https://opensource.com/article/19/4/ansible-procedures)
+[#]: author: (Marco Bravo https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo)
+
+How to use Ansible to document procedures
+======
+In Ansible, the documentation is the playbook, so the documentation
+naturally evolves alongside the code
+![][1]
+
+> "Documentation is a love letter that you write to your future self." —[Damian Conway][2]
+
+I use [Ansible][3] as my personal notebook for documenting coding procedures—both the ones I use often and the ones I rarely use. This process facilitates my work and reduces the time it takes to do repetitive tasks, the ones where specific commands in a certain sequence are executed to accomplish a specific result.
+
+By documenting with Ansible, I don't need to memorize all the parameters for each command or all the steps involved with a specific procedure, and it's easy to share the details with my teammates.
+
+Traditional approaches for documentation, like wikis or shared drives, are useful for general documents, but inevitably they become outdated and can't keep pace with the rapid changes in infrastructure and environments. For specific procedures, it's better to document directly into the code using a tool like Ansible.
+
+### Ansible's advantages
+
+Before we begin, let's recap some basic Ansible concepts: a _playbook_ is a high-level organization of procedures using plays; _plays_ are specific procedures for a group of hosts; _tasks_ are specific actions; _modules_ are units of code; and an _inventory_ is a list of managed nodes.
+
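+To make those concepts concrete, here's a minimal sketch of a playbook. The host group, task, and path are hypothetical, but the structure is the standard one:
+
+```
+# site.yml: a playbook containing a single play
+- name: Document a simple procedure       # the play
+  hosts: webservers                       # a host group from your inventory
+  tasks:
+    - name: Ensure the log directory exists   # a task...
+      file:                                    # ...that calls the "file" module
+        path: /var/log/myapp
+        state: directory
+```
+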
+Ansible's great advantage is that the documentation is the playbook itself, so it evolves with and is contained inside the code. This is not only useful; it's also practical because, more than just documenting solutions with Ansible, you're also coding a playbook that permits you to write your procedures and commands, reproduce them, and automate them. This way, you can look back in six months and be able to quickly understand and execute them again.
+
+It's true that this way of resolving problems could take more time at first, but it will definitely save a lot of time in the long term. By being courageous and disciplined enough to adopt these new habits, you will improve your skills with each iteration.
+
+Following are some other important elements and support tools that will facilitate your process.
+
+### Use source code control
+
+> "First do it, then do it right, then do it better." —[Addy Osmani][4]
+
+When working with Ansible playbooks, it's very important to implement a playbook-as-code strategy. A good way to accomplish this is to use a source code control repository that will permit you to start with a simple solution and iterate to improve it.
+
+A source code control repository provides many advantages as you collaborate with other developers, restore previous versions, and back up your work. But in creating documentation, its main advantages are that you get traceability of what you are doing and can iterate around small changes to improve your work.
+
+The most popular source control system is [Git][5], but there are [others][6] like [Subversion][7], [Bazaar][8], [BitKeeper][9], and [Mercurial][10].
+
+### Keep idempotency in mind
+
+In infrastructure automation, idempotency means reaching a specific end state that remains the same no matter how many times the process is executed. So when you are preparing to automate your procedures, keep the desired result in mind and write scripts and commands that will achieve it consistently.
+
+This concept exists in most Ansible modules because after you specify the desired final state, Ansible will accomplish it. For instance, there are modules for creating filesystems, modifying iptables, and managing cron entries. All of these modules are idempotent by default, so you should give them preference.
+
+If you are using some of the lower-level modules, like command or shell, or developing your own modules, be careful to write code that will be idempotent and safe to repeat many times to get the same result.
+
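+For example, one common way to keep a `shell` task safe to repeat (the script and paths here are invented for illustration) is to guard it with the module's `creates` argument, so Ansible skips the task once the result already exists:
+
+```
+- name: Generate the report only once
+  shell: /opt/scripts/make_report.sh > /tmp/report.txt
+  args:
+    creates: /tmp/report.txt   # if this file already exists, the task is skipped
+```
+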
+The idempotency concept is important when you prepare procedures for automation because it permits you to evaluate several scenarios and incorporate the ones that will make your code safer and create an abstraction level that points to the desired result.
+
+### Test it!
+
+Testing your deployment workflow creates fewer surprises when your code arrives in production. Ansible holds that you shouldn't need another framework to validate basic things in your infrastructure, and that's true; your focus should be on application testing, not infrastructure testing.
+
+Ansible's documentation offers several [testing strategies for your procedures][11]. For testing Ansible playbooks, you can use [Molecule][12], which is designed to aid in the development and testing of Ansible roles. Molecule supports testing with multiple instances, operating systems/distributions, virtualization providers, test frameworks, and testing scenarios. This means Molecule will run through all the testing steps: linting verifications, checking playbook syntax, building Docker environments, running playbooks against Docker environments, running the playbook again to verify idempotence, and cleaning everything up afterward. [Testing Ansible roles with Molecule][13] is a good introduction to Molecule.
+
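+As a rough sketch of that workflow (assuming Molecule is installed and you are inside a role that has a Molecule scenario), the entire cycle above runs from a single command:
+
+```
+$ pip install molecule   # one common way to install it
+$ cd my-role/            # a hypothetical role with a molecule/ scenario directory
+$ molecule test          # lint, converge, verify idempotence, clean up
+```
+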
+### Run it!
+
+Running Ansible playbooks can generate logs that are formatted in an unfriendly and difficult-to-read way. In those cases, the Ansible Run Analysis (ARA) is a great complementary tool, as it provides an intuitive interface for browsing those runs. Read [Analyzing Ansible runs using ARA][14] for more information.
+
+Remember to protect your passwords and other sensitive information with [Ansible Vault][15]. Vault can encrypt binary files, **group_vars**, **host_vars**, **include_vars**, and **var_files**. But this encrypted data is exposed when you run a playbook in **-v** (verbose) mode, so it's a good idea to combine it with the keyword **no_log** set to **true**, which indicates that the value of the argument should not be logged or displayed.
+
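+As a brief illustration (the file, module, and variable names are hypothetical), you might encrypt a variables file with `ansible-vault encrypt group_vars/all/secrets.yml` and then hide a sensitive task's output like this:
+
+```
+- name: Create the database user
+  mysql_user:
+    name: app
+    password: "{{ vault_db_password }}"   # variable lives in the encrypted file
+  no_log: true   # keep the password out of logs, even in -v mode
+```
+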
+### A basic example
+
+Do you need to connect to a server to produce a report file and copy the file to another server? Or do you need a lot of specific parameters to connect? Maybe you're not sure where to store the parameters. Or are your procedures taking a long time because you need to collect all the parameters from several sources?
+
+Suppose you have a network topology with some restrictions and you need to copy a file from a server that you can access (**server1**) to another server that is managed by a third party (**server2**). The parameters to connect are:
+
+
+```
+Source server: server1
+Target server: server2
+Port: 2202
+User: transfers
+SSH Key: transfers_key
+File to copy: file.log
+Remote directory: /logs/server1/
+```
+
+In this scenario, you need to connect to **server1** and copy the file using these parameters. You can accomplish this using a one-line command:
+
+
+```
+ssh server1 "scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key file.log server2:/logs/server1/"
+```
+
+Now your playbook can do the procedure.
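+
+As a hedged sketch (the playbook and play names are invented; the connection details are the ones above), the one-liner might be wrapped in a task like this:
+
+```
+# copy_log.yml: run with "ansible-playbook copy_log.yml"
+- name: Copy the report from server1 to server2
+  hosts: server1
+  tasks:
+    - name: Push file.log to server2 over SSH port 2202
+      shell: >
+        scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key
+        file.log server2:/logs/server1/
+```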
+
+### Useful combinations
+
+If you produce a lot of Ansible playbooks, you can organize all your procedures with other tools like [AWX][16] (Ansible Works Project), which provides a web-based user interface, a REST API, and a task engine built on top of Ansible so that users can better control their Ansible project use in IT environments.
+
+Other interesting combinations are Ansible with [Rundeck][17], which provides procedures as self-service jobs, and [Jenkins][18] for continuous integration and continuous delivery processes.
+
+### Conclusion
+
+I hope that these tips for using Ansible will help you improve your automation processes, coding, and documentation. If you have more interest, dive in and learn more. And I would like to hear your ideas or questions, so please share them in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/ansible-procedures
+
+作者:[Marco Bravo][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2
+[2]: https://en.wikipedia.org/wiki/Damian_Conway
+[3]: https://www.ansible.com/
+[4]: https://addyosmani.com/
+[5]: https://git-scm.com/
+[6]: https://en.wikipedia.org/wiki/Comparison_of_version_control_software
+[7]: https://subversion.apache.org/
+[8]: https://bazaar.canonical.com/en/
+[9]: https://www.bitkeeper.org/
+[10]: https://www.mercurial-scm.org/
+[11]: https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html
+[12]: https://molecule.readthedocs.io/en/latest/
+[13]: https://opensource.com/article/18/12/testing-ansible-roles-molecule
+[14]: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara
+[15]: https://docs.ansible.com/ansible/latest/user_guide/vault.html
+[16]: https://github.com/ansible/awx
+[17]: https://www.rundeck.com/ansible
+[18]: https://www.redhat.com/en/blog/integrating-ansible-jenkins-cicd-process
From 760e7a79e9dfa34ede383c6e320a354d1baa0720 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 12:31:17 +0800
Subject: [PATCH 0088/1154] PRF:20190417 HTTPie - A Modern Command Line HTTP
Client For Curl And Wget Alternative.md
@zgj1024
---
...TP Client For Curl And Wget Alternative.md | 94 ++++++++++---------
1 file changed, 48 insertions(+), 46 deletions(-)
diff --git a/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
index a05948c9af..0592564464 100644
--- a/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
+++ b/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -1,38 +1,37 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-HTTPie – 替代 Curl 和 Wget 的现代 HTTP 命令行客户端
+HTTPie:替代 Curl 和 Wget 的现代 HTTP 命令行客户端
======
-大多数时间我们会使用 Curl 命令或是 Wget 命令下载文件或者做其他事
+大多数时间我们会使用 `curl` 命令或是 `wget` 命令下载文件或者做其他事。
-我们以前曾写过 **[最佳命令行下载管理器][1]** 的文章。你可以点击相应的 URL 连接来浏览这些文章。
+我们以前曾写过 [最佳命令行下载管理器][1] 的文章。你可以点击相应的 URL 连接来浏览这些文章。
- * **[aria2 – Linux 下的多协议命令行下载工具][2]**
- * **[Axel – Linux 下的轻量级命令行下载加速器][3]**
- * **[Wget – Linux 下的标准命令行下载工具][4]**
- * **[curl – Linux 下的实用的命令行下载工具][5]**
+* [aria2 – Linux 下的多协议命令行下载工具][2]
+* [Axel – Linux 下的轻量级命令行下载加速器][3]
+* [Wget – Linux 下的标准命令行下载工具][4]
+* [curl – Linux 下的实用的命令行下载工具][5]
+今天我们将讨论同样的话题。这个实用程序名为 HTTPie。
-今天我们将讨论同样的话题。实用程序名为 HTTPie。
-
-它是现代命令行 http 客户端,也是curl和wget命令的最佳替代品。
+它是现代命令行 http 客户端,也是 `curl` 和 `wget` 命令的最佳替代品。
### 什么是 HTTPie?
-HTTPie (发音是 aitch-tee-tee-pie) 是一个 Http 命令行客户端。
+HTTPie (发音是 aitch-tee-tee-pie) 是一个 HTTP 命令行客户端。
-httpie 工具是现代命令的 HTTP 客户端,它能让命令行界面与 Web 服务进行交互。
+HTTPie 工具是现代的 HTTP 命令行客户端,它能通过命令行界面与 Web 服务进行交互。
-他提供一个简单 Http 命令,运行使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
+它提供一个简单的 `http` 命令,允许使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
-HTTPie 能用于测试、debugging及与 HTTP 服务器交互。
+HTTPie 能用于测试、调试及与 HTTP 服务器交互。
### 主要特点
@@ -40,50 +39,52 @@ HTTPie 能用于测试、debugging及与 HTTP 服务器交互。
* 格式化的及彩色化的终端输出
* 内置 JSON 支持
* 表单和文件上传
- * HTTPS, 代理, 和认证
+ * HTTPS、代理和认证
* 任意请求数据
* 自定义头部
- * 持久化会话(sessions)
- * 类似 wget 的下载
+ * 持久化会话
+ * 类似 `wget` 的下载
* 支持 Python 2.7 和 3.x
### 在 Linux 下如何安装 HTTPie
大部分 Linux 发行版都提供了系统包管理器,可以用它来安装。
-**`Fedora`** 系统,使用 **[DNF 命令][6]** 来安装 httpie
+Fedora 系统,使用 [DNF 命令][6] 来安装 httpie:
```
$ sudo dnf install httpie
```
-**`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][7]** 或 **[APT 命令][8]** 来安装 httpie。
+Debian/Ubuntu 系统,使用 [APT-GET 命令][7] 或 [APT 命令][8] 来安装 HTTPie。
```
$ sudo apt install httpie
```
-基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][9]** 来安装 httpie。
+基于 Arch Linux 的系统,使用 [Pacman 命令][9] 来安装 HTTPie。
```
$ sudo pacman -S httpie
```
-**`RHEL/CentOS`** 的系统, 使用 **[YUM 命令][10]** 来安装 httpie。
+RHEL/CentOS 的系统,使用 [YUM 命令][10] 来安装 HTTPie。
```
$ sudo yum install httpie
```
-**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][11]** 来安装 httpie。
+openSUSE Leap 系统,使用 [Zypper 命令][11] 来安装 HTTPie。
```
$ sudo zypper install httpie
```
-### 1) 如何使用 HTTPie 请求URL?
+### 用法
-httpie 的基本用法是将网站的 URL 作为参数。
+#### 如何使用 HTTPie 请求 URL?
+
+HTTPie 的基本用法是将网站的 URL 作为参数。
```
# http 2daygeek.com
@@ -99,9 +100,9 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
-### 2) 如何使用 HTTPie 下载文件
+#### 如何使用 HTTPie 下载文件
-你可以使用带 `--download` 参数的 HTTPie 命令下载文件。类似于 wget 命令。
+你可以使用带 `--download` 参数的 HTTPie 命令下载文件。类似于 `wget` 命令。
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
@@ -148,10 +149,11 @@ Vary: Accept-Encoding
Downloading 31.31 kB to "Anbox-1.png"
Done. 31.31 kB in 0.01551s (1.97 MB/s)
```
-如何使用HTTPie恢复部分下载?
-### 3) 如何使用 HTTPie 恢复部分下载?
+
+#### 如何使用 HTTPie 恢复部分下载?
你可以使用带 `-c` 参数的 HTTPie 继续下载。
+
```
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
HTTP/1.1 206 Partial Content
@@ -169,24 +171,24 @@ Downloading 100.00 MB to "100MB.bin"
| 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
```
-你根据下面的输出验证是否同一个文件
+你根据下面的输出验证是否同一个文件:
+
```
[email protected]:/var/log# ls -lhtr 100MB.bin
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
```
-### 5) 如何使用 HTTPie 上传文件?
+#### 如何使用 HTTPie 上传文件?
-你可以通过使用带有 `小于号 "<"` 的 HTTPie 命令上传文件
-You can upload a file using HTTPie with the `less-than symbol "<"` symbol.
+你可以通过使用带有小于号 `<` 的 HTTPie 命令上传文件
```
$ http https://transfer.sh < Anbox-1.png
```
-### 6) 如何使用带有重定向符号">" 的 HTTPie 下载文件?
+#### 如何使用带有重定向符号 > 下载文件?
-你可以使用带有 `重定向 ">"` 符号的 HTTPie 命令下载文件。
+你可以使用带有重定向 `>` 符号的 HTTPie 命令下载文件。
```
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
@@ -195,7 +197,7 @@ $ http https://transfer.sh < Anbox-1.png
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
```
-### 7) 发送一个 HTTP GET 请求?
+#### 发送一个 HTTP GET 请求?
您可以在请求中发送 HTTP GET 方法。GET 方法会使用给定的 URI,从给定服务器检索信息。
@@ -214,7 +216,7 @@ Transfer-Encoding: chunked
Vary: Accept-Encoding
```
-### 8) 提交表单?
+#### 提交表单?
使用以下格式提交表单。POST 请求用于向服务器发送数据,例如客户信息、文件上传等。要使用 HTML 表单。
@@ -261,24 +263,24 @@ Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
-### 9) HTTP 认证?
+#### HTTP 认证?
-当前支持的身份验证认证方案是基本认证(Basic)和摘要验证(Digest)
-The currently supported authentication schemes are Basic and Digest
+当前支持的身份验证认证方案是基本认证(Basic)和摘要验证(Digest)。
-基本认证
+基本认证:
```
$ http -a username:password example.org
```
-摘要验证
+摘要验证:
```
$ http -A digest -a username:password example.org
```
-提示输入密码
+提示输入密码:
+
```
$ http -a username example.org
```
@@ -289,8 +291,8 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/zgj1024)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[zgj1024](https://github.com/zgj1024)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -306,4 +308,4 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
-[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
\ No newline at end of file
+[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
From d1eea1f8b790cc48b24cbe9ee239cc6e3da9eeec Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 12:32:01 +0800
Subject: [PATCH 0089/1154] PUB:20190417 HTTPie - A Modern Command Line HTTP
Client For Curl And Wget Alternative.md
@zgj1024 https://linux.cn/article-10765-1.html
---
... Command Line HTTP Client For Curl And Wget Alternative.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md (99%)
diff --git a/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md b/published/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
similarity index 99%
rename from translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
rename to published/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
index 0592564464..b6d1b5d17b 100644
--- a/translated/tech/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
+++ b/published/20190417 HTTPie - A Modern Command Line HTTP Client For Curl And Wget Alternative.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10765-1.html)
[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
From 4bc50b0985e2b50fd25b7496069e75f5114df85e Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:32:34 +0800
Subject: [PATCH 0090/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Fuji?=
=?UTF-8?q?tsu=20completes=20design=20of=20exascale=20supercomputer,=20pro?=
=?UTF-8?q?mises=20to=20productize=20it=20sources/talk/20190418=20Fujitsu?=
=?UTF-8?q?=20completes=20design=20of=20exascale=20supercomputer,=20promis?=
=?UTF-8?q?es=20to=20productize=20it.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...upercomputer, promises to productize it.md | 58 +++++++++++++++++++
1 file changed, 58 insertions(+)
create mode 100644 sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md
diff --git a/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md b/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md
new file mode 100644
index 0000000000..59978d555c
--- /dev/null
+++ b/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md
@@ -0,0 +1,58 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it)
+[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Fujitsu completes design of exascale supercomputer, promises to productize it
+======
+Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors.
+![Riken Advanced Institute for Computational Science][1]
+
+Fujitsu and Japanese research institute Riken announced that the design for the Post-K supercomputer, to be launched in 2021, is complete and that they will productize the design for sale later this year.
+
+The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010.
+
+**[ Also read:[10 of the world's fastest supercomputers][2] ]**
+
+It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM’s Sequoia, but even now, eight years later, it’s still among the top 20 supercomputers in the world.
+
+### What's in the Post-K supercomputer?
+
+The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip.
+
+A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack, which comes out to roughly one petaflop per rack (384 × 2.7 teraflops ≈ 1,037 teraflops).
+
+Contrast that with Summit, the top supercomputer in the world, built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops.
+
+Let me put it another way: IBM’s Power processor and Nvidia’s Tesla are about to get pwned by a derivative of the chip in your iPhone.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
+
+Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors.
+
+Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first if only for bragging rights.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg
+[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
+[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
From 2b586a1cba07a253cc7edb27619f32d30bc18b86 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:32:47 +0800
Subject: [PATCH 0091/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Cisc?=
=?UTF-8?q?o=20warns=20WLAN=20controller,=209000=20series=20router=20and?=
=?UTF-8?q?=20IOS/XE=20users=20to=20patch=20urgent=20security=20holes=20so?=
=?UTF-8?q?urces/talk/20190418=20Cisco=20warns=20WLAN=20controller,=209000?=
=?UTF-8?q?=20series=20router=20and=20IOS-XE=20users=20to=20patch=20urgent?=
=?UTF-8?q?=20security=20holes.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...XE users to patch urgent security holes.md | 76 +++++++++++++++++++
1 file changed, 76 insertions(+)
create mode 100644 sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md
diff --git a/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md b/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md
new file mode 100644
index 0000000000..5abcb3bcba
--- /dev/null
+++ b/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes)
+[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes
+======
+Cisco says unpatched vulnerabilities could lead to DoS attacks, arbitrary code execution, and takeover of devices.
+![Woolzian / Getty Images][1]
+
+Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.
+
+The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.
+
+**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
+
+The problem has a Common Vulnerability Scoring System number of 9.8 out of 10.
+
+According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
+
+ * The failure to restrict the use of CMP-specific Telnet options to internal, local communications between cluster members, instead accepting and processing such options over any Telnet connection to an affected device
+ * The incorrect processing of malformed CMP-specific Telnet options
+
+
+
+Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6. This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.
+
+The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found on the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information [go here][6].
+
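+On IOS devices, that mitigation is a small configuration change. As a rough sketch (consult the hardening guide linked above for the complete procedure), restricting the vty lines to SSH looks something like this:
+
+```
+! Accept only SSH on the vty lines; incoming Telnet is refused
+line vty 0 4
+ transport input ssh
+```
+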
+The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco’s ASR 9000 carrier-class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This CVSS also has a 9.8 rating.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
+
+Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.
+
+Cisco has released [free software updates][6] that address the vulnerability described in this advisory.
+
+Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device. The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated.
+
+“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated.
+
+“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.”
+
+Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds.
+
+Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg
+[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp
+[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
+[4]: https://www.networkworld.com/newsletters/signup.html
+[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html
+[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
+[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp
+[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
From f724bdce99c9351030e557abab89bed2a4d7513f Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:32:56 +0800
Subject: [PATCH 0092/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20'Fib?=
=?UTF-8?q?er-in-air'=205G=20network=20research=20gets=20funding=20sources?=
=?UTF-8?q?/talk/20190418=20-Fiber-in-air-=205G=20network=20research=20get?=
=?UTF-8?q?s=20funding.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...n-air- 5G network research gets funding.md | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 sources/talk/20190418 -Fiber-in-air- 5G network research gets funding.md
diff --git a/sources/talk/20190418 -Fiber-in-air- 5G network research gets funding.md b/sources/talk/20190418 -Fiber-in-air- 5G network research gets funding.md
new file mode 100644
index 0000000000..4a82248cde
--- /dev/null
+++ b/sources/talk/20190418 -Fiber-in-air- 5G network research gets funding.md
@@ -0,0 +1,64 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: ('Fiber-in-air' 5G network research gets funding)
+[#]: via: (https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+'Fiber-in-air' 5G network research gets funding
+======
+A consortium of tech companies and universities plans to aggressively investigate the exploitation of D-Band to develop a new variant of 5G infrastructure.
+![Peshkova / Getty Images][1]
+
+Wireless transmission at data rates of around 45Gbps could one day be commonplace, some engineers say. “Fiber-in-air” is how the latest variant of 5G infrastructure is being described. To get there, a Britain-funded consortium of chip makers, universities, and others intends to aggressively investigate the exploitation of D-Band. That part of the radio spectrum is at 151-174.8 GHz in millimeter wavelengths (mm-wave) and hasn’t been used before.
+
+The researchers intend to do it by riffing on a now roughly 70-year-old gun-like electron-sending device that can trace its roots back through the annals of radio history: The Traveling Wave Tube, or TWT, an electron gun-magnet-combo that was used in the development of television and still brings space images back to Earth.
+
+**[ Also read:[The time of 5G is almost here][2] ]**
+
+D-Band, the spectrum the researchers want to use, has the advantage that it’s wide, so theoretically it should be good for fast, copious data rates. The problem with it though, and the reason it hasn’t thus far been used, is that it’s subject to monkey-wrenching from atmospheric conditions such as rain, explains IQE, a semiconductor wafer and materials producer involved in the project, in a [press release][3]. The team says attenuation is fixable, though. Their solution is the now-aging TWTs.
+
+The group, which includes BT, Filtronic, Glasgow University, Intel, Nokia Bell Labs, Optocap, and Teledyne e2v, has secured funding of the equivalent of $1.12 million USD from the U.K.’s [Engineering and Physical Sciences Research Council (EPSRC)][4]. That’s the principal public funding body for engineering science research there.
+
+### Tapping the power of TWTs
+
+The DLINK system, as the team calls it, will use a high-power vacuum TWT with a special, newly developed tunneling diode and a modulator. Two bands of 10 GHz each will deliver the throughput, [explains Lancaster University on its website][5]. The tubes are, in fact, special amplifiers that produce 10 watts. That’s 10 times what an equivalent solid-state solution would likely produce at the same spot in the band, they say. Energy is basically transferred from the electron beam to an electric field generated by the input signal.
+
+Despite TWTs being around for eons, “no D-band TWTs are available in the market.” The development of one is key to these fiber-in-air speeds, the researchers say.
+
+The resulting links will offer “unprecedented data rate and transmission distance,” IQE writes.
+
+The TWT device, although used extensively in space wireless communications since its invention in the 1950s, is overlooked as a significant contributor to global communications systems, says a group of French researchers working separately from this project, who recently argued that TWTs should be given more recognition.
+
+TWTs are “the unsung heroes of space exploration,” the Aix-Marseille Université researchers say in [an article on publisher Springer’s website][6]. Springer is promoting the group's 2019-published [paper][7] in the European Physical Journal H, in which they delve into the history of the simple electron gun and magnet device.
+
+“Its role in the history of wireless communications and in the space conquest is significant, but largely ignored,” they write in their paper.
+
+They will be pleased to hear that it may not be going away anytime soon.
+
+Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all
+
+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/abstract_data_coding_matrix_structure_network_connections_by_peshkova_gettyimages-897683944_2400x1600-100788487-large.jpg
+[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
+[3]: https://www.iqep.com/media/2019/03/iqe-partners-in-key-wireless-communications-project-for-5g-infrastructure-(1)/
+[4]: https://epsrc.ukri.org/
+[5]: http://wp.lancs.ac.uk/dlink/
+[6]: https://www.springer.com/gp/about-springer/media/research-news/all-english-research-news/traveling-wave-tubes--the-unsung-heroes-of-space-exploration/16578434
+[7]: https://link.springer.com/article/10.1140%2Fepjh%2Fe2018-90023-1
+[8]: https://www.facebook.com/NetworkWorld/
+[9]: https://www.linkedin.com/company/network-world
From 03007b4249f9bea5d276a9e1ba420de182930f3c Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:33:07 +0800
Subject: [PATCH 0093/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190418=20Most?=
=?UTF-8?q?=20data=20center=20workers=20happy=20with=20their=20jobs=20--?=
=?UTF-8?q?=20despite=20the=20heavy=20demands=20sources/talk/20190418=20Mo?=
=?UTF-8?q?st=20data=20center=20workers=20happy=20with=20their=20jobs=20--?=
=?UTF-8?q?=20despite=20the=20heavy=20demands.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...their jobs -- despite the heavy demands.md | 73 +++++++++++++++++++
1 file changed, 73 insertions(+)
create mode 100644 sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
diff --git a/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md b/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
new file mode 100644
index 0000000000..5030e830dc
--- /dev/null
+++ b/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Most data center workers happy with their jobs -- despite the heavy demands)
+[#]: via: (https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Most data center workers happy with their jobs -- despite the heavy demands
+======
+An Informa Engage and Data Center Knowledge survey finds data center workers are content with their jobs, so much so they would encourage their children to go into that line of work.
+![Thinkstock][1]
+
+A [survey conducted by Informa Engage and Data Center Knowledge][2] finds data center workers overall are content with their jobs, so much so that they would encourage their children to go into that line of work, despite the heavy demands on their time and brains.
+
+Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” while a third strongly agreed. And 75% agreed with the statement, “If my child, niece or nephew asked, I’d recommend getting into IT.”
+
+**[ Also read:[20 hot jobs ambitious IT pros should shoot for][3] ]**
+
+And there is a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer.
+
+That’s despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern. Only 56% felt they had the training necessary to do their job, and 74% said they had been in the IT industry for more than a decade.
+
+The industry offers certification programs (every major IT hardware provider has them), but 61% said they have not completed or renewed certifications in the past 12 months. There are several reasons why.
+
+A third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesn’t see a need for training, and 16% cited no training plans within their workplace.
+
+That doesn’t surprise me, since tech is one of the most open industries in the world where you can find training or educational materials and teach yourself. It’s already established that [many coders are self-taught][4], including industry giants Bill Gates, Steve Wozniak, John Carmack, and Jack Dorsey.
+
+**[[Looking to upgrade your career in tech? This comprehensive online course teaches you how.][5] ]**
+
+### Data center workers' salaries
+
+Data center workers can’t complain about the pay. Well, most can’t, as 50% make $100,000 per year or more, but 11% make less than $40,000. Two-thirds of those surveyed are in the U.S., so those on the low end might be outside the country.
+
+There was one notable discrepancy. Steve Brown, managing director of London-based Datacenter People, noted that software engineers get paid a lot better than the hardware people.
+
+“The software engineering side of the data center is comparable to the highest-earning professions,” Brown said in the report. “On the physical infrastructure — the mechanical/electrical side — it’s not quite the case. It’s more equivalent to mid-level management.”
+
+### Data center professionals still predominantly male
+
+The least surprising finding? Nine out of 10 survey respondents were male. The industry is bending over backwards to fix the gender imbalance, but so far nothing has changed.
+
+The conclusion of the report is a bit ominous, but I also think it is wrong:
+
+> “As data center infrastructure completes its transition to a cloud computing model, and software moves into containers and microservices, the remaining, treasured leaders of the data center workforce — people who acquired their skills in the 20th century — may find themselves with nothing recognizable they can manage and no-one to lead. We may be shocked when the crisis finally hits, but we won’t be able to say we weren’t warned.”
+
+How many times do I have to say it, [the data center is not going away][6].
+
+Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/data_center_thinkstock_879720438-100749725-large.jpg
+[2]: https://informa.tradepub.com/c/pubRD.mpl?sr=oc&_t=oc:&qf=w_dats04&ch=datacenterkids
+[3]: https://www.networkworld.com/article/3276025/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
+[4]: https://www.networkworld.com/article/3046178/survey-finds-most-coders-are-self-taught.html
+[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
+[6]: https://www.networkworld.com/article/3289509/two-studies-show-the-data-center-is-thriving-instead-of-dying.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
From 27ab35376f1d1e6522641e250c85b36f43d4739a Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:33:36 +0800
Subject: [PATCH 0094/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Clea?=
=?UTF-8?q?ring=20up=20confusion=20between=20edge=20and=20cloud=20sources/?=
=?UTF-8?q?talk/20190417=20Clearing=20up=20confusion=20between=20edge=20an?=
=?UTF-8?q?d=20cloud.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...ing up confusion between edge and cloud.md | 69 +++++++++++++++++++
1 file changed, 69 insertions(+)
create mode 100644 sources/talk/20190417 Clearing up confusion between edge and cloud.md
diff --git a/sources/talk/20190417 Clearing up confusion between edge and cloud.md b/sources/talk/20190417 Clearing up confusion between edge and cloud.md
new file mode 100644
index 0000000000..722051e8a7
--- /dev/null
+++ b/sources/talk/20190417 Clearing up confusion between edge and cloud.md
@@ -0,0 +1,69 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Clearing up confusion between edge and cloud)
+[#]: via: (https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all)
+[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
+
+Clearing up confusion between edge and cloud
+======
+The benefits of edge computing are not just hype; however, that doesn’t mean you should throw cloud computing initiatives to the wind.
+![iStock][1]
+
+Edge computing and cloud computing are sometimes discussed as if they’re mutually exclusive approaches to network infrastructure. While they may function in different ways, utilizing one does not preclude the use of the other.
+
+Indeed, [Futurum Research][2] found that, among companies that have deployed edge projects, only 15% intend to separate these efforts from their cloud computing initiatives — largely for security or compartmentalization reasons.
+
+So then, what’s the difference, and how do edge and cloud work together?
+
+**Location, location, location**
+
+Moving data and processing to the cloud, as opposed to on-premises data centers, has enabled businesses to move faster, more efficiently, and less expensively, and in many cases more securely.
+
+Yet cloud computing is not without challenges, particularly:
+
+ * Users will abandon a graphics-heavy website if it doesn’t load quickly. So, imagine the lag for compute-heavy processing associated with artificial intelligence or machine learning functions.
+
+ * The strength of network connectivity is crucial for large data sets. As enterprises increasingly generate data, particularly with the adoption of Internet of Things (IoT), traditional cloud connections will be insufficient.
+
+To make up for the lack of speed and connectivity with cloud, processing for mission-critical applications will need to occur closer to the data source. Maybe that’s a robot on the factory floor, digital signage at a retail store, or an MRI machine in a hospital. That’s edge computing, which reduces the distance the data must travel and thereby boosts the performance and reliability of applications and services.
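+
+As a back-of-the-envelope illustration, the latency argument is easy to check from a shell. A minimal sketch, assuming a hypothetical nearby edge gateway at edge-gateway.local and a distant cloud endpoint at cloud.example.com:
+
+```
+# Compare average round-trip time to an edge node and a cloud region
+for host in edge-gateway.local cloud.example.com; do
+    # -c 5: five probes; -q: print only the summary line
+    avg=$(ping -c 5 -q "$host" | awk -F/ '/^rtt|^round-trip/ {print $5}')
+    echo "$host average RTT: ${avg} ms"
+done
+```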
+
+**One doesn’t supersede the other**
+
+That said, the benefits gained by edge computing don’t negate the need for cloud. In many cases, IT will now become a decision-maker in terms of best usage for each. For example, edge might make sense for devices running processing-power-hungry apps such as IoT, artificial intelligence, and machine learning. And cloud will work for apps where time isn’t necessarily of the essence, like inventory or big-data projects.
+
+> “By being able to triage the types of data processing on the edge versus that heading to the cloud, we can keep both systems running smoothly – keeping our customers and employees safe and happy,” [writes Daniel Newman][3], principal analyst for Futurum Research.
+
+And in reality, edge will require cloud. “To enable digital transformation, you have to build out the edge computing side and connect it with the cloud,” [Tony Antoun][4], senior vice president of edge and digital at GE Digital, told _Automation World_. “It’s a journey from the edge to the cloud and back, and the cycle keeps continuing. You need both to enrich and boost the business and take advantage of different points within this virtual lifecycle.”
+
+**Ensuring resiliency of cloud and edge**
+
+Both edge and cloud computing require careful consideration of the underlying processing power. Connectivity and availability, no matter the application, are always critical measures.
+
+But especially for the edge, it will be important to have a resilient architecture. Companies should focus on ensuring security, redundancy, connectivity, and remote management capabilities.
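+
+A resilient posture can start small. Here is a minimal, hedged sketch of an availability probe for a remote edge node (the host, port, and log tag are hypothetical):
+
+```
+#!/bin/sh
+# Alert when the edge gateway stops answering on its management port
+if ! nc -zw3 edge-gateway.local 443; then
+    logger -t edge-watch "edge gateway unreachable; alerting ops"
+fi
+```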
+
+Discover how your edge and cloud computing environments can coexist at [APC.com][5].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all
+
+作者:[Anne Taylor][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Anne-Taylor/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/04/istock-612507606-100793995-large.jpg
+[2]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
+[3]: https://futurumresearch.com/edge-computing-data-centers/
+[4]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
+[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
From 8a4ce3bf9e19b9b767f82d87bf4494ca924c78ac Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:33:46 +0800
Subject: [PATCH 0095/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Star?=
=?UTF-8?q?tup=20MemVerge=20combines=20DRAM=20and=20Optane=20into=20massiv?=
=?UTF-8?q?e=20memory=20pool=20sources/talk/20190417=20Startup=20MemVerge?=
=?UTF-8?q?=20combines=20DRAM=20and=20Optane=20into=20massive=20memory=20p?=
=?UTF-8?q?ool.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...RAM and Optane into massive memory pool.md | 58 +++++++++++++++++++
1 file changed, 58 insertions(+)
create mode 100644 sources/talk/20190417 Startup MemVerge combines DRAM and Optane into massive memory pool.md
diff --git a/sources/talk/20190417 Startup MemVerge combines DRAM and Optane into massive memory pool.md b/sources/talk/20190417 Startup MemVerge combines DRAM and Optane into massive memory pool.md
new file mode 100644
index 0000000000..71ddf70826
--- /dev/null
+++ b/sources/talk/20190417 Startup MemVerge combines DRAM and Optane into massive memory pool.md
@@ -0,0 +1,58 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Startup MemVerge combines DRAM and Optane into massive memory pool)
+[#]: via: (https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Startup MemVerge combines DRAM and Optane into massive memory pool
+======
+MemVerge bridges two technologies that are already a bridge.
+![monsitj / Getty Images][1]
+
+A startup called MemVerge has announced software to combine regular DRAM with Intel’s Optane DIMM persistent memory into a single clustered storage pool, without requiring any changes to applications.
+
+MemVerge has been working with Intel on this new hardware platform for close to two years. It offers what it calls a Memory-Converged Infrastructure (MCI) that allows existing apps to use Optane DC persistent memory and is architected to integrate seamlessly with existing applications.
+
+**[ Read also:[Mass data fragmentation requires a storage rethink][2] ]**
+
+Optane memory is designed to sit between high-speed memory and [solid-state drives][3] (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor.
+
+Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache but apps don’t natively support it. They need to be tweaked to function properly in Optane memory.
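+
+On Linux, persistent-memory namespaces can be inspected and exposed to applications from the shell. A sketch, assuming the ndctl utility and PMEM hardware are present:
+
+```
+# List NVDIMM namespaces in human-readable form
+ndctl list -N -u
+# A namespace in fsdax mode appears as /dev/pmem0; mounting it with DAX
+# lets applications map persistent memory directly, bypassing the page cache
+mkfs.ext4 /dev/pmem0
+mount -o dax /dev/pmem0 /mnt/pmem
+```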
+
+As it was explained to me, apps aren’t designed for persistent memory, where data is already in memory on powerup rather than having to be loaded from storage. The app has to know that memory doesn’t go away and that it doesn’t need to shuffle data back and forth between storage and memory. That’s why apps don’t natively work in persistent memory.
+
+### Why didn't Intel think of this?
+
+All of which raises a question I can’t get answered, at least not immediately: Why didn’t Intel think of this when it created Optane in the first place?
+
+MemVerge has what it calls Distributed Memory Objects (DMO) hypervisor technology to provide a logical convergence layer to run data-intensive workloads at memory speed with guaranteed data consistency across multiple systems. This allows Optane memory to process and derive insights from the enormous amounts of data in real time.
+
+That’s because MemVerge’s technology makes random access as fast as sequential access. Normally, random access is slower than sequential because of all the jumping around with random access vs. reading one sequential file. But MemVerge can handle many small files as fast as it handles one large file.
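+
+The claim is easy to sanity-check on your own hardware. A hedged sketch using the fio benchmark, if installed (the mount point and file name are hypothetical):
+
+```
+# Sequential reads in large blocks vs. random reads in small blocks
+fio --name=seq  --rw=read     --bs=128k --size=1G --filename=/mnt/pmem/testfile
+fio --name=rand --rw=randread --bs=4k   --size=1G --filename=/mnt/pmem/testfile
+# On conventional storage the random job is far slower; memory-class media
+# narrows that gap, which is the effect MemVerge claims to generalize
+```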
+
+MemVerge itself is actually software, with a single API for both DRAM and Optane. It’s also available via a hyperconverged server appliance that comes with two Cascade Lake processors, up to 512GB of DRAM, 6TB of Optane memory, and 360TB of NVMe physical storage capacity.
+
+However, all of this is still vapor. MemVerge doesn’t expect to ship a beta product until at least June.
+
+Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
+[2]: https://www.networkworld.com/article/3323580/mass-data-fragmentation-requires-a-storage-rethink.html
+[3]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
From 4a365a19b165a8ce0cbf8592588ba41803ab4c27 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:33:56 +0800
Subject: [PATCH 0096/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Cisc?=
=?UTF-8?q?o=20Talos=20details=20exceptionally=20dangerous=20DNS=20hijacki?=
=?UTF-8?q?ng=20attack=20sources/talk/20190417=20Cisco=20Talos=20details?=
=?UTF-8?q?=20exceptionally=20dangerous=20DNS=20hijacking=20attack.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...tionally dangerous DNS hijacking attack.md | 130 ++++++++++++++++++
1 file changed, 130 insertions(+)
create mode 100644 sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md
diff --git a/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md b/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md
new file mode 100644
index 0000000000..db534e4457
--- /dev/null
+++ b/sources/talk/20190417 Cisco Talos details exceptionally dangerous DNS hijacking attack.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cisco Talos details exceptionally dangerous DNS hijacking attack)
+[#]: via: (https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+Cisco Talos details exceptionally dangerous DNS hijacking attack
+======
+Cisco Talos says state-sponsored attackers are battering DNS to gain access to sensitive networks and systems
+![Peshkova / Getty][1]
+
+Security experts at Cisco Talos have released a [report detailing][2] what it calls the “first known case of a domain name registry organization that was compromised for cyber espionage operations.”
+
+Talos calls the ongoing cyber threat campaign “Sea Turtle” and says that state-sponsored attackers are abusing DNS to harvest credentials and gain access to sensitive networks and systems in a way that victims are unable to detect, displaying unique knowledge of how to manipulate DNS.
+
+**More about DNS:**
+
+ * [DNS in the cloud: Why and why not][3]
+ * [DNS over HTTPS seeks to make internet use more private][4]
+ * [How to protect your infrastructure from DNS cache poisoning][5]
+ * [ICANN housecleaning revokes old DNS security key][6]
+
+By obtaining control of victims’ DNS, the attackers can change or falsify any data on the Internet and illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reported.
+
+DNS, routinely known as the Internet’s phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email.
+
+### Threat to DNS could spread
+
+At this point Talos says Sea Turtle isn't compromising organizations in the U.S.
+
+“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated.
+
+Talos reports that the ongoing operation likely began as early as January 2017 and has continued through the first quarter of 2019. “Our investigation revealed that approximately 40 different organizations across 13 different countries were compromised during this campaign,” Talos stated. “We assess with high confidence that this activity is being carried out by an advanced, state-sponsored actor that seeks to obtain persistent access to sensitive networks and systems.”
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**
+
+Talos says the attackers directing the Sea Turtle campaign show signs of being highly sophisticated and have continued their attacks despite public reports of their activities. Most threat actors stop or slow down their activities once their campaigns are publicly revealed, which suggests the Sea Turtle actors are unusually brazen and may be difficult to deter going forward, Talos stated.
+
+In January the Department of Homeland Security (DHS) [issued an alert][8] about this activity, warning that an attacker could redirect user traffic and obtain valid encryption certificates for an organization’s domain names.
+
+At that time the DHS’s [Cybersecurity and Infrastructure Security Agency][9] said in its [Emergency Directive][9] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”
+
+### DNS hijacking
+
+CISA said that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records. Then the attacker alters DNS records, like Address, Mail Exchanger, or Name Server records, replacing the legitimate address of the services with an address the attacker controls.
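+
+One hedged way to spot this kind of tampering is to compare what your resolver returns with what the domain’s authoritative name server returns, using dig (the domain below is a placeholder):
+
+```
+domain=mail.example.com
+dig +short "$domain" A                # answer from the default resolver
+ns=$(dig +short example.com NS | head -n 1)
+dig +short "@$ns" "$domain" A         # answer straight from an authoritative NS
+# A mismatch between the two answers is a red flag worth investigating
+```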
+
+To achieve their nefarious goals, Talos stated, the Sea Turtle actors:
+
+ * Use DNS hijacking through the use of actor-controlled name servers.
+ * Are aggressive in their pursuit, targeting DNS registries and a number of registrars, including those that manage country-code top-level domains (ccTLDs).
+ * Use Let’s Encrypt, Comodo, Sectigo, and self-signed certificates in their man-in-the-middle (MitM) servers to gain the initial round of credentials.
+ * Steal victim organizations’ legitimate SSL certificates and use them on actor-controlled servers.
+
+Such actions also distinguish Sea Turtle from an earlier DNS exploit known as DNSpionage, which [Talos reported][10] on in November 2018.
+
+Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by [DNSpionage.][11]
+
+In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
+
+In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. Let's Encrypt provides X.509 certificates for [Transport Layer Security (TLS)][12] free of charge to the user, Talos said.
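+
+Because such certificates are presented openly to every client, defenders can at least inspect what a host is actually serving. A minimal sketch with openssl (the hostname is a placeholder):
+
+```
+# Print the issuer and validity window of the certificate a host presents
+echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null |
+    openssl x509 -noout -issuer -dates
+# An unexpected issuer on your own domain, such as a fresh Let's Encrypt
+# certificate you never requested, deserves immediate attention
+```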
+
+The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization. Talos research further shows that the known exploits used by Sea Turtle include:
+
+ * CVE-2009-1151: PHP code injection vulnerability affecting phpMyAdmin
+ * CVE-2014-6271: remote code execution (RCE) affecting the GNU Bash shell, specifically via SMTP (this was part of the Shellshock CVEs; a quick local check is sketched after this list)
+ * CVE-2017-3881: RCE by an unauthenticated user with elevated privileges on Cisco switches
+ * CVE-2017-6736: RCE for the Cisco Integrated Services Router 2811
+ * CVE-2017-12617: RCE affecting Apache web servers running Tomcat
+ * CVE-2018-0296: directory traversal allowing unauthorized access to Cisco Adaptive Security Appliances (ASAs) and firewalls
+ * CVE-2018-7600: RCE for websites built with Drupal, aka “Drupalgeddon”
+
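+As flagged in the list above, a quick, hedged way to test a local shell for the Shellshock entry (CVE-2014-6271) is the classic one-liner:
+
+```
+# If the word "vulnerable" is printed, the bash in PATH still executes
+# function bodies smuggled in through environment variables
+env x='() { :;}; echo vulnerable' bash -c 'echo shellshock test done'
+```
+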
+“As with any initial access involving a sophisticated actor, we believe this list of CVEs to be incomplete,” Talos stated. “The actor in question can leverage known vulnerabilities as they encounter a new threat surface. This list only represents the observed behavior of the actor, not their complete capabilities.”
+
+Talos says that the Sea Turtle campaign continues to be highly successful for several reasons. “First, the actors employ a unique approach to gain access to the targeted networks. Most traditional security products such as IDS and IPS systems are not designed to monitor and log DNS requests,” Talos stated. “The threat actors were able to achieve this level of success because the DNS domain space system added security into the equation as an afterthought. Had more ccTLDs implemented security features such as registrar locks, attackers would be unable to redirect the targeted domains.”
+
+Talos said the attackers also used previously undisclosed techniques such as certificate impersonation. “This technique was successful in part because the SSL certificates were created to provide confidentiality, not integrity. The attackers stole organizations’ SSL certificates associated with security appliances such as [Cisco's Adaptive Security Appliance] to obtain VPN credentials, allowing the actors to gain access to the targeted network and have long-term persistent access,” Talos stated.
+
+### Cisco Talos DNS attack mitigation strategy
+
+To protect against Sea Turtle, Cisco recommends:
+
+ * Use a registry lock service, which will require an out-of-band message before any changes can occur to an organization's DNS record (a quick way to check a domain's lock status is sketched after this list).
+ * If your registrar does not offer a registry-lock service, Talos recommends implementing multi-factor authentication, such as DUO, to access your organization's DNS records.
+ * If you suspect you were targeted by this type of intrusion, Talos recommends instituting a network-wide password reset, preferably from a computer on a trusted network.
+ * Apply patches, especially on internet-facing machines. Network administrators can monitor passive DNS records on their domains to check for abnormalities.
+
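+As noted in the first recommendation above, a domain’s lock status is visible from the shell. A hedged sketch using whois (the domain is a placeholder):
+
+```
+# Registrar/registry locks show up as EPP status codes in whois output
+whois example.com | grep -i 'status:'
+# Healthy output includes statuses such as clientTransferProhibited or
+# serverUpdateProhibited; their absence means changes are unguarded
+```
+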
+Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/man-in-boat-surrounded-by-sharks_risk_fear_decision_attack_threat_by-peshkova-getty-100786972-large.jpg
+[2]: https://blog.talosintelligence.com/2019/04/seaturtle.html
+[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
+[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
+[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
+[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
+[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
+[8]: https://www.networkworld.com/article/3336201/batten-down-the-dns-hatches-as-attackers-strike-feds.html
+[9]: https://cyber.dhs.gov/ed/19-01/
+[10]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
+[11]: https://krebsonsecurity.com/tag/dnspionage/
+[12]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
+[13]: https://www.facebook.com/NetworkWorld/
+[14]: https://www.linkedin.com/company/network-world
From 7d004758328a7adb4193e184f67d9670bf90a32c Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Apr 2019 12:34:06 +0800
Subject: [PATCH 0097/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Want?=
=?UTF-8?q?=20to=20the=20know=20future=20of=20IoT=3F=20Ask=20the=20develop?=
=?UTF-8?q?ers!=20sources/talk/20190417=20Want=20to=20the=20know=20future?=
=?UTF-8?q?=20of=20IoT-=20Ask=20the=20developers.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... know future of IoT- Ask the developers.md | 119 ++++++++++++++++++
1 file changed, 119 insertions(+)
create mode 100644 sources/talk/20190417 Want to the know future of IoT- Ask the developers.md
diff --git a/sources/talk/20190417 Want to the know future of IoT- Ask the developers.md b/sources/talk/20190417 Want to the know future of IoT- Ask the developers.md
new file mode 100644
index 0000000000..4f96f34b2b
--- /dev/null
+++ b/sources/talk/20190417 Want to the know future of IoT- Ask the developers.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Want to know the future of IoT? Ask the developers!)
+[#]: via: (https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+Want to know the future of IoT? Ask the developers!
+======
+A new survey of IoT developers reveals that connectivity, performance, and standards are growing areas of concern as IoT projects hit production.
+![Avgust01 / Getty Images][1]
+
+It may be a cliché that software developers rule the world, but if you want to know the future of an important technology, it pays to look at what the developers are doing. With that in mind, there are some real, on-the-ground insights for the entire internet of things (IoT) community to be gained in a new [survey of more than 1,700 IoT developers][2] (pdf) conducted by the [Eclipse Foundation][3].
+
+### IoT connectivity concerns
+
+Perhaps not surprisingly, security topped the list of concerns, easily outpacing other IoT worries. But that's where things begin to get interesting. More than a fifth (21%) of IoT developers cited connectivity as a challenge, followed by data collection and analysis (19%), performance (18%), privacy (18%), and standards (16%).
+
+Connectivity rose to second place after being the number three IoT concern for developers last year. Worries over security and data collection and analysis, meanwhile, actually declined slightly year over year. (Concerns over performance, privacy, and standards also increased significantly from last year.)
+
+**[ Learn more:[Download a PDF bundle of five essential articles about IoT in the enterprise][4] ]**
+
+“If you look at the list of developers’ top concerns with IoT in the survey,” said [Mike Milinkovich][5], executive director of the Eclipse Foundation, via email, “I think connectivity, performance, and standards stand out — those are speaking to the fact that the IoT projects are getting real, that they’re getting out of sandboxes and into production.”
+
+“With connectivity in IoT,” Milinkovich continued, “everything seems straightforward until you have a sensor in a corner somewhere — narrowband or broadband — and physical constraints make it hard to connect."
+
+He also cited a proliferation of incompatible technologies that is driving developer concerns over connectivity.
+
+![][6]
+
+### IoT standards and interoperability
+
+Milinkovich also addressed one of [my personal IoT bugaboos: interoperability][7]. “Standards is a proxy for interoperability” among products from different vendors, he explained, which is an “elusive goal” in industrial IoT (IIoT).
+
+**[[Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][8] ]**
+
+“IIoT is about breaking down the proprietary silos and re-tooling the infrastructure that’s been in our factories and logistics for many years using OSS standards and implementations — standard sets of protocols as opposed to vendor-specific protocols,” he said.
+
+That becomes a big issue when you’re deploying applications in the field and different manufacturers are using different protocols or non-standard extensions to existing protocols and the machines can’t talk to each other.
+
+**[ Also read:[Interoperability is the key to IoT success][7] ]**
+
+“This ties back to the requirement of not just having open standards, but more robust implementations of those standards in open source stacks,” Milinkovich said. “To keep maturing, the market needs not just standards, but out-of-the-box interoperability between devices.”
+
+“Performance is another production-grade concern,” he said. “When you’re in development, you think you know the bottlenecks, but then you discover the real-world issues when you push to production.”
+
+### Cloudy developments for IoT
+
+The survey also revealed that in some ways, IoT is very much aligned with the larger technology community. For example, IoT use of public and hybrid cloud architectures continues to grow. Amazon Web Services (AWS) (34%), Microsoft Azure (23%), and Google Cloud Platform (20%) are the leading IoT cloud providers, just as they are throughout the industry. If anything, AWS’ lead may be smaller in the IoT space than it is in other areas, though reliable cloud-provider market share figures are notoriously hard to come by.
+
+But Milinkovich sees industrial IoT as “a massive opportunity for hybrid cloud” because many industrial IoT users are very concerned about minimizing latency with their factory data, what he calls “their gold.” He sees factories moving towards hybrid cloud environments, leveraging “modern infrastructure technology like Kubernetes, and building around open protocols like HTTP and MQTT while getting rid of the older proprietary protocols.”
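+
+MQTT in particular is trivial to experiment with from a shell. A minimal sketch, assuming the Mosquitto client tools are installed and a broker at a hypothetical address:
+
+```
+# Publish a sensor reading from a device...
+mosquitto_pub -h broker.example.com -t factory/line1/temp -m '21.7'
+# ...and watch all temperature topics from an edge gateway
+mosquitto_sub -h broker.example.com -t 'factory/+/temp' -v
+```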
+
+### How IoT development is different
+
+In some ways, the IoT development world doesn’t seem much different than wider software development. For example, the top IoT programming languages mirror [the popularity of those languages][9] over all, with C and Java ruling the roost. (C led the way on constrained devices, while Java was the top choice for gateway and edge nodes, as well as the IoT cloud.)
+
+![][10]
+
+But Milinkovich noted that when developing for embedded or constrained devices, the programmer’s interface to a device could be through any number of esoteric hardware connectors.
+
+“You’re doing development using emulators and simulators, and it’s an inherently different and more complex interaction between your dev environment and the target for your application,” he said. “Sometimes hardware and software are developed in tandem, which makes it even more complicated.”
+
+For example, he explained, building an IoT solution may bring in web developers working on front ends using JavaScript and Angular, while backend cloud developers control cloud infrastructure and embedded developers focus on building software to run on constrained devices.
+
+No wonder IoT developers have so many things to worry about.
+
+**More about IoT:**
+
+ * [What is the IoT? How the internet of things works][11]
+ * [What is edge computing and how it’s changing the network][12]
+ * [Most powerful Internet of Things companies][13]
+ * [10 Hot IoT startups to watch][14]
+ * [The 6 ways to make money in IoT][15]
+ * [What is digital twin technology? [and why it matters]][16]
+ * [Blockchain, service-centric networking key to IoT success][17]
+ * [Getting grounded in IoT networking and security][4]
+ * [Building IoT-ready networks must become a priority][18]
+ * [What is the Industrial IoT? [And why the stakes are so high]][19]
+
+Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all
+
+作者:[Fredric Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Fredric-Paul/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
+[2]: https://drive.google.com/file/d/17WEobD5Etfw5JnoKC1g4IME_XCtPNGGc/view
+[3]: https://www.eclipse.org/
+[4]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[5]: https://blogs.eclipse.org/post/mike-milinkovich/measuring-industrial-iot%E2%80%99s-evolution
+[6]: https://images.idgesg.net/images/article/2019/04/top-developer-concerns-2019-eclipse-foundation-100793974-large.jpg
+[7]: https://www.networkworld.com/article/3204529/interoperability-is-the-key-to-iot-success.html
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
+[9]: https://blog.newrelic.com/technology/popular-programming-languages-2018/
+[10]: https://images.idgesg.net/images/article/2019/04/top-iot-programming-languages-eclipse-foundation-100793973-large.jpg
+[11]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[12]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[13]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[14]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[15]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[16]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[17]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
+[20]: https://www.facebook.com/NetworkWorld/
+[21]: https://www.linkedin.com/company/network-world
From f99ad2010c358d448877c4bcc894d6099de34748 Mon Sep 17 00:00:00 2001
From: Jerry Li
Date: Mon, 22 Apr 2019 17:59:36 +0800
Subject: [PATCH 0098/1154] Update 20171226 The shell scripting trap.md
---
.../tech/20171226 The shell scripting trap.md | 90 ++++++++-----------
1 file changed, 39 insertions(+), 51 deletions(-)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/sources/tech/20171226 The shell scripting trap.md
index 32bfff26cc..f1fc05dcc9 100644
--- a/sources/tech/20171226 The shell scripting trap.md
+++ b/sources/tech/20171226 The shell scripting trap.md
@@ -7,22 +7,17 @@
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)
-The shell scripting trap
-Shell 脚本的陷阱
+Shell 脚本陷阱
======
-
-Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:
-Shell 脚本很棒,你可以非常轻松地写出有用的东西来。甚至是下面这个傻瓜式的命令:
+Shell 脚本很棒,你可以非常轻松地写出有用的东西来。甚至像是下面这个傻瓜式的命令:
```
-# Official way of naming Go-related things:
# 用含有 Go 的词汇起名字:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```
-Takes several lines of code and a lot more brainpower in many programming languages. For example in Ruby:
-如果用其他编程语言,就需要花费更多的脑力用多行代码实现,比如用 Ruby 的话:
+如果用其他编程语言,就需要花费更多的脑力,用多行代码实现,比如用 Ruby 的话:
```
puts(Dir['/usr/share/dict/*-english'].map do |f|
File.open(f)
@@ -31,10 +26,8 @@ puts(Dir['/usr/share/dict/*-english'].map do |f|
end.flatten.sample.chomp)
```
-The Ruby version isn’t that long, or even especially complicated. But the shell script version was so simple that I didn’t even need to actually test it to make sure it is correct, whereas I did have to test the Ruby version to ensure I didn’t make a mistake. It’s also twice as long and looks a lot more dense.
-Ruby 版本的代码虽然不是那么长,也没有那么复杂。但是 Shell 版是如此简单,我甚至不用实际测试就可以确保它是正确的。而 Ruby 版的我就没法确定它不会出错了,必须得测试一下。而且它要长一倍,看起来也更复杂。
+Ruby 版本的代码虽然不是那么长,也并不复杂。但是 shell 版是如此简单,我甚至不用实际测试就可以确保它是正确的。而 Ruby 版的我就没法确定它不会出错了,必须得测试一下。而且它要长一倍,看起来也更复杂。
-This is why people use shell scripts, it’s so easy to make something useful. Here’s is another example:
这就是人们使用 Shell 脚本的原因,它简单却实用。下面是另一个例子:
```
@@ -44,51 +37,46 @@ curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```
-This gets a list of all Dutch municipalities. I actually wrote this as a quick one-shot script to populate a database years ago, but it still works fine today, and it took me a minimum of effort to make it. Doing this in e.g. Ruby would take a lot more effort.
-这个脚本可以从维基百科上获取荷兰基层政权的列表。几年前我写了这个临时的脚本来快速生成一个数据库,到现在它仍然可以正常运行,当时写它并没有花费我多少精力。要使用 Ruby 完成同样的功能则会麻烦地多。
+这个脚本可以从维基百科上获取荷兰基层政权的列表。几年前我写了这个临时的脚本,用来快速生成一个数据库,到现在它仍然可以正常运行,当时写它并没有花费我多少精力。但要用 Ruby 完成同样的功能则会麻烦得多。
-But there’s a downside, as your script grows it will become increasingly harder to maintain, but you also don’t really want to rewrite it to something else, as you’ve already spent so much time on the shell script version.
-现在来说说 Shell 的缺点吧。随着代码量的增加,你的脚本会变得越来越难维护,但你也不会想用别的语言来重写一遍,因为你已经在这个 Shell 版上花费了很多时间。
+现在来说说 shell 的缺点吧。随着代码量的增加,你的脚本会变得越来越难以维护,但你也不会想用别的语言重写一遍,因为你已经在这个 shell 版上花费了很多时间。
-This is what I call ‘the shell script trap’, which is a special case of the [sunk cost fallacy][1].
+我把这种情况称为“Shell 脚本陷阱”,这是[沉没成本谬论][1]的一种特例(沉没成本谬论是一个经济学概念,可以简单理解为,对已经投入的成本可能被浪费而念念不忘)。
+
+实际上许多脚本会增长到超出预期的大小,你经常会花费过多的时间来“修复某个 bug”,或者“添加一个小功能”。如此循环往复,让人头大。
+
+如果你从一开始就使用 Python、Ruby 或是其他类似的语言来写这个程序,你可能会在写第一版的时候多花些时间,但以后维护起来就容易很多,bug 也肯定会少很多。
+
+以我的 [packman.vim][2] 脚本为例。它起初只包含一个简单的用来遍历所有目录的 `for` 循环,外加一个 `git pull`,但在这之后就刹不住车了,它现在有 200 行左右的代码,这肯定不能算是最复杂的脚本,但假如我一上来就按计划用 Go 来编写它的话,那么增加一些像“打印状态”或者“从配置文件里克隆新的 git 库”这样的功能就会轻松很多;添加“并行克隆”的支持也几乎不算个事儿了,而在 shell 脚本里却很难实现(尽管不是不可能)。事后看来,我本可以节省时间,并且获得更好的结果。
+
+出于类似的原因,我很后悔写出了许多这样的 shell 脚本,而我在 2018 年的新年誓言就是不要再犯类似的错误了。
+
+#### 附录:问题汇总
+
+需要指出的是,shell 编程的确存在一些实际的限制。下面是一些例子:
+
+ * 在处理一些包含“空格”或者其他“特殊”字符的文件名时,需要特别注意细节。绝大多数脚本都会犯错,即使是那些经验丰富的作者(比如我)编写的脚本,因为太容易写错了,[只添加引号是不够的][3]。
+
+ * 有许多所谓“正确”和“错误”的做法。你应该用 `which` 还是 `command`?该用 `$@` 还是 `$*`,是不是得加引号?你是该用 `cmd $arg` 还是 `cmd "$arg"`?等等等等。
+
+ * 你没法在变量里存储空字节(0x00);shell 脚本处理二进制数据很麻烦。
+
+ * 虽然你可以非常快速地写出有用的东西,但实现更复杂的算法则要痛苦许多,即使用 ksh/zsh/bash 扩展也是如此。我上面那个解析 HTML 的脚本临时用用是可以的,但你真的不会想在生产环境中使用这种脚本。
+
+ * 很难写出跨平台的通用型 shell 脚本。`/bin/sh` 可能是 `dash` 或者 `bash`,不同的 shell 有不同的运行方式。外部工具如 `grep`、`sed` 等,不一定能支持同样的参数。你能确定你的脚本可以适用于 Linux、macOS 和 Windows 的所有版本吗(过去、现在和将来)?
+
+ * 调试 shell 脚本会很难,特别是你眼中的语法可能会很快变得模糊起来,并不是所有人都熟悉 shell 编程的语境。
+
+ * 处理错误会很棘手(检查 `$?` 或是 `set -e`),排查一些超过“出了个小错”级别的复杂错误几乎是不可能的。
+
+ * 除非你使用了 `set -u`,变量未定义将不会报错,而这会导致一些“搞笑事件”,比如 `rm -r ~/$undefined` 会删除用户的整个家目录([瞅瞅 Github 上的这个悲剧][4])。
+
+ * 所有东西都是字符串。一些 shell 引入了数组,能用,但是语法非常丑陋和模糊。带分数的数字运算仍然难以应付,并且依赖像 `bc` 或 `dc` 这样的外部工具(`$(( .. ))` 这种方式只能对付一下整数)。
-And many scripts do grow beyond their original intended size, and often you will spend a lot more time than you should on “fixing that one bug”, or “adding just one small feature”. Rinse, repeat.
+**反馈**
-If you had written it in Python or Ruby or another similar language from the start, you would have spent some more time writing the original version, but would have spent much less time maintaining it, while almost certainly having fewer bugs.
-
-Take my [packman.vim][2] script for example. It started out as a simple `for` loop over all directories and a `git pull` and has grown from there. At about 200 lines it’s hardly the most complex script, but had I written it in Go as I originally planned then it would have been much easier to add support for printing out the status or cloning new repos from a config file. It would also be almost trivial to add support for parallel clones, which is hard (though not impossible) to do correct in a shell script. In hindsight, I would have saved time, and gotten a better result to boot.
-
-I regret writing most shell scripts I’ve written for similar reasons, and my 2018 new year’s pledge will be to not write any more.
-
-#### Appendix: the problems
-
-And to be clear, shell scripting does come with some real limitation. Some examples:
-
- * Dealing with filenames that contain spaces or other ‘special’ characters requires careful attention to detail. The vast majority of scripts get this wrong, even when written by experienced authors who care about such things (e.g. me), because it’s so easy to do it wrong. [Adding quotes is not enough][3].
-
- * There are many “right” and “wrong” ways to do things. Should you use `which` or `command`? Should you use `$@` or `$*`, and should that be quoted? Should you use `cmd $arg` or `cmd "$arg"`? etc. etc.
-
- * You cannot store any NULL bytes (0x00) in variables; it is very hard to make shell scripts deal with binary data.
-
- * While you can make something very useful very quickly, implementing more complex algorithms can be very painful – if not nigh-impossible – even when using the ksh/zsh/bash extensions. My ad-hoc HTML parsing in the example above was okay for a quick one-off script, but you really don’t want to do things like that in a production-script.
-
- * It can be hard to write shell scripts that work well on all platforms. `/bin/sh` could be `dash` or `bash`, and will behave different. External tools such as `grep`, `sed`, etc. may or may not support certain flags. Are you sure that your script works on all versions (past, present, and future) of Linux, macOS, and Windows equally well?
-
- * Debugging shell scripts can be hard, especially as the syntax can get fairly obscure quite fast, and not everyone is equally well versed in shell scripting.
-
- * Error handling can be tricky (check `$?` or `set -e`), and doing something more advanced beyond “an error occurred” is practically impossible.
-
- * Undefined variables are not an error unless you use `set -u`, leading to “fun stuff” like `rm -r ~/$undefined` deleting user’s home dir ([not a theoretical problem][4]).
-
- * Everything is a string. Some shells add arrays, which works but the syntax is obscure and ugly. Numeric computations with fractions remain tricky and rely on external tools such as `bc` or `dc` (`$(( .. ))` expansion only works for integers).
-
-
-
-
-**Feedback**
-
-You can mail me at [martin@arp242.net][5] or [create a GitHub issue][6] for feedback, questions, etc.
+你可以发邮件到 [martin@arp242.net][5],或者[在 GitHub 上创建 issue][6] 来向我反馈,提问等。
--------------------------------------------------------------------------------
From cc953b3f4b3b9da70240c5c223ed3f1b5b141818 Mon Sep 17 00:00:00 2001
From: Jerry Li
Date: Mon, 22 Apr 2019 18:03:36 +0800
Subject: [PATCH 0099/1154] Update 20171226 The shell scripting trap.md
---
sources/tech/20171226 The shell scripting trap.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/sources/tech/20171226 The shell scripting trap.md
index f1fc05dcc9..15659760a1 100644
--- a/sources/tech/20171226 The shell scripting trap.md
+++ b/sources/tech/20171226 The shell scripting trap.md
@@ -84,7 +84,7 @@ via: https://arp242.net/weblog/shell-scripting-trap.html
作者:[Martin Tournoij][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[jdh8383](https://github.com/jdh8383)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 5b4a953af351c2f94e2696b90d089061921d4291 Mon Sep 17 00:00:00 2001
From: Jerry Li
Date: Mon, 22 Apr 2019 18:12:34 +0800
Subject: [PATCH 0100/1154] commit translation
---
{sources => translated}/tech/20171226 The shell scripting trap.md | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {sources => translated}/tech/20171226 The shell scripting trap.md (100%)
diff --git a/sources/tech/20171226 The shell scripting trap.md b/translated/tech/20171226 The shell scripting trap.md
similarity index 100%
rename from sources/tech/20171226 The shell scripting trap.md
rename to translated/tech/20171226 The shell scripting trap.md
From 9af63fad5ce1d6d5ac9c04b9c6043cb2df2729f2 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 23:11:16 +0800
Subject: [PATCH 0101/1154] PRF:20190321 How To Check If A Port Is Open On
Multiple Remote Linux System Using Shell Script With nc Command.md
@zero-MK
---
...stem Using Shell Script With nc Command.md | 68 +++++++++----------
1 file changed, 31 insertions(+), 37 deletions(-)
diff --git a/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md b/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
index a02cd504d5..f5b60aca21 100644
--- a/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
+++ b/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
@@ -1,6 +1,6 @@
[#]: collector: "lujun9972"
[#]: translator: "zero-MK"
-[#]: reviewer: " "
+[#]: reviewer: "wxy"
[#]: publisher: " "
[#]: url: " "
[#]: subject: "How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?"
@@ -8,26 +8,20 @@
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
+如何检查多个远程 Linux 系统是否打开了指定端口?
+======
-# 如何使用带有 nc 命令的 Shell 脚本来检查多个远程 Linux 系统是否打开了指定端口?
+我们最近写了一篇文章关于如何检查远程 Linux 服务器是否打开指定端口。它能帮助你检查单个服务器。
-我们最近写了一篇文章关于如何检查远程 Linux 服务器是否打开指定端口。它能帮助您检查单个服务器。
+如果要检查五个服务器有没有问题,可以使用以下任何一个命令,如 `nc`(netcat)、`nmap` 和 `telnet`。但是如果想检查 50 多台服务器,那么你的解决方案是什么?
-如果要检查五个服务器有没有问题,可以使用以下任何一个命令,如 nc(netcat),nmap 和 telnet。
+要检查所有服务器并不容易,如果你一个一个这样做,完全没有必要,因为这样你将会浪费大量的时间。为了解决这种情况,我使用 `nc` 命令编写了一个 shell 小脚本,它将允许我们扫描任意数量服务器给定的端口。
-但是如果想检查 50 多台服务器,那么你的解决方案是什么?
+如果你要查找单个服务器扫描,你有多个选择,你只需阅读 [检查远程 Linux 系统上的端口是否打开?][1] 了解更多信息。
-要检查所有服务器并不容易,如果你一个一个这样做,完全没有必要,因为这样你将会浪费大量的时间。
+本教程中提供了两个脚本,这两个脚本都很有用。这两个脚本都用于不同的目的,你可以通过阅读标题轻松理解其用途。
-为了解决这种情况,我使用 nc 命令编写了一个 shell 小脚本,它将允许我们扫描任意数量服务器给定的端口。
-
-如果您要查找单个服务器扫描,您有多个选择,你只需导航到到 **[检查远程 Linux 系统上的端口是否打开?][1]** 了解更多信息。
-
-本教程中提供了两个脚本,这两个脚本都很有用。
-
-这两个脚本都用于不同的目的,您可以通过阅读标题轻松理解其用途。
-
-在你阅读这篇文章之前,我会问你几个问题,如果你知道答案或者你可以通过阅读这篇文章来获得答案。
+在你阅读这篇文章之前,我会问你几个问题,如果你不知道答案你可以通过阅读这篇文章来获得答案。
如何检查一个远程 Linux 服务器上指定的端口是否打开?
@@ -35,17 +29,17 @@
如何检查多个远程 Linux 服务器上是否打开了多个指定的端口?
-### 什么是nc(netcat)命令?
+### 什么是 nc(netcat)命令?
-nc 即 netcat 。Netcat 是一个简单实用的 Unix 程序,它使用 TCP 或 UDP 协议进行跨网络连接进行数据读取和写入。
+`nc` 即 netcat。它是一个简单实用的 Unix 程序,它使用 TCP 或 UDP 协议进行跨网络连接进行数据读取和写入。
-它被设计成一个可靠的 “后端” (back-end) 工具,我们可以直接使用或由其他程序和脚本轻松驱动它。
+它被设计成一个可靠的 “后端” 工具,我们可以直接使用或由其他程序和脚本轻松驱动它。
-同时,它也是一个功能丰富的网络调试和探索工具,因为它可以创建您需要的几乎任何类型的连接,并具有几个有趣的内置功能。
+同时,它也是一个功能丰富的网络调试和探索工具,因为它可以创建你需要的几乎任何类型的连接,并具有几个有趣的内置功能。
-Netcat 有三个主要的模式。分别是连接模式,监听模式和隧道模式。
+netcat 有三个主要的模式。分别是连接模式,监听模式和隧道模式。
-**nc(netcat)的通用语法:**
+`nc`(netcat)的通用语法:
```
$ nc [-options] [HostName or IP] [PortNumber]
@@ -55,9 +49,9 @@ $ nc [-options] [HostName or IP] [PortNumber]
如果要检查多个远程 Linux 服务器上给定端口是否打开,请使用以下 shell 脚本。
-在我的例子中,我们将检查端口 22 是否在以下远程服务器中打开,确保您已经更新文件中的服务器列表而不是还是使用我的服务器列表。
+在我的例子中,我们将检查端口 22 是否在以下远程服务器中打开,确保你已经更新文件中的服务器列表而不是使用我的服务器列表。
-您必须确保已经更新服务器列表 : `server-list.txt file` 。每个服务器(IP)应该在单独的行中。
+你必须确保已经更新服务器列表 :`server-list.txt` 。每个服务器(IP)应该在单独的行中。
```
# cat server-list.txt
@@ -77,12 +71,12 @@ $ nc [-options] [HostName or IP] [PortNumber]
#!/bin/sh
for server in `more server-list.txt`
do
-#echo $i
-nc -zvw3 $server 22
+ #echo $i
+ nc -zvw3 $server 22
done
```
-设置 `port_scan.sh` 文件的可执行权限。
+设置 `port_scan.sh` 文件的可执行权限。
```
$ chmod +x port_scan.sh
@@ -105,9 +99,9 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
如果要检查多个服务器中的多个端口,请使用下面的脚本。
-在我的例子中,我们将检查给定服务器的 22 和 80 端口是否打开。确保您必须替换所需的端口和服务器名称而不使用是我的。
+在我的例子中,我们将检查给定服务器的 22 和 80 端口是否打开。确保你必须替换所需的端口和服务器名称而不使用是我的。
-您必须确保已经将要检查的端口写入 `port-list.txt` 文件中。每个端口应该在一个单独的行中。
+你必须确保已经将要检查的端口写入 `port-list.txt` 文件中。每个端口应该在一个单独的行中。
```
# cat port-list.txt
@@ -115,7 +109,7 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
80
```
-您必须确保已经将要检查的服务器( IP 地址 )写入 `server-list.txt` 到文件中。每个服务器( IP ) 应该在单独的行中。
+你必须确保已经将要检查的服务器(IP 地址)写入 `server-list.txt` 到文件中。每个服务器(IP) 应该在单独的行中。
```
# cat server-list.txt
@@ -135,12 +129,12 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
#!/bin/sh
for server in `more server-list.txt`
do
-for port in `more port-list.txt`
-do
-#echo $server
-nc -zvw3 $server $port
-echo ""
-done
+ for port in `more port-list.txt`
+ do
+ #echo $server
+ nc -zvw3 $server $port
+ echo ""
+ done
done
```
@@ -180,10 +174,10 @@ via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[zero-MK](https://github.com/zero-mk)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/
+[1]: https://linux.cn/article-10675-1.html
From 5c041a17bbf9ff647041f587ec622ff127c51a87 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 22 Apr 2019 23:12:32 +0800
Subject: [PATCH 0102/1154] PUB:20190321 How To Check If A Port Is Open On
Multiple Remote Linux System Using Shell Script With nc Command.md
@zero-MK https://linux.cn/article-10766-1.html
---
... Remote Linux System Using Shell Script With nc Command.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md (98%)
diff --git a/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md b/published/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
similarity index 98%
rename from translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
rename to published/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
index f5b60aca21..073cd7b1b3 100644
--- a/translated/tech/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
+++ b/published/20190321 How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command.md
@@ -1,8 +1,8 @@
[#]: collector: "lujun9972"
[#]: translator: "zero-MK"
[#]: reviewer: "wxy"
-[#]: publisher: " "
-[#]: url: " "
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-10766-1.html"
[#]: subject: "How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?"
[#]: via: "https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
From 5df1f5241a2140f1117dbcb25bf40c3cbc98d3f8 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 23 Apr 2019 08:56:10 +0800
Subject: [PATCH 0103/1154] translated
---
...rted with Mercurial for version control.md | 34 +++++++++----------
1 file changed, 17 insertions(+), 17 deletions(-)
rename {sources => translated}/tech/20190415 Getting started with Mercurial for version control.md (57%)
diff --git a/sources/tech/20190415 Getting started with Mercurial for version control.md b/translated/tech/20190415 Getting started with Mercurial for version control.md
similarity index 57%
rename from sources/tech/20190415 Getting started with Mercurial for version control.md
rename to translated/tech/20190415 Getting started with Mercurial for version control.md
index a682517b5f..078589dd2d 100644
--- a/sources/tech/20190415 Getting started with Mercurial for version control.md
+++ b/translated/tech/20190415 Getting started with Mercurial for version control.md
@@ -7,17 +7,17 @@
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
-Getting started with Mercurial for version control
+开始使用 Mercurial 进行版本控制
======
-Learn the basics of Mercurial, a distributed version control system
-written in Python.
+了解 Mercurial 的基础知识,它是一个用 Python 写的分布式版本控制系统。
+
![][1]
-[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
+[Mercurial][2] 是一个用 Python 编写的分布式版本控制系统。因为它是用高级语言编写的,所以你可以用 Python 函数编写一个 Mercurial 扩展。
-There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**. This is the most amenable way to develop local extensions!
+在[官方文档中][3]说明了几种安装 Mercurial 的方法。我最喜欢的一种方法不在里面:使用 **pip**。这是开发本地扩展的最合适方式!
-For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
+目前,Mercurial 仅支持 Python 2.7,因此你需要创建一个 Python 2.7 虚拟环境:
```
@@ -25,7 +25,7 @@ python2 -m virtualenv mercurial-env
./mercurial-env/bin/pip install mercurial
```
-To have a short command, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**.
+为了有一个简短的命令,以及满足人们对化学幽默的无法满足的需求,命令称之为 **hg**。
```
@@ -37,7 +37,7 @@ $ source mercurial-env/bin/activate
(mercurial-env)$
```
-The status is empty since you do not have any files. Add a couple of files:
+由于还没有任何文件,因此状态为空。添加几个文件:
```
@@ -58,11 +58,11 @@ date: Fri Mar 29 12:42:43 2019 -0700
summary: Adding stuff
```
-The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and removes any files that have been removed.
+**addremove** 命令很有用:它将任何未被忽略的新文件添加到托管文件列表中,并移除任何已删除的文件。
-As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
+如我所说,Mercurial 扩展用 Python 写成,它们只是常规的 Python 模块。
-This is an example of a short Mercurial extension:
+这是一个简短的 Mercurial 扩展示例:
```
@@ -78,14 +78,14 @@ def say_hello(ui, repo, **opts):
ui.write("hello ", opts['whom'], "\n")
```
-A simple way to test it is to put it in a file in the virtual environment manually:
+一个简单的测试方法是将它手动加入虚拟环境中的文件中:
```
`$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py`
```
-Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
+然后你需要_启用_该扩展。你可以先只在当前仓库中启用它:
```
@@ -94,7 +94,7 @@ $ cat >> .hg/hgrc
hello_ext =
```
-Now, a greeting is possible:
+现在,就可以打招呼了:
```
@@ -102,9 +102,9 @@ Now, a greeting is possible:
hello world
```
-Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
+大多数扩展会做更多有用的东西,甚至可能与 Mercurial 有关。 **repo** 对象是 **mercurial.hg.repository** 的对象。
-Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
+有关 Mercurial API 的更多信息,请参阅[官方文档][5]。并访问[官方仓库][6]获取更多示例和灵感。
--------------------------------------------------------------------------------
@@ -112,7 +112,7 @@ via: https://opensource.com/article/19/4/getting-started-mercurial
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 2e7c76df92a7d085164523363f1bb3262e546115 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 23 Apr 2019 08:59:56 +0800
Subject: [PATCH 0104/1154] translating
---
...0190419 4 cool new projects to try in COPR for April 2019.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md b/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
index e8d7dd9f7c..1c6cd5ccaa 100644
--- a/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
+++ b/sources/tech/20190419 4 cool new projects to try in COPR for April 2019.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 6f0b23601953ac7497d8e49b7d9a1223ce6da046 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 09:17:07 +0800
Subject: [PATCH 0105/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190419=20Inte?=
=?UTF-8?q?l=20follows=20AMD=E2=80=99s=20lead=20(again)=20into=20single-so?=
=?UTF-8?q?cket=20Xeon=20servers=20sources/talk/20190419=20Intel=20follows?=
=?UTF-8?q?=20AMD-s=20lead=20(again)=20into=20single-socket=20Xeon=20serve?=
=?UTF-8?q?rs.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...(again) into single-socket Xeon servers.md | 61 +++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md
diff --git a/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md b/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md
new file mode 100644
index 0000000000..9685591b2c
--- /dev/null
+++ b/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md
@@ -0,0 +1,61 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Intel follows AMD’s lead (again) into single-socket Xeon servers)
+[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Intel follows AMD’s lead (again) into single-socket Xeon servers
+======
+Intel's new U series of processors are aimed at the low-end market where one processor is good enough.
+![Intel][1]
+
+I’m really starting to wonder who the leader in x86 really is these days because it seems Intel is borrowing another page out of AMD’s playbook.
+
+Intel launched a whole lot of new Xeon Scalable processors earlier this month, but it neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, saying only that it “didn’t broadly promote them.”
+
+**[ Read also:[Intel makes a play for high-speed fiber networking for data centers][2] ]**
+
+To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and Intel (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system.
+
+The new U series isn’t available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but they do show up if you search. Intel says it is looking into that. There are three processors for now: one with 24 cores and two with 20 cores.
+
+The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed and a 3.9GHz turbo clock and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with a single socket, that’s 165 fewer watts of total power draw than a two-socket setup.
+
+Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260’s $4,702 list price. So, that will translate into much cheaper servers.
+
+The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz and a 150 watt TDP. Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210 except its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed.
+
+**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**
+
+All of the processors support up to 1TB of DDR4-2933 memory and Intel’s Optane persistent memory.
+
+In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD’s lead.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg
+[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html
+[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html
+[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html
+[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html
+[6]: https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html
+[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html
+[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[9]: https://www.facebook.com/NetworkWorld/
+[10]: https://www.linkedin.com/company/network-world
From 1faf614c72c9108e6516aaae6543b7f17804ce75 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 09:53:33 +0800
Subject: [PATCH 0106/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190417=20Inte?=
=?UTF-8?q?r-process=20communication=20in=20Linux:=20Sockets=20and=20signa?=
=?UTF-8?q?ls=20sources/tech/20190417=20Inter-process=20communication=20in?=
=?UTF-8?q?=20Linux-=20Sockets=20and=20signals.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...unication in Linux- Sockets and signals.md | 388 ++++++++++++++++++
1 file changed, 388 insertions(+)
create mode 100644 sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md
diff --git a/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md
new file mode 100644
index 0000000000..40f64a2f5a
--- /dev/null
+++ b/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md
@@ -0,0 +1,388 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Sockets and signals)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-networking)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Sockets and signals
+======
+
+Learn how processes synchronize with each other in Linux.
+
+
+
+This is the third and final article in a series about [interprocess communication][1] (IPC) in Linux. The [first article][2] focused on IPC through shared storage (files and memory segments), and the [second article][3] did the same for basic channels: pipes (named and unnamed) and message queues. This article moves from IPC at the high end (sockets) to IPC at the low end (signals). Code examples flesh out the details.
+
+### Sockets
+
+Just as pipes come in two flavors (named and unnamed), so do sockets. IPC sockets (aka Unix domain sockets) enable channel-based communication for processes on the same physical device (host), whereas network sockets enable this kind of IPC for processes that can run on different hosts, thereby bringing networking into play. Network sockets need support from an underlying protocol such as TCP (Transmission Control Protocol) or the lower-level UDP (User Datagram Protocol).
+
+By contrast, IPC sockets rely upon the local system kernel to support communication; in particular, IPC sockets communicate using a local file as a socket address. Despite these implementation differences, the IPC socket and network socket APIs are the same in the essentials. The forthcoming example covers network sockets, but the sample server and client programs can run on the same machine because the server uses the network address localhost (127.0.0.1), the address of the local machine.
+
+Sockets configured as streams (discussed below) are bidirectional, and control follows a client/server pattern: the client initiates the conversation by trying to connect to a server, which tries to accept the connection. If everything works, requests from the client and responses from the server then can flow through the channel until the channel is closed on either end, thereby breaking the connection.
+
+An iterative server, which is suited for development only, handles connected clients one at a time to completion: the first client is handled from start to finish, then the second, and so on. The downside is that the handling of a particular client may hang, which then starves all the clients waiting behind. A production-grade server would be concurrent, typically using some mix of multi-processing and multi-threading. For example, the Nginx web server on my desktop machine has a pool of four worker processes that can handle client requests concurrently. The following code example keeps the clutter to a minimum by using an iterative server; the focus thus remains on the basic API, not on concurrency.
+
+Finally, the socket API has evolved significantly over time as various POSIX refinements have emerged. The current sample code for server and client is deliberately simple but underscores the bidirectional aspect of a stream-based socket connection. Here's a summary of the flow of control, with the server started in a terminal then the client started in a separate terminal:
+
+ * The server awaits client connections and, given a successful connection, reads the bytes from the client.
+
+ * To underscore the two-way conversation, the server echoes back to the client the bytes received from the client. These bytes are ASCII character codes, which make up book titles.
+
+ * The client writes book titles to the server process and then reads the same titles echoed from the server. Both the server and the client print the titles to the screen. Here is the server's output, essentially the same as the client's:
+
+```
+Listening on port 9876 for clients...
+War and Peace
+Pride and Prejudice
+The Sound and the Fury
+```
+
+
+
+
+#### Example 1. The socket server
+
+```
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include "sock.h"
+
+void report(const char* msg, int terminate) {
+ perror(msg);
+ if (terminate) exit(-1); /* failure */
+}
+
+int main() {
+ int fd = socket(AF_INET, /* network versus AF_LOCAL */
+ SOCK_STREAM, /* reliable, bidirectional, arbitrary payload size */
+ 0); /* system picks underlying protocol (TCP) */
+ if (fd < 0) report("socket", 1); /* terminate */
+
+ /* bind the server's local address in memory */
+ struct sockaddr_in saddr;
+ memset(&saddr, 0, sizeof(saddr)); /* clear the bytes */
+ saddr.sin_family = AF_INET; /* versus AF_LOCAL */
+ saddr.sin_addr.s_addr = htonl(INADDR_ANY); /* host-to-network endian */
+ saddr.sin_port = htons(PortNumber); /* for listening */
+
+ if (bind(fd, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
+ report("bind", 1); /* terminate */
+
+ /* listen to the socket */
+ if (listen(fd, MaxConnects) < 0) /* listen for clients, up to MaxConnects */
+ report("listen", 1); /* terminate */
+
+ fprintf(stderr, "Listening on port %i for clients...\n", PortNumber);
+ /* a server traditionally listens indefinitely */
+ while (1) {
+ struct sockaddr_in caddr; /* client address */
+ socklen_t len = sizeof(caddr); /* address length could change */
+
+ int client_fd = accept(fd, (struct sockaddr*) &caddr, &len); /* accept blocks */
+ if (client_fd < 0) {
+ report("accept", 0); /* don't terminate, though there's a problem */
+ continue;
+ }
+
+ /* read from client */
+ int i;
+ for (i = 0; i < ConversationLen; i++) {
+ char buffer[BuffSize + 1];
+ memset(buffer, '\0', sizeof(buffer));
+ int count = read(client_fd, buffer, sizeof(buffer));
+ if (count > 0) {
+ puts(buffer);
+ write(client_fd, buffer, sizeof(buffer)); /* echo as confirmation */
+ }
+ }
+ close(client_fd); /* break connection */
+ } /* while(1) */
+ return 0;
+}
+```
+
+The server program above performs the classic four-step to ready itself for client requests and then to accept individual requests. Each step is named after a system function that the server calls:
+
+ 1. **socket(…)** : get a file descriptor for the socket connection
+ 2. **bind(…)** : bind the socket to an address on the server's host
+ 3. **listen(…)** : listen for client requests
+ 4. **accept(…)** : accept a particular client request
+
+
+
+The **socket** call in full is:
+
+```
+int sockfd = socket(AF_INET, /* versus AF_LOCAL */
+ SOCK_STREAM, /* reliable, bidirectional */
+ 0); /* system picks protocol (TCP) */
+```
+
+The first argument specifies a network socket as opposed to an IPC socket. There are several options for the second argument, but **SOCK_STREAM** and **SOCK_DGRAM** (datagram) are likely the most used. A stream-based socket supports a reliable channel in which lost or altered messages are reported; the channel is bidirectional, and the payloads from one side to the other can be arbitrary in size. By contrast, a datagram-based socket is unreliable (best effort), unidirectional, and requires fixed-sized payloads. The third argument to **socket** specifies the protocol. For the stream-based socket in play here, there is a single choice, which the zero represents: TCP. Because a successful call to **socket** returns the familiar file descriptor, a socket is written and read with the same syntax as, for example, a local file.
+
+The **bind** call is the most complicated, as it reflects various refinements in the socket API. The point of interest is that this call binds the socket to a memory address on the server machine. However, the **listen** call is straightforward:
+
+```
+if (listen(fd, MaxConnects) < 0)
+```
+
+The first argument is the socket's file descriptor and the second specifies how many client connections can be accommodated before the server issues a connection refused error on an attempted connection. ( **MaxConnects** is set to 8 in the header file sock.h.)
+
+The **accept** call defaults to a blocking wait: the server does nothing until a client attempts to connect and then proceeds. The **accept** function returns **-1** to indicate an error. If the call succeeds, it returns another file descriptor—for a read/write socket in contrast to the accepting socket referenced by the first argument in the **accept** call. The server uses the read/write socket to read requests from the client and to write responses back. The accepting socket is used only to accept client connections.
+
+By design, a server runs indefinitely. Accordingly, the server can be terminated with a **Ctrl+C** from the command line.
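+
+Both programs include the header file sock.h, which the article does not reproduce but which supplies the shared names **PortNumber** , **MaxConnects** , **ConversationLen** , **BuffSize** , and **Host**. Here is a minimal sketch of what such a header might contain; the port, backlog, host, and title count follow from the article, while the buffer size is an assumption:
+
+```
+#ifndef SOCK_H
+#define SOCK_H
+
+#define PortNumber      9876          /* the server's listening port (per the sample output) */
+#define MaxConnects     8             /* listen backlog (the article says 8) */
+#define ConversationLen 3             /* number of book titles exchanged */
+#define BuffSize        256           /* assumed read/write buffer size */
+#define Host            "localhost"   /* the client connects to 127.0.0.1 */
+
+#endif
+```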
+
+#### Example 2. The socket client
+
+```
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <netdb.h>
+#include <arpa/inet.h>
+#include "sock.h"
+
+const char* books[] = {"War and Peace",
+ "Pride and Prejudice",
+ "The Sound and the Fury"};
+
+void report(const char* msg, int terminate) {
+ perror(msg);
+ if (terminate) exit(-1); /* failure */
+}
+
+int main() {
+ /* fd for the socket */
+ int sockfd = socket(AF_INET, /* versus AF_LOCAL */
+ SOCK_STREAM, /* reliable, bidirectional */
+ 0); /* system picks protocol (TCP) */
+ if (sockfd < 0) report("socket", 1); /* terminate */
+
+ /* get the address of the host */
+ struct hostent* hptr = gethostbyname(Host); /* localhost: 127.0.0.1 */
+ if (!hptr) report("gethostbyname", 1); /* is hptr NULL? */
+ if (hptr->h_addrtype != AF_INET) /* versus AF_LOCAL */
+ report("bad address family", 1);
+
+ /* connect to the server: configure server's address 1st */
+ struct sockaddr_in saddr;
+ memset(&saddr, 0, sizeof(saddr));
+ saddr.sin_family = AF_INET;
+ saddr.sin_addr.s_addr =
+ ((struct in_addr*) hptr->h_addr_list[0])->s_addr;
+ saddr.sin_port = htons(PortNumber); /* port number in big-endian */
+
+ if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
+ report("connect", 1);
+
+ /* Write some stuff and read the echoes. */
+ puts("Connect to server, about to write some stuff...");
+ int i;
+ for (i = 0; i < ConversationLen; i++) {
+ if (write(sockfd, books[i], strlen(books[i])) > 0) {
+ /* get confirmation echoed from server and print */
+ char buffer[BuffSize + 1];
+ memset(buffer, '\0', sizeof(buffer));
+ if (read(sockfd, buffer, sizeof(buffer)) > 0)
+ puts(buffer);
+ }
+ }
+ puts("Client done, about to exit...");
+ close(sockfd); /* close the connection */
+ return 0;
+}
+```
+
+The client program's setup code is similar to the server's. The principal difference between the two is that the client neither listens nor accepts, but instead connects:
+
+```
+if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
+```
+
+The **connect** call might fail for several reasons; for example, the client has the wrong server address or too many clients are already connected to the server. If the **connect** operation succeeds, the client writes requests and then reads the echoed responses in a **for** loop. After the conversation, both the server and the client **close** the read/write socket, although a close operation on either side is sufficient to close the connection. The client exits thereafter but, as noted earlier, the server remains open for business.
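+
+To try the conversation end to end, the two programs can be built and run from separate terminals. A minimal sketch, assuming the sources are saved as server.c and client.c alongside sock.h (the file names are assumptions; the full sample code is linked at the end of the article):
+
+```
+gcc -o server server.c   # build the iterative server
+gcc -o client client.c   # build the client
+./server                 # terminal 1: listens on port 9876
+./client                 # terminal 2: writes titles, reads the echoes
+```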
+
+The socket example, with request messages echoed back to the client, hints at the possibilities of arbitrarily rich conversations between the server and the client. Perhaps this is the chief appeal of sockets. It is common on modern systems for client applications (e.g., a database client) to communicate with a server through a socket. As noted earlier, local IPC sockets and network sockets differ only in a few implementation details; in general, IPC sockets have lower overhead and better performance. The communication API is essentially the same for both.
+
+### Signals
+
+A signal interrupts an executing program and, in this sense, communicates with it. Most signals can be either ignored (blocked) or handled (through designated code), with **SIGSTOP** (pause) and **SIGKILL** (terminate immediately) as the two notable exceptions. Symbolic constants such as **SIGKILL** have integer values, in this case, 9.
+
+Signals can arise in user interaction. For example, a user hits **Ctrl+C** from the command line to terminate a program started from the command line; **Ctrl+C** generates a **SIGINT** signal. **SIGINT** , unlike **SIGKILL** , can be either blocked or handled. One process also can signal another, thereby making signals an IPC mechanism.
+
+Consider how a multi-processing application such as the Nginx web server might be shut down gracefully from another process. The **kill** function:
+
+```
+int kill(pid_t pid, int signum); /* declaration */
+```
+
+can be used by one process to terminate another process or group of processes. If the first argument to function **kill** is greater than zero, this argument is treated as the pid (process ID) of the targeted process; if the argument is zero, the argument identifies the group of processes to which the signal sender belongs.
+
+The second argument to **kill** is either a standard signal number (e.g., **SIGTERM** or **SIGKILL** ) or 0, which makes the call to **kill** a query about whether the pid in the first argument is indeed valid. The graceful shutdown of a multi-processing application thus could be accomplished by sending a terminate signal—a call to the **kill** function with **SIGTERM** as the second argument—to the group of processes that make up the application. (The Nginx master process could terminate the worker processes with a call to **kill** and then exit itself.) The **kill** function, like so many library functions, houses power and flexibility in a simple invocation syntax.
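+
+As an aside, the zero-signal query just mentioned makes for a handy liveness check. A minimal sketch: **kill** with a signal number of 0 delivers no signal at all but still performs the pid-validity and permission checks:
+
+```
+#include <sys/types.h>
+#include <signal.h>
+#include <errno.h>
+
+/* Returns nonzero if pid names a live process: kill with signum 0
+   checks validity without delivering any signal. */
+int pid_alive(pid_t pid) {
+  return kill(pid, 0) == 0 || errno == EPERM; /* EPERM: process exists, no permission to signal it */
+}
+```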
+
+#### Example 3. The graceful shutdown of a multi-processing system
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <signal.h>
+#include <sys/wait.h>
+
+void graceful(int signum) {
+ printf("\tChild confirming received signal: %i\n", signum);
+ puts("\tChild about to terminate gracefully...");
+ sleep(1);
+ puts("\tChild terminating now...");
+ _exit(0); /* fast-track notification of parent */
+}
+
+void set_handler() {
+ struct sigaction current;
+ sigemptyset(&current.sa_mask); /* clear the signal set */
+ current.sa_flags = 0; /* enables setting sa_handler, not sa_sigaction */
+ current.sa_handler = graceful; /* specify a handler */
+ sigaction(SIGTERM, &current, NULL); /* register the handler */
+}
+
+void child_code() {
+ set_handler();
+
+ while (1) { /** loop until interrupted **/
+ sleep(1);
+ puts("\tChild just woke up, but going back to sleep.");
+ }
+}
+
+void parent_code(pid_t cpid) {
+ puts("Parent sleeping for a time...");
+ sleep(5);
+
+ /* Try to terminate child. */
+ if (-1 == kill(cpid, SIGTERM)) {
+ perror("kill");
+ exit(-1);
+ }
+ wait(NULL); /** wait for child to terminate **/
+ puts("My child terminated, about to exit myself...");
+}
+
+int main() {
+ pid_t pid = fork();
+ if (pid < 0) {
+ perror("fork");
+ return -1; /* error */
+ }
+ if (0 == pid)
+ child_code();
+ else
+ parent_code(pid);
+ return 0; /* normal */
+}
+```
+
+The shutdown program above simulates the graceful shutdown of a multi-processing system, in this case, a simple one consisting of a parent process and a single child process. The simulation works as follows:
+
+ * The parent process tries to fork a child. If the fork succeeds, each process executes its own code: the child executes the function **child_code** , and the parent executes the function **parent_code**.
+ * The child process goes into a potentially infinite loop in which the child sleeps for a second, prints a message, goes back to sleep, and so on. It is precisely a **SIGTERM** signal from the parent that causes the child to execute the signal-handling callback function **graceful**. The signal thus breaks the child process out of its loop and sets up the graceful termination of both the child and the parent. The child prints a message before terminating.
+ * The parent process, after forking the child, sleeps for five seconds so that the child can execute for a while; of course, the child mostly sleeps in this simulation. The parent then calls the **kill** function with **SIGTERM** as the second argument, waits for the child to terminate, and then exits.
+
+
+
+Here is the output from a sample run:
+
+```
+% ./shutdown
+Parent sleeping for a time...
+ Child just woke up, but going back to sleep.
+ Child just woke up, but going back to sleep.
+ Child just woke up, but going back to sleep.
+ Child just woke up, but going back to sleep.
+ Child confirming received signal: 15 ## SIGTERM is 15
+ Child about to terminate gracefully...
+ Child terminating now...
+My child terminated, about to exit myself...
+```
+
+For the signal handling, the example uses the **sigaction** library function (POSIX recommended) rather than the legacy **signal** function, which has portability issues. Here are the code segments of chief interest:
+
+ * If the call to **fork** succeeds, the parent executes the **parent_code** function and the child executes the **child_code** function. The parent waits for five seconds before signaling the child:
+
+```
+ puts("Parent sleeping for a time...");
+sleep(5);
+if (-1 == kill(cpid, SIGTERM)) {
+...sleepkillcpidSIGTERM...
+```
+
+If the **kill** call succeeds, the parent does a **wait** on the child's termination to prevent the child from becoming a permanent zombie; after the wait, the parent exits.
+
+ * The **child_code** function first calls **set_handler** and then goes into its potentially infinite sleeping loop. Here is the **set_handler** function for review:
+
+```
+ void set_handler() {
+ struct sigaction current; /* current setup */
+ sigemptyset(&current.sa_mask); /* clear the signal set */
+ current.sa_flags = 0; /* for setting sa_handler, not sa_sigaction */
+ current.sa_handler = graceful; /* specify a handler */
+ sigaction(SIGTERM, &current, NULL); /* register the handler */
+}
+```
+
+The first three lines are preparation. The fourth statement sets the handler to the function **graceful** , which prints some messages before calling **_exit** to terminate. The fifth and last statement then registers the handler with the system through the call to **sigaction**. The first argument to **sigaction** is **SIGTERM** for terminate, the second is the current **sigaction** setup, and the last argument ( **NULL** in this case) can be used to save a previous **sigaction** setup, perhaps for later use.
+
+
+
+
+Using signals for IPC is indeed a minimalist approach, but a tried-and-true one at that. IPC through signals clearly belongs in the IPC toolbox.
+
+### Wrapping up this series
+
+These three articles on IPC have covered the following mechanisms through code examples:
+
+ * Shared files
+ * Shared memory (with semaphores)
+ * Pipes (named and unnamed)
+ * Message queues
+ * Sockets
+ * Signals
+
+
+
+Even today, when thread-centric languages such as Java, C#, and Go have become so popular, IPC remains appealing because concurrency through multi-processing has an obvious advantage over multi-threading: every process, by default, has its own address space, which rules out memory-based race conditions in multi-processing unless the IPC mechanism of shared memory is brought into play. (Shared memory must be locked in both multi-processing and multi-threading for safe concurrency.) Anyone who has written even an elementary multi-threading program with communication via shared variables knows how challenging it can be to write thread-safe yet clear, efficient code. Multi-processing with single-threaded processes remains a viable—indeed, quite appealing—way to take advantage of today's multi-processor machines without the inherent risk of memory-based race conditions.
+
+There is no simple answer, of course, to the question of which among the IPC mechanisms is the best. Each involves a trade-off typical in programming: simplicity versus functionality. Signals, for example, are a relatively simple IPC mechanism but do not support rich conversations among processes. If such a conversation is needed, then one of the other choices is more appropriate. Shared files with locking are reasonably straightforward, but shared files may not perform well enough if processes need to share massive data streams; pipes or even sockets, with more complicated APIs, might be a better choice. Let the problem at hand guide the choice.
+
+Although the sample code ([available on my website][4]) is all in C, other programming languages often provide thin wrappers around these IPC mechanisms. The code examples are short and simple enough, I hope, to encourage you to experiment.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/interprocess-communication-linux-networking
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Inter-process_communication
+[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
+[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2
+[4]: http://condor.depaul.edu/mkalin
From 376f4dbadfbd92838874f4f1fc3778b3755e54d9 Mon Sep 17 00:00:00 2001
From: HALO Feng <289716347@qq.com>
Date: Tue, 23 Apr 2019 10:08:26 +0800
Subject: [PATCH 0107/1154] translating
---
...0190409 How To Install And Configure Chrony As NTP Client.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md b/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
index 9f7eb5f66e..3988cda330 100644
--- a/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
+++ b/sources/tech/20190409 How To Install And Configure Chrony As NTP Client.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From ed1f0bada64a9a05642d70f9cdb633bf01219c93 Mon Sep 17 00:00:00 2001
From: HALO Feng <289716347@qq.com>
Date: Tue, 23 Apr 2019 10:18:46 +0800
Subject: [PATCH 0108/1154] Update 20190418 Most data center workers happy with
their jobs -- despite the heavy demands.md
---
...orkers happy with their jobs -- despite the heavy demands.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md b/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
index 5030e830dc..af966b072c 100644
--- a/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
+++ b/sources/talk/20190418 Most data center workers happy with their jobs -- despite the heavy demands.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 1c3d9ceeaa19bedf66b7b32d914e351d0d9b284d Mon Sep 17 00:00:00 2001
From: HALO Feng <289716347@qq.com>
Date: Tue, 23 Apr 2019 10:21:44 +0800
Subject: [PATCH 0109/1154] translating
---
... Install And Configure NTP Server And NTP Client In Linux.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md b/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
index f243fad898..5f67996bda 100644
--- a/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
+++ b/sources/tech/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
From 4849ecee3dbc4dcc4aa2f7a0e02946fdd6aaeb17 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:56:17 +0800
Subject: [PATCH 0110/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190423=20Four?=
=?UTF-8?q?=20Methods=20To=20Check=20The=20Default=20Gateway=20Or=20Router?=
=?UTF-8?q?=20IP=20Address=20In=20Linux=3F=20sources/tech/20190423=20Four?=
=?UTF-8?q?=20Methods=20To=20Check=20The=20Default=20Gateway=20Or=20Router?=
=?UTF-8?q?=20IP=20Address=20In=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...t Gateway Or Router IP Address In Linux.md | 147 ++++++++++++++++++
1 file changed, 147 insertions(+)
create mode 100644 sources/tech/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md
diff --git a/sources/tech/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md b/sources/tech/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md
new file mode 100644
index 0000000000..9ecb164307
--- /dev/null
+++ b/sources/tech/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md
@@ -0,0 +1,147 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Four Methods To Check The Default Gateway Or Router IP Address In Linux?)
+[#]: via: (https://www.2daygeek.com/check-find-default-gateway-or-router-ip-address-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Four Methods To Check The Default Gateway Or Router IP Address In Linux?
+======
+
+Your default gateway is the IP address of your router, and it is something you should be aware of.
+
+Typically, it is detected automatically by your operating system during installation; if not, you may need to set it yourself.
+
+If your system cannot ping itself, it is probably a gateway issue, and you will have to fix it.
+
+This might happen if you have multiple network adapters or routers on the network.
+
+A gateway is a router that acts as an access point, passing network data from one network to another.
+
+The articles below will help you gather other information related to this topic.
+
+ * **[9 Methods To Check Your Public IP Address In Linux Command Line][1]**
+ * **[How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?][2]**
+
+
+
+This can be done using the four commands below.
+
+ * **`route Command:`** The route command is used to show and manipulate the IP routing table.
+ * **`ip Command:`** The ip command is similar to ifconfig and is commonly used to assign a static IP address, routes, the default gateway, and so on.
+ * **`netstat Command:`** netstat (“network statistics”) is a command-line tool that displays information about network connections (both incoming and outgoing), such as routing tables, masquerade connections, multicast memberships, and network interfaces.
+ * **`routel Command:`** The routel command is used to list routes in a pretty output format.
+
+
+
+### 1) How To Check The Default Gateway Or Router IP Address In Linux Using route Command?
+
+The route command is used to show and manipulate the IP routing table.
+
+Its primary use is to set up static routes to specific hosts or networks via an interface once the interface has been configured.
+
+When the add or del options are used, route modifies the routing tables. Without these options, route displays the current contents of the routing tables.
+
+```
+# route
+or
+# route -n
+
+Kernel IP routing table
+Destination Gateway Genmask Flags Metric Ref Use Iface
+default www.routerlogin 0.0.0.0 UG 600 0 0 wlp8s0
+192.168.1.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp8s0
+```
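+
+With the `-n` option, route skips hostname resolution and prints the gateway numerically. Based on the sample network above, the output would look something like this (illustrative, not captured verbatim):
+
+```
+# route -n
+
+Kernel IP routing table
+Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+0.0.0.0         192.168.1.1     0.0.0.0         UG    600    0        0 wlp8s0
+192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp8s0
+```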
+
+### 2) How To Check The Default Gateway Or Router IP Address In Linux Using ip Command?
+
+**[IP command][3]** is similar to ifconfig and is commonly used to assign a static IP address, routes, the default gateway, and so on.
+
+The ifconfig command has been deprecated because it has not been maintained for many years, even though it is still available on most Linux distributions.
+
+It has been replaced by the ip command, which is very powerful and can perform several network administration tasks with one command.
+
+The ip command utility is bundled with the iproute2 package, which is pre-installed by default on all major Linux distributions.
+
+If not, you can install the iproute2 package from your terminal with the help of your package manager.
+
+```
+# ip r
+or
+# ip route
+or
+# ip route show
+
+default via 192.168.1.1 dev wlp8s0 proto dhcp metric 600
+192.168.1.0/24 dev wlp8s0 proto kernel scope link src 192.168.1.6 metric 600
+```
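+
+If you only need the gateway address itself, say in a shell script, you can filter it out of the ip route output. A minimal sketch using the sample network above:
+
+```
+# ip route | awk '/^default/ {print $3}'
+192.168.1.1
+```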
+
+### 3) How To Check The Default Gateway Or Router IP Address In Linux Using netstat Command?
+
+netstat stands for Network Statistics; it is a command-line tool that displays information about network connections (both incoming and outgoing), such as routing tables, masquerade connections, multicast memberships, and network interfaces.
+
+It lists all TCP and UDP socket connections as well as Unix socket connections.
+
+It is used to diagnose network problems and to measure the amount of traffic on the network as a performance indicator.
+
+```
+# netstat -r
+
+Kernel IP routing table
+Destination Gateway Genmask Flags MSS Window irtt Iface
+default www.routerlogin 0.0.0.0 UG 0 0 0 wlp8s0
+192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlp8s0
+```
+
+### 4) How To Check The Default Gateway Or Router IP Address In Linux Using routel Command?
+
+It is used to list routes in a pretty output format. The routel and routef programs are a set of helper scripts you can use instead of raw iproute2 commands.
+
+The routel script will list routes in a format that some might consider easier to interpret than the equivalent ip route list output.
+
+The routef script does not take any arguments and will simply flush the routing table down the drain. Beware! This means deleting all routes, which will make your network unusable!
+
+```
+# routel
+ target gateway source proto scope dev tbl
+ default 192.168.1.1 dhcp wlp8s0
+ 192.168.1.0/ 24 192.168.1.6 kernel link wlp8s0
+ 127.0.0.0 broadcast 127.0.0.1 kernel link lo local
+ 127.0.0.0/ 8 local 127.0.0.1 kernel host lo local
+ 127.0.0.1 local 127.0.0.1 kernel host lo local
+127.255.255.255 broadcast 127.0.0.1 kernel link lo local
+ 192.168.1.0 broadcast 192.168.1.6 kernel link wlp8s0 local
+ 192.168.1.6 local 192.168.1.6 kernel host wlp8s0 local
+ 192.168.1.255 broadcast 192.168.1.6 kernel link wlp8s0 local
+ ::1 kernel lo
+ fe80::/ 64 kernel wlp8s0
+ ::1 local kernel lo local
+fe80::ad00:2f7e:d882:5add local kernel wlp8s0 local
+ ff00::/ 8 wlp8s0 local
+```
+
+If you would like to print only the default gateway, use the following format.
+
+```
+# routel | grep default
+ default 192.168.1.1 dhcp wlp8s0
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-find-default-gateway-or-router-ip-address-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/check-find-server-public-ip-address-linux/
+[2]: https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/
+[3]: https://www.2daygeek.com/ip-command-configure-network-interface-usage-linux/
From 3d0a302d505eb09f2bd35a41d4fce53573943af4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:56:30 +0800
Subject: [PATCH 0111/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190423=20How?=
=?UTF-8?q?=20To=20Monitor=20Disk=20I/O=20Activity=20Using=20iotop=20And?=
=?UTF-8?q?=20iostat=20Commands=20In=20Linux=3F=20sources/tech/20190423=20?=
=?UTF-8?q?How=20To=20Monitor=20Disk=20I-O=20Activity=20Using=20iotop=20An?=
=?UTF-8?q?d=20iostat=20Commands=20In=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...sing iotop And iostat Commands In Linux.md | 341 ++++++++++++++++++
1 file changed, 341 insertions(+)
create mode 100644 sources/tech/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
diff --git a/sources/tech/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md b/sources/tech/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
new file mode 100644
index 0000000000..f4084302b8
--- /dev/null
+++ b/sources/tech/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md
@@ -0,0 +1,341 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
+[#]: via: (https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-command/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
+======
+
+Do you know what tools we can use to troubleshoot or monitor real-time disk activity in Linux?
+
+If **[Linux system performance][1]** slows down, we may use the **[top command][2]** to check system performance.
+
+It is used to check which processes are consuming a lot of resources on a server.
+
+It’s a common, widely used tool among Linux administrators in the real world.
+
+If you don’t see much difference in the process output, you still have the option to check other things.
+
+I would advise you to check the `wa` status in the top output, because most of the time server performance degrades due to high I/O reads and writes on the hard disk.
+
+If it’s high or fluctuating, it could be the cause. So, we need to check the I/O activity on the hard drive.
+
+We can monitor disk I/O statistics for all disks and file systems in a Linux system using the `iotop` and `iostat` commands.
+
+### What Is iotop?
+
+iotop is a top-like utility for displaying real-time disk activity.
+
+iotop watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.
+
+It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.
+
+Total DISK READ and Total DISK WRITE values represent total read and write bandwidth between processes and kernel threads on the one side and kernel block device subsystem on the other.
+
+Actual DISK READ and Actual DISK WRITE values represent corresponding bandwidths for actual disk I/O between kernel block device subsystem and underlying hardware (HDD, SSD, etc.).
+
+### How To Install iotop In Linux?
+
+We can easily install it with the help of a package manager, since the package is available in the repositories of all Linux distributions.
+
+For **`Fedora`** system, use **[DNF Command][3]** to install iotop.
+
+```
+$ sudo dnf install iotop
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install iotop.
+
+```
+$ sudo apt install iotop
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install iotop.
+
+```
+$ sudo pacman -S iotop
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install iotop.
+
+```
+$ sudo yum install iotop
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install iotop.
+
+```
+$ sudo zypper install iotop
+```
+
+### How To Monitor Disk I/O Activity/Statistics In Linux Using iotop Command?
+
+There are many options available in the iotop command to check various statistics about disk I/O.
+
+Run the iotop command without any arguments to see each process’s or thread’s current I/O usage.
+
+```
+# iotop
+```
+
+[![][9]![][9]][10]
+
+If you would like to check which processes are actually doing I/O, run the iotop command with the `-o` or `--only` option.
+
+```
+# iotop --only
+```
+
+[![][9]![][9]][11]
+
+**Details:**
+
+ * **`IO:`** It shows I/O utilization for each process, which includes disk and swap.
+ * **`SWAPIN:`** It shows only the swap usage of each process.
+
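+iotop can also run non-interactively, which is handy for logging disk activity over time. A minimal sketch combining the batch (`-b`), only-active (`-o`), timestamp (`-t`), quiet (`-q`), and iteration-count (`-n`) options:
+
+```
+# iotop -botq -n 5 > /tmp/disk-io.log
+```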
+
+
+### What Is iostat?
+
+iostat is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
+
+The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
+
+The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks.
+
+All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics.
+
+On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured.
+
+The iostat command generates two types of reports, the CPU Utilization report and the Device Utilization report.
+
+### How To Install iostat In Linux?
+
+The iostat tool is part of the sysstat package, so we can easily install it with the help of a package manager, since the package is available in the repositories of all Linux distributions.
+
+For **`Fedora`** system, use **[DNF Command][3]** to install sysstat.
+
+```
+$ sudo dnf install sysstat
+```
+
+For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install sysstat.
+
+```
+$ sudo apt install sysstat
+```
+
+For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install sysstat.
+
+```
+$ sudo pacman -S sysstat
+```
+
+For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install sysstat.
+
+```
+$ sudo yum install sysstat
+```
+
+For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install sysstat.
+
+```
+$ sudo zypper install sysstat
+```
+
+### How To Monitor Disk I/O Activity/Statistics In Linux Using iostat Command?
+
+There are many options available in the iostat command to check various statistics about disk I/O and CPU.
+
+Run the iostat command without any arguments to see the complete statistics of the system.
+
+```
+# iostat
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.45 0.02 16.47 0.12 0.00 53.94
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.95 124.97 0.00 58420014 57507206 0
+sda 0.18 6.77 80.24 0.00 3115036 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-d` option to see I/O statistics for all the devices.
+
+```
+# iostat -d
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.95 124.97 0.00 58420030 57509090 0
+sda 0.18 6.77 80.24 0.00 3115292 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-p` option to see I/O statistics for all the devices and their partitions.
+
+```
+# iostat -p
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.42 0.02 16.45 0.12 0.00 53.99
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.94 124.96 0.00 58420062 57512278 0
+nvme0n1p1 6.40 124.46 118.36 0.00 57279753 54474898 0
+nvme0n1p2 0.27 2.47 6.60 0.00 1138069 3037380 0
+sda 0.18 6.77 80.23 0.00 3116060 36924764 0
+sda1 0.00 0.01 0.00 0.00 3224 0 0
+sda2 0.18 6.76 80.23 0.00 3111508 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+```
+
+Run the iostat command with the `-x` option to see detailed I/O statistics for all the devices.
+
+```
+# iostat -x
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.41 0.02 16.45 0.12 0.00 54.00
+
+Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
+nvme0n1 2.45 126.93 0.60 19.74 0.40 51.74 4.23 124.96 5.12 54.76 3.16 29.54 0.00 0.00 0.00 0.00 0.00 0.00 0.31 30.28
+sda 0.06 6.77 0.00 0.00 8.34 119.20 0.12 80.23 19.94 99.40 31.84 670.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13
+loop0 0.00 0.00 0.00 0.00 0.08 19.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
+```
+
+Run the iostat command with the `-p [Device_Name]` option to see I/O statistics for a particular device and its partitions.
+
+```
+# iostat -p [Device_Name]
+
+# iostat -p sda
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.38 0.02 16.43 0.12 0.00 54.05
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+sda 0.18 6.77 80.21 0.00 3117468 36924764 0
+sda2 0.18 6.76 80.21 0.00 3112916 36924764 0
+sda1 0.00 0.01 0.00 0.00 3224 0 0
+```
+
+Run the iostat command with the `-m` option to see I/O statistics in `MB` instead of `KB` for all the devices. By default, it shows the output in KB.
+
+```
+# iostat -m
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.36 0.02 16.41 0.12 0.00 54.09
+
+Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
+nvme0n1 6.68 0.12 0.12 0.00 57050 56176 0
+sda 0.18 0.01 0.08 0.00 3045 36059 0
+loop0 0.00 0.00 0.00 0.00 2 0 0
+loop1 0.00 0.00 0.00 0.00 1 0 0
+loop2 0.00 0.00 0.00 0.00 1 0 0
+```
+
+To run the iostat command at a certain interval, use the following format. In this example, we capture two reports in total at a five-second interval.
+
+```
+# iostat [Interval] [Number Of Reports]
+
+# iostat 5 2
+
+Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 29.35 0.02 16.41 0.12 0.00 54.10
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 6.68 126.89 124.95 0.00 58420116 57525344 0
+sda 0.18 6.77 80.20 0.00 3118492 36924764 0
+loop0 0.00 0.00 0.00 0.00 2160 0 0
+loop1 0.00 0.00 0.00 0.00 1093 0 0
+loop2 0.00 0.00 0.00 0.00 1077 0 0
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 3.71 0.00 2.51 0.05 0.00 93.73
+
+Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
+nvme0n1 19.00 0.20 311.40 0.00 1 1557 0
+sda 0.20 25.60 0.00 0.00 128 0 0
+loop0 0.00 0.00 0.00 0.00 0 0 0
+loop1 0.00 0.00 0.00 0.00 0 0 0
+loop2 0.00 0.00 0.00 0.00 0 0 0
+```
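+
+The options above can also be combined. For example, the following captures three extended-statistics reports, in MB, at two-second intervals:
+
+```
+# iostat -xm 2 3
+```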
+
+Run the iostat command with the `-N` option to see the LVM disk I/O statistics report.
+
+```
+# iostat -N
+
+Linux 4.15.0-47-generic (Ubuntu18.2daygeek.com) Thursday 18 April 2019 _x86_64_ (2 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 0.38 0.07 0.18 0.26 0.00 99.12
+
+Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 3.60 57.07 69.06 968729 1172340
+sdb 0.02 0.33 0.00 5680 0
+sdc 0.01 0.12 0.00 2108 0
+2g-2gvol1 0.00 0.07 0.00 1204 0
+```
+
+Run the nfsiostat command to see the I/O statistics for the Network File System (NFS).
+
+```
+# nfsiostat
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-command/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/category/monitoring-tools/
+[2]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
+[3]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[10]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-1.jpg
+[11]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-2.jpg
From 3c3d25ac60962c30debe641bebeeb0fc955b0318 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:57:14 +0800
Subject: [PATCH 0112/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=202=20?=
=?UTF-8?q?new=20apps=20for=20music=20tweakers=20on=20Fedora=20Workstation?=
=?UTF-8?q?=20sources/tech/20190422=202=20new=20apps=20for=20music=20tweak?=
=?UTF-8?q?ers=20on=20Fedora=20Workstation.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...or music tweakers on Fedora Workstation.md | 148 ++++++++++++++++++
1 file changed, 148 insertions(+)
create mode 100644 sources/tech/20190422 2 new apps for music tweakers on Fedora Workstation.md
diff --git a/sources/tech/20190422 2 new apps for music tweakers on Fedora Workstation.md b/sources/tech/20190422 2 new apps for music tweakers on Fedora Workstation.md
new file mode 100644
index 0000000000..8da9ca4795
--- /dev/null
+++ b/sources/tech/20190422 2 new apps for music tweakers on Fedora Workstation.md
@@ -0,0 +1,148 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (2 new apps for music tweakers on Fedora Workstation)
+[#]: via: (https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/)
+[#]: author: (Justin W. Flory https://fedoramagazine.org/author/jflory7/)
+
+2 new apps for music tweakers on Fedora Workstation
+======
+
+![][1]
+
+Linux operating systems are great for making unique customizations and tweaks to make your computer work better for you. For example, the [i3 window manager][2] encourages users to think about the different components and pieces that make up the modern Linux desktop.
+
+Fedora has two new packages of interest for music tweakers: **mpris-scrobbler** and **playerctl**. _mpris-scrobbler_ [tracks your music listening history][3] on a music-tracking service like Last.fm and/or ListenBrainz. _playerctl_ is a command-line [music player controller][4].
+
+## _mpris-scrobbler_ records your music listening trends
+
+_mpris-scrobbler_ is a CLI application to submit play history of your music to a service like [Last.fm][5], [Libre.fm][6], or [ListenBrainz][7]. It listens on the [MPRIS D-Bus interface][8] to detect what’s playing. It connects with several different music clients like spotify-client, [vlc][9], audacious, bmp, [cmus][10], and others.
+
+![Last.fm last week in music report. Generated from user-submitted listening history.][11]
+
+### Install and configure _mpris-scrobbler_
+
+_mpris-scrobbler_ is available for Fedora 28 or later, as well as the EPEL 7 repositories. Run the following command in a terminal to install it:
+
+```
+sudo dnf install mpris-scrobbler
+```
+
+Once it is installed, use _systemctl_ to start and enable the service. The following command starts _mpris-scrobbler_ and ensures it starts again after every system reboot:
+
+```
+systemctl --user enable --now mpris-scrobbler.service
+```
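+
+To confirm that the service came up, you can query its status with a standard systemd user-service command (not part of the original article):
+
+```
+systemctl --user status mpris-scrobbler.service
+```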
+
+### Submit plays to ListenBrainz
+
+This article explains how to link _mpris-scrobbler_ with a ListenBrainz account. To use Last.fm or Libre.fm, see the [upstream documentation][12].
+
+To submit plays to a ListenBrainz server, you need a ListenBrainz API token. If you have an account, get the token from your [profile settings page][13]. When you have a token, run this command to authenticate with your ListenBrainz API token:
+
+```
+$ mpris-scrobbler-signon token listenbrainz
+Token for listenbrainz.org:
+```
+
+Finally, test it by playing a song in your preferred music client on Fedora. The songs you play appear on your ListenBrainz profile.
+
+![Basic statistics and play history from a user profile on ListenBrainz. The current track is playing on a Fedora Workstation laptop with mpris-scrobbler.][14]
+
+## _playerctl_ controls your music playback
+
+_playerctl_ is a CLI tool to control any music player implementing the MPRIS D-Bus interface. You can easily bind it to keyboard shortcuts or media hotkeys. Here’s how to install it, use it in the command line, and create key bindings for the i3 window manager.
+
+### Install and use _playerctl_
+
+_playerctl_ is available for Fedora 28 or later. Run the following command in a terminal to install it:
+
+```
+sudo dnf install playerctl
+```
+
+Now that it’s installed, you can use it right away. Open your preferred music player on Fedora. Next, try the following commands to control playback from a terminal.
+
+To play or pause the currently playing track:
+
+```
+playerctl play-pause
+```
+
+If you want to skip to the next track:
+
+```
+playerctl next
+```
+
+For a list of all running players:
+
+```
+playerctl -l
+```
+
+To play or pause what’s currently playing, only on the spotify-client app:
+
+```
+playerctl -p spotify play-pause
+```
+
+### Create _playerctl_ key bindings in i3wm
+
+Do you use a window manager like the [i3 window manager?][2] Try using _playerctl_ for key bindings. You can bind different commands to different key shortcuts, like the play/pause buttons on your keyboard. Look at the following [i3wm config excerpt][15] to see how:
+
+```
+# Media player controls
+bindsym XF86AudioPlay exec "playerctl play-pause"
+bindsym XF86AudioNext exec "playerctl next"
+bindsym XF86AudioPrev exec "playerctl previous"
+```
+
+## Try it out with your favorite music players
+
+Need to know more about customizing the music listening experience on Fedora? The Fedora Magazine has you covered. Check out these five cool music players on Fedora:
+
+> [5 cool music player apps][16]
+
+Bring order to your music library chaos by sorting and organizing it with MusicBrainz Picard:
+
+> [Picard brings order to your music library][17]
+
+* * *
+
+_Photo by _[ _Frank Septillion_][18]_ on _[_Unsplash_][19]_._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/
+
+作者:[Justin W. Flory][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/jflory7/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/2-music-tweak-apps-816x345.jpg
+[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
+[3]: https://github.com/mariusor/mpris-scrobbler
+[4]: https://github.com/acrisci/playerctl
+[5]: https://www.last.fm/
+[6]: https://libre.fm/
+[7]: https://listenbrainz.org/
+[8]: https://specifications.freedesktop.org/mpris-spec/latest/
+[9]: https://www.videolan.org/vlc/
+[10]: https://cmus.github.io/
+[11]: https://fedoramagazine.org/wp-content/uploads/2019/02/Screenshot_2019-04-13-jflory7%E2%80%99s-week-in-music2-1024x500.png
+[12]: https://github.com/mariusor/mpris-scrobbler#authenticate-to-the-service
+[13]: https://listenbrainz.org/profile/
+[14]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot_2019-04-13-User-jflory-ListenBrainz.png
+[15]: https://github.com/jwflory/swiss-army/blob/ba6ac0c71855e33e3caa1ee1fe51c05d2df0529d/roles/apps/i3wm/files/config#L207-L210
+[16]: https://fedoramagazine.org/5-cool-music-player-apps/
+[17]: https://fedoramagazine.org/picard-brings-order-music-library/
+[18]: https://unsplash.com/photos/Qrspubmx6kE?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[19]: https://unsplash.com/search/photos/music?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
From fc58536c6dd64640327440fdaa403814d82c00d4 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:57:38 +0800
Subject: [PATCH 0113/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=20Stra?=
=?UTF-8?q?wberry:=20A=20Fork=20of=20Clementine=20Music=20Player=20sources?=
=?UTF-8?q?/tech/20190422=20Strawberry-=20A=20Fork=20of=20Clementine=20Mus?=
=?UTF-8?q?ic=20Player.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
...erry- A Fork of Clementine Music Player.md | 132 ++++++++++++++++++
1 file changed, 132 insertions(+)
create mode 100644 sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md
diff --git a/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md b/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md
new file mode 100644
index 0000000000..66b0345586
--- /dev/null
+++ b/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md
@@ -0,0 +1,132 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Strawberry: A Fork of Clementine Music Player)
+[#]: via: (https://itsfoss.com/strawberry-music-player/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Strawberry: A Fork of Clementine Music Player
+======
+
+In this age of streaming music and cloud services, there are still people who need an application to collect and play their music. If you are such a person, this article should interest you.
+
+We have earlier covered [Sayonara music player][1]. Today, we will be taking a look at the Strawberry Music Player.
+
+### Strawberry Music Player: A fork of Clementine
+
+The [Strawberry Music Player][2] is, quite simply, an application to manage and play your music.
+
+![Strawberry media library][3]
+
+Strawberry contains the following list of features:
+
+ * Play and organize music
+ * Supports WAV, FLAC, WavPack, DSF, DSDIFF, Ogg Vorbis, Speex, MPC, TrueAudio, AIFF, MP4, MP3, ASF and Monkey’s Audio
+ * Audio CD playback
+ * Native desktop notifications
+ * Support for playlists in multiple formats
+ * Advanced audio output and device configuration for bit-perfect playback on Linux
+ * Edit tags on music files
+ * Fetch tags from [MusicBrainz Picard][4]
+ * Album cover art from [Last.fm][5], MusicBrainz and Discogs
+ * Song lyrics from [AudD][6]
+ * Support for multiple backends
+ * Audio analyzer
+ * Audio equalizer
+ * Transfer music to iPod, iPhone, MTP or mass-storage USB player
+ * Streaming support for Tidal
+ * Scrobbler with support for Last.fm, Libre.fm and ListenBrainz
+
+If you take a look at the screenshots, they probably look familiar. That is because Strawberry is a fork of the [Clementine Music Player][7]. Clementine has not been updated since 2016, while the most recent version of Strawberry (0.5.3) was released in early April 2019.
+
+**Trivia**
+
+You might think that the Strawberry music player is named after the fruit. However, its [creator][8] says he named the project after the band [Strawbs][9].
+
+### Installing Strawberry Music player
+
+Now let’s take a look at how you can install Strawberry on your system.
+
+#### Ubuntu
+
+The easiest way to install Strawberry on Ubuntu is to install the [official snap][10]. Just type:
+
+```
+sudo snap install strawberry
+```
+
+If you are not a fan of snaps, you can download a .deb file from Strawberry’s GitHub [release page][11]. You can [install the .deb file][12] by double-clicking it, which opens it in the Software Center.
+
+Strawberry is not available in the main [Ubuntu repositories][13].
+
+#### Fedora
+
+Installing Strawberry on Fedora is much simpler. Strawberry is in the Fedora repos, so you just have to type `sudo dnf install strawberry`. Strawberry is not available as a Flatpak.
+
+#### Arch
+
+Just like Fedora, Strawberry is in the Arch repos. All you have to type is `sudo pacman -S strawberry`. The same is true for Manjaro.
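+
+For quick reference, here are those commands in full:
+
+```
+sudo dnf install strawberry   # Fedora
+sudo pacman -S strawberry     # Arch and Manjaro
+```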
+
+You can find a list of Linux distros that have Strawberry in their repos [here][14]. If you have openSUSE or Mageia, click [here][15]. You can also compile Strawberry from source.
+
+### Experience with Strawberry Music Player
+
+![Playing an audio book with Strawberry][16]
+
+I installed Strawberry on Fedora and Windows. I have used Clementine in the past, so I knew what to expect. I downloaded a number of audiobooks and several [Old Time Radio][17] [shows][18] as I don’t listen to a lot of music. Instead of using a dedicated [audiobook player like Cozy][19], I used Strawberry for listening to these radio shows.
+
+Once I told Strawberry where my files were located, it quickly imported them. I used [EasyTag][20] to fix some of the MP3 information on the old time radio shows. Strawberry has a tag editor, but EasyTag allows you to edit several folders very quickly. Strawberry updated the media library instantaneously.
+
+The big plus for me was performance. It loaded quickly and ran well. This might have something to do with the fact that it is not another Electron app. Strawberry is written in good-old-fashioned C++ and Qt 5. No need to load a whole web browser every time you want to play music, or in my case listen to audio dramas.
+
+I was not able to test the Tidal streaming feature because I don’t have an account. Also, I don’t sync music to my iPod.
+
+### Final Thoughts
+
+Strawberry is a solid, standard music player that makes managing and playing your audio library very easy.
+
+The features that I miss from Clementine include the option to access your media from cloud storage systems (like Box and Dropbox) and the ability to download podcasts. But then, I don’t store my media in the cloud and I mainly listen to podcasts on my iPod.
+
+I recommend giving Strawberry a try. You just might like it as much as I do.
+
+Have you ever used Strawberry? What is your favorite music player/manager? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/strawberry-music-player/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/sayonara-music-player/
+[2]: https://strawbs.org/
+[3]: https://itsfoss.com/wp-content/uploads/2019/04/strawberry1-800x471.png
+[4]: https://itsfoss.com/musicbrainz-picard/
+[5]: https://www.last.fm/
+[6]: https://audd.io/
+[7]: https://www.clementine-player.org/
+[8]: https://github.com/jonaski
+[9]: https://en.wikipedia.org/wiki/Strawbs
+[10]: https://snapcraft.io/strawberry
+[11]: https://github.com/jonaski/strawberry/releases
+[12]: https://itsfoss.com/install-deb-files-ubuntu/
+[13]: https://itsfoss.com/ubuntu-repositories/
+[14]: https://repology.org/project/strawberry/versions
+[15]: https://download.opensuse.org/repositories/home:/jonaski:/audio/
+[16]: https://itsfoss.com/wp-content/uploads/2019/04/strawberry3-800x471.png
+[17]: https://en.wikipedia.org/wiki/Golden_Age_of_Radio
+[18]: https://zootradio.com/
+[19]: https://itsfoss.com/cozy-audiobook-player/
+[20]: https://wiki.gnome.org/Apps/EasyTAG
+[21]: http://reddit.com/r/linuxusersgroup
From 0221efc69594b32d10ecff80ccd61b9869c1731c Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:57:54 +0800
Subject: [PATCH 0114/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=209=20?=
=?UTF-8?q?ways=20to=20save=20the=20planet=20sources/tech/20190422=209=20w?=
=?UTF-8?q?ays=20to=20save=20the=20planet.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../20190422 9 ways to save the planet.md | 96 +++++++++++++++++++
1 file changed, 96 insertions(+)
create mode 100644 sources/tech/20190422 9 ways to save the planet.md
diff --git a/sources/tech/20190422 9 ways to save the planet.md b/sources/tech/20190422 9 ways to save the planet.md
new file mode 100644
index 0000000000..d3301006cc
--- /dev/null
+++ b/sources/tech/20190422 9 ways to save the planet.md
@@ -0,0 +1,96 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (9 ways to save the planet)
+[#]: via: (https://opensource.com/article/19/4/save-planet)
+[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/alanfdoss/users/jmpearce)
+
+9 ways to save the planet
+======
+These ideas have an open source twist.
+![][1]
+
+What can be done to help save the planet? The question can seem depressing at a time when it feels like an individual's contribution isn't enough. But who are we Earth dwellers if not a collection of individuals? So, I asked our writer community to share ways that open source software or hardware can be used to make a difference. Here's what I heard back.
+
+### 9 ways to save the planet with an open source twist
+
+**1\. Disable the blinking cursor in your terminal.**
+
+It might sound silly, but the trivial, blinking cursor can cause up to [2 watts of extra power consumption][2]. To disable it, go to Terminal Settings: Edit > Preferences > Cursor > Cursor blinking > Disabled.
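+
+If you are on a GNOME desktop, you can likely flip the same preference from the command line. Here is a sketch using the desktop-wide GNOME setting (note this disables cursor blinking everywhere, not just in the terminal):
+
+```
+gsettings set org.gnome.desktop.interface cursor-blink false
+```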
+
+_Recommended by Mars Toktonaliev_
+
+**2\. Reduce your consumption of animal products and processed foods.**
+
+One way to do this is to add these open source apps to your phone: Daily Dozen, OpenFoodFacts, OpenVegeMap, and Food Restrictions. These apps will help you eat a healthy, plant-based diet, find vegan- and vegetarian-friendly restaurants, and communicate your dietary needs to others, even if they do not speak the same language. To learn more about these apps read [_4 open source apps to support eating a plant-based diet_][3].
+
+_Recommendation by Joshua Allen Holm_
+
+**3\. Recycle old computers.**
+
+How? With Linux, of course. Pay it forward by creating a new computer for someone who can't afford one, and keep a computer out of the landfill. Here's how we do it at [The Asian Penguins][4].
+
+_Recommendation by Stu Keroff_
+
+**4\. Turn off devices when you're not using them.**
+
+Use "smart power strips" that have a "master" outlet and several "controlled" outlets. Plug your PC into the master outlet, and when you turn on the computer, your monitor, printer, and anything else plugged into the controlled outlets turns on too. A simpler, low-tech solution is a power strip with a timer. That's what I use at home. You can use switches on the timer to set a handy schedule to turn the power on and off at specific times. Automatically turn off your network printer when no one is at home. Or for my six-year-old laptop, extend the life of the battery with a schedule to alternate when it's running from wall power (outlet is on) and when it's running from the battery (outlet is off).
+
+_Recommended by Jim Hall_
+
+**5\. Reduce the use of your HVAC system.**
+
+Sunlight shining through windows adds a lot of heat to your home during the summer. Use Home Assistant to [automatically adjust][5] window blinds and awnings [based on the time of day][6], or even based on the angle of the sun.
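+
+A minimal sketch of such an automation in Home Assistant's configuration, with a made-up cover entity name (the exact schema may vary between releases):
+
+```
+automation:
+  - alias: "Close the blinds before the afternoon sun"
+    trigger:
+      platform: sun
+      event: sunset
+      offset: "-04:00:00"
+    action:
+      service: cover.close_cover
+      entity_id: cover.south_window
+```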
+
+_Recommended by Michael Hrivnak_
+
+**6\. Turn your thermostat off or to a lower setting while you're away.**
+
+If your home thermostat has an "Away" feature, activating it on your way out the door is easy to forget. With a touch of automation, any connected thermostat can begin automatically saving energy while you're not home. [Stataway][7] is one such project that uses your phone's GPS coordinates to determine when it should set your thermostat to "Home" or "Away".
+
+_Recommended by Michael Hrivnak_
+
+**7\. Save computing power for later.**
+
+I have an idea: create a script that reads the power output from an alternative energy array (wind and solar) and turns on servers in a computing cluster (taking them from a power-saving sleep mode to an active mode) until the surplus power is used up, that is, whatever is produced beyond what can be stored or buffered for later use. Then use that surplus during high-production times for compute-intensive projects like rendering. This process would be essentially free of cost, because the power can't be buffered for other uses anyway. I'm sure the monitoring, power management, and server array tools to do this must exist. Then, it's just an integration problem, making it all work together.
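+
+To make the idea concrete, here is a minimal Python sketch; the helper functions and the wattage figure are made up and would be replaced by whatever your monitoring and power-management tools actually provide:
+
+```
+import time
+
+SERVER_DRAW_WATTS = 150  # assumed average draw of one cluster node
+
+def read_surplus_watts():
+    # Hypothetical: query your inverter or charge-controller monitoring here.
+    return 600
+
+def set_awake_count(n):
+    # Hypothetical: wake or sleep nodes, e.g., via wake-on-LAN or IPMI.
+    print("keeping %d node(s) awake" % n)
+
+while True:
+    surplus = read_surplus_watts()
+    set_awake_count(max(0, surplus // SERVER_DRAW_WATTS))
+    time.sleep(300)  # re-balance every five minutes
+```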
+
+_Recommended by Terry Hancock_
+
+**8\. Turn off exterior lights.**
+
+Light pollution affects more than 80% of the world's population, according to the [World Atlas of Artificial Night Sky Brightness][8], published (Creative Commons Attribution-NonCommercial 4.0) in 2016 in the open access journal _Science Advances_. Turning off exterior lights is a quick way to benefit wildlife, human health, and our ability to enjoy the night sky, and of course it reduces energy consumption. Visit [darksky.org][9] for more ideas on how to reduce the impact of your exterior lighting.
+
+_Recommended by Michael Hrivnak_
+
+**9\. Reduce your CPU count.**
+
+I remember when I used to have a whole bunch of computers running in my basement as my IT playground/lab. I've become more conscious of power consumption since then, so I have drastically reduced my CPU count. I like to take advantage of VMs, zones, containers... that type of technology a lot more these days. Also, I'm really glad that small form factor and SoC computers, such as the Raspberry Pi, exist, because I can do a lot with one, such as run a DNS or web server, without heating the room and running up my electricity bill.
+
+P.S. All of these computers are running Linux, FreeBSD, or Raspbian!
+
+_Recommended by Alan Formy-Duvall_
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/save-planet
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike/users/alanfdoss/users/jmpearce
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pixelated-world.png?itok=fHjM6m53
+[2]: https://www.redhat.com/archives/fedora-devel-list/2009-January/msg02406.html
+[3]: https://opensource.com/article/19/4/apps-plant-based-diets
+[4]: https://opensource.com/article/19/2/asian-penguins-close-digital-divide
+[5]: https://www.home-assistant.io/docs/automation/trigger/#sun-trigger
+[6]: https://www.home-assistant.io/components/cover/
+[7]: https://github.com/mhrivnak/stataway
+[8]: http://advances.sciencemag.org/content/2/6/e1600377
+[9]: http://darksky.org/
From 79571ebd33fecbb55140baf4eb288f7ae1734c5a Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:58:12 +0800
Subject: [PATCH 0115/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=204=20?=
=?UTF-8?q?open=20source=20apps=20for=20plant-based=20diets=20sources/tech?=
=?UTF-8?q?/20190422=204=20open=20source=20apps=20for=20plant-based=20diet?=
=?UTF-8?q?s.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... open source apps for plant-based diets.md | 68 +++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 sources/tech/20190422 4 open source apps for plant-based diets.md
diff --git a/sources/tech/20190422 4 open source apps for plant-based diets.md b/sources/tech/20190422 4 open source apps for plant-based diets.md
new file mode 100644
index 0000000000..6d77b66eea
--- /dev/null
+++ b/sources/tech/20190422 4 open source apps for plant-based diets.md
@@ -0,0 +1,68 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 open source apps for plant-based diets)
+[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets)
+[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
+
+4 open source apps for plant-based diets
+======
+These apps make it easier for vegetarians and vegans—and omnivores who
+want to eat healthier—to find food they can eat.
+![][1]
+
+Reducing your consumption of meat, dairy, and processed foods is better for the planet and better for your health. Changing your diet can be difficult, but several open source Android applications can help you switch to a more plant-based diet. Whether you are taking part in [Meatless Monday][2], following Mark Bittman's [Vegan Before 6:00][3] guidelines, or switching entirely to a [whole-food, plant-based diet][4], these apps can aid you on your journey by helping you figure out what to eat, discover vegan- and vegetarian-friendly restaurants, and easily communicate your dietary preferences to others. All of these apps are open source and available from the [F-Droid repository][5].
+
+### Daily Dozen
+
+![Daily Dozen app][6]
+
+The [Daily Dozen][7] app provides a checklist of items that Michael Greger, MD, FACLM, recommends as part of a healthy diet and lifestyle. Dr. Greger recommends consuming a whole-food, plant-based diet consisting of diverse foods and supported by daily exercise. This app lets you keep track of how many servings of each type of food you have eaten, how many servings of water (or other approved beverage, such as tea) you drank, and if you exercised each day. Each category of food provides serving sizes and lists of foods that fall under that category; for example, the Cruciferous Vegetable category includes bok choy, broccoli, brussels sprouts, and many other suggestions.
+
+### Food Restrictions
+
+![Food Restrictions app][8]
+
+[Food Restrictions][9] is a simple app that can help you communicate your dietary restrictions to others, even if those people do not speak your language. Users can enter their food restrictions for seven different categories: chicken, beef, pork, fish, cheese, milk, and peppers. There is an "I don't eat" and an "I'm allergic" option for each of those categories. The "don't eat" option shows the icon with a red X over it. The "allergic" option displays the X and a small skull icon. The same information can be displayed using text instead of icons, but the text is only available in English and Portuguese. There is also an option for displaying a text message that says the user is vegetarian or vegan, which summarizes those dietary restrictions more succinctly and more accurately than the pick-and-choose options. The vegan text clearly mentions not eating eggs and honey, which are not options in the pick-and-choose method. However, just like the text version of the pick-and-choose option, these sentences are only available in English and Portuguese.
+
+### OpenFoodFacts
+
+![Open Food Facts app][10]
+
+Avoiding unwanted ingredients when buying groceries can be frustrating, but [OpenFoodFacts][11] can help make the process easier. This app lets you scan the barcodes on products to get a report about the ingredients in a product and how healthy the product is. A product can still be very unhealthy even if it meets the criteria to be a vegan product. Having both the ingredients list and the nutrition facts lets you make informed choices when shopping. The only drawback for this app is that the data is user contributed, so not every product is available, but you can contribute new items if you want to give back to the project.
+
+### OpenVegeMap
+
+![OpenVegeMap app][12]
+
+Find vegan and vegetarian restaurants in your neighborhood with the [OpenVegeMap][13] app. This app lets you search either by using your phone's current location or by entering an address. Restaurants are classified as Vegan only, Vegan friendly, Vegetarian only, Vegetarian friendly, Non-vegetarian, and Unknown. The app uses data from [OpenStreetMap][14] and user-contributed information about the restaurants, so be sure to double-check that the information provided is up-to-date and accurate.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/apps-plant-based-diets
+
+作者:[Joshua Allen Holm][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/holmja
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
+[2]: https://www.meatlessmonday.com/
+[3]: https://www.amazon.com/dp/0385344740/
+[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/
+[5]: https://f-droid.org/
+[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app)
+[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/
+[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app)
+[9]: https://f-droid.org/en/packages/br.com.frs.foodrestrictions/
+[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app)
+[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/
+[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png (OpenVegeMap app)
+[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/
+[14]: https://www.openstreetmap.org/
From 7ebfd4c3b8489080b3283854fdc223f02264bc3b Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:58:25 +0800
Subject: [PATCH 0116/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=208=20?=
=?UTF-8?q?environment-friendly=20open=20software=20projects=20you=20shoul?=
=?UTF-8?q?d=20know=20sources/tech/20190422=208=20environment-friendly=20o?=
=?UTF-8?q?pen=20software=20projects=20you=20should=20know.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... open software projects you should know.md | 66 +++++++++++++++++++
1 file changed, 66 insertions(+)
create mode 100644 sources/tech/20190422 8 environment-friendly open software projects you should know.md
diff --git a/sources/tech/20190422 8 environment-friendly open software projects you should know.md b/sources/tech/20190422 8 environment-friendly open software projects you should know.md
new file mode 100644
index 0000000000..0c56347440
--- /dev/null
+++ b/sources/tech/20190422 8 environment-friendly open software projects you should know.md
@@ -0,0 +1,66 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (8 environment-friendly open software projects you should know)
+[#]: via: (https://opensource.com/article/19/4/environment-projects)
+[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
+
+8 environment-friendly open software projects you should know
+======
+Celebrate Earth Day by contributing to these projects dedicated to
+improving our environment.
+![][1]
+
+For the last few years, I've been helping [Greenpeace][2] build its first fully open source software project, Planet 4. [Planet 4][3] is a global engagement platform where Greenpeace supporters and activists can interact and engage with the organization. The goal is to drive people to action on behalf of our planet. We want to invite participation and use people power to battle global issues like climate change and plastic pollution. Developers, designers, writers, contributors, and others who are looking for an open source way to support environmentalism are more than welcome to [get involved][4]!
+
+Planet 4 is far from the only open source project focused on the environment. For Earth Day, I thought I'd share seven other open source projects that have our planet in mind.
+
+**[Eco Hacker Farm][5]** works to support sustainable communities. It advises and supports projects combining hackerspaces/hackbases and permaculture living. The organization also has online software projects. Visit its [wiki][6] or reach out on [Twitter][7] to learn more about what Eco Hacker Farm is doing.
+
+**[Public Lab][8]** is an open community and nonprofit organization that works to put science in the hands of citizens. Formed after the BP oil disaster in 2010, Public Lab works with open source to aid environmental exploration and investigation. It's a diverse community with lots of ways to [contribute][9].
+
+A while back, Don Watkins, a community moderator here on Opensource.com, wrote about **[Open Climate Workbench][10]**, a project from the Apache Foundation. The [OCW][11] provides software to do climate modeling and evaluation, which can have all sorts of applications.
+
+**[Open Source Ecology][12]** is a project that aims to improve how our economy functions. With an eye on environmental regeneration and social justice, the project seeks to redefine some of our dirty production and distribution techniques to create a more sustainable civilization.
+
+Fostering collaboration around open source and big data tools to enable research in ocean, atmosphere, land, and climate, "**[Pangeo][13]** is first and foremost a community promoting open, reproducible, and scalable science." Big data can change the world!
+
+**[Leaflet][14]** is a well-known open source JavaScript library. It can be used for all sorts of things, including environmentally friendly projects like the [Arctic Web Map][15], which allows scientists to accurately visualize and analyze the Arctic region, a critical ability for climate research.
+
+And of course, no list would be complete (not that this is a complete list!) without pointing to my friends at Mozilla. The **[Mozilla Science Lab][16]** community is, like all of Mozilla, fiercely open, and it's committed to bringing open source principles to the scientific community. Its projects and communities enable scientists to do the sorts of research our world needs to address some of the most pervasive environmental issues.
+
+### How you can contribute
+
+This Earth Day, make a six-month commitment to contribute some of your time to an open source project that helps fight climate change or otherwise encourages people to step up for Mother Earth. There must be scores of environmentally minded open source projects out there, so please leave your favorites in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/environment-projects
+
+作者:[Laura Hilliger][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/laurahilliger
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE
+[2]: http://www.greenpeace.org
+[3]: http://medium.com/planet4
+[4]: https://planet4.greenpeace.org/community/#partners-open-sourcers
+[5]: https://wiki.ecohackerfarm.org/start
+[6]: https://wiki.ecohackerfarm.org/
+[7]: https://twitter.com/EcoHackerFarm
+[8]: https://publiclab.org/
+[9]: https://publiclab.org/contribute
+[10]: https://opensource.com/article/17/1/apache-open-climate-workbench
+[11]: https://climate.apache.org/
+[12]: https://wiki.opensourceecology.org/wiki/Project_needs
+[13]: http://pangeo.io/
+[14]: https://leafletjs.com/
+[15]: https://webmap.arcticconnect.ca/#ac_3573/2/20.8/-65.5
+[16]: https://science.mozilla.org/
From e0c4c2c25045dd0229a828ef732ddcdc203b5414 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 23 Apr 2019 12:58:37 +0800
Subject: [PATCH 0117/1154] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190422=20Trac?=
=?UTF-8?q?king=20the=20weather=20with=20Python=20and=20Prometheus=20sourc?=
=?UTF-8?q?es/tech/20190422=20Tracking=20the=20weather=20with=20Python=20a?=
=?UTF-8?q?nd=20Prometheus.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
... the weather with Python and Prometheus.md | 98 +++++++++++++++++++
1 file changed, 98 insertions(+)
create mode 100644 sources/tech/20190422 Tracking the weather with Python and Prometheus.md
diff --git a/sources/tech/20190422 Tracking the weather with Python and Prometheus.md b/sources/tech/20190422 Tracking the weather with Python and Prometheus.md
new file mode 100644
index 0000000000..0e2409dc56
--- /dev/null
+++ b/sources/tech/20190422 Tracking the weather with Python and Prometheus.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Tracking the weather with Python and Prometheus)
+[#]: via: (https://opensource.com/article/19/4/weather-python-prometheus)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+Tracking the weather with Python and Prometheus
+======
+Create a custom Prometheus integration to keep track of the biggest
+cloud provider of all: Mother Earth.
+![Tree clouds][1]
+
+Open source monitoring system [Prometheus][2] has integrations to track many types of time-series data, but if you want an integration that doesn't yet exist, it's easy to build one. An often-used example is a custom integration with a cloud provider that uses the provider's APIs to grab specific metrics. In this example, though, we will integrate with the biggest cloud provider of all: Earth.
+
+Luckily, the US government already measures the weather and provides an easy API for integrations. Getting the weather forecast for the next hour at Red Hat headquarters is simple.
+
+
+```
+import requests
+
+HOURLY_RED_HAT = ""
+
+def get_temperature():
+    result = requests.get(HOURLY_RED_HAT)
+    return result.json()["properties"]["periods"][0]["temperature"]
+```
+
+Now that our integration with Earth is done, it's time to make sure Prometheus can understand what we are saying. We can use the [Prometheus Python library][3] to create a registry with one _gauge_: the temperature at Red Hat HQ.
+
+
+```
+from prometheus_client import CollectorRegistry, Gauge
+
+def prometheus_temperature(num):
+    registry = CollectorRegistry()
+    g = Gauge("red_hat_temp", "Temperature at Red Hat HQ", registry=registry)
+    g.set(num)
+    return registry
+```
+
+Finally, we need to connect this to Prometheus in some way. That depends a little on the network topology for Prometheus: whether it is easier for Prometheus to talk to our service, or whether the reverse is easier.
+
+The first case is the one usually recommended, if possible, so we need to build a web server exposing the registry and then configure Prometheus to _scrape_ it.
+
+We can build a simple web server with [Pyramid][4].
+
+
+```
+from pyramid.config import Configurator
+from pyramid.response import Response
+from prometheus_client import generate_latest, CONTENT_TYPE_LATEST
+
+def metrics_web(request):
+    registry = prometheus_temperature(get_temperature())
+    return Response(generate_latest(registry),
+                    content_type=CONTENT_TYPE_LATEST)
+
+config = Configurator()
+config.add_route('metrics', '/metrics')
+config.add_view(metrics_web, route_name='metrics')
+app = config.make_wsgi_app()
+```
+
+This can be run with any Web Server Gateway Interface (WSGI) server. For example, we can use **python -m twisted web --wsgi earth.app** to run it, assuming we put the code in **earth.py**.
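+
+On the Prometheus side, the scrape job is ordinary configuration. Here is a sketch of what it might look like in **prometheus.yml**, assuming the WSGI server above listens on Twisted's default port of 8080:
+
+```
+scrape_configs:
+  - job_name: 'earth'
+    scrape_interval: 60s
+    static_configs:
+      - targets: ['localhost:8080']
+```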
+
+Alternatively, if it is easier for our code to connect to Prometheus, we can push it to Prometheus's [Push gateway][5] periodically.
+
+
+```
+import time
+from prometheus_client import push_to_gateway
+
+def push_temperature(url):
+    while True:
+        registry = prometheus_temperature(get_temperature())
+        push_to_gateway(url, "temperature collector", registry)
+        time.sleep(60*60)
+```
+
+The URL is the one for the Push gateway; it often ends in **:9091**. With a gateway running locally on the default port, for example, you would call **push_temperature("localhost:9091")**.
+
+Good luck building your own custom Prometheus integration so you can track all the things!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/weather-python-prometheus
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP (Tree clouds)
+[2]: https://prometheus.io/
+[3]: https://github.com/prometheus/client_python
+[4]: https://trypyramid.com/
+[5]: https://github.com/prometheus/pushgateway
From a4bd28fe3ad395a008d9f02d38091a6e72ff47e0 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 23 Apr 2019 13:04:44 +0800
Subject: [PATCH 0118/1154] PRF:20190409 Four Methods To Add A User To Group In
Linux.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@NeverKnowsTomorrow 恭喜你,完成了第一篇翻译,是我校对晚了。
---
...Methods To Add A User To Group In Linux.md | 114 ++++++++----------
1 file changed, 52 insertions(+), 62 deletions(-)
diff --git a/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md b/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
index 23c3a51c9b..0ada08206b 100644
--- a/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
+++ b/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
@@ -1,46 +1,38 @@
[#]: collector: (lujun9972)
-[#]: translator: ( NeverKnowsTomorrow )
-[#]: reviewer: ( )
+[#]: translator: (NeverKnowsTomorrow)
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Four Methods To Add A User To Group In Linux)
[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-在 Linux 中添加用户到组的四个方法
+在 Linux 中把用户添加到组的四个方法
======
-Linux 组是用于管理 Linux 中用户帐户的组织单位。
+Linux 组是用于管理 Linux 中用户帐户的组织单位。对于 Linux 系统中的每一个用户和组,它都有惟一的数字标识号。它被称为 用户 ID(UID)和组 ID(GID)。组的主要目的是为组的成员定义一组特权。它们都可以执行特定的操作,但不能执行其他操作。
-对于 Linux 系统中的每一个用户和组,它都有惟一的数字标识号。
+Linux 中有两种类型的默认组。每个用户应该只有一个 主要组primary group 和任意数量的 次要组secondary group。
-它被称为 userid (UID) 和 groupid (GID)。组的主要目的是为组的成员定义一组特权。
-
-它们都可以执行特定的操作,但不能执行其他操作。
-
-Linux 中有两种类型的默认组可用。每个用户应该只有一个 主要组primary group 和任意数量的 次要组secondary group。
-
- * **主要组:** 创建用户帐户时,已将主组添加到用户。它通常是用户的名称。在执行诸如创建新文件(或目录)、修改文件或执行命令等任何操作时,主组将应用于用户。用户主要组信息存储在 `/etc/passwd` 文件中。
- * **次要组:** 它被称为次要组。它允许用户组在同一组成员文件中执行特定操作。
-
-例如,如果你希望允许少数用户运行 apache(httpd)服务命令,那么它将非常适合。
+* **主要组:** 创建用户帐户时,已将主要组添加到用户。它通常是用户的名称。在执行诸如创建新文件(或目录)、修改文件或执行命令等任何操作时,主要组将应用于用户。用户的主要组信息存储在 `/etc/passwd` 文件中。
+* **次要组:** 它被称为次要组。它允许用户组在同一组成员文件中执行特定操作。例如,如果你希望允许少数用户运行 Apache(httpd)服务命令,那么它将非常适合。
你可能对以下与用户管理相关的文章感兴趣。
- * 在 Linux 中创建用户帐户的三种方法?
- * 如何在 Linux 中创建批量用户?
- * 如何在 Linux 中使用不同的方法更新/更改用户密码?
+* [在 Linux 中创建用户帐户的三种方法?][1]
+* [如何在 Linux 中创建批量用户?][2]
+* [如何在 Linux 中使用不同的方法更新/更改用户密码?][3]
可以使用以下四种方法实现。
- * **`usermod:`** usermod 命令修改系统帐户文件,以反映在命令行中指定的更改。
- * **`gpasswd:`** gpasswd 命令用于管理 /etc/group 和 /etc/gshadow。每个组都可以有管理员、成员和密码。
- * **`Shell Script:`** shell 脚本允许管理员自动执行所需的任务。
- * **`Manual Method:`** 我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
+* `usermod`:修改系统帐户文件,以反映在命令行中指定的更改。
+* `gpasswd`:用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
+* Shell 脚本:可以让管理员自动执行所需的任务。
+* 手动方式:我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
-我假设你已经拥有此活动所需的组和用户。在本例中,我们将使用以下用户和组:`user1`、`user2`、`user3`,group 是 `mygroup` 和 `mygroup1`。
+我假设你已经拥有此操作所需的组和用户。在本例中,我们将使用以下用户和组:`user1`、`user2`、`user3`,另外的组是 `mygroup` 和 `mygroup1`。
-在进行更改之前,我想检查用户和组信息。详见下文。
+在进行更改之前,我希望检查一下用户和组信息。详见下文。
我可以看到下面的用户与他们自己的组关联,而不是与其他组关联。
@@ -65,15 +57,15 @@ mygroup:x:1012:
mygroup1:x:1013:
```
-### 方法 1:什么是 usermod 命令?
+### 方法 1:使用 usermod 命令
-usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
+`usermod` 命令修改系统帐户文件,以反映命令行上指定的更改。
-### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
+#### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
-要将现有用户添加到辅助组,请使用带有 `-g` 选项和组名称的 usermod 命令。
+要将现有用户添加到辅助组,请使用带有 `-g` 选项和组名称的 `usermod` 命令。
-语法
+语法:
```
# usermod [-G] [GroupName] [UserName]
@@ -85,18 +77,18 @@ usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
# usermod -a -G mygroup user1
```
-让我使用 id 命令查看输出。是的,添加成功。
+让我使用 `id` 命令查看输出。是的,添加成功。
```
# id user1
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
```
-### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
+#### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
-要将现有用户添加到多个次要组中,请使用带有 `-G` 选项的 usermod 命令和带有逗号分隔的组名称。
+要将现有用户添加到多个次要组中,请使用带有 `-G` 选项的 `usermod` 命令和带有逗号分隔的组名称。
-语法
+语法:
```
# usermod [-G] [GroupName1,GroupName2] [UserName]
@@ -115,11 +107,11 @@ uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
```
-### 如何改变用户的主要组?
+#### 如何改变用户的主要组?
-要更改用户的主要组,请使用带有 `-g` 选项和组名称的 usermod 命令。
+要更改用户的主要组,请使用带有 `-g` 选项和组名称的 `usermod` 命令。
-语法
+语法:
```
# usermod [-g] [GroupName] [UserName]
@@ -131,22 +123,22 @@ uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
# usermod -g mygroup user3
```
-让我们看看输出。是的,已成功更改。现在,它将 mygroup 显示为 user3 主要组而不是 user3。
+让我们看看输出。是的,已成功更改。现在,显示`user3` 主要组是 `mygroup` 而不是 `user3`。
```
# id user3
uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
```
-### 方法 2:什么是 gpasswd 命令?
+### 方法 2:使用 gpasswd 命令
`gpasswd` 命令用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
-### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
+#### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
-要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
+要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 `gpasswd` 命令。
-语法
+语法:
```
# gpasswd [-M] [UserName] [GroupName]
@@ -158,18 +150,18 @@ uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
# gpasswd -M user1 mygroup
```
-让我使用 id 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
+让我使用 `id` 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
```
# id user1
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
```
-### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
+#### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
-要将多个用户添加到辅助组中,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
+要将多个用户添加到辅助组中,请使用带有 `-M` 选项和组名称的 `gpasswd` 命令。
-语法
+语法:
```
# gpasswd [-M] [UserName1,UserName2] [GroupName]
@@ -181,18 +173,18 @@ uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
# gpasswd -M user2,user3 mygroup1
```
-让我使用 getent 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `myGroup1` 中。
+让我使用 `getent` 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `myGroup1` 中。
```
# getent group mygroup1
mygroup1:x:1013:user2,user3
```
-### 如何使用 gpasswd 命令从组中删除一个用户?
+#### 如何使用 gpasswd 命令从组中删除一个用户?
-要从组中删除用户,请使用带有 `-d` 选项的 gpasswd 命令以及用户和组的名称。
+要从组中删除用户,请使用带有 `-d` 选项的 `gpasswd` 命令以及用户和组的名称。
-语法
+语法:
```
# gpasswd [-d] [UserName] [GroupName]
@@ -205,11 +197,9 @@ mygroup1:x:1013:user2,user3
Removing user user1 from group mygroup
```
-### 方法 3:使用 Shell 脚本?
+### 方法 3:使用 Shell 脚本
-基于上面的例子,我知道 `usermod` 命令没有能力将多个用户添加到组中,但是可以通过 `gpasswd` 命令完成。
-
-但是,它将覆盖当前与组关联的现有用户。
+基于上面的例子,我知道 `usermod` 命令没有能力将多个用户添加到组中,可以通过 `gpasswd` 命令完成。但是,它将覆盖当前与组关联的现有用户。
例如,`user1` 已经与 `mygroup` 关联。如果要使用 `gpasswd` 命令将 `user2` 和 `user3` 添加到 `mygroup` 中,它将不会按预期生效,而是对组进行修改。
@@ -219,9 +209,9 @@ Removing user user1 from group mygroup
因此,我们需要编写一个小的 shell 脚本来实现这一点。
-### 如何使用 gpasswd 命令将多个用户添加到次要组或附加组?
+#### 如何使用 gpasswd 命令将多个用户添加到次要组或附加组?
-如果要使用 gpasswd 命令将多个用户添加到次要组或附加组,请创建以下小的 shell 脚本。
+如果要使用 `gpasswd` 命令将多个用户添加到次要组或附加组,请创建以下 shell 脚本。
创建用户列表。每个用户应该在单独的行中。
@@ -256,16 +246,16 @@ done
# sh group-update.sh
```
-让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
+让我看看使用 `getent` 命令的输出。 是的,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup` 中。
```
# getent group mygroup
mygroup:x:1012:user1,user2,user3
```
-### 如何使用 gpasswd 命令将多个用户添加到多个次要组或附加组?
+#### 如何使用 gpasswd 命令将多个用户添加到多个次要组或附加组?
-如果要使用 gpasswd 命令将多个用户添加到多个次要组或附加组中,请创建以下小的 shell 脚本。
+如果要使用 `gpasswd` 命令将多个用户添加到多个次要组或附加组中,请创建以下 shell 脚本。
创建用户列表。每个用户应该在单独的行中。
@@ -308,21 +298,21 @@ done
# sh group-update-1.sh
```
-让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
+让我看看使用 `getent` 命令的输出。 是的,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup` 中。
```
# getent group mygroup
mygroup:x:1012:user1,user2,user3
```
-此外,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
+此外,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
```
# getent group mygroup1
mygroup1:x:1013:user1,user2,user3
```
-### 方法 4:在 Linux 中将用户添加到组中的手动方法?
+### 方法 4:在 Linux 中将用户添加到组中的手动方法
我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
@@ -339,7 +329,7 @@ via: https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-us
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[NeverKnowsTomorrow](https://github.com/NeverKnowsTomorrow)
-校对:[校对者 ID](https://github.com/校对者 ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
From 3db3d1b4a3318fe69dd15ae20eb49f2dd7747ec8 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Tue, 23 Apr 2019 13:05:52 +0800
Subject: [PATCH 0119/1154] PUB:20190409 Four Methods To Add A User To Group In
Linux.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@ NeverKnowsTomorrow 本文首发地址: https://linux.cn/article-10768-1.html
您的 LCTT 专页地址: https://linux.cn/lctt/NeverKnowsTomorrow
请注册领取 LCCN: https://lctt.linux.cn/
---
.../20190409 Four Methods To Add A User To Group In Linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/tech => published}/20190409 Four Methods To Add A User To Group In Linux.md (99%)
diff --git a/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md b/published/20190409 Four Methods To Add A User To Group In Linux.md
similarity index 99%
rename from translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
rename to published/20190409 Four Methods To Add A User To Group In Linux.md
index 0ada08206b..bb222efdae 100644
--- a/translated/tech/20190409 Four Methods To Add A User To Group In Linux.md
+++ b/published/20190409 Four Methods To Add A User To Group In Linux.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (NeverKnowsTomorrow)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10768-1.html)
[#]: subject: (Four Methods To Add A User To Group In Linux)
[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
From 3a2cbdfb778018200573f8033a04d2fb834a49a2 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 24 Apr 2019 00:09:08 +0800
Subject: [PATCH 0120/1154] PRF:20161106 Myths about -dev-urandom.md
part
---
.../tech/20161106 Myths about -dev-urandom.md | 36 +++++++++----------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/translated/tech/20161106 Myths about -dev-urandom.md b/translated/tech/20161106 Myths about -dev-urandom.md
index 118c6426f2..dca61fea30 100644
--- a/translated/tech/20161106 Myths about -dev-urandom.md
+++ b/translated/tech/20161106 Myths about -dev-urandom.md
@@ -7,31 +7,31 @@
**`/dev/urandom` 不安全。加密用途必须使用 `/dev/random`**
-事实:`/dev/urandom` 才是类 Unix 操作系统下推荐的加密种子。
+*事实*:`/dev/urandom` 才是类 Unix 操作系统下推荐的加密种子。
**`/dev/urandom` 是伪随机数生成器pseudo random number generator(PRND),而 `/dev/random` 是“真”随机数生成器。**
-事实:它们两者本质上用的是同一种 CSPRNG (一种密码学伪随机数生成器)。它们之间细微的差别和“真”不“真”随机完全无关
+事实:它们两者本质上用的是同一种 CSPRNG (一种密码学伪随机数生成器)。它们之间细微的差别和“真”“不真”随机完全无关。
**`/dev/random` 在任何情况下都是密码学应用更好地选择。即便 `/dev/urandom` 也同样安全,我们还是不应该用它。**
-事实:`/dev/random` 有个很恶心人的问题:它是阻塞的。(LCTT 译注:意味着请求都得逐个执行,等待前一个请求完成)
+*事实*:`/dev/random` 有个很恶心人的问题:它是阻塞的。(LCTT 译注:意味着请求都得逐个执行,等待前一个请求完成)
-**但阻塞不是好事吗!`/dev/random` 只会给出电脑收集的信息熵足以支持的随机量。`/dev/urandom` 在用完了所有熵的情况下还会不断吐不安全的随机数给你。**
+**但阻塞不是好事吗!`/dev/random` 只会给出电脑收集的信息熵足以支持的随机量。`/dev/urandom` 在用完了所有熵的情况下还会不断吐出不安全的随机数给你。**
-事实:这是误解。就算我们不去考虑应用层面后续对随机种子的用法,“用完信息熵池”这个概念本身就不存在。仅仅 256 位的熵就足以生成计算上安全的随机数很长、很长的一段时间了。
+*事实*:这是误解。就算我们不去考虑应用层面后续对随机种子的用法,“用完信息熵池”这个概念本身就不存在。仅仅 256 位的熵就足以生成计算上安全的随机数很长、很长的一段时间了。
问题的关键还在后头:`/dev/random` 怎么知道有系统会*多少*可用的信息熵?接着看!
**但密码学家老是讨论重新选种子(re-seeding)。这难道不和上一条冲突吗?**
-事实:你说的也没错!某种程度上吧。确实,随机数生成器一直在使用系统信息熵的状态重新选种。但这么做(一部分)是因为别的原因。
+*事实*:你说的也没错!某种程度上吧。确实,随机数生成器一直在使用系统信息熵的状态重新选种。但这么做(一部分)是因为别的原因。
这样说吧,我没有说引入新的信息熵是坏的。更多的熵肯定更好。我只是说在熵池低的时候阻塞是没必要的。
**好,就算你说的都对,但是 `/dev/(u)random` 的 man 页面和你说的也不一样啊!到底有没有专家同意你说的这堆啊?**
-事实:其实 man 页面和我说的不冲突。它看似好像在说 `/dev/urandom` 对密码学用途来说不安全,但如果你真的理解这堆密码学术语你就知道它说的并不是这个意思。
+*事实*:其实 man 页面和我说的不冲突。它看似好像在说 `/dev/urandom` 对密码学用途来说不安全,但如果你真的理解这堆密码学术语你就知道它说的并不是这个意思。
man 页面确实说在一些情况下推荐使用 `/dev/random` (我觉得也没问题,但绝对不是说必要的),但它也推荐在大多数“一般”的密码学应用下使用 `/dev/urandom` 。
@@ -61,13 +61,13 @@ man 页面确实说在一些情况下推荐使用 `/dev/random` (我觉得也
### 真随机
-什么叫一个随机变量是“真随机的”?
+随机数是“真正随机”是什么意思?
-我不想搞的太复杂以至于变成哲学范畴的东西。这种讨论很容易走偏因为随机模型大家见仁见智,讨论很快变得毫无意义。
+我不想搞的太复杂以至于变成哲学范畴的东西。这种讨论很容易走偏因为对于随机模型大家见仁见智,讨论很快变得毫无意义。
在我看来“真随机”的“试金石”是量子效应。一个光子穿过或不穿过一个半透镜。或者观察一个放射性粒子衰变。这类东西是现实世界最接近真随机的东西。当然,有些人也不相信这类过程是真随机的,或者这个世界根本不存在任何随机性。这个就百家争鸣了,我也不好多说什么了。
-密码学家一般都会通过不去讨论什么是“真随机”来避免这种争论。它们更关心的是不可预测性 unpredictability。只要没有*任何*方法能猜出下一个随机数就可以了。所以当你以密码学应用为前提讨论一个随机数好不好的时候,在我看来这才是最重要的。
+密码学家一般都会通过不去讨论什么是“真随机”来避免这种哲学辩论。他们更关心的是不可预测性unpredictability。只要没有*任何*方法能猜出下一个随机数就可以了。所以当你以密码学应用为前提讨论一个随机数好不好的时候,在我看来这才是最重要的。
无论如何,我不怎么关心“哲学上安全”的随机数,这也包括别人嘴里的“真”随机数。
@@ -75,23 +75,23 @@ man 页面确实说在一些情况下推荐使用 `/dev/random` (我觉得也
但就让我们退一步说,你有了一个“真”随机变量。你下一步做什么呢?
-你把它们打印出来然后挂在墙上来展示量子宇宙的美与和谐?牛逼!我很理解你。
+你把它们打印出来然后挂在墙上来展示量子宇宙的美与和谐?牛逼!我支持你。
但是等等,你说你要*用*它们?做密码学用途?额,那这就废了,因为这事情就有点复杂了。
-事情是这样的,你的真随机,量子力学加护的随机数即将被用进不理想的现实世界程序里。
+事情是这样的,你的真随机、量子力学加护的随机数即将被用进不理想的现实世界算法里去。
-因为我们使用的大多数算法并不是理论信息学information-theoretic上安全的。它们“只能”提供 **计算意义上的安全**。我能想到为数不多的例外就只有 Shamir 密钥分享 和 One-time pad 算法。并且就算前者是名副其实的(如果你实际打算用的话),后者则毫无可行性可言。
+因为我们使用的几乎所有的算法都并不是信息论安全性information-theoretic security 的。它们“只能”提供**计算意义上的安全**。我能想到为数不多的例外就只有 Shamir 密钥分享和一次性密码本One-time pad(OTP)算法。并且就算前者是名副其实的(如果你实际打算用的话),后者则毫无可行性可言。
-但所有那些大名鼎鼎的密码学算法,AES、RSA、Diffie-Hellman、椭圆曲线,还有所有那些加密软件包,OpenSSL、GnuTLS、Keyczar、你的操作系统的加密 API,都仅仅是计算意义上的安全的。
+但所有那些大名鼎鼎的密码学算法,AES、RSA、Diffie-Hellman、椭圆曲线,还有所有那些加密软件包,OpenSSL、GnuTLS、Keyczar、你的操作系统的加密 API,都仅仅是计算意义上安全的。
-那区别是什么呢?理论信息学上的安全肯定是安全的,绝对是,其它那些的算法都可能在理论上被拥有无限计算力的穷举破解。我们依然愉快地使用它们因为全世界的计算机加起来都不可能在宇宙年龄的时间里破解,至少现在是这样。而这就是我们文章里说的“不安全”。
+那区别是什么呢?信息论安全的算法肯定是安全的,绝对是,其它那些的算法都可能在理论上被拥有无限计算力的穷举破解。我们依然愉快地使用它们是因为全世界的计算机加起来都不可能在宇宙年龄的时间里破解,至少现在是这样。而这就是我们文章里说的“不安全”。
-除非哪个聪明的家伙破解了算法本身——在只需要极少量计算力的情况下。这也是每个密码学家梦寐以求的圣杯:破解 AES 本身、破解 RSA 本身等等。
+除非哪个聪明的家伙破解了算法本身 —— 在只需要更少量计算力、在今天可实现的计算力的情况下。这也是每个密码学家梦寐以求的圣杯:破解 AES 本身、破解 RSA 本身等等。
所以现在我们来到了更底层的东西:随机数生成器,你坚持要“真随机”而不是“伪随机”。但是没过一会儿你的真随机数就被喂进了你极为鄙视的伪随机算法里了!
-真相是,如果我们最先进的 hash 算法被破解了,或者最先进的块加密被破解了,你得到这些那些“哲学上不安全的”甚至无所谓了,因为反正你也没有安全的应用方法了。
+真相是,如果我们最先进的哈希算法被破解了,或者最先进的分组加密算法被破解了,你得到的这些“哲学上不安全”的随机数甚至无所谓了,因为反正你也没有安全的应用方法了。
所以把计算性上安全的随机数喂给你的仅仅是计算性上安全的算法就可以了,换而言之,用 `/dev/urandom`。
@@ -103,7 +103,7 @@ man 页面确实说在一些情况下推荐使用 `/dev/random` (我觉得也
![image: mythical structure of the kernel's random number generator][1]
-“真随机数”,尽管可能有点瑕疵,进入操作系统然后它的熵立刻被加入内部熵计数器。然后经过“矫偏”和“漂白”之后它进入内核的熵池,然后 `/dev/random` 和 `/dev/urandom` 从里面生成随机数。
+“真正的随机性”,尽管可能有点瑕疵,进入操作系统然后它的熵立刻被加入内部熵计数器。然后经过“矫偏”和“漂白”之后它进入内核的熵池,然后 `/dev/random` 和 `/dev/urandom` 从里面生成随机数。
“真”随机数生成器,`/dev/random`,直接从池里选出随机数,如果熵计数器表示能满足需要的数字大小,那就吐出数字并且减少熵计数。如果不够的话,它会阻塞程序直至有足够的熵进入系统。
From 699898034104b3abd6116a3be5dee0af5883151e Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 24 Apr 2019 08:28:46 +0800
Subject: [PATCH 0121/1154] PRF:20190208 Which programming languages should you
learn.md
@MjSeven
---
... programming languages should you learn.md | 29 ++++++++++---------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/translated/talk/20190208 Which programming languages should you learn.md b/translated/talk/20190208 Which programming languages should you learn.md
index 8806b8cfc0..b0dcb5564e 100644
--- a/translated/talk/20190208 Which programming languages should you learn.md
+++ b/translated/talk/20190208 Which programming languages should you learn.md
@@ -1,33 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Which programming languages should you learn?)
[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
-应该学习哪种编程语言?
+你应该学习哪种编程语言?
======
-学习一门新的编程语言是在你的职业生涯中继续前进的好方法,但是应该学习哪一门呢?
+
+> 学习一门新的编程语言是在你的职业生涯中继续前进的好方法,但是应该学习哪一门呢?
+

-如果你想要在编程生涯中起步或继续前进,那么学习一门新语言是一个聪明的主意。但是,大量活跃使用的语言引发了一个问题:哪种编程语言是最好学习的?要回答这个问题,让我们从一个简单的问题开始:你想做什么样的程序?
+如果你想要开始你的编程生涯或继续前进,那么学习一门新语言是一个聪明的主意。但是,大量活跃使用的语言引发了一个问题:哪种编程语言是最好的?要回答这个问题,让我们从一个简单的问题开始:你想做什么样的程序?
-如果你想在客户端进行网络编程,那么特定语言 HTML、CSS 和 JavaScript(一种看似无穷无尽的方言)是必须要学习的。
-
+如果你想在客户端进行网络编程,那么特定语言 HTML、CSS 和 JavaScript(看似无穷无尽的方言之一)是必须要学习的。
-如果你想在服务器端进行 Web 编程,那么选项包括常见的通用语言:C++, Golang, Java, C#, Node.js, Perl, Python, Ruby 等等。当然,服务器程序与数据存储(例如关系数据库和其他数据库)打交道,这意味着 SQL 等查询语言可能会发挥作用。
+如果你想在服务器端进行 Web 编程,那么选择包括常见的通用语言:C++、Golang、Java、C#、 Node.js、Perl、Python、Ruby 等等。当然,服务器程序与数据存储(例如关系数据库和其他数据库)打交道,这意味着 SQL 等查询语言可能会发挥作用。
-如果你正在为移动设备编写本地应用程序,那么了解目标平台非常重要。对于 Apple 设备,Swift 已经取代 Objective C 成为首选语言。对于 Android 设备,Java(带有专用库和工具集)仍然是主要语言。有一些特殊语言,如 与 C# 一起使用的 Xamarin,可以为 Apple、Android 和 Windows 设备生成特定于平台的代码。
+如果你正在为移动设备编写原生应用程序,那么了解目标平台非常重要。对于 Apple 设备,Swift 已经取代 Objective C 成为首选语言。对于 Android 设备,Java(带有专用库和工具集)仍然是主要语言。有一些特殊语言,如与 C# 一起使用的 Xamarin,可以为 Apple、Android 和 Windows 设备生成特定于平台的代码。
-那么通用语言呢?通常有各种各样的选择。在*动态*或*脚本*语言(如 Perl、Python 和 Ruby)中,有一些新东西,如 Node.js。java 和 C# 的相似之处比它们的粉丝愿意承认的还要多,仍然是针对虚拟机(分别是 JVM 和 CLR)的主要*静态编译*语言。在编译为*原生可执行文件*的语言中,C++ 仍然处于混合状态,以及后来的 Golang 和 Rust 等。通用*函数*语言比比皆是(如 Clojure、Haskell、Erlang、F#、Lisp 和 Scala),它们通常都有热情投入的社区。值得注意的是,面向对象语言(如 Java 和 C#)已经添加了函数构造(特别是 lambdas),而动态语言从一开始就有函数构造。
+那么通用语言呢?通常有各种各样的选择。在*动态*或*脚本*语言(如 Perl、Python 和 Ruby)中,有一些新东西,如 Node.js。而 Java 和 C# 的相似之处比它们的粉丝愿意承认的还要多,仍然是针对虚拟机(分别是 JVM 和 CLR)的主要*静态编译*语言。在可以编译为*原生可执行文件*的语言中,C++ 仍在使用,还有后来出现的 Golang 和 Rust 等。通用的*函数式*语言比比皆是(如 Clojure、Haskell、Erlang、F#、Lisp 和 Scala),它们通常都有热情投入的社区。值得注意的是,面向对象语言(如 Java 和 C#)已经添加了函数式构造(特别是 lambdas),而动态语言从一开始就有函数式构造。
-让我以 C 语言结尾,它是一种小巧,优雅,可扩展的语言,不要与 C++ 混淆。现代操作系统主要用 C 语言编写,其余的用汇编语言编写。任何平台上的标准库大多数都是用 C 语言编写的。例如,任何打印 `Hello, world!` 这种问候都是通过调用名为 **write** 的 C 库函数来实现的。
+让我以 C 语言结尾,它是一种小巧、优雅、可扩展的语言,不要与 C++ 混淆。现代操作系统主要用 C 语言编写,其余部分用汇编语言编写。任何平台上的标准库大多数都是用 C 语言编写的。例如,任何打印 `Hello, world!` 这种问候都是通过调用名为 `write` 的 C 库函数来实现的。
-C 作为一种可移植的汇编语言,公开了其他高级语言有意隐藏的底层系统的详细信息。因此,理解 C 可以更好地掌握程序如何竞争执行所需的共享系统资源(如处理器,内存和 I/O 设备)。C 语言既高级又接近硬件,因此在性能方面无与伦比,当然,汇编语言除外。最后,C 是编程语言中的通用语言,几乎所有通用语言都支持一种或另一种形式的 C 调用。
+C 作为一种可移植的汇编语言,公开了其他高级语言有意隐藏的底层系统的详细信息。因此,理解 C 可以更好地掌握程序如何竞争执行所需的共享系统资源(如处理器、内存和 I/O 设备)。C 语言既高级又接近硬件,因此在性能方面无与伦比,当然,汇编语言除外。最后,C 是编程语言中的通用语言,几乎所有通用语言都支持某种形式的 C 调用。
-有关现代 C 语言的介绍,参考我的书籍 [C Programming: Introducing Portable Assembler][1]。无论你怎么做,学习 C 语言,你会学到比另一种编程语言多得多的东西。
+有关现代 C 语言的介绍,参考我的书籍 《[C 语言编程:可移植的汇编器介绍][1]》。无论你怎么做,学习 C 语言,你会学到比另一种编程语言多得多的东西。
你认为学习哪些编程语言很重要?你是否同意这些建议?在评论告知我们!
@@ -37,8 +38,8 @@ via: https://opensource.com/article/19/2/which-programming-languages-should-you-
作者:[Marty Kalin][a]
选题:[lujun9972][b]
- -->
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
From 4c07eab74ce995451f83d28f020e96e1428cc6d7 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Wed, 24 Apr 2019 08:29:46 +0800
Subject: [PATCH 0122/1154] PUB:20190208 Which programming languages should you
learn.md
@MjSeven https://linux.cn/article-10769-1.html
---
.../20190208 Which programming languages should you learn.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
rename {translated/talk => published}/20190208 Which programming languages should you learn.md (98%)
diff --git a/translated/talk/20190208 Which programming languages should you learn.md b/published/20190208 Which programming languages should you learn.md
similarity index 98%
rename from translated/talk/20190208 Which programming languages should you learn.md
rename to published/20190208 Which programming languages should you learn.md
index b0dcb5564e..6535e6cd64 100644
--- a/translated/talk/20190208 Which programming languages should you learn.md
+++ b/published/20190208 Which programming languages should you learn.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10769-1.html)
[#]: subject: (Which programming languages should you learn?)
[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
From 088ce7f80776f9042d32c6a4f35e091e44083120 Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Wed, 24 Apr 2019 08:30:18 +0800
Subject: [PATCH 0123/1154] [Translated] 20190415 Inter-process communication
in Linux- Shared storage.md
Signed-off-by: Chang Liu
---
... communication in Linux- Shared storage.md | 419 -----------------
... communication in Linux- Shared storage.md | 435 ++++++++++++++++++
2 files changed, 435 insertions(+), 419 deletions(-)
delete mode 100644 sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
create mode 100644 translated/tech/20190415 Inter-process communication in Linux- Shared storage.md
diff --git a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md b/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
deleted file mode 100644
index de0a8ffdc1..0000000000
--- a/sources/tech/20190415 Inter-process communication in Linux- Shared storage.md
+++ /dev/null
@@ -1,419 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (FSSlc)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Inter-process communication in Linux: Shared storage)
-[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
-[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
-
-Inter-process communication in Linux: Shared storage
-======
-Learn how processes synchronize with each other in Linux.
-![Filing papers and documents][1]
-
-This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
-
- * Shared files
- * Shared memory (with semaphores)
- * Pipes (named and unnamed)
- * Message queues
- * Sockets
- * Signals
-
-
-
-This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
-
-### Core concepts
-
-A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one thread. Threads within a process share various resources, in particular, address space. Accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
-
-There are various ways to launch processes that then communicate, and two ways dominate in the examples that follow:
-
- * A terminal is used to start one process, and perhaps a different terminal is used to start another.
- * The system function **fork** is called within one process (the parent) to spawn another process (the child).
-
-
-
-The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
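-
-As a warm-up for the second approach, here is a minimal sketch of **fork** in action (a standalone illustration, not one of the packaged examples):
-
-```
-#include <stdio.h>
-#include <sys/wait.h>
-#include <unistd.h>
-
-int main() {
-  pid_t pid = fork();     /* spawn a child: a near-copy of the parent */
-  if (pid < 0) return -1; /* fork failed */
-  if (pid == 0) {         /* child branch */
-    printf("child: pid %d\n", getpid());
-  } else {                /* parent branch */
-    wait(NULL);           /* wait for the child to exit */
-    printf("parent: pid %d\n", getpid());
-  }
-  return 0;
-}
-```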
-
-### Shared files
-
-Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process (_producer_) creates and writes to a file, and another process (_consumer_) reads from this same file:
-
-
-```
-         writes +-----------+ reads
-producer-------->| disk file |<-------consumer
-                 +-----------+
-```
-
-The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
-
- * A producer should gain an exclusive lock on the file before writing to the file. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
- * A consumer should gain at least a shared lock on the file before reading from the file. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access a file when even a single _reader_ holds a shared lock.
-
-
-
-A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to a file.
-
-The standard system library includes a utility function named **fcntl** that can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_, a non-negative integer value that identifies a file within a process. (Different file descriptors in different processes may identify the same physical file.) For file locking, Linux also provides **flock**, a simpler alternative to **fcntl**. The first example uses the **fcntl** function to expose API details.
-
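-For comparison, here is a minimal sketch (not one of the article's examples) of the coarser **flock** interface, which locks a whole file rather than a byte range:
-
-```c
-/* Sketch only: whole-file locking with flock. */
-#include <fcntl.h>
-#include <stdio.h>
-#include <sys/file.h>
-#include <unistd.h>
-
-int main() {
-  int fd = open("data.dat", O_RDWR | O_CREAT, 0666);
-  if (fd < 0) { perror("open"); return 1; }
-
-  if (flock(fd, LOCK_EX | LOCK_NB) < 0) /* LOCK_NB: fail rather than block */
-    perror("file is locked by another process");
-  else {
-    puts("got the exclusive lock");
-    flock(fd, LOCK_UN); /* release */
-  }
-  close(fd); /* closing the descriptor also releases the lock */
-  return 0;
-}
-```
-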
-#### Example 1. The _producer_ program
-
-
-```
-#include <fcntl.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <unistd.h>
-
-#define FileName "data.dat"
-#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
-
-void report_and_exit(const char* msg) {
-  perror(msg);
-  exit(-1); /* EXIT_FAILURE */
-}
-
-int main() {
-  struct flock lock;
-  lock.l_type = F_WRLCK;    /* read/write (exclusive versus shared) lock */
-  lock.l_whence = SEEK_SET; /* base for seek offsets */
-  lock.l_start = 0;         /* 1st byte in file */
-  lock.l_len = 0;           /* 0 here means 'until EOF' */
-  lock.l_pid = getpid();    /* process id */
-
-  int fd; /* file descriptor to identify a file within a process */
-  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
-    report_and_exit("open failed...");
-
-  if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
-    report_and_exit("fcntl failed to get lock...");
-  else {
-    write(fd, DataString, strlen(DataString)); /* populate data file */
-    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
-  }
-
-  /* Now release the lock explicitly. */
-  lock.l_type = F_UNLCK;
-  if (fcntl(fd, F_SETLK, &lock) < 0)
-    report_and_exit("explicit unlocking failed...");
-
-  close(fd); /* close the file: would unlock if needed */
-  return 0;  /* terminating the process would unlock as well */
-}
-```
-
-The main steps in the _producer_ program above can be summarized as follows:
-
- * The program declares a variable of type **struct flock**, which represents a lock, and initializes the structure's five fields. The first initialization, `lock.l_type = F_WRLCK; /* exclusive lock */`, makes the lock an exclusive (_read-write_) rather than a shared (_read-only_) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files are closed automatically, thereby releasing the lock.)
- * The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API also allows locking only designated bytes. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
- * The first call to **fcntl**, `if (fcntl(fd, F_SETLK, &lock) < 0)`, tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_) were used instead, the call to **fcntl** would block until gaining the lock was possible; a short sketch of this blocking variant follows the list. In the calls to **fcntl**, the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **&lock**).
- * If the _producer_ gains the lock, the program writes two text records to the file.
- * After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value, `lock.l_type = F_UNLCK;`, and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
-
-
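-A short sketch of the blocking variant mentioned in the list: with **F_SETLKW** in place of **F_SETLK**, the same call suspends the caller until the lock can be granted, which suits a program that would rather wait than fail.
-
-```c
-/* Sketch only: blocking lock acquisition with F_SETLKW. */
-if (fcntl(fd, F_SETLKW, &lock) < 0) /* returns only once the lock is held (or on error) */
-  report_and_exit("fcntl failed to get lock...");
-```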
-
-#### Example 2. The _consumer_ program
-
-
-```
-#include <fcntl.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-
-#define FileName "data.dat"
-
-void report_and_exit(const char* msg) {
-  perror(msg);
-  exit(-1); /* EXIT_FAILURE */
-}
-
-int main() {
-  struct flock lock;
-  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
-  lock.l_whence = SEEK_SET; /* base for seek offsets */
-  lock.l_start = 0;         /* 1st byte in file */
-  lock.l_len = 0;           /* 0 here means 'until EOF' */
-  lock.l_pid = getpid();    /* process id */
-
-  int fd; /* file descriptor to identify a file within a process */
-  if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
-    report_and_exit("open to read failed...");
-
-  /* If the file is write-locked, we can't continue. */
-  fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
-  if (lock.l_type != F_UNLCK)
-    report_and_exit("file is still write locked...");
-
-  lock.l_type = F_RDLCK; /* prevents any writing during the reading */
-  if (fcntl(fd, F_SETLK, &lock) < 0)
-    report_and_exit("can't get a read-only lock...");
-
-  /* Read the bytes (they happen to be ASCII codes) one at a time. */
-  int c; /* buffer for read bytes */
-  while (read(fd, &c, 1) > 0)    /* 0 signals EOF */
-    write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
-
-  /* Release the lock explicitly. */
-  lock.l_type = F_UNLCK;
-  if (fcntl(fd, F_SETLK, &lock) < 0)
-    report_and_exit("explicit unlocking failed...");
-
-  close(fd);
-  return 0;
-}
-```
-
-The _consumer_ program is deliberately more complicated than necessary, in order to highlight features of the locking API. In particular, the _consumer_ program first checks whether the file is exclusively locked and only then tries to gain a shared lock. The relevant code is:
-
-
-```
-lock.l_type = F_WRLCK;
-...
-fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
-if (lock.l_type != F_UNLCK)
-  report_and_exit("file is still write locked...");
-```
-
-The **F_GETLK** operation specified in the **fcntl** call checks for a lock, in this case, an exclusive lock given as **F_WRLCK** in the first statement above. If the specified lock does not exist, then the **fcntl** call automatically changes the lock type field to **F_UNLCK** to indicate this fact. If the file is exclusively locked, the _consumer_ terminates. (A more robust version of the program might have the _consumer_ **sleep** a bit and try again several times.)
-
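-Such a retry loop is not part of the _consumer_ program; a sketch of the idea, reusing the program's **fd** and **lock** variables (the constant **MaxTries** is illustrative only), might look like this:
-
-```c
-/* Sketch only: poll a few times for the write lock to clear before giving up. */
-int tries = 0;
-while (tries++ < MaxTries) {
-  lock.l_type = F_WRLCK;     /* describe the lock to test for */
-  fcntl(fd, F_GETLK, &lock); /* rewritten as F_UNLCK if nothing conflicts */
-  if (lock.l_type == F_UNLCK) break;
-  sleep(1);                  /* wait a second, then try again */
-}
-if (lock.l_type != F_UNLCK)
-  report_and_exit("file is still write locked...");
-```
-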
-If the file is not currently locked, then the _consumer_ tries to gain a shared (_read-only_) lock (**F_RDLCK**). To shorten the program, the **F_GETLK** call to **fcntl** could be dropped because the **F_RDLCK** call would fail if a _read-write_ lock were already held by some other process. Recall that a _read-only_ lock does prevent any other process from writing to the file but allows other processes to read from it. In short, a _shared_ lock can be held by multiple processes. After gaining a shared lock, the _consumer_ program reads the bytes one at a time from the file, prints the bytes to the standard output, releases the lock, closes the file, and terminates.
-
-Here is the output from the two programs launched from the same terminal with **%** as the command line prompt:
-
-
-```
-% ./producer
-Process 29255 has written to data file...
-
-% ./consumer
-Now is the winter of our discontent
-Made glorious summer by this sun of York
-```
-
-In this first code example, the data shared through IPC is text: two lines from Shakespeare's play _Richard III_. Yet, the shared file's contents could be voluminous, arbitrary bytes (e.g., a digitized movie), which makes file sharing an impressively flexible IPC mechanism. The downside is that file access is relatively slow, whether the access involves reading or writing. As always, programming comes with tradeoffs. The next example shows the upside of IPC through shared memory rather than shared files, with a corresponding boost in performance.
-
-### Shared memory
-
-Linux systems provide two separate APIs for shared memory: the legacy System V API and the more recent POSIX one. These APIs should never be mixed in a single application, however. A downside of the POSIX approach is that features are still in development and dependent upon the installed kernel version, which impacts code portability. For example, the POSIX API, by default, implements shared memory as a _memory-mapped file_ : for a shared memory segment, the system maintains a _backing file_ with corresponding contents. Shared memory under POSIX can be configured without a backing file, but this may impact portability. My example uses the POSIX API with a backing file, which combines the benefits of memory access (speed) and file storage (persistence).
-
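-For contrast, here is a minimal sketch (not used in this article's examples) of the legacy System V calls, in which **shmget** allocates a segment and **shmat** attaches it to the caller's address space:
-
-```c
-/* Sketch only: the System V shared-memory API. */
-#include <stdio.h>
-#include <sys/ipc.h>
-#include <sys/shm.h>
-
-int main() {
-  int id = shmget(IPC_PRIVATE, 512, IPC_CREAT | 0666); /* allocate 512 bytes */
-  if (id < 0) { perror("shmget"); return 1; }
-  void* ptr = shmat(id, NULL, 0);                      /* attach the segment */
-  if (ptr == (void*) -1) { perror("shmat"); return 1; }
-  /* ... read and write through ptr ... */
-  shmdt(ptr);                 /* detach */
-  shmctl(id, IPC_RMID, NULL); /* mark the segment for removal */
-  return 0;
-}
-```
-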
-The shared-memory example has two programs, named _memwriter_ and _memreader_ , and uses a _semaphore_ to coordinate their access to the shared memory. Whenever shared memory comes into the picture with a _writer_ , whether in multi-processing or multi-threading, so does the risk of a memory-based race condition; hence, the semaphore is used to coordinate (synchronize) access to the shared memory.
-
-The _memwriter_ program should be started first in its own terminal. The _memreader_ program then can be started (within a dozen seconds) in its own terminal. The output from the _memreader_ is:
-
-
-```
-This is the way the world ends...
-```
-
-Each source file has documentation at the top explaining the link flags to be included during compilation.
-
-Let's start with a review of how semaphores work as a synchronization mechanism. A general semaphore is also called a _counting semaphore_, as it has a value (typically initialized to zero) that can be incremented. Consider a shop that rents bicycles, with a hundred of them in stock, and a program that clerks use to process the rentals. Every time a bike is rented, the semaphore is incremented by one; when a bike is returned, the semaphore is decremented by one. Rentals can continue until the value hits 100 but then must halt until at least one bike is returned, thereby decrementing the semaphore to 99.
-
-A _binary semaphore_ is a special case requiring only two values: 0 and 1. In this situation, a semaphore acts as a _mutex_ : a mutual exclusion construct. The shared-memory example uses a semaphore as a mutex. When the semaphore's value is 0, the _memwriter_ alone can access the shared memory. After writing, this process increments the semaphore's value, thereby allowing the _memreader_ to read the shared memory.
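-
-A minimal sketch (not one of the article's examples) of a counting semaphore, using the POSIX unnamed-semaphore calls; note that the count here tracks _available_ bikes, so the bookkeeping runs in the opposite direction from the shop description above, but the blocking behavior is the same:
-
-```c
-/* Sketch only: a counting semaphore for the bicycle-shop scenario.
-   Compile with -lpthread on older glibc versions. */
-#include <semaphore.h>
-#include <stdio.h>
-
-int main() {
-  sem_t bikes;
-  sem_init(&bikes, 0, 100); /* 2nd arg 0: shared among threads, not processes */
-
-  sem_wait(&bikes); /* rent a bike: decrement, blocking if none is available */
-  sem_post(&bikes); /* return a bike: increment */
-
-  int available;
-  sem_getvalue(&bikes, &available);
-  printf("bikes available: %d\n", available);
-
-  sem_destroy(&bikes);
-  return 0;
-}
-```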
-
-#### Example 3. Source code for the _memwriter_ process
-
-
-```
-/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
-#include <fcntl.h>
-#include <semaphore.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <sys/mman.h>
-#include <sys/stat.h>
-#include <sys/types.h>
-#include <unistd.h>
-#include "shmem.h"
-
-void report_and_exit(const char* msg) {
-  perror(msg);
-  exit(-1);
-}
-
-int main() {
-  int fd = shm_open(BackingFile,      /* name from shmem.h */
-                    O_RDWR | O_CREAT, /* read/write, create if needed */
-                    AccessPerms);     /* access permissions (0644) */
-  if (fd < 0) report_and_exit("Can't open shared mem segment...");
-
-  ftruncate(fd, ByteSize); /* get the bytes */
-
-  caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
-                        ByteSize,               /* how many bytes */
-                        PROT_READ | PROT_WRITE, /* access protections */
-                        MAP_SHARED,             /* mapping visible to other processes */
-                        fd,                     /* file descriptor */
-                        0);                     /* offset: start at 1st byte */
-  if (memptr == MAP_FAILED) report_and_exit("Can't get segment...");
-
-  fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
-  fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile);
-
-  /* semaphore code to lock the shared mem */
-  sem_t* semptr = sem_open(SemaphoreName, /* name */
-                           O_CREAT,       /* create the semaphore */
-                           AccessPerms,   /* protection perms */
-                           0);            /* initial value */
-  if (semptr == SEM_FAILED) report_and_exit("sem_open");
-
-  strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
-
-  /* increment the semaphore so that memreader can read */
-  if (sem_post(semptr) < 0) report_and_exit("sem_post");
-
-  sleep(12); /* give reader a chance */
-
-  /* clean up */
-  munmap(memptr, ByteSize); /* unmap the storage */
-  close(fd);
-  sem_close(semptr);
-  shm_unlink(BackingFile); /* unlink from the backing file */
-  return 0;
-}
-```
-
-Here's an overview of how the _memwriter_ and _memreader_ programs communicate through shared memory:
-
- * The _memwriter_ program, shown above, calls the **shm_open** function to get a file descriptor for the backing file that the system coordinates with the shared memory. At this point, no memory has been allocated. The subsequent call to the misleadingly named function **ftruncate**, `ftruncate(fd, ByteSize); /* get the bytes */`, allocates **ByteSize** bytes, in this case, a modest 512 bytes. The _memwriter_ and _memreader_ programs access the shared memory only, not the backing file. The system is responsible for synchronizing the shared memory and the backing file.
- * The _memwriter_ then calls the **mmap** function:
-```c
-caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
-                      ByteSize,               /* how many bytes */
-                      PROT_READ | PROT_WRITE, /* access protections */
-                      MAP_SHARED,             /* mapping visible to other processes */
-                      fd,                     /* file descriptor */
-                      0);                     /* offset: start at 1st byte */
-```
-to get a pointer to the shared memory. (The _memreader_ makes a similar call.) The pointer type **caddr_t** is an old BSD alias for a character pointer, short for _core address_. The _memwriter_ uses **memptr** for the later _write_ operation, using the library **strcpy** (string copy) function.
- * At this point, the _memwriter_ is ready for writing, but it first creates a semaphore to ensure exclusive access to the shared memory. A race condition would occur if the _memwriter_ were writing while the _memreader_ was reading. If the call to **sem_open** succeeds:
-```c
-sem_t* semptr = sem_open(SemaphoreName, /* name */
-                         O_CREAT,       /* create the semaphore */
-                         AccessPerms,   /* protection perms */
-                         0);            /* initial value */
-```
-then the writing can proceed. The **SemaphoreName** (any unique non-empty name will do) identifies the semaphore in both the _memwriter_ and the _memreader_. The initial value of zero gives the semaphore's creator, in this case the _memwriter_, the right to proceed, in this case to the _write_ operation.
- * After writing, the _memwriter_ increments the semaphore value to 1 with a call to the **sem_post** function: `if (sem_post(semptr) < 0) ..`. Incrementing the semaphore releases the mutex lock and enables the _memreader_ to perform its _read_ operation. For good measure, the _memwriter_ also unmaps the shared memory from the _memwriter_ address space, `munmap(memptr, ByteSize); /* unmap the storage */`, which bars the _memwriter_ from further access to the shared memory.
-
-
-
-#### Example 4. Source code for the _memreader_ process
-
-
-```
-/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
-#include <fcntl.h>
-#include <semaphore.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <sys/mman.h>
-#include <sys/stat.h>
-#include <sys/types.h>
-#include <unistd.h>
-#include "shmem.h"
-
-void report_and_exit(const char* msg) {
-  perror(msg);
-  exit(-1);
-}
-
-int main() {
-  int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
-  if (fd < 0) report_and_exit("Can't get file descriptor...");
-
-  /* get a pointer to memory */
-  caddr_t memptr = mmap(NULL,                   /* let system pick where to put segment */
-                        ByteSize,               /* how many bytes */
-                        PROT_READ | PROT_WRITE, /* access protections */
-                        MAP_SHARED,             /* mapping visible to other processes */
-                        fd,                     /* file descriptor */
-                        0);                     /* offset: start at 1st byte */
-  if (memptr == MAP_FAILED) report_and_exit("Can't access segment...");
-
-  /* create a semaphore for mutual exclusion */
-  sem_t* semptr = sem_open(SemaphoreName, /* name */
-                           O_CREAT,       /* create the semaphore */
-                           AccessPerms,   /* protection perms */
-                           0);            /* initial value */
-  if (semptr == SEM_FAILED) report_and_exit("sem_open");
-
-  /* use semaphore as a mutex (lock) by waiting for writer to increment it */
-  if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
-    int i;
-    for (i = 0; i < strlen(MemContents); i++)
-      write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
-    sem_post(semptr);
-  }
-
-  /* cleanup */
-  munmap(memptr, ByteSize);
-  close(fd);
-  sem_close(semptr);
-  shm_unlink(BackingFile); /* shm_unlink, not unlink: removes the backing file */
-  return 0;
-}
-```
-
-In both the _memwriter_ and _memreader_ programs, the shared-memory functions of main interest are **shm_open** and **mmap** : on success, the first call returns a file descriptor for the backing file, which the second call then uses to get a pointer to the shared memory segment. The calls to **shm_open** are similar in the two programs except that the _memwriter_ program creates the shared memory, whereas the _memreader_ only accesses this already created memory:
-
-
-```
-int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
-int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
-```
-
-With a file descriptor in hand, the calls to **mmap** are the same:
-
-
-```
-caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
-```
-
-The first argument to **mmap** is **NULL** , which means that the system determines where to allocate the memory in virtual address space. It's possible (but tricky) to specify an address instead. The **MAP_SHARED** flag indicates that the allocated memory is shareable among processes, and the last argument (in this case, zero) means that the offset for the shared memory should be the first byte. The **size** argument specifies the number of bytes to be allocated (in this case, 512), and the protection argument indicates that the shared memory can be written and read.
-
-When the _memwriter_ program executes successfully, the system creates and maintains the backing file; on my system, the file is _/dev/shm/shMemEx_ , with _shMemEx_ as my name (given in the header file _shmem.h_ ) for the shared storage. In the current version of the _memwriter_ and _memreader_ programs, the statement:
-
-
-```
-shm_unlink(BackingFile); /* removes backing file */
-```
-
-removes the backing file. If the **shm_unlink** statement is omitted, then the backing file persists after the program terminates.
-
-The _memreader_ , like the _memwriter_ , accesses the semaphore through its name in a call to **sem_open**. But the _memreader_ then goes into a wait state until the _memwriter_ increments the semaphore, whose initial value is 0:
-
-
-```
-if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
-```
-
-Once the wait is over, the _memreader_ reads the ASCII bytes from the shared memory, cleans up, and terminates.
-
-The shared-memory API includes operations explicitly to synchronize the shared memory segment and the backing file. These operations have been omitted from the example to reduce clutter and keep the focus on the memory-sharing and semaphore code.
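-
-For instance, a call along these lines (a sketch, reusing the example programs' **memptr**, **ByteSize**, and **report_and_exit**, rather than code from the programs themselves) would force the segment's contents out to the backing file:
-
-```c
-/* Sketch only: msync flushes the mapped bytes to the backing file.
-   MS_SYNC blocks until the write completes; MS_ASYNC merely schedules it. */
-if (msync(memptr, ByteSize, MS_SYNC) < 0)
-  report_and_exit("msync failed...");
-```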
-
-The _memwriter_ and _memreader_ programs are likely to execute without inducing a race condition even if the semaphore code is removed: the _memwriter_ creates the shared memory segment and writes immediately to it, and the _memreader_ cannot even access the shared memory until it has been created. However, best practice requires that shared-memory access be synchronized whenever a _write_ operation is in the mix, and the semaphore API is important enough to be highlighted in a code example.
-
-### Wrapping up
-
-The shared-file and shared-memory examples show how processes can communicate through _shared storage_, files in one case and memory segments in the other. The APIs for both approaches are relatively straightforward. Do these approaches have a common downside? Modern applications often deal with streaming data, indeed with massive streams of it. Neither the shared-file nor the shared-memory approach is well suited for massive data streams. Channels of one type or another are better suited, so Part 2 introduces channels and message queues, again with code examples in C.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
-
-Author: [Marty Kalin][a]
-Topic selection: [lujun9972][b]
-Translator: [FSSlc](https://github.com/FSSlc)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
-
-[a]: https://opensource.com/users/mkalindepauledu
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
-[2]: https://en.wikipedia.org/wiki/Inter-process_communication
-[3]: http://condor.depaul.edu/mkalin
-[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
-[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
-[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
-[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
-[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
diff --git a/translated/tech/20190415 Inter-process communication in Linux- Shared storage.md b/translated/tech/20190415 Inter-process communication in Linux- Shared storage.md
new file mode 100644
index 0000000000..3ee39094a6
--- /dev/null
+++ b/translated/tech/20190415 Inter-process communication in Linux- Shared storage.md
@@ -0,0 +1,435 @@
+[#]: collector: (lujun9972)
+[#]: translator: (FSSlc)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Inter-process communication in Linux: Shared storage)
+[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+
+Inter-process communication in Linux: Shared storage
+======
+
+Learn how processes synchronize with each other in Linux.
+![Filing papers and documents][1]
+
+This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
+
+ * Shared files
+ * Shared memory (with semaphores)
+ * Pipes (named and unnamed)
+ * Message queues
+ * Sockets
+ * Signals
+
+This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
+
+### Core concepts
+
+A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one. Threads within a process share various resources, in particular, address space; accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
+
+There are various ways to launch processes that then communicate; the examples below rely mainly on these two:
+
+ * A terminal is used to start one process, and perhaps a different terminal is used to start another.
+ * The system function **fork** is called within one process (the parent) to spawn another process (the child).
+
+The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
+
+### Shared files
+
+Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process (the _producer_) creates and writes to a file, and another process (the _consumer_) reads from this same file:
+
+```
+ writes +-----------+ reads
+producer-------->| disk file |<-------consumer
+ +-----------+
+```
+
+The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
+
+ * A producer should gain an exclusive lock on the file before writing to it. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
+ * A consumer should gain at least a shared lock on the file before reading from it. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access the file while even a single _reader_ holds a shared lock.
+
+A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to the file.
+
+The standard system library includes a utility function named **fcntl** that can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_, a non-negative integer value that identifies a file within a process. (Different file descriptors in different processes may identify the same physical file.) For file locking, Linux also provides **flock**, a simpler alternative to **fcntl**. The first example uses the **fcntl** function to expose API details.
+
+#### Example 1. The _producer_ program
+
+```c
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#define FileName "data.dat"
+#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
+
+void report_and_exit(const char* msg) {
+  perror(msg);
+  exit(-1); /* EXIT_FAILURE */
+}
+
+int main() {
+  struct flock lock;
+  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
+  lock.l_whence = SEEK_SET; /* base for seek offsets */
+  lock.l_start = 0;         /* 1st byte in file */
+  lock.l_len = 0;           /* 0 here means 'until EOF' */
+  lock.l_pid = getpid();    /* process id */
+
+  int fd; /* file descriptor to identify a file within a process */
+  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
+    report_and_exit("open failed...");
+
+  if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
+    report_and_exit("fcntl failed to get lock...");
+  else {
+    write(fd, DataString, strlen(DataString)); /* populate data file */
+    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
+  }
+
+  /* Now release the lock explicitly. */
+  lock.l_type = F_UNLCK;
+  if (fcntl(fd, F_SETLK, &lock) < 0)
+    report_and_exit("explicit unlocking failed...");
+
+  close(fd); /* close the file: would unlock if needed */
+  return 0;  /* terminating the process would unlock as well */
+}
+```
+
+The main steps in the _producer_ program above can be summarized as follows:
+
+ * The program declares a variable of type **struct flock**, which represents a lock, and initializes the structure's five fields. The first initialization:
+```c
+lock.l_type = F_WRLCK; /* exclusive lock */
+```
+makes the lock an exclusive (_read-write_) rather than a shared (_read-only_) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files are closed automatically, thereby releasing the lock.)
+ * The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API also allows locking only designated bytes. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
+ * The first call to **fcntl**:
+```c
+if (fcntl(fd, F_SETLK, &lock) < 0)
+```
+tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_) were used instead, the call to **fcntl** would block until gaining the lock was possible. In the calls to **fcntl**, the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **&lock**).
+ * If the _producer_ gains the lock, the program writes two text records to the file.
+ * After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value:
+```c
+lock.l_type = F_UNLCK;
+```
+and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
+
+#### Example 2. The _consumer_ program
+
+```c
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>