Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-08-16 11:04:19 +08:00
commit b68ce9c44e
18 changed files with 1840 additions and 169 deletions


使用 MQTT 在项目中实现数据收发
======
> 从开源数据到开源事件流,了解一下 MQTT 发布/订阅pub/sub线路协议。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)
去年 11 月我们购买了一辆电动汽车,同时也引发了有趣的思考:我们应该什么时候为电动汽车充电?对于电动汽车充电所用的电,我希望能够对应最小的二氧化碳排放,归结为一个特定的问题:对于任意给定时刻,每千瓦时对应的二氧化碳排放量是多少,一天中什么时间这个值最低?
### 寻找数据
我住在纽约州,大约 80% 的电力消耗可以自给自足,主要来自天然气、水坝(大部分来自于<ruby>尼亚加拉<rt>Niagara</rt></ruby>大瀑布)、核能发电,少部分来自风力、太阳能和其它化石燃料发电。非盈利性组织 [<ruby>纽约独立电网运营商<rt>New York Independent System Operator</rt></ruby>][1]NYISO负责整个系统的运作实现发电机组发电与用电之间的平衡同时也是纽约路灯系统的监管部门。
尽管没有为公众提供公开 API但 NYISO 还是尽责提供了[不少公开数据][2]供公众使用,每隔 5 分钟汇报一次全州各个发电机组消耗的燃料数据。数据以 CSV 文件的形式发布于公开的档案库中,全天更新。如果你了解不同燃料对发电瓦数的贡献比例,你可以比较准确地估计任意时刻的二氧化碳排放情况。
在构建收集处理公开数据的工具时,我们应该时刻避免过度使用这些资源。相比将这些数据打包并发送给所有人,我们有更好的方案。我们可以创建一个低开销的<ruby>事件流<rt>event stream</rt></ruby>,人们可以订阅并第一时间得到消息。我们可以使用 [MQTT][3] 实现该方案。我的项目([ny-power.org][4])的目标是收录到 [Home Assistant][5] 项目中;后者是一个开源的<ruby>家庭自动化<rt>home automation</rt></ruby>平台,拥有数十万用户。如果所有用户同时访问 CSV 文件服务器,估计 NYISO 不得不增加访问限制。
### MQTT 是什么?
MQTT 是一个<ruby>发布订阅线协议<rt>publish/subscription wire protocol</rt></ruby>,为小规模设备设计。发布订阅系统工作原理类似于消息总线。你将一条消息发布到一个<ruby>主题<rt>topic</rt></ruby>上,那么所有订阅了该主题的客户端都可以获得该消息的一份拷贝。对于消息发送者而言,无需知道哪些人在订阅消息;你只需将消息发布到一系列主题,订阅一些你感兴趣的主题。就像参加了一场聚会,你选取并加入感兴趣的对话。
MQTT 能够构建极为高效的应用。客户端订阅有限的几个主题,也只收到它们感兴趣的内容。不仅节省了处理时间,还降低了网络带宽使用。
作为一个开放标准MQTT 有很多开源的客户端和服务端实现。对于你能想到的每种编程语言,都有对应的客户端库;甚至有嵌入到 Arduino 的库,可以构建传感器网络。服务端可供选择的也很多,我的选择是 Eclipse 项目提供的 [Mosquitto][6] 服务端,这是因为它体积小、用 C 编写,可以承载数以万计的订阅者。
NYISO 公布的 CSV 文件中有一个是实时的燃料混合使用情况。每 5 分钟NYISO 都会发布这 5 分钟内发电使用的燃料类型和相应的发电量(以兆瓦为单位)。
这个 CSV 文件看起来像这样:
| 时间戳 | 时区 | 燃料类型 | 发电量(兆瓦) |
| --- | --- | --- | --- |
```
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
```
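为了更具体一些,下面给出数据泵的一个最小示意。这只是我为说明而写的草图,并非该项目的实际实现;它假设本地运行着 Mosquitto其中的字段名也只是演示用的假设并非 NYISO CSV 的真实列名:
```
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)  # 假设:本地运行着 Mosquitto

# 演示用的一行燃料数据(字段名为假设)
row = {"ts": "05/09/2018 00:05:00", "fuel": "Wind", "mw": 41}

topic = "ny-power/upstream/fuel-mix/" + row["fuel"]
payload = json.dumps({"units": "MW", "value": row["mw"], "ts": row["ts"]})
client.publish(topic, payload)  # 把这一行数据变成一条公开事件
client.disconnect()
```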
这种直接的转换是种不错的尝试,可将公开数据转换为公开事件。我们后续会继续将数据转换为二氧化碳排放强度,但这些原始数据还可被其它应用使用,用于其它计算用途。
主题和<ruby>主题结构<rt>topic structure</rt></ruby>是 MQTT 的一个主要特色。与其它标准的企业级消息总线不同MQTT 的主题无需事先注册。发送者可以凭空创建主题,唯一的限制是主题的长度,不超过 220 字符。其中 `/` 字符有特殊含义,用于创建主题的层次结构。我们即将看到,你可以订阅这些层次中的一些分片。
基于开箱即用的 Mosquitto任何一个客户端都可以向任何主题发布消息。在原型设计过程中这种方式十分便利但一旦部署到生产环境你需要增加<ruby>访问控制列表<rt>access control list</rt></ruby>ACL只允许授权的应用发布消息。例如任何人都能以只读的方式访问我的应用的主题层级但只有那些具有特定<ruby>凭证<rt>credentials</rt></ruby>的客户端可以发布内容。
主题中不包含<ruby>自动样式<rt>automatic schema</rt></ruby>,也没有方法查找客户端可以发布的全部主题。因此,对于那些从 MQTT 总线消费数据的应用,你需要让其直接使用已知的主题和消息格式样式。
* `#` 以递归方式匹配,直到字符串结束
* `+` 匹配下一个 `/` 之前的内容
为便于理解,下面给出几个例子:
```
ny-power/#  - 匹配 ny-power 应用发布的全部主题
ny-power/upstream/#  - 匹配全部原始数据的主题
ny-power/+/+/Hydro - 匹配全部两个层级之后为 Hydro 类型的主题
```
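下面是一个使用通配符订阅的最小 Python 示意(基于 paho-mqtt 客户端写的草图,端口等细节是假设值):
```
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("ny-power/upstream/#")  # 匹配全部原始数据的主题

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)  # 端口为假设值
client.loop_forever()
```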
利用[<ruby>美国能源情报署<rt>U.S. Energy Information Administration</rt></ruby>][7] 给出的 2016 年纽约各类燃料发电及排放情况,我们可以给出各类燃料的[平均排放率][8],单位为克/兆瓦时。
上述结果被封装到一个专用的微服务中。该微服务订阅 `ny-power/upstream/fuel-mix/+`,即数据泵中燃料组成情况的原始数据,接着完成计算并将结果(单位为克/千瓦时)发布到新的主题层次结构上:
```
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
```
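这个微服务大致可以像下面这样实现。这是一个简化的 Python 草图,不是项目的真实代码;其中的排放系数是演示用的占位数字,并非 EIA 的真实数据:
```
import json
import paho.mqtt.client as mqtt

# 占位用的排放系数(克/兆瓦时),真实数值请参考 EIA 的数据
CO2_G_PER_MWH = {"Wind": 0, "Nuclear": 0, "Hydro": 0, "Natural Gas": 500000}

mix = {}  # 各燃料最近一次的发电量(兆瓦)

def on_connect(client, userdata, flags, rc):
    client.subscribe("ny-power/upstream/fuel-mix/+")

def on_message(client, userdata, msg):
    fuel = msg.topic.rsplit("/", 1)[-1]
    mix[fuel] = json.loads(msg.payload)["value"]
    known = {f: mw for f, mw in mix.items() if f in CO2_G_PER_MWH}
    total_mw = sum(known.values())
    if total_mw:
        # 加权平均得到克/兆瓦时,再除以 1000 换算成克/千瓦时
        g_per_kwh = sum(CO2_G_PER_MWH[f] * mw for f, mw in known.items()) / total_mw / 1000
        client.publish("ny-power/computed/co2",
                       json.dumps({"units": "g / kWh", "value": round(g_per_kwh, 4)}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # 假设与数据泵共用同一个本地总线
client.loop_forever()
```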
```
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
```
只要我编写或调试 MQTT 应用,我总会在一个终端中运行 `mosquitto_sub`。
到目前为止,我们已经有了一个提供公开事件流的应用,可以用微服务或命令行工具访问它。但考虑到互联网仍占据主导地位,让用户能从浏览器直接获取事件流也很重要。
MQTT 的设计者已经考虑到了这一点。协议标准支持三种不同的传输协议:[TCP][10]、[UDP][11] 和 [WebSockets][12]。主流浏览器都支持 WebSockets可以维持持久连接用于实时应用。
Eclipse 项目提供了 MQTT 的一个 JavaScript 实现,叫做 [Paho][13],可包含在你的应用中。工作模式为与服务器建立连接、建立一些订阅,然后根据接收到的消息进行响应。
```
function onMessageArrived(message) {
    ...
    Plotly.newPlot('co2_graph', plot, layout);
}
```
上述应用订阅了不少主题,因为我们将要呈现若干种不同类型的数据;其中 `ny-power/computed/co2` 主题为我们提供当前二氧化碳排放的参考值。一旦收到该主题的新消息,网站上的相应数值就会随之更新。
![NYISO 二氧化碳排放图][15]
*[ny-power.org][4] 网站提供的 NYISO 二氧化碳排放图。*
`ny-power/archive/co2/24h` 主题提供了时间序列数据,用于为 [Plotly][16] 线表提供数据。`ny-power/upstream/fuel-mix` 主题提供当前燃料组成情况,为漂亮的柱状图提供数据。
![NYISO 燃料组成情况][18]
*[ny-power.org][4] 网站提供的燃料组成情况。*
这是一个动态网站,数据不从服务器拉取,而是结合 MQTT 消息总线,监听对外开放的 WebSocket。就像数据泵和打包器程序那样网站页面也是一个发布订阅客户端只不过是在你的浏览器中执行而不是在公有云的微服务上。
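顺带一提,如果想在浏览器之外试验同样的 WebSockets 连接paho-mqtt 的 Python 客户端也支持这种传输方式。下面是一个示意草图(端口为假设值):
```
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("ny-power/computed/co2")

def on_message(client, userdata, msg):
    print(msg.payload.decode())

client = mqtt.Client(transport="websockets")  # 走 WebSockets 而非原始 TCP
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 80)  # WebSockets 端口为假设值
client.loop_forever()
```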
via: https://opensource.com/article/18/6/mqtt
作者:[Sean Dague][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


netdev 第一天IPsec
======
嗨!和去年一样,今年我又参加了 [netdev 会议][3]。([这里][14]是我上一年所做的笔记)。
在今天的会议中,我学到了很多有关 IPsec 的知识,所以下面我将介绍它们!其中 Sowmini Varadhan 和 [Paul Wouters][5] 做了一场关于 IPsec 的专题研讨会。本文中的错误 100% 都是我的错 :)。
### 什么是 IPsec
IPsec 是一个用来加密 IP 包的协议。某些 VPN 就是通过 IPsec 来实现的。直到今天我才真正意识到 VPN 使用了不只一种协议,原来我以为 VPN 只是一个通用术语,指的是“你的数据包将被加密,然后通过另一台服务器去发送”。VPN 可以使用一系列不同的协议OpenVPN、PPTP、SSTP、IPsec 等)以不同的方式来实现。
为什么 IPsec 和其他的 VPN 协议如此不同呢?(或者说,为什么在本次 netdev 会议会有 IPsec 的教程,而不是其他的协议呢?)我的理解是有 2 点使得它如此不同:
* 它是一个 IETF 标准,例如可以在文档 [RFC 6071][1] 等中查到(你知道 IETF 是制定 RFC 标准的组织吗?我也是直到今天才知道的!)。
* 它在 Linux 内核中被实现了(所以这才是为什么本次 netdev 会议中有关于它的教程,因为 netdev 是一个跟 Linux 内核网络有关的会议 :))。
### IPsec 是如何工作的?
假如说你的笔记本正使用 IPsec 来加密数据包并通过另一台设备来发送它们,那这是怎么工作的呢?对于 IPsec 来说,它有 2 个部分:一个是用户空间部分,另一个是内核空间部分。
IPsec 的用户空间部分负责**密钥的交换**,使用名为 [IKE][6]<ruby>网络密钥交换<rt>internet key exchange</rt></ruby>)的协议。总的来说,当你打开一个 VPN 连接的时候,你需要与 VPN 服务器通信,并且和它协商使用一个密钥来进行加密。
IPsec 的内核部分负责数据包的实际加密工作 —— 一旦使用 IKE 生成了一个密钥IPsec 的用户空间部分便会告诉内核使用哪个密钥来进行加密。然后内核便会使用该密钥来加密数据包!
### 安全策略以及安全关联
LCTT 译注security association 翻译为“安全关联”,参考自 https://zh.wikipedia.org/wiki/%E5%AE%89%E5%85%A8%E9%97%9C%E8%81%AF
IPsec 的内核部分有两个数据库:**安全策略数据库**SPD**安全关联数据库**SAD。
而在我眼中,安全关联数据库存放有用于各种不同 IP 的加密密钥。
查看这些数据库的方式却是非常不直观的,需要使用一个名为 `ip xfrm` 的命令,至于 `xfrm` 是什么意思呢?我也不知道!
LCTT 译注:我在 https://www.allacronyms.com/XFMR/Transformer 上查到 xfmr 是 Transformer 的简写,又根据 [man7](http://man7.org/linux/man-pages/man8/ip-xfrm.8.html) 上的简介,我认为这个说法可信。)
```
# security policy database
$ sudo ip xfrm policy
$ sudo ip x p
# security association database
$ sudo ip xfrm state
$ sudo ip x s
```
### 为什么 IPsec 被实现在 Linux 内核中而 TLS 没有?
IPsec 更容易在内核实现的原因是:使用 IPsec你可以更低频率地做密钥交换。
而对于 TLS 来说,则存在一些问题:
a. 每当你打开一个 TLS 连接时,都要做一次新的密钥交换,并且 TLS 连接的存活时间较短。
b. 当你需要开始做加密时,使用 IPsec 没有一个自然的协议边界,你只需要加密给定 IP 范围内的每个 IP 包即可,但如果使用 TLS你需要查看 TCP 流,辨别 TCP 包是否是一个数据包,然后决定是否加密它。
实际上存在一个补丁用于[在 Linux 内核中实现 TLS][7],它让用户空间做密钥交换,然后将密钥传给内核,所以很明显,使用 TLS 不是不可能的,但它是一个新事物,并且我认为相比使用 IPsec使用 TLS 更加复杂。
### 使用什么软件来实现 IPsec 呢?
有些让人迷糊的是,尽管 Libreswan 和 Strongswan 是不同的程序包,但它们都会安装一个名为 `ipsec` 的二进制文件来管理 IPsec 连接,并且这两个 `ipsec` 二进制文件并不是相同的程序(尽管它们担任同样的角色)。
在上面的“IPsec 如何工作”部分,我已经描述了 Strongswan 和 Libreswan 做了什么 —— 使用 IKE 做密钥交换,并告诉内核有关如何使用密钥来做加密。
### VPN 不是只能使用 IPsec 来实现!
在本文的开头我说过“IPsec 是一个 VPN 协议”,这是对的,但你并不是必须使用 IPsec 来实现 VPN。实际上有两种方式来使用 IPsec
1. “传输模式”,其中 IP 表头没有改变,只有 IP 数据包的内容被加密。这种模式有点类似于使用 TLS —— 你直接告诉服务器你正在通信(而不是通过一个 VPN 服务器或其他设备),只有 IP 包里的内容被加密。
2. “隧道模式”,其中 IP 表头和它的内容都被加密了,并且被封装进另一个 UDP 包内。这个模式被 VPN 所使用 —— 你获取你正传送给一个秘密网站的包,加密它并送给你的 VPN 服务器,然后 VPN 服务器再替你发送出去。
### opportunistic IPsec
今天我学到了 IPsec “传输模式”的一个有趣应用,它叫做 “opportunistic IPsec”通过它你可以开启一个 IPsec 连接来直接和你要通信的主机连接,而不是通过其他的中介服务器),现在已经有一个 opportunistic IPsec 服务器了,它位于 [http://oe.libreswan.org/][8]。
我认为当你在电脑上设定好 `libreswan` 和 `unbound` DNS 程序后,连接到 [http://oe.libreswan.org][8] 时,主要发生了如下几件事:
1. `unbound` 做一次 DNS 查询来获取 `oe.libreswan.org` (`dig ipseckey oe.libreswan.org`) 的 IPSECKEY 记录,以便获取到公钥来用于该网站(这需要 DNSSEC 是安全的,并且当我获得足够多这方面的知识后,我将用另一篇文章来说明它。假如你想看到相关的结果,并且如果你只是使用 dig 命令来运行此次 DNS 查询的话,它也可以工作)。
2. `unbound` 将公钥传给 `libreswan` 程序,然后 `libreswan` 使用它来和运行在 `oe.libreswan.org` 网站上的 IKE 服务器做一次密钥交换。
3. `libreswan`  完成了密钥交换,将加密密钥传给内核并告诉内核当和 `oe.libreswan.org` 做通信时使用该密钥。
4. 你的连接现在被加密了!即便它是 HTTP 连接!有趣吧!
### IPsec 和 TLS 相互借鉴
在今天的教程中听到的一个有趣花絮是IPsec 和 TLS 协议实际上总是在从对方身上学习 —— 正如他们所说,在 TLS 出现之前IPsec 的 IKE 协议就有着完美的前向保密,而 IPsec 也从 TLS 那里学了很多。很高兴能听到不同的网络协议之间是如何互相学习并与时俱进的!
### IPsec 是有趣的!
via: https://jvns.ca/blog/2018/07/11/netdev-day-1--ipsec/
作者:[Julia Evans][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[6]:https://en.wikipedia.org/wiki/Internet_Key_Exchange
[7]:https://blog.filippo.io/playing-with-kernel-tls-in-linux-4-13-and-go/
[8]:http://oe.libreswan.org/
[14]:https://jvns.ca/categories/netdev/


使用 EduBlocks 轻松学习 Python 编程
======
> EduBlocks 提供了 Scratch 式的图形界面来编写 Python 3 代码。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire)
如果你正在寻找一种方法,将你的学生(或你自己)从使用 [Scratch][1] 编程转移到学习 [Python][2],我建议你了解一下 [EduBlocks][3]。它为 Python 3 编程带来了熟悉的拖放式图形用户界面GUI。
从 Scratch 过渡到 Python 的一个障碍是缺少拖放式 GUI而正是这种拖放式 GUI 使得 Scratch 成为 K-12 学校的首选程序。EduBlocks 的拖放版的 Python 3 改变了这种范式。它的目的是“帮助教师在较早的时候向儿童介绍基于文本的编程语言,如 Python。”
EduBlocks 的硬件要求非常适中 —— 一个树莓派和一条互联网连接 —— 应该可以在许多教室中使用。
EduBlocks 是由来自英国的 14 岁 Python 开发人员 Joshua Lowe 开发的。我看到 Joshua 在 2018 年 5 月的 [PyCon 2018][4] 上展示了他的项目。
安装 EduBlocks 很容易。该网站提供了清晰的安装说明,你可以在项目的 [GitHub][5] 仓库中找到详细的截图。
使用以下命令在树莓派的命令行中安装 EduBlocks
```
curl -sSL get.edublocks.org | bash
```
### 在 EduBlocks 中编程
### 学习和支持
该项目维护了一个[学习门户网站][11],其中包含教程和其他资源,可以轻松地 [hack][12] 树莓派版本的 Minecraft编写 GPIOZero 和 Sonic Pi并使用 Micro:bit 代码编辑器控制 LED。可以通过 Twitter [@edu_blocks][13] 和 [@all_about_code][14] 以及[电子邮件][15]获得对 EduBlocks 的支持。
为了更深入的了解,你可以在 [GitHub][16] 上访问 EduBlocks 的源代码。该程序在 GNU Affero Public License v3.0 下[许可][17]。EduBlocks 的创建者(项目负责人 [Joshua Lowe][18] 和开发人员 [Chris Dell][19] 和 [Les Pounder][20])希望它成为一个社区项目,并邀请人们提出问题,提供反馈,以及提交 pull request 以向项目添加功能或修复。
via: https://opensource.com/article/18/8/edublocks
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


Tips for Success with Open Source Certification
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/desktop_1.jpg?itok=Nf2yTUar)
In today's technology arena, open source is pervasive. The [2018 Open Source Jobs Report][1] found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications. And yet, 87 percent of hiring managers also cite difficulty in finding the right open source skills and expertise. This article is the second in a weekly series on the growing importance of open source certification.
In the [first article][2], we focused on why certification matters now more than ever. Here, we'll focus on the kinds of certifications that are making a difference, and what is involved in completing necessary training and passing the performance-based exams that lead to certification, with tips from Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation.
### Performance-based exams
So, what are the details on getting certified and what are the differences between major types of certification? Most types of open source credentials and certification that you can obtain are performance-based. In many cases, trainees are required to demonstrate their skills directly from the command line.
“You're going to be asked to do something live on the system, and then at the end, we're going to evaluate that system to see if you were successful in accomplishing the task,” said Seepersad. This approach obviously differs from multiple choice exams and other tests where candidate answers are put in front of you. Often, certification programs involve online self-paced courses, so you can learn at your own speed, but the exams can be tough and require demonstration of expertise. That's part of why the certifications that they lead to are valuable.
### Certification options
Many people are familiar with the certifications offered by The Linux Foundation, including the [Linux Foundation Certified System Administrator][3] (LFCS) and [Linux Foundation Certified Engineer][4] (LFCE) certifications. The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.
“Note that there are no prerequisites,” Seepersad said. “What that means is that if you're an experienced Linux engineer, and you think the LFCE, the certified engineer credential, is the right one for you…, you're allowed to do what we call challenge the exams. If you think you're ready for the LFCE, you can sign up for the LFCE without having to have gone through and taken and passed the LFCS.”
Seepersad noted that the LFCS credential is great for people starting their careers, and the LFCE credential is valuable for many people who have experience with Linux such as volunteer experience, and now want to demonstrate the breadth and depth of their skills for employers. He also said that the LFCS and LFCE coursework prepares trainees to work with various Linux distributions. Other certification options, such as the [Kubernetes Fundamentals][5] and [Essentials of OpenStack Administration][6] courses and exams, have also made a difference for many people, as cloud adoption has increased around the world.
Seepersad added that certification can make a difference if you are seeking a promotion. “Being able to show that you're over the bar in terms of certification at the engineer level can be a great way to get yourself into the consideration set for that next promotion,” he said.
### Tips for Success
In terms of practical advice for taking an exam, Seepersad offered a number of tips:
* Set the date, and don't procrastinate.
* Look through the online exam descriptions and get any training needed to be able to show fluency with the required skill sets.
* Practice on a live Linux system. This can involve downloading a free terminal emulator or other software and actually performing tasks that you will be tested on.
Seepersad also noted some common mistakes that people make when taking their exams. These include spending too long on a small set of questions, wasting too much time looking through documentation and reference tools, and applying changes without testing them in the work environment.
With open source certification playing an increasingly important role in securing a rewarding career, stay tuned for more certification details in this article series, including how to prepare for certification.
[Learn more about Linux training and certification.][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/sysadmin-cert/2018/7/tips-success-open-source-certification
作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
[2]:https://www.linux.com/blog/sysadmin-cert/2018/7/5-reasons-open-source-certification-matters-more-ever
[3]:https://training.linuxfoundation.org/certification/lfcs
[4]:https://training.linuxfoundation.org/certification/lfce
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/openstack-administration-fundamentals
[7]:https://training.linuxfoundation.org/certification


3 cool productivity apps for Fedora 28
======
![](https://fedoramagazine.org/wp-content/uploads/2018/07/3-productivity-apps-2018-816x345.jpg)
Productivity apps are especially popular on mobile devices. But when you sit down to do work, you're often at a laptop or desktop computer. Let's say you use a Fedora system for your platform. Can you find apps that help you get your work done? Of course! Read on for tips on apps to help you focus on your goals.
All these apps are available for free on your Fedora system. And they also respect your freedom. (Many also let you use existing services where you may have an account.)
### FocusWriter
FocusWriter is simply a full screen word processor. The app makes you more productive because it covers everything else on your screen. When you use FocusWriter, you have nothing between you and your text. With this app at work, you can focus on your thoughts with fewer distractions.
[![Screenshot of FocusWriter][1]][2]
FocusWriter lets you adjust fonts, colors, and theme to best suit your preferences. It also remembers your last document and location. This feature lets you jump right back into focusing on writing without delay.
To install FocusWriter, use the Software app in your Fedora Workstation. Or run this command in a terminal [using sudo][3]:
```
sudo dnf install focuswriter
```
### GNOME ToDo
This unique app is designed, as you can guess, for the GNOME desktop environment. Its a great fit for your Fedora Workstation for that reason. ToDo has a simple purpose: it lets you make lists of things you need to get done.
![](https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-08-59.png)
Using ToDo, you can prioritize and schedule deadlines for all your tasks. You can also build as many task lists as you want. ToDo has numerous extensions for useful functions to boost your productivity. These include GNOME Shell notifications, and list management with a todo.txt file. ToDo can even interface with a Todoist or Google account if you use one. It synchronizes tasks so you can share across your devices.
To install, search for ToDo in Software, or at the command line run:
```
sudo dnf install gnome-todo
```
### Zanshin
If you are a KDE-using productivity fan, you may enjoy [Zanshin][4]. This organizer helps you plan your actions across multiple projects. It has a full-featured interface, and lets you browse across your various tasks to see what's most important to do next.
[![Screenshot of Zanshin on Fedora 28][5]][6]
Zanshin is extremely keyboard friendly, so you can be efficient during hacking sessions. It also integrates across numerous KDE applications as well as the Plasma Desktop. You can use it inline with KMail, KOrganizer, and KRunner.
To install, run this command:
```
sudo dnf install zanshin
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-cool-productivity-apps/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-10-18-1024x768.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-15-18-10-18.png
[3]:https://fedoramagazine.org/howto-use-sudo/
[4]:https://zanshin.kde.org/
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot_20180715_192216-1024x653.png
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot_20180715_192216.png


Confessions of a recovering Perl hacker
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
My name's MikeCamel, and I'm a Perl hacker.
There, I've said it. That's the first step.
My handle on IRC, Twitter and pretty much everywhere else in the world is "MikeCamel." This is because, back in the day, when there were no chat apps—no apps at all, in fact—I was in a technical "chatroom" and the name "Mike" had been taken. I looked around, and the first thing I noticed on my desk was the [Camel Book][1], the O'Reilly Perl Bible.
I have the second edition now, but this was the first edition. Yesterday, I happened to pick up the second edition, the really thick one, to show someone on a video conference call, and it had a thin layer of dust on it. I was a little bit ashamed, but a little bit relieved as well.
For years, I was a sysadmin. Just bits and pieces, from time to time. Nothing serious, you understand—mainly my systems, my friends' systems. Sometimes I'd admin systems owned by other people—even at work. I always had it under control, and I was always able to step away. There were whole weeks—well days—when I didn't administer a system at all. With the exception of remote systems, which felt different, somehow less serious.
What pushed it over the edge, on reflection, was the Perl. This was the '90s—the 1990s, just to be clear—when Perl was young, and free, and didn't even pretend to be object-oriented. We all know it still isn't, but those youngsters—they like to pretend, and we old lags, well, we play along.
The thing about Perl is that it just starts small, with a regexp here, a text-file line counter there. Nothing that couldn't have been managed quite easily in Bash or Sed or Awk. But once you've written a couple of scripts, you're in—there's no going back. Long-term Perl users remember how we started, and we see the newbs going the same way.
I taught myself Perl in order to collate static web pages from five disparate FoxPro databases. I did it by starting at the beginning of the Camel Book and reading as much of it as I could before my brain started to hurt, then picking up a few pages back and carrying on. And then writing some Perl, which always failed, mainly because of lack of semicolons to start with, and then because I didn't really understand much of what I was doing. But I kept with it until I wasn't just writing scripts to collate databases, but scripts to load data into a single database and using CGI to serve pages in real time. My wife knew, and some of my colleagues knew, but I don't think they fully understood how deep I was in.
You know that Perl has you when you start looking for admin tasks to automate with it. Tasks that don't need automating and that would be much, much faster if you performed them by hand. When you start scouring the web for three- or four-character commands that, when executed, alphabetise, spell-check, and decrypt three separate files in parallel and output them to STDERR, ROT13ed.
I was lucky: I escaped in time. I always insisted on commenting my Perl. I never got to the very end of the Camel Book. Not in one reading, anyway. I never experimented with the darker side-effects; three or four separate operations per line was always enough for me. Over time, as my responsibilities moved more to programming, I cut back on the sysadmin tasks. Of course, that didn't stop the Perl use completely—it's amazing how often you can find an excuse to automate a task and how often Perl is the answer. But it reduced my Perl to manageable levels, levels that didn't affect my day-to-day functioning.
I'd like to pretend that I've stopped, but you never really give up on Perl, and it never gives up on you. My Camel Book (2nd ed.) is still around, even if it's a little dusty. I always check that the core modules are installed on any systems I run. And about five months ago, I found that my 10-year-old daughter had some mathematics homework that was susceptible to brute-forcing. Just a few lines. A couple of loops. No more than that. Nothing that I didn't feel went out of scope.
I discovered after she handed in the results that it hadn't produced the correct results, but I didn't mind. It was tight, it was elegant, it was beautiful. It was Perl. My Perl.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/confessions-recovering-perl-hacker
作者:[Mike Bursell][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Programming_Perl


How To Find The Mounted Filesystem Type In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/07/filesystem-720x340.png)
As you may already know, Linux supports numerous filesystems, such as ext4, ext3, ext2, sysfs, securityfs, FAT16, FAT32, NTFS, and many more. The most commonly used filesystem is ext4. Ever wondered what type of filesystem you are currently using on your Linux system? No? Worry not! We've got your back. This guide explains how to find the mounted filesystem type in Unix-like operating systems.
### Find The Mounted Filesystem Type In Linux
There can be many ways to find the filesystem type in Linux. Here, I have given 8 different methods. Let us get started, shall we?
#### Method 1 Using findmnt command
This is the most commonly used method to find out the type of a filesystem. The **findmnt** command will list all mounted filesystems or search for a filesystem. The findmnt command can search in **/etc/fstab**, **/etc/mtab** or **/proc/self/mountinfo**.
The findmnt command comes pre-installed in most Linux distributions, because it is part of the package named **util-linux**. Just in case it is not available, simply install this package and you're good to go. For instance, you can install the **util-linux** package on Debian-based systems using this command:
```
$ sudo apt install util-linux
```
Let us go ahead and see how to use findmnt command to find out the mounted filesystems.
If you run it without any arguments/options, it will list all mounted filesystems in a tree-like format as shown below.
```
$ findmnt
```
**Sample output:**
![][2]
As you can see, the findmnt command displays the target mount point (TARGET), source device (SOURCE), file system type (FSTYPE), and relevant mount options (OPTIONS), like whether the filesystem is read/write or read-only. In my case, my root (/) filesystem type is ext4.
If you don't want to display the output in tree-like format, use the **-l** flag to display it in a simple, plain format.
```
$ findmnt -l
```
![][3]
You can also list a particular type of filesystem, for example **ext4**, using the **-t** option.
```
$ findmnt -t ext4
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2 ext4 rw,relatime,commit=360
└─/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered
```
findmnt can produce df-style output as well.
```
$ findmnt --df
```
Or
```
$ findmnt -D
```
Sample output:
```
SOURCE FSTYPE SIZE USED AVAIL USE% TARGET
dev devtmpfs 3.9G 0 3.9G 0% /dev
run tmpfs 3.9G 1.1M 3.9G 0% /run
/dev/sda2 ext4 456.3G 342.5G 90.6G 75% /
tmpfs tmpfs 3.9G 32.2M 3.8G 1% /dev/shm
tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
bpf bpf 0 0 0 - /sys/fs/bpf
tmpfs tmpfs 3.9G 8.4M 3.9G 0% /tmp
/dev/loop0 squashfs 82.1M 82.1M 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 ext4 92.8M 55.7M 30.1M 60% /boot
tmpfs tmpfs 788.8M 32K 788.8M 0% /run/user/1000
gvfsd-fuse fuse.gvfsd-fuse 0 0 0 - /run/user/1000/gvfs
```
You can also display filesystems for a specific device or mountpoint.
Search for a device:
```
$ findmnt /dev/sda1
TARGET SOURCE FSTYPE OPTIONS
/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered
```
Search for a mountpoint:
```
$ findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2 ext4 rw,relatime,commit=360
```
You can even find filesystems with a specific label:
```
$ findmnt LABEL=Storage
```
For more details, refer to the man pages.
```
$ man findmnt
```
The findmnt command alone is enough to find the type of a mounted filesystem in Linux; it was created for that specific purpose. However, there are also a few other ways to find out the filesystem type. If you're interested, read on.
#### Method 2 Using blkid command
The **blkid** command is used to locate and print block device attributes. It is also part of the util-linux package, so you don't need to install it.
To find out the type of a filesystem using blkid command, run:
```
$ blkid /dev/sda1
```
#### Method 3 Using df command
The **df** command is used to report filesystem disk space usage in Unix-like operating systems. To find the type of all mounted filesystems, simply run:
```
$ df -T
```
**Sample output:**
![][4]
For more details about the df command, refer to the following guide.
Also, check the man pages.
```
$ man df
```
#### Method 4 Using file command
The **file** command determines the type of a specified file. It works just fine for files with no file extension.
Run the following command to find the filesystem type of a partition:
```
$ sudo file -sL /dev/sda1
[sudo] password for sk:
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=83a1dbbf-1e15-4b45-94fe-134d3872af96 (needs journal recovery) (extents) (large files) (huge files)
```
Check man pages for more details:
```
$ man file
```
#### Method 5 Using fsck command
The **fsck** command is used to check the integrity of a filesystem or repair it. You can find the type of a filesystem by passing the partition as an argument like below.
```
$ fsck -N /dev/sda1
fsck from util-linux 2.32
[/usr/bin/fsck.ext4 (1) -- /boot] fsck.ext4 /dev/sda1
```
For more details, refer to the man pages.
```
$ man fsck
```
#### Method 6 Using the fstab file
**fstab** is a file that contains static information about the filesystems. This file usually contains the mount point, filesystem type and mount options.
To view the type of a filesystem, simply run:
```
$ cat /etc/fstab
```
![][5]
For more details, refer to the man pages.
```
$ man fstab
```
#### Method 7 Using lsblk command
The **lsblk** command displays information about block devices.
To display info about mounted filesystems, simply run:
```
$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
loop0 squashfs /var/lib/snapd/snap/core/4327
sda
├─sda1 ext4 83a1dbbf-1e15-4b45-94fe-134d3872af96 /boot
├─sda2 ext4 4d25ddb0-5b20-40b4-ae35-ef96376d6594 /
└─sda3 swap 1f8f5e2e-7c17-4f35-97e6-8bce7a4849cb [SWAP]
sr0
```
For more details, refer to the man pages.
```
$ man lsblk
```
#### Method 8 Using mount command
The **mount** command is used to mount local or remote filesystems in Unix-like systems.
To find out the type of a filesystem using mount command, do:
```
$ mount | grep "^/dev"
/dev/sda2 on / type ext4 (rw,relatime,commit=360)
/dev/sda1 on /boot type ext4 (rw,relatime,commit=360,data=ordered)
```
For more details, refer to the man pages.
```
$ man mount
```
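As a small bonus, if you prefer to do this programmatically rather than with a command, here is a rough Python sketch (my own illustration, not one of the standard tools) that reads /proc/self/mounts, where Linux exposes the mount table:
```
#!/usr/bin/env python3
# Sketch: map mount points to filesystem types via /proc/self/mounts
with open("/proc/self/mounts") as mounts:
    for line in mounts:
        source, target, fstype = line.split()[:3]
        if source.startswith("/dev"):
            print(f"{target} -> {fstype} ({source})")
```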
And, that's all for now, folks. You now know 8 different Linux commands to find out the type of a mounted filesystem. If you know any other methods, feel free to let me know in the comment section below. I will check and update this guide accordingly.
More good stuff to come. Stay tuned!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-find-the-mounted-filesystem-type-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/07/df.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/fstab.png


@ -0,0 +1,110 @@
Users, Groups and Other Linux Beasts: Part 2
======
![](https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2)
In this ongoing tour of Linux, we've looked at [how to manipulate folders/directories][1], and now we're continuing our discussion of _permissions_, _users_ and _groups_, which are necessary to establish who can manipulate which files and directories. [Last time,][2] we showed how to create new users, and now we're going to dive right back in:
You can create new groups and then add users to them at will with the `groupadd` command. For example, using:
```
sudo groupadd photos
```
will create the _photos_ group.
You'll need to [create a directory][1] hanging off the root directory:
```
sudo mkdir /photos
```
If you run `ls -l /`, one of the lines will be:
```
drwxr-xr-x 1 root root 0 jun 26 21:14 photos
```
The first _root_ in the output is the user owner and the second _root_ is the group owner.
To transfer the ownership of the _/photos_ directory to the _photos_ group, use:
```
chgrp photos /photos
```
The `chgrp` command typically takes two parameters: the first parameter is the group that will take ownership of the file or directory, and the second is the file or directory you want to give over to the group.
Next, run `ls -l /` and you'll see the line has changed to:
```
drwxr-xr-x 1 root photos 0 jun 26 21:14 photos
```
You have successfully transferred the ownership of your new directory over to the _photos_ group.
Then, add your own user and the _guest_ user to the _photos_ group:
```
sudo usermod <your username here> -a -G photos
sudo usermod guest -a -G photos
```
You may have to log out and log back in to see the changes, but, when you do, running `groups` will show _photos_ as one of the groups you belong to.
A couple of things to point out about the `usermod` command shown above. First: Be careful not to use the `-g` option instead of `-G`. The `-g` option changes your primary group and could lock you out of your stuff if you use it by accident. `-G`, on the other hand, _adds_ you to the groups listed and doesn't mess with the primary group. If you want to add your user to more groups than one, list them one after another, separated by commas, no spaces, after `-G`:
```
sudo usermod <your username> -a -G photos,pizza,spaceforce
```
Second: Be careful not to forget the `-a` parameter. The `-a` parameter stands for _append_ and attaches the list of groups you pass to `-G` to the ones you already belong to. This means that, if you don't include `-a`, the list of groups you already belong to, will be overwritten, again locking you out from stuff you need.
Neither of these are catastrophic problems, but it will mean you will have to add your user back manually to all the groups you belonged to, which can be a pain, especially if you have lost access to the _sudo_ and _wheel_ group.
### Permits, Please!
There is still one more thing to do before you can copy images to the _/photos_ directory. Notice how, when you did `ls -l /` above, permissions for that folder came back as _drwxr-xr-x_.
If you read [the article I recommended at the beginning of this post][3], you'll know that the first _d_ indicates that the entry in the file system is a directory, and then you have three sets of three characters ( _rwx_ , _r-x_ , _r-x_ ) that indicate the permissions for the user owner ( _rwx_ ) of the directory, then the group owner ( _r-x_ ), and finally the rest of the users ( _r-x_ ). This means that the only person who has write permissions so far, that is, the only person who can copy or create files in the _/photos_ directory, is the _root_ user.
But [that article I mentioned also tells you how to change the permissions for a directory or file][3]:
```
sudo chmod g+w /photos
```
Running `ls -l /` after that will give you _/photos_ permissions as _drwxrwxr-x_ which is what you want: group members can now write into the directory.
Now you can try and copy an image or, indeed, any other file to the directory and it should go through without a problem:
```
cp image.jpg /photos
```
The _guest_ user will also be able to read and write to the directory, and even move or delete files created by other users within the shared directory.
### Conclusion
The permissions and privileges system in Linux has been honed over decades, inherited as it is from the old Unix systems of yore. As such, it works very well and is well thought out. Becoming familiar with it is essential for any Linux sysadmin. In fact, you can't do much admining at all unless you understand it. But, it's not that hard.
Next time, we'll dive into files and see the different ways of creating, manipulating, and destroying them in creative ways. Always fun, that last one.
See you then!
Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
[2]:https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts
[3]:https://www.linux.com/learn/understanding-linux-file-permissions
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -0,0 +1,95 @@
Getting started with Etcher.io
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)
Bootable USB drives are a great way to try out a new Linux distribution to see if you like it before you install. While some Linux distributions, like [Fedora][1], make it easy to create bootable media, most others provide the ISOs or image files and leave the media creation decisions up to the user. There's always the option to use `dd` to create media on the command line—but let's face it, even for the most experienced user, that's still a pain. There are other utilities—like UnetBootIn, Disk Utility on MacOS, and Win32DiskImager on Windows—that create bootable USBs.
### Installing Etcher
About 18 months ago, I came upon [Etcher.io][2], a great open source project that allows easy and foolproof media creation on Linux, Windows, or MacOS. Etcher.io has become my "go-to" application for creating bootable media for Linux. I can easily download ISO or IMG files and burn them to flash drives and SD cards. It's an open source project licensed under [Apache 2.0][3], and the [source code][4] is available on GitHub.
Go to the [Etcher.io][5] website and click on the download link for your operating system—32- or 64-bit Linux, 32- or 64-bit Windows, or MacOS.
![](https://opensource.com/sites/default/files/uploads/etcher_1.png)
Etcher provides great instructions in its GitHub repository for adding Etcher to your collection of Linux utilities.
If you are on Debian or Ubuntu, add the Etcher Debian repository:
```
$ echo "deb https://dl.bintray.com/resin-io/debian stable etcher" | sudo tee /etc/apt/sources.list.d/etcher.list

# Trust the Bintray.com GPG key
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
```
Then update your system and install:
```
$ sudo apt-get update
$ sudo apt-get install etcher-electron
```
If you are using Fedora or Red Hat Enterprise Linux, add the Etcher RPM repository:
```
$ sudo wget https://bintray.com/resin-io/redhat/rpm -O /etc/yum.repos.d/bintray-resin-io-redhat.repo
```
Update and install using either:
```
$ sudo yum install -y etcher-electron
```
or:
```
$ sudo dnf install -y etcher-electron
```
### Creating bootable drives
In addition to creating bootable images for Ubuntu, EndlessOS, and other flavors of Linux, I have used Etcher to [create SD card images][6] for the Raspberry Pi. Here's how to create bootable media.
First, download to your computer the ISO or image you want to use. Then, launch Etcher and insert your USB or SD card into the computer.
![](https://opensource.com/sites/default/files/uploads/etcher_2.png)
Click on **Select Image**. In this example, I want to create a bootable USB drive to install Ubermix on a new computer. Once I have selected my Ubermix image file and inserted my USB drive into the computer, Etcher.io "sees" the drive, and I can begin the process of installing Ubermix on my USB.
![](https://opensource.com/sites/default/files/uploads/etcher_3.png)
Once I click on **Flash**, the installation process begins. The time required depends on the image's size. After the image is installed on the drive, the software verifies the installation; at the end, a banner announces my media creation is complete.
If you need [help with Etcher][7], contact the community through its [Discourse][8] forum. Etcher is very easy to use, and it has replaced all my other media creation tools because none of them do the job as easily or as well as Etcher.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/getting-started-etcherio
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/don-watkins
[1]:https://getfedora.org/en_GB/workstation/download/
[2]:http://etcher.io
[3]:https://github.com/resin-io/etcher/blob/master/LICENSE
[4]:https://github.com/resin-io/etcher
[5]:https://etcher.io/
[6]:https://www.raspberrypi.org/magpi/pi-sd-etcher/
[7]:https://github.com/resin-io/etcher/blob/master/SUPPORT.md
[8]:https://forums.resin.io/c/etcher


translating----geekpi
UNIX curiosities
======
Recently I've been doing more UNIXy things in various tools I'm writing, and I hit two interesting issues. Neither of these are "bugs", but behaviors that I wasn't expecting.
### Thread-safe printf
I have a C application that reads some images from disk, does some processing, and writes output about these images to STDOUT. Pseudocode:
```
for(imagefilename in images)
{
results = process(imagefilename);
printf(results);
}
```
The processing is independent for each image, so naturally I want to distribute this processing between various CPUs to speed things up. I usually use `fork()`, so I wrote this:
```
for(child in children)
{
pipe = create_pipe();
worker(pipe);
}
// main parent process
for(imagefilename in images)
{
write(pipe[i_image % N_children], imagefilename)
}
worker()
{
while(1)
{
imagefilename = read(pipe);
results = process(imagefilename);
printf(results);
}
}
```
This is the normal thing: I make pipes for IPC, and send the child workers image filenames through these pipes. Each worker _could_ write its results back to the main process via another set of pipes, but that's a pain, so here each worker writes to the shared STDOUT directly. This works OK, but as one would expect, the writes to STDOUT clash, so the results for the various images end up interspersed. That's bad. I didn't feel like setting up my own locks, but fortunately GNU libc provides facilities for that: [`flockfile()`][1]. I put those in, and … it didn't work! Why? Because whatever `flockfile()` does internally ends up restricted to a single subprocess because of `fork()`'s copy-on-write behavior. I.e. the extra safety provided by `fork()` (compared to threads) actually ends up breaking the locks.
I haven't tried using other locking mechanisms (like pthread mutexes for instance), but I can imagine they'll have similar problems. And I want to keep things simple, so sending the output back to the parent for output is out of the question: this creates more work for both me the programmer, and for the computer running the program.
The solution: use threads instead of forks. This has a nice side effect of making the pipes redundant. Final pseudocode:
```
for(children)
{
pthread_create(worker, child_index);
}
for(children)
{
pthread_join(child);
}
worker(child_index)
{
for(i_image = child_index; i_image < N_images; i_image += N_children)
{
results = process(images[i_image]);
flockfile(stdout);
printf(results);
funlockfile(stdout);
}
}
```
Much simpler, and actually works as desired. I guess sometimes threads are better.
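For comparison, here is a rough Python analogue of that final design, with threads and an explicit lock around the shared output; the filenames and the process() body are invented for illustration:
```
import threading

N_children = 4
images = [f"img{i}.jpg" for i in range(20)]     # stand-in filenames
stdout_lock = threading.Lock()

def process(name):
    return f"results for {name}"                # placeholder for the real work

def worker(child_index):
    for i in range(child_index, len(images), N_children):
        results = process(images[i])
        with stdout_lock:                       # serialize writes, like flockfile(stdout)
            print(results)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_children)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```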
### Passing a partly-read file to a child process
For various [vnlog][2] tools I needed to implement this sequence:
1. process opens a file with O_CLOEXEC turned off
2. process reads a part of this file (up-to the end of the legend in the case of vnlog)
3. process calls exec to invoke another program to process the rest of the already-opened file
The second program may require a file name on the commandline instead of an already-opened file descriptor because this second program may be calling open() by itself. If I pass it the filename, this new program will re-open the file, and then start reading the file from the beginning, not from the location where the original program left off. It is important for my application that this does not happen, so passing the filename to the second program does not work.
So I really need to pass the already-open file descriptor somehow. I'm using Linux (other OSs maybe behave differently here), so I can in theory do this by passing /dev/fd/N instead of the filename. But it turns out this does not work either. On Linux (again, maybe this is Linux-specific somehow) for normal files /dev/fd/N is a symlink to the original file. So this ends up doing exactly the same thing that passing the filename does.
But there's a workaround! If we're reading a pipe instead of a file, then there's nothing to symlink to, and /dev/fd/N ends up passing the original pipe down to the second process, and things then work correctly. And I can fake this by changing the open("filename") above to something like popen("cat filename"). Yuck! Is this really the best we can do? What does this look like on one of the BSDs, say?
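If you want to see the regular-file behavior for yourself, a few lines of Python (my illustration, not from the original post) demonstrate it: read part of a file, hand /dev/fd/N to cat, and watch cat start over from the beginning, because /dev/fd/N is a symlink to the original file.
```
import subprocess

with open("/etc/passwd") as f:      # any multi-line regular file will do
    f.readline()                    # the parent consumes the first line
    fd = f.fileno()
    # cat re-opens /dev/fd/N, which for a regular file starts at offset 0,
    # so it prints the whole file, not just the unread part
    subprocess.run(["cat", f"/dev/fd/{fd}"], pass_fds=(fd,))
```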
--------------------------------------------------------------------------------
via: http://notes.secretsauce.net/notes/2018/08/03_unix-curiosities.html
作者:[Dima Kogan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://notes.secretsauce.net/
[1]:https://www.gnu.org/software/libc/manual/html_node/Streams-and-Threads.html
[2]:http://www.github.com/dkogan/vnlog


translated by hopefully2333
5 open source role-playing games for Linux
======


translating---geekpi
Convert file systems with Fstransform
======


How to display data in a human-friendly way on Linux
======
![](https://images.idgesg.net/images/article/2018/08/smile-face-on-hand-100767756-large.jpg)
Not everyone thinks in binary or wants to mentally insert commas into large numbers to come to grips with the sizes of their files. So, it's not surprising that Linux commands have evolved over several decades to incorporate more human-friendly ways of displaying information to their users. In today's post, we look at some of the options provided by various commands that make digesting data just a little easier.
### Why not default to friendly?
If you're wondering why human-friendliness isn't the default — we humans are, after all, the default users of computers — you might be asking yourself, "Why do we have to go out of our way to get command responses that will make sense to everyone?" The answer is primarily that changing the default output of commands would likely interfere with numerous other processes that were built to expect the default responses. Other tools, as well as scripts, that have been developed over the decades might break in some very ugly ways if they were suddenly fed output in a very different format than what they were built to expect.
It's probably also true that some of us might prefer to see all of the digits in our file sizes — 1338277310 instead of 1.3G. In any case, switching defaults could be very disruptive, while promoting some easy options for more human-friendly responses only involves us learning some command options.
### Commands for displaying human-friendly data
What are some of the easy options for making the output of Unix commands a little easier to parse? Let's take a look, command by command.
#### top
You may not have noticed this, but you can change the display of overall memory usage in top by typing " **E** " (i.e., capital E) once top is running. Successive presses will change the numeric display from KiB to MiB to GiB to TiB to PiB to EiB and back to KiB.
OK with those units? These and a couple more are defined here:
```
2**10 = 1,024 = 1 KiB (kibibyte)
2**20 = 1,048,576 = 1 MiB (mebibyte)
2**30 = 1,073,741,824 = 1 GiB (gibibyte)
2**40 = 1,099,511,627,776 = 1 TiB (tebibyte)
2**50 = 1,125,899,906,842,624 = 1 PiB (pebibyte)
2**60 = 1,152,921,504,606,846,976 = EiB (exbibyte)
2**70 = 1,180,591,620,717,411,303,424 = 1 ZiB (zebibyte)
2**80 = 1,208,925,819,614,629,174,706,176 = 1 YiB (yobibyte)
```
These units are closely related to kilobytes, megabytes, and gigabytes, etc. But, while close, there's still a significant difference between them. One set is based on powers of 10 and the other powers of 2. Comparing kilobytes and kibibytes, for example, we can see how they diverge:
```
KB = 1000 = 10**3
KiB = 1024 = 2**10
```
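If you ever need to produce these friendlier numbers yourself, the conversion is simple. Here is a small Python sketch (my own, not taken from any of these tools) that formats a byte count using the binary units top cycles through:
```
def human(nbytes):
    """Format a byte count using power-of-2 (binary) units."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if nbytes < 1024:
            return f"{nbytes:.1f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.1f} ZiB"

print(human(1338277310))   # -> 1.2 GiB
```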
Here's an example of top output using the default display in KiB:
```
top - 10:49:06 up 5 days, 35 min, 1 user, load average: 0.05, 0.04, 0.01
Tasks: 158 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.2 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102680 total, 4634980 free, 392244 used, 1075456 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 5407432 avail Mem
```
After one press of an E, it changes to MiB:
```
top - 10:49:31 up 5 days, 36 min, 1 user, load average: 0.03, 0.04, 0.01
Tasks: 158 total, 2 running, 118 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.6 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 5959.648 total, 4526.348 free, 383.055 used, 1050.246 buff/cache
MiB Swap: 2047.996 total, 2047.996 free, 0.000 used. 5280.684 avail Mem
```
After a second E, we get GiB:
```
top - 10:49:49 up 5 days, 36 min, 1 user, load average: 0.02, 0.03, 0.01
Tasks: 158 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 5.820 total, 4.420 free, 0.374 used, 1.026 buff/cache
GiB Swap: 2.000 total, 2.000 free, 0.000 used. 5.157 avail Mem
```
You can also change the numbers displaying per-process memory usage by pressing the letter “ **e** ”. It will change from the default of KiB to MiB to GiB to TiB to PiB (expect to see LOTS of zeroes!) and back. Here's some top output after one press of an " **e** ":
```
top - 08:45:28 up 4 days, 22:32, 1 user, load average: 0.02, 0.03, 0.00
Tasks: 167 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.0 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102680 total, 4641836 free, 393348 used, 1067496 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 5406396 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
784 root 20 0 543.2m 26.8m 16.1m S 0.9 0.5 0:22.20 snapd
733 root 20 0 107.8m 2.0m 1.8m S 0.4 0.0 0:18.49 irqbalance
22574 shs 20 0 107.5m 5.5m 4.6m S 0.4 0.1 0:00.09 sshd
1 root 20 0 156.4m 9.3m 6.7m S 0.0 0.2 0:05.59 systemd
```
#### du
The du command, which shows how much disk space files or directories use, adjusts the sizes to the most appropriate measurement if the **-h** option is used. By default, it reports in kilobytes.
```
$ du camper*
360 camper_10.jpg
5684 camper.jpg
240 camper_small.jpg
$ du -h camper*
360K camper_10.jpg
5.6M camper.jpg
240K camper_small.jpg
```
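If you just want a single human-friendly total for a directory, combining **-h** with **-s** (summarize) keeps du from listing every subdirectory. The directory and size below are only an illustration:
```
$ du -sh Pictures
1.2G    Pictures
```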
#### df
The df command also offers a **-h** option. Note in the example below how sizes are reported in both gigabytes and megabytes.
```
$ df -h | grep -v loop
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 596M 1.7M 595M 1% /run
/dev/sda1 110G 9.0G 95G 9% /
tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
tmpfs 596M 16K 596M 1% /run/user/121
/dev/sdb2 457G 73M 434G 1% /apps
tmpfs 596M 0 596M 0% /run/user/1000
```
The command below uses the **-h** option, but also includes **-T** to display the type of file system we are looking at.
```
$ df -hT /mnt2
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb2 ext4 457G 73M 434G 1% /apps
```
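If you would rather see powers of 10 (the units disk vendors tend to quote), GNU df also provides **-H**, which reports sizes in KB, MB, and GB rather than KiB, MiB, and GiB. The numbers below are approximations of the earlier output:
```
$ df -H /apps
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       491G   77M  466G   1% /apps
```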
#### ls
Even ls gives us the option of displaying file sizes in the most reasonable units.
```
$ ls -l camper*
-rw-rw-r-- 1 shs shs 365091 Jul 14 19:42 camper_10.jpg
-rw-rw-r-- 1 shs shs 5818597 Jul 14 19:41 camper.jpg
-rw-rw-r-- 1 shs shs 241844 Jul 14 19:45 camper_small.jpg
$ ls -lh camper*
-rw-rw-r-- 1 shs shs 357K Jul 14 19:42 camper_10.jpg
-rw-rw-r-- 1 shs shs 5.6M Jul 14 19:41 camper.jpg
-rw-rw-r-- 1 shs shs 237K Jul 14 19:45 camper_small.jpg
```
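GNU ls offers a related option, **--si**, which works like **-h** but uses powers of 1,000 instead of 1,024, so the same file shows up slightly larger:
```
$ ls -l --si camper.jpg
-rw-rw-r-- 1 shs shs 5.9M Jul 14 19:41 camper.jpg
```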
#### free
The free command allows you to report memory usage in bytes (**-b**), kilobytes (**-k**), megabytes (**-m**), or gigabytes (**-g**).
```
$ free -b
total used free shared buff/cache available
Mem: 6249144320 393076736 4851625984 1654784 1004441600 5561253888
Swap: 2147479552 0 2147479552
$ free -k
total used free shared buff/cache available
Mem: 6102680 383836 4737924 1616 980920 5430932
Swap: 2097148 0 2097148
$ free -m
total used free shared buff/cache available
Mem: 5959 374 4627 1 957 5303
Swap: 2047 0 2047
$ free -g
total used free shared buff/cache available
Mem: 5 0 4 0 0 5
Swap: 1 0 1
```
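free has an **-h** option of its own that picks the most readable unit for each column. Here is output corresponding roughly to the numbers above:
```
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.8G        374M        4.5G        1.6M        957M        5.2G
Swap:          2.0G          0B        2.0G
```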
#### tree
While not related to file or memory measurements, the tree command also provides a very human-friendly view of files by displaying them hierarchically to illustrate how they are organized. This kind of display can be very useful when trying to get an idea of how a directory's contents are arranged.
```
$ tree
.
├── 123
├── appended.png
├── appts
├── arrow.jpg
├── arrow.png
├── bin
│ ├── append
│ ├── cpuhog1
│ ├── cpuhog2
│ ├── loop
│ ├── mkhome
│ ├── runme
```
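tree also follows the **-h** convention, adding human-readable file sizes to the hierarchy (the sizes shown here are only illustrative):
```
$ tree -h bin
bin
├── [ 699]  append
├── [ 325]  cpuhog1
├── [ 325]  cpuhog2
├── [1.4K]  loop
```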
#### stat
The stat command is another that displays information in a very human-friendly format. It provides a lot more metadata on files, including the file size in bytes and blocks, the file type, device and inode, owner and group (names and numeric IDs), file permissions in both numeric and rwx format, and the dates the file was last accessed and modified. In some circumstances, it might also display when the file was initially created.
```
$ stat camper*
File: camper_10.jpg
Size: 365091 Blocks: 720 IO Block: 4096 regular file
Device: 801h/2049d Inode: 796059 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
Access: 2018-07-19 18:56:31.841013385 -0400
Modify: 2018-07-14 19:42:25.230519509 -0400
Change: 2018-07-14 19:42:25.230519509 -0400
Birth: -
File: camper.jpg
Size: 5818597 Blocks: 11368 IO Block: 4096 regular file
Device: 801h/2049d Inode: 796058 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
Access: 2018-07-19 18:56:31.845013872 -0400
Modify: 2018-07-14 19:41:46.882024039 -0400
Change: 2018-07-14 19:41:46.882024039 -0400
Birth: -
```
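When you need just a few of those fields, GNU stat's **-c** option takes a format string; **%n** is the file name, **%s** the size in bytes, and **%b** the number of blocks:
```
$ stat -c '%n: %s bytes, %b blocks' camper.jpg
camper.jpg: 5818597 bytes, 11368 blocks
```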
### Wrap-up
Linux commands provide many options that can make their output easier to understand or compare. For many commands, the **-h** option produces the friendlier output format. For others, you might have to specify how you'd prefer to see your output by using a specific option or by pressing a key, as with **top**. I hope some of these choices will make your Linux systems seem just a little friendlier.
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3296631/linux/displaying-data-in-a-human-friendly-way-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,674 @@
HTTP request routing and validation with gorilla/mux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr)
The Go networking library includes the `http.ServeMux` structure type, which supports HTTP request multiplexing (routing): a web server routes an HTTP request for a hosted resource, with a URI such as /sales4today, to a code handler; the handler performs the appropriate logic before sending an HTTP response, typically an HTML page. Here's a sketch of the architecture:
```
                 +------------+     +--------+     +---------+
HTTP request---->| web server |---->| router |---->| handler |
                 +------------+     +--------+     +---------+
```
In a call to the `ListenAndServe` function to start an HTTP server
```
http.ListenAndServe(":8888", nil) // args: port & router
```
a second argument of `nil` means that the `DefaultServeMux` is used for request routing.
The `gorilla/mux` package has a `mux.Router` type as an alternative to either the `DefaultServeMux` or a customized request multiplexer. In the `ListenAndServe` call, a `mux.Router` instance would replace `nil` as the second argument. What makes the `mux.Router` so appealing is best shown through a code example:
### 1\. A sample crud web app
The crud web application (see below) supports the four CRUD (Create, Read, Update, Delete) operations, which match four HTTP request methods: POST, GET, PUT, and DELETE, respectively. In the crud app, the hosted resource is a list of cliche pairs, each consisting of a cliche and a conflicting cliche, such as this pair:
```
Out of sight, out of mind. Absence makes the heart grow fonder.
```
New cliche pairs can be added, and existing ones can be edited or deleted.
**The crud web app**
```
package main
import (
   "gorilla/mux"
   "net/http"
   "fmt"
   "strconv"
)
const GETALL string = "GETALL"
const GETONE string = "GETONE"
const POST string   = "POST"
const PUT string    = "PUT"
const DELETE string = "DELETE"
type clichePair struct {
   Id      int
   Cliche  string
   Counter string
}
// Message sent to goroutine that accesses the requested resource.
type crudRequest struct {
   verb     string
   cp       *clichePair
   id       int
   cliche   string
   counter  string
   confirm  chan string
}
var clichesList = []*clichePair{}
var masterId = 1
var crudRequests chan *crudRequest
// GET /
// GET /cliches
func ClichesAll(res http.ResponseWriter, req *http.Request) {
   cr := &crudRequest{verb: GETALL, confirm: make(chan string)}
   completeRequest(cr, res, "read all")
}
// GET /cliches/id
func ClichesOne(res http.ResponseWriter, req *http.Request) {
   id := getIdFromRequest(req)
   cr := &crudRequest{verb: GETONE, id: id, confirm: make(chan string)}
   completeRequest(cr, res, "read one")
}
// POST /cliches
func ClichesCreate(res http.ResponseWriter, req *http.Request) {
   cliche, counter := getDataFromRequest(req)
   cp := new(clichePair)
   cp.Cliche = cliche
   cp.Counter = counter
   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)}
   completeRequest(cr, res, "create")
}
// PUT /cliches/id
func ClichesEdit(res http.ResponseWriter, req *http.Request) {
   id := getIdFromRequest(req)
   cliche, counter := getDataFromRequest(req)
   cr := &crudRequest{verb: PUT, id: id, cliche: cliche, counter: counter, confirm: make(chan string)}
   completeRequest(cr, res, "edit")
}
// DELETE /cliches/id
func ClichesDelete(res http.ResponseWriter, req *http.Request) {
   id := getIdFromRequest(req)
   cr := &crudRequest{verb: DELETE, id: id, confirm: make(chan string)}
   completeRequest(cr, res, "delete")
}
func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) {
   crudRequests<-cr
   msg := <-cr.confirm
   res.Write([]byte(msg))
   logIt(logMsg)
}
func main() {
   populateClichesList()
   // From now on, this goroutine alone accesses the clichesList.
   crudRequests = make(chan *crudRequest, 8)
   go func() { // resource manager
      for {
         select {
         case req := <-crudRequests:
         if req.verb == GETALL {
            req.confirm<-readAll()
         } else if req.verb == GETONE {
            req.confirm<-readOne(req.id)
         } else if req.verb == POST {
            req.confirm<-addPair(req.cp)
         } else if req.verb == PUT {
            req.confirm<-editPair(req.id, req.cliche, req.counter)
         } else if req.verb == DELETE {
            req.confirm<-deletePair(req.id)
         }
      }
   }()
   startServer()
}
func startServer() {
   router := mux.NewRouter()
   // Dispatch map for CRUD operations.
   router.HandleFunc("/", ClichesAll).Methods("GET")
   router.HandleFunc("/cliches", ClichesAll).Methods("GET")
   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET")
   router.HandleFunc("/cliches", ClichesCreate).Methods("POST")
   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT")
   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesDelete).Methods("DELETE")
   http.Handle("/", router) // enable the router
   // Start the server.
   port := ":8888"
   fmt.Println("\nListening on port " + port)
   http.ListenAndServe(port, router) // mux.Router now in play
}
// Return entire list to requester.
func readAll() string {
   msg := "\n"
   for _, cliche := range clichesList {
      next := strconv.Itoa(cliche.Id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n"
      msg += next
   }
   return msg
}
// Return specified clichePair to requester.
func readOne(id int) string {
   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n"
   index := findCliche(id)
   if index >= 0 {
      cliche := clichesList[index]
      msg = "\n" + strconv.Itoa(id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n"
   }
   return msg
}
// Create a new clichePair and add to list
func addPair(cp *clichePair) string {
   cp.Id = masterId
   masterId++
   clichesList = append(clichesList, cp)
   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n"
}
// Edit an existing clichePair
func editPair(id int, cliche string, counter string) string {
   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n"
   index := findCliche(id)
   if index >= 0 {
      clichesList[index].Cliche = cliche
      clichesList[index].Counter = counter
      msg = "\nCliche edited: " + cliche + " " + counter + "\n"
   }
   return msg
}
// Delete a clichePair
func deletePair(id int) string {
   idStr := strconv.Itoa(id)
   msg := "\n" + "Bad Id: " + idStr + "\n"
   index := findCliche(id)
   if index >= 0 {
      clichesList = append(clichesList[:index], clichesList[index + 1:]...)
      msg = "\nCliche " + idStr + " deleted\n"
   }
   return msg
}
//*** utility functions
func findCliche(id int) int {
   for i := 0; i < len(clichesList); i++ {
      if id == clichesList[i].Id {
         return i;
      }
   }
   return -1 // not found
}
func getIdFromRequest(req *http.Request) int {
   vars := mux.Vars(req)
   id, _ := strconv.Atoi(vars["id"])
   return id
}
func getDataFromRequest(req *http.Request) (string, string) {
   // Extract the user-provided data for the new clichePair
   req.ParseForm()
   form := req.Form
   cliche := form["cliche"][0]    // 1st and only member of a list
   counter := form["counter"][0]  // ditto
   return cliche, counter
}
func logIt(msg string) {
   fmt.Println(msg)
}
func populateClichesList() {
   var cliches = []string {
      "Out of sight, out of mind.",
      "A penny saved is a penny earned.",
      "He who hesitates is lost.",
   }
   var counterCliches = []string {
      "Absence makes the heart grow fonder.",
      "Penny-wise and dollar-foolish.",
      "Look before you leap.",
   }
   for i := 0; i < len(cliches); i++ {
      cp := new(clichePair)
      cp.Id = masterId
      masterId++
      cp.Cliche = cliches[i]
      cp.Counter = counterCliches[i]
      clichesList = append(clichesList, cp)
   }
}
```
To focus on request routing and validation, the crud app does not use HTML pages as responses to requests. Instead, requests result in plaintext response messages: A list of the cliche pairs is the response to a GET request, confirmation that a new cliche pair has been added to the list is a response to a POST request, and so on. This simplification makes it easy to test the app, in particular, the `gorilla/mux` components, with a command-line utility such as [curl][1].
The `gorilla/mux` package can be installed from [GitHub][2]. The crud app runs indefinitely; hence, it should be terminated with a Control-C or equivalent. The code for the crud app, together with a README and sample curl tests, is available on [my website][3].
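For example, once the app is running, a handful of curl requests exercise all four operations. The data values below are made up, but the `cliche` and `counter` field names come from the app's form parsing, and the routes are the ones registered in `startServer`:
```
% curl --request GET localhost:8888/cliches
% curl --request POST --data "cliche=Haste makes waste.&counter=Slow and steady wins the race." localhost:8888/cliches
% curl --request PUT --data "cliche=Haste makes waste.&counter=Fools rush in where angels fear to tread." localhost:8888/cliches/4
% curl --request DELETE localhost:8888/cliches/4
```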
### 2\. Request routing
The `mux.Router` extends REST-style routing, which gives equal weight to the HTTP method (e.g., GET) and the URI or path at the end of a URL (e.g., /cliches). The URI serves as the noun for the HTTP verb (method). For example, in an HTTP request, a start line such as
```
GET /cliches
```
means get all of the cliche pairs, whereas a start line such as
```
POST /cliches
```
means create a cliche pair from data in the HTTP body.
In the crud web app, there are five functions that act as request handlers for five variations of an HTTP request:
```
ClichesAll(...)    # GET: get all of the cliche pairs
ClichesOne(...)    # GET: get a specified cliche pair
ClichesCreate(...) # POST: create a new cliche pair
ClichesEdit(...)   # PUT: edit an existing cliche pair
ClichesDelete(...) # DELETE: delete a specified cliche pair
```
Each function takes two arguments: an `http.ResponseWriter` for sending a response back to the requester, and a pointer to an `http.Request`, which encapsulates information from the underlying HTTP request. The `gorilla/mux` package makes it easy to register these request handlers with the web server, and to perform regex-based validation.
The `startServer` function in the crud app registers the request handlers. Consider this pair of registrations, with `router` as a `mux.Router` instance:
```
router.HandleFunc("/", ClichesAll).Methods("GET")
router.HandleFunc("/cliches", ClichesAll).Methods("GET")
```
These statements mean that a GET request for either the single slash / or /cliches should be routed to the `ClichesAll` function, which then handles the request. For example, the curl request (with % as the command-line prompt)
```
% curl --request GET localhost:8888/
```
produces this response:
```
1: Out of sight, out of mind.  Absence makes the heart grow fonder.
2: A penny saved is a penny earned.  Penny-wise and dollar-foolish.
3: He who hesitates is lost.  Look before you leap.
```
The three cliche pairs are the initial data in the crud app.
In this pair of registration statements
```
router.HandleFunc("/cliches", ClichesAll).Methods("GET")
router.HandleFunc("/cliches", ClichesCreate).Methods("POST")
```
the URI is the same (/cliches) but the verbs differ: GET in the first case, and POST in the second. This registration exemplifies REST-style routing because the difference in the verbs alone suffices to dispatch the requests to two different handlers.
More than one HTTP method is allowed in a registration, although this strains the spirit of REST-style routing:
```
router.HandleFunc("/cliches", DoItAll).Methods("POST", "GET")
```
HTTP requests can be routed on features besides the verb and the URI. For example, the registration
```
router.HandleFunc("/cliches", ClichesCreate).Schemes("https").Methods("POST")
```
requires HTTPS access for a POST request to create a new cliche pair. In similar fashion, a registration might require a request to have a specified HTTP header element (e.g., an authentication credential).
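In the same spirit, the `Headers` matcher restricts a route to requests that carry a given header; the header name and value below are made up for illustration:
```
router.HandleFunc("/cliches", ClichesCreate).Headers("X-Auth-Token", "secret42").Methods("POST")
```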
### 3\. Request validation
The `gorilla/mux` package takes an easy, intuitive approach to request validation through regular expressions. Consider this handler registration for a *get one* operation:
```
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET")
```
This registration rules out HTTP requests such as
```
% curl --request GET localhost:8888/cliches/foo
```
because foo is not a decimal numeral. The request results in the familiar 404 (Not Found) status code. Including the regex pattern in this handler registration ensures that the `ClichesOne` function is called to handle a request only if the request URI ends with a decimal integer value:
```
% curl --request GET localhost:8888/cliches/3  # ok
```
As a second example, consider the request
```
% curl --request PUT --data "..." localhost:8888/cliches
```
This request results in a status code of 405 (Method Not Allowed) because the /cliches URI is registered, in the crud app, only for GET and POST requests. A PUT request, like a GET one request, must include a numeric id at the end of the URI:
```
router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT")
```
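With a numeric id in place, the same PUT goes through and is routed to `ClichesEdit` (the data values are again made up):
```
% curl --request PUT --data "cliche=He who hesitates is lost.&counter=Look before you leap." localhost:8888/cliches/3
```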
### 4\. Concurrency issues
The `gorilla/mux` router executes each call to a registered request handler as a separate goroutine, which means that concurrency is baked into the package. For example, if there are ten simultaneous requests such as
```
% curl --request POST --data "..." localhost:8888/cliches
```
then the `mux.Router` launches ten goroutines to execute the `ClichesCreate` handler.
Of the five request operations (GET all, GET one, POST, PUT, and DELETE), the last three alter the requested resource, the shared `clichesList` that houses the cliche pairs. Accordingly, the crud app needs to guarantee safe concurrency by coordinating access to the `clichesList`. In different but equivalent terms, the crud app must prevent a race condition on the `clichesList`. In a production environment, a database system might be used to store a resource such as the `clichesList`, and safe concurrency then could be managed through database transactions.
The crud app takes the recommended Go approach to safe concurrency:
  * Only a single goroutine, the resource manager started in the crud app's `main` function, has access to the `clichesList` once the web server starts listening for requests.
  * The request handlers such as `ClichesCreate` and `ClichesAll` send a pointer to a `crudRequest` instance over a Go channel (thread-safe by default), and the resource manager alone reads from this channel. The resource manager then performs the requested operation on the `clichesList`.
The safe-concurrency architecture can be sketched as follows:
```
                 crudRequest                   read/write
request handlers------------->resource manager------------>clichesList
```
With this architecture, no explicit locking of the `clichesList` is needed because only one goroutine, the resource manager, accesses the `clichesList` once CRUD requests start coming in.
To keep the crud app as concurrent as possible, it's essential to have an efficient division of labor between the request handlers, on the one side, and the single resource manager, on the other. Here, for review, is the `ClichesCreate` request handler:
```
func ClichesCreate(res http.ResponseWriter, req *http.Request) {
   cliche, counter := getDataFromRequest(req)
   cp := new(clichePair)
   cp.Cliche = cliche
   cp.Counter = counter
   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)}
   completeRequest(cr, res, "create")
}
```
`ClichesCreate` calls the utility function `getDataFromRequest`, which extracts the new cliche and counter-cliche from the POST request. The `ClichesCreate` function then creates a new `clichePair`, sets two fields, and creates a `crudRequest` to be sent to the single resource manager. This request includes a confirmation channel, which the resource manager uses to return information back to the request handler. All of the setup work can be done without involving the resource manager because the `clichesList` is not being accessed yet.
The `completeRequest` utility function called at the end of the `ClichesCreate` function and the other request handlers
```
completeRequest(cr, res, "create") // shown above
```
brings the resource manager into play by putting a `crudRequest` into the `crudRequests` channel:
```
func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) {
   crudRequests<-cr          // send request to resource manager
   msg := <-cr.confirm       // await confirmation string
   res.Write([]byte(msg))    // send confirmation back to requester
   logIt(logMsg)             // print to the standard output
}
```
For a POST request, the resource manager calls the utility function `addPair`, which changes the `clichesList` resource:
```
func addPair(cp *clichePair) string {
   cp.Id = masterId  // assign a unique ID
   masterId++        // update the ID counter
   clichesList = append(clichesList, cp) // update the list
   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n"
}
```
The resource manager calls similar utility functions for the other CRUD operations. It's worth repeating that the resource manager is the only goroutine to read or write the `clichesList` once the web server starts accepting requests.
For web applications of any type, the `gorilla/mux` package provides request routing, request validation, and related services in a straightforward, intuitive API. The crud web app highlights the package's main features. Give the package a test drive, and you'll likely be a buyer.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/http-request-routing-validation-gorillamux
作者:[Marty Kalin][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mkalindepauledu
[1]:https://curl.haxx.se/
[2]:https://github.com/gorilla/mux
[3]:http://condor.depaul.edu/mkalin

View File

@ -0,0 +1,71 @@
Happy birthday, GNOME: 6 reasons to love this Linux desktop
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_anniversary_celebrate_hats_cake.jpg?itok=Zfsv6DE_)
GNOME has been my favorite [desktop environment][1] for quite some time. While I always make it a point to check out other environments from time to time, there are some aspects of the GNOME desktop that are hard to live without. While there are many great desktop environments out there, [GNOME][2] feels like home to me. Here are some of the features I enjoy most about GNOME.
### Stability
Having a stable working environment is the most important aspect of a desktop for me. After all, the feature set of an environment doesn't matter at all if it crashes constantly and you lose work. For me, GNOME is rock-solid. I have heard of others experiencing crashes and instability, but it always seems to be due either to running GNOME on unsupported hardware or to faulty extensions (more on that later). I run GNOME primarily on hardware that is known to be well-supported in Linux ([System76][3], for example). I also have a few systems that are not as well supported (a custom-built desktop and a Dell Latitude laptop), and I don't have any issues there either. I have compared stability in other well-known desktop environments, with unfortunate results. Nothing comes close to GNOME when it comes to stability.
### Extensions
I really enjoy being able to add functionality to my environment. I don't necessarily require any extensions, because I am perfectly fine with stock GNOME with no extensions whatsoever. However, having the ability to add a few things here and there is welcome. GNOME features various extensions to do things such as add a weather display to your panel, and much more. This adds a level of customization that is not typical of other environments. That said, proceed with caution: extensions are of varying quality and may lead to stability issues. I find, though, that if you install only the extensions you absolutely need, and you make sure they're kept up to date (and aren't abandoned by the developer), you'll generally be in good shape.
### Activities overview
Activities overview is quite possibly the easiest feature to use in GNOME; it is so simple that it barely needs its own section in this article. However, when I use other desktop environments, this is the feature I miss most.
The thing is, I am very busy, with multiple projects going on at any one time, and dozens of different windows open. To access the activities overview, I simply press the Super key. Immediately, my workspace is "zoomed out" and I see all of my windows side-by-side. This is often a faster way to locate a window that is hidden behind others, and a good way overall to see what exactly is running on any given workspace.
When using other desktop environments, I will often find myself pressing the Super key out of habit, only to remember that I'm not using GNOME at the time. There are ways of achieving similar behavior in other environments (such as installing and tweaking Compiz), but in GNOME this feature is built-in.
### Dynamic workspaces
While working, I am not sure up-front how many workspaces I will need. Sometimes I can be working on three projects at a time, or as many as ten. With most desktop environments, I can access the settings screen and add or remove workspaces as needed. But with GNOME, I have exactly as many workspaces as I need at any given time. Every time I open applications on a workspace, I am given another blank one that I can switch to in order to start another project. Typically, I keep all windows related to a specific project on their own workspace, so it makes it very easy to locate my workflow for a given project.
Other desktop environments have really good implementations of the concept of workspaces, but GNOME's implementation works best for me.
### Simplicity
Another thing I love about GNOME is that it's simple and straight to the point. By default, there is only one panel, and it's at the top of the screen. This panel shows you a small amount of information, such as the date, time, and battery usage. GNOME 2 had two panels, so seeing GNOME stripped down to a single panel is welcome and saves room on the screen. Most of the things you don't need to see all the time are hidden within the Activities overview, leaving you with the maximum amount of screen space for the application(s) you are working on. GNOME just stays out of the way and lets you focus on getting your work done, and stays away from fancy widgets and desktop gadgets that just aren't necessary.
In addition, GNOME has really great support for keyboard shortcuts. I can access most of GNOME's features without touching my mouse, such as Super+Page Up and Super+Page Down to switch workspaces, Super+Up arrow to maximize windows, and so on. In addition, I am able to easily create my own keyboard shortcuts for all of my favorite applications.
### GNOME Boxes
GNOME's Boxes app is an underrated gem. This utility makes it very easy to spin up a virtual machine, which is a godsend for developers and those who like to test configurations on multiple distributions and platforms. With Boxes, you can spin up a virtual machine at any time, and it will even automate the installation process for you. For example, if you want a new Ubuntu VM, you simply choose Ubuntu as your desired platform, fill in your username and any related information, and you will have a new Ubuntu VM in a few minutes. When you're done with it, you can power it down or trash it.
For me, I do a lot of DevOps-style work as well as system administration. Being able to test a configuration on a virtual machine before deploying to another environment is great. Sure, you can do the exact same thing in VirtualBox, and VirtualBox is a great piece of software. However, Boxes is built right into GNOME, and desktop environments generally don't offer their own solution for virtualization.
### GNOME Music
While I work, I have difficulty tuning out noise in my environment, so I like to listen to music while I complete projects and tune out the rest of the world. GNOME's Music app is very simple and works very well. With most of the music industry gravitating toward streaming music online, and many once-popular [open source music players][7] becoming abandoned projects, it's nice to see GNOME ship a built-in music player that can play my music collection. It's great to listen to my music while I work, and it helps me zone in on what I am doing.
### GNOME Games
When work is done for the day, it's time to play! There's nothing like playing a classic game such as Final Fantasy VI or Super Metroid after a hard day's work. I am a huge fan of classic gaming, with 22 working gaming consoles and somewhere near 1,000 physical games in my collection. But I may not always have a moment to hook up one of my retro consoles, so GNOME Games gives me quick access to emulated versions of my collection. It also works with Libretro cores, so it seems to me that the developers of this application have really thought out what fans of classic gaming like me are looking for in a gaming frontend.
These are the major features I enjoy most in the GNOME desktop. What are some of yours?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-i-love-about-gnome
作者:[Jay LaCroix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jlacroix
[1]:https://opensource.com/article/18/8/how-navigate-your-gnome-linux-desktop-only-keyboard
[2]:https://opensource.com/article/17/8/reasons-i-come-back-gnome
[3]:https://opensource.com/article/16/12/open-gaming-news-december-31
[4]:https://opensource.com/file/407221
[5]:https://opensource.com/sites/default/files/uploads/gnome3-cheatsheet.png (GNOME 3 Cheat Sheet)
[6]:https://opensource.com/downloads/cheat-sheet-gnome-3
[7]:https://opensource.com/article/18/6/open-source-music-players

View File

@ -0,0 +1,53 @@
How the L1 Terminal Fault vulnerability affects Linux systems
======
![](https://images.idgesg.net/images/article/2018/08/l1tf-copy-100768129-large.jpg)
Announced just yesterday in security advisories from Intel, Microsoft and Red Hat, a newly discovered vulnerability affecting Intel processors (and, thus, Linux) called L1TF or “L1 Terminal Fault” is grabbing the attention of Linux users and admins. Exactly what is this vulnerability and who should be worrying about it?
### L1TF, L1 Terminal Fault, and Foreshadow
The processor vulnerability goes by the names L1TF, L1 Terminal Fault, and Foreshadow. Researchers who discovered the problem back in January and reported it to Intel called it "Foreshadow". It is similar to vulnerabilities discovered in the past (such as Spectre).
This vulnerability is Intel-specific. Other processors are not affected. And like some other vulnerabilities, it exists because of design choices that were implemented to optimize kernel processing speed but exposed data in ways that allowed access by other processes.
**[ Read also:[22 essential Linux security commands][1] ]**
Three CVEs have been assigned to this issue:
* CVE-2018-3615 for Intel Software Guard Extensions (Intel SGX)
* CVE-2018-3620 for operating systems and System Management Mode (SMM)
* CVE-2018-3646 for impacts to virtualization
An Intel spokesman made this statement regarding the issue: _"L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We've provided more information on our web site and continue to encourage everyone to keep their systems up to date, as it's one of the best ways to stay protected. We'd like to extend our thanks to the researchers at imec-DistriNet, KU Leuven, Technion Israel Institute of Technology, University of Michigan, University of Adelaide and Data61 and our industry partners for their collaboration in helping us identify and address this issue."_
### Does L1TF affect your Linux system?
The short answer is "probably not." You should be safe if you've patched your system since the earlier [Spectre and Meltdown vulnerabilities][2] were exposed back in January. As with Spectre and Meltdown, Intel claims that no real-world cases of affected systems have been reported or detected. The company has also said that the changes are unlikely to incur noticeable performance hits on individual systems, but they might represent significant performance hits for data centers using virtualized operating systems.
Even so, frequent patches are always recommended. To check your current kernel level, use the **uname -r** command:
```
$ uname -r
4.18.0-041800-generic
```
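On kernels that include the L1TF patches, you can also ask the kernel directly whether a mitigation is active. Since the Spectre/Meltdown fixes, mitigations are reported under /sys/devices/system/cpu/vulnerabilities, and updated kernels add an l1tf entry there (the exact message varies by CPU and kernel version):
```
$ cat /sys/devices/system/cpu/vulnerabilities/l1tf
Mitigation: PTE Inversion
```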
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3298157/linux/linux-and-l1tf.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.networkworld.com/article/3272286/open-source-tools/22-essential-security-commands-for-linux.html
[2]:https://www.networkworld.com/article/3245813/security/meltdown-and-spectre-exploits-cutting-through-the-fud.html
[3]:https://www.facebook.com/NetworkWorld/
[4]:https://www.linkedin.com/company/network-world

View File

@ -1,11 +1,10 @@
translating by lujun9972
Using User Namespaces on Docker
======
User Namespaces was officially added in Docker 1.10; it allows the host system to map its own `uid` and `gid` to some different `uid` and `gid` for containers' processes. This is a big improvement in Docker's security. I will show an example of the problem that User Namespaces can resolve, and then show how to enable it.
### Creating a Docker Machine
If you already have a docker machine to try out User Namespaces, you can skip this step. I'm using Docker Toolbox on my Macbook, so I simply create a Docker Machine on VirtualBox with the `docker-machine` command (e.g. hostname=`host1`):
```
# Create host1
$ docker-machine create --driver virtualbox host1
@ -15,9 +14,9 @@ $ docker-machine ssh host1
```
### Understanding what a non-root user can do if User Namespaces is not enabled
Before setting up User Namespaces, let's see what the problem is. What was actually wrong with Docker? First of all, one of the great benefits of using Docker is that users can have root privileges in containers, so they can easily install software packages. But this is a double-edged sword in Linux container technology. With a little twist, a non-root user can get root access to, for instance, `/etc` of the host system. Here's how to do it.
```
# Run a container and mount host1's /etc onto /root/etc
$ docker run --rm -v /etc:/root/etc -it ubuntu
@ -33,9 +32,9 @@ $ cat /etc/hosts
```
As you can see, it is surprisingly easy, and it's obvious that Docker wasn't designed for shared computers. But now, with User Namespaces, Docker lets you avoid this problem.
### Enabling User Namespaces
```
# Create a user called "dockremap"
$ sudo adduser dockremap
@ -46,7 +45,7 @@ $ sudo sh -c 'echo dockremap:500000:65536 > /etc/subgid'
```
And then, open `/etc/init.d/docker`, and add `--userns-remap=default` next to `/usr/local/bin/docker daemon` like this:
```
$ sudo vi /etc/init.d/docker
:
@ -57,28 +56,28 @@ $ sudo vi /etc/init.d/docker
```
And restart Docker:
```
$ sudo /etc/init.d/docker restart
```
That's all!
**Note:** If you're using CentOS 7, there are two things you need to know.
**1.** User Namespaces is not enabled in the kernel by default. You can enable it by executing the following command and restarting the system.
```
sudo grubby --args="user_namespace.enable=1" \
--update-kernel=/boot/vmlinuz-3.10.0-XXX.XX.X.el7.x86_64
```
**2.** CentOS 7 uses systemctl to manage services, so the file you need to edit is `/usr/lib/systemd/system/docker.service`.
### Checking if User Namespaces is working properly
If everything's set properly, you shouldn't be able to edit host1's `/etc` from a container. So let's check it out.
```
# Create a container and mount host1's /etc to container's /root/etc
$ docker run --rm -v /etc:/root/etc -it ubuntu
@ -104,7 +103,7 @@ rm: cannot remove '/root/etc/hostname': Permission denied
```
Okay, great. This is how User Namespaces works.
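To see the remapping in effect, compare how things look inside and outside a container. This is only a sketch: the 500000 uid comes from the /etc/subuid entry created above, and the exact paths can vary by Docker version:
```
# Inside a container you still appear to be root ...
$ docker run --rm ubuntu id -u
0

# ... but on the host, Docker stores this container's data under
# a directory owned by the remapped uid/gid rather than by root.
$ sudo ls -ld /var/lib/docker/500000.500000
drwx------ 9 500000 500000 ... /var/lib/docker/500000.500000
```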
--------------------------------------------------------------------------------

View File

@ -0,0 +1,102 @@
UNIX curiosities
======
Recently I've been doing more UNIX things with various tools I've written, and I ran into two interesting issues. Neither of these is a "bug", but rather behavior I wasn't expecting.
### Thread-safe printf
I have a C program that reads some images from disk, does some processing, and writes output about these images to STDOUT. Pseudocode:
```
for(imagefilename in images)
{
results = process(imagefilename);
printf(results);
}
```
The processing is independent for each image, so naturally I want to distribute it across the CPUs to speed things up. I usually use `fork()`, so I wrote this:
```
for(child in children)
{
pipe = create_pipe();
worker(pipe);
}
// main parent process
for(imagefilename in images)
{
write(pipe[i_image % N_children], imagefilename)
}
worker()
{
while(1)
{
imagefilename = read(pipe);
results = process(imagefilename);
printf(results);
}
}
```
This is the usual approach: I create pipes for IPC and send the image names to the child workers through those pipes. Each worker _could_ write its results back to the main process through another set of pipes, but that's a pain, so each worker writes directly to the shared STDOUT. This works fine, but, as you'd expect, the writes to STDOUT clash, so the results for the various images end up interleaved. That's bad. I didn't want to set up my own locks, but fortunately GNU libc provides functions for exactly this: [`flockfile()`][1]. I put them in... and it didn't work! Why? Because the internal state of `flockfile()` ends up confined to a single child process thanks to `fork()`'s copy-on-write behavior. That is, the extra safety that `fork()` provides (compared to threads) actually ends up breaking the locks.
I haven't tried other locking mechanisms (such as pthread mutexes), but I can imagine they would run into similar problems. And I want to keep things simple, so sending the output back to the parent to print is out of the question: that creates more work for both the programmer and the computer running the program.
The solution: use threads instead of fork. This has the nice side effect of making the pipes unnecessary. Final pseudocode:
```
for(children)
{
pthread_create(worker, child_index);
}
for(children)
{
pthread_join(child);
}
worker(child_index)
{
for(i_image = child_index; i_image < N_images; i_image += N_children)
{
results = process(images[i_image]);
flockfile(stdout);
printf(results);
funlockfile(stdout);
}
}
```
Much simpler, and actually works as desired. I guess sometimes threads are better.
### Passing a partially-read file to a child process
For the various [vnlog][2] tools, I needed to implement this sequence:
1. A process opens a file with the O_CLOEXEC flag turned off
2. The process reads part of this file (up to the end of the legend, in vnlog's case)
3. The process calls exec to invoke another program to process the rest of the already-open file
The second program may need a file name on its command line instead of an already-open file descriptor, because that second program may call open() itself. If I pass it the file name, the new program will re-open the file and read from the beginning, not from where the original program stopped. It is important for my application that this not happen, so passing the file name to the second program doesn't work.
So I really need to pass the already-open file descriptor somehow. I'm using Linux (other OSes may behave differently here), so in theory I can do this by passing /dev/fd/N instead of the file name. But it turns out that doesn't work either. On Linux (again, this may be Linux-specific), for regular files /dev/fd/N is a symlink to the original file, so this ends up doing exactly the same thing as passing the file name.
But there's a hack! If we're reading a pipe instead of a file, there's nothing to symlink to, and /dev/fd/N ends up handing the original pipe to the second process, and then things work. And I can fake this by changing the open("filename") above into something like popen("cat filename"). Yuck! Is this really the best we can do? And what does this look like on a BSD?
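Here is a minimal C sketch of that trick. The `consumer` program and the data.vnl file name are hypothetical stand-ins, not from the original tools:
```
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    // Read through a pipe (via cat) rather than open()ing the file
    // directly, so that /dev/fd/N refers to a pipe, not a symlink.
    FILE* fp = popen("cat data.vnl", "r");
    if (fp == NULL) return 1;

    // Unbuffered reads: anything we do NOT consume here stays in the
    // pipe for the child instead of being swallowed by stdio's buffer.
    setvbuf(fp, NULL, _IONBF, 0);

    // Consume the legend/header line ourselves.
    char line[1024];
    if (fgets(line, sizeof(line), fp) == NULL) return 1;

    // Hand the rest of the stream to the child via /dev/fd/N.
    char path[32];
    snprintf(path, sizeof(path), "/dev/fd/%d", fileno(fp));
    execlp("consumer", "consumer", path, (char*)NULL);
    return 1; // reached only if exec() failed
}
```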
--------------------------------------------------------------------------------
via: http://notes.secretsauce.net/notes/2018/08/03_unix-curiosities.html
作者:[Dima Kogan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://notes.secretsauce.net/
[1]:https://www.gnu.org/software/libc/manual/html_node/Streams-and-Threads.html
[2]:http://www.github.com/dkogan/vnlog