Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-25 23:11:02 +08:00)

commit 265ed3b927
Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating
@@ -1,40 +1,36 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12477-1.html)
[#]: subject: (Bash Script to Check How Long the High CPU/Memory Consumption Processes Runs on Linux)
[#]: via: (https://www.2daygeek.com/bash-script-to-check-how-long-the-high-cpu-memory-consumption-processes-runs-on-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

-在 Linux 中运行 Bash 脚本检查高 CPU/内存消耗进程
+实用脚本:检查高 CPU / 内存消耗进程
======

-过去,我们写了三篇不同的文章来使用 Linux 命令来识别。
-
-你可以通过下面相关的 URL 立即访问。
-
-  * **[如何在 Linux 中查找高 CPU 消耗进程][1]**
-  * **[如何找出 Linux 中的高内存消耗进程][2]**
-  * **[检查进程在 Linux 中运行了多长时间的五种方法][3]**
+![](https://img.linux.net.cn/data/attachment/album/202008/01/205420jllu1nsngu9qszu5.jpg)
+
+过去,我们写了三篇不同的文章来使用 Linux 命令来识别这些进程。
+
+你可以通过下面相关的 URL 立即访问:
+
+  * [如何在 Linux 中找出 CPU 占用高的进程][1]
+  * [如何在 Linux 中找出内存消耗最大的进程][2]
+  * [在 Linux 中如何查找一个命令或进程的执行时间][3]

本教程中包含两个脚本,它们可以帮助你确定 Linux 上高 CPU/内存消耗进程的运行时间。

-该脚本将显示进程 ID,进程的所有者,进程的名称以及进程的运行时间。
-
-这将帮助你确定哪些(必须事先完成)作业正在超时运行。
-
-在可以使用 ps 命令来实现。
+该脚本将显示进程 ID、进程的所有者、进程的名称以及进程的运行时间。这将帮助你确定哪些(必须事先完成)作业正在超时运行。这可以使用 `ps` 命令来实现。

### 什么是 ps 命令

-ps 代表进程状态,它显示有关系统上活动/正在运行的进程的信息。
+`ps` 是<ruby>进程状态<rt>processes status</rt></ruby>,它显示有关系统上活动/正在运行的进程的信息。

它提供了当前进程的快照以及详细信息,例如用户名、用户 ID、CPU 使用率、内存使用率、进程开始日期和时间等。

-### 1)Bash 脚本检查高 CPU 消耗进程在 Linux 上运行了多长时间
+#### 1)检查高 CPU 消耗进程在 Linux 上运行了多长时间的 Bash 脚本

该脚本将帮助你确定高 CPU 消耗进程在 Linux 上运行了多长时间。
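文中脚本的核心就是 `ps` 的 `etime`(已运行时长)字段。下面是一个仅作演示的最小示例(并非原文脚本,字段组合与排序是假设的常见用法,需要 GNU procps):

```shell
# 按 CPU 占用降序列出前 5 个进程:PID、属主、命令名、已运行时长(etime)、CPU 占用
# etime 的格式为 [[DD-]hh:]mm:ss
ps -eo pid,user,comm,etime,%cpu --sort=-%cpu | head -6
```

原文脚本在此基础上用循环和 `column -t` 对输出做了对齐格式化。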
@@ -56,13 +52,13 @@ done | column -t
echo "--------------------------------------------------"
```

-给 **”long-running-cpu-proc.sh“** 设置可执行**[Linux 文件权限][4]**。
+给 `long-running-cpu-proc.sh` 设置可执行的 [Linux 文件权限][4]。

```
# chmod +x /opt/scripts/long-running-cpu-proc.sh
```

-运行此脚本时,你将获得类似以下的输出。
+运行此脚本时,你将获得类似以下的输出:

```
# sh /opt/scripts/long-running-cpu-proc.sh
@@ -82,7 +78,7 @@ daygeek 6301 Web 57:40
----------------------------------------------------
```
-### 2)Bash 脚本检查高内存消耗进程在 Linux 上运行了多长时间
+#### 2)检查高内存消耗进程在 Linux 上运行了多长时间的 Bash 脚本

该脚本将帮助你确定最大的内存消耗进程在 Linux 上运行了多长时间。

@@ -104,13 +100,13 @@ done | column -t
echo "--------------------------------------------------"
```

-给 **”long-running-memory-proc.sh“** 设置可执行 Linux 文件权限。
+给 `long-running-memory-proc.sh` 设置可执行的 Linux 文件权限。

```
# chmod +x /opt/scripts/long-running-memory-proc.sh
```

-运行此脚本时,你将获得类似以下的输出。
+运行此脚本时,你将获得类似以下的输出:

```
# sh /opt/scripts/long-running-memory-proc.sh
@@ -137,13 +133,13 @@ via: https://www.2daygeek.com/bash-script-to-check-how-long-the-high-cpu-memory-

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/how-to-find-high-cpu-consumption-processes-in-linux/
-[2]: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
-[3]: https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/
+[1]: https://linux.cn/article-11678-1.html
+[2]: https://linux.cn/article-11542-1.html
+[3]: https://linux.cn/article-10261-1.html
[4]: https://www.2daygeek.com/understanding-linux-file-permissions/
@@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12479-1.html)
[#]: subject: (How failure-driven development makes you successful)
[#]: via: (https://opensource.com/article/20/3/failure-driven-development)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)

屡屡失败犯错的我为什么没有被开除
======

> 我是词典里 “失败” 一词旁边的插图,这就是为什么我擅长我的工作的原因。

![](https://img.linux.net.cn/data/attachment/album/202008/02/212013q5jjc78ihwd72cij.jpg)

我的职称是高级软件工程师,但我最亲近的同事并不这么称呼我。由于我摧毁一切,他们管我叫“樱桃炸弹”(正巧我姓“樱桃”)。我定期会遇到的失败已经可以影响到我们的季度性收益和停机时间。简单的来说,我就是你所听说过的生产灾难:“别动,啥都别做,无论何时何地。”

我的职业生涯始于支持服务台,在那里我写了一些循环,破坏了高端客户的服务器。我曾在没有警告的情况下将生产应用程序关闭了长达八个小时,并且在试图使得情况好转的过程中摧毁了无数个集群,有几次只是因为我打错了字。

我是我们在 [Kubernetes][2] 中设有灾难恢复(DR)集群的原因。我是个混乱的工程师,我们有一个应用程序,它的故障恢复计划还从未测试过,而我在没有警告的情况下,就教人们如何快速行动和排除故障。我作为可能失败的最好例子而存在,这实际上是有史以来最酷的事情。

### 我和消失的 K8s 集群

我的正式职责之一涉及到我们的应用架构。对于任何形式的架构改动,我都要进行代码的编写与测试,看看有什么可能性。近来,据说这成了我老板史诗级的痛苦,这只是轻描淡写。

我们在 Kubernetes 上运行我们的大多数基础架构,Kubernetes 以其弹性著称。尽管有这样的声誉,我还是使得两个集群,好吧,消失了。你可能会好奇我是怎么做到的,很容易,`terraform destroy`。我们通过 [Terraform][3] 以代码的方式管理我们的基础架构,并且不需要任何软件知识就知道 `destroy` 可做坏事。在你惊慌失措之前,好吧,是开发集群,所以我还活着。

鉴于此,你们肯定会问我为什么还没丢掉饭碗,以及为什么我要写下这些事情。这很好回答:我仍然有工作,是因为我更新的基础架构代码比起起初的代码工作得更好更快了。我写下这些是因为每个人都会经常性地遭遇失败,这是非常非常正常的。如果你没有时不时遭遇失败,我认为你并没有足够努力地学习。

### 破坏东西并培训人们

你可能还会认为永远不会有人让我去培训任何人。那是最糟糕的主意,因为(就像我的团队开玩笑说的)你永远都不应该做我所做的事情。但是我的老板却让我定期去训练新人。我甚至为整个团队提供使用我们的基础设施或代码的培训,教人们如何建立自己的基础设施。

原因是这样的:失败是你迈向成功的第一步。失败的教训绝不只是“备份是个绝佳的主意”。不,从失败中,你学会了更快地恢复、更快地排除故障并且在你工作中取得惊人的进步。当你在工作中变得惊人的时候,你就可以培训其他人,教给他们什么事情不要做,并且帮助他们去理解一切是如何工作的。由于你的经验,他们会比你开始时更进一步 —— 而且他们也很可能以新的、惊人的、史诗般的方式失败,每个人都可以从中学到东西。

### 你的成功取决于你的失败

没有人生来就具有软件工程和云基础架构方面的天赋,就像没有人天生就会走路。我们都是从滚动和翻爬开始的。从那时起,我们学会爬行,然后能够站立一会儿。当我们开始走路后,我们会跌倒并且擦伤膝盖,撞到手肘,还有,比如像我哥哥,走着走着撞上桌子的尖角,然后在眉毛中间缝了针。

凡事都需要时间去学习。一路上阅读手边能获得的一切来帮助你,但这永远只是个开始。完美是无法实现的幻想,你必须通过失败来取得成功。

每走一步,我的失败都教会我如何把事情做得更好。

最终,你的成功和你累积的失败一样多,这标志着你成功的程度。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/failure-driven-development

作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[JonnieWayy](https://github.com/JonnieWayy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://www.redhat.com/en/topics/containers/what-is-kubernetes
[3]: https://github.com/hashicorp/terraform
@@ -1,28 +1,30 @@
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12474-1.html)
[#]: subject: (Never forget your password with this Python encryption algorithm)
[#]: via: (https://opensource.com/article/20/6/python-passwords)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

-有了这个 Python 加密算法,你再也不用担心忘记密码了
+国王的秘密:如何保护你的主密码
======

-本密码保护算法使用 Python 实现,基于 Shamir 秘密共享算法,可以有效避免黑客窃取和自己不经意忘记引发的风险和不便。
-![Searching for code][1]
-
-很多人使用密码管理器来保密存储自己在用的各种密码。密码管理器的关键环节之一是主密码,主密码保护着所有其它密码。这种情况下,主密码本身就是风险所在。任何知道你的主密码的人,都可以视你的密码保护若无物,畅行无阻。自然而然,为了保证主密码的安全性,你会选用很难想到的密码,把它牢记在脑子里,甚至还有很多其它你能想到的 [各种方法][2]。
-
-但是万一主密码泄露了或者忘记了,后果是什么?可能你要去个心仪的没有现代技术覆盖的岛上旅行上个把月,在开心戏水之后享用美味菠萝的时刻,突然记不清自己的密码是什么了。是“山尖一寺一壶酒”?还是“一去二三里,烟村四五家”?反正当时选密码的时候感觉浑身都是机灵,现在则后悔当初何必作茧自缚。
+> 这种使用 Python 和 Shamir 秘密共享的独特算法可以保护你的主密码,可以有效避免黑客窃取和自己不经意忘记引发的风险和不便。
+
+![](https://img.linux.net.cn/data/attachment/album/202008/01/105831kzxididbld8kdhdb.jpg)
+
+很多人使用密码管理器来保密存储自己在用的各种密码。密码管理器的关键环节之一是主密码,主密码保护着所有其它密码。这种情况下,主密码本身就是风险所在。任何知道你的主密码的人,都可以视你的密码保护若无物,畅行无阻。自然而然,为了保证主密码的安全性,你会选用很难想到的密码,把它牢记在脑子里,并做[所有其他][2]你应该做的事情。
+
+但是万一主密码泄露了或者忘记了,后果是什么?也许你去了个心仪的岛上旅行上个把月,没有现代技术覆盖,在开心戏水之后享用美味菠萝的时刻,突然记不清自己的密码是什么了。是“山巅一寺一壶酒”?还是“一去二三里,烟村四五家”?反正当时选密码的时候感觉浑身都是机灵,现在则后悔当初何必作茧自缚。

![XKCD comic on password strength][3]

-([XKCD][4], [CC BY-NC 2.5][5])
+*([XKCD][4], [CC BY-NC 2.5][5])*

当然,你不会把自己的主密码告诉其它任何人,因为这是密码管理的首要原则。有没有其它变通的办法,免除这种难以承受的密码之重?

-试试 **[Shamir 秘密共享算法][6]**,一种可以将保密内容进行分块保存,且只能将片段拼合才能恢复保密内容的算法。
+试试 <ruby>[Shamir 秘密共享算法][6]<rt>Shamir's Secret Sharing</rt></ruby>,这是一种可以将保密内容进行分块保存,且只能将片段拼合才能恢复保密内容的算法。

先分别通过一个古代的和一个现代的故事,看看 Shamir 秘密共享算法究竟是怎么回事吧。
@@ -30,8 +32,7 @@

### 一个古代关于加解密的故事

-古代某国,王有个大秘密,很大很大的秘密:
+古代某国,国王有个大秘密,很大很大的秘密:

```
def int_from_bytes(s):
@@ -44,62 +45,60 @@ def int_from_bytes(s):
secret = int_from_bytes("terrible secret".encode("utf-8"))
```
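差异视图把 `int_from_bytes` 的函数体截掉了。下面按其名称与用法给出一个推测性的示意实现(未必与原文逐字相同):

```python
def int_from_bytes(s):
    """把字节串按大端序逐字节拼成一个大整数(按原文用法推测的示意实现)。"""
    acc = 0
    for byte in s:
        acc = acc * 256 + byte
    return acc

secret = int_from_bytes("terrible secret".encode("utf-8"))
```

Python 3 其实内置了等价操作:`int.from_bytes(b, "big")`,两者结果一致。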
-大到他自己的孩子都不能轻易信任。他有五个王子,但前程危机重重。他的孩子需要在他百年之后用这个秘密来保卫国家,而国王又不能忍受自己的孩子在他们还记得自己的时候就知道这些秘密,尤其是这种状态可能要持续几十年。
-
-所以,国王动用大力魔术,将这个秘密分为了五个部分。他知道,可能有一两个孩子不会遵从他的遗嘱,但绝对不会同时有三个或三个以上这样:
+大到连他自己的孩子都不能轻易信任。他有五个子女,但他知道前路危机重重。他的孩子需要在他百年之后用这个秘密来保卫国家,而国王又不能忍受自己的孩子在他们还记得自己的时候就知道这些秘密,尤其是这种状态可能要持续几十年。
+
+所以,国王动用大力魔术,将这个秘密分为了五个部分。他知道,可能有一两个孩子不会遵从他的遗嘱,但绝对不会同时有三个或三个以上会这样:

```
from mod import Mod
from os import urandom
```

-国王精通 [有限域][8] 和 _随机理论_,当然,对他来说,使用 Python 分割这个秘密也是小菜一碟。
-
-第一步是选择一个大质数——第 13 个 [梅森质数][9] (`2**521 - 1`),他让人把这个数誊写到纸上,封之金匮,藏之后殿:
+国王精通 [有限域][8] 和 *随机* 魔法,当然,对他来说,使用巨蟒分割这个秘密也是小菜一碟。
+
+第一步是选择一个大质数——第 13 个 [梅森质数][9](`2**521 - 1`),他让人把这个数铸造在巨鼎上,摆放在大殿上:

```
P = 2**521 - 1
```

-但这不是要保守的秘密:这只是 _公钥_。
+但这不是要保密的秘密:这只是 *公开数据*。

-国王知道,如果 `P` 是一个质数, 用 `P` 对数字取模,就形成了一个数学 [场][10]:在场中可以自由进行加、减、乘、除运算。当然,做除法运算时,除数不能为 0。
+国王知道,如果 `P` 是一个质数,用 `P` 对数字取模,就形成了一个数学 [域][10]:在域中可以自由进行加、减、乘、除运算。当然,做除法运算时,除数不能为 0。

-国王日理万机,方便起见,他在做模运算时使用了 PyPI 中的 [`mod`][11] 模块,这个模块实现了各种模运算算法。
+国王日理万机,方便起见,他在做模运算时使用了 PyPI 中的 [mod][11] 模块,这个模块实现了各种模数运算算法。

他确认过,自己的秘密比 `P` 要短:

```
secret < P
```
```
TRUE
```
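「在域中可以自由进行加、减、乘、除」这一点可以不借助 mod 模块、只用内置的 `pow` 来验证:模质数 P 时,除以 a 等价于乘以 a 的模逆元(费马小定理)。以下纯属演示:

```python
P = 2**521 - 1          # 第 13 个梅森质数,与文中一致
a = 123456789

inv = pow(a, P - 2, P)  # 费马小定理:a**(P-2) 就是 a 在模 P 下的逆元
assert (a * inv) % P == 1

# 于是“b 除以 a”就是 b 乘以 a 的逆元
b = 987654321
assert (b * inv % P) * a % P == b
```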
将秘密转换为 `P` 的模,`mod P`:

```
secret = mod.Mod(secret, P)
```

为了使任意三个孩子掌握的片段就可以重建这个秘密,他还得生成另外两个部分,并混杂到一起:

```
polynomial = [secret]
for i in range(2):
    polynomial.append(Mod(int_from_bytes(urandom(16)), P))
len(polynomial)
```
```
3
```

-下一步就是在随机选择的点上计算某 [多项式][12] 的值,即计算 `polynomial[0] + polynomial[1]*x + polynomial[2]*x**2 ...`
+下一步就是在随机选择的点上计算某 [多项式][12] 的值,即计算 `polynomial[0] + polynomial[1]*x + polynomial[2]*x**2 ...`。

虽然有第三方模块可以计算多项式的值,但那并不是针对有限域内的运算的,所以,国王还得亲自操刀,写出计算多项式的代码:

```
def evaluate(coefficients, x):
    acc = 0
@@ -112,7 +111,6 @@ def evaluate(coefficients, x):

再下一步,国王选择五个不同的点,计算多项式的值,并分别交给五个孩子,让他们各自保存一份:

```
shards = {}
for i in range(5):
@@ -123,7 +121,6 @@ for i in range(5):

正如国王所虑,不是每个孩子都正直守信。其中有两个孩子,在他尸骨未寒的时候,就想从自己掌握的秘密片段中窥出些什么,但穷极所能,终无所获。另外三个孩子听说了这个事,合力将这两人永远驱逐:

```
del shards[2]
del shards[3]
@@ -131,17 +128,15 @@ del shards[3]

二十年弹指一挥间,奉先王遗命,三个孩子将合力恢复出先王的大秘密。他们将各自的秘密片段拼合在一起:

```
retrieved = list(shards.values())
```

然后是 40 天没日没夜的苦干。这是个大工程,他们虽然都懂些 Python,但都不如前国王精通。

最终,揭示秘密的时刻到了。

-用于反算秘密的代码基于 [拉格朗日差值][13],它利用多项式在 `n` 个非 0 位置的值,来计算其在 `0` 处的值。前面的 `n` 指的是多项式的阶数。这个过程的原理是,可以为一个多项式找到一个显示方程,使其满足:其在 `t[0]` 处的值是 `1`,在 `i` 不为 `0` 的时候,其在 `t[i]` 处的值是 `0`。因多项式值的计算属于线性运算,需要计算 _这些_ 多项式各自的值,并使用多项式的值进行插值:
+用于反算秘密的代码基于 [拉格朗日插值][13],它利用多项式在 `n` 个非 0 位置的值,来计算其在 `0` 处的值。前面的 `n` 指的是多项式的阶数。这个过程的原理是,可以为一个多项式找到一个显式方程,使其满足:其在 `t[0]` 处的值是 `1`,在 `i` 不为 `0` 的时候,其在 `t[i]` 处的值是 `0`。因多项式值的计算属于线性运算,需要计算 *这些* 多项式各自的值,并使用多项式的值进行插值:

```
from functools import reduce
@@ -163,14 +158,16 @@ def retrieve_original(secrets):

这代码实在是太复杂了,40 天能算出结果已经够快了。雪上加霜的是,他们只能利用五个秘密片段中的三个来完成这个运算,这让他们万分紧张:

```
retrieved_secret = retrieve_original(retrieved)
```

后事如何?

```
retrieved_secret == secret
```
```
TRUE
```

数学这个魔术的优美之处就在于它每一次都是那么靠谱,无一例外。国王的孩子们,曾经的孩童,而今已是壮年,足以理解先王的初衷,并以先王的锦囊妙计保卫了国家,并继之以繁荣昌盛!
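原文代码在差异视图中被截断了(`evaluate` 与 `retrieve_original` 的函数体都不完整)。下面按原文思路(3/5 门限、模 P 运算、拉格朗日插值)补全出一个可以运行的示意版本:变量名沿用原文,但函数体是推测性的,并用内置 `pow` 求模逆元代替 mod 模块:

```python
from os import urandom
from functools import reduce

P = 2**521 - 1  # 第 13 个梅森质数,同原文

def int_from_bytes(s):
    return int.from_bytes(s, "big")

secret = int_from_bytes("terrible secret".encode("utf-8")) % P

# 3/5 门限:2 次多项式,常数项就是秘密
polynomial = [secret] + [int_from_bytes(urandom(16)) % P for _ in range(2)]

def evaluate(coefficients, x):
    # 霍纳法则在模 P 下求多项式的值
    acc = 0
    for c in reversed(coefficients):
        acc = (acc * x + c) % P
    return acc

# 在 5 个不同的非零点上取值,分给五个孩子
shards = {x: evaluate(polynomial, x) for x in range(1, 6)}

# 两个孩子被驱逐
del shards[2]
del shards[3]

def retrieve_original(secrets):
    # 拉格朗日插值求多项式在 0 处的值;除法用费马小定理求模逆元
    x_s = list(secrets.keys())
    acc = 0
    for x_i, y_i in secrets.items():
        others = [x_j for x_j in x_s if x_j != x_i]
        num = reduce(lambda a, b: a * b % P, ((-x_j) % P for x_j in others), 1)
        den = reduce(lambda a, b: a * b % P, ((x_i - x_j) % P for x_j in others), 1)
        acc = (acc + y_i * num * pow(den, P - 2, P)) % P
    return acc

retrieved_secret = retrieve_original(shards)
assert retrieved_secret == secret
```

任意三个片段即可恢复秘密,少于三个则不行,这正是 (3,5) 门限方案的含义。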
@@ -179,13 +176,12 @@

现代,很多人都对类似的大秘密苦不堪言:密码管理器的主密码!几乎没有谁能有足够信任的人去完全托付自己最深的秘密,好消息是,找到至少有三个不会串通起来搞鬼的五人组不是个太困难的事。

-同样是在现代,比较幸运的是,我们不必再像国王那样自己动手分割要守护的秘密。拜现代 _开源_ 技术所赐,这都可以使用现成的软件完成。
+同样是在现代,比较幸运的是,我们不必再像国王那样自己动手分割要守护的秘密。拜现代 *开源* 技术所赐,这都可以使用现成的软件完成。

-假设你有五个不敢完全信任,但还可以有点信任的人:金、木、水、火、土。
+假设你有五个不敢完全信任,但还可以有点信任的人:张三、李四、王五、赵六和钱大麻子。

安装并运行 `ssss` 分割密钥:

```
$ echo 'long legs travel fast' | ssss-split -t 3 -n 5
Generating shares using a (3,5) scheme with dynamic security level.
@@ -197,27 +193,24 @@ Enter the secret, at most 128 ASCII characters: Using a 168 bit security level.
5-17da24ad63f7b704baed220839abb215f97d95f4f8
```

-这确实是个非常牛的主密码:`long legs travel fast`,绝不能把它完整的托付给任何人!那就把五个片段分别交给还比较可靠的伙伴,金、木、水、火、土:
+这确实是个非常牛的主密码:`long legs travel fast`,绝不能把它完整地托付给任何人!那就把五个片段分别交给还比较可靠的伙伴,张三、李四、王五、赵六和钱大麻子。

-  * 把 `1` 给金。
-  * 把 `2` 给木。
-  * 把 `3` 给水。
-  * 把 `4` 给火。
-  * 把 `5` 给土。
+  * 把 `1` 给张三。
+  * 把 `2` 给李四。
+  * 把 `3` 给王五。
+  * 把 `4` 给赵六。
+  * 把 `5` 给钱大麻子。

-然后,你开启你的惬意之旅,整整一个月,流连于海边温暖的沙滩,整整一个月,没碰过任何电子设备。没用多久,把自己的主密码忘到了九霄云外。
+然后,你开启你的惬意之旅,整整一个月,流连于海边温暖的沙滩,整整一个月,未接触任何电子设备。没用多久,把自己的主密码忘到了九霄云外。

-木和水也在旅行中,你托付给他们保管的密钥片段保存的好好的,在他们各自的密码管理器中,但不幸的是,他们和你一样,也忘了自己的 _主密码_。
+李四和王五也在和你一起旅行,你托付给他们保管的密钥片段保存得好好的,在他们各自的密码管理器中,但不幸的是,他们和你一样,也忘了自己的 *主密码*。

没关系。

-联系金,他保管的密钥片段是 `1-797842b76d80771f04972feb31c66f3927e7183609`;火,一直替你的班,很高兴你能尽快重返岗位,把自己掌握的片段给了你,`4-97c77a805cd3d3a30bff7841f3158ea841cd41a611`;土,收到你给的跑腿费才将自己保管的片段翻出来发给你,`5-17da24ad63f7b704baed220839abb215f97d95f4f8`。
+联系张三,他保管的密钥片段是 `1-797842b76d80771f04972feb31c66f3927e7183609`;赵六,一直替你的班,很高兴你能尽快重返岗位,把自己掌握的片段给了你,`4-97c77a805cd3d3a30bff7841f3158ea841cd41a611`;钱大麻子,收到你给的跑腿费才将自己保管的片段翻出来发给你,`5-17da24ad63f7b704baed220839abb215f97d95f4f8`。

有了这三个密钥片段,运行:

```
$ ssss-combine -t 3
Enter 3 shares separated by newlines:
@@ -227,7 +220,7 @@ Share [3/3]: 5-17da24ad63f7b704baed220839abb215f97d95f4f8
Resulting secret: long legs travel fast
```

-就这么简单,有了 _开源_ 技术加持,你也可以活的像国王一样滋润!
+就这么简单,有了 *开源* 技术加持,你也可以活得像国王一样滋润!

### 自己的安全不是自己一个人的事

@@ -239,8 +232,8 @@ via: https://opensource.com/article/20/6/python-passwords

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
-译者:[silentdawn-zz](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[silentdawn-zz](https://github.com/silentdawn-zz)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (JonnieWayy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12469-1.html)
[#]: subject: (What you need to know about Rust in 2020)
[#]: via: (https://opensource.com/article/20/1/rust-resources)
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)

2020 年关于 Rust 你所需要知道的
======

> 尽管许多程序员长期以来一直将 Rust 用于业余爱好项目,但正如许多有关 Rust 的热门文章所解释的那样,该语言在 2019 年吸引了主要技术公司的支持。

![](https://img.linux.net.cn/data/attachment/album/202007/31/001101fkh88966ktvvee99.jpg)

一段时间以来,[Rust][2] 在诸如 Hacker News 之类的网站上引起了程序员大量的关注。尽管许多人一直喜欢在业余爱好项目中[使用该语言][3],但直到 2019 年它才开始在业界流行,情况才真正开始有所转变。

在过去的一年中,包括[微软][4]、[Facebook][5] 和 [Intel][6] 在内的许多大公司都出来支持 Rust,许多[较小的公司][7]也注意到了这一点。2016 年,作为欧洲最大的 Rust 大会 [RustFest][8] 的第一任主持人,除了 Mozilla,我没见到任何一个人在工作中使用 Rust。三年后,似乎我在 RustFest 2019 所交流的每个人都在不同的公司的日常工作中使用 Rust,无论是作为游戏开发人员、银行的后端工程师、开发者工具的创造者或是其他的一些岗位。

在 2019 年,Opensource.com 也通过报道 Rust 日益增长的受欢迎程度而发挥了作用。万一你错过了它们,这里是过去一年里 Opensource.com 上关于 Rust 的热门文章。

### 《使用 rust-vmm 构建未来的虚拟化堆栈》

Amazon 的 [Firecracker][9] 是支持 AWS Lambda 和 Fargate 的虚拟化技术,它是完全使用 Rust 编写的。这项技术的作者之一 Andreea Florescu 在 《[使用 rust-vmm 构建未来的虚拟化堆栈][10]》中对 Firecracker 及其相关技术进行了深入探讨。

Firecracker 最初是 Google [CrosVM][11] 的一个分支,但是很快由于两个项目的不同需求而分化。尽管如此,在这个项目与其他用 Rust 所编写的虚拟机管理器(VMM)之间仍有许多得到了很好共享的通用片段。考虑到这一点,[rust-vmm][12] 项目起初是以一种让 Amazon 和 Google、Intel 和 Red Hat 以及其余开源社区去相互共享通用 Rust “crates” (即程序包)的方式开始的。其中包括 KVM 接口(Linux 虚拟化 API)、Virtio 设备支持以及内核加载程序。

看到软件行业的一些巨头围绕用 Rust 编写的通用技术栈协同工作,实在是很神奇。鉴于这种和其他[使用 Rust 编写的技术堆栈][13]之间的伙伴关系,到了 2020 年,看到更多这样的情况我不会感到惊讶。

### 《为何选择 Rust 作为你的下一门编程语言》

采用一门新语言,尤其是在有着建立已久技术栈的大公司,并非易事。我很高兴写了《[为何选择 Rust 作为你的下一门编程语言][14]》,文中讲述了微软是如何在许多其他有趣的编程语言没有被考虑的情况下考虑采用 Rust 的。

选择编程语言涉及许多不同的标准——从技术上到组织上,甚至是情感上。其中一些标准比其他的更容易衡量。比方说,了解技术变更的成本(例如适应构建系统和构建新工具链)要比理解组织或情感问题(例如高效或快乐的开发人员将如何使用这种新语言)容易得多。此外,易于衡量的标准通常与成本相关,而难以衡量的标准通常以收益为导向。这通常会导致成本在决策过程中变得越来越重要,即使这不一定就是说成本要比收益更重要——只是成本更容易衡量。这使得公司不太可能采用新的语言。

然而,Rust 最大的好处之一是很容易衡量其编写安全且高性能系统软件的能力。鉴于微软 70% 的安全漏洞是由于内存安全问题导致的,而 Rust 正是旨在防止这些问题的,而且这些问题每年都使公司付出了几十亿美元的代价,所以很容易衡量并理解采用这门语言的好处。

是否会在微软全面采用 Rust 尚待观察,但是仅凭着相对于现有技术具有明显且可衡量的好处这一事实,Rust 的未来一片光明。

### 2020 年的 Rust

尽管要达到 C++ 等语言的流行度还有很长的路要走,但 Rust 实际上已经开始在业界引起关注。我希望更多公司在 2020 年开始采用 Rust。Rust 社区现在必须着眼于欢迎开发人员和公司加入社区,同时确保将推动该语言发展到现在的一切都保留下来。

Rust 不仅仅是一个编译器和一组库,而是一群想要使系统编程变得容易、安全而且有趣的人。即将到来的这一年,对于 Rust 从业余爱好语言到软件行业所使用的主要语言之一的转型至关重要。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/rust-resources

作者:[Ryan Levick][a]
选题:[lujun9972][b]
译者:[JonnieWayy](https://github.com/JonnieWayy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ryanlevick
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: http://rust-lang.org/
[3]: https://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-languages
[4]: https://youtu.be/o01QmYVluSw
[5]: https://youtu.be/kylqq8pEgRs
[6]: https://youtu.be/l9hM0h6IQDo
[7]: https://oxide.computer/blog/introducing-the-oxide-computer-company/
[8]: https://rustfest.eu
[9]: https://firecracker-microvm.github.io/
[10]: https://opensource.com/article/19/3/rust-virtual-machine
[11]: https://chromium.googlesource.com/chromiumos/platform/crosvm/
[12]: https://github.com/rust-vmm
[13]: https://bytecodealliance.org/
[14]: https://opensource.com/article/19/10/choose-rust-programming-language
@@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12473-1.html)
[#]: subject: (Defining cloud native, expanding the ecosystem, and more industry trends)
[#]: via: (https://opensource.com/article/20/7/cloud-native-expanding-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

每周开源点评:定义云原生、拓展生态系统,以及更多的行业趋势
======

> 每周关注开源社区和行业趋势。

![](https://img.linux.net.cn/data/attachment/album/202007/31/235751f5zd9l3rejd2tjss.jpg)

我在一家采用开源软件开发模型的企业软件公司任高级产品营销经理,我的一部分职责是为产品营销人员、经理和其他相关人定期发布有关开源社区、市场和业界发展趋势的更新。以下是该更新中我和他们最喜欢的几篇文章。

### 《随着云原生计算的兴起,它和代码一样在改变文化》

- [文章链接][2]

> 现在是围绕一套云原生计算的共同原则进行行业整合的时候了,因为许多企业已经意识到,他们最初进入云计算的回报有限。国际数据公司去年的一项调查发现,[80% 的受访者曾将工作负载从公有云环境遣返到企业内部][3],平均而言,他们预计在未来两年内将一半的公有云应用转移到私有场所。

**分析**:在云端的第一次运行主要是大量的“提升和转移”尝试,以提取工作负载并将其投放到云端。第二次运行将涉及更多的工作,以确定转移什么以及如何转移,但随着开发人员对理所当然的事情越来越满意,最终应该会带来更多价值。

### 《为什么云原生基础设施的自动化是所有参与者的胜利》

- [文章链接][4]

> 开发的圣杯是创建和维护安全的应用程序,产生强大的投资回报率和满意的客户。但如果这种开发不是高效、高速和可扩展的,那么这个圣杯很快就会变得遥不可及。如果你发现自己对当前的基础设施有更高的期望,那么可能是时候考虑云原生了。它不仅可以检查所有这些机器,而且为云原生基础设施进行自动化可以提高效率和结果。

**分析**:我还要补充一点,如果没有大量的自动化,真正采用云原生方法是不可能的;涉及的移动部件数量太多,不可能用人的头脑来处理。

### 《Linkerd 案例研究:满足安全要求、减少延迟和从 Istio 迁移》

- [文章链接][5]

> 最后,Subspace 分享了其使用 Linkerd 提供“光速”多人游戏的经验。虽然在超低延迟环境中使用服务网格起初似乎有悖常理,但 Subspace 发现 Linkerd 的战略使用实际上降低了总延迟 —— 服务网格是如此轻巧,以至于它增加的最小延迟被它通过可观察性降低的延迟所掩盖。简而言之,Linkerd 的这一独特用例使 Subspace 在运营结果上获得了巨大的净收益。[阅读完整的用户故事][6]。

**分析**:我听说过这样一个观点:你并不能真正降低一个系统的复杂性,你只是把它抽象化,改变它的接触对象。似乎对延迟也有类似的观察:如果你仔细选择你接受延迟的地方,你可以因此减少系统中其他地方的延迟。

### 一位高层管理人员解释了 IBM 的“重要支点”,以赢得开发者、初创企业和合作伙伴的青睐,这是其从微软等竞争对手手中赢得混合云市场计划的一部分

- [文章链接][7]

> 蓝色巨人正在转向一个新的战略,专注于建立一个由开发者、合作伙伴和初创公司组成的生态系统。“我们的服务组织无法接触到所有客户。获取这些客户的唯一方法是激活一个生态系统。”

**分析**:越来越多的公司开始接受这样的理念:有些客户的问题,他们没有帮助就无法解决。也许这可以减少从每个单独客户身上赚到的钱,因为它扩大了更广泛地参与更多问题空间的机会。

希望你喜欢这个列表,下周再见。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/cloud-native-expanding-and-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://siliconangle.com/2020/07/18/cloud-native-computing-rises-transforming-culture-much-code/
[3]: https://www.networkworld.com/article/3400872/uptick-in-cloud-repatriation-fuels-rise-of-hybrid-cloud.html
[4]: https://thenewstack.io/why-automating-for-cloud-native-infrastructures-is-a-win-for-all-involved/
[5]: https://www.cncf.io/blog/2020/07/21/linkerd-case-studies-meeting-security-requirements-reducing-latency-and-migrating-from-istio/
[6]: https://buoyant.io/case-studies/subspace/
[7]: https://www.businessinsider.com/ibm-developers-tech-ecosystem-red-hat-hybrid-cloud-bob-lord-2020-7?r=AU&IR=T
@@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12471-1.html)
[#]: subject: (Beginner-friendly Terminal-based Text Editor GNU Nano Version 5.0 Released)
[#]: via: (https://itsfoss.com/nano-5-release/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

适于初学者的基于终端的文本编辑器 GNU Nano 5.0 版发布
======

> 开源文本编辑器 GNU nano 已经达到了 5.0 版本的里程碑。看看这个新版本带来了哪些功能。

Linux 上有很多[基于终端的文本编辑器][1]。像 Emacs 和 Vim 这样的编辑器需要经历陡峭的学习曲线和掌握一堆不寻常的键盘快捷键,但公认 GNU nano 更容易使用。

也许这就是为什么 Nano 是 Ubuntu 和许多其他发行版中默认的基于终端的文本编辑器的原因,而即将发布的 [Fedora 33 版本][2]也将把 Nano 设置为终端的默认文本编辑器。

### GNU nano 5.0 的新功能

![][3]

在 GNU nano 5.0 的[变更日志][4]中提到的一些主要亮点是:

  * `--indicator` 选项将在屏幕右侧显示一种滚动条,以指示视口在缓冲区中的位置和覆盖范围。
  * 可以用 `Alt+Insert` 键标记行,你可以用 `Alt+PageUp` 和 `Alt+PageDown` 键跳转到这些标记的行。
  * 执行命令提示符现在可以直接从主菜单中访问。
  * 在支持至少 256 种颜色的终端上,有新的颜色可用。
  * 新的 `--bookstyle` 模式,任何以空格开头的行都会被认为是一个段落的开始。
  * 用 `^L` 刷新屏幕现在在每个菜单中都可以使用。它还会将光标所在的行居中。
  * 可绑定函数 `curpos` 已经改名为 `location`,长选项 `--tempfile` 已经改名为 `--saveonexit`,短选项 `-S` 现在是 `--softwrap` 的同义词。
  * 备份文件将保留其组的所有权(如果可能的话)。
  * 数据会在显示 “……行写入” 之前同步到磁盘。
  * 增加了 Markdown、Haskell 和 Ada 语法的支持。

### 获取 GNU nano 5.0

目前 Ubuntu 20.04 中的 nano 版本是 4.8,而在这个 LTS 版本中,你不太可能在短时间内获得新版本。如果 Ubuntu 有新版本的话,你应该会通过系统更新得到它。

Arch 用户应该会比其他人更早得到它,就像往常一样。其他发行版也应该迟早会提供新版本。

如果你是少数喜欢[从源代码安装软件][5]的人,你可以从它的[下载页面][6]中获得。

如果你是新手,我强烈推荐这篇 [Nano 编辑器初学者指南][1]。

你喜欢这个新版本吗?你期待使用 Nano 5 吗?

--------------------------------------------------------------------------------

via: https://itsfoss.com/nano-5-release/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/nano-editor-guide/
[2]: https://itsfoss.com/fedora-33/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/Nano.png?ssl=1
[4]: https://www.nano-editor.org/news.php
[5]: https://itsfoss.com/install-software-from-source-code/
[6]: https://www.nano-editor.org/download.php
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12476-1.html)
[#]: subject: (How to configure an SSH proxy server with Squid)
[#]: via: (https://fedoramagazine.org/configure-ssh-proxy-server/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
@@ -10,13 +10,13 @@
如何使用 Squid 配置 SSH 代理服务器
======

-![][1]
+![](https://img.linux.net.cn/data/attachment/album/202008/01/162730tx0czx60xs6wz00c.jpg)

-有时你无法从本地连接到 SSH 服务器。还有时,你可能想为 SSH 连接添加额外的安全层。在这些情况下,通过代理服务器连接到另一台 SSH 服务器是一种解决方式。
+有时你无法从本地连接到 SSH 服务器。还有时,你可能想为 SSH 连接添加额外的安全层。在这些情况下,通过代理服务器连接到 SSH 服务器是一种解决方式。

-[Squid][2] 是提供缓存和代理服务的全功能代理服务器应用。通常通过在浏览过程中重用和缓存以前请求的网页来帮助缩短响应时间并减少网络带宽。
+[Squid][2] 是提供缓存和代理服务的全功能代理服务器应用。它通常用于在浏览过程中重用和缓存以前请求的网页来帮助缩短响应时间并减少网络带宽。

-但是在本篇中,你将配置 Squid 作为 SSH 代理服务器,因为它是易于配置的强大的受信任代理服务器。
+但是在本篇中,你将配置 Squid 作为 SSH 代理服务器,因为它是强大的受信任代理服务器,易于配置。

### 安装和配置

@@ -26,9 +26,9 @@
$ sudo dnf install squid -y
```

-squid 配置文件非常广泛,但是我们只需要配置其中一些。Squid 使用访问控制列表来管理连接。
+squid 配置文件非常庞大,但是我们只需要配置其中一些。Squid 使用访问控制列表来管理连接。

-编辑 _/etc/squid/squid.conf_ 文件,确保你有下面解释的两行。
+编辑 `/etc/squid/squid.conf` 文件,确保你有下面解释的两行。

首先,指定你的本地 IP 网络。默认配置文件已经列出了最常用的,但是如果没有,你需要添加你的配置。例如,如果你的本地 IP 网络范围是 192.168.1.X,那么这行会是这样:
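差异视图把紧随其后的配置示例截掉了。下面是一个示意片段,按标准 Squid 配置的常见写法给出(并非原文原样,网段请换成你自己的,以你系统中的实际文件为准):

```
# /etc/squid/squid.conf(节选,示意)
acl localnet src 192.168.1.0/24      # 定义你的本地 IP 网络
http_access allow localnet           # 允许来自该网络的连接
```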
@@ -58,9 +58,9 @@ $ sudo firewall-cmd --reload

### 测试 ssh 代理连接

-要通过 ssh 代理服务器连接到服务器,我们将使用 netcat。
+要通过 ssh 代理服务器连接到服务器,我们将使用 `netcat`。

-如果尚未安装 _nmap-ncat_,请安装它:
+如果尚未安装 `nmap-ncat`,请安装它:

```
$ sudo dnf install nmap-ncat -y
@@ -82,18 +82,10 @@ $ ssh user@example.com -o "ProxyCommand nc --proxy 192.168.1.63:3128 %h %p"

以下是这些选项的含义:

-  * _ProxyCommand_ – 告诉 ssh 使用代理命令。
-  * _nc_ – 用于建立与代理服务器连接的命令。这是 netcat 命令。
-  * **%**_h_ – 代理服务器的主机名或 IP 地址的占位符。
-  * **%**_p_ – 代理服务器端口号的占位符。
+  * `ProxyCommand` – 告诉 ssh 使用代理命令。
+  * `nc` – 用于建立与代理服务器连接的命令。这是 netcat 命令。
+  * `%h` – 代理服务器的主机名或 IP 地址的占位符。
+  * `%p` – 代理服务器端口号的占位符。
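这条一次性命令也可以固化到 `~/.ssh/config` 里,效果等价(示意片段,主机名与代理地址沿用上文示例):

```
# ~/.ssh/config(示意)
Host example.com
    # 经由 192.168.1.63:3128 上的 Squid 代理连接
    ProxyCommand nc --proxy 192.168.1.63:3128 %h %p
```

之后直接 `ssh user@example.com` 即可,无需每次手动加 `-o` 选项。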
有很多方法可以配置 SSH 代理服务器,但这是入门的简单方法。

@@ -104,7 +96,7 @@ via: https://fedoramagazine.org/configure-ssh-proxy-server/

作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,38 +1,34 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12480-1.html)
|
||||
[#]: subject: (Video Trimmer: A No-nonsense, Simple Video Trimming Application for Linux Desktop)
|
||||
[#]: via: (https://itsfoss.com/video-trimmer/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Video Trimmer:Linux 桌面中的简单实用视频修剪应用
|
||||
Video Trimmer:Linux 桌面中的傻瓜级的视频修剪应用
|
||||
======
|
||||
|
||||
_**简述:一个非常简单的工具,无需重新编码即可快速修剪视频。我们来看看它提供了什么。**_
|
||||
> 一个非常简单的工具,无需重新编码即可快速修剪视频。我们来看看它提供了什么。
|
||||
|
||||
你可能已经知道 Linux 的一些[最佳免费视频编辑器][1],但并不是每个人都需要它们提供的所有功能。
|
||||
|
||||
有时,你只想快速执行一项操作,例如修剪视频。
|
||||
|
||||
你是选择探索功能完善的视频编辑器但只是执行简单的修剪操作,还是希望使用便捷工具来修剪视频?
|
||||
有时,你只想快速执行一项操作,例如修剪视频。你是选择探索功能完善的视频编辑器但只是执行简单的修剪操作,还是希望使用便捷工具来修剪视频?
|
||||
|
||||
当然,这取决于你的个人喜好以及处理视频的方式。但是,对于大多数用户而言,首选是使用非常容易使用的修剪工具。
|
||||
|
||||
因此,我想重点介绍一个简单的开源工具,即 “[Video Trimmer][2]”,它可以快速修剪视频。
|
||||
因此,我想重点介绍一个傻瓜级的开源工具,即 “[Video Trimmer][2]”,它可以快速修剪视频。
|
||||
|
||||
![][3]
|
||||
|
||||
### Video Trimmer:一个用于快速修剪视频的简单应用
|
||||
### Video Trimmer:一个用于快速修剪视频的傻瓜应用
|
||||
|
||||
Video Trimmer 是一个开源应用,它可帮助你修剪视频片段而无需重新编码。
|
||||
Video Trimmer 是一个开源应用,它可帮助你修剪视频片段而无需重新编码。因此,基本上,你可以能够修剪视频而不会失去原始质量。
|
||||
|
||||
因此,基本上,你将能够修剪视频而不会失去原始质量。
|
||||
你要做的就是使用 Video Trimmer 打开视频文件,然后使用鼠标选择要修剪的时间区域。
|
||||
|
||||
你要做的就是使用 Video Trimmer 打开视频文件,然后使用鼠标选择要修剪的区域。
|
||||
|
||||
你可以手动设置时间范围进行修剪,也可以仅使用鼠标拖动区域进行修剪。当然,如果视频文件很长且你不知道在哪里查看,那么可能需要一段时间手动设置时间戳。
|
||||
你可以手动设置要修剪的时间范围,也可以仅使用鼠标拖动区域进行修剪。当然,如果视频文件很长,而且你不知道从哪里看,手动设置时间戳可能需要一段时间。
|
||||
|
||||
为了让你有个印象,请看下面的截图,看看在使用 Video Trimmer 时可用的选项:
|
||||
|
||||
@ -40,11 +36,11 @@ Video Trimmer 是一个开源应用,它可帮助你修剪视频片段而无需
|
||||
|
||||
### Installing Video Trimmer on Linux

Video Trimmer is only available as a Flatpak package on [Flathub][5]. So, you should be able to install it on any Linux distribution with Flatpak support without any issues.

In case you are not familiar with Flatpak, you may want to refer to our guide on [using and installing Flatpak][6].

- [Download Video Trimmer (Flathub)][5]
### Summing Up

Video Trimmer uses [ffmpeg][7] under the hood. What it does can be easily [done in the terminal][8] as well.
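Since Video Trimmer cuts without re-encoding, the terminal equivalent is a stream-copy cut with ffmpeg. As a rough sketch (the file names and timestamps below are made up, and the exact flags Video Trimmer passes internally are an assumption), such a command can be built like this:

```python
import shlex

def ffmpeg_trim_cmd(src, start, end, dst):
    """Build an ffmpeg command that keeps only [start, end] of src.

    '-c copy' copies the audio/video streams as-is (no re-encoding),
    which is why the trim is fast and the original quality is kept.
    """
    return ["ffmpeg", "-ss", start, "-to", end, "-i", src, "-c", "copy", dst]

cmd = ffmpeg_trim_cmd("input.mp4", "00:01:30", "00:02:45", "clip.mp4")
print(shlex.join(cmd))
# ffmpeg -ss 00:01:30 -to 00:02:45 -i input.mp4 -c copy clip.mp4
```

Note that with stream copy the cut points snap to keyframes; a frame-accurate cut would require re-encoding.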
For some reason, if you want to look for an alternative, you could also try [VidCutter][9]. Of course, you can always rely on the [top video editors available for Linux][10] (such as [OpenShot][11]) to trim a video as well as perform some advanced manipulations.
What do you think about using Video Trimmer on Linux? Do you have any other favorite video trimming tool? Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------

via: https://itsfoss.com/video-trimmer/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[7]: https://ffmpeg.org/
[8]: https://itsfoss.com/ffmpeg/
[9]: https://itsfoss.com/vidcutter-video-editor-linux/
[10]: https://linux.cn/article-10185-1.html
[11]: https://itsfoss.com/openshot-video-editor-release/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GNU Health expands Raspberry Pi support, Megadeth's guitarist uses open source principles, and more open source news.)
[#]: via: (https://opensource.com/article/20/6/news-june-23)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)

GNU Health expands Raspberry Pi support, Megadeth's guitarist uses open source principles, and more open source news.
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]

In this week's edition of our open source news roundup: GNU Health expands to Raspberry Pis, Megadeth's guitarist uses open source principles, and more open source news.

### GNU Health expands its support for Raspberry Pi

The GNU Health project, designed to help hospitals run on low-cost software and hardware, expanded its support for Raspberry Pi models in its recent release, [according to CNX][2]. The GNU Health Embedded version that runs on Raspberry Pis is "especially suited for remote areas without internet, academic Institutions, domiciliary units, home nursing, and laboratory stations."

> "GNU Health (GH) is a free and open-source Health and Hospital Information System (HIS) that can manage the internal processes of a health institution, such as financial management, electronic medical records (EMR), stock & pharmacies or laboratories (LIMS)."

GNU Health is a free and open source health and hospital information system (HIS) that helps healthcare systems manage finances, pharmacies, electronic medical records (EMRs), and more. The Raspberry Pi solution supports real-time monitoring of vital signs in hospitals and retrieving information from labs.

More details may be found on [the official website][3].

### Megadeth's guitarist brings OSS approaches to music

Heavy metal fans likely know Kiko Loureiro as Megadeth's guitarist. Loureiro is less known in the OSS world, but that might change soon: his new solo album is called _Open Source_.

"By definition, 'open source' is related to softwares [in] which the original source code is made freely available and may be redistributed and modified," Loureiro shared [in a recent interview][4]. "It brings us a higher sense of community, enhances our creativity and creates new possibilities."

In true open source fashion, Loureiro is running an Indiegogo fundraiser to [keep his album][5] independent. His fundraiser emphasizes the "Open Source Mentality," which includes making his songs' stems available for listeners to remix.

### The Linux Foundation partners with Harvard for a FOSS contributor security survey

The Linux Foundation's Core Infrastructure Initiative (CII) launched [a survey for FOSS contributors][6] addressing security concerns in open source. CII developed the survey in partnership with the Laboratory for Innovation Science at Harvard (LISH). FOSS contributors can [take the survey][7] through early August.

This new survey follows [the Census II analysis and report][8], which assessed popular FOSS components for vulnerabilities. David A. Wheeler, The Linux Foundation's director of open source supply chain security, said the survey is essential since open source solutions are used so widely now.

Along with its reports and surveys, CII built a [Best Practices badge program][9] that encourages developers to audit their solutions for security threats.

### In other news

* [OpenStack adds the StarlingX edge computing stack to its top-level projects][10]
* [OpenSAFELY is a new secure analytics platform for electronic health records in the NHS][11]
* [Linux Kernel 5.6 Reached End of Life, Upgrade to Linux Kernel 5.7 Now][12]

Thanks, as always, to Opensource.com staff members and [Correspondents][13] for their help this week.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/news-june-23

Author: [Lauren Maffeo][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://www.cnx-software.com/2020/06/15/gnu-health-embedded-open-source-health-platform-works-on-raspberry-pi-3-4-and-soon-olimex-sbcs
[3]: https://www.gnuhealth.org/#/embedded
[4]: https://www.blabbermouth.net/news/megadeths-kiko-loureiro-unveils-cover-art-for-open-source-solo-album/
[5]: https://www.indiegogo.com/projects/kiko-loureiro-new-open-source-album#/
[6]: https://www.linuxfoundation.org/blog/2020/06/linux-foundation-harvard-announce-free-libre-and-open-source-software-foss-contributor-survey/?SSAID=389818&sscid=61k4_isd0j
[7]: https://hbs.qualtrics.com/jfe/form/SV_enfu6tjRM0QzwQB
[8]: https://www.coreinfrastructure.org/programs/census-program-ii/
[9]: https://www.linuxfoundation.org/blog/2020/06/why-cii-best-practices-gold-badges-are-important/?SSAID=389818&sscid=61k4_isyv5
[10]: https://techcrunch.com/2020/06/11/openstack-adds-the-starlinkx-edge-computing-stack-to-its-top-level-projects/
[11]: https://opensafely.org/
[12]: https://9to5linux.com/linux-kernel-5-6-reached-end-of-life-upgrade-to-linux-kernel-5-7-now
[13]: https://opensource.com/correspondent-program
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The ultimate guide to contributing to open source, an unparallelled reliance on Linux, and more industry trends)
[#]: via: (https://opensource.com/article/20/6/linux-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

The ultimate guide to contributing to open source, an unparallelled reliance on Linux, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [How to Contribute to Open Source: The Ultimate Guide][2]

> "The biggest challenge for most people is that they don't identify their areas of interest and where they can help us. They come to the project and ask, 'How can I help?'" he said. "Instead, they could say, 'This is the skill set I'd like to achieve.' For example, 'I'd like to develop some specific functionality for this piece of the project.'"

**The impact**: Saying "I want to contribute to open source" is a bit like saying "I want to work in the not-for-profit sector". Open source is a means to an end, and there is almost certainly a project working toward the end you care about that could use the skills you have.

## [Vulnerability Scoring Struggles to Remain Viable in the Era of Cloud Native Computing][3]

> To this claim, Danen said, "It was designed to indicate the severity of a flaw relative to other flaws. Nowhere will you see it described, by FIRST who created it, as a means of assessing risk. So yes, reliable to describe the mechanics of a vulnerability, but wholly inadequate to describe the risk of the vulnerability to a particular organization or environment."

**The impact**: Using the [Common Vulnerability Scoring System][4] (CVSS) classification system for vulnerabilities is becoming more difficult. Non-experts will usually use a number that describes something in the easiest possible way. The challenge for experts is to make sure that the easiest possible way is also the right way.

## [The rise of parallel serverless compute][5]

> So why isn't everything fast, amazing, and running this way already? One of the challenging parts about this today is that most software is designed to run on single machines, and parallelization may be limited to the number of machine cores or threads available locally. Because this architecture & "serverless compute" is so new (_cough cough 2014_), most software is not designed to leverage this approach. I see this changing in the future as more become aware of this approach.

**The impact**: It is actually hard to think scalably, and it takes a lot of practice to mentally understand what can be done alongside other things and what has to be done sequentially.

## [From Earth to orbit with Linux and SpaceX][6]

> Ordinary? Yes, ordinary. You see, spacecraft CPUs are far from the newest and greatest. They're developed for spacecraft, which takes years -- even decades -- to go from the drafting board to launch. For example, the International Space Station (ISS) runs on 1988-vintage 20 MHz Intel 80386SX CPUs. We don't know, however, what chips the Falcon 9 uses. Chances are, though, their design is at least a decade older than what you'd buy at a Best Buy now.

**The impact**: If your time horizon is measured in decades, there is a good chance Linux is your best option for a stable operating system.

## [Why the Success of Edge Computing Relies on a Linux Legacy][7]

> For edge computing innovation, we need to be thinking more about how we create sustainable solutions and technologies given how many deployments will require a longer life cycle and are more tightly bound to hardware and equipment refreshes. The path of innovation leads from Linux to and through the network edge. Companies that follow this approach will be better positioned to leverage the promise and power of the edge while avoiding fragmentation and lock-in.

**The impact**: Edge devices can't (shouldn't?) be ephemeral; to get the value we're promised by cheap, always-on, always-monitoring, always-streaming devices, they really need to be reliable over time. Linux = sustainability.

_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/6/linux-industry-trends

Author: [Tim Hildred][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://builtin.com/software-engineering-perspectives/open-source-contribution
[3]: https://thenewstack.io/cvss-struggles-to-remain-viable-in-the-era-of-cloud-native-computing/
[4]: https://www.first.org/cvss/
[5]: https://davidwells.io/blog/rise-of-embarrassingly-parallel-serverless-compute
[6]: https://www.zdnet.com/article/from-earth-to-orbit-with-linux-and-spacex/
[7]: https://devops.com/why-the-success-of-edge-computing-relies-on-a-linux-legacy/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Balloon-powered internet service goes live in Kenya)
[#]: via: (https://www.networkworld.com/article/3566295/balloon-powered-internet-service-goes-live-in-kenya.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Balloon-powered internet service goes live in Kenya
======
Alphabet spinout Loon uses balloons to create a floating network of cell towers.
[Loon][1]

ISP [Telkom Kenya][2] is launching the first commercially available 4G LTE service using balloons that act as a network of cell towers floating in the stratosphere.

The service will initially cover approximately 19,000 square miles in Kenya, according to Alastair Westgarth, CEO of [Loon][1], a spinout of Alphabet and the underlying technology provider. Roughly 35 or more balloons will comprise the fleet, moving continually, drifting in the stratosphere about 12 miles above the surface of the earth, Westgarth said in an article [on Medium][3]. "We refer to Loon as a floating network of cell towers," Westgarth said.

Kenya is underserved by traditional internet, which is why this delivery mechanism is appropriate, said Mugo Kibati, Telkom Kenya's CEO, in a [press release][4]. "… the Internet-enabled balloons will be able to offer connectivity to the many Kenyans who live in remote regions that are underserved or totally unserved, and as such remain disadvantaged," Kibati said. Telemedicine and online education are two expected use cases.

In testing, Loon achieved a downlink speed of 18.9 Mbps with 19 milliseconds latency and an uplink speed of 4.74 Mbps. Westgarth said the service is capable of being used for "voice calls, video calls, YouTube, WhatsApp, email, texting, web browsing" and other applications.

In the bigger picture, internet service delivery from the stratosphere is an attractive proposition for [IoT][5]. At altitude, network coverage footprints can be more widespread, and coverage can be shifted as demand changes—a mining area moves, for example. In addition, there's less ground-based infrastructure to build out or deal with; developers can avoid the hassle of private property easements required for laying cables, for example.

Service outages are conceivably more controllable, too. A provider could launch another device instead of having to trace faults through elaborate, remote, ground infrastructure. Backup balloons could be staged, waiting to be placed into service.

### Drone-based internet delivery

Another organization that's exploring the atmospheric internet is Softbank, which calls its 260-foot-wide HAWK30 drones a "floating base station in the stratosphere." (See related story: [SoftBank plans drone-delivered IoT and internet by 2023][6])

One reason the major Japan telco is interested in stratosphere-delivered internet is because the archipelago is prone to natural disasters, such as earthquakes. Floating base stations above Earth can be more easily moved than traditional cell towers, enabling a quicker, more flexible response to natural disasters.

Loon's balloons have, in fact, been used successfully to deliver internet service following a disaster: Loon provided connectivity after Puerto Rico's Hurricane Maria in 2017, for example.

Westgarth said Loon's balloons have come a long way since initial development. Launching is now performed by automated devices that can propel the ground-station-linked balloons to 60,000 feet once every half-hour, as opposed to by hand, as was done in earlier days.

Machine-learning algorithms handle navigation to attempt to provide sustained service to users. That's not always possible, however, because wind (although not as excessive as can be found at ground level) and restricted airspace can affect coverage despite what Westgarth calls a "carefully choreographed and orchestrated balloon dance."

Plus, the devices are solar powered, which means they only function and provide internet (or reposition themselves, or feed internet to other balloons) during daylight hours. For that and other reasons, Westgarth and Kibati have pointed out that the balloons must augment existing infrastructure and plans—they're not a total solution.

"To connect all the people and things that are demanding it now and into the future, we need to expand our thinking; we need a new layer to the connectivity ecosystem," Westgarth said.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3566295/balloon-powered-internet-service-goes-live-in-kenya.html

Author: [Patrick Nelson][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://loon.com/
[2]: https://www.telkom.co.ke/
[3]: https://medium.com/loon-for-all/loon-is-live-in-kenya-259d81c75a7a
[4]: https://telkom.co.ke/telkom-and-loon-announce-progressive-deployment-loon-technology-customers-july
[5]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[6]: https://www.networkworld.com/article/3405170/softbank-plans-drone-delivered-iot-and-internet-by-2023.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 ways I contribute to open source as a Linux systems administrator)
[#]: via: (https://opensource.com/article/20/7/open-source-sysadmin)
[#]: author: (Elizabeth K. Joseph https://opensource.com/users/pleia2)

4 ways I contribute to open source as a Linux systems administrator
======
You don't have to be a coder to make valuable contributions to the open source community; here are some other important roles you can fill as a sysadmin.
![open source button on keyboard][1]

I recently participated in The Linux Foundation Open Source Summit North America, held virtually June 29-July 2, 2020. In the course of that event, I had the opportunity to speak with a fellow attendee about my career in Linux systems administration and how it had led me to a career focused on [open source][2]. Specifically, he asked, how does a systems administrator who doesn't do a lot of coding participate in open source projects?

That's a great question!

A lot of focus in open source projects is placed on the actual code, but there's a lot more to it than that. The following are some ways that I've been deeply involved in open source projects, without writing code.

### Improving documentation

I got my official start in open source by rewriting a quickstart guide for a project I used heavily. We spend most of our time using software in production and at scale. We routinely run into configuration gotchas and edge cases, and we often privately develop best practices for managing services effectively.

Inevitably, we run into things that aren't documented, have out-of-date documentation, or need improvements to the documentation to be made. This is a great opportunity! The developers and documentation writers are often unaware of these issues, and you have the key to solving them. Typically it starts with a bug report to the documentation project, but if you know the answer, you can often submit a patch to the documentation to improve it.

### Contributing "recipes"

We often spend too much time reinventing the wheel when we're launching common services. I remember my early days of slogging through MySQL configuration files to figure out the best settings for the databases for a particular customer. Today, a lot of that has been simplified, allowing us to use Ansible playbooks, Puppet modules, and more to get a basic configuration going. This is a place where you can contribute! Whether it's an official "recipe" you contribute to the appropriate hub or a sample rundown of your configuration or architecture diagram of Logstash, sharing your expertise in the form of examples can be incredibly helpful to others who are facing the same configuration challenges.

### Hosting project resources

I spent part of my career as a full-time systems administrator, directly working on hosting project resources for OpenStack, an infrastructure that is fully open source—every config file and Puppet change is done through public code review and tracked in a public Git repository. There are several projects out there that host their infrastructures in an open source manner, many of which are listed on the [Open Source Infrastructure (#openinfra) homepage][3]. These range from KDE and Debian to the Apache Software Foundation. In these communities, external participants can submit improvements to the infrastructure as their time and expertise allow. Since a lot of this is peer-reviewed, it's also a nice opportunity to build your skills in areas you may not be strictly focused on at work.

I've also done work on specific projects where the need was not broadcasted but was clear once I joined the community. For instance, one of my Linux communities needed a place to host a development website environment so we could try out new plugins and features outside of our production environment. We also found that giving shell accounts to participants was a valuable way to make sure they were always connected to IRC and had a sandbox beyond their own desktop. I now manage two virtual servers for this project to address these needs and have built up my own little systems team inside the project, so I'm not the only administrator.

### Supporting your fellow users

As someone who is using software in production, your operational experience is essential to a thriving support outlet, so don't be shy. Participation in user forums, mailing lists, and chat may seem like something that only experts can do, but regardless of your level, you will always have more experience than someone who just started out. A newcomer to the space can help out with simple questions and give the more experienced participants the energy to answer more complicated questions. The more experience you gain, the more involved you can get in the community.

### Be a better sysadmin by contributing

Whatever way you decide to participate, the value gained from contributing to open source projects as a [systems administrator][4] cannot be overstated. Your contributions will be noticed by members of the community, and often result in opportunities to chat on the latest project podcast, sit for an interview on the project blog, or speak at an event. All of these things raise your profile in the project as someone who is knowledgeable about the technology. You can also point to your public expertise when you're interviewing for your next role; having a public track record of giving advice in a project where a company is looking for expertise is a huge vote in your favor.

Finally, I've also found participating in open source projects to be tremendously valuable on a personal level. I feel good about contributing to the community, and it's rewarding to know that your expertise is valuable to folks outside the walls of your organization.

Looking for a place to start? Find the communities behind the open source technology you already use and love. Or, if you're looking for a place to [write][5], you've found it here at Opensource.com.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/open-source-sysadmin

Author: [Elizabeth K. Joseph][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/pleia2
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/resources/what-open-source
[3]: https://opensourceinfra.org/
[4]: https://opensource.com/article/19/7/be-a-sysadmin
[5]: https://opensource.com/how-submit-article
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Role Of SPDX In Open Source Software Supply Chain)
[#]: via: (https://www.linux.com/audience/developers/role-of-spdx-in-open-source-software-supply-chain/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)

Role Of SPDX In Open Source Software Supply Chain
======

Kate Stewart is a Senior Director of Strategic Programs, responsible for the Open Compliance program at the Linux Foundation, encompassing SPDX, OpenChain, and Automating Compliance Tooling related projects. In this interview, we talk about the latest release and the role it's playing in the open source software supply chain.

*Here is a transcript of our interview.*

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us, once again, Kate Stewart, Senior Director of Strategic Programs at Linux Foundation. So let's start with SPDX. Tell us, what's new going on in there in this specification?

Kate Stewart: Well, the SPDX specification just a month ago released 2.2, and what we've been doing with that is adding in a lot more features that people have been wanting for their use cases, more relationships, and then we've been working with the Japanese automotive makers who've been wanting to have a light version. So there's lots of really new technology sitting in the SPDX 2.2 spec. And I think we're at a stage right now where it's good enough that there's enough people using it, we want to probably take it to ISO. So we've been re-formatting the document and we'll be starting to submit it into ISO so it can become an international specification. And that's happening.

Swapnil Bhartiya: Can you talk a bit about whether there is anything additional that was added to the 2.2 specification? Also, I would like to talk about some of the use cases, since you mentioned the automakers. But before that, I just want to talk about anything new in the specification itself.

Kate Stewart: So in the 2.2 specification, we've got a lot more relationships. People wanted to be able to handle some of the use cases that have come up from containers now. And so they wanted to be able to start to be able to express that and specify it. We've also been working with the NTIA. Basically, they have software bill of materials (SBoM) working groups, and SPDX is one of the formats that's been adopted. And their framing group has wanted to see certain features so that we can specify known unknowns. So that's been added into the specification as well.

And then there's how you can actually capture notices, since that's something that people want to use. The licenses call for it and we didn't have a clean way of doing it, and so some of our tool vendors basically asked for this. Not the vendors, I guess they are partners; there are open source projects that wanted to be able to capture this stuff. And so we needed to give them a way to help.

We're very much focused right now on making sure that SPDX can be useful in tools and that we can get the automation happening in the whole ecosystem. You know, be it when you build a binary to ship to someone or to test, you want to have your SBoM. When you've downloaded something from the internet, you want to have your SBoM. When you ship it out to your customer, you want to be able to be very explicit and clear about what's there, because you need to have that level of detail so that you can track any vulnerabilities.
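For readers who have not seen one, here is a minimal, hypothetical fragment of an SPDX 2.x tag-value document describing a single package; every name and value below is invented for illustration:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdxdocs/example-app-1.0
Creator: Tool: example-sbom-generator-0.1
Created: 2020-07-01T12:00:00Z

PackageName: example-app
SPDXID: SPDXRef-Package-example-app
PackageVersion: 1.0.0
PackageDownloadLocation: https://example.com/example-app-1.0.0.tar.gz
PackageLicenseConcluded: MIT
PackageLicenseDeclared: MIT
```

A real SBoM would list every package and file shipped, plus the relationships between them, which is what makes vulnerability tracking possible.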
Because right now about, I guess, 19… I think there was a stat from earlier in the year from one of the surveys. And I can dig it up for you if you’d like, but I think 99% of all the code that was scanned by Synopsys last year had open source in it. And of which it was 70% of that whole build materials was open source. Open source is everywhere. And what we need to do is, be able to work with it and be able to adhere to the licenses, and transparency on the licenses is important as is being able to actually know what you have, so you can remediate any vulnerabilities.
|
||||
|
||||
Swapnil Bhartiya: You mentioned a couple of things there. One was, you mentioned tooling. So I’m kind of curious, what sort of tooling that is already there? Whether it’s open source or open source be it basically commercialization that worked with the SPDX documents.
|
||||
|
||||
Kate Stewart: Actually, I’ve got a document that basically lists all of these tools that we’ve been able to find, and more are popping up as the day goes by. We’ve got common tools. Some of the Linux Foundation projects are certainly working with it. FOSSology, for instance, is able to both consume and generate SPDX. So if you’ve got an SPDX document and you want to pull it in and cross-check it against your sources to make sure it’s matching and no one’s tampered with it, the FOSSology tool can let you do that pretty easily, and there’s code out there that can generate SPDX.

Free Software Foundation Europe has a lint tool in their REUSE project that will basically generate an SPDX document if you’re using the IDs. There’s actually a whole bunch more. Like I say, I’ve got a document with a list of about 30 to 40, and obviously the SPDX tools are there. We’ve got a free online validator. So if someone gives you an SPDX document, you can paste it into this validator, and it’ll tell you whether it’s a valid SPDX document or not. And we’re continuing to improve it.

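As a rough illustration (mine, not taken from the interview), a minimal SPDX 2.x tag-value document, the kind of file such a validator checks, looks something like this; the package name and namespace URL are made-up placeholders:

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-package-sbom
DocumentNamespace: https://example.com/spdxdocs/example-package-1.0
Creator: Tool: example-generator
Created: 2020-07-31T00:00:00Z

PackageName: example-package
SPDXID: SPDXRef-Package
PackageVersion: 1.0
PackageDownloadLocation: NOASSERTION
FilesAnalyzed: false
PackageLicenseConcluded: MIT
PackageLicenseDeclared: MIT
PackageCopyrightText: NOASSERTION
```

The document-level fields identify the SBoM itself, and each package (or file) section carries the license and provenance facts that downstream consumers ingest.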
I’m finding also some tools that are emerging, one of which is decodering, which we’ll be bringing under the ACT (Automated Compliance Tooling) umbrella soon, and which is looking at transforming between SPDX and SWID tags, another format that’s commonly in use. And so we have tooling emerging, and we’re making sure that what we’ve got with SPDX is usable for tool developers; we’ve got libraries right now for SPDX to help them in Java, Python and Go. So hopefully we’ll see more tools come in, and they’ll be generating SPDX documents, and people will be able to share this stuff and make it automatic, which is what we need.

Another good tool, I can’t forget this one, is Tern. What Tern does is sit there and decompose a container, and it will let you know the bill of materials inside that container. And another one that’s emerging, that we’ll hopefully see more of soon, is something called OSS Review Toolkit, which goes into your build flow. It goes in when you work with it in your system, and then as you’re doing builds, you’re generating your SBoMs and you’re having accurate information recorded as you go.

As I said, all of this sort of thing should be in the background; it should not be a manual, time-intensive effort. When we started this project 10 years ago, it was, and we wanted to get it automated. And I think we’re finally getting to the stage where there’s enough tooling out there and there’s enough of an ecosystem building that we’ll get this automation to happen.

This is why getting the specification to ISO matters: it’ll make it easier for people in procurement to specify that they want to see an SPDX document to complement the product that they’re being given, so that they can ingest it, manage it and so forth. Being able to say it’s an ISO standard makes things a lot easier in the procurement departments.

OpenChain recognized that we needed to do this, and so they went through and… OpenChain is actually the first specification we’re taking through to ISO. But we’re taking SPDX through as well, because once you say you need to follow the process, you also need a format. And so it’s very logical to make it easy for people to work with this information.

Swapnil Bhartiya: As you’ve worked with different players in different parts of the ecosystem, what are some of the pressing needs? Improved automation is one of those. What are some of the other pressing needs that you think the community has to work on?

Kate Stewart: Some of the other pressing needs that we need to be working on are more playbooks, more instructions, showing people how they can do things. You know, we figured it out: okay, here’s how we can model it, here’s how you can represent all these cases. This is all known in certain people’s heads, but we have not done a good job of expressing it to people so that it’s approachable for them and they can do it.

One of the things that’s kind of exciting right now is that the NTIA is having this working group on software bills of materials. It’s coming from the security side, but there are various proofs of concept going on with it, one of which is a healthcare proof of concept. There’s a group of about five to six medical device manufacturers that are generating SBoMs in SPDX and then handing them to hospitals to make sure the hospitals can ingest them.

And bringing people up to the level where they feel like they can do these things has been really eye-opening to me: how much we need to improve our handholding and improve the infrastructure to make it approachable. And this obviously motivates more people to get involved, from the vendors and commercial side as well as the open source side. But it wouldn’t have happened, I think, to a large extent for SPDX without this open source and without the projects that have adopted it already.

Swapnil Bhartiya: Now, just from the educational awareness point of view, if there’s an open source project, how can they easily create SBoM documents that use the SPDX specification with their releases and keep them synced?

Kate Stewart: That’s exactly what we’d love to see. We’d love to see the upstream projects basically generate SPDX documents as they’re going forward. So the first step is to use the SPDX license identifiers to make sure you understand what the licensing should be in each file, and ideally you can document it with these tags. Then there are three or four tools out there that will actually scan them and generate an SPDX document for you.

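To make that first step concrete, here is a minimal sketch (my own illustration, not one of the tools mentioned in the interview) of what the scanning side does: walk a source tree and pull the `SPDX-License-Identifier:` tag out of each file's header.

```python
import re
from pathlib import Path

# Projects tag each file with a short header comment such as:
#   # SPDX-License-Identifier: MIT
# License expressions may combine identifiers with AND / OR.
TAG = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+(?:\s+(?:AND|OR)\s+[\w.+-]+)*)")

def scan_tree(root: str) -> dict:
    """Map each file under `root` to its declared SPDX license expression."""
    results = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            head = path.read_text(errors="ignore")[:2048]  # tags live near the top
        except OSError:
            continue
        match = TAG.search(head)
        results[str(path)] = match.group(1) if match else "NOASSERTION"
    return results

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        Path(d, "a.py").write_text("# SPDX-License-Identifier: MIT\nprint('hi')\n")
        Path(d, "b.py").write_text("print('untagged')\n")
        print(sorted(scan_tree(d).values()))
```

A real generator would then emit a full SPDX document from these per-file conclusions; untagged files surface as `NOASSERTION`, which is exactly what a lint pass flags for you to fix.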
If you’re working at the command line, the REUSE lint tool that I was mentioning from Free Software Foundation Europe will work very fast and quickly with what you’ve got. And it’ll also help you make sure you’ve got all your files tagged properly.

If you haven’t done all the tagging exercise and you wonder [inaudible 00:09:40] what you’ve got, ScanCode works at the command line, and it’ll give you that information as well. And then if you want to start working in a larger system where you want to store results, look at things over time, and have some state behind it all, so there will be different versions of things over time, FOSSology will remember from one version to another and will help you create these [inaudible 00:10:01] off of bills of materials.

Swapnil Bhartiya: Can you talk about some of the new use cases that you’re seeing now, which maybe you did not expect earlier and which also shows how the whole community is actually growing?
Kate Stewart: Oh yeah. Well, when we started the project 10 years ago, we didn’t understand containers. They weren’t even on people’s radar. And there’s a lot of information sitting in containers. We’ve had some really good talks over the last couple of years that illustrate the problems. There was a report put out from the Linux Foundation by Armijn Hemel that goes into the details of what’s going on in containers and some of the concerns.

So being able to get on top of automating what’s going on inside a container, what you’re shipping, and knowing you’re not shipping more than you need to, and figuring out how we can improve these sorts of things, is certainly an area that was not initially thought about.

We’ve also seen tremendous interest in what’s going on in the IoT space, where you need to really understand what’s going on in your devices when they’re being deployed in the field, and to know whether a vulnerability is effectively going to break it, or whether you can recover, things like that. Over the last 10 years we’ve seen a tremendous spectrum of things we just didn’t anticipate. And the nice thing about SPDX is, if you’ve got a use case that we’re not able to represent and we can’t tell you how to do it, just open an issue, and we’ll start trying to figure it out and see whether we need to add fields in for you, or things like that.

Swapnil Bhartiya: Kate, thank you so much for taking your time out and talking to me today about this project.

--------------------------------------------------------------------------------

via: https://www.linux.com/audience/developers/role-of-spdx-in-open-source-software-supply-chain/

作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SODA Foundation: Autonomous data management framework for data mobility)
[#]: via: (https://www.linux.com/audience/developers/soda-foundation/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)

SODA Foundation: Autonomous data management framework for data mobility
======

SODA Foundation is an open source project under the Linux Foundation that aims to establish an open, unified, and autonomous data management framework for data mobility from the edge, to core, to cloud. We talked to Steven Tan, SODA Foundation Chair, to learn more about the project.

_Here is a transcript of the interview:_
Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us Steven Tan, chair of the SODA foundation. First of all, welcome to the show.
Steven Tan: Thank you.
Swapnil Bhartiya: Tell us a bit about what SODA is.

Steven Tan: The foundation is actually a collaboration among vendors and users focused on, how do you call it, autonomous data management. And the point of this whole thing is: how do we serve the users? A lot of our users are facing a lot of data challenges, and that’s what this foundation is for: to get users and vendors together to help address these data challenges.

Swapnil Bhartiya: What kind of data are we talking about?
Steven Tan: The data that we’re talking about refers to anything like data protection, data governance, data replication, data copy management and stuff like that. And also data integration: how to connect the different data silos and so on.

Swapnil Bhartiya: Right. But are we talking about enterprise data or consumer data? Like, there is a lot of data with Facebook, Google, and Gmail, and then there is a lot of enterprise data, which companies … Sorry, as an enterprise, I might put something on this cloud or I can put it on that cloud. So can you please clarify what data we are talking about?

Steven Tan: Actually, the data that we’re talking about … it depends on the users. There are all kinds of data. For example, in the keynote that I gave two days ago, the example I gave was from Toyota. The Toyota use case is actually car data, which refers to things like car sensor data, videos, map data and so on. And then we have users like China Unicom; they have enterprise companies going to the cloud and so on, so they have all kinds of enterprise data over there. And then we also have other users like Yahoo Japan, which has a website, so there the data we’re talking about is web data, consumer data and stuff like that. So it’s across the board.

Swapnil Bhartiya: Oh, so it’s not specific to an industry or any space or sector, okay. But why do you need it? What is the problem that you see in the market and in the current sphere that made you say, hey, we should create something like this?

Steven Tan: So the problem, I mean the reason why all these companies came together, is that they are building data centers from small to big, but a lot of the challenges they have are hard for a single project to address. It’s not like a business where we have a specific problem and then we need this to be solved and so on; it’s not like that. A lot of it is: how do you connect the different pieces in the data center together?

So there’s no organization like that that can help them solve this kind of problem. How do you address things like taking care of data protection and data privacy at the same time? And at the same time, you want to make sure that this data can be governed properly. There isn’t any single organization that can help take care of this kind of stuff, so we’re helping these users understand their problems, and then we come together and plan projects and roadmaps based on their problems, and try to address them through these projects in the SODA Foundation.

Swapnil Bhartiya: And you gave an example of data from cars and all these things. Does that also mean that open source has helped solve a lot of problems by breaking down silos, so that there’s a lot of interaction between different silos which were earlier separated and isolated? Today, as you mentioned, we are living in a data-driven world, no matter what we do, all the way from the Ring, to what we are doing right now talking to each other, to the product that we’ll create in the end. But most of this data is living in its own silos. There may be a lot of value in that data which cannot be extracted because, one, it is locked into the silos. The second problem is that these days data is kind of becoming the next oil. These companies are trying to capture all the data, irrespective of what value they see in that data today, because by leveraging machine learning and deep learning, they can in the future … So how do you look at that, and how is the SODA Foundation going to break those silos without compromising our privacy, yet allow companies … Because the fact is, as much as I prefer my privacy, I also want Google Maps to tell me the fastest route to where I want to go.

Steven Tan: Right. So I think there are different levels of privacy that we’re going to take care of. First of all, across different countries or different states or different provinces, there are different kinds of regulations and so on. So first of all, the data silos you talk about: yes, that’s one of the key problems that we’re trying to solve. How to connect all the different data silos so as to reduce fragmentation, then try to minimize the so-called dark data that you’re talking about, and then extract all the value over there. That’s one of the things that we try to get at here. We try to connect all the different pieces; the data may be sitting at the edge, in the data center or different data centers, and in the cloud. We try to connect all these pieces together.

That’s one of the first things that we try to do. And then we try to have data policies. I think this is a critical piece that a lot of the solutions out there don’t address. You have data policies, but they may be data policies just for a single vendor solution; once the data gets out of that solution, it is out of control. So what we’re trying to do here is ask: how do you have data policies across different solutions, so no matter where the data is, it’s governed the same way, consistently? That’s the key. Then you can talk about how you can really protect the data in terms of privacy, or govern the data, or control the data. And as I mentioned about the regions: you know where the data is, you know what kind of regulations need to be taken care of, and you apply them right there. That’s how it should work.

Swapnil Bhartiya: When we look at the kind of scenario you talked about, I see it as two-fold: one is a technology problem, and the second is a people problem. So is the SODA Foundation going to deal with both, or are you going to deal with just the technology aspect of it?

Steven Tan: For the technology part, we try to define the API and so on for all the data policies, and try to get as many companies to support this as possible. And then the next thing that we try to do is work with standards organizations to try to make this into a standard. I mean, that’s what we’re trying to do here.

And on the government aspects, there are certain organizations that we are talking to. There’s CESI, the China Electronics Standardization Institute, which we’re talking to about trying to work things into their … Actually, I’m not sure about China, because we don’t know about their sphere of influence within CESI and so on. And then for the industry standards, there’s [inaudible 00:09:05] and so on; we’re trying to work with them and trying to get it to work.

Swapnil Bhartiya: Can we talk about the ecosystem that you’re trying to build around the SODA Foundation? One part would be the participants who are actually contributing either code or vision, and then the user community who would actually be benefiting from it.

Steven Tan: The core part of the ecosystem that we are trying to build is actually the framework. For this part, it will be more the data vendors or the storage vendors that are involved in building the ecosystem. And then the outer part, what I call the outer part of the ecosystem, will be things like the platforms: Kubernetes, VMware, all these different vendors, and then the networking kind of stuff that you need to take care of, like big data analytics and so on.

And then for the users, as you can see from the SODA End User Advisory Committee, that’s where most of our users are participating in the communication. Most of these users are from different regions and different countries and different industries. Whichever participant is interested can take part in this. But the main thing is that even though they may be from different industries, most of the issues that they have are still the same. So there are some commonalities among all these users.

Swapnil Bhartiya: We are in the middle of 2020; because of COVID-19 everything has slowed down, things have changed. What do your roadmap and your plans look like? The structure, the governance and the plan for ’21, or for the end of the year?

Steven Tan: We are very, how do you call it, community-driven, a community-focused kind of organization. We hold a lot of meetups and events and so on where we get together the users and the vendors and the community in general. So with this COVID-19 thing, a lot of the plans have been upset; it’s in chaos right now. So, like everybody else, we are moving things online. We are having some webinars and such; even right now as we are talking, we have a mini summit going on at the Open Source Summit North America.

So for the rest of this year, most of our events will be online. We’re going to have some webinars and some meetups; you can find out about them from our website. The other plan we have is that we just shipped the SODA Faraday release, which is the 1.0 release. And through the end of this year, we’re going to have two more releases: the G release in September, and the H release at the end of the year. And we’re trying to engage our users with things like the POC testing for Faraday. For each release that we have, we try to get them to do the testing, and that’s their way of providing feedback to us: whether it works for them, or how we can improve to make the code work for what they need.

Swapnil Bhartiya: Awesome. So thank you so much for taking your time out and explaining more about SODA foundation, and I look forward to talking to you again because I can see that you have a very exciting pipeline ahead. So thank you.
Steven Tan: Thank you, thank you very much.

--------------------------------------------------------------------------------

via: https://www.linux.com/audience/developers/soda-foundation/

作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

76 sources/talk/20200731 Linux dominates supercomputing.md Normal file
@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux dominates supercomputing)
[#]: via: (https://www.networkworld.com/article/3568616/linux-dominates-supercomputing.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Linux dominates supercomputing
======

The Linux operating system runs all 500 of the world’s fastest supercomputers, which help to advance artificial intelligence, machine learning and even COVID-19 research.

(Image credit: Wikipedia)

In addition to all its other successes, Linux dominates supercomputing: All 500 of the world’s fastest supercomputers run Linux, and that has been the case since November 2017, according to the [TOP500][1] organization, which has been ranking the 500 most powerful computer systems since 1993. (A graph of Linux’s ascension is available [here][2].)

How did this happen?
### A little history
Linux began life in 1991 as the personal project of 21-year-old Finnish student Linus Torvalds. I first became aware of it several years later while working at Johns Hopkins University’s physics and astronomy department, where I managed the department’s network and a number of servers with the help of a couple of grad students.

At the time, I was intrigued by Linux but couldn’t imagine how dominant the OS would become simply because its source code was available to anyone who wanted to work with it. I could not foresee that a significant group of large companies would grasp its value, work together and innovate to make Linux what it is today. Intense collaboration was key to this success, including contributions from countless individuals and organizations, among them IBM, Intel, NVIDIA, Red Hat, Samsung, SUSE and many others. (Browsing the [corporate members of the Linux Foundation][4] is likely to make your jaw drop.)

The success of Linux can also be attributed to the fact that it is open source, non-proprietary, and extendable.
### The TOP500
Twice a year, in June and November, TOP500 releases a list of the 500 most powerful computer systems ranked by their performance on something called the [LINPACK Benchmark][5], which calls for the computer being tested to solve a dense system of linear equations.

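LINPACK itself is a Fortran/C benchmark suite; purely as an illustration of the operation it times, here is a toy Python sketch (my own, with made-up sizes) that solves a dense system Ax = b by Gaussian elimination and derives a FLOP/s figure from the roughly 2n^3/3 operations involved:

```python
import random
import time

def solve_dense(a, b):
    """Solve a dense n x n system Ax = b by Gaussian elimination
    with partial pivoting (the kind of work LINPACK-style runs time)."""
    n = len(b)
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

if __name__ == "__main__":
    n = 200  # real TOP500 runs use n in the millions
    a = [[random.random() + (10.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [random.random() for _ in range(n)]
    t0 = time.time()
    solve_dense([row[:] for row in a], b[:])
    elapsed = time.time() - t0
    flops = (2 / 3) * n ** 3 / elapsed  # the figure-of-merit style of calculation
    print(f"~{flops:,.0f} FLOP/s on this toy run")
```

The real benchmark runs highly optimized native code on enormous problem sizes; this sketch only shows the shape of the computation being measured.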
I have heard it said, though I have not been able to verify, that Linux runs on more than 90% of public clouds, more than 60% of embedded systems and IoT devices, as much as 99% of supercomputers and more than 80% of smartphones. If these claims are even close to the truth, it attests to the success and versatility of Linux.

In [the most recent TOP500 ranking][6], a Japanese supercomputer named [Fugaku][7] (derived from an alternate name for Mount Fuji) has taken the top spot and pushed the former leaders down a rank. Fugaku was co-developed by Riken and Fujitsu and uses Fujitsu's 48-core A64FX ARM chip. This is the first time a computer based on ARM processors has topped the list.

The computer was fully assembled only in May but has already helped fight COVID-19 by sorting through more than 2,000 drugs that might effectively block the virus and found a dozen that show promise.
### Containerization, AI and ML
Recently I had a chance to discuss Linux with Stefanie Chiras, vice president and general manager of the Red Hat Enterprise Linux business unit. She sees Linux as tightly linked to supercomputing because it provides the scale and flexibility to support high-performance computing and exascale computing systems, those that are capable of calculating at least 10^18 floating point operations per second (1 exaFLOPS). She also sees Linux as adding to the ongoing development of artificial intelligence and machine learning.

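As a back-of-the-envelope illustration (my own, with a made-up problem size), the definition of one exaFLOPS turns into a quick calculation:

```python
EXAFLOPS = 10 ** 18  # 1 exaFLOPS = 10^18 floating point operations per second

# A dense solve of an n x n linear system costs roughly (2/3) * n^3 operations.
n = 10_000_000  # a hypothetical ten-million-equation system
ops = (2 / 3) * n ** 3

seconds = ops / EXAFLOPS
print(f"{ops:.3e} operations -> about {seconds / 60:.0f} minutes at 1 exaFLOPS")
```

The point is only the scale: workloads that are utterly intractable at desktop speeds become minutes of machine time at exascale.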
Chiras expects that containerization will enable more and more researchers and analysts to benefit from supercomputing power. And, as someone who has provided support to both scientists and analysts over the past few decades, I can appreciate what a difference this will make in their work.
In discussing supercomputers, Chiras pointed out that, beyond having a Linux operating system, the top three fastest – Fugaku, Summit and Sierra – are all built from commercial hardware. Summit and Sierra are Power Systems-based, while Fugaku is the first Arm-powered supercomputer to top the list. The days of purpose-built, custom hardware and software in supercomputing could be over even though new and demanding workloads like AI and complex modeling require greater and greater power.
(Chiras also wasn’t shy about mentioning that [Red Hat Enterprise Linux][8] is running on four of the top 10 supercomputers: the top three plus number nine, [Marconi-100][9].)
The character of open source and the willingness of many companies to recognize its value and work together to develop it have made Linux the top OS for both supercomputers and micro-devices. We can expect continued improvements in how the OS is deployed – including getting supercomputing into the hands of a lot more scientists and engineers – as we move forward. As a person who has spent close to 40 years working with Unix and Linux systems, I couldn’t be more pleased.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3568616/linux-dominates-supercomputing.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: http://TOP500.org
[2]: https://www.top500.org/statistics/details/osfam/1/
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.linuxfoundation.org/membership/members/
[5]: https://www.top500.org/project/linpack
[6]: https://www.networkworld.com/article/3563766/the-10-fastest-supercomputers-are-led-by-one-28x-faster-than-the-rest.html
[7]: https://www.fujitsu.com/global/about/innovation/fugaku/
[8]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[9]: https://www.hpc.cineca.it/hardware/marconi100
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

105 sources/talk/20200731 Why we open sourced our Python platform.md Normal file
@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why we open sourced our Python platform)
[#]: via: (https://opensource.com/article/20/7/why-open-source)
[#]: author: (Meredydd Luff https://opensource.com/users/meredydd-luff)

Why we open sourced our Python platform
======

Open source development philosophy makes Anvil's entire solution more useful and trustworthy.

![neon sign with head outline and open source why spelled out][1]

The team at Anvil recently open sourced the [Anvil App Server][2], a runtime engine for hosting web apps built entirely in Python.
The community reaction has been overwhelmingly positive, and we, at Anvil, have already incorporated lots of that feedback into our [next release][3]. But one of the questions we keep getting asked is, "Why did you choose to open source such a core part of your product?"
### Why we created Anvil
[Anvil][4] is a tool that makes it as simple as possible to build a web app. We do that by enabling you to build the whole application in one language—Python.
Before Anvil, if you wanted to build a web app, you'd have to write code using a bunch of technologies like HTML, JavaScript, CSS, Python, SQL, React, Redux, Bootstrap, Sass, Webpack, etc. That's a lot to learn. And that's just for a simple app; trust me, it can get [way more complicated][5].

![A complex framework of development tools needed for a simple web app][6]
Yes, really. All of these, for a simple web app.
But even then, you're not done! You need to know all about Git and cloud hosting providers, how to secure the (most-likely) Linux operating system, how to tune the database, and then you're on call to keep it running. Forever.
So, instead, we built Anvil, an online IDE where you can build your UI with a [drag-and-drop designer][7] and write all your [logic in Python][8], then Anvil takes care of the rest. We replace that whole teetering stack with "just write Python."
### Simple web hosting is important, but not enough
Anvil can also host your apps for you. And why not? There is so much complexity in deploying a web app, so running our own cloud hosting service was the only way to provide the simplicity we need. Build an app in the Anvil editor, [click a button][9], and it's live on the Internet.
But we kept hearing from people who said, "That's great, but…"
  * "I need to run this on an offshore platform without reliable internet access."
  * "I want to package my app into an IoT device I sell."
  * "If I'm putting my eggs in this basket, how can I be sure I can still run my app in ten years?"

These are all good points! A cloud service isn't the right solution for everyone. If we want to serve these users, there's got to be some way for them to get their apps out of Anvil and run them locally, under their own complete control.
### Open source is an escape hatch, not an ejector seat
At conferences, we sometimes get asked, "Can I export this as a Flask+JS app?" Sure, it would be possible to export an Anvil project into its respective Python and JavaScript: we could generate a server package, compile the client-side Python to JavaScript, and spit out a classic web app. But it would have serious drawbacks, because: **code generation is an ejector seat.**

![Code generation is an ejector seat from a structured platform][10]
|
||||
|
||||
([Image][11] licensed as public domain)
|
||||
|
||||
Generated code is better than nothing; at least you can edit it! But the moment you've edited that code, you've lost all the benefits of the system that generated it. If you're using Anvil because of its [drag-and-drop editor][12] and [Python in the browser][13], why should you have to use vim and Javascript in order to host your app locally?
|
||||
|
||||
We believe in [escape hatches, not ejector seats][14]. So we did it the right way—we [open-sourced Anvil's runtime engine][2], which is the same code that serves your app in our hosted service. It's a standalone app; you can edit your code with a text editor and run it locally. But you can also `git push` it right back into our online IDE. It's not an ejector seat; there's no explosive transition. It's an escape hatch; you can climb out, do what you need to do, and climb right back in.
|
||||
|
||||
### If it's open, is it reliable?
|
||||
|
||||
A seeming contradiction in open source is that its free availability is its strength, but also sometimes creates a perception of instability. After all, if you're not charging for it, how are you keeping this platform up and healthy for the long term?
|
||||
|
||||
We're doing what we always have—providing a development tool that makes it drastically simpler to build web applications, though the apps you build using Anvil are 100% yours. We provide hosting for Anvil apps and we offer the entire development and hosting platform onsite for [enterprise customers][15]. This enables us to offer a free plan so that everyone can use Anvil for hobby or educational purposes, or to start building something and see where it goes.
|
||||
|
||||
### More to gain, little to lose
|
||||
|
||||
Open sourcing our runtime engine isn't a detractor from our business—it makes our online IDE more useful and more trustworthy, today and in the future. We've open-sourced the Anvil App Server for the people who need it, and to provide the ultimate insurance policy. It's the right move for our users—now they can build with confidence, knowing that the open source code is [right there][3] if they need it.
|
||||
|
||||
If our development philosophy resonates with you, why not try Anvil yourself?
|
||||
|
||||
|
||||
\-----
|
||||
|
||||
_This post is an adaptation of [Why We Open Sourced the Anvil App Server][16] and is reused with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/why-open-source
|
||||
|
||||
作者:[Meredydd Luff][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/meredydd-luff
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_OSwhy_520x292_ma.png?itok=lqfhAs8L (neon sign with head outline and open source why spelled out)
|
||||
[2]: https://anvil.works/blog/open-source
|
||||
[3]: https://github.com/anvil-works/anvil-runtime
|
||||
[4]: https://anvil.works/
|
||||
[5]: https://github.com/kamranahmedse/developer-roadmap#introduction
|
||||
[6]: https://opensource.com/sites/default/files/uploads/frameworks.png (A complex framework of development tools needed for a simple web app)
|
||||
[7]: https://anvil.works/docs/client/ui
|
||||
[8]: https://anvil.works/docs/client/python
|
||||
[9]: https://anvil.works/docs/deployment
|
||||
[10]: https://opensource.com/sites/default/files/uploads/ejector-seat-opensourcecom.jpg (Code generation is an ejector seat from a structured platform)
|
||||
[11]: https://commons.wikimedia.org/wiki/File:Crash.arp.600pix.jpg
|
||||
[12]: https://anvil.works/docs/editor
|
||||
[13]: https://anvil.works/docs/client
|
||||
[14]: https://anvil.works/blog/escape-hatches-and-ejector-seats
|
||||
[15]: https://anvil.works/docs/overview/enterprise
|
||||
[16]: https://anvil.works/blog/why-open-source
|
123
sources/talk/20200801 8 tips for running a virtual hackathon.md
Normal file
@ -0,0 +1,123 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (8 tips for running a virtual hackathon)
[#]: via: (https://opensource.com/article/20/8/virtual-hackathon)
[#]: author: (Jason Blais https://opensource.com/users/jasonblais)

8 tips for running a virtual hackathon
======

Reach new developers and engage with your existing community through virtual hackathons.

![Two people chatting via a video conference app][1]

Hackathons are events where developers, product managers, designers, and others come together to tackle problems over a short time period. They have become increasingly popular since [OpenBSD ran the first hackathon in June 1999][2].

These events provide several benefits: greater engagement across the community, innovation and new ideas, awareness for the organizers, and networking opportunities for participants.

[Mattermost][3], an open source messaging platform for DevOps teams, has also run and participated in several hackathons to engage with the open source community. So far in 2020, we participated in a [hackathon to overcome the challenges of COVID-19][4] and ran a [hackfest to create open source chatbots for developer workflows][5]. Both had thousands of participants and were run completely virtually.

We've come a long way from our [first hackathon in 2016][6], which had only 20 participants, to building a [step-by-step guide for running a virtual hackathon][7], and we've learned many valuable lessons.

If you are considering running a virtual hackathon, here are eight tips to keep in mind:

### Choose your hackathon theme

Before you begin to plan your hackathon, you'll need to choose a theme or topic for it.

Is there a specific business or social problem you're trying to solve? An industry you want to reach? Technologies you want to explore together with your community?

Choosing the theme at the start is important, as it gives you a clear focus for the rest of your planning: which communities to reach out to, how to determine logistics, what materials to prepare, and other details. It also inspires your community to work on solving the specific kinds of problems you're organizing around.

### Decide dates for the hackathon early

With 365 days in a year, you'd think it'd be easy to find one on which to hold an event, but it can actually be a challenge to schedule even a virtual hackathon. The most important thing is not to rush the preparation. Ideally, you should schedule the hackathon at least two months out. This gives you enough time to promote and organize it and gives your participants enough time to prepare for it, whether that means researching the hackathon topic or setting aside time for it on their calendars.

Here are a few quick tips for deciding when to run hackathons:

  * Avoid overlap with major holidays in the countries you're trying to reach. If the event is global, avoid major holidays in the countries where you have bigger communities.
  * Avoid overlap with significant company or community events that may act as a distraction during the hackathon, such as major product releases or developer conferences.
  * Avoid starting the hackathon on a Friday, especially if you plan to reach developers around the world. If you're located in the Western Hemisphere and hold a hackathon kick-off call on Friday, those in Europe and Asia will have to join either late Friday night or Saturday morning, effectively meaning the hackathon starts on a weekend in those regions. It is typically better to start on a Thursday instead.

Given that the hackathon is online, there is also an opportunity to extend it beyond a standard 48-hour event. We have found that running a three- or four-day event, for instance, results in much higher quality submissions that are closer to completion.

### Identify key team members to partner with during the hackathon

It is much easier to promote and organize the event when you have someone you can collaborate with. At a minimum, find one or two developers who can act as leaders during the event, mentoring newcomers and answering questions from participants. They can also be your point of contact for technical support before and during the event.

You should also consider whether you need a partner to run operations, such as scheduling calls, and a partner to create the event marketing materials.

Moreover, once you've picked the theme and dates of your event, reach out to your community and ask if anyone is open to helping. Many may be interested. Acting as a hackathon organizer and mentor is also a great experience for community members themselves.

### Produce a workback schedule

Start by listing everything you and your hackathon partners may need to do before, during, and after the hackathon. And I do mean everything: from deciding which platform to host your hackathon on, to which video conferencing tool to use for presentations, to what prizes to give out. The more comprehensive your workback schedule is, the better prepared you will be, and the more successful the hackathon will be.

To help you get started, check out the Mattermost ["How to Run a Hackathon" playbook][7] with key dates and activities that you can use for inspiration.

### Create a code of conduct

A code of conduct is important for promoting healthy behavior within your community. It establishes what is acceptable behavior for participants and helps create a safe and welcoming environment for everyone.

The same is true for hackathons. The code of conduct establishes the rules and the actions taken when the rules are broken. Include a clear contact or place where participants can report harassment. Often, the members you're partnering with from tip #3 will also help enforce the code of conduct as needed.

To learn more about codes of conduct, including ways to enforce them, see [this excellent guide][8].

### Make one landing page for the hackathon

During the hackathon, there is a lot going on. People ask questions, share progress, and team up with others. Especially when running a hackathon with hundreds (or thousands!) of participants, it can quickly get chaotic.

That is why a single hackathon landing page is critical. This landing page should contain all the information community members need to know, including resources to get started, where to ask questions, and how to create a submission. Updates can also be communicated in other channels like Twitter, but they should always be reflected on the hackathon's landing page. Your landing page is the source of truth.

Plus, make sure to include plenty of information for first-time participants. If you've run a few events, a lot of the information you share is familiar to you and previous participants. However, everything is new and often confusing for first-time participants, so make sure to keep them in mind when building the landing page.

### Plan for external promotion

A common reason to run a hackathon is to reach new users and communities who haven't heard of your project before. Therefore, announcing and promoting the hackathon beyond just your social channels is very important. However, simply spamming other communities and pages is not just ineffective, but also disrespectful.

Go back to tip #1, where you selected the hackathon theme. Are there certain technologies that are relevant to it? Or industries?

If so, study where those communities exist and reach out in their respective channels. Even better, find their community manager and share the hackathon with them first before sharing it with the wider community.

### Include social interaction time

The biggest challenge we've found with virtual hackathons, as compared to physical hackathons, is the social aspect. Since people are no longer working in the same room together, building connections and collaborating with others takes extra effort.

Make sure to add social activities throughout the event dates, accounting for different time zones. Include activity feeds where participants can share updates on what they're working on, drop-in social hours over video, coffee times, pizza hours, and more.

Taking the extra step to ensure there are opportunities for social interaction creates a more engaging experience for everyone involved.

### Ready to run your first virtual hackathon?

Running a virtual hackathon is not easy, but if you prepare for it well and use the above tips to plan it, you will be set up for success, and hopefully have lots of fun along the way! There is also an [excellent guide from HackerEarth][9] that you can use as a reference.

Ready to run your first virtual hackathon? Then use the Mattermost ["How to Run a Hackathon" playbook][7] to guide your planning and read about the [Mattermost Bot Hackfest][5] for inspiration.

Thank you also to Jesús Espino and Justin Reynolds for their valuable feedback on this article.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/virtual-hackathon

作者:[Jason Blais][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jasonblais
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app)
[2]: http://www.openbsd.org/hackathons.html
[3]: http://mattermost.com/
[4]: https://mattermost.com/blog/vencealvirus-virtual-hackathon/
[5]: https://mattermost.com/blog/mattermost-bot-hackfest-winners/
[6]: https://mattermost.com/blog/mattermost-holiday-hackfest-2016/
[7]: https://handbook.mattermost.com/contributors/contributors/how-to-run-a-hackathon
[8]: https://opensource.guide/code-of-conduct/
[9]: https://www.hackerearth.com/community-hackathons/resources/e-books/guide-to-organize-hackathon

@ -1,4 +1,3 @@

//translating by messon007
Systemd Services: Monitoring Files and Directories
======

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (messon007)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (jrglinux)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,76 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use this script to find a Raspberry Pi on your network)
[#]: via: (https://opensource.com/article/20/6/find-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

Use this script to find a Raspberry Pi on your network
======

Identify a specific Raspberry Pi in your cluster with a script that triggers an LED to flash.

![Raspberries with pi symbol overlay][1]

We've all been there. "I'm going to get this [Raspberry Pi][2] to try out. They look kinda cool." And then, like tribbles on an Enterprise, suddenly you have [Kubernetes clusters][3] and [NFS servers][4] and [Tor proxies][5]. Maybe even a [hotel booking system][6]!

Pis cover the desk. They spill out onto the floor. Carrier boards for Raspberry Pi compute modules installed into lunchboxes litter the shelves.

…or maybe that's just me?

I'll bet if you have one Raspberry Pi, you've got _at least_ two others, though, and gosh darn it, they all look the same.

This was the situation I found myself in recently while testing a network filesystem (NFS) server I set up on one of my Raspberry Pis. I needed to plug in a USB hard drive, but … to which one? Ol' Lingonberry Pi was the chosen host, and I was SSH'd into her, but which actual, _physical_ RPi was she? There was no way of knowing…

Or was there?

![Raspberry Pis stacked up in cluster cases][7]

So, so many Raspberry Pis. Which one is Lingonberry? (Chris Collins, [CC BY-SA 4.0][8])

At a previous job, I sometimes worked on servers in our data centers, and some of them had a neat feature: an ID button on the front of the server that, when pressed, started an LED flashing on the front and back of the server. If I needed to deal with the other side of the server, I could press the ID button, then walk _allllll_ the way around to the other side of the rack, and easily find the right server.

I needed something like this to find Lingonberry.

There aren't any buttons on the Pis, but there are LEDs, and after a quick Google search, I learned that [one of them is _controllable_][9]. _Cue maniacal laughter._

There are three important bits to know. First, the LED path: on Raspberry Pis, at least those running Ubuntu 20.04, the front (and user-controllable) LED is found at `/sys/class/leds/led0`. If you navigate to it, you'll find it is a symlink to a directory that has a number of files in it. The two important files are `trigger` and `brightness`.

The `trigger` file controls what lights up the LED. If you `cat` that file, you will find a list:

```
none usb-gadget usb-host rc-feedback rfkill-any
rfkill-none kbd-scrolllock kbd-numlock kbd-capslock
kbd-kanalock kbd-shiftlock kbd-altgrlock kbd-ctrllock
kbd-altlock kbd-shiftllock kbd-shiftrlock kbd-ctrlllock
kbd-ctrlrlock timer oneshot disk-activity disk-read
disk-write ide-disk mtd nand-disk heartbeat backlight
gpio cpu cpu0 cpu1 cpu2 cpu3 default-on input panic
mmc1 [mmc0] bluetooth-power rfkill0
unimac-mdio--19:01:link unimac-mdio--19:01:1Gbps
unimac-mdio--19:01:100Mbps unimac-mdio--19:01:10Mbps
```

The item in brackets indicates the current trigger for the LED; in the example above, it's `[mmc0]`, the disk activity for the SD card plugged into the Raspberry Pi. The trigger file isn't a normal file, though. Rather than editing it directly, you change the trigger by echoing one of the triggers into the file.

To identify Lingonberry, I needed to temporarily disable the `[mmc0]` trigger, so I could make the LED work how I wanted it to work. In the script, I disabled all the triggers by echoing "none" into the trigger file:

```
# You must be root to do this
$ echo none >trigger

$ cat trigger
[none] usb-gadget usb-host rc-feedback rfkill-any rfkill-none kbd-scrolllock kbd-numlock kbd-capslock kbd-kanalock kbd-shiftlock kbd-altgrlock kbd-ctrllock kbd-altlock kbd-shiftllock kbd-shiftrlock kbd-ctrlllock kbd-ctrlrlock timer oneshot disk-activity disk-read disk-write ide-disk mtd nand-disk heartbeat backlight gpio cpu cpu0 cpu1 cpu2 cpu3 default-on input panic mmc1 mmc0 bluetooth-power rfkill0 unimac-mdio--19:01:link unimac-mdio--19:01:1Gbps unimac-mdio--19:01:100Mbps unimac-mdio--19:01:10Mbps
```

In the contents of the trigger file above, you can see `[none]` is now the selected trigger. Now the LED is off and not flashing.
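
Putting `trigger` and `brightness` together, the identification idea can be sketched as a small shell function. This is not the author's actual script (that script isn't shown in this excerpt); the flash count, the delay, and the choice of restoring the `mmc0` trigger afterward are illustrative assumptions. The LED directory is passed as a parameter, so the same sketch works with `/sys/class/leds/led0` on a real Pi (as root) or with a test directory:

```shell
# flash_led: take manual control of a sysfs LED and flash it N times.
# $1 = LED directory (e.g. /sys/class/leds/led0), $2 = flash count (default 5).
# On a real Pi this must be run as root.
flash_led() {
    led_dir=$1
    count=${2:-5}

    echo none > "$led_dir/trigger"      # disable all triggers; LED is now under manual control

    i=0
    while [ "$i" -lt "$count" ]; do
        echo 1 > "$led_dir/brightness"  # LED on
        sleep 0.1
        echo 0 > "$led_dir/brightness"  # LED off
        sleep 0.1
        i=$((i + 1))
    done

    echo mmc0 > "$led_dir/trigger"      # hand the LED back to SD-card activity
}
```

On the Pi itself, `flash_led /sys/class/leds/led0 20` would blink the front LED long enough to spot the right board on the shelf.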

Next up is the brightness file. You can control whether the LED is on (1) or off (0) by echoing either 0 or 1 into the file. Alternating
@ -1,235 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Digging for DNS answers on Linux)
[#]: via: (https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Digging for DNS answers on Linux
======

Dig is a powerful and flexible tool for interrogating domain name system (DNS) servers. In this post, we'll take a deep dive into how it works and what it can tell you.

[Laurie Avocado][1] [(CC BY 2.0)][2]

Dig is a powerful and flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers returned by the name servers involved in the process, along with details related to the search. System and [DNS][3] administrators often use **dig** to help troubleshoot DNS problems. In this post, we'll take a deep dive into how it works and see what it can tell us.

To get started, it's helpful to have a good mental image of how DNS, the domain name system, works. It's a critical part of the global Internet because it provides a way to look up and, thereby, connect with servers around the world. You can think of it as the Internet's address book, and any system that is properly connected to the Internet should be able to use it to look up the IP address of any properly registered server.

### Getting started with dig

The **dig** tool is generally installed on Linux systems by default. Here's an example of a **dig** command with a little annotation:

```
$ dig www.networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> www.networkworld.com     <== version of dig you're using
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6034
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:                                    <== details on your query
;www.networkworld.com.          IN      A

;; ANSWER SECTION:                                      <== results
www.networkworld.com.   3568    IN      CNAME   idg.map.fastly.net.
idg.map.fastly.net.     30      IN      A       151.101.250.165

;; Query time: 36 msec                                  <== query time
;; SERVER: 127.0.0.53#53(127.0.0.53)                    <== local caching resolver
;; WHEN: Fri Jul 24 19:11:42 EDT 2020                   <== date and time of inquiry
;; MSG SIZE  rcvd: 97                                   <== bytes returned
```

If you get a response like this, is it good news? The short answer is "yes". You got a reply in a timely manner. The status field (status: NOERROR) shows there were no problems. You're connecting to a name server that is able to supply the requested information, and you're getting a reply that tells you some important details about the system you're inquiring about. In short, you've verified that your system and the domain name system are getting along just fine.

Other possible status indicators include:

**SERVFAIL** – The name that was queried exists, but no data is available or the available data is invalid.

**NXDOMAIN** – The name in question does not exist.

**REFUSED** – The zone does not exist at the requested authority, and the infrastructure is not set up to provide responses when this is the case.
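
In a script, this status field can be pulled out of dig's comment header with standard text tools. The following is a generic sketch (the sample line is abbreviated from dig output shown above; on a live system you would pipe `dig somedomain` into the function):

```shell
# dig_status: extract the status code (NOERROR, NXDOMAIN, ...) from dig output on stdin.
# Typical live use: dig example.com | dig_status
dig_status() {
    sed -n 's/.*->>HEADER<<-.*status: \([A-Z]*\),.*/\1/p'
}

# Abbreviated header line as captured from a dig run:
sample=';; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35348'

printf '%s\n' "$sample" | dig_status   # prints NXDOMAIN
```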

Here's an example of what you'd see if you were looking up a domain that doesn't exist:

```
$ dig cannotbe.org

; <<>> DiG 9.16.1-Ubuntu <<>> cannotbe.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35348
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
```

In general, **dig** provides more details than **ping**, though **ping** will respond with "Name or service not known" if the domain doesn't exist. When you ask about a legitimate system, you get to see what the domain name system knows about the system, how those records are configured, and how long it takes to retrieve that data.

In fact, sometimes **dig** can respond with information when **ping** cannot respond at all, and that kind of information can be very helpful when you're trying to nail down a connection problem.

### DNS record types and flags

One thing we can see in the first query above is the presence of both **CNAME** and **A** records. The **CNAME** (canonical name) record is like an alias that refers one domain name to another. Most systems that you dig for won't have a **CNAME** record, only an **A** record. If you run a "dig localhost" command, you will see an **A** record that simply refers to 127.0.0.1, the "loopback" address that every system uses. An **A** record maps a name to an IP address.

The DNS record types include:

  * A or AAAA – IPv4 and IPv6 addresses
  * CNAME – alias
  * MX – mail exchanger
  * NS – name server
  * PTR – a reverse entry that lets you find a system name from its IP address
  * SOA – start of authority record
  * TXT – some related text

We also see a series of "flags" on the fifth line of output. These are defined in [RFC 1035][4], which describes the flags included in the header of DNS messages and even shows the format of those headers:

```
                                1  1  1  1  1  1
  0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                      ID                       |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|QR|   Opcode  |AA|TC|RD|RA|   Z    |   RCODE   |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    QDCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ANCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    NSCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|                    ARCOUNT                    |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
```

The flags shown in the fifth line of the initial query above are:

  * **qr** = query/response
  * **rd** = recursion desired
  * **ra** = recursion available

Other flags described in the RFC include:

  * **aa** = authoritative answer
  * **cd** = checking disabled
  * **ad** = authentic data
  * **opcode** = a 4-bit field
  * **tc** = truncation
  * **z** (unused)
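
Because these flags are individual bits in the 16-bit flags word of the header layout above, they can be decoded with plain shell arithmetic. The sketch below is an illustration (the function name is ours, and the bit positions are taken from the RFC 1035 diagram); `0x8180` is the flags word of a typical recursive answer, matching the `qr rd ra` line in the first query:

```shell
# dns_flags: decode qr/aa/tc/rd/ra and RCODE from a 16-bit DNS header flags word.
# Bit positions follow the RFC 1035 header layout shown above.
dns_flags() {
    f=$1
    out=""
    [ $(( f & 0x8000 )) -ne 0 ] && out="$out qr"   # bit 15: response (vs. query)
    [ $(( f & 0x0400 )) -ne 0 ] && out="$out aa"   # bit 10: authoritative answer
    [ $(( f & 0x0200 )) -ne 0 ] && out="$out tc"   # bit 9: truncated
    [ $(( f & 0x0100 )) -ne 0 ] && out="$out rd"   # bit 8: recursion desired
    [ $(( f & 0x0080 )) -ne 0 ] && out="$out ra"   # bit 7: recursion available
    printf 'flags:%s; rcode: %d\n' "$out" $(( f & 0x000F ))
}

dns_flags 0x8180   # prints "flags: qr rd ra; rcode: 0" (NOERROR)
```

An RCODE of 3 here corresponds to the NXDOMAIN status discussed earlier.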

### Adding the +trace option

You will get a LOT more output from **dig** if you add **+trace** as an option. It adds information that shows how your DNS query worked its way through the hierarchy of name servers to locate the answer you're looking for.

All the **NS** records shown below reflect name servers, and this is just the first section of data you will see as the query runs through the hierarchy of name servers to track down what you're looking for.

```
$ dig +trace networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> +trace networkworld.com
;; global options: +cmd
.                       84895   IN      NS      k.root-servers.net.
.                       84895   IN      NS      e.root-servers.net.
.                       84895   IN      NS      m.root-servers.net.
.                       84895   IN      NS      h.root-servers.net.
.                       84895   IN      NS      c.root-servers.net.
.                       84895   IN      NS      f.root-servers.net.
.                       84895   IN      NS      a.root-servers.net.
.                       84895   IN      NS      g.root-servers.net.
.                       84895   IN      NS      l.root-servers.net.
.                       84895   IN      NS      d.root-servers.net.
.                       84895   IN      NS      b.root-servers.net.
.                       84895   IN      NS      i.root-servers.net.
.                       84895   IN      NS      j.root-servers.net.
;; Received 262 bytes from 127.0.0.53#53(127.0.0.53) in 28 ms
...
```

Eventually, you'll get information tied directly to your request:

```
networkworld.com.       300     IN      A       151.101.2.165
networkworld.com.       300     IN      A       151.101.66.165
networkworld.com.       300     IN      A       151.101.130.165
networkworld.com.       300     IN      A       151.101.194.165
networkworld.com.       14400   IN      NS      ns-d.pnap.net.
networkworld.com.       14400   IN      NS      ns-a.pnap.net.
networkworld.com.       14400   IN      NS      ns0.pcworld.com.
networkworld.com.       14400   IN      NS      ns1.pcworld.com.
networkworld.com.       14400   IN      NS      ns-b.pnap.net.
networkworld.com.       14400   IN      NS      ns-c.pnap.net.
;; Received 269 bytes from 70.42.185.30#53(ns0.pcworld.com) in 116 ms
```

### Picking your responder

You can use the **@** sign to specify a particular name server to handle your query. Here we're asking Google's primary name server to respond to our query:

```
$ dig @8.8.8.8 networkworld.com

; <<>> DiG 9.16.1-Ubuntu <<>> @8.8.8.8 networkworld.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43640
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;networkworld.com.              IN      A

;; ANSWER SECTION:
networkworld.com.       299     IN      A       151.101.66.165
networkworld.com.       299     IN      A       151.101.194.165
networkworld.com.       299     IN      A       151.101.130.165
networkworld.com.       299     IN      A       151.101.2.165

;; Query time: 48 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Jul 25 11:21:19 EDT 2020
;; MSG SIZE  rcvd: 109
```

The command shown below does a reverse lookup of the 8.8.8.8 IP address to show that it belongs to Google's DNS server:

```
$ nslookup 8.8.8.8
8.8.8.8.in-addr.arpa    name = dns.google.
```
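
The `8.8.8.8.in-addr.arpa` name that reverse lookup queries is derived mechanically: the four octets of the IPv4 address are reversed and `.in-addr.arpa` is appended (for 8.8.8.8 the reversal happens to look identical). A small sketch of that transformation:

```shell
# reverse_name: build the in-addr.arpa name that a reverse (PTR) lookup queries.
# The octets of the IPv4 address are reversed and .in-addr.arpa is appended.
reverse_name() {
    echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
}

reverse_name 8.8.8.8    # prints 8.8.8.8.in-addr.arpa
reverse_name 192.0.2.1  # prints 1.2.0.192.in-addr.arpa
```

Running `dig -x 8.8.8.8` performs this construction for you and queries the resulting name for a **PTR** record.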

### Wrap-Up

The dig command is an essential tool for both grasping how DNS works and troubleshooting connection problems when they arise.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/auntylaurie/15997799384
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
[4]: https://tools.ietf.org/html/rfc1035
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -0,0 +1,87 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bypass your Linux firewall with SSH over HTTP)
[#]: via: (https://opensource.com/article/20/7/linux-shellhub)
[#]: author: (Domarys https://opensource.com/users/domarys)

Bypass your Linux firewall with SSH over HTTP
======

Remote work is here to stay; use this helpful open source solution to quickly connect and access all your devices from anywhere.

![Terminal command prompt on orange background][1]

With the growth of connectivity and remote jobs, accessing remote computing resources becomes more important every day. But the requirements for providing external access to devices and hardware make this task complex and risky. Aiming to reduce this friction, [ShellHub][2] is a cloud server that allows universal access to those devices, from any external network.

ShellHub is an open source solution, licensed under Apache 2.0, that covers all those needs and allows users to connect and manage multiple devices through a single account. It was developed to facilitate developers' and programmers' tasks, making remote access to Linux devices possible for any hardware architecture.

Looking more closely, the ShellHub solution uses the HTTP transport layer to encapsulate the SSH protocol. This transport-layer choice allows for seamless use on most networks, as it is commonly available and accepted by most companies' firewall rules and policies.

These examples use ShellHub version 0.3.2, released on June 10, 2020.

### Using ShellHub

To access the platform, just go to [shellhub.io][3] and register yourself to create an account. Your registration data will help the development team understand the user profile and provide more insight into how to improve the platform.

![ShellHub registration form][4]

Figure 1: Registration form available on [shellhub.io][5]

ShellHub's design has an intuitive and clean interface that makes all information and functionality available in the fastest way. After you've registered, you will be on the dashboard, ready to register your first device.

### Adding a device

To enable the connection of devices via ShellHub, you'll need to generate an identifier that will be used to authenticate your device when it connects to the server.

This identifier must be configured in the agent (the ShellHub client), which is either saved on the device along with its image or added as a Docker container.

By default, ShellHub uses Docker to run the agent, which is very convenient, as it provides frictionless addition of devices on the existing system, with Docker support being the only requirement. To add a device, you need to paste the command line, which is presented inside the ShellHub Cloud dialog (see Figure 2).

![Figure 2: Adding a device to the ShellHub Cloud][6]

By default, the device uses its MAC address as its hostname. Internally, the device is identified by its key, which is generated during device registration to authenticate it with the server.

### Accessing devices

To access your devices, just go to View All Devices in the dashboard, or click on Devices in the left-side menu; this will list all your registered devices.

The device state can be easily seen on the page. Online devices show a green icon next to them and can be connected to by clicking on the terminal icon. You then enter the credentials and, finally, click the Connect button (see Figure 3).

![Figure 3: Accessing a device using the terminal on the web][7]

Another way to access your devices is from any SSH client like [PuTTY][8], [Termius][9], or even the Linux terminal. We can use the ShellHub Identification, called SSHID, as the destination address to connect (e.g., ssh [username@SSHID][10]). Figure 4 illustrates how we can connect to our machine using the Linux SSH client on the terminal.
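
If you connect to the same device often, the SSHID can also be saved in your SSH client configuration so a short alias works. A sketch for `~/.ssh/config`, where `shellhub-device`, `username`, and `SSHID` are placeholders standing in for your own alias, user name, and the device's ShellHub ID:

```
# ~/.ssh/config -- "shellhub-device", "username", and "SSHID" are placeholders
Host shellhub-device
    User username
    # The ShellHub SSHID takes the place of a normal host address
    HostName SSHID
```

With this in place, `ssh shellhub-device` is equivalent to `ssh username@SSHID`.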

![Figure 4: Connecting to a device using the Linux terminal][11]

Whenever you log in to the ShellHub Cloud platform, you'll have access to all your registered devices on the dashboard so you can access them from everywhere, anytime. ShellHub adds simplicity to the process of keeping communications secure with your remote machines through an open source platform and in a transparent way.

Join the ShellHub community on [GitHub][2] or feel free to send your suggestions or feedback to the developers' team through [Gitter][12] or by emailing [contato@ossystems.com.br][13]. We love to receive contributions from community members!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/linux-shellhub

作者:[Domarys][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/domarys
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://github.com/shellhub-io/shellhub
[3]: https://www.shellhub.io/
[4]: https://opensource.com/sites/default/files/uploads/shellhub_registration_form_0.png (ShellHub registration form)
[5]: https://opensource.com/article/20/7/www.shellhub.io
[6]: https://opensource.com/sites/default/files/figure2.gif
[7]: https://opensource.com/sites/default/files/figure3.gif
[8]: https://www.putty.org/
[9]: https://termius.com/
[10]: mailto:username@SSHID
[11]: https://opensource.com/sites/default/files/figure4.gif
[12]: https://gitter.im/shellhub-io/community?at=5e39ad8b3aca1e4c5f633e8f
[13]: mailto:contato@ossystems.com.br

@ -0,0 +1,78 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How learning Linux introduced me to open source)
[#]: via: (https://opensource.com/article/20/7/open-source-learning)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)

How learning Linux introduced me to open source
======

An engineering student's open source internships and volunteer contributions helped her land a full-time developer job.

![Woman sitting in front of her computer][1]

When I entered the engineering program as a freshman in college, I felt like a frivolous teenager. In my sophomore year, and in a fortunate stroke of serendipity, I joined [Zairza][2], a technical society for like-minded students who collaborated and built projects separate from the academic curriculum. It was right up my alley. Zairza provided me a safe space to learn and grow and discover my interests. There are different facets and roadways to development, and as a newbie, I didn't know where my interests lay.

I made the switch to Linux then because I heard it is good for development. Fortunately, I had Ubuntu on my system. At first, I found it obnoxious to use because I was used to Windows. But I slowly got the hang of it and fell in love with it over time. I started exploring development by trying to build apps using Android and creating data visualizations using Python. I built a Wikipedia Reader app using the [Wikipedia API][3], which I thoroughly enjoyed. I learned to use Git and put my projects on GitHub, which not only helped me showcase my projects but also enabled me to store them.

I kept juggling between Ubuntu and other Linux distributions. My machine wasn't able to handle Android Studio since it consumed a lot of RAM. I finally made a switch to Fedora in 2016, and I have not looked back since.

At the end of my sophomore year, I applied to [Rails Girls Summer of Code][4] with another member of Zairza, [Anisha Swain][5], where we contributed to [HospitalRun][6]. I didn't know much about the tech stack, but I tagged along with her. This experience introduced me to open source. As I learned more about it, I came to realize that open source is ubiquitous. The tools I had used for a long time, like Git, Linux, and even Fedora, were open source all the while. It was fascinating!

I made my first contribution when I participated in [Hacktoberfest][7] 2017. I started diving deep and contributing to projects on GitHub. Slowly, I began gaining confidence. All the communities were newcomer-friendly, and I no longer felt like a fish out of water.

In November 2017, I began learning about other open source programs like [Google Summer of Code][8] and [Outreachy][9]. I discovered that Outreachy runs twice a year and decided to apply for the December to March cohort. It was late to apply, but I wanted to participate. I chose to contribute to [Ceph][10] and built some data visualizations using JavaScript. The mentors were helpful and amiable. I wasn't able to get through the project but, to be honest, I didn't think I tried hard enough. So, I decided to participate in the next cohort and contribute to projects that piqued my interest.

I started looking for projects as soon as they were announced on the Outreachy website. I found a Django project under the [Open Humans Foundation][11] and started contributing. I wasn't familiar with Django, but I learned it on the go. I enjoyed every bit of it! I learned about [GraphQL][12], [Django][13], and APIs in general. Three months after I started making contributions, the project announced its new interns. To my utter surprise, I got through. I was overjoyed! I learned many new things throughout my internship, and my mentor, Mike Escalante, was very supportive and helpful. I would like to extend my heartfelt gratitude to the Open Humans Foundation for extending this opportunity to me. I also attended [PyCon India][14] in Hyderabad the same year. I had never attended a conference before; it felt great to meet other passionate Pythonistas, and I could feel the power of community.

At the end of 2018, when I was edging closer to the end of my engineering program, I started preparing for interviews. That was a roller-coaster ride. I wasn't able to get past the second technical round in most of them.

In the meantime, I participated in the [Processing Foundation's fellowship program][15], where I worked with two other fellows, [Nancy Chauhan][16] and Shaharyar Shamshi, on promoting software literacy and making Processing's tools accessible to the Indian community. I applied as a mentor to open source programs, including [GirlScript Summer of Code][17] (GSSoC). Despite being a first-timer mentor, I found it really rewarding.

I also delivered [a talk][18] on my Outreachy project at [DjangoCon Europe][19] in April 2019. It was my first talk and also my first time alone abroad! I got a chance to interact and connect with the larger Django community, and I'm still in touch with the Djangonaut friends I made there. In July 2019, I started a [PyLadies chapter in Bhubaneswar][20], India, which held its first meetup the same month.

I went on job interviews relentlessly. I felt despondent and useless at times, but I realized I was getting better at them. I learned about internship openings at Red Hat in June 2019. I applied, and after several rounds, I got one! I started interning with Red Hat at the end of July and started working full time in January 2020.

It's been a year since I joined Red Hat, and not a single day has gone by without me learning something. In the last year, I have mentored in various open source programs, including [Google Code-In][21], GSSoC, [Red Hat Open Source Contest][22], and [Mentors Without Borders][23]. I have also discovered that I love to attend and speak at conferences. So far, I have spoken at conferences including PyCon, DjangoCon, and [Git Commit Show][24] and local meetups including Rails Girls Sekondi, PyLadies Bangalore, and Women Techmakers Bhubaneswar.

This journey from a confused teenager to a confident learner has been fulfilling in every possible way. To any student reading this, I advise: never stop learning. Even in these unprecedented times, the world is still your oyster. Participating in open source internships and other programs is not a prerequisite to becoming a successful programmer. Everyone is unique. Open source programs help boost your confidence, but they are not a must-have. And, if you do participate, even if you don't complete anything, don't worry. Believe in yourself, and keep looking for new opportunities to learn. Keep feeding your curiosity—and don't forget to pat yourself on your back for your efforts. The tassel is going to be worth the hassle.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/open-source-learning

作者:[Manaswini Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer)
[2]: https://zairza.in/
[3]: https://www.mediawiki.org/wiki/API:Main_page
[4]: https://railsgirlssummerofcode.org/
[5]: https://github.com/Anisha1234
[6]: https://hospitalrun.io/
[7]: https://hacktoberfest.digitalocean.com/
[8]: https://summerofcode.withgoogle.com/
[9]: http://outreachy.org/
[10]: https://ceph.io/
[11]: http://openhumansfoundation.org/
[12]: https://graphql.org/
[13]: https://www.djangoproject.com/
[14]: https://in.pycon.org/2018/
[15]: https://medium.com/processing-foundation/meet-our-2019-fellows-9f13d4e4a68a
[16]: https://nancychauhan.in/
[17]: https://www.gssoc.tech/
[18]: https://www.youtube.com/watch?v=IJ3qMXBRUXo
[19]: https://2019.djangocon.eu/
[20]: https://twitter.com/pyladiesbbsr
[21]: https://codein.withgoogle.com/archive/
[22]: https://research.redhat.com/red-hat-open-source-contest/
[23]: https://www.mentorswithoutborders.net/
[24]: https://gitcommit.show/

284
sources/tech/20200730 Monitor systemd journals via email.md
Normal file
@ -0,0 +1,284 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor systemd journals via email)
[#]: via: (https://opensource.com/article/20/7/systemd-journals-email)
[#]: author: (Kevin P. Fleming https://opensource.com/users/kpfleming)

Monitor systemd journals via email
======

Get a daily email with noteworthy output from your systemd journals with journal-brief.

![Note taking hand writing][1]

Modern Linux systems often use systemd as their init system and manager for jobs and many other functions. Services managed by systemd generally send their output (of all forms: warnings, errors, informational messages, and more) to the systemd journal, not to traditional logging systems like syslog.

In addition to services, Linux systems often have many scheduled jobs (traditionally called cron jobs, even if the system doesn't use `cron` to run them), and these jobs may either send their output to the logging system or allow the job scheduler to capture the output and deliver it via email.

When managing multiple systems, you can install and configure a centralized log-capture system to monitor their behavior, but the complexity of centralized systems can make them hard to manage.

A simpler solution is to have each system directly send "interesting" output to the administrator(s) by email. For systems using systemd, this can be done using Tim Waugh's [journal-brief][2] tool. This tool _almost_ served my needs when I discovered it recently, so, in typical open source fashion, I contributed various patches to add email support to the project. Tim worked with me to get them merged, and now I can use the tool to monitor the 20-plus systems I manage as simply as possible.

Now, early each morning, I receive between 20 and 23 email messages: most of them contain a filtered view of each machine's entire systemd journal (with warnings or more serious messages), but a few are logs generated by scheduled ZFS snapshot-replication jobs that I use for backups. In this article, I'll show you how to set up similar messages.

### Install journal-brief

Although journal-brief is available in many Linux package repositories, the packaged versions will not include email support because that was just added recently. That means you'll need to install it from PyPI; I'll show you how to manually install it into a Python virtual environment to avoid interfering with other parts of the installed system. If you have a favorite tool for doing this, feel free to use it.

Choose a location for the virtual environment; in this article, I'll use `/opt/journal-brief` for simplicity.

Nearly all the commands in this tutorial must be executed with root permissions or the equivalent (noted by the `#` prompt). However, it is possible to install the software in a user-owned directory, grant that user permission to read from the journal, and install the necessary units as systemd `user` units, but that is not covered in this article.

Execute the following to create the virtual environment and install journal-brief and its dependencies:

```
$ python3 -m venv /opt/journal-brief
$ source /opt/journal-brief/bin/activate
$ pip install 'journal-brief>=1.1.7'
$ deactivate
```

In order, these commands will:

1. Create `/opt/journal-brief` and set up a Python 3.x virtual environment there
2. Activate the virtual environment so that subsequent Python commands will use it
3. Install journal-brief; note that the single quotes are necessary to keep the shell from interpreting the `>` character as a redirection
4. Deactivate the virtual environment, returning the shell back to the original Python installation

Also, create some directories to store journal-brief configuration and state files with:

```
$ mkdir /etc/journal-brief
$ mkdir /var/lib/journal-brief
```

### Configure email requirements

While configuring email clients and servers is outside the scope of this article, for journal-brief to deliver email, you will need to have one of the two supported mechanisms configured and operational.

#### Option 1: The `mail` command

Many systems have a `mail` command that can be used to send (and read) email. If such a command is installed on your system, you can verify that it is configured properly by executing a command like:

```
$ echo "Message body" | mail --subject="Test message" {your email address here}
```

If the message arrives in your mailbox, you're ready to proceed using this type of mail delivery in journal-brief. If not, you can either troubleshoot and correct the configuration or use SMTP delivery.

To control the generated email messages' attributes (e.g., From address, To address, Subject) with the `mail` command method, you must use the command-line options of your system's mailer program: journal-brief will only construct a message's body and pipe it to the mailer.

#### Option 2: SMTP delivery

If you have an SMTP server available that can accept email and forward it to your mailbox, journal-brief can communicate directly with it. In addition to plain SMTP, journal-brief supports Transport Layer Security (TLS) connections and authentication, which means it can be used with many hosted email services (like Fastmail, Gmail, Pobox, and others). You will need to obtain a few pieces of information to configure this delivery mode:

* SMTP server hostname
* Port number to be used for message submission (it defaults to port 25, but port 587 is commonly used)
* TLS support (optional or required)
* Authentication information (username and password/token, if required)

When using this delivery mode, journal-brief will construct the entire message before submitting it to the SMTP server, so the From address, To address, and Subject will be supplied in journal-brief's configuration.

### Set up configuration and cursor files

Journal-brief uses YAML-formatted configuration files; it uses one file per desired combination of filtering parameters, delivery options, and output formats. For this article, these files are stored in `/etc/journal-brief`, but you can store them in any location you like.

In addition to the configuration files, journal-brief creates and manages **cursor** files, which allow it to keep track of the last message in its output. Using one cursor file for each configuration file ensures that no journal messages will be lost, in contrast to a time-based log-delivery system, which might miss messages if a scheduled delivery job can't run to completion. For this article, the cursor files will be stored in `/var/lib/journal-brief` (you can store the cursor files in any location you like, but make sure not to store them in any type of temporary filesystem, or they'll be lost).

Finally, journal-brief has extensive filtering and formatting capabilities; I'll describe only the most basic options, and you can learn more about its capabilities in the documentation for journal-brief and [systemd.journal-fields][3].

### Configure a daily email with interesting journal entries

This example will set up a daily email to a system administrator named Robin at `robin@domain.invalid` from a server named `storage`. Robin's mail provider offers SMTP message submission through port 587 on a server named `mail.server.invalid` but does not require authentication or TLS. The email will be sent from `storage-server@domain.invalid`, so Robin can easily filter the incoming messages or generate alerts from them.

Robin has the good fortune to live in Fiji, where the workday starts rather late (around 10:00am), so there's plenty of time every morning to read emails of interesting journal entries. This example will gather the entries and deliver them at 8:30am in the local time zone (Pacific/Fiji).

#### Step 1: Configure journal-brief

Create a text file at `/etc/journal-brief/daily-journal-email.yml` with these contents:

```
cursor-file: '/var/lib/journal-brief/daily-journal-email'
output:
  - 'short'
  - 'systemd'
inclusions:
  - PRIORITY: 'warning'
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'daily journal'
    host: 'mail.server.invalid'
    port: 587
```

This configuration causes journal-brief to:
* Store the cursor at the path configured as `cursor-file`
* Format journal entries using the `short` format (one line per entry) and provide a list of any systemd units that are in the `failed` state
* Include journal entries from _any_ service unit (even the Linux kernel) with a priority of `warning`, `error`, or `emergency`
* Send an email even if there are no matching journal entries, so Robin can be sure that the storage server is still operating and has connectivity
* Send the email using SMTP
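
The priority names map onto the standard syslog severity levels, where lower numbers are more severe, so a filter of `warning` also pulls in everything more serious than a warning. A small sketch of that mapping (the level names and numbers here come from the syslog convention, not from journal-brief itself):

```shell
#!/bin/sh
# syslog severity levels: 0 (emerg) is most severe, 7 (debug) least severe.
# A filter of "warning" (level 4) includes every level numerically <= 4,
# so emerg, alert, crit, err, and warning all match.
threshold=4
for entry in emerg:0 alert:1 crit:2 err:3 warning:4 notice:5 info:6 debug:7; do
  name=${entry%%:*}
  level=${entry##*:}
  if [ "$level" -le "$threshold" ]; then
    echo "$name included"
  fi
done
```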
You can test this configuration file by executing a journal-brief command:

```
$ /opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/daily-journal-email.yml
```

Journal-brief will scan the systemd journal for all new messages (yes, _all_ of the messages it has never seen before), identify any that match the priority filter, and format them into an email that it sends to Robin. If the storage server has been operational for months (or years) and the systemd journal has never been purged, this could produce a very large email message. In addition to Robin not appreciating such a large message, Robin's email provider may not be willing to accept it, so you can generate a shorter message by executing this command:

```
$ /opt/journal-brief/bin/journal-brief -b --conf /etc/journal-brief/daily-journal-email.yml
```

Adding the `-b` argument tells journal-brief to inspect only the systemd journal entries from the most recent system boot and ignore any that are older.
After journal-brief sends the email to the SMTP server, it writes a string into the cursor file so that the next time it runs using the same cursor file, it will know where to start in the journal. If the process fails for any reason (e.g., journal entry gathering, entry formatting, or SMTP delivery), the cursor file will _not_ be updated, which means the next time it uses the cursor file, the entries that would have been in the failed email will be included in the next email instead.
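
That cursor-update rule can be sketched in a few lines of shell (illustrative only; journal-brief is a Python tool and this is not its real code): the new cursor is written only after delivery succeeds, so a failed run leaves the old cursor in place and the same entries are retried next time.

```shell
#!/bin/sh
# Illustrative sketch of journal-brief's cursor behavior, not its real code.
cursor_file=$(mktemp)
echo "cursor-before-failed-run" > "$cursor_file"

deliver_email() { return 1; }   # simulate an SMTP delivery failure

new_cursor="cursor-after-successful-run"
if deliver_email; then
  printf '%s\n' "$new_cursor" > "$cursor_file"   # only updated on success
fi

cat "$cursor_file"   # still the old cursor: no entries are lost
```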

#### Step 2: Set up the systemd service unit

Create a text file at `/etc/systemd/system/daily-journal-email.service` with:

```
[Unit]
Description=Send daily journal report

[Service]
ExecStart=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
Type=oneshot
```

This service unit will run journal-brief and specify a configuration file with the same name as the unit file with the suffix removed, which is what `%N` supplies. Since this service will be started by a timer (see step 3), there is no need to enable or manually start it.
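
The `%N` specifier expands to the unit's name with its type suffix removed, which is what lets the same unit body serve any similarly named configuration. The expansion is easy to mimic in shell (a sketch; systemd performs this substitution itself):

```shell
#!/bin/sh
# %N for a unit is its full name minus the trailing type suffix,
# so "daily-journal-email.service" yields "daily-journal-email".
unit="daily-journal-email.service"
echo "${unit%.service}"   # prints daily-journal-email
```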

#### Step 3: Set up the systemd timer unit

Create a text file at `/etc/systemd/system/daily-journal-email.timer` with:

```
[Unit]
Description=Trigger daily journal email report

[Timer]
OnCalendar=*-*-* 08:30:00 Pacific/Fiji

[Install]
WantedBy=multi-user.target
```

This timer will start the `daily-journal-email` service unit (because its name matches the timer name) every day at 8:30am in the Pacific/Fiji time zone. If the time zone was not specified, the timer would trigger the service at 8:30am in the system time zone configured on the `storage` server.
To make this timer start every time the system boots, it is `WantedBy` the multi-user target. To enable and start the timer:

```
$ systemctl enable daily-journal-email.timer
$ systemctl start daily-journal-email.timer
$ systemctl list-timers daily-journal-email.timer
```

The last command will display the timer's status, and the `NEXT` column will indicate the next time the timer will start the service.

To learn more about systemd timers and building schedules for them, read [_Use systemd timers instead of cronjobs_][6].

Now the configuration is complete, and Robin will receive a daily email of interesting journal entries.

### Monitor the output of a specific service

The `storage` server has some filesystems on solid-state storage devices (SSD) and runs Fedora Linux. Fedora has an `fstrim` service that is scheduled to run once per week (using a systemd timer, as in the example above). Robin would like to see the output generated by this service, even if it doesn't generate any warnings or errors. While this output will be included in the daily journal email, it will be intermingled with other journal entries, and Robin would prefer to have the output in its own email message.

#### Step 1: Configure journal-brief

Create a text file at `/etc/journal-brief/fstrim.yml` with:

```
cursor-file: '/var/lib/journal-brief/fstrim'
output: 'short'
inclusions:
  - _SYSTEMD_UNIT:
      - 'fstrim.service'
email:
  suppress_empty: false
  smtp:
    to: '"Robin" <robin@domain.invalid>'
    from: '"Storage Server" <storage-server@domain.invalid>'
    subject: 'weekly fstrim'
    host: 'mail.server.invalid'
    port: 587
```

This configuration is similar to the previous example, except that it will include _all_ entries related to a systemd unit named `fstrim.service`, regardless of their priority levels, and will include _only_ entries related to that service.

#### Step 2: Modify the systemd service unit

Unlike in the previous example, you don't need to create a systemd service unit or timer, since they already exist. Instead, you want to add behavior to the existing service unit by using the systemd "drop-in file" mechanism (to avoid modifying the system-provided unit file).
First, ensure that the `EDITOR` environment variable is set to your preferred text editor (otherwise you'll get the default editor on your system), and execute:

```
$ systemctl edit fstrim.service
```

Note that this does not edit the existing service unit file; instead, it opens an editor session to create a drop-in file (located at `/etc/systemd/system/fstrim.service.d/override.conf`).
Paste these contents into the editor and save the file:

```
[Service]
ExecStopPost=/opt/journal-brief/bin/journal-brief --conf /etc/journal-brief/%N.yml
```

After you exit the editor, the systemd configuration will reload automatically (which is one benefit of using `systemctl edit` instead of creating the file directly). Like in the previous example, this drop-in uses `%N` to avoid duplicating the service name; this means that the drop-in contents can be applied to any service on the system, as long as the appropriate configuration file is created in `/etc/journal-brief`.
|
||||
|
||||
Using `ExecStopPost` will make journal-brief run after any attempt to run the `fstrim.service`, whether or not it's successful. This is quite useful, as the email will be generated even if the `fstrim.service` cannot be started (for example, if the `fstrim` command is missing or not executable).
|
||||
|
||||
Please note that this technique is primarily applicable to systemd services that run to completion before exiting (in other words, not background or daemon processes). If the `Type` in the `Service` section of the service's unit file is `forking`, then journal-brief will not execute until the specified service has stopped (either manually or by a system target change, like shutdown).
|
||||
|
||||
The configuration is complete; Robin will receive an email after every attempt to start the `fstrim` service; if the attempt is successful, then the email will include the output generated by the service.

### Monitor without extra effort

With this setup, you can monitor the health of your Linux systems that use systemd without needing to set up any centralized monitoring or logging tools. I find this monitoring method quite effective, as it draws my attention to unusual events on the servers I maintain without requiring any additional effort.

Special thanks to Tim Waugh for creating the journal-brief tool and being willing to accept a rather large patch to add direct email support rather than running journal-brief through cron.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/systemd-journals-email

作者:[Kevin P. Fleming][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kpfleming
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/note-taking.jpeg?itok=fiF5EBEb (Note taking hand writing)
[2]: https://github.com/twaugh/journal-brief
[3]: https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html
[4]: mailto:robin@domain.invalid
[5]: mailto:storage-server@domain.invalid
[6]: https://opensource.com/article/20/7/systemd-timers
@ -0,0 +1,273 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bring your Mycroft AI voice assistant skill to life with Python)
[#]: via: (https://opensource.com/article/20/7/mycroft-voice-skill)
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)

Bring your Mycroft AI voice assistant skill to life with Python
======

Put the final polishes on your Mycroft skill by managing dependencies, debugging, collecting user-specific data, and getting everything into your Python code.

![Hands on a keyboard with a Python book ][1]

In the first two articles of this series on [Mycroft][2], an open source, privacy-focused digital voice assistant, I covered the [background behind voice assistants][3] and some of Mycroft's [core tenets][4]. In Part 3, I started [outlining the Python code][5] required to provide some basic functionality to a skill that adds items to [OurGroceries][6], a grocery list app. And in Part 4, I talked about the different types of [intent parsers][7] (and when to use each) and expanded the Python code so Mycroft could provide audible feedback while working through the skill.

In this fifth article, I will walk through the remaining sections required to build this skill. I'll talk about project dependencies, logging output for debugging purposes, working with the Mycroft web UI for setting values (such as usernames and passwords), and how to get this information into your Python code.

### Dealing with project dependencies

There are generally three sources for project dependencies when writing a Mycroft skill:

* Python packages from [PyPI][8]
* System-level packages pulled from a repository
* Other Mycroft skills

There are a couple of ways to deal with dependencies in Mycroft. You can use "requirements" files, or you can use the `manifest.yml` file.

Since most of the skills in the Mycroft store use requirement files, I will merely touch on the `manifest.yml` file. The `manifest.yml` file is pretty straightforward. There is a `dependencies:` section, and under this are three options: `python:`, `system:`, and `skill:`. Under each heading, you should specify the names of required dependencies. An example file could look like this:

```
dependencies:
  # Pip dependencies on PyPI
  python:
    - requests
    - gensim

  system:
    # For simple packages, this is all that is necessary
    all: pianobar piano-dev

  # Require the installation of other skills before installing this skill
  skill:
    - my-other-skill
```

However, since the majority of skills use requirement files, I'll use that option for this project, so you can use it as an example for other skills you may wish to use or create.

In Python, the `requirements.txt` file, which lists all the Python dependencies a project requires, is very common. This file is pretty simple; it can either be a list of packages or a list with specific versions. I will specify a minimal version with some code I submitted to the `ourgroceries` project. There are three options for this project's `requirements.txt`:

* `ourgroceries==1.3.5`: Specifies that the package must be version 1.3.5
* `ourgroceries>=1.3.5`: Specifies that the package must be version 1.3.5 or higher
* `ourgroceries`: Allows any version of the package

My `requirements.txt` uses `ourgroceries>=1.3.5` to allow for future updates. Following this same logic, your `requirements.txt` could list different packages instead of specifying a single package.
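For instance, a hypothetical multi-package `requirements.txt` mixing these styles might look like this (the `requests` and `gensim` entries, and the pinned version, are illustrative and not part of this project):

```
ourgroceries>=1.3.5
requests
gensim==3.8.3
```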

The entirety of my `requirements.txt` file is one line:

```
ourgroceries>=1.3.5
```

You can also opt to use `requirements.sh`. This is a shell script that can be used to install packages, download modules from Git, or do any number of things. This file executes while installing a new skill. The [Zork skill][9] has an example of a `requirements.sh` script. However, while you can use this, if you want to submit your skill to the store, the `requirements.sh` will be scrutinized fairly heavily to mitigate security issues.

### Debug your skill

There are a couple of ways to debug your skill. You can use the Mycroft logger, or you can use standard Python debugging tools. Both methods are available in the Mycroft command-line interface (CLI), which is very handy for debugging.

#### Use Mycroft logger

To get started with the Mycroft logger, you just need to have the `MycroftSkill` imported because logger is part of the base class. This means that as long as you are working inside the class for your skill, logger is available. For example, the following code demonstrates how to create a very basic skill with a log entry:

```
from mycroft import MycroftSkill


class MyFakeSkill(MycroftSkill):
    def __init__(self):
        self.log.info("Skill starting up")


def create_skill():
    return MyFakeSkill()
```

Logger has all the log levels you might expect:

* **debug:** Provides the highest level of detail but is _not_ logged by default
* **info:** Provides general information when a skill is running as expected; it is always logged
* **warning:** Indicates something is wrong, but it is not fatal
* **error:** Fatal problems; they are displayed in red in the CLI
* **exception:** Similar to errors except they include stack traces

Along with showing in the CLI, logger writes to `skills.log`. The file's location varies depending on how you installed Mycroft. Common locations are `/var/log/mycroft/skills.log`, `~/snap/mycroft/common/logs/skills.log`, and `/var/opt/mycroft/skills.log`.

There may be times when you want to use the Mycroft logger outside the instantiated class. For example, if you have some global functions defined outside the class, you can import `LOG` specifically:

```
from mycroft import MycroftSkill
from mycroft.util import LOG


def my_global_funct():
    LOG.info("This is being logged outside the class")


class MyFakeSkill(MycroftSkill):
    def __init__(self):
        self.log.info("Skill starting up")


def create_skill():
    return MyFakeSkill()
```
#### Use Python's debugging tools

If you want something that stands out more, you can use the built-in Python `print()` statements to debug. I have found that there are occasions where the Mycroft logger is slow to produce output. Other times, I just want something that jumps out at me visually. In either case, I prefer using `print()` statements when I am debugging outside an IDE.

Take the following code, for example:

```
if category_name is None:
    self.log.info("---------------> Adding %s to %s" % (item_to_add, list_name))
    print("-------------> Adding %s to %s" % (item_to_add, list_name))
```

This produces the following output in the `mycroft-cli-client`:

```
~~~~ings:104 | Skill settings successfully saved to /opt/mycroft/skills/fallback-wolfram-alpha.mycroftai/settings.json
~~~~1 | mycroft.skills.mycroft_skill.mycroft_skill:handle_settings_change:272 | Updating settings for skill AlarmSkill
~~~~save_settings:104 | Skill settings successfully saved to /opt/mycroft/skills/mycroft-alarm.mycroftai/settings.json
10:50:38.528 | INFO | 51831 | ConfigurationSkill | Remote configuration updated
10:50:43.862 | INFO | 51831 | OurGroceriesSkill | ---------------> Adding hot dogs to my shopping
---------------> Adding hot dogs to my shopping
~~~~7.654 | INFO | 51831 | mycroft.skills.skill_loader:reload:108 | ATTEMPTING TO RELOAD SKILL: ourgroceries-skill
~~~~831 | mycroft.skills.skill_loader:_execute_instance_shutdown:146 | Skill ourgroceries-skill shut down successfully
```

I find that, as the text scrolls, it is much easier to visually identify a print statement that does not have the uniform header of the other messages. This is a personal preference and not meant as any sort of recommendation for programming best practices.

### Get input from users

Now that you know how to see output from your skill, it's time to get some environment-specific information from your users. In many cases, your skill will need some user information to function properly. Most of the time, this is a username and password. Often, this information is required for the skill to initialize properly.

#### Get user input with internet-connected Mycroft

If your Mycroft device has a connection to the internet, you can use Mycroft's web UI to enter user information. Log into <https://account.mycroft.ai> and navigate to the [skills][10] section. Once you have configured your skill correctly, you will see something like this:

![Mycroft Web UI][11]

Here, you can discover which devices have your skill installed. In my case, there are two devices: `Arch Pi4` and `Asus`. There are also input text boxes to get information from the user.

This interface is created automatically if you have configured Mycroft's Settings file. You have two choices for file types: you can create a `settingsmeta.yaml` or a `settingsmeta.json`. I prefer the YAML syntax, so that is what I used for this project. Here is my `settingsmeta.yaml` for this skill:

```
skillMetadata:
  sections:
    - name: OurGroceries Account
      fields:
        - type: label
          label: "Provide your OurGroceries username/password and then Connect with the button below."
        - name: user_name
          type: text
          label: username
          value: ''
        - name: password
          type: password
          label: Ourgroceries password
          value: ''
        - name: default_list
          type: text
          label: Default Shopping List
          value: ''
```

The structure of this file is pretty easy to understand. Each file must start with a `skillMetadata` heading. Next, there is a `sections` heading. Every new section is denoted by `- name:`, which is YAML syntax for an item on a list. Above, there is only a single section called `OurGroceries Account`, but you can have as many sections as you want.

Fields are used to both convey and store information. A field can be as simple as a label, which can provide an instruction to the user. More interesting for this skill, however, are the `text` and `password` fields. Text fields allow the user to view what they are typing and are displayed in plain text. This is suitable for non-sensitive information. Password fields are not specific to passwords but are intended to hide sensitive information.

After the users enter their information and click the `save` button, Mycroft replaces the `settings.json` file created the first time the skill initializes. The new file contains the values the user input in the web UI. The skill will also use this file to look up credentials and other information. If you are having problems using the correct values in your skill, take a look at the `settings.json` file for proper naming of variables and whether or not values are being stored in the JSON file.
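To see how that lookup works end to end, here is a small standalone sketch using only the Python standard library (the file path and the example values are hypothetical; the real `settings.json` lives in the skill's directory and is written by Mycroft):

```
import json
import os
import tempfile

# Simulate what Mycroft writes after the user saves the web form;
# the keys mirror the field names declared in settingsmeta.yaml.
example = {"user_name": "robin", "password": "s3cret", "default_list": "my shopping"}

path = os.path.join(tempfile.mkdtemp(), "settings.json")
with open(path, "w") as f:
    json.dump(example, f)

# Reading a value back, the same way self.settings.get() resolves a key:
with open(path) as f:
    settings = json.load(f)

print(settings.get("user_name"))    # robin
print(settings.get("missing_key"))  # None (a key absent from the web form)
```

If a key comes back as `None` unexpectedly, the name in your code does not match the name in `settingsmeta.yaml`.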

#### Get user input with offline Mycroft

As you may have surmised, without internet connectivity, it is more difficult to receive information from end users. There are only a few options. First, you could write your skill such that, on the first run, it prompts the user for the information your skill requires. You could then write this out to `settings.json` if you wish to use the built-in settings parser, or you could write this to a file of your choice and your skill could handle the parsing. Be aware that if you write to `settings.json`, there is a chance that this file could be overwritten if Mycroft re-initializes your skill.

Another method is having static values in a `settings.json` or another file that is stored with the project. This has some obvious security implications, but if your repository is secure, this is a viable option.

The third and final option is to enable the user to edit the file directly. This could be done through Network File System (NFS) or [Samba][12] file sharing protocols, or you could simply grant the appropriate permissions to a secure shell (SSH) user who could use any Unix editor to make changes.

Since this project requires access to the internet, I will not explore these options. If you have questions, you can always engage the community on [Mattermost][13].

### Access settings from your skill

Provided that the other parts in the chain are working (i.e., the users updated their settings via the web UI, and Mycroft updated `settings.json` based on those settings), using user-provided settings is easy to understand.

As I mentioned in the [third article][5] (where I discussed the `__init__` and `initialize` methods), it is impossible to retrieve values from `settings.json` with the `__init__(self)` method. Therefore, you must use another method to handle the settings. In my case, I created an appropriately named `_create_initial_grocery_connection` method:

```
def _create_initial_grocery_connection(self):
    """
    This gets the username/password from the config file and gets the session cookie
    for any interactions
    :return: None
    """
    self.username = self.settings.get('user_name')
    self.password = self.settings.get('password')
    self.ourgroceries_object = OurGroceries(self.username, self.password)
    asyncio.run(self.ourgroceries_object.login())
```

As you can see, you can extract information from `settings.json` by using `self.settings.get()`. The only thing to note is that the value you pass in _must_ match the name in `settingsmeta.yaml`. In this case, because I am not using the username or password outside this method, I could have opted to not make these variables part of the class scope (i.e., I could have called them `password` instead of `self.password`). This is because I am setting the `ourgroceries_object` to the class scope, and it contains all the information required for the rest of the skill to function.

### Wrapping up

Voice assistants are expanding into a multi-million (if not -billion) dollar business, and some analysts think a majority of homes in the next few years will have one (or more). With Apple, Google, Facebook, and others frequently in the news for privacy violations, not to mention the constant stream of data breaches reported, it is important to have an open source, privacy-focused alternative to the big players. Mycroft puts your privacy first, and its small but dedicated team of contributors is making inroads into the most common scenarios for voice assistants.

This series dove into the nitty-gritty of skill development, talking about the importance of thinking things through before you start and having a good outline. Knowing where you are going in the big picture helps you organize your code. Breaking the tasks down into individual pieces is also a key part of your strategy. Sometimes, it's a good idea to write bits or significant chunks outside the Mycroft skill environment to ensure that your code will work as expected. This is not necessary but can be a great starting point for people who are new to skill development.

The series also explored intent parsers and how to understand when to use each one. The [Padatious][14] and [Adapt][15] parsers each have strengths and weaknesses.

* Padatious intents rely on phrases and entities within those phrases to understand what the user is attempting to accomplish, and they are often the default used for Mycroft skills.
* On the other hand, Adapt uses regular expressions to accomplish similar goals. When you need Mycroft to be context-aware, Adapt is the only way to go. It is also extremely good at parsing complex utterances. However, you need to take great care when using regular expressions, or you will end up with unexpected results.
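That caution about regular expressions applies beyond Mycroft; as a generic illustration (plain Python `re`, not the Adapt API), a greedy pattern can silently capture far more of an utterance than intended:

```
import re

utterance = "add milk to my shopping list and eggs to the pantry list"

# Greedy: .* backtracks to the LAST viable " to ", swallowing most of the sentence.
greedy = re.match(r"add (?P<item>.*) to (?P<list>.*) list", utterance)

# Non-greedy: .*? stops at the first viable match, capturing what was intended.
lazy = re.match(r"add (?P<item>.*?) to (?P<list>.*?) list", utterance)

print(greedy.group("item"))  # milk to my shopping list and eggs
print(lazy.group("item"))    # milk
```

Both patterns match, so nothing fails loudly; only the captured groups differ, which is exactly the kind of surprise that shows up later as strange skill behavior.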

I also covered the basics of dealing with project dependencies. It's an important step in complex skill development to ensure that a skill has all the proper dependencies to work. Ensuring maximum portability is paramount for a skill, and dependency resolution is a key part of that, as your skill may not work properly with unsatisfied dependencies.

Finally, I explained how to get skill-specific settings from users, whether the device is internet-connected or not. Which method you choose really depends on your use case.

While it was not my aim to provide an encyclopedia of Mycroft skill development, by working through this series, you should have a very solid foundation for developing most skills you want to create. I hope the concrete examples in this series will show you how to handle the majority of tasks you may want to accomplish during skill development. I didn't go line-by-line through the whole skill, but the code is hosted on [GitLab][16] if you'd like to explore it further. Comments and questions are always welcome. I am very much still learning and growing as a fledgling Mycroft developer, so hit me up on [Twitter][17] or the [Mycroft Mattermost][18] instance, and let's learn together!
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/mycroft-voice-skill

作者:[Steve Ovens][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/stratusss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
[2]: https://mycroft.ai/
[3]: https://opensource.com/article/20/6/open-source-voice-assistant
[4]: https://opensource.com/article/20/6/mycroft
[5]: https://opensource.com/article/20/6/mycroft-voice-assistant-skill
[6]: https://www.ourgroceries.com/overview
[7]: https://opensource.com/article/20/6/mycroft-intent-parsers
[8]: https://pypi.org/
[9]: https://github.com/forslund/white-house-adventure/blob/6eba5df187bc8a7735b05e93a28a6390b8c6f40c/requirements.sh
[10]: https://home.mycroft.ai/skills
[11]: https://opensource.com/sites/default/files/mycroft_skills_webui.png (Mycroft Web UI)
[12]: https://www.samba.org/
[13]: https://chat.mycroft.ai/community/channels/skills
[14]: https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/intents/padatious-intents
[15]: https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/intents/adapt-intents
[16]: https://gitlab.com/stratus-ss/mycroft-ourgroceries-skill
[17]: https://twitter.com/linuxovens
[18]: https://chat.mycroft.ai/community/channels/town-square
@ -0,0 +1,198 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Real-time noise suppression for video conferencing)
[#]: via: (https://fedoramagazine.org/real-time-noise-suppression-for-video-conferencing/)
[#]: author: (lkiesow https://fedoramagazine.org/author/lkiesow/)

Real-time noise suppression for video conferencing
======

![][1]

With people doing video conferencing all day, good audio has recently become much more important. The best option is obviously a proper audio studio. Unfortunately, this is not something you will always have, and you might need to make do with a much simpler setup.

In such situations, a noise reduction filter that keeps your voice but filters out ambient noises (street noise, keyboard, …) can be very helpful. In this article, we will take a look at how to integrate such a filter into PulseAudio so that it can easily be used in all applications with no additional requirements on their part.

_Example of switching on noise reduction_

### The Idea

We set up [PulseAudio][2] for live noise-reduction using [an LADSPA filter][3].

This creates a new PulseAudio source which can be used as a virtual microphone. Other applications will not even realize that they are not dealing with physical devices, and you can select it as if you had an additional microphone connected.

### Terminology

Before we start, it is good to know the following two PulseAudio terms to better understand what we are doing:

* _source_ – represents a source from which audio can be obtained, like a microphone
* _sink_ – represents a consumer of audio, like a speaker

Each PulseAudio sink also has a source called monitor which can be used to get the audio put into that sink. For example, you could have audio put out by your headphones while using the monitor of your headphone device to record the output.

### Installation

While PulseAudio is usually pre-installed, we need to get the LADSPA filter for noise reduction. You can [build and install the filter manually][3], but it is much easier to install the filter via Fedora Copr:

```
sudo dnf copr enable -y lkiesow/noise-suppression-for-voice
sudo dnf install -y ladspa-realtime-noise-suppression-plugin
```

Note that the Copr projects are not maintained and quality-controlled by Fedora directly.

### Enable Noise Reduction Filter

First, you need to identify the name of the device you want to apply the noise reduction to. In this example, we'll use the RODE NT-USB microphone as input.

```
$ pactl list sources short
0 alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo …
1 alsa_output.usb-0c76_USB_Headphone_Set-00.analog-stereo.monitor …
```

Next, we create a new PulseAudio sink, the filter, and a loopback between microphone and filter. That way, the output from the microphone is used as input for the noise reduction filter. The output from this filter will then be available via the null sink monitor.

To visualize this, here is the path the audio will travel from the microphone to, for example, a browser:

```
mic → loopback → ladspa filter → null sink [monitor] → browser
```

While this sounds complicated, it is set up with just a few simple commands:

```
pacmd load-module module-null-sink \
    sink_name=mic_denoised_out
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=50
pacmd load-module module-loopback \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2
```

That's it. You should now be able to select the new device.

![New recording devices in pavucontrol][4]

### Chromium

Unfortunately, browsers based on Chromium will hide monitor devices by default. This means that we cannot select the newly created noise-reduction device in the browser. One workaround is to select another device first, then use pavucontrol to assign the noise-reduction device afterward.

But if you do this on a regular basis, you can work around the issue by using the _remap-source_ module to convert the null sink monitor to a regular PulseAudio source. The module is actually meant for remapping audio channels – e.g., swapping left and right channels on stereo audio – but we can just ignore these additional capabilities and create a new source similar to the monitor:

```
pacmd load-module module-remap-source \
    source_name=denoised \
    master=mic_denoised_out.monitor \
    channels=2
```

The remapped device delivers audio identical to the original one, so assigning this with PulseAudio will yield no difference. But this device does now show up in Chromium:

![Remapped monitor device in Chrome][5]

### Improvements

While the guide above should help you with all the basics and will get you a working setup, there are a few things you can improve. The commands above should generally work as shown, but you might need to experiment with the following suggestions.

#### Latency

By default, the loopback module will introduce a slight audio latency. You can hear this by running an echo test:

```
gst-launch-1.0 pulsesrc ! pulsesink
```

You might be able to reduce this latency by using the _latency_msec_ option when loading the _loopback_ module:

```
pacmd load-module module-loopback \
    latency_msec=1 \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2
```

#### Voice Threshold

The noise reduction library provides controls for a voice threshold. The filter will return silence if the probability of sound being voice is lower than this threshold. In other words, the higher you set this value, the more aggressive the filter becomes.

You can pass different thresholds to the filter by supplying them as a control argument when the _ladspa-sink_ module is being loaded:

```
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=95
```

#### Mono vs Stereo

The example above will work with stereo audio. When working with a simple microphone, you may want to use a mono signal instead.

For switching to mono, use the following values instead when loading the different modules:

* _label=noise_suppressor_mono_ – when loading the _ladspa-sink_ module
* _channels=1_ – when loading the _loopback_ and _remap-source_ modules
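Putting those two points together, a mono version of the earlier setup might look like this (a sketch only; replace the `source=` value with your own microphone's name from `pactl list sources short`, and the final _remap-source_ command is only needed for the Chromium workaround above):

```
pacmd load-module module-null-sink \
    sink_name=mic_denoised_out
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_mono \
    plugin=librnnoise_ladspa \
    control=50
pacmd load-module module-loopback \
    source=<your-mic-source-name> \
    sink=mic_raw_in \
    channels=1
pacmd load-module module-remap-source \
    source_name=denoised \
    master=mic_denoised_out.monitor \
    channels=1
```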

#### Persistence

Settings made with the _pacmd_ command are not persistent and will disappear if PulseAudio is restarted. You can add these commands to your PulseAudio configuration file if you want them to be persistent. For that, edit _~/.config/pulse/default.pa_ and add your commands like this:

```
.include /etc/pulse/default.pa

load-module module-null-sink sink_name=mic_denoised_out
load-module module-ladspa-sink …
…
```

### Limitations

If you listen to the example above, you will notice that the filter reliably reduces background noise. But unfortunately, depending on the situation, it can also cause a loss in voice quality.

The following example shows the results with some street noise. Activating the filter reliably removes the noise, but in this example, the voice quality noticeably drops as well:

_Noise reduction of constant street noise_

In conclusion, this can help if you find yourself in less than ideal audio scenarios. It is also very effective if you are not the main speaker in a video conference and you do not want to constantly mute yourself.

Still, good audio equipment and a quiet environment will always be better.

Have fun.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/real-time-noise-suppression-for-video-conferencing/

作者:[lkiesow][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/lkiesow/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/noise-reduction-816x345.png
[2]: https://freedesktop.org/wiki/Software/PulseAudio/
[3]: https://github.com/werman/noise-suppression-for-voice
[4]: https://fedoramagazine.org/wp-content/uploads/2020/07/pavucontrol-white-1024x379.png
[5]: https://fedoramagazine.org/wp-content/uploads/2020/07/chrome-1024x243.png
@ -0,0 +1,154 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Manjaro vs Arch Linux: What’s the Difference? Which one is Better?)
|
||||
[#]: via: (https://itsfoss.com/manjaro-vs-arch-linux/)
|
||||
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
|
||||
|
||||
Manjaro vs Arch Linux: What’s the Difference? Which one is Better?
|
||||
======
|
||||
|
||||
_**Manjaro or Arch Linux? If Manjaro is based on Arch, how is it different from Arch? Read this comparison article to find out how Arch and Manjaro differ.**_
|
||||
|
||||
Most of the [beginner-friendly Linux distributions][1] are based on Ubuntu. As Linux users gain more experience, some try their hand at more ‘advanced’ distributions, mostly in the ‘Arch domain’.
|
||||
|
||||
This Arch domain is dominated by two distributions: [Arch Linux][2] itself and [Manjaro][3]. There are other [Arch-based Linux distributions][4], but none are as popular as these two.
|
||||
|
||||
If you are confused between Arch and Manjaro, this comparison should help you out.
|
||||
|
||||
### Manjaro and Arch Linux: How different or similar are they?
|
||||
|
||||
![][5]
|
||||
|
||||
I have tried to compare these two distributions on various points. Please keep in mind that I have not exclusively focused on the differences. I have also pointed out where they are similar.
|
||||
|
||||
#### Both are rolling release distributions but not of the same kind
|
||||
|
||||
There are no “releases” every few months or years in Arch and Manjaro like there are in Ubuntu or Fedora. Just [keep your Arch or Manjaro system updated][6] and you’ll always have the latest version of the operating system and its software packages. You never need to worry about upgrading your installed version.
|
||||
|
||||
If you are planning to do a fresh install at some point, keep in mind that both Manjaro and Arch update the installation ISO regularly. This is called an ISO refresh, and it ensures that newly installed systems don’t have to install all the system updates made available in the last few months.
|
||||
|
||||
But there is a difference between the rolling release model of Arch and Manjaro.
|
||||
|
||||
Manjaro maintains its own independent repositories, apart from the community-maintained Arch User Repository (AUR). These repositories also contain software packages not provided by Arch. Popular packages initially provided by the official Arch repositories are first thoroughly tested (and, if necessary, patched) before being released to Manjaro’s own stable repositories for public use, usually about two weeks behind Arch.
|
||||
|
||||
![][7]
|
||||
|
||||
A consequence of accommodating this testing process is that Manjaro will never be quite as bleeding-edge as Arch. But then, it makes Manjaro slightly more stable than Arch and less susceptible to breaking your system.
|
||||
|
||||
#### Package Management – Pacman and Pamac
|
||||
|
||||
Both Arch and Manjaro ship with a command-line package management tool called Pacman, which is written in C and uses tar to package applications. In other words, you can [use the same pacman commands][8] for managing packages in both distributions.
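For instance, a few everyday pacman operations that work identically on Arch and Manjaro (the package and search terms below are just examples):

```
sudo pacman -Syu          # sync the repositories and upgrade all packages
sudo pacman -S firefox    # install a package
sudo pacman -Rs firefox   # remove a package along with unneeded dependencies
pacman -Ss editor         # search the repositories for a keyword
```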
|
||||
|
||||
In addition to Pacman, Manjaro has also developed a GUI application called Pamac for easily installing software on Manjaro. This makes using Manjaro easier than Arch.
|
||||
|
||||
![Pamac GUI Package Manager by Manjaro][9]
|
||||
|
||||
Do note that you may also install Pamac from the AUR in Arch Linux, but the tool is an integral part of Manjaro.
|
||||
|
||||
#### Manjaro Hardware Detection Tool (MHWD)
|
||||
|
||||
Pamac is not the only GUI tool developed by the Manjaro team to help its users. Manjaro also has a dedicated tool that detects hardware and suggests drivers for it.
|
||||
|
||||
![Manjaro hardware configuration GUI tool][10]
|
||||
|
||||
This hardware detection tool is so useful that it can be one of the [main reasons why Manjaro is loved by the community][11]. It is insanely easy to detect, install, use, or switch from one driver to another, and it makes hardware compatibility an issue of the past.
|
||||
|
||||
#### Drivers support
|
||||
|
||||
Manjaro offers great support for GPU drivers. As we all know, Linux has had issues installing drivers for many years (especially Nvidia drivers).
|
||||
|
||||
While [installing Manjaro][12], it gives you the option to start with open source (free) or proprietary (non-free) graphics drivers. When you choose “non-free”, it automatically detects your graphics card and installs the most appropriate driver for it, so the GPU works out of the box.
|
||||
|
||||
Installing graphics drivers is easy even after installing Manjaro, thanks to the hardware detection tool you saw in the previous section.
|
||||
|
||||
And if you have a system with an Nvidia Optimus card (hybrid GPU), it works fine with Manjaro. You get plenty of options to get it working.
|
||||
|
||||
In Arch Linux, you have to find and install the appropriate drivers for your machine yourself.
|
||||
|
||||
#### Access to the Arch User Repository (AUR)
|
||||
|
||||
[Arch User Repository][13] (AUR) is a community-driven repository for users of Arch-based Linux distributions. The AUR was created to organize and share new packages from the community and to help accelerate popular packages’ inclusion into the [community repository][14].
|
||||
|
||||
A good number of new packages that enter the official repositories start in the AUR. In the AUR, users are able to contribute their own package builds (PKGBUILD and related files).
|
||||
|
||||
You can use AUR in both Arch and Manjaro.
|
||||
|
||||
#### Desktop environments
|
||||
|
||||
Alright! You can use virtually any desktop environment on any Linux distribution. Arch and Manjaro are no exceptions.
|
||||
|
||||
However, a dedicated desktop flavor or version makes it easier for users to have a seamless experience with the said desktop environment.
|
||||
|
||||
The default Arch ISO doesn’t include any desktop environment. For example, if you want to [install KDE on Arch Linux][15], you will have to download and install it either while [installing Arch Linux][16] or afterward.
|
||||
|
||||
Manjaro, on the other hand, provides different ISOs for desktop environments like Xfce, KDE and GNOME. The Manjaro community also maintains ISOs for MATE, Cinnamon, LXDE, LXQt, OpenBox and more.
|
||||
|
||||
#### Installation procedure
|
||||
|
||||
![Arch Live Boot][17]
|
||||
|
||||
Manjaro is based on Arch Linux and it is Arch compatible, but **it is not Arch**. It’s not even a pre-configured version of Arch with just a graphical installer. Arch doesn’t come with the usual comfort out of the box, which is why most people prefer something easier. Manjaro offers you an easy entry and supports you on your way to becoming an experienced user or power user.
|
||||
|
||||
#### Documentation and support
|
||||
|
||||
Both Arch and Manjaro have their own wiki pages and support forums to help their respective users.
|
||||
|
||||
While Manjaro has a decent [wiki][18] for documentation, the [Arch wiki][19] is in a different league altogether. You can find detailed information on every aspect of Arch Linux in the Arch wiki.
|
||||
|
||||
#### Targeted audience
|
||||
|
||||
The key difference is that [Arch is aimed at users with a do-it-yourself attitude][20] who are willing to read the documentation and solve their own problems.
|
||||
|
||||
Manjaro, on the other hand, is targeted at Linux users who are not that experienced or who don’t want to spend time assembling the operating system.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Some people often say that Manjaro is for those who can’t install Arch. But I don’t think that’s true. Not everyone wants to configure Arch from scratch or has the time to do so.
|
||||
|
||||
Manjaro is definitely a beast, but a very different kind of beast than Arch. **Fast, powerful, and always up to date**, Manjaro provides all the benefits of an Arch operating system, but with a special emphasis on **stability, user-friendliness and accessibility** for newcomers and experienced users.
|
||||
|
||||
Manjaro doesn’t take its minimalism as far as Arch Linux does. With Arch, you start with a blank canvas and adjust each setting manually. When the default Arch installation completes, you have a running Linux instance at the command line. Want a [graphical desktop environment][21]? Go right ahead; there’s plenty to choose from. Pick one, install it, and configure it. You learn so much doing that, especially if you are new to Linux. You get a superb understanding of how the system fits together and why things are installed the way they are.
|
||||
|
||||
I hope you have a better understanding of Arch and Manjaro now. You understand how they are similar and yet different.
|
||||
|
||||
_**I have voiced my opinion. Don’t hesitate to share yours in the comment section. Between Arch and Manjaro, which one do you prefer, and why?**_
|
||||
|
||||
_With additional inputs from Abhishek Prakash._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/manjaro-vs-arch-linux/
|
||||
|
||||
作者:[Dimitrios Savvopoulos][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/dimitrios/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/best-linux-beginners/
|
||||
[2]: https://www.archlinux.org/
|
||||
[3]: https://manjaro.org/
|
||||
[4]: https://itsfoss.com/arch-based-linux-distros/
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/arch-vs-manjaro.png?ssl=1
|
||||
[6]: https://itsfoss.com/update-arch-linux/
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/repositories.png?ssl=1
|
||||
[8]: https://itsfoss.com/pacman-command/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/Pamac.png?resize=800%2C534&ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/hardware-detection.png?ssl=1
|
||||
[11]: https://itsfoss.com/why-use-manjaro-linux/
|
||||
[12]: https://itsfoss.com/install-manjaro-linux/
|
||||
[13]: https://itsfoss.com/aur-arch-linux/
|
||||
[14]: https://wiki.archlinux.org/index.php/Community_repository
|
||||
[15]: https://itsfoss.com/install-kde-arch-linux/
|
||||
[16]: https://itsfoss.com/install-arch-linux/
|
||||
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/Arch-live-boot.jpg?ssl=1
|
||||
[18]: https://wiki.manjaro.org/index.php?title=Main_Page
|
||||
[19]: https://wiki.archlinux.org/
|
||||
[20]: https://itsfoss.com/why-arch-linux/
|
||||
[21]: https://itsfoss.com/best-linux-desktop-environments/
|
@ -1,71 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (JonnieWayy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What you need to know about Rust in 2020)
|
||||
[#]: via: (https://opensource.com/article/20/1/rust-resources)
|
||||
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)
|
||||
|
||||
2020 年关于 Rust 你所需要知道的
|
||||
======
|
||||
尽管许多程序员长期以来一直将 Rust 用于业余爱好项目,但正如 Opensource.com 上许多有关 Rust 的热门文章所解释的那样,该语言在 2019 年吸引了主要技术公司的支持。
|
||||
![用笔记本电脑的人][1]
|
||||
|
||||
一段时间以来,[Rust][2] 在诸如 Hacker News 之类的网站上引起了程序员的大量关注。尽管许多人一直喜欢在业余爱好项目中[使用该语言][3],但直到 2019 年它才开始在工业界流行,那时情况才真正开始有所转变。
|
||||
|
||||
在过去的一年中,包括 [Microsoft][4]、[Facebook][5] 和 [Intel][6] 在内的许多大公司都站出来支持 Rust,许多[较小的公司][7]也注意到了这一点。2016 年,作为欧洲最大的 Rust 大会 [RustFest][8] 第一届的主持人,我没有见到任何一个在工作中使用 Rust 却不在 Mozilla 工作的人。三年后,似乎我在 RustFest 2019 上交流过的每个人都在各自公司的日常工作中使用 Rust,无论是作为游戏开发人员、银行的后端工程师、开发者工具的创造者,还是其他的一些岗位。
|
||||
|
||||
在 2019 年, Opensource.com 也通过报道 Rust 日益增长的受欢迎程度而发挥了作用。万一您错过了它们,这里是过去一年里 Opensource.com 上关于 Rust 的热门文章。
|
||||
|
||||
### 使用 rust-vmm 构建未来的虚拟化堆栈
|
||||
|
||||
Amazon 的 [Firecracker][9] 是支持 AWS Lambda 和 Fargate 的虚拟化技术,完全使用 Rust 编写。这项技术的作者之一 Andreea Florescu 在 [**《使用 rust-vmm 构建未来的虚拟化堆栈》**][10]中提供了对 Firecracker 及其相关技术的深刻见解。
|
||||
|
||||
Firecracker 最初是 Google [CrosVM][11] 的一个分支,但是很快由于两个项目的不同需求而分化。尽管如此,在这个项目与其他用 Rust 所编写的虚拟机管理器(VMM)之间,仍有许多可以很好地共享的通用组件。考虑到这一点,[rust-vmm][12] 项目起初就是为了让 Amazon、Google、Intel、Red Hat 以及其余开源社区能够相互共享通用的 Rust “crates”(即程序包)。其中包括 KVM 接口(Linux 虚拟化 API)、Virtio 设备支持以及内核加载程序。
|
||||
|
||||
看到软件行业的一些巨头围绕用 Rust 编写的通用技术栈协同工作,实在是很神奇。鉴于这种和其他[使用 Rust 编写的技术堆栈][13]之间的伙伴关系,到了 2020 年,看到更多这样的情况我不会感到惊讶。
|
||||
|
||||
### 为何选择 Rust 作为你的下一门编程语言
|
||||
|
||||
采用一门新语言并非易事,尤其是在有着建立已久的技术栈的大公司。我很高兴写了[《为何选择 Rust 作为你的下一门编程语言》][14],文中讲述了在有这么多有趣的编程语言可选的情况下,Microsoft 为何选择了采用 Rust。
|
||||
|
||||
选择编程语言涉及许多不同的标准:从技术上到组织上,甚至是情感上。其中一些标准比其他的更容易衡量。比方说,了解技术变更的成本(例如调整构建系统和构建新工具)要比理解组织或情感问题(例如高效或快乐的开发人员将如何使用这种新语言)容易得多。此外,易于衡量的标准通常与成本相关,而难以衡量的标准通常以收益为导向。这往往导致成本在决策过程中变得越来越重要,即使这并不意味着成本比收益更重要,只是成本更容易衡量。结果就是,公司不太可能采用新的语言。
|
||||
|
||||
然而,Rust 最大的好处之一是很容易衡量其编写安全且高性能系统软件的能力。鉴于 Microsoft 70% 的安全漏洞是由于 Rust 旨在防止的内存安全问题导致的,而且这些问题每年都使公司付出了几十亿美元的代价,很容易衡量并理解采用这门语言的好处。
|
||||
|
||||
是否会在 Microsoft 全面采用 Rust 尚待观察,但是仅凭着相对于现有技术具有明显且可衡量的好处这一事实, Rust 的未来一片光明。
|
||||
|
||||
### 2020 年的 Rust
|
||||
|
||||
尽管要达到 C++ 等语言的流行度还有很长的路要走,但 Rust 实际上已经开始在工业界引起关注。我希望更多公司在 2020 年开始采用 Rust。Rust 社区现在必须着眼于欢迎开发人员和公司加入社区,同时确保将推动该语言发展到现在的一切都保留下来。
|
||||
|
||||
Rust 不仅仅是一个编译器和一组库,而是一群想要使系统编程变得容易、安全而且有趣的人。即将到来的这一年,对于 Rust 从业余爱好语言到软件行业所使用的主要语言之一的转型至关重要。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/rust-resources
|
||||
|
||||
作者:[Ryan Levick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[JonnieWayy](https://github.com/JonnieWayy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ryanlevick
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
|
||||
[2]: http://rust-lang.org/
|
||||
[3]: https://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-languages
|
||||
[4]: https://youtu.be/o01QmYVluSw
|
||||
[5]: https://youtu.be/kylqq8pEgRs
|
||||
[6]: https://youtu.be/l9hM0h6IQDo
|
||||
[7]: https://oxide.computer/blog/introducing-the-oxide-computer-company/
|
||||
[8]: https://rustfest.eu
|
||||
[9]: https://firecracker-microvm.github.io/
|
||||
[10]: https://opensource.com/article/19/3/rust-virtual-machine
|
||||
[11]: https://chromium.googlesource.com/chromiumos/platform/crosvm/
|
||||
[12]: https://github.com/rust-vmm
|
||||
[13]: https://bytecodealliance.org/
|
||||
[14]: https://opensource.com/article/19/10/choose-rust-programming-language
|
@ -1,62 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (JonnieWayy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How failure-driven development makes you successful)
|
||||
[#]: via: (https://opensource.com/article/20/3/failure-driven-development)
|
||||
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
|
||||
|
||||
失败驱动的开发如何使你成功
|
||||
======
|
||||
我是词典里 “failure” 一词旁边的插图,这就是为什么我擅长我的工作。
|
||||
![failure sign at a party, celebrating failure][1]
|
||||
|
||||
我的职称是高级软件工程师,但这不是我最亲近的同事对我的称呼。由于我摧毁的一切,他们管我叫“樱桃炸弹”。我定期会遇到的失败已经可以追溯到我们的季度性收益和停机时间。字面上看,我是你所读到过的生产灾难,里面写道:“无论何时何地,什么事情永远都不要做。”
|
||||
|
||||
我的职业生涯始于服务台,在那里我写了一些损坏高端公司服务器的循环。我将生产应用程序关闭了长达八个小时而没有发出警告,并且在试图使得情况好转的过程中摧毁了无数个集群,其中两个是因为我输错了一些东西。
|
||||
|
||||
我是我们在 [Kubernetes][2] 中设有灾难恢复(DR)集群的原因。我是个混沌工程师,当我们有从未经过停机恢复计划测试的应用程序时,我会丝毫不带警告地去教人们如何快速采取行动并进行故障排除。我作为可能失败的最好例子而存在,这实际上是有史以来最酷的事情。
|
||||
|
||||
### Jess 和消失的 K8s 集群
|
||||
|
||||
我的官方职责之一涉及到我们的应用架构。对于任何形式的架构改动,我都要进行代码的编写与测试,看看有什么可能性。近来,温和一点说,这成了我老板史诗级的痛苦。
|
||||
|
||||
|
||||
|
||||
我们的大多数基础架构都运行在 Kubernetes 上,而 Kubernetes 以其弹性著称。尽管有这样的声誉,我还是让两个集群,好吧,消失了。你可能会好奇我是怎么做到的,其实很简单:`terraform destroy`。我们通过 [Terraform][3] 以代码的方式管理我们的基础架构,不需要任何软件知识也能明白 `destroy`(销毁)命令会做出什么坏事。在你恐慌之前,那只是开发集群,所以生活仍在继续。
|
||||
|
||||
考虑到这一点,你有理由问我为什么还没丢掉饭碗,以及为什么我要写下这些事情。这很好回答:我仍然有这份工作,而且我的基础架构代码比起初更新了,工作得也更好、更快。我写下这些事情,是因为每个人都时常会遭遇失败,这是非常非常正常的。如果你没有定期遭遇失败,我认为你并没有足够努力地学习。
|
||||
|
||||
### 破坏东西并训练人们
|
||||
|
||||
你可能还会认为永远不会有人让我去训练任何人。那是最糟糕的主意,因为(就像我的团队开玩笑说的)你永远都不应该做我所做的事情。但是我的老板定期让我去训练新人。我甚至会给整个团队提供训练,用我们自己的基础架构或代码去教人们如何构建他们自己的基础架构。
|
||||
|
||||
原因如下:失败是你迈向成功的第一步。失败的教训绝不只是“备份是个绝佳的主意”。不,从失败中,你学会更快地恢复、更快地排除故障并且在你工作中取得惊人的进步。当你对自己的工作感到惊叹时,你就可以训练其他人,教他们关于什么事情不要做,并且帮助他们去理解一切的工作原理。由于你的经验,他们会比你开始的地方更进一步 —— 并且他们也很可能会以每个人都能从中学到东西的新颖、惊人、史诗般的方式失败。
|
||||
|
||||
### 你的成功取决于你的失败
|
||||
|
||||
没有人生来就具有软件工程和云基础架构方面的天赋,就像没有人天生就会走路。我们都是从翻滚和碰撞中开始的。从那里开始,我们学会爬行,然后能够站立一会儿。当我们开始走路后,我们会跌倒并且擦伤膝盖,撞到手肘,还有 —— 至少在我哥哥的情况下 —— 走着走着撞上桌子的尖角,然后在眉毛中间缝了针。
|
||||
|
||||
凡事都需要时间去学习。一路上阅读手边能获得的一切来帮助你,但这永远只是个开始。完美是无法实现的幻想,你必须通过失败来取得成功。
|
||||
|
||||
每走一步,我的失败都教会我如何把事情做得更好。
|
||||
|
||||
最终,你的成功正如你失败的总和一样多,因为这标志着你成功的程度。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/failure-driven-development
|
||||
|
||||
作者:[Jessica Cherry][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[JonnieWayy](https://github.com/JonnieWayy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jrepka
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
|
||||
[2]: https://www.redhat.com/en/topics/containers/what-is-kubernetes
|
||||
[3]: https://github.com/hashicorp/terraform
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (JonnieWayy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Balloon-powered internet service goes live in Kenya)
|
||||
[#]: via: (https://www.networkworld.com/article/3566295/balloon-powered-internet-service-goes-live-in-kenya.html)
|
||||
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
|
||||
|
||||
气球驱动的互联网服务在肯尼亚上线
|
||||
======
|
||||
Alphabet 的衍生产品 [Loon][1] 使用气球创建了一个由蜂窝塔构成的浮动网络。
|
||||
|
||||
ISP [Telkom Kenya][2] 正在启动第一个使用气球的商业化 4G LTE 服务,这些气球漂浮在平流层中,充当蜂窝塔网络。
|
||||
|
||||
据 Alphabet 旗下衍生公司、潜在的技术提供商 [Loon][1] 的首席执行官 Alastair Westgarth 所说,这项服务起初将会覆盖肯尼亚接近 19000 平方英里的范围。Westgarth 在 [Medium][3] 上的一篇文章中说,将会有大约 35 个或更多的气球组成编队,它们持续不断地移动,漂浮在地表上方大约 12 英里的平流层中。“我们将 Loon 称为浮动的蜂窝塔网络。”Westgarth 说道。
|
||||
|
||||
Telkom Kenya 的首席执行官 Mugo Kibati 在一篇[新闻稿][4]中提到,传统互联网对肯尼亚的服务不足,这是采用这种传送方式的原因。“…… 具有互联网功能的气球能够为许多肯尼亚人提供连接,他们生活在服务不足或完全没有服务的偏远地区,因此一直处于不利地位。”Kibati 说道。远程医疗和在线教育是两个预期的用例。
|
||||
|
||||
在测试中,Loon 实现了 18.9 Mbps 的下行速度(延迟 19 毫秒)以及 4.74 Mbps 的上行速度。Westgarth 说,该服务能够用于“语音通话、视频通话、YouTube、WhatsApp、电子邮件、短信、网页浏览”和其他应用程序。
|
||||
|
||||
从更大的角度看,从平流层提供互联网服务对于[物联网( IoT )][5]来说是一个诱人的主张。在高海拔地区,网络覆盖范围可能会更广泛,并且覆盖范围可以随着需求的变化而变化(例如,采矿区的移动)。此外,要建立或处理的地面基础设施更少。 比方说,开发人员可以避免铺设电缆所需的私有财产纠纷。
|
||||
|
||||
可以想象,服务中断也更加可控。提供商可以启动另一台设备,而不必通过复杂的远程地面基础设施来跟踪故障。备用气球可以发挥作用,时刻准备着投入使用。
|
||||
|
||||
### 基于无人机的互联网交付
|
||||
|
||||
另一家正在探索大气层互联网的组织是软银( Softbank ),它称其 260 英尺宽的 HAWK30 无人机是“平流层中的浮动基站”。(参见相关故事:[软银计划到 2023 年实现无人机交付的物联网和互联网][6])
|
||||
|
||||
日本主要的电信公司对平流层传送的互联网感兴趣的原因之一是,其群岛易于发生自然灾害,例如地震。与传统的基站相比,地球上空的浮动基站更容易移动,从而可以更快、更灵活地应对自然灾害。
|
||||
|
||||
实际上,在一场灾害过后, Loon 的气球已成功用于提供互联网服务:例如, Loon 在 2017 年波多黎各的飓风 Maria 之后提供了连接。
|
||||
|
||||
Westgarth 说, Loon 的气球自最初开发以来已经走了很长一段路。现如今,发射是通过自动设备执行的,该设备可以每半小时将连接在地面站点的气球推到 60000 英尺高空,而不像以前那样人工进行。
|
||||
|
||||
机器学习算法会处理导航,以尝试向用户提供持续的服务。但是,这并非总是可能的,因为风(尽管在地面上没有那么大)和受限的空域都会影响覆盖范围,尽管 Westgarth 称之为“精心编排组织的气球舞蹈”。
|
||||
|
||||
此外,这些设备是太阳能供电的,这意味着它们仅仅能够在白天工作并提供互联网(或重新定位自身,或向其他气球传输互联网)。出于上述原因和其他的一些原因, Westgarth 和 Kibati 指出,气球必须扩大现有的基础设施和计划,但这并不是一个完整的解决方案。
|
||||
|
||||
Westgarth 说:“为了连接现在和将来需要它的所有人员和事物,我们需要开阔我们的思维;我们需要在连通性生态系统中增加新的一层。”
|
||||
|
||||
加入 [Facebook][7] 和 [LinkedIn][8] 上的 Network World 社区,以评论最首要的主题。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3566295/balloon-powered-internet-service-goes-live-in-kenya.html
|
||||
|
||||
作者:[Patrick Nelson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[JonnieWayy](https://github.com/JonnieWayy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Patrick-Nelson/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://loon.com/
|
||||
[2]: https://www.telkom.co.ke/
|
||||
[3]: https://medium.com/loon-for-all/loon-is-live-in-kenya-259d81c75a7a
|
||||
[4]: https://telkom.co.ke/telkom-and-loon-announce-progressive-deployment-loon-technology-customers-july
|
||||
[5]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
|
||||
[6]: https://www.networkworld.com/article/3405170/softbank-plans-drone-delivered-iot-and-internet-by-2023.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
227
translated/tech/20200728 Digging for DNS answers on Linux.md
Normal file
@ -0,0 +1,227 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Digging for DNS answers on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
在 Linux 上挖掘 DNS 应答中的秘密
|
||||
======
|
||||
|
||||
> dig 是一个强大而灵活的工具,用于查询域名系统(DNS)服务器。在这篇文章中,我们将深入了解它的工作原理以及它能告诉你什么。
|
||||
|
||||
![Laurie Avocado][1]
|
||||
|
||||
`dig` 是一款强大而灵活的查询 DNS 名称服务器的工具。它执行 DNS 查询,并显示参与该过程的名称服务器返回的应答以及与搜索相关的细节。系统管理员和 [DNS][3] 管理员经常使用 `dig` 来帮助排除 DNS 问题。在这篇文章中,我们将深入了解它的工作原理,看看它能告诉我们什么。
|
||||
|
||||
开始之前,对 DNS(域名系统)的工作方式有一个基本的印象是很有帮助的。它是全球互联网的关键部分,因为它提供了一种查找世界各地的服务器的方式,从而可以与之连接。你可以把它看作是互联网的地址簿,任何正确连接到互联网的系统,都应该能够使用它来查询任何正确注册的服务器的 IP 地址。
|
||||
|
||||
### dig 入门
|
||||
|
||||
Linux 系统上一般都默认安装了 `dig` 工具。下面是一个带有一点注释的 `dig` 命令的例子:
|
||||
|
||||
```
|
||||
$ dig www.networkworld.com
|
||||
|
||||
; <<>> DiG 9.16.1-Ubuntu <<>> www.networkworld.com <== 你使用的 dig 版本
|
||||
;; global options: +cmd
|
||||
;; Got answer:
|
||||
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6034
|
||||
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
|
||||
|
||||
;; OPT PSEUDOSECTION:
|
||||
; EDNS: version: 0, flags:; udp: 65494
|
||||
;; QUESTION SECTION: <== 你的查询细节
|
||||
;www.networkworld.com. IN A
|
||||
|
||||
;; ANSWER SECTION: <== 结果
|
||||
|
||||
www.networkworld.com. 3568 IN CNAME idg.map.fastly.net.
|
||||
idg.map.fastly.net. 30 IN A 151.101.250.165
|
||||
|
||||
;; Query time: 36 msec <== 查询用时
|
||||
;; SERVER: 127.0.0.53#53(127.0.0.53) <== 本地缓存解析器
|
||||
;; WHEN: Fri Jul 24 19:11:42 EDT 2020 <== 查询的时间
|
||||
;; MSG SIZE rcvd: 97 <== 返回的字节数
|
||||
```
|
||||
|
||||
如果你得到了一个这样的应答,是好消息吗?简短的回答是“是”。你得到了及时的回复。状态字段(`status: NOERROR`)显示没有问题。你正在连接到一个能够提供所要求的信息的名称服务器,并得到一个回复,告诉你一些关于你所查询的系统的重要细节。简而言之,你已经验证了你的系统和域名系统相处得很好。
|
||||
|
||||
其他可能的状态指标包括:
|
||||
|
||||
- `SERVFAIL`:被查询的名称存在,但没有数据或现有数据无效。
|
||||
- `NXDOMAIN`:所查询的名称不存在。
|
||||
- `REFUSED`:该区域的数据不存在于所请求的权威服务器中,并且在这种情况下,基础设施没有设置为提供响应。
|
||||
|
||||
下面是一个例子,如果你要查找一个不存在的域名,你会看到什么?
|
||||
|
||||
```
|
||||
$ dig cannotbe.org
|
||||
|
||||
; <<>> DiG 9.16.1-Ubuntu <<>> cannotbe.org
|
||||
;; global options: +cmd
|
||||
;; Got answer:
|
||||
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 35348
|
||||
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
|
||||
```
|
||||
|
||||
一般来说,`dig` 比 `ping` 会提供更多的细节,如果域名不存在,`ping` 会回复 “名称或服务未知”。当你查询一个合法的系统时,你可以看到域名系统对该系统知道些什么,这些记录是如何配置的,以及检索这些数据需要多长时间。
|
||||
|
||||
事实上,有时 `dig` 可以在 `ping` 完全不能响应的时候进行响应,当你试图确定一个连接问题时,这种信息是非常有用的。
|
||||
|
||||
### DNS 记录类型和标志
|
||||
|
||||
在上面的第一个查询中,我们可以看到一个问题,那就是同时存在 `CNAME` 和 `A` 记录。`CNAME`(<ruby>规范名称<rt>canonical name</rt></ruby>)就像一个别名,把一个域名指向另一个域名。你查询的大多数系统不会有 `CNAME` 记录,而只有 `A` 记录。如果你运行 `dig localhost` 命令,你会看到一个 `A` 记录,它就指向 `127.0.0.1` —— 这是每个系统都使用的“回环”地址。`A` 记录用于将一个名字映射到一个 IP 地址。
|
||||
|
||||
DNS 记录类型包括:
|
||||
|
||||
* `A` 或 `AAAA`:IPv4 或 IPv6 地址
|
||||
* `CNAME`:别名
|
||||
* `MX`:邮件交换器
|
||||
* `NS`:名称服务器
|
||||
* `PTR`:一个反向条目,让你根据 IP 地址找到系统名称
|
||||
* `SOA`:表示授权记录开始
|
||||
* `TXT`:一些相关文本
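除了默认的 `A` 记录查询,你也可以让 `dig` 查询特定类型的记录,加上 `+short` 选项则只输出结果本身(以下域名和 IP 地址仅为示例,输出会因环境而异):

```
$ dig mx networkworld.com +short     # 查询邮件交换器记录
$ dig ns networkworld.com +short     # 查询名称服务器记录
$ dig -x 151.101.2.165 +short        # 根据 IP 地址反向查询 PTR 记录
```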
|
||||
|
||||
我们还可以在上述输出的第五行看到一系列的“标志”。这些定义在 [RFC 1035][4] 中 —— 它定义了 DNS 报文头中包含的标志,甚至显示了报文头的格式。
|
||||
|
||||
```
|
||||
1 1 1 1 1 1
|
||||
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
| ID |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
|QR| Opcode |AA|TC|RD|RA| Z | RCODE |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
| QDCOUNT |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
| ANCOUNT |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
| NSCOUNT |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
| ARCOUNT |
|
||||
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
|
||||
```
|
||||
|
||||
在上面的初始查询中,第五行显示的标志是:
|
||||
|
||||
* `qr` = 查询
|
||||
* `rd` = 进行递归查询
|
||||
* `ra` = 递归数据可用
|
||||
|
||||
RFC 中描述的其他标志包括:
|
||||
|
||||
* `aa` = 权威答复
|
||||
* `cd` = 禁用检查
|
||||
* `ad` = 真实数据
|
||||
* `opcode` = 一个 4 位字段
|
||||
* `tc` = 截断
|
||||
* `z`(未使用)
|
||||
|
||||
### 添加 +trace 选项
|
||||
|
||||
如果你添加 `+trace` 选项,你将从 `dig` 得到更多的输出。它会添加更多信息,显示你的 DNS 查询如何通过名称服务器的层次结构找到你要找的答案。
|
||||
|
||||
下面显示的所有 `NS` 记录都反映了名称服务器 —— 这只是你将看到的数据的第一部分,因为查询通过名称服务器的层次结构来追踪你要找的东西:
|
||||
|
||||
```
|
||||
$ dig +trace networkworld.com
|
||||
|
||||
; <<>> DiG 9.16.1-Ubuntu <<>> +trace networkworld.com
|
||||
;; global options: +cmd
|
||||
. 84895 IN NS k.root-servers.net.
|
||||
. 84895 IN NS e.root-servers.net.
|
||||
. 84895 IN NS m.root-servers.net.
|
||||
. 84895 IN NS h.root-servers.net.
|
||||
. 84895 IN NS c.root-servers.net.
|
||||
. 84895 IN NS f.root-servers.net.
|
||||
. 84895 IN NS a.root-servers.net.
|
||||
. 84895 IN NS g.root-servers.net.
|
||||
. 84895 IN NS l.root-servers.net.
|
||||
. 84895 IN NS d.root-servers.net.
|
||||
. 84895 IN NS b.root-servers.net.
|
||||
. 84895 IN NS i.root-servers.net.
|
||||
. 84895 IN NS j.root-servers.net.
|
||||
;; Received 262 bytes from 127.0.0.53#53(127.0.0.53) in 28 ms
|
||||
...
|
||||
```
|
||||
|
||||
最终,你会得到与你的要求直接挂钩的信息:
|
||||
|
||||
```
|
||||
networkworld.com. 300 IN A 151.101.2.165
|
||||
networkworld.com. 300 IN A 151.101.66.165
|
||||
networkworld.com. 300 IN A 151.101.130.165
|
||||
networkworld.com. 300 IN A 151.101.194.165
|
||||
networkworld.com. 14400 IN NS ns-d.pnap.net.
|
||||
networkworld.com. 14400 IN NS ns-a.pnap.net.
|
||||
networkworld.com. 14400 IN NS ns0.pcworld.com.
|
||||
networkworld.com. 14400 IN NS ns1.pcworld.com.
|
||||
networkworld.com. 14400 IN NS ns-b.pnap.net.
|
||||
networkworld.com. 14400 IN NS ns-c.pnap.net.
|
||||
;; Received 269 bytes from 70.42.185.30#53(ns0.pcworld.com) in 116 ms
|
||||
```
|
||||
|
||||
### 挑选响应者
|
||||
|
||||
你可以使用 `@` 符号来指定一个特定的名称服务器来处理你的查询。在这里,我们要求 Google 的主名称服务器响应我们的查询:
|
||||
|
||||
```
|
||||
$ dig @8.8.8.8 networkworld.com
|
||||
|
||||
; <<>> DiG 9.16.1-Ubuntu <<>> @8.8.8.8 networkworld.com
|
||||
; (1 server found)
|
||||
;; global options: +cmd
|
||||
;; Got answer:
|
||||
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43640
|
||||
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
|
||||
|
||||
;; OPT PSEUDOSECTION:
|
||||
; EDNS: version: 0, flags:; udp: 512
|
||||
;; QUESTION SECTION:
|
||||
;networkworld.com. IN A
|
||||
|
||||
;; ANSWER SECTION:
|
||||
networkworld.com. 299 IN A 151.101.66.165
|
||||
networkworld.com. 299 IN A 151.101.194.165
|
||||
networkworld.com. 299 IN A 151.101.130.165
|
||||
networkworld.com. 299 IN A 151.101.2.165
|
||||
|
||||
;; Query time: 48 msec
|
||||
;; SERVER: 8.8.8.8#53(8.8.8.8)
|
||||
;; WHEN: Sat Jul 25 11:21:19 EDT 2020
|
||||
;; MSG SIZE rcvd: 109
|
||||
```
|
||||
|
||||
下面所示的命令对 `8.8.8.8` IP 地址进行反向查找,以显示它属于 Google 的 DNS 服务器。
|
||||
|
||||
```
|
||||
$ nslookup 8.8.8.8
|
||||
8.8.8.8.in-addr.arpa name = dns.google.
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
`dig` 命令是掌握 DNS 工作原理和在出现连接问题时排除故障的重要工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3568488/digging-for-dns-answers-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2017/01/05_tools-100704412-large.jpg
|
||||
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
|
||||
[3]: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html
|
||||
[4]: https://tools.ietf.org/html/rfc1035
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
128
translated/tech/20200730 10 cheat sheets for Linux sysadmins.md
Normal file
@ -0,0 +1,128 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (10 cheat sheets for Linux sysadmins)
|
||||
[#]: via: (https://opensource.com/article/20/7/sysadmin-cheat-sheets)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Linux 系统管理员的 10 份速查表
|
||||
======
|
||||
|
||||
> 这些快速参考指南让系统管理员的生活和日常工作变得更轻松,而且它们都是免费提供的。
|
||||
|
||||
![People work on a computer server with devices][1]
|
||||
|
||||
作为一名系统管理员,你所做的不是一件工作,而是**全部**工作,而且往往每一件工作都是随时随地出现,毫无预兆。除非你每天都只做一项任务,否则当需要的时候,你可能并不总是能将所有的命令和选项都记在脑海里。这就是为什么我喜欢速查表的原因。
|
||||
|
||||
速查表可以帮助你避免愚蠢的错误,它们可以让你不必翻阅数页的文档,并让你高效地完成任务。我为每位系统管理员挑选了我最喜欢的 10 个速查表,无论他的经验水平如何。
|
||||
|
||||
### 网络
|
||||
|
||||
我们的《[Linux 网络][2]》速查表是速查表界的瑞士军刀,它包含了最常见的网络命令的简单提醒,包括 `nslookup`、`tcpdump`、`nmcli`、`netstat`、`traceroute` 等。最重要的是,它用了 `ip` 命令,所以你终于可以不用再默认使用 `ifconfig` 命令了!
|
||||
|
||||
### 防火墙
|
||||
|
||||
系统管理员有两种:了解 `iptables` 的和使用前一类人编写的 `iptables` 配置文件的。如果你是第一类人,你可以继续使用你的 `iptables` 配置,有没有 [firewalld][3] 都无所谓。
|
||||
|
||||
如果你是第二类人,你终于可以放下你的 iptables 焦虑,拥抱 firewalld 的轻松。阅读《[用 firewall-cmd 保护你的 Linux 网络][4]》,然后下载我们的《[firewalld 速查表][5]》来记住你所学到的东西,保护你的网络端口从未如此简单。
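比如,用 `firewall-cmd` 永久开放一个端口并重载配置(下面的端口号和区域名仅为示例):

```
$ sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent   # 在 public 区域永久开放 8080/tcp 端口
$ sudo firewall-cmd --reload                                        # 重载配置使之生效
```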
|
||||
|
||||
### SSH

许多系统管理员用的都是 [POSIX][6] shell,所以,能在别人的计算机上运行的远程 shell 成为 Linux 上最重要的工具之一,也就不足为奇了。任何学习服务器管理的人通常很早就学会了使用 SSH,但我们中的许多人只学习了基础知识。

当然,SSH 可以在远程机器上打开一个交互式的 shell,但它的功能远不止这些。比如说,你需要在远程机器上进行图形化登录,而远程主机的用户要么不在键盘旁边,要么无法理解你启用 VNC 的指令。只要你有 SSH 权限,就可以为他们打开端口:

```
$ ssh -L 5901:localhost:5901 <remote_host>
```

通过我们的《[SSH 速查表][7]》了解更多。
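
一个小技巧(假设本机装有 OpenSSH 客户端,主机名 `example.com` 仅为占位):`ssh -G` 可以在不真正发起连接的情况下打印出解析后的最终配置,方便确认端口转发参数是否被正确解析:

```shell
# -G 只打印 ssh 将使用的最终配置,不会连接远程主机
# 输出中应能看到一行 localforward 配置
ssh -G -L 5901:localhost:5901 example.com | grep -i localforward
```
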

### Linux 用户和权限

传统的大型机和 UNIX 超级计算机风格的用户账户,如今已基本被 Samba、LDAP 和 OpenShift 等系统所取代。然而,这并没有改变对管理员和服务账户进行仔细管理的需求。为此,你仍然需要熟悉 `useradd`、`usermod`、`chown`、`chmod`、`passwd`、`gpasswd`、`umask` 等命令。

把我的《[用户和权限速查表][8]》放在手边,你就可以随时对用户管理相关的任务有一个合理的概览,并有示例命令演示你需要做的任何事情的正确语法。
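
下面是一个不需要 root 权限的小演示(文件均为临时文件,仅作示例),展示 `chmod` 与 `umask` 的效果:

```shell
# 创建一个临时文件并收紧权限:属主可读写,同组只读,其他人无权限
f=$(mktemp)
chmod 640 "$f"
stat -c '%a %U' "$f"   # 打印八进制权限和属主,权限应显示为 640

# umask 决定新建文件的默认权限:022 意味着新文件默认为 644
(umask 022 && touch "$f.new" && stat -c '%a' "$f.new")

rm -f "$f" "$f.new"
```
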

### 基本的 Linux 命令

并不是所有的系统管理员都会把所有的时间花在终端上。无论你是喜欢在桌面上工作,还是刚开始使用 Linux,有时为常用的终端命令准备一份面向任务的参考都是很好的。

对于一个为灵活性和即兴性而设计的界面来说,找到你可能需要的所有东西是很困难的,但我的《[常用命令速查表][9]》是相当全面的。这张速查表以任何技术型桌面用户的典型生活为蓝本,涵盖了用命令在计算机中导航、寻找文件的绝对路径、复制和重命名文件、建立目录、启动系统服务等内容。

### Git

在计算机的历史上,版本控制曾经是只有开发者才需要的东西。但那是过去,而 Git 是现在。对于任何希望跟踪从 Bash 脚本到配置文件、文档和代码的变化的人来说,版本控制是一个重要的工具。Git 适用于每个人,包括程序员、网站可靠性工程师(SRE),甚至系统管理员。

获取我们的《[Git 速查表][10]》来学习要领、基本工作流程和最重要的 Git 标志。
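
下面的演示脚本(仓库路径为临时目录,用户名和邮箱均为示例)走了一遍最基本的 Git 工作流程:初始化、暂存、提交、查看历史:

```shell
# 在临时目录里初始化一个仓库
repo=$(mktemp -d)
cd "$repo"
git init -q

# 写入一个文件,暂存并提交(用 -c 临时指定提交者身份)
echo '#!/bin/sh' > backup.sh
git add backup.sh
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add backup script"

# 查看提交历史(单行格式)
git log --oneline
```
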

### Curl

Curl 不一定是系统管理员专用的工具,从技术上讲,它“只是”一个用于终端的非交互式 Web 浏览器。你可能几天也用不上它一次。然而,你很有可能会发现 Curl 对你每天要做的事情很有用,不管是快速查阅网站上的一些信息,还是排除网络主机的故障,或者是验证你运行或依赖的一个重要 API。

Curl 是一个向服务器传输数据的命令,它支持的协议包括 HTTP、FTP、IMAP、LDAP、POP3、SCP、SFTP、SMB、SMTP 等。它是一个重要的网络工具,所以下载我们的《[Curl 速查表][11]》,开始探索 Curl 吧。
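
一个离线也能运行的小例子(文件路径均为临时文件,仅作示例):`curl` 除了 HTTP,也支持 `file://` 等协议;`-s` 关闭进度输出,`-o` 指定写入的目标文件:

```shell
# 准备一个本地文件充当“服务器端”数据
src=$(mktemp)
echo "hello from curl" > "$src"

# 用 file:// 协议抓取它,-s 静默,-o 写入目标文件
dst=$(mktemp)
curl -s -o "$dst" "file://$src"
cat "$dst"

rm -f "$src" "$dst"
```
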

### SELinux

Linux 的安全策略在默认情况下是很好的,root 权限和用户权限之间有很强的分离,但 SELinux 使用标签系统对其进行了改进。在配置了 SELinux 的主机上,每个进程和每个文件对象(或目录、网络端口、设备等)都有一个标签。SELinux 提供了一套规则来控制进程标签对对象(如文件)标签的访问。

有时候你需要调整 SELinux 策略,或者调试一些安装时没有正确设置的东西,或者深入了解当前的策略。我们的《[SELinux 速查表][12]》可以提供帮助。
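
一个无需特权的快速检查(假设使用 GNU coreutils;在未启用 SELinux 的系统上,标签一栏会显示为 `?`):

```shell
# -Z 显示文件的 SELinux 安全上下文(标签)
ls -Z /etc/passwd

# 查看当前 shell 进程自身的标签(未启用 SELinux 时可能为空或读取失败)
cat /proc/self/attr/current 2>/dev/null || echo "SELinux not enabled"
```
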

### Kubectl

无论你是已经迁移到了开放的混合云、封闭的私有云,还是你还在研究这样的迁移需要准备什么,你都需要了解 Kubernetes。虽然云确实还需要人去摆弄物理服务器,但作为一个系统管理员,你的未来肯定会涉及到容器,而没有什么比 Kubernetes 更能做到这一点。

虽然 [OpenShift][13] 为 Kubernetes 提供了流畅的“仪表盘”体验,但有时需要一种直接的方法,这正是 `kubectl` 提供的。下一次当你不得不到处推送容器时,请确保你手头有我们的《[kubectl 速查表][14]》。

### awk

近几年来,Linux 经历了很多创新,有虚拟机、容器、新的安全模型、新的初始化系统、云等等。然而有些东西似乎永远不会改变。尤其是,系统管理员需要从日志文件和其它无尽的数据流中解析和隔离信息。仍然没有比 Aho、Weinberger 和 Kernighan 的经典 `awk` 命令更适合这项工作的工具。

当然,自从 1977 年它被编写出来后,`awk` 已经走过了很长的路,新的选项和功能使它更容易使用。但如果你不是每天都在使用 `awk`,那么多的选项和语法可能会让你有点不知所措。下载我们的《[awk 速查表][15]》,了解 GNU awk 的工作原理。
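
作为一个最小示例(日志格式为虚构),用 `awk` 从类似访问日志的数据流中按状态码统计出现次数:

```shell
# 第 2 列是状态码;用关联数组按状态码计数,END 块里汇总输出
printf '%s\n' \
  '/index.html 200' \
  '/missing 404' \
  '/index.html 200' |
awk '{ count[$2]++ } END { for (c in count) print c, count[c] }' | sort
# 输出(经 sort 排序):
# 200 2
# 404 1
```
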

### 赠品:Bash 脚本编程

速查表是有用的,但如果你想找更全面的东西,你可以下载我们的《[Bash 脚本编程手册][16]》。这本指南教你如何将你从速查表中了解到的所有命令和经验结合到脚本中,帮助你建立一个随时能用的自动化解决方案库来解决你的日常问题。本书内容丰富,详细讲解了 Bash 的工作原理、脚本与交互式命令的区别、如何捕捉错误等。
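
一个体现“脚本与交互式命令的区别”的小骨架(函数名与路径均为示例):`set -euo pipefail` 让脚本在出错、引用未定义变量或管道失败时立即退出,`trap` 负责无论成功失败都做清理:

```shell
#!/bin/bash
# 遇到错误、未定义变量或管道失败时立即退出
set -euo pipefail

# 无论正常结束还是中途出错,都清理临时目录
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

main() {
    echo "working in $workdir"
    # 这里放真正的任务……
    echo "done"
}

main "$@"
```
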

### 赋能系统管理员

你是一名系统管理员吗?

你正在成为一名系统管理员的路上吗?

你是否对系统管理员一天都在做什么感到好奇?

如果是的话,请查看《[赋能系统管理员][17]》,这里有来自业界最勤奋的系统管理员的新文章,讲述他们的工作,以及 Linux 和开源如何让这一切成为可能。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/sysadmin-cheat-sheets

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://opensource.com/downloads/cheat-sheet-networking
[3]: https://firewalld.org/
[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[5]: https://opensource.com/downloads/firewall-cheat-sheet
[6]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[7]: https://opensource.com/downloads/advanced-ssh-cheat-sheet
[8]: https://opensource.com/downloads/linux-permissions-cheat-sheet
[9]: https://opensource.com/downloads/linux-common-commands-cheat-sheet
[10]: https://opensource.com/downloads/cheat-sheet-git
[11]: https://opensource.com/downloads/curl-command-cheat-sheet
[12]: https://opensource.com/downloads/cheat-sheet-selinux
[13]: https://opensource.com/tags/openshift
[14]: https://opensource.com/downloads/kubectl-cheat-sheet
[15]: https://opensource.com/downloads/cheat-sheet-awk-features
[16]: https://opensource.com/downloads/bash-scripting-ebook
[17]: http://redhat.com/sysadmin